A single spike suffices: the simplest form of stochastic resonance in model neurons
Results 1 - 10 of 14
- To appear in Neural Computation, 1999
Cited by 41 (6 self)
We analyze the effect of noise in integrate-and-fire neurons driven by time-dependent input, and compare the diffusion approximation for the membrane potential to escape noise. It is shown that for
time-dependent sub-threshold input, diffusive noise can be replaced by escape noise with a hazard function that has a Gaussian dependence upon the distance between the (noise-free) membrane voltage
and threshold. The approximation is improved if we add to the hazard function a probability current proportional to the derivative of the voltage. Stochastic resonance in response to periodic input
occurs in both noise models and exhibits similar characteristics.
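The escape-noise idea is easy to sketch numerically. In the toy Python snippet below, the rate constant `nu0`, the Gaussian width `sigma`, and the derivative weight `beta` are illustrative values, not the paper's fitted parameters:

```python
import math

def escape_hazard(u, theta=1.0, sigma=0.2, nu0=10.0, du_dt=0.0, beta=0.0):
    """Instantaneous firing rate (hazard) for a neuron whose noise-free
    membrane potential is u.  The rate depends on the distance to the
    threshold theta through a Gaussian, and the optional beta term adds
    a current proportional to the voltage derivative, as in the improved
    approximation described above.  All constants are illustrative."""
    rate = nu0 * math.exp(-(theta - u) ** 2 / (2 * sigma ** 2))
    return rate + max(0.0, beta * du_dt)

# The hazard grows sharply as the voltage approaches threshold:
for u in (0.5, 0.8, 1.0):
    print(u, escape_hazard(u))
```

With these made-up numbers the hazard is negligible far below threshold and largest at threshold, and the `beta * du_dt` term mimics the probability-current correction the abstract mentions.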
, 1998
Cited by 19 (6 self)
How does a neuron vary its mean output firing rate if the input changes from random to oscillatory coherent but noisy activity? What are the critical parameters of the neuronal dynamics and input
statistics? To answer these questions, we investigate the coincidence-detection properties of an integrate-and-fire neuron. We derive an expression indicating how coincidence detection depends on
neuronal parameters. Specifically, we show how coincidence detection depends on the shape of the postsynaptic response function, the number of synapses, and the input statistics, and we demonstrate
that there is an optimal threshold. Our considerations can be used to predict from neuronal parameters whether and to what extent a neuron can act as a coincidence detector and thus can convert a
temporal code into a rate code.
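As a rough illustration of the optimal-threshold idea, here is a toy counting model in Python; the synapse count, firing probabilities, and window structure are invented for the sketch and are not taken from the paper:

```python
import random

random.seed(1)

def output_rate(threshold, n_syn=100, p_spike=0.05, p_sync=0.0, trials=2000):
    """Fraction of time windows in which at least `threshold` of the
    n_syn synaptic inputs fire.  With probability p_sync a window is
    'coherent' and every input fires with a raised probability of 0.5;
    otherwise inputs fire independently at the background rate p_spike.
    All numbers are invented for illustration."""
    fired = 0
    for _ in range(trials):
        p = 0.5 if random.random() < p_sync else p_spike
        count = sum(random.random() < p for _ in range(n_syn))
        if count >= threshold:
            fired += 1
    return fired / trials

# A high count threshold ignores random input but catches coherent windows;
# a threshold of 1 fires on almost every window regardless:
print(output_rate(30), output_rate(30, p_sync=0.3), output_rate(1))
```

With these numbers, a count threshold near 30 barely responds to uncorrelated input yet fires reliably in coherent windows, which is the qualitative trade-off behind an optimal threshold.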
, 2002
Cited by 18 (3 self)
Many phenomenological models of the responses of simple cells in primary visual cortex have concluded that a cell's firing rate should be given by its input raised to a power greater than one. This
is known as an expansive power-law nonlinearity. However, intracellular recordings have shown that a different nonlinearity, a linear-threshold function, appears to give a good prediction of firing
rate from a cell's low-pass-filtered voltage response. Using a model based on a linear-threshold function, Anderson et al. (2000) showed that voltage noise was critical to converting voltage
responses with contrast-invariant orientation tuning into spiking responses with contrast-invariant tuning. We present two separate results clarifying the connection between noise-smoothed
linear-threshold functions and power-law nonlinearities. First, we prove analytically that a power-law nonlinearity is the only input-output function that converts contrast-invariant input tuning
into contrast-invariant spike tuning. Second, we examine simulations of a simple model that assumes (i) instantaneous spike rate is given by a linear-threshold function of voltage, and (ii) voltage
responses include significant noise. We show that the resulting average spike rate is well described by an expansive power law of the average voltage (averaged over multiple trials), provided that
average voltage remains less than about 1.5 standard deviations of the noise above threshold. Finally, we use this model to show that the noise levels recorded by Anderson et al. (2000) are
consistent with the degree to which the orientation tuning of spiking responses is more sharply tuned than the orientation tuning of voltage responses. Thus, neuronal noise can robustly generate
power-law input-output functions of the form freq...
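The smoothing effect is easy to reproduce with a few lines of Python; the zero threshold and unit Gaussian noise level below are arbitrary choices for illustration:

```python
import random

random.seed(0)

def mean_rate(v_mean, theta=0.0, sigma=1.0, trials=20000):
    """Trial-averaged rate of a linear-threshold unit,
    rate = max(0, V - theta), where V on each trial is the mean
    voltage plus Gaussian noise.  Threshold and noise level are
    arbitrary illustrative choices, not fitted values."""
    total = 0.0
    for _ in range(trials):
        v = v_mean + random.gauss(0.0, sigma)
        total += max(0.0, v - theta)
    return total / trials

# Even a subthreshold mean voltage yields a nonzero average rate,
# and the curve bends upward through threshold -- the smoothed,
# expansive nonlinearity:
for v in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(v, round(mean_rate(v), 3))
```

Near and below threshold the trial-averaged rate rises supralinearly with the mean voltage, which is the expansive power-law behavior the abstract describes.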
- Proceedings of the IEEE: special issue on intelligent signal processing, 1998
Cited by 17 (9 self)
This paper shows how adaptive systems can learn to add an optimal amount of noise to some nonlinear feedback systems. Noise can improve the signal-to-noise ratio of many nonlinear dynamical systems.
This "stochastic resonance" effect occurs in a wide range of physical and biological systems. The SR effect may also occur in engineering systems in signal processing, communications, and control.
The noise energy can enhance the faint periodic signals or faint broadband signals that force the dynamical systems. Most SR studies assume full knowledge of a system's dynamics and its noise and
signal structure. Fuzzy and other adaptive systems can learn to induce SR based only on samples from the process. These samples can tune a fuzzy system's if-then rules so that the fuzzy system
approximates the dynamical system and its noise response. The paper derives the SR optimality conditions that any stochastic learning system should try to achieve. The adaptive system learns the SR
effect as the sys...
- Phys. Rev. E, 1999
Cited by 10 (3 self)
A subthreshold signal may be detected if noise is added to the data. We study a simple model, consisting of a constant signal to which at uniformly spaced times independent and identically
distributed noise variables with known distribution are added. A detector records the times at which the noisy signal exceeds a threshold. There is an optimal noise level, called stochastic
resonance. We explore the detectability of the signal in a system with one or more detectors, with different thresholds. We use a statistical detectability measure, the asymptotic variance of the best
estimator of the signal from the thresholded data, or equivalently, the Fisher information in the data. In particular, we determine optimal configurations of detectors, varying the distances between
the thresholds and the signal, as well as the noise level. The approach generalizes to non-constant signals. AMS 1991 subject classifications. Primary 62F12; secondary 62P10. Key words and Phrases.
Efficient estimator, max...
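For a single detector the Fisher information in the thresholded output can be written in closed form, since each observation is a Bernoulli trial with success probability p(s). The sketch below assumes Gaussian noise (the abstract only says "known distribution"), and the signal, threshold, and sweep range are chosen for illustration:

```python
import math

def norm_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def fisher_info(signal, threshold, sigma):
    """Fisher information about a constant signal carried by a single
    binary detector reporting whether signal + Gaussian(0, sigma) noise
    exceeds the threshold: I(s) = p'(s)^2 / (p(s) (1 - p(s)))."""
    z = (signal - threshold) / sigma
    p = norm_cdf(z)
    dp = norm_pdf(z) / sigma
    return dp * dp / (p * (1 - p))

# Sweep the noise level for a subthreshold signal (signal 0, threshold 1):
# the information peaks at an intermediate sigma.
best = max((fisher_info(0.0, 1.0, 0.1 * k), 0.1 * k) for k in range(2, 31))
print(best)
```

The sweep shows the information vanishing for both very small and very large noise, with a maximum in between: the stochastic-resonance optimum the abstract describes.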
- IEEE Trans. Nanotechnol., 2006
Cited by 7 (6 self)
Abstract—Electrical noise can help pulse-train signal detection at the nanolevel. Experiments on a single-walled carbon nanotube transistor confirmed that a threshold exhibited stochastic resonance
(SR) for finite-variance and infinite-variance noise: small amounts of noise enhanced the nanotube detector’s performance. The experiments used a carbon nanotube field-effect transistor to detect
noisy subthreshold electrical signals. Two new SR hypothesis tests in the Appendix also confirmed the SR effect in the nanotube transistor. Three measures of detector performance showed the SR
effect: Shannon’s mutual information, the normalized correlation measure, and an inverted bit error rate compared the input and output discrete-time random sequences. The nanotube detector had a
threshold-like input–output characteristic in its gate effect. It produced little current for subthreshold digital input voltages that fed the transistor’s gate. Three types of synchronized
- Journal of Neurophysiology, 2005
, 1998
Cited by 4 (2 self)
Coherent oscillatory activity of a population of neurons is thought to be a vital feature of temporal coding in the brain. We focus on the question of whether a single neuron can transform a spike
code into a rate code. More precisely, how does a neuron vary its mean output firing rate, if its input changes from random to coherent? We investigate the coincidence detection properties of an
integrate-and-fire neuron in dependence upon internal parameters and input statistics. In particular, we show how coincidence detection depends on the membrane time constant and the threshold.
Furthermore, we demonstrate that there is an optimal threshold for coincidence detection and that there is a broad range of near-optimal threshold values. Fine-tuning is not necessary. Keywords:
Coincidence detection, voltage threshold, coherent activity, temporal coding, rate coding, integrate-and-fire neuron. Institut für Theoretische Physik, Physik Department der TU München, D-85747
Garching, Germany...
- Neurocomputing, 1999
Cited by 2 (0 self)
There is mounting experimental evidence that the nervous system utilizes neural noise to improve sensory signal transmission. Here, we investigate the response properties of a noisy neuron using an
integrate-fire model. When the neuron is driven by periodic input, noise optimally improves the signal-to-noise ratio of the elicited spike train, if the driving frequency is in a certain range. This
phenomenon, called bona fide stochastic resonance, is analyzed in a Markov chain formalism which avoids implausible assumptions made in earlier studies. The bandpass property of the transmission
function of the neuron may explain why certain oscillation frequencies are prevalent in cortex. Key words: Integrate-fire neuron; Stochastic resonance; Ornstein-Uhlenbeck process; Markov chain.
1 Introduction. There is mounting experimental evidence that the nervous system utilizes the noise ubiquitous in neurons to improve sensory signal transmission [10,18,3,11]. Here, we consider neural
noise from ...
- Neural Comput., 1998
We propose a simple theoretical structure of interacting integrate and fire neurons that can handle fast information processing, and may account for the fact that only a few neuronal spikes suffice
to transmit information in the brain. Using integrate and fire neurons that are subjected to individual noise and to a common external input, we calculate their first passage time (FPT), or
inter-spike interval.
OK, So How Many Trees DOES It Take To Make A Roll Of Toilet Paper, Exactly?
When I look at a majestic tree, I see a huge pile of toilet paper rolls.
Don’t believe me? Well, you got me, but there are people that see trees exactly that way, since – big secret – trees are ground up to make toilet paper. So the question that I was researching (and
having a darned hard time answering) was just how many toilet paper rolls can one tree make?
For instance, Wikipedia, that hallowed source of all things somewhat accurate, states a tree produces 100 pounds of toilet paper rolls, and then linked to a reference that in turn linked to another
and so on until a dead end. Logically, however, I can carry 200 rolls (100 pounds at about 1/2 pound a roll), but I can’t carry one tree (maybe a wee little tree). So something was off.
The problem is hard for a very simple reason: Trees come in all different sizes. So skip trees, and go straight to wood. In theory, the math is somewhat simple: Wood weighs about 20-40 pounds per
cubic foot, depending on species. A toilet paper roll is about 1/2 pound, so one cubic foot of wood is good for about 40-80 rolls. Done and done.
Now this is a ‘back of envelope’ calculation. Moisture can affect the weight, but dried wood and paper seem ‘about’ the same, moisture wise. As this site points out, the difference between fresh wood
and 20% moisture air dried wood is about 50% by weight, so we are in the ball park with our calculations.
But this still leaves a problem – how much does a whole tree make?
Well, each tree is different: Sizes among species (and individual trees) differ, so we can’t get much closer than that. However, mathematically, we can get an eyeball estimate, good for guessing how
much a typical tree makes when on a walk in the woods.
For this, let’s estimate a tree is 40 feet high, and that it’s 18 inches wide at the ground (1.5 feet diameter, or 0.75 radius), tapering uniformly to the top. This ‘perfect’ tree is then a perfect
cone, and gets its volume with this formula
Volume = ( pi * radius * radius * h ) / 3
= ( 3.1415 * 0.75ft * 0.75ft * 40ft ) / 3
= 23.56 cubic feet
So that 40 foot tree is about 24 cubic feet, which at 20-40 pounds/cubic foot makes it 480-960 pounds, and that ends up being 960-1,920 rolls. Of course, with the species, you could figure out the
density, and get a more precise value – fir and pine are both around 30 lb/ft^3, so you’d be smack dab in the middle there, at 1,440 rolls. Isn’t math fun?
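For anyone who wants to plug in their own tree, the cone arithmetic above fits in a few lines of Python. The constants are this post's own rough figures; the unrounded volume gives about 1,400 rolls, a shade under the 1,440 quoted from the rounded 24 cubic feet:

```python
import math

ROLL_WEIGHT_LB = 0.5   # the post's assumed half-pound roll

def rolls_from_cone(height_ft, diameter_ft, density_lb_ft3=30):
    """Rolls from a tree modeled as a perfect cone: volume = pi r^2 h / 3,
    times wood density, divided by the weight of one roll.  The default
    density is the post's pine/fir figure of about 30 lb/ft^3."""
    r = diameter_ft / 2
    volume = math.pi * r * r * height_ft / 3
    return volume * density_lb_ft3 / ROLL_WEIGHT_LB

# The 40-foot, 18-inch-wide tree, at the 20-40 lb/ft^3 extremes and at
# pine/fir density:
print(round(rolls_from_cone(40, 1.5, 20)), round(rolls_from_cone(40, 1.5, 40)))
print(round(rolls_from_cone(40, 1.5)))
```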
Of course, I’ve never seen a perfectly conical tree, and neither have you – most stay a certain width until almost at the top, then taper suddenly. To see the difference with our mythical conical
tree, imagine it has a sibling, which is uniformly 18 inches from root to top, and also 40 feet tall. Now the volume is:
Volume = pi * radius * radius * h
= 3.1415 * 0.75ft * 0.75ft * 40ft
= 70.68 cubic feet
Not coincidentally, three times the volume of the conical trunk (compare the formulas). And the estimate triples, to about 4,500 rolls per tree.
As you can see, there’s good reason people have trouble figuring out how much of a tree makes how many rolls, and why it’s so hard to get solid figures. The math is here so you can do your own
estimate, rather than rely on a number someone just posted (or perhaps inserted into Wikipedia!) And while there is quite a lot of wiggle room, roughly speaking, a solid 40 foot tree 18 inches wide
can produce about 3,000-6,000 rolls, with about 4,500 for pine or fir.
Of course, I’m interested in checking this, and in fact, as I was researching this, I came across a paper site which referred to an old estimate – in this case, using 40 foot trees 6-8 inches wide.
If we average to 7 inches (a radius of 0.29 feet) then the volume becomes 10.68 cubic feet per tree. In their calculations, one ton of paper comes from 12 of these trees, for a grand total of 15.6
pounds of finished paper per cubic foot of wood (about 30 rolls), or 166 pounds/332 rolls per tree – at least from these calculations. And since our tree was roughly 7x as big (70.68 versus 10.68
cubic feet), their figures scale up to about 1,100 pounds of paper – roughly 2,200 rolls – for ‘our’ tree. Since our raw-wood estimate is higher (at 3,000-6,000 rolls), their calculations must
include something ours is missing: the waste in the papermaking process.
So, is it 2,000? 4,000? 6,000? Personally, I would aim lower, since our estimate pretended every pound of wood becomes paper (and I expect there is a lot of waste). The industry figures imply a
yield of about half – 15.6 pounds of finished paper from a cubic foot of wood weighing roughly 30 pounds – which is believable; if only one-sixth of the tree became paper, I’d expect someone would
have published that result and raised a hue and cry over it. Apply that 50% wastage to our 4,500-roll fir tree and you land at about 2,250 rolls, right in line with their scaled-up figure – here
then is your answer:
How much toilet paper does a tree make? A nice solid 40 foot pine or fir tree that is 18 inches wide from root to top will make about 2,500 rolls at 1/2 pound weight each. More or less.
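The whole estimate – shape, density, and the guessed 50% pulping yield – can be bundled into one Python function. All the constants are this post's ballpark numbers, not industry figures; unrounded, the cylinder model at pine/fir density comes out a little under the 2,500 above:

```python
import math

def rolls_per_tree(height_ft, diameter_ft, density_lb_ft3=30,
                   shape="cylinder", yield_fraction=0.5,
                   roll_weight_lb=0.5):
    """Bundle the whole estimate: tree volume (cone or cylinder), wood
    weight, a wastage factor for the pulping process, and the half-pound
    roll.  The 50% yield is this post's guess, not an industry figure."""
    r = diameter_ft / 2
    volume = math.pi * r * r * height_ft
    if shape == "cone":
        volume /= 3
    return volume * density_lb_ft3 * yield_fraction / roll_weight_lb

print(round(rolls_per_tree(40, 1.5)))                  # cylinder model
print(round(rolls_per_tree(40, 1.5, shape="cone")))    # one third of that
```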
Oh, and one more factor: Paper rolls are not all created equal. We started this journey with a measure of a roll at 1/2 pound, so feel free to tweak the math based on your estimate. But be aware that
manufacturers love games with toilet paper, and toilet paper varies greatly in how much you get per roll. If you ever want to make yourself crazy, compare the actual number of sheets per roll of
toilet paper in the supermarket – before the manager asks you to leave, or you start pulling your hair out, you’ll find that there is no pattern to it, and a single roll of one brand can hold more
sheets than a double of another. Name brands are especially stingy with the sheets.
But regardless of the details, now you can estimate how much your favorite tree would give you in bottom-cleaning products…
2 thoughts on “OK, So How Many Trees DOES It Take To Make A Roll Of Toilet Paper, Exactly?”
1. So if you assume that the average person uses a roll a week (aggressive estimate), and lives until they are 100, their toilet paper habit contributes to the death of two idealized trees.
If we were back in the caveman era and they had instead ripped leaves from the tree to accomplish the same task, I’d imagine the trauma and general abuse caused by that approach would result in
the death of more than two trees.
□ According to CBC’s Marketplace, the average Canadian uses 100/yr, or 2 a week, so your estimate is in the right area. And while that’s very little in terms of tree quantities per person, and (I
agree) much less than our branch-happy ancestors would use, they were few and we are many: at 34 million, we’d use about 1.4 million trees a year to wipe our tushies, assuming no recycling
used. Factor in Americans at 311 million, and you get a grand total of 13.8 million trees per year, about 1,150,000 per month, or 38,333 per day…
Math isn’t just for ‘math people’
“I’m just not a math person” is “the most self-destructive idea in America today,” write Miles Kimball and Noah Smith in The Atlantic. You’re not just limiting your own future. “You may be helping to
perpetuate a pernicious myth that is harming underprivileged children—the myth of inborn genetic math ability.”
Mathematicians need high math ability, write Kimball and Smith, economics professors who’ve taught math. But few of us are aiming that high. “For high-school math, inborn talent is much less
important than hard work, preparation, and self-confidence.”
Belief in inborn math ability may be responsible for much of the math gender gap, according to Oklahoma City researchers, they write.
Psychologist Carol Dweck and colleagues found students do much better if they believe “you can always greatly change how intelligent you are” than if they think “you have a certain amount of
intelligence, and you really can’t do much to change it.”
In Intelligence and How to Get It, Richard Nisbett recounts what happened when Dweck and colleagues told poor minority junior high school students that intelligence is malleable and can be developed
by hard work. Learning changes the brain by forming new connections and students are in charge of this change process, psychologists told the students.
Convincing students that they could make themselves smarter by hard work led them to work harder and get higher grades. The intervention had the biggest effect for students who started out
believing intelligence was genetic. (A control group, who were taught how memory works, showed no such gains.)
But improving grades was not the most dramatic effect: “Dweck reported that some of her tough junior high school boys were reduced to tears by the news that their intelligence was substantially
under their control.”
Kimball and Smith conclude: “It is no picnic going through life believing that you were born dumb—and are doomed to stay that way.”
1. C T
I would say most K-12 math is irrelevant to what mathematicians at universities spend their time doing. And the folk who end up proving new theorems and creating new fields of math don’t really
need most elementary math education because they can figure it out on their own. I have a genius physicist brother-in-law who was confused as to why his sons were having to do math fact drills in
school because “it’s just like adding rocks.” OK, my relative, not everyone is as brilliant as you. But with diligence and a good curriculum, I think nearly everyone else can get through basic
calculus.
□ C T replied:
Here’s an example of what mathematicians do: http://www.math.illinois.edu/timetable/advanced-topics-courses.html
□ C T replied:
By “basic calculus”, I mean doing basic derivatives and integrals of fairly simple polynomial functions, such as they teach business majors in college. One really doesn’t need more than a
good grasp of algebra and graphing to benefit from learning about derivatives of such functions. And it’s very valuable information for application in fields such as aerodynamics
(differentiation of function describing position => velocity => acceleration, etc.). I don’t see why a high school student can’t be taught this fairly basic calculus even if they never make
it through trigonometry, college algebra, and analytical geometry.
☆ Roger Sweeny replied:
CT, do you have any evidence for the assertion that “with diligence and a good curriculum, I think nearly everyone else can get through basic calculus”? I mean aside from the fact that it
just doesn’t seem that hard to you–and that people you know can do it.
This blog is read by lots of people (like you!) who are smart and care about academics. It is natural to think that something which you can do without excessive difficulty can be done by
anyone. But that thought is wrong. And it is not a good foundation on which to build a curriculum.
○ C T replied:
No evidence other than my own observations from helping tutor business calculus students for three years while I was in college. There is a watered-down version of calculus that can
be done without mastering all the material in the courses that I mentioned above. Kids who can do algebra (graphing, etc.) can learn about minimums, maximums, and differentiation and
integration of fairly simple polynomials. That opens up a greater understanding of things like medication curves, supply functions, etc., which can be useful to a CNA, small business
owner, investor, etc. For those not on a STEM track, I would recommend an “easy calculus” class be taught in high school soon after finishing algebra. However, until such a course
comes into being, I agree that not all K-12 students can get through all the standard prerequisites to AP Calculus plus AP Calculus.
■ Roger Sweeny replied:
That’s an interesting idea, a “simple calculus you may actually use” right after algebra. I’d love to see some experiments done along those lines.
■ Ray replied:
It is highly unlikely that you tutor many kids with IQ’s that are 85 or below. This is only one standard deviation below the median, so it would include a significant percentage
of our population. There is no nation on earth that has been able to make people in this IQ range capable of calculus.
■ C T replied:
Ray, you’re right. These were bright, but not really STEM-y kids. It was at BYU in the mid 1990s.
I don’t have experience with lower IQs. Do you? Am I so off in thinking that even kids with IQs of 85 or below can grasp derivatives of say, x^3 + 2x^2 + 4 and be shown how it
applies to some real-world things? Maybe in science class as they look at possible climate models? How about in mechanic vocational ed class when they’re learning about velocity,
acceleration, and maximizing fuel efficiency? Perhaps in consumer economics they could learn about supply and demand curves and maximizing/minimizing profits from price
functions….I just don’t think all calculus needs to be taught only to those who can do limits and trigonometry. It’s a very useful subject to non-STEM folk, too.
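For what it’s worth, the derivative in that example needs nothing beyond the power rule – d/dx (x^3 + 2x^2 + 4) = 3x^2 + 4x – and a student can check it numerically with a few lines of Python:

```python
def f(x):
    return x**3 + 2 * x**2 + 4

def df(x):
    # Power rule, term by term: 3x^2 + 4x (the constant 4 drops out).
    return 3 * x**2 + 4 * x

# A centered finite difference agrees with the hand-computed derivative:
h = 1e-6
for x in (0.0, 1.0, 2.5):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    print(x, round(numeric, 3), df(x))
```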
■ Ray replied:
CT, the answer is that nobody really knows for certain. I am skeptical that students with such low IQ’s could do this because no country in the world has been able to achieve it.
2. Florida resident
I actually read the book “Intelligence and How to Get it” by Nisbett in full 3 years ago.
Pretty weak book. No proofs, wishful thinking.
With invariable respect for the noble work by Ms. Jacobs,
3. Bostonian
Some people are born with more talent for math than others. This should be obvious, and a recent study, “Number sense in infancy predicts mathematical abilities in childhood”, available online,
confirms this.
□ greeneyeshade replied:
OK, but how smart do you have to be to do K-12 math? I would think a combination of hard work and optimism might actually help.
☆ Mark Roulo replied:
“OK, but how smart do you have to be to do K-12 math?”
It would help if you specified what you meant by K-12 math. A traditional American high school math sequence would have 12th graders ready for calculus as college freshmen … so K-12
(including 12th grade) would include things like Algebra, Algebra II, geometry (maybe with proofs), trigonometry, and pre-calculus.
This is actually hard. It is especially hard for folks with lower IQs.
And things change when you move from arithmetic (K-8, more or less) to algebra because algebra is where you start having to make lots of decisions on “what to do next.” Addition,
subtraction, multiplication and division can all be learned by rote for integers, decimals and fractions. And the procedures to do these operations are pretty linear. *Solving* multiple
equations with multiple unknowns often involves strategies: Do I isolate X or Y first? Or maybe I should scale both equations so that I can add/subtract them?
And algebra word problems also tend to be much nastier than those found in the earlier grades. Kids who don’t read well will also find that their math grades suffer because they won’t be
able to understand the questions well enough to convert them to math.
About 1/6 of the population has an IQ of 85 or below. I do not expect these folks to be able to handle a “real” Algebra II course. I’d even be surprised if most of them could handle a
“real” Algebra I course.
4. Kiana replied:
While this is true, the number of people who claim “I am just not a math person, I can’t do fractions/pre-algebra/algebra 1” is far higher than it ought to be and in most cases is due to a lack
of effort and defeatist thinking before even beginning.
There’s also a genetic / thus ethnic/racial element to this (as a general rule; exceptions exist, of course) which is taboo to even discuss…
6. momof4
The number of people who haven’t mastered basic ES arithmetic (basic facts, algorithms, fractions, decimals, percentages and very basic stats – mean, SD etc) is MUCH higher than it should be. We
have poorly prepared, math-disliking teachers, rotten curricula (like Everyday Math) and calculators (should never be allowed before REAL algebra 1) to thank. HS math should differ according to
student path choices: through calc for STEM/competitive colleges/precalc for non-STEM and non-competitive colleges and math specifically relating to voc ed choice (less for cosmetology,
shop-related math for carpentry, different math for medical/nursing fields etc) One-size-fits-all is a lousy idea, whether in school curricula, classroom assignment or bathrobes.
□ Bill replied:
I’d agree with you on the ES and MS math, but if the students don’t have a working knowledge of those, they’ll bomb out in high school (never mind college).
A student who cannot master fractions will never make it through algebra/geometry/algebra II/Trig, etc, but in a Seattle Times article which Joanne titled No Math, No Job, 9 of every 10
applicants who had a high school diploma couldn’t handle an 18 question, 30 minute test w/calculator when asked to figure out area, convert inches to feet, metric issues, and what most of us
here would consider basic skills.
These were applicants for manufacturing jobs, which have changed considerably over the last 25 years, since most of it is computerized, and yes, you do have to know how to read blueprints and
the math figures on the blueprints.
7. SteveH
“… that intelligence is malleable and can be developed by hard work.”
How about better teaching and curricula?
Intelligence is the biggest educational copout because nobody calibrates it or tries to separate the variables. It’s too conveeeeenient to let students believe that they are just not smart in
math. Students will even believe it themselves. Then you have those (as in this article) who put the onus on students to change their thinking. They just have to work harder, be more engaged, and
be more motivated. Then they confuse the issue by conflating better grades with higher intelligence.
Hard work may not get just anyone to differential equations, but kids in 7th grade are claiming they are just not good in math. Instead of looking at the most obvious variables, they blame the
students.
□ Bostonian replied:
SteveH wrote, “Hard work may not get just anyone to differential equations, but kids in 7th grade are claiming they are just not good in math.”
Kids in 7th grade have been studying math in school for seven years, which is enough time for them to form an opinion of their math abilities.
Education Realist, a math teacher, has a blog post “Noahpinion on IQ–or maybe just no knowledge” where she expresses doubt that students with IQ below 100 can learn algebra, writing ” I have
been asking nearly as long as I’ve had this blog if anyone can show evidence of successful mastery of algebra by IQs less than 100.”
I think there is definite truth to this. Which is sad, since >60% of the general population has an IQ of 100 or less… This also means that in a STEM-based economy of the 21st Century,
what are all these people going to do for a living? Most of the jobs their parents and grandparents had don’t exist any more (or continue to exist, but in a 21st Century form that
requires advanced skills). They can’t all work at hotels and in construction and at restaurants… There’s only so many of those jobs needed to go around. And any government that tries to
invent jobs for them all will bankrupt itself into oblivion (as the U.S. may soon find out…)
○ gahriereplied:
While I don’t think the problem is as large as 60% of the population, there is an undeniable problem of an unemployable underclass. These people are unemployable for a variety of
reasons, but lack of an education is almost universally one of the reasons.
The vast majority of the underclass no longer sees education as a ticket out. (The ones that do are usually immigrants.) The concept of “Acting White” was not invented by some racist White
man. The very fact that the definition of “Acting White” usually means behaving and doing well in school is damning.
The bigger problem that we as a society need to deal with is: What do we do with them?
The Progressives were thinking about this at least 100 years ago. Planned Parenthood was openly founded by Progressives to reduce the number of undesirables. They had other even less
savory ideas about population control.
I reject the answers the progressives had, but who is willing to begin a conversation on this topic?
■ Roger Sweenyreplied:
There are very few people who are truly unemployable. There are lots of jobs that don’t require much in the way of school knowledge, even the ability to read or do arithmetic. The
nursing home where my mother-in-law spent the end of her life employed many immigrants (legal? illegal? I don’t know.). Their qualification for the job was that they showed up
each day on time and did what they were instructed to do. For a not terribly high wage.
There will always be jobs like that.
8. SteveHreplied:
“Kids in 7th grade have been studying math in school for seven years, which is enough time for them to form an opinion of their math abilities.”
The wrong one. You’re completely guessing here. I’ve seen it over and over with kids I have taught and tutored. They have plenty of IQ, but trash their abilities. Success breeds success and
failure breeds failure. When K-6 teachers hate math and don’t value mastery of basic skills – on a systemic basis(!), one cannot look at results to calibrate IQ.
“she expresses doubt that students with IQ below 100
can learn algebra,..”
Her doubt is driven by her beliefs. She sees what she wants to see. How many kids above 100 fail algebra? IQ advocates use vague calibrations to find excuses on the low side, but not to enforce
expectations on the high side. What other calibration criteria do you guess at?
ER continues to raise questions with little effort to separate variables. There are clearly no black and white cutoff points. Curricula, pedagogy, and hard work are huge variables. There are
plenty of examples of this. Tell me how you will set public or school policy based on this supposedly hard IQ criteria? This is really what it all comes down to. And how do you try to fix
curricula and pedagogy if you play the IQ card at the drop of a hat?
Everyone knows that some kids are smarter than others, but IQ crazies love to think they have the uncomfortable answer that would change the educational world. Sure, some educators like to think
that all kids are equal, and some like to think that all kids could or should go to college. Perhaps pseudo-algebra II is an unreasonable high school passing criteria for all, but those questions
can be dealt with directly without getting on a self-righteous soapbox, playing the IQ card, and ignoring all other variables.
□ Roger Sweenyreplied:
SteveH, You’re absolutely right that lots of relatively high IQ kids are held back by poor math instruction. Then there is a vicious circle: “Success breeds success and failure breeds failure.”
We all can agree that there is some point at which a person is just not bright enough to learn algebra–and I mean really learn rather than memorize a few algorithms for the test and forget them
the next month. Whether that cut-off corresponds to some IQ number, and just how many kids fall below it is an empirical question about which we have damn little good data. Alas, we may never
have such data because educational researchers and funders want to avoid such politically loaded questions. If such data existed, I’m sure Education Realist would love to see it–and would be
willing to change her mind if it told her she was wrong.
So what can we do? My suggestion is to be honest. Have good math instruction in the elementary grades and don’t move kids along just because they get older and you want them to stay with
their age-mates. Bring kids to the next step when, and only when, they have mastered–really mastered–what they will build on.
☆ Educationally Incorrectreplied:
I think it’s safe to say that math ability is distributed normally. If I had to take a wild stab at it, I’d say that:
99+% of people can learn to count
98+% can add and subtract
80+% can learn hs algebra
40+% can handle abstract algebra
1% can break new ground
some teeny percentage can win the Fields prize
I don’t think it does much good to look at the problem and say that EVERYONE can learn everything through HS calculus….oh, and then there are those mathematician types.
☆ SteveHreplied:
Everyone knows that some kids are smarter than others. Everyone knows that some kids work harder than others. There doesn’t have to be a discussion about IQ. There needs to be a
discussion about separating kids who are willing or able (for whatever reason) from those who are unwilling or unable (for whatever reason). This separation is done in all high schools
without resorting to checking students’ IQs.
Our K-6 schools, however, are known for their full inclusion and mainstreaming policy. People move to our town because of it. The schools chose this approach NOT because they think that
all kids are equal. They chose it because they placed social goals above academic goals. They think that they can help students at all levels using “differentiated instruction”. They
chose Everyday Math because it claims that their spiral approach works by definition. They tell teachers to just keep moving through the material and to “trust the spiral”. It doesn’t
matter that it’s not a class spiral based on previous mastery of skills. It’s an individual spiral that is what I call repeated partial learning. The curriculum transfers all
responsibility for learning to individuals because that’s the only way they can possibly get differentiated instruction to work. (Our schools even had the temerity to call it
“differentiated learning”.) One might claim that this is a pure IQ learning system with not much help from the teacher or curriculum. However, IQ does not guarantee that students will
achieve their best level without help from a good teacher (or mentor) and a good curriculum. Learning is not natural, especially for those subjects that take some effort to get to any
sort of interesting or fun level.
I can understand the dislike of tracking in the lower grades, but differentiated instruction is an abject failure, especially in math. If they continue not dealing with the issue, then
all of the tracking (which is currently going on now) will be hidden away at home or with tutors. Is ignorance their solution to the fact that full inclusion and academic opportunity are in conflict?
The problem with tracking with bad teaching and curricula is that only those with help at home will get on the top tracks. Some urban parents want to track their kids to schools that set
high standards, but public school educators fight that tooth and nail. Apparently it’s OK if affluent parents track their kids to private schools.
Educators are happy if little urban Johnnie or Suzie get to the local community college (even though they have the IQ or work ethic to get into Harvard), especially if they will be first
in their families to go to college. I don’t know where this low expectation, multigenerational path comes from. They treat education as statistics, not individuals. All one has to do is
look at El Sistema in Venezuela. Kids go from the barrios to playing at Carnegie Hall and the BBC Proms in one generation. They don’t have low expectation music for poor kids. They have
“tocar y luchar”. Hard work. Good curricula. High expectations. Separation by results. Woe to those who think this is just about music. Somehow, many people think that if it’s academics
and not sports, music, or dance, then separation in the early grades is psychologically damaging. They are wrong, and in doing so, they just keep ignoring the tracking that is done at
home, with tutors, and with private schools by affluent families. They even stop urban parents from sending their kids to charter schools.
Math (and STEM altogether) is the basis of the 21st Century economy. The countries whose citizens excel at it will have more control of the world. Basically, STEM will be the difference
between the U.S. remaining a superpower, or even keeping itself a global power. At current trends, in a few generations we’ll just be another 3rd world country… (just a big one)
□ SteveHreplied:
IEEE has a recent article called: “The STEM Crisis is a Myth”. Philosophically, I don’t view education as a national economic tool. I see it as an individual educational and development tool.
If we base education on what is best for individuals, everything else will take care of itself. Who wants schools that offer opportunities or set expectations based on vague calibrations of
IQ and not hard work and results?
The Common Core standards will not fix anything because they only define one path for all. Individual needs are ignored. Some curricula, by definition, do not prepare students for STEM
careers in K-6. Life will go on about the same as before. Those with math help at home or with tutors will have open STEM career doors. Those without will be rationalized away with IQ and
other excuses. In the article above, they place the onus on the student. Just work harder.
□ Roger Sweenyreplied:
I feel fairly sure that at least half the American work force uses computers every day. I also feel fairly sure that very few of them know how their computer works, or how to program it, or
how to fix it. But they don’t have to! Any more than drivers have to know how an internal combustion engine works or how to fix it when something goes wrong.
America needs a substantial number of good STEM people. However, there is no economic need for a majority of the population to have good STEM skills. I would guess the exact proportion is
less than 10%.
Look at Apple. If it were composed completely of people who are good at STEM, it would have failed years ago. It has succeeded because it develops products that somehow fit with what people
want and are willing to spend their money on.
☆ Stacy in NJreplied:
I can guarantee very few of the passengers on an airplane understand the physics of flight.
○ Billreplied:
The main issues with keeping a plane in the air:
Lift, Thrust, Drag, and Gravity.
Of course, an airplane is a collection of parts flying in close formation.
□ Stacy in NJreplied:
“Look at Apple”
Steve Jobs wasn’t fundamentally a STEMs guy. He was a designer. He used STEMs people to develop what he wanted – to develop his vision. Of course, he had a deep understanding of the product
but Wozniak was the tech guy.
☆ Roger Sweenyreplied:
Thanks, Stacy. I wish I’d said that.
10. Roger Sweenyreplied:
The old Soviet Union was full of great STEM people. But because of their system, they couldn’t translate that knowledge into things that improved ordinary people’s standards of living.
To be economically successful, a country needs a certain number of STEM people but it also needs people who can connect STEM knowledge to people’s desires, and a system that provides incentives
to do so.
Agreed 100%. But, without any STEM people at all… A nation would have to contract out to build its own military, communications infrastructure, etc. A dangerous game to play!
☆ Roger Sweenyreplied:
We definitely need STEM people, good ones and in fairly high numbers. Fortunately, we get them. I teach high school physics and many of my students have become engineers–as did my son,
and he, of course, is a great one.
○ SteveHreplied:
The thread is called “Math isn’t just for ‘math people’”. This really is not just about STEM versus no-STEM. While the no-STEM people may never use algebra in their future jobs, they
will have to pass a lot of math courses to get their non-STEM degree. Also, most K-6 schools teach only at a non-STEM level, which means that students will have STEM degree
opportunities only if they get help at home. I got to calculus in high school with absolutely no help at home. My very mathy son could never have done it without my help. That is a
big change since I was young. Full inclusion and lower expectations in the lower grades changed this. They use “understanding” and “problem solving” only as cover for lower
expectations. They devalue mastery of skills as tools for understanding. They justify their pedagogy by redefining math.
mean amount?
September 23rd 2006, 01:42 AM
22 tomato plants produce a mean of 5.5kg of tomatoes per plant.
15 other plants produce a mean of 4kg of tomatoes per plant.
what is the mean produced by the 37 plants?
answer to one decimal place.
September 23rd 2006, 02:04 AM
Total weight of toms from the 22 plants=5.5*22=121kg
Total weight of toms from the 15 plants=4*15=60kg
Grand total of toms = 121+60=181kg.
Therefore the grand mean for the 37 plants=181/37=4.9 kg/plant.
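The same grand-mean computation in code (a Python sketch added for illustration; variable names are my own):

```python
# Weighted (grand) mean: total kilograms over total number of plants.
groups = [(22, 5.5), (15, 4.0)]            # (number of plants, mean kg per plant)
total_kg = sum(n * m for n, m in groups)   # 22*5.5 + 15*4 = 181
total_plants = sum(n for n, _ in groups)   # 22 + 15 = 37
grand_mean = total_kg / total_plants
print(round(grand_mean, 1))  # 4.9
```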
Wind and Mr. Ug
Try another key
Posted: March 30, 2011 in experiments, matlab
Tags: Law of large numbers, matlab, probability
Proposition: If we are given $n$ keys, where only $m$ of these keys can open a door, then on average we would have to try $\frac{n+1}{m+1}$ keys to open the door.
I previously discussed a similar problem in which I derived that since the probability of finding a correct key on the kth try is
$P(X = k) = \displaystyle \frac{\displaystyle \binom{n-m}{k-1}m}{\displaystyle k\binom{n}{k}}$
then in theory it would take $\frac{n+1}{m+1}$ tries to open the door because
$\displaystyle E[X] = \frac{n+1}{m+1}.$
I became interested in modeling this proposition, so I implemented a MATLAB program, try_keys(n,m), in which each key is represented by a number from 1 to n and each correct key is represented by a
number from 1 to m. When try_keys(n,m) is executed, it outputs a random sequence that starts with any key and ends with a correct key.
Now in theory, if I had 15 keys, such that 3 keys are correct, then on average I would expect to open the door on the 4th try since
$\displaystyle \mu_{th} = \frac{15+1}{3+1} = 4$.
For Experiment1 I ran try_keys(15,3) a total of 10 times
try_keys(15,3) = 13 7 4 6 5 1
try_keys(15,3) = 4 2
try_keys(15,3) = 3
try_keys(15,3) = 4 5 1
try_keys(15,3) = 14 10 5 4 2
try_keys(15,3) = 12 4 3
try_keys(15,3) = 7 1
try_keys(15,3) = 2
try_keys(15,3) = 15 10 6 1
try_keys(15,3) = 4 9 1
and for Experiment2 I repeated the same thing
try_keys(15,3) = 10 8 7 5 6 3
try_keys(15,3) = 12 2
try_keys(15,3) = 11 2
try_keys(15,3) = 6 7 8 1
try_keys(15,3) = 14 8 5 4 6 9 7 10 3
try_keys(15,3) = 5 3
try_keys(15,3) = 13 2
try_keys(15,3) = 4 2
try_keys(15,3) = 4 5 10 2
try_keys(15,3) = 14 10 5 2
So from the first and second experiments, the average number of tries needed to open the door
$\mu_{e1} = \frac{1}{10}(6+2+1+3+5+3+2+1+4+3) = 3$
$\mu_{e2}=\frac{1}{10}(6+2+2+4+9+2+2+2+4+4) = 3.7$
are within 25% and 7.5% of $\mu_{th}$, respectively.
Now, by the Law of large numbers, we would expect that if we ran try_keys(15,3) an infinite number of times then the average would converge to 4. So, to illustrate this law I implemented
average_try_keys(N,n,m), which runs try_keys(n,m) N times and computes an average. And I also implemented plot_average_try_keys(L,n,m), which plots average_try_keys(i*round(L/100),n,m) for integer
values of i from 1 to 100. So, here are the plots that I got from plot_average_try_keys(L,15,3) using L values 1e2, 1e3, 1e4, 1e5 and 1e6.
And here is the code that I used to generate these plots.
function [ s ] = try_keys(n,m)
%There are n keys which will be
%represented by numbers 1,2,...,n.
%There are m correct keys which
%will be represented by numbers 1,2,...,m,
%where m is less than or equal to n.
if( m > n || n <= 0 || m <= 0)
    error('try_keys: require 0 < m <= n');
end
%s is the sequence of keys drawn,
%where s(1) is the first key
%drawn, s(2) the second, and so on.
%So at most we will try a total of n-m+1 keys.
%We are assuming that the keys are
%uniformly distributed, so we will be
%using randi(n), but we won't be accepting
%keys which we have already drawn.
s(1) = randi(n);
%We are done when we find a correct key, so
%we check to see if the first key is
%correct. If not, then we get another key, and so on.
if( size(find( s <= m ),2) ~= 0)
    done = 1;
else
    done = 0;
end
draw = 1;
while( done == 0)
    isnewkey = 0;
    while(isnewkey == 0)
        newkey = randi(n);
        if(size(find(s == newkey),2) == 0)
            s(draw + 1) = newkey;
            isnewkey = 1;
        end
    end
    if( size(find( s <= m ),2) ~= 0)
        done = 1;
    end
    draw = draw + 1;
end
function mu = average_try_keys(N,n,m)
s = zeros(1,N);
for i=1:N
    s(i) = size(try_keys(n,m),2);
end
mu = mean(s);
function plot_average_try_keys(L,n,m)
Num_points = 100;
expected_value = (n+1)/(m+1);
y = zeros(1,Num_points);
x = (1:Num_points)*round(L/Num_points);
for i=1:Num_points
    y(i) = average_try_keys(i*round(L/Num_points),n,m);
end
plot(x, y, x, expected_value*ones(1,Num_points));
h = legend('averages','expected value',2);
Random Keys
Posted: March 28, 2011 in examples, probability
Tags: binomial coefficient, expectation, fundamental counting principle, math, probability, probability mass function
Given $n$ possible keys, all equally likely to be chosen, if only 3 keys are correct, then on average how many keys would I have to try to find a correct one?
First let's try to clarify the problem by picturing the following scenario. We have a bucket with $n$ keys and we would like to open a door but only 3 keys will work. We randomly choose a key and try
to open the door. If the door opens then we are done; but if it doesn’t open then we randomly choose another key and so on. Now if we were to attempt to open this door then we would expect to do so
after trying $m_1$ keys, where $m_1$ is some number between 1 and $n-2$. So if a friend were to attempt to open this door then we would expect some number $m_2$ between 1 and $n-2$, and from a kth
friend we would get some number $m_k$, also between 1 and $n-2$. Hence, we could argue that on average we would need to try $\frac{1}{k} \sum_{i = 1}^k m_i$ keys to find the correct one, but
each $m_i$ is a random variable so if we were to repeat this experiment then we might get a different average! However, if $k$ is a large number then we would expect any experimental averages to be
close to each other, and if we let $k$ approach infinity then these averages would converge to the expected value.
So we are interested in calculating the expected value but first we need to derive a probability mass function which would tell us the probability of finding the correct key on the kth try, $P(X =
k).$ Then the expected value will be given by
$\displaystyle E[X] = \sum_{k = 1}^{n-2}k P(X = k) .$
For $P(X = 1),$ we are interested in
P(finding correct key on 1st try)
which is given by
$P(X = 1) = \frac{\begin{array}{c}\#~of~ ways~ in~ which \\ 1st~ choice~ could ~be~ a~ correct~ key \end{array}}{\begin{array}{c} \#~ of~ ways~ in~ which \\ 1st~ key~ could~ be~ chosen \end{array}}.$
Since there are 3 correct keys then there are 3 ways in which the 1st choice could be a correct key. And, since there are $n$ keys then there are $n$ ways in which 1st key could be chosen. So
$P(X = 1) = \displaystyle \frac{3}{n}.$
For $P(X = 2),$ we are interested in
P(finding correct key on 2nd try)
which can be thought of as
P(1st key was not correct and 2nd key is correct)
which is given by
$P(X = 2) = \frac{ \begin{array}{c} \#~of~ ways~ in~ which \\ 1st~ choice~ could ~be~ incorrect\\ and~2nd~correct \end{array}}{\begin{array}{c} \#~ of~ ways~ in~ which \\ 1st~and~2nd~ keys~ could~
be~ chosen\\ \end{array}}.$
Since there are $n-3$ ways in which 1st choice could be incorrect and 3 ways in which 2nd choice could be correct then by the fundamental principle of counting there are $(n-3)3$ ways in which 1st
key could be incorrect and 2nd correct. Similarly, since there are $n$ ways in which the 1st key could be chosen and $(n-1)$ ways in which the 2nd key could be chosen, then there are $n(n-1)$ ways in which
1st and 2nd keys could be chosen. So,
$P(X = 2) = \displaystyle \frac{(n-3)3}{n(n-1)}.$
It follows that
$P(X = 3) = \displaystyle \frac{(n-3)(n-3 - 1)3}{n(n-1)(n-2)}$
so in general
$P(X = k) = \displaystyle \frac{(n-3)(n-3 -1) \cdots (n-3 - (k-2))3}{n(n-1)(n-2) \cdots (n - (k-1))}$
which can be simplified to
$P(X = k) = \displaystyle \frac{\displaystyle \binom{n-3}{k-1}3}{\displaystyle k\binom{n}{k}}$
where the binomial coefficient $\binom{N}{i}$ is defined to be the coefficient of $x^i$ in the polynomial expansion of $(1 + x)^N.$
So the average number of keys we would have to try, to find a correct key would be given by
$\displaystyle E[X] = \sum_{k=1}^{n-2} \frac{\displaystyle \binom{n-3}{k-1}3}{\displaystyle \binom{n}{k}}$
which results in
$\displaystyle E[X] = \frac{n+1}{4}.$ | {"url":"http://windandmrug.wordpress.com/","timestamp":"2014-04-19T22:06:01Z","content_type":null,"content_length":"50483","record_id":"<urn:uuid:5bc7a988-0e6c-4af9-a030-fcff8b613fa8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
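As a quick check of this result (a Python sketch, not part of the original post; the function name `expected_tries` is my own), the expectation $E[X] = \sum_k k\,P(X=k)$ can be computed exactly with rational arithmetic, since the factor $k$ cancels against the denominator of the PMF:

```python
from fractions import Fraction
from math import comb

def expected_tries(n, m=3):
    # E[X] = sum_{k=1}^{n-m+1} k * P(X = k), where
    # P(X = k) = C(n-m, k-1) * m / (k * C(n, k)).
    # The factor k cancels, leaving the sum below.
    return sum(Fraction(comb(n - m, k - 1) * m, comb(n, k))
               for k in range(1, n - m + 2))

print(expected_tries(15))      # 4, i.e. (15+1)/(3+1)
print(expected_tries(100, 3))  # 101/4
```

The `m` parameter also lets one check the general proposition $E[X] = \frac{n+1}{m+1}$ from the earlier post, e.g. `expected_tries(4, 2)` gives `5/3`.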
Analysis of Boolean Functions
Chapter 7 exercises
1. Suppose there is an $r$-query local tester for property $\mathcal{C}$ with rejection rate $\lambda$. Show that there is a testing algorithm which, given inputs $0 < \epsilon, \delta \leq 1/2$,
makes $O(\frac{r \log(1/\delta)}{\lambda \epsilon})$ (nonadaptive) queries to $f$ and satisfies the following:
□ If $f \in \mathcal{C}$ then the tester accepts with probability $1$.
□ If $f$ is $\epsilon$-far from $\mathcal{C}$ then the tester accepts with probability at most $\delta$.
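For intuition on where the query count in Exercise 1 comes from (an illustrative sketch, not part of the text; `repetitions` is a name I chose): repeating the base test $t = \lceil \ln(1/\delta)/(\lambda\epsilon) \rceil$ independent times still accepts members with probability $1$, while an $\epsilon$-far input survives with probability at most $(1-\lambda\epsilon)^t \leq e^{-\lambda\epsilon t} \leq \delta$, for a total of $O(\frac{r\log(1/\delta)}{\lambda\epsilon})$ queries:

```python
import math

# Number of independent repetitions of a local test with rejection
# rate lam needed so that an eps-far input is accepted with
# probability at most delta: (1 - lam*eps)^t <= exp(-lam*eps*t) <= delta.
def repetitions(lam, eps, delta):
    return math.ceil(math.log(1 / delta) / (lam * eps))

lam, eps, delta = 0.1, 0.05, 0.01
t = repetitions(lam, eps, delta)
assert (1 - lam * eps) ** t <= delta
print(t)  # 922
```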
2. Let ${\mathcal M} = \{(x, y) \in \{0,1\}^{2n}: x = y\}$, the property that a string’s first half matches its second half. Give a $2$-query local tester for ${\mathcal M}$ with rejection rate $1$.
(Hint: locally test that $x \oplus y = (0, 0, \dots, 0)$.)
3. Reduce the proof length in Examples 15 to $n-2$.
4. Verify the claim from Examples 12 regarding the $2$-query tester for the property that a string has all its coordinates equal. (Hint: use $\pm 1$ notation.)
5. Let ${\mathcal O} = \{w \in {\mathbb F}_2^n : w \text{ has an odd number of } 1\text{'s}\}$. Let $T$ be any $(n-1)$-query string testing algorithm which accepts every $w \in {\mathcal O}$ with
probability $1$. Show that $T$ in fact accepts every string $v \in {\mathbb F}_2^n$ with probability $1$ (even though $\mathrm{dist}(w,{\mathcal O}) = \frac{1}{n} > 0$ for half of all strings
$w$). Thus locally testing ${\mathcal O}$ requires $n$ queries.
6. Let $T$ be a $2$-query testing algorithm for functions $\{-1,1\}^n \to \{-1,1\}$. Suppose that $T$ accepts every dictator with probability $1$. Show that it also accepts $\mathrm{Maj}_
{n'}$ with probability $1$ for every odd $n' \leq n$. This shows that there is no $2$-query local tester for dictatorship assuming $n > 2$. (Hint: you’ll need to enumerate all predicates on up to
$2$ bits.)
7. For every $\alpha < 1$, show that there is no $(\alpha,1)$-Dictator-vs.-No-Notables test using Max-E$3$-Lin predicates. (Hint: consider large odd parities.)
8. a. Consider the following $3$-query testing algorithm for $f : \{0,1\}^n \to \{0,1\}$. Let ${\boldsymbol{x}}, \boldsymbol{y} \sim \{0,1\}^n$ be independent and uniformly random, define $\
boldsymbol{z} \in \{0,1\}^n$ by $\boldsymbol{z}_i = {\boldsymbol{x}}_i \wedge \boldsymbol{y}_i$ for each $i \in [n]$, and accept if $f({\boldsymbol{x}}) \wedge f(\boldsymbol{y}) = f(\
boldsymbol{z})$. Let $p_k$ be the probability that this test accepts a parity function $\chi_S : \{0,1\}^n \to \{0,1\}$ with $|S| = k$. Show that $p_0 = p_1 = 1$ and that in general $p_k \leq
\tfrac{1}{2} + 2^{-|S|}$. In fact, you might like to show that $p_k = \tfrac{1}{2} + (\frac{3}{4} - \frac{1}{4}(-1)^k)2^{-k}$. (Hint: it suffices to consider $k = n$ and then compute the
correlation of $\chi_{\{1,\dots, n\}} \wedge \chi_{\{n+1, \dots, 2n\}}$ with the bent function $\mathrm{IP}_{2n}$.)
b. Show how to obtain a $3$-query local tester for dictatorship by combining the following subtests: (i) the Odd BLR Test; (ii) the test from part (a).
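The closed form for $p_k$ in part (a) can be checked by exact enumeration over all pairs $x, y \in \{0,1\}^k$ for the full parity $\chi_{[k]}$ (an illustrative Python sketch, not part of the exercise set; `parity` and `p` are names I chose):

```python
from itertools import product

def parity(bits):
    # chi_S as a 0/1-valued function, for S = all k coordinates
    return sum(bits) % 2

def p(k):
    # Exact acceptance probability of the test
    # f(x) AND f(y) == f(x AND y) on f = chi_{[k]}.
    hits = 0
    points = list(product((0, 1), repeat=k))
    for x in points:
        for y in points:
            z = tuple(a & b for a, b in zip(x, y))
            hits += (parity(x) & parity(y)) == parity(z)
    return hits / len(points) ** 2

for k in range(5):
    closed_form = 1/2 + (3/4 - (1/4) * (-1)**k) * 2**(-k)
    assert abs(p(k) - closed_form) < 1e-12
print(p(0), p(1), p(2))  # 1.0 1.0 0.625
```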
9. Obtain the largest explicit rejection rate in Theorem 7 that you can. Can you improve your bound by doing the BLR and NAE Tests with probabilities other than $1/2, 1/2$?
10. a. Say that $A$ is an $(\alpha,\beta)$-distinguishing algorithm for Max-$\mathrm{CSP}(\Psi)$ if it outputs `YES’ on instances with value at least $\beta$ and outputs `NO’ on instances with value
strictly less than $\alpha$. (On each instance with value in $[\alpha, \beta)$, algorithm $A$ may have either output.) Show that if there is an efficient $(\alpha,\beta)$-approximation
algorithm for Max-$\mathrm{CSP}(\Psi)$ then there is also an efficient $(\alpha,\beta)$-distinguishing algorithm for Max-$\mathrm{CSP}(\Psi)$.
b. Consider Max-$\mathrm{CSP}(\Psi)$, where $\Psi$ be a class of predicates which is closed under restrictions (to nonconstant functions); e.g., Max-$3$-Sat. Show that if there is an efficient $
(1,1)$-distinguishing algorithm then there is also an efficient $(1,1)$-approximation algorithm. (Hint: try out all labels for the first variable and use the distinguisher.)
11. a. Let $\phi$ be a CNF of size $s$ and width $w \geq 3$ over variables $x_1, \dots, x_n$. Show that there is an ``equivalent'' CNF $\phi'$ of size at most $(w-2)s$ and width $3$ over the
variables $x_1, \dots, x_n$ plus auxiliary variables $\Pi_1, \dots, \Pi_\ell$, with $\ell \leq (w-3)s$. Here ``equivalent'' means that for every $x$ such that $\phi(x) = \mathsf{True}$ there
exists $\Pi$ such that $\phi'(x,\Pi) = \mathsf{True}$; and, for every $x$ such that $\phi(x) = \mathsf{False}$ we have $\phi'(x,\Pi) = \mathsf{False}$ for all $\Pi$.
b. Extend the above so that every clause in $\phi'$ has width exactly $3$ (the size may increase by $O(s)$).
12. Suppose there exists an $r$-query PCPP reduction $\mathcal{R}_1$ with rejection rate $\lambda$. Show that there exists a $3$-query PCPP reduction $\mathcal{R}_2$ with rejection rate at least $\
lambda/(r2^r)$. The proof length of $\mathcal{R}_2$ should be at most $r2^r\cdot m$ plus the proof length of $\mathcal{R}_1$ (where $m$ is the description-size of $\mathcal{R}_1$'s output) and
the predicates output by the reduction should all be logical ORs applied to exactly three literals. (Hint: Exercises 4.1, 11.)
13. a. Give a polynomial-time algorithm $R$ which takes as input a general boolean circuit $C$ and outputs a width-$3$ CNF formula $\phi$ with the following guarantee: $C$ is satisfiable if and only
if $\phi$ is satisfiable. (Hint: introduce a variable for each gate in $C$.)
b. The previous exercise in fact formally justifies the following statement: ``$(1,1)$-distinguishing Max-$3$-Sat is $\mathsf{NP}$-hard''. (See Exercise 10 for the definition of $(1,1)
$-distinguishing.) Argue that indeed, if $(1,1)$-distinguishing (or $(1,1)$-approximating) Max-$3$-Sat is in polynomial time then so is Circuit-Sat.
c. Prove Theorem 34. (Hint: Exercise 7.11(b).)
14. Describe an efficient $(1,1)$-approximation algorithm for Max-Cut.
15. a. Let $H$ be any subspace of ${\mathbb F}_2^n$ and let ${\mathcal H} = \{\chi_\gamma : {\mathbb F}_2^n \to \{-1,1\} \mid \gamma \in H^\perp\}$. Give a $3$-query local tester for ${\mathcal H}$
with rejection rate $1$. (Hint: similar to BLR, but with $\langle \varphi_H * f, f * f \rangle$.)
b. Generalize to the case that $H$ is any affine subspace of ${\mathbb F}_2^n$.
16. Let $A$ be any affine subspace of ${\mathbb F}_2^n$. Construct a $3$-query, length-$2^n$ PCPP system for $A$ with rejection rate a positive universal constant. (Hint: given $w \in {\mathbb F}_2^
n$, the tester should expect the proof $\Pi \in \{-1,1\}^{2^n}$ to encode the truth table of $\chi_w$. Use Exercise 15 and also a consistency check based on local correcting of $\Pi$ at $e_{\
boldsymbol{i}}$, where $\boldsymbol{i} \in [n]$ is uniformly random.)
17. a. Give a $3$-query, length-$O(n)$ PCPP system (with rejection rate a positive universal constant) for the class $\{w \in {\mathbb F}_2^n : \mathrm{IP}_{n}(w) = 1\}$, where $\mathrm{IP}_{n}$ is
the inner product mod $2$ function ($n$ even).
b. Do the same for the complete quadratic function $\mathrm{CQ}_n$ from Exercise 1.1.
18. In this exercise you will prove Theorem 20.
a. Let $D \in {\mathbb F}_2^{n \times n}$ be a nonzero matrix and suppose ${\boldsymbol{x}}, \boldsymbol{y} \sim {\mathbb F}_2^n$ are uniformly random and independent. Show that $\mathop{\bf Pr}
[\boldsymbol{y}^\top D {\boldsymbol{x}} \neq 0] \geq \frac14$.
b. Let $\gamma \in {\mathbb F}_2^n$ and $\Gamma \in {\mathbb F}_2^{n \times n}$. Suppose ${\boldsymbol{x}}, \boldsymbol{y} \sim {\mathbb F}_2^n$ are uniformly random and independent. Show that $
\mathop{\bf Pr}[(\gamma^\top {\boldsymbol{x}})(\gamma^\top \boldsymbol{y}) = \Gamma \bullet ({\boldsymbol{x}} \boldsymbol{y}^\top)]$ is $1$ if $\Gamma = \gamma \gamma^{\top}$ and is at most $
\frac34$ otherwise. Here we use the notation $B \bullet C = \sum_{i,j} B_{ij} C_{ij}$ for matrices $B, C \in {\mathbb F}_2^{n \times n}$.
c. Suppose you are given query access to two functions $\ell : {\mathbb F}_2^n \to {\mathbb F}_2$ and $q : {\mathbb F}_2^{n \times n} \to {\mathbb F}_2$. Give a $4$-query testing algorithm with
the following two properties (for some universal constant $\lambda > 0$): (i) if $\ell = \chi_\gamma$ and $q = \chi_{\gamma \gamma^\top}$ for some $\gamma \in {\mathbb F}_2^n$, the test
accepts with probability $1$; (ii) for all $0 \leq \epsilon \leq 1$, if the test accepts with probability at least $1 - \lambda\cdot \epsilon$ then there exists some $\gamma \in {\mathbb F}_2^
n$ such that $\ell$ is $\epsilon$-close to $\chi_\gamma$ and $q$ is $\epsilon$-close to $\chi_{\gamma \gamma^\top}$. (Hint: apply the BLR Test to $\ell$ and $q$; and, use part (b) with local
correcting on $q$.)
d. Let $L$ be a list of homogenous degree-$2$ polynomial equations over variables $w_1, \dots, w_n \in {\mathbb F}_2$. (Each equation is of the form $\sum_{i,j=1}^n c_{ij} w_i w_j = b$ for
constants $b, c_{ij} \in {\mathbb F}_2$; we remark that $w_i^2 = w_i$.) Define the string property ${\mathcal L} = \{w \in {\mathbb F}_2^n : w \text{ satisfies all equations in L}\}$. Give a
$4$-query, length-$(2^n + 2^{n^2})$ PCPP system for ${\mathcal L}$ (with rejection rate a positive universal constant). (Hint: the tester should expect the truth table of $\chi_w$ and $\chi_
{ww^\top}$. You will need part (c) as well as Exercise 15 applied to “$q$”.)
e. Complete the proof of Theorem 20. (Hints: given $w \in \{0,1\}^n$, the tester should expect a proof consisting of all gate values $\bar{w} \in \{0,1\}^{\mathrm{size}(C)}$ in $C$’s computation
on $w$, as well as truth tables of $\chi_{\bar{w}}$ and $\chi_{\bar{w}\bar{w}^\top}$. Show that $\bar{w}$ being a valid computation of $C$ is encodable with a list of homogeneous degree-$2$
polynomial equations. Add a consistency check between $w$ and $\bar{w}$ using local correcting, and reduce the number of queries to $3$ using Exercise 12.)
19. Verify the connection between $\mathrm{Opt}({\mathcal P})$ and $C$’s satisfiability stated in the proof sketch of Theorem 36. (Hint: every string $w$ is $1$-far from the empty property.)
20. A randomized assignment for an instance ${\mathcal P}$ of a CSP over domain $\Omega$ is a mapping $\boldsymbol{F}$ which labels each variable in $V$ with a probability distribution over domain
elements. Given a constraint $(S,\psi)$ with $S = (v_1, \dots, v_r)$, we write $\psi(\boldsymbol{F}(S)) \in [0,1]$ for the expected value of $\psi(\boldsymbol{F}(v_1), \dots, \boldsymbol{F}(v_r))
$. This is simply the probability that $\psi$ is satisfied when one actually draws from the domain-distributions assigned by $\boldsymbol{F}$. Finally, we define the value of $\boldsymbol{F}$ to
be $\mathrm{Val}_{\mathcal P}(\boldsymbol{F}) = \mathop{\bf E}_{(\boldsymbol{S}, \boldsymbol{\psi}) \sim {\mathcal P}}[\boldsymbol{\psi}(\boldsymbol{F}(\boldsymbol{S}))]$.
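Concretely, for domain $\Omega = \{0,1\}$ the value $\psi(\boldsymbol{F}(S))$ can be computed by summing over the product distribution on the scope. The sketch below (a single OR predicate stands in for a full instance; names are illustrative) recovers the familiar fact that a uniformly random assignment satisfies an E$3$-Sat clause with probability $\frac78$.

```python
import itertools

def constraint_value(psi, probs):
    """Expected value of predicate psi when variable i is independently 1
    with probability probs[i] (domain {0, 1})."""
    value = 0.0
    for bits in itertools.product((0, 1), repeat=len(probs)):
        p = 1.0
        for b, q in zip(bits, probs):
            p *= q if b else 1.0 - q
        value += p * psi(*bits)
    return value

# Uniform randomized assignment on a 3-ary OR (an E3-Sat clause):
or3 = lambda a, b, c: int(a or b or c)
assert abs(constraint_value(or3, [0.5, 0.5, 0.5]) - 7 / 8) < 1e-12
```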
a. Suppose that $A$ is a deterministic algorithm which produces a randomized assignment of value $\alpha$ on a given instance ${\mathcal P}$. Show a simple modification to $A$ which makes it a
randomized algorithm which produces a (normal) assignment whose value is $\alpha$ in expectation. (Thus, in constructing approximation algorithms we may allow ourselves to output randomized assignments.)
b. Let $A$ be the deterministic Max-E$3$-Sat algorithm which on every instance outputs the randomized assignment which assigns the uniform distribution on $\{0,1\}$ to each variable. Show that
this is a $(\frac78,\beta)$-approximation algorithm for any $\beta$. Show also that the same algorithm is a $(\frac12,\beta)$-approximation algorithm for Max-$3$-Lin.
c. When the domain $\Omega$ is $\{-1,1\}$, we may model a randomized assignment as a function $f : V \to [-1,1]$; here $f(v) = \mu$ is interpreted as the unique probability distribution on $\
{-1,1\}$ which has mean $\mu$. Now given a constraint $(S,\psi)$ with $S = (v_1, \dots, v_r)$, show that the value of $f$ on this constraint is in fact $\psi(f(v_1), \dots, f(v_r))$, where we
identify $\psi : \{-1,1\}^r \to \{0,1\}$ with its multilinear (Fourier) expansion. (Hint: Exercise 1.5.)
d. Let $\Psi$ be a collection of predicates over domain $\{-1,1\}$. Let $\nu = \min_{\psi \in \Psi} \{\widehat{\psi}(\emptyset)\}$. Show that outputting the randomized assignment $f \equiv 0$ is
an efficient $(\nu, \beta)$-approximation algorithm for Max-$\mathrm{CSP}(\Psi)$.
21. Let $\boldsymbol{F}$ be a randomized assignment of value $\alpha$ for CSP instance ${\mathcal P}$ (as in Exercise 20). Give an efficient deterministic algorithm which outputs a usual assignment
$F$ of value at least $\alpha$. (Hint: try all possible labellings for the first variable and compute the expected value that would be achieved if $\boldsymbol{F}$ were used for the remaining
variables. Pick the best label for the first variable and repeat.)
22. Given a local tester for functions $f : \{-1,1\}^n \to \{-1,1\}$, we can interpret it also as a tester for functions $f : \{-1,1\}^n \to [-1,1]$; simply view the tester as a CSP and view the
acceptance probability as the value of $f$ when treated as a randomized assignment (as in Exercise 20(c)). Equivalently, whenever the tester “queries” $f(x)$, imagine that what is returned is a
random bit $\boldsymbol{b} \in \{-1,1\}$ whose mean is $f(x)$. This interpretation completes Definition 38 of Dictator-vs.-No-Notables tests for functions $f : \{-1,1\}^n \to [-1,1]$ (see Remark
39). Given this definition, verify that the Håstad$_\delta$ Test is indeed a $(\tfrac{1}{2}, 1-\delta)$-Dictator-vs.-No-Notables test. (Hint: show that the formula for the probability that the
Håstad$_\delta$ test accepts $f$ still holds for functions $f : \{-1,1\}^n \to [-1,1]$. There is only one subsequent inequality which uses that $f$’s range is $\{-1,1\}$, and it still holds with
range $[-1,1]$.)
23. Let $\Psi$ be a finite set of predicates over domain $\Omega = \{-1,1\}$ which is closed under negating variables. (An example is the scenario of Max-$\psi$ from Remark 24.) In this exercise you
will show that Dictator-vs.-No-Notables tests using $\Psi$ may assume $f : \{-1,1\}^n \to [-1,1]$ is odd without loss of generality.
a. Let $T$ be an $(\alpha,\beta)$-Dictator-vs.-No-Notables test using predicate set $\Psi$ which works under the assumption that $f : \{-1,1\}^n \to [-1,1]$ is odd. Modify $T$ as follows:
whenever it is about to query $f(x)$, with probability $\tfrac{1}{2}$ let it use $f(x)$ and with probability $\tfrac{1}{2}$ let it use $-f(-x)$. Call the modified test $T'$. Show that the
probability $T'$ accepts an arbitrary $f : \{-1,1\}^n \to [-1,1]$ is equal to the probability $T$ accepts $f^\mathrm{odd}$.
b. Prove that $T'$ is an $(\alpha,\beta)$-Dictator-vs.-No-Notables test using predicate set $\Psi$ for functions $f : \{-1,1\}^n \to [-1,1]$.
24. This problem is similar to Exercise 23 in that it shows you may assume that Dictator-vs.-No-Notables tests are testing “smoothed” functions of the form $\mathrm{T}_{1-\delta} g$ for $g : \{-1,1\}
^n \to [-1,1]$, so long as you are willing to lose $O(\delta)$ in the probability that dictators are accepted.
a. Let $U$ be an $(\alpha,\beta)$-Dictator-vs.-No-Notables test using an arity-$r$ predicate set $\Psi$ (over domain $\{-1,1\}$) which works under the assumption that the function $f : \{-1,1\}^
n \to [-1,1]$ being tested is of the form $\mathrm{T}_{1-\delta} g$ for $g : \{-1,1\}^n \to [-1,1]$. Modify $U$ as follows: whenever it is about to query $f(x)$, let it draw $\boldsymbol{y} \
sim N_{1-\delta}$ and use $f(\boldsymbol{y})$ instead. Call the modified test $U'$. Show that the probability $U'$ accepts an arbitrary $g : \{-1,1\}^n \to [-1,1]$ is equal to the probability
$U$ accepts $\mathrm{T}_{1-\delta} g$.
b. Prove that $U'$ is an $(\alpha,\beta - r\delta/2)$-Dictator-vs.-No-Notables test using predicate set $\Psi$.
25. Give a slightly alternate proof of Theorem 43 by using the original BLR Test analysis and applying Exercises 23, 24.
26. Show that when using Theorem 41, it suffices to have a “Dictators-vs.-No-Influentials test”, meaning replacing $\mathbf{Inf}_i^{(1-\epsilon)}[f]$ in Definition 38 with just $\mathbf{Inf}_i[f]$.
(Hint: Exercise 24.)
27. For $q \in {\mathbb N}^+$, Unique-Games($q$) refers to the arity-$2$ CSP with domain $\Omega = [q]$ in which all $q!$ “bijective” predicates are allowed; here $\psi$ is “bijective” if there is a
bijection $\pi : [q] \to [q]$ such that $\psi(i,j) = 1$ iff $\pi(j) = i$. Show that $(1,1)$-approximating Unique-Games($q$) can be done in polynomial time. (The Unique Games Conjecture of Khot
[Kho02] states that for all $\delta > 0$ there exists $q \in {\mathbb N}^+$ such that $(\delta, 1-\delta)$-approximating Unique-Games($q$) is $\mathsf{NP}$-hard.)
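For concreteness, one way to see the $(1,1)$-approximation: within a connected component, fixing one vertex's label forces every other label along the bijections, so trying all $q$ labels at one root per component either produces a fully satisfying assignment or certifies that none exists. The sketch below assumes a particular illustrative encoding (a dictionary from scopes $(u,v)$ to permutations $\pi$ with $\mathrm{label}(u) = \pi(\mathrm{label}(v))$).

```python
from collections import deque

def solve_unique_game(n, q, constraints):
    """Decide full satisfiability of a Unique-Games(q) instance and return
    a satisfying labelling, or None.  `constraints` maps a scope (u, v) to
    a permutation pi, encoded as a tuple, with the meaning
    label[u] == pi[label[v]].  (This encoding is an illustrative choice.)"""
    adj = {u: [] for u in range(n)}
    for (u, v), pi in constraints.items():
        inv = [0] * q
        for j in range(q):
            inv[pi[j]] = j                  # inv = pi^{-1}
        adj[u].append((v, tuple(inv)))      # label[v] forced to inv[label[u]]
        adj[v].append((u, pi))              # label[u] forced to pi[label[v]]
    label = [None] * n
    for root in range(n):
        if label[root] is not None:
            continue
        for guess in range(q):              # try each label for the root...
            trial, queue, ok = {root: guess}, deque([root]), True
            while queue and ok:
                u = queue.popleft()
                for v, f in adj[u]:         # ...and propagate along edges
                    if v not in trial:
                        trial[v] = f[trial[u]]
                        queue.append(v)
                    elif trial[v] != f[trial[u]]:
                        ok = False          # inconsistent: wrong guess
                        break
            if ok:
                for u, l in trial.items():
                    label[u] = l
                break
        else:
            return None                     # no guess works: unsatisfiable
    return label

# label[0] = 1 - label[1] and label[1] = label[2]: satisfiable.
sat = solve_unique_game(3, 2, {(0, 1): (1, 0), (1, 2): (0, 1)})
assert sat is not None and sat[0] != sat[1] and sat[1] == sat[2]
```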
28. In this problem you will show that Corollary 44 actually follows directly from Corollary 45.
a. Consider the ${\mathbb F}_2$-linear equation $v_1 + v_2 + v_3 = 0$. Exhibit a list of $4$ clauses (i.e., logical ORs of literals) over the variables such that if the equation is satisfied
then so are all $4$ clauses, but if the equation is not satisfied then at most $3$ of the clauses are. Do the same for the equation $v_1 + v_2 + v_3 = 1$.
b. Suppose that for every $\delta > 0$ there is an efficient algorithm for $(\frac78 + \delta, 1 - \delta)$-approximating Max-E$3$-Sat. Give, for every $\delta > 0$, an efficient algorithm for $
(\frac12 + \delta, 1 - \delta)$-approximating Max-E$3$-Lin.
c. Alternatively, show how to transform any $(\alpha,\beta)$-Dictator-vs.-No-Notables test using Max-E$3$-Lin predicates into a $(\frac34 + \frac14 \alpha, \beta)$-Dictator-vs.-No-Notables test
using Max-E$3$-Sat predicates.
29. In this exercise you will prove Theorem 42.
a. Recall the predicate $\mathrm{OXR}$ from Exercise 1.1. Fix a small $0 < \delta < 1$. The remainder of the exercise will be devoted to constructing a $(\frac34 + \delta/4,1)
$-Dictator-vs.-No-Notables test using Max-$\mathrm{OXR}$ predicates. Show how to convert this to a $(\frac78 + \delta/8,1)$-Dictator-vs.-No-Notables test using Max-E$3$-Sat predicates. (Hint:
similar to Exercise 28(c).)
b. By Exercise 23, it suffices to construct a $(\frac34 + \delta/4,1)$-Dictator-vs.-No-Notables test using the $\mathrm{OXR}$ predicate assuming $f : \{-1,1\}^n \to [-1,1]$ is odd. Håstad tests
$\mathrm{OXR}(f({\boldsymbol{x}}), f(\boldsymbol{y}), f(\boldsymbol{z}))$ where ${\boldsymbol{x}}, \boldsymbol{y}, \boldsymbol{z} \in \{-1,1\}^n$ are chosen randomly as follows: For each $i \
in [n]$ (independently), with probability $1-\delta$ choose $({\boldsymbol{x}}_i, \boldsymbol{y}_i, \boldsymbol{z}_i)$ uniformly subject to ${\boldsymbol{x}}_i\boldsymbol{y}_i\boldsymbol{z}_i
= -1$, and with probability $\delta$ choose $({\boldsymbol{x}}_i, \boldsymbol{y}_i, \boldsymbol{z}_i)$ uniformly subject to $\boldsymbol{y}_i\boldsymbol{z}_i = -1$. Show that the probability
this test accepts an odd $f : \{-1,1\}^n \to [-1,1]$ is $$\label{eqn:oxr-bound} \tfrac34 - \tfrac14 \mathbf{Stab}_{-\delta}[f] - \tfrac14\sum_{S \subseteq [n]} \widehat{f}(S)^2 \mathop{\bf E}
_{\boldsymbol{J} \subseteq_{1-\delta} S}[(-1)^{|\boldsymbol{J}|}\widehat{f}(\boldsymbol{J})],$$ where $\boldsymbol{J} \subseteq_{1-\delta} S$ denotes that $\boldsymbol{J}$ is a $(1-\delta)
$-random subset of $S$. In particular, show that dictators are accepted with probability $1$.
c. Upper-bound \eqref{eqn:oxr-bound} by \[ \tfrac34 + \delta/4 + \tfrac14 \sqrt{(1-\delta)^t} + \tfrac14 \sum_{|S| \leq t} \widehat{f}(S)^2 \mathop{\bf E}_{\boldsymbol{J} \subseteq_{1-\delta} S}
[|\widehat{f}(\boldsymbol{J})|], \] or something stronger. (Hint: Cauchy–Schwarz.)
d. Complete the proof that this is a $(\frac34 + \delta/4, 1)$-Dictator-vs.-No-Notables test, assuming $f$ is odd.
30. In this exercise you will prove Theorem 41. Assume there exists an $(\alpha,\beta)$-Dictator-vs.-No-Notables test $T$ using predicate set $\Psi$ over domain $\{-1,1\}$. We define a certain
efficient algorithm $R$, which takes as input an instance $\mathcal{G}$ of Unique-Games($q$) and outputs an instance ${\mathcal P}$ of Max-$\mathrm{CSP}(\Psi)$. For simplicity we refer to the
variables $V$ of the Unique-Games instance $\mathcal{G}$ as “vertices” and its constraints as “edges”. We also assume that when $\mathcal{G}$ is viewed as an undirected graph, it is regular. (By
a result of Khot–Regev [KR08] this assumption is without loss of generality for the purposes of the Unique Games Conjecture.) The Max-$\mathrm{CSP}(\Psi)$ instance ${\mathcal P}$ output by
algorithm $R$ will have variable set $V \times \{-1,1\}^{q}$ and we write assignments for it as collections of functions $(f_v)_{v \in V}$, where each $f_v : \{-1,1\}^q \to \{-1,1\}$. The draw of a
random constraint for ${\mathcal P}$ is defined as follows:
□ Choose $\boldsymbol{u} \in V$ uniformly at random.
□ Draw a random constraint from the test $T$; call it $\boldsymbol{\psi}(f({\boldsymbol{x}}^{(1)}), \dots, f({\boldsymbol{x}}^{(\boldsymbol{r})}))$.
□ Choose $\boldsymbol{r}$ random “neighbours” $\boldsymbol{v}_1, \dots, \boldsymbol{v}_{\boldsymbol{r}}$ of $\boldsymbol{u}$ in $\mathcal{G}$, independently and uniformly. (By a neighbour of $\
boldsymbol{u}$, we mean a vertex $v$ such that either $(\boldsymbol{u},v)$ or $(v,\boldsymbol{u})$ is the scope of a constraint in $\mathcal{G}$.) Since $\mathcal{G}$’s constraints are
bijective, we may assume that the associated scopes are $(\boldsymbol{u}, \boldsymbol{v}_1), \dots, (\boldsymbol{u}, \boldsymbol{v}_{\boldsymbol{r}})$ with bijections $\boldsymbol{\pi}_{1}, \
dots, \boldsymbol{\pi}_{\boldsymbol{r}} : [q] \to [q]$.
□ Output the constraint $\boldsymbol{\psi}(f_{\boldsymbol{v}_1}^{\boldsymbol{\pi}_1}({\boldsymbol{x}}^{(1)}), \dots, f_{\boldsymbol{v}_{\boldsymbol{r}}}^{\boldsymbol{\pi}_{\boldsymbol{r}}}({\boldsymbol{x}}^{(\boldsymbol{r})}))$, where we use the permutation notation $f^\pi$ from Exercise 1.29.
a. Suppose $\mathrm{Opt}(\mathcal{G}) \geq 1-\delta$. Show that there is an assignment for ${\mathcal P}$ with value at least $\beta - O(\delta)$ in which each $f_v$ is a dictator. (You will use
regularity of $\mathcal{G}$ here.) Thus $\mathrm{Opt}({\mathcal P}) \geq \beta - O(\delta)$.
b. Given an assignment $F = (f_v)_{v \in V}$ for ${\mathcal P}$, introduce for each $u \in V$ the function $g_u : \{-1,1\}^q \to [-1,1]$ defined by $g_u(x) = \mathop{\bf E}_{\boldsymbol{v}}[f_{\boldsymbol{v}}^{\boldsymbol{\pi}}(x)]$, where $\boldsymbol{v}$ is a random neighbour of $u$ in $\mathcal{G}$ and $\boldsymbol{\pi}$ is the associated constraint's permutation.
Show that $\mathrm{Val}_{{\mathcal P}}(F) = \mathop{\bf E}_{\boldsymbol{u} \in V}[\mathrm{Val}_{T}(g_{\boldsymbol{u}})]$ (using the definition from Exercise 22).
c. Fix an $\epsilon > 0$ and suppose that $\mathrm{Val}_{{\mathcal P}}(F) \geq s + 2\lambda(\epsilon)$, where $\lambda$ is the “rejection rate” associated with $T$. Show that for at least a $\
lambda(\epsilon)$-fraction of vertices $u \in V$, the set $\text{NbrNotable}_u = \{i \in [q] : \mathbf{Inf}_i^{(1-\epsilon)}[g_u] > \epsilon\}$ is nonempty.
d. Show that for any $u \in V$ and $i \in [q]$ we have $\mathop{\bf E}[\mathbf{Inf}^{(1-\epsilon)}_{\boldsymbol{\pi}^{-1}(i)}[f_{\boldsymbol{v}}]] \geq \mathbf{Inf}_{i}^{(1-\epsilon)}[g_u]$,
where $\boldsymbol{v}$ is a random neighbour of $u$ and $\boldsymbol{\pi}$ is the associated constraint’s permutation. (Hint: Exercise 2.41.)
e. For $v \in V$, define also the set $\text{Notable}_v = \{i \in [q] : \mathbf{Inf}_i^{(1-\epsilon)}[f_v] \geq \epsilon/2\}$. Show that if $i \in \text{NbrNotable}_u$ then $\mathop{\bf Pr}_{\
boldsymbol{v}}[\boldsymbol{\pi}^{-1}(i) \in \text{Notable}_{\boldsymbol{v}}] \geq \epsilon/2$, where $\boldsymbol{v}$ and $\boldsymbol{\pi}$ are as in the previous part.
f. Show that for every $u \in V$ we have $|\text{Notable}_u \cup \text{NbrNotable}_u| \leq O(1/\epsilon^2)$. (Hint: Proposition 2.53.)
g. Consider the following randomized assignment for $\mathcal{G}$ (see Exercise 20): for each $u \in V$, give it the uniform distribution on $\text{Notable}_u \cup \text{NbrNotable}_u$ (if this
set is nonempty; otherwise, give it an arbitrary labelling). Show that this randomized assignment has value $\Omega(\lambda(\epsilon) \epsilon^5)$.
h. Conclude Theorem 41, where “UG-hard” means “$\mathsf{NP}$-hard assuming the Unique Games Conjecture”.
31. Technically, Exercise 30 has a small bug: since a Dictator-vs.-No-Notables test using predicate set $\Psi$ is allowed to use duplicate query strings in its predicates (see Remark 39), the
reduction in the previous exercise does not necessarily output instances of Max-$\mathrm{CSP}(\Psi)$ because our definition of CSPs requires that each scope consist of distinct variables. In this
exercise you will correct this bug. Let $M \in {\mathbb N}^+$ and suppose we modify the algorithm $R$ from Exercise 30 to a new algorithm $R'$, producing an instance ${\mathcal P}'$ with variable
set $V \times [M] \times \{-1,1\}^{q}$. We now think of assignments to ${\mathcal P}'$ as $M$-tuples of functions $f_v^1, \dots, f_v^M$, one tuple for each $v \in V$. Further, thinking of ${\mathcal P}$ as a function tester, we have ${\mathcal P}'$ act as follows: whenever ${\mathcal P}$ is about to query $f_v(x)$, we have ${\mathcal P}'$ instead query $f_v^{\boldsymbol{j}}(x)$ for a
uniformly random $\boldsymbol{j} \in [M]$.
a. Show that $\mathrm{Opt}({\mathcal P}) = \mathrm{Opt}({\mathcal P}')$.
b. Show that if we delete all constraints in ${\mathcal P}'$ for which the scope contains duplicates, then $\mathrm{Opt}({\mathcal P}')$ changes by at most $1/M$.
c. Show that the deleted version of ${\mathcal P}'$ is a genuine instance of Max-$\mathrm{CSP}(\Psi)$. Since the constant $1/M$ can be arbitrarily small, this corrects the bug in Exercise 30’s
proof of Theorem 41.
Holbrook, MA Prealgebra Tutor
Find a Holbrook, MA Prealgebra Tutor
...As for teaching the language to others, I taught high-school Spanish full-time for two years, and I've tutored Spanish privately for several years. I have had a great time over the years
helping others learn Spanish, and I welcome the opportunity to help new students to build this fun and useful skill. When I first took the LSAT on my own, I scored in the 98th percentile.
26 Subjects: including prealgebra, English, reading, ESL/ESOL
...I have been Boston Globe Tennis Coach of the year. I teach tennis privately, play and umpire tennis professionally. I am a college professor in Psychology.
15 Subjects: including prealgebra, geometry, algebra 1, SAT math
...My approach is clear and student specific so each student can relate Pre-Algebra to their own life. Math learned in the elementary grades is the foundation upon which all other Math courses
are based. My concentration in Grad school on Mathematics Education prepared me to understand how Math is learned.
9 Subjects: including prealgebra, GRE, algebra 1, GED
...I look forward to hearing from you and cannot wait to assist you in obtaining the most from your education!As a Teach For America corps member in Alabama, I taught middle and high school math,
including pre-algebra, algebra I, geometry, and algebra II. In addition to passing the Praxis to teach ...
43 Subjects: including prealgebra, English, reading, writing
...Statistics offers many new concepts which, depending how it's taught, can be overwhelming at times. I have experience taking topics in statistics which students find challenging or
intimidating and placing them in an easier to understand context. I have taught math for an SAT prep company.
24 Subjects: including prealgebra, chemistry, calculus, physics
meb vs retirement
i have been hearing so many different stories. ok, i have presently 34 years of military service, 22 active duty and 12 reserve. i can retire at any time, and i qualify for the final pay; i came in the military in 1977, and I am presently an E-8. i was told the meb would not hurt my chances. going through meb for cervical spondylosis, lumbar spondylosis, asthma, bronchitis, osteoarthritis right hip, and left shoulder. I'm not sure if i will benefit from this. also i took a severance pay back in 1992 when i got out the first time, and i understand that i will have to pay that back first. Please give me some insight, thank you very much for your time.
Jason Perry Site Founder Staff Member
May 15, 2007
Trophy Points:
You may benefit from a military disability retirement if the disability percentage awarded is higher than your length of service calculation. If you have any unfitting condition, the minimum you
will be paid will be the same as the length of service (LOS) calculation. So, you may do better with a PEB, but you can't do any worse than the LOS calculation.
As far as paying back the severance pay, it may depend on what type of severance pay award you received (there are various types of authorities for severance pay, so to really analyze that
question, you have to look at the basis for the original payment).
Best of luck!
If you're talking about the VSI/SSB, I took the VSI in 1992 as well. The way it stands now, they will take a max of 40% of your retirement back until it is paid off, unless you can prove to them
you have a financial hardship. The first 3 months they cannot take anything back. You can find out more by Googling.
ok, very good, thank you so much for the quick response and good answers. i did google and the first 3 months you don't have to pay anything back, and it is at 40%, wow that still is a lot. is that also based on what i took after taxes? ok, getting back to the retirement: as far as retirement, using the Final Pay Calculator i put in the number of years i have, which are 33, and it gives me a retirement of 4100. before taxes. so my unfitting conditions are cervical spondylosis, and lumbar spondylosis, and asthma which won't qualify. please explain how i would do better taking a military disability retirement. Also if i don't do better could i still retire? Thanks so much for all the great information. As it stands now i'm taking my C&P Exams this week, at the local VA. I think this is where i'm kind of lost: if i decided to just retire, are you saying they would take 40% of my military retirement or 40% of VA Disability? Because on my va C&P exam i list a total of 16 unfitting conditions. Please give me some insight. hey guys thanks for all the help. thank you very much for your time.
Not sure about the taxes. Unless you were on full time active duty during the 12 years you were in the reserves, you may have put in the incorrect numbers. You need to add up your total points then divide by 360 to get the total years to put in the calculator. If I had 20 years active and 14 reserves which totaled 7900 points = 22 years, this would have given me a 55% retirement. My disability retirement was 70% vs 55% (less the 40% of the 70% subtracted for VSI). By taking a disability retirement, I received 15% more.
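The arithmetic being discussed (points-to-years conversion, then length-of-service vs. disability multipliers) can be sketched in a few lines. The 2.5%-per-year formula and 75% cap are common-case assumptions used only for illustration; verify against current rules before relying on them.

```python
def retirement_multipliers(points, disability_pct):
    """Convert reserve points to equivalent years (points / 360) and
    compare the length-of-service multiplier (assumed 2.5% per year,
    capped at 75%) against the disability percentage."""
    years = points / 360
    los = min(0.025 * years, 0.75)
    dis = min(disability_pct / 100, 0.75)
    return years, los, max(los, dis)

years, los, best = retirement_multipliers(7900, 70)
assert round(years) == 22          # 7900 points is about 22 years
assert round(los * 100) == 55      # the ~55% length-of-service figure
assert best == 0.70                # 70% disability retirement wins
```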
i have the computation of length of service and it says total creditable service years 33 which is like 80%
NO. Creditable service is based on active duty service. HOW MANY POINTS YOU HAVE will determine creditable service.
Kenosha Algebra 2 Tutor
Find a Kenosha Algebra 2 Tutor
...Finally, over the last 4 months since I moved to the Chicago area, I have tutored numerous students in subjects from ACT math (over 100 hours in the past two months) to geometry and algebra II
/ trigonometry, and high school biology to college level organic chemistry. I am quite flexible as a te...
26 Subjects: including algebra 2, chemistry, geometry, algebra 1
...My name is Priti, and I have a Bachelor's degree in Computer Engineering from University of Illinois, Chicago. I have over 7 years of experience in teaching mathematics. Some of the subjects in
which I have a niche are pre-algebra, algebra 1, geometry, algebra 2, trigonometry, and pre-calculus, and most of the honors/advanced courses listed.
11 Subjects: including algebra 2, calculus, geometry, trigonometry
...While I was pursuing my degree at Valparaiso, I volunteered at the Boy's and Girl's club in town for two years. I really enjoyed my time as a tutor there. Being able to help in multiple areas
is probably my main asset.
22 Subjects: including algebra 2, reading, geometry, biology
...I can help students with Chemistry and Math (pre-calculus, differential equations). I have worked as a volunteer tutor and have helped people working towards their GEDs. I have worked as a
graduate assistant when I was working towards my Masters degree in Chemistry at DePaul University and I use...
13 Subjects: including algebra 2, English, reading, chemistry
...I hold a bachelor of science in electrical engineering with emphasis in mathematics. I teach with solid math acumen, coupled with encouragement and positive reinforcement. This has proved
positive for my own kids and many students I have tutored.
18 Subjects: including algebra 2, geometry, algebra 1, ASVAB | {"url":"http://www.purplemath.com/kenosha_algebra_2_tutors.php","timestamp":"2014-04-17T07:25:21Z","content_type":null,"content_length":"23939","record_id":"<urn:uuid:a372327c-f16e-48bf-88fb-240cdb0fda71>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Conic sections
Next: Polynomials and polynomial functions Up: Pre-requisites Previous: Introduction of co-ordinates
In the co-ordinate plane one can study more general geometric figures than those described by lines. In this section we undertake a rigorous study of conic sections. In particular, we find geometric
criteria that distinguish the different conics. We also establish Steiner's construction of conic sections as the locus of intersection of a pair of rotating lines.
The first equation that is more complicated than the equation of a line as given above is one of the form
ax^2 + bxy + cy^2 + dx + ey + f = 0
where a, b and c are not all zero (in which case the equation would become that of a line). The locus of points (x, y) that satisfy this equation is called a conic or a conic section. By plotting the
corresponding curves we find that we have the following types of conics:
1. There are no solutions.
2. The solutions all lie on one line.
3. The solutions all lie on a pair of lines.
4. The conic lies within a bounded region of the plane (i. e. the conic is compact). This is called an ellipse (of which the circle is a special case).
5. The conic has two parts (i. e. the conic is disconnected).This is called a hyperbola.
6. The conic is connected and not compact. This is called a parabola.
We note that the first three types are distinguished without reference to order among numbers (or separation axioms in geometry) and so make sense over other fields. We will see below how we can
distinguish the other conics in a purely algebraic way.
Exercise 5 Find ways of distinguishing the different conics by looking at the equation. (Hint: Examine the discriminant b^2 - 4ac).
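Following the hint, the sign of the discriminant b^2 - 4ac already separates the three non-degenerate types. A small illustrative sketch (degenerate cases, i.e. empty loci, single lines and line pairs, need the remaining coefficients and are ignored here):

```python
def classify_conic(a, b, c):
    """Classify ax^2 + bxy + cy^2 + dx + ey + f = 0 by the discriminant
    of its quadratic part.  Degenerate loci (empty set, a line, a pair
    of lines) also depend on d, e, f and are not detected here."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return "ellipse"       # includes circles (b = 0, a = c)
    if disc == 0:
        return "parabola"
    return "hyperbola"

assert classify_conic(1, 0, 1) == "ellipse"     # x^2 + y^2 - 1 = 0
assert classify_conic(0, 0, 1) == "parabola"    # y^2 - x = 0
assert classify_conic(1, 0, -1) == "hyperbola"  # x^2 - y^2 - 1 = 0
```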
For any line ax + by + c = 0 with a non-zero, we can write the solutions in terms of one parameter as (-bt-c/a, t); similarly when b is not zero. We can also ``solve'' a conic. Let us suppose that
the conic is not of type (1), (2) or (3) above. Fix a point (x[0], y[0]) on the conic.
Exercise 6
We will find a parametric solution of a conic. (Hint: Use translation and scaling of co-ordinates to simplify the equations wherever possible).
1. Let (y - y[0]) = t(x - x[0]) be a line through this point. Show that there is at most one other point of the conic that lies on this line.
2. Find the co-ordinates of this point in terms of the constants a, b, c, d, e, f, x[0], y[0] and the parameter t.
3. Show that this parametric solution is not well defined at two values of t for a hyperbola.
4. Show that this parametric solution misses one point in the case of an ellipse or circle but is well-defined at all values of t.
5. Show that this parametric solution is not well defined for one value of t and misses one point or is well defined and misses no points on a parabola.
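Since (x[0], y[0]) is already one root of the quadratic obtained by substituting the line into the conic, the second intersection point follows from Vieta's formulas. The sketch below implements this directly; it assumes the leading coefficient a + bt + ct^2 is nonzero, i.e. t avoids the exceptional values described above.

```python
def second_point(coeffs, p0, t):
    """Second intersection of the conic ax^2 + bxy + cy^2 + dx + ey + f = 0
    with the line through p0 = (x0, y0) of slope t, where p0 lies on the
    conic.  Uses sum-of-roots = -B/A; assumes A = a + b*t + c*t^2 != 0."""
    a, b, c, d, e, f = coeffs
    x0, y0 = p0
    k = y0 - t * x0                        # the line is y = t*x + k
    A = a + b * t + c * t * t              # coefficient of x^2 after substitution
    B = b * k + 2 * c * t * k + d + e * t  # coefficient of x
    x1 = -B / A - x0                       # since x0 + x1 = -B/A
    return x1, t * x1 + k

# Unit circle x^2 + y^2 - 1 = 0 through (-1, 0): slope t = 1 gives (0, 1).
x, y = second_point((1, 0, 1, 0, 0, -1), (-1, 0), 1.0)
assert abs(x) < 1e-12 and abs(y - 1) < 1e-12
```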
This can be carried further through Steiner's construction as follows. Let (x[1], y[1]) be another point on the conic.
Exercise 7
Show that there are constants A, B, C, D, not all zero, so that for any point (x, y) of the conic the slopes s = (y - y[0])/(x - x[0]) and t = (y - y[1])/(x - x[1]) satisfy
s(Ct + D) = At + B.
Moreover, these constants are such that if we try to solve for s = t, we obtain no solutions when the conic is an ellipse (or circle), one solution for a parabola and two solutions for a hyperbola.
The geometric content of this is the statement that the conic is obtained as the locus of intersection of a pair of rotating lines based at (x[0], y[0]) and (x[1], y[1]) respectively with respective
slopes s and t related by s(Ct + D) = At + B.
Exercise 8 Prove the converse that such a locus is always a conic.
Kapil H. Paranjape 2001-01-20
Consumer Behavior And Family 1110 > James > Notes > today8.pptx | StudyBlue
Diversification Bias: The fear of focus Barriers to choosing environmental control Potential positive outcomes of planned focus Avoiding negative addictions Pursuing positive addictions Achieving
competitive mastery Barriers to planned focus Hyperbolic discounting Projection bias Diversification bias Focus Diversification bias: The fear of focus We hate losing options, even if they are bad
ones. We love diversification, even when it is pointless and costly. We avoid focusing, even if it is the only correct choice. But, controlling your decision environment means focusing your choices
and sometimes eliminating future options. Diversification bias: We hate losing options Experimental finding ?options that threaten to disappear cause decision makers to invest more effort and money
in keeping these options open, even when the options themselves seem to be of little interest? Shin (MIT) & Ariely (MIT), 2004, Keeping doors open: The effect of unavailability on incentives to keep
options viable. Management Science, 50, 575-586. An experiment on diversification bias First, pick a door. Shin, J. (MIT) & Ariely, D. (MIT), 2004, Keeping doors open: The effect of unavailability on
incentives to keep options viable. Management Science, 50, 575-586. An experiment on diversification bias Then click on the payoff box for some unknown amount (avg. 3¢ per click). $ An experiment on
diversification bias Then click on the payoff box for some unknown amount (avg. 3¢ per click). 50 clicks total. Earn as much money as possible. $ 1¢ 2¢ 4¢ 5¢ An experiment on diversification bias Can
continue to click on the payoff button. Or can click to switch doors. But, switching uses up one of your 50 clicks. $ 1¢ 2¢ 4¢ 5¢ An experiment on diversification bias All doors have the same average
value (3¢). What is the best strategy? $ 1¢ 2¢ 4¢ 5¢ If all doors have the same average value (3¢), the best strategy is? Never switch doors because switching uses a click Use 1/3 of clicks on red
door, 1/3 on blue, 1/3 on green Use ½ of clicks on one door and ½ on another door Switch doors on every other click Switch doors randomly $ An experiment on diversification bias Best strategy: Pick one
door and keep clicking. Never switch! $ 1¢ 2¢ 4¢ 5¢ An experiment on diversification bias Participants explicitly told: These doors all have the same average payoff. Did they switch doors during the
game? $ 1¢ 2¢ 4¢ 5¢ An experiment on diversification bias Participants explicitly told: These doors all have the same average payoff. The average number of switches: about 1. $ 1¢ 2¢ 4¢ 5¢ An
experiment on diversification bias New twist. Each time a door is clicked, the others shrink 1/15th. At the 15th time without being clicked they disappear. $ 1¢ 2¢ 4¢ 5¢ An experiment on
diversification bias All doors still have same average payout. Does the best strategy change? $ 1¢ 2¢ 4¢ 5¢ If all doors have the same average value, but unclicked doors eventually disappear, the
best strategy is? Never switch doors because switching uses a click Use ? of clicks on red door, ? on blue, ? on green Use ½ of clicks on one door and ½ on another door Switch doors on every other
click Switch doors randomly $ An experiment on diversification bias Participants explicitly told: These doors all have the same average payoff. Did they switch doors during the game with disappearing
doors? $ 1¢ 2¢ 4¢ 5¢ An experiment on diversification bias With the risk of door disappearance the average number of door switches changes from 1 to almost 7! $ 1¢ 2¢ 4¢ 5¢ An experiment on
diversification bias People can?t stand to let the option disappear, even if it they know there is no advantage! $ 1¢ 2¢ 4¢ 5¢ An experiment on diversification bias Similar results? if switching
costs a click and 3¢. if you could make the door come back. if the disappearing doors have a lower payoff. $ 1¢ 2¢ 4¢ 5¢ Can diversity bias (the irrational desire to avoid losing options) apply to
dating? Prof. Dan Ariely?s comments http://www.youtube.com/watch?v=RpvpCLI5wxE Discussion Working in groups of 2-5, answer this: When can an irrational desire to keep options open be detrimental to a
person?s future? Careers? College major? Athletics? Relationships? Addiction? Other examples? Sometimes focus (eliminating other options) leads to a better set of new options. Another Experiment
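The arithmetic behind the door game is simple enough to check directly. A minimal sketch (Python, with the click count and average payoff taken from the slides above): every door pays 3¢ per click on average, so each switch just burns one click's worth of expected earnings.

```python
CLICKS = 50       # total clicks in the game
AVG_PAYOFF = 3    # cents per payoff click, the same for every door

def expected_earnings(switches):
    # Each switch consumes a click that would otherwise earn ~3 cents,
    # so expected earnings fall linearly in the number of switches.
    return (CLICKS - switches) * AVG_PAYOFF

for s in (0, 1, 7):
    print(s, "switches ->", expected_earnings(s), "cents expected")
```

With 0 switches the expected take is 150¢; the roughly 1 switch observed in the baseline game costs about 3¢, and the roughly 7 switches in the disappearing-doors condition cost about 21¢, all for no gain.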
Another experiment: Students in class were given the option of snacks at the end of class each week: Snickers, Oreos, chocolate with almonds, tortilla chips, peanuts, and cheese-peanut butter crackers. Group 1: What would you like right now? (Asked each week for three weeks.) Group 2: Asked to select choices for the following three weeks in advance. Read, D. (Carnegie Mellon) & Loewenstein, G. (Carnegie Mellon), 1995, Diversification bias: Explaining the discrepancy in variety seeking between combined and separated choices. Journal of Experimental Psychology: Applied, 1, 1, 34-49.

What do you think: who was more likely to select three different snacks for the three different weeks? a) Group 1, b) Group 2, c) They were about the same. People plan more future variety than they will want. Percentage choosing three different snacks: Group 1 (asked each week), 8%; Group 2 (choosing all weeks in advance), 45%. Percentage always choosing the same snack: Group 1, 46%; Group 2, 18%.

Irrational diversification? Suppose there are three types of balls: red, blue, and yellow. One color will be randomly picked as the winning color. You get to draw one ball out of a jug. Is one color more likely to win than the others? No. The winning color is drawn at random. Since all colors are equally likely to win, does it matter what variety of colors are in your jug? No. But wouldn't you "feel" better if you had all the colors in your jug instead of just one? This feeling may be an example of irrational diversification bias.

Experiment: Either red, blue, or yellow will be randomly picked as the winning color. You get to draw one ball out of a jug. If you select the winning color, you receive $30. Your jug has three red balls. You can pay $1 to replace one red ball with another color, or pay $2 to replace two red balls with a blue and a yellow. Since all colors are equally likely to win, does it matter what variety of colors are in your jug? No.
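That "No" can be verified mechanically. A small sketch (Python; the jug compositions are the ones from the experiment): whatever the jug holds, the win probability is the average, over the three equally likely winning colors, of that color's share of the jug, which always comes to 1/3.

```python
from fractions import Fraction

COLORS = ("red", "blue", "yellow")

def win_probability(jug):
    # One of the three colors is picked uniformly at random as the winner,
    # then one ball is drawn uniformly from the jug.
    n = len(jug)
    return sum(Fraction(1, 3) * Fraction(jug.count(c), n) for c in COLORS)

print(win_probability(["red", "red", "red"]))      # 1/3
print(win_probability(["red", "blue", "yellow"]))  # 1/3
```

So paying $1 or $2 to diversify the jug changes nothing except your expected loss.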
Experiment (continued): Either red, blue, or yellow will be randomly picked as the winning color, and you draw one ball out of a jug. You can pay $1 to replace one red ball with another color, or $2 to replace two red balls with a blue and a yellow. In trials of this experiment, what percentage of people pay for this pointless diversification? 61%: 30% paid $2 and 31% paid $1; 39% paid $0. K. Eliaz (Brown) & G. Frechette (NYU), 2008, "Don't put all your eggs in one basket!": An experimental study of false diversification. Brown University Economics Department Working Paper.

Irrational variety? You are trying to guess the color outcomes of a roulette wheel spin where 60% of the slots are red and 40% are black. To maximize your payoff, what color should you pick? Red. What color should you pick on the second spin? Red. And on the third spin? Red. If you are guessing the first five color outcomes, what five colors should you pick? Red, Red, Red, Red, Red. Let's look at some experiments similar to guessing these roulette colors. Will people's irrational love of diversification cause them to choose the lower probability (i.e., the black slots)?
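The roulette point, often described as maximizing versus "probability matching", can be quantified. A sketch in Python (assuming independent spins with a 60% red share, as in the example above):

```python
P_RED = 0.6   # share of red slots; black is 1 - P_RED
N = 5         # number of guesses

# Always guessing red: each guess is correct with probability 0.6.
always_red = N * P_RED

# "Probability matching": guess red on 60% of spins, black on 40%.
# A guess is then correct with probability 0.6*0.6 + 0.4*0.4 = 0.52.
matching = N * (P_RED**2 + (1 - P_RED)**2)

print(round(always_red, 2), round(matching, 2))  # 3.0 2.6
```

Picking red every time wins 3 of 5 guesses in expectation; matching the 60/40 frequencies drops that to 2.6.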
Cards experiment: Five cards are chosen randomly from a deck of 100 colored cards: 36 Green, 25 Blue, 22 Yellow, and 17 Brown. Choose, in advance, the predicted color of each of the first five cards. You receive a prize for each correct prediction. What is the best prediction? 5 Greens; 5 Blues; 2 Greens, 1 Blue, 1 Yellow, 1 Brown; 2 Greens, 2 Blues, 1 Yellow; or 3 Greens, 2 Blues? The best prediction is 5 Greens, the single most likely color every time. A. Rubinstein (Princeton), 2002, Irrational diversification in multiple decision problems. European Economic Review, 46, 1369-1378.

In a test of 74 college students, what percentage selected all greens? 38%. In a replication test of 50 college students, what percentage selected all greens? 42%. How many of each color did most of the other students select? 2 Greens, 1 Blue, 1 Yellow, 1 Brown. Is this an irrational preference for diversification?

Cards experiment, replication 2: In a later replication test of 46 college students in intro to statistics, what percentage selected all greens? 15%. What percentage selected 2 greens, 1 brown, 1 yellow, and 1 blue (maximum diversification)? 61%. C. Kogler (U. Salzburg) & A. Kuhberger (U. Salzburg), 2007, Dual process theories: A key for understanding the diversification bias? Journal of Risk Uncertainty, 34, 145-154.

Cards experiment, dual-self replication: The professors believed that by intentionally engaging the "intentional, analytic, rational" side of the two-system self, they could improve these results. They tested a second group after waking up this system by calling it a "statistical test" meant to measure "statistical competence" and encouraging students to do their best. What happened? Rational, focused choice (all Greens): 15% in the normal group vs. 43% after "waking up" System 2. Irrational, fully diversified choice (2 Greens, 1 Brown, 1 Yellow, 1 Blue): 61% vs. 37%.

Majors experiment: More irrational diversity? Five students are selected from the university at random. Guess the major of each student. Each correct guess enters you in a drawing for a prize. What is the best strategy? Guess the most likely major and then select that major five times. In a similar 1999 experiment with students in an economic game theory class, what percentage selected the same major for all five guesses? 7%. A. Rubinstein (Princeton), 2002, Irrational diversification in multiple decision problems. European Economic Review, 46, 1369-1378.

Mall experiment: More irrational diversity? You are a police officer trying to find a person. He is about to enter a mall through one of four gates. The share of people using each gate is 21%, 32%, 27%, and 20%. You can assign only one officer to one gate, based on the spin of a roulette wheel. Assign the spaces on the wheel: Blue __%, Green __%, Red __%, Yellow __%. What is the correct answer? 100% to Red: put the whole wheel on the single busiest gate. What percentage of students gave the right answer? 33%. Everyone else diversified. A. Rubinstein (Princeton), 2002, Irrational diversification in multiple decision problems. European Economic Review, 46, 1369-1378.

Diversification bias: The fear of focus. We hate losing options, even when they are bad ones. We love diversification, even when it is pointless and costly. We avoid focusing, even when it is the only correct choice. But controlling your decision environment means focusing your choices and sometimes eliminating future options.
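The card-deck and mall experiments above reduce to the same expected-value computation: the expected number of correct predictions is just the sum of the probabilities of the colors you pick (linearity of expectation, so this holds even though the five cards are drawn without replacement). A sketch in Python using the deck shares from the Rubinstein cards experiment:

```python
# Color shares in the 100-card deck from the cards experiment above.
P = {"green": 0.36, "blue": 0.25, "yellow": 0.22, "brown": 0.17}

def expected_correct(picks):
    # Expected number of correct predictions for a list of color guesses.
    return sum(P[c] for c in picks)

focused = expected_correct(["green"] * 5)
diversified = expected_correct(["green", "green", "blue", "yellow", "brown"])
print(round(focused, 2), round(diversified, 2))  # 1.8 1.36
```

The all-green prediction expects 1.8 correct out of 5; the popular maximally diversified prediction expects only 1.36. The mall problem is the same maximization: putting 100% of the wheel on the single busiest gate dominates any mixture.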
smooth {stats}
Tukey's (Running Median) Smoothing
Description

Tukey's smoothers, 3RS3R, 3RSS, 3R, etc.

Usage

smooth(x, kind = c("3RS3R", "3RSS", "3RSR", "3R", "3", "S"),
       twiceit = FALSE, endrule = "Tukey", do.ends = FALSE)

Arguments

x: a vector or time series.

kind: a character string indicating the kind of smoother required; defaults to "3RS3R".

twiceit: logical, indicating if the result should be ‘twiced’. Twicing a smoother S(y) means S(y) + S(y - S(y)), i.e., adding smoothed residuals to the smoothed values. This decreases bias (increasing variance).

endrule: a character string indicating the rule for smoothing at the boundary. Either "Tukey" (default) or "copy".

do.ends: logical, indicating if the 3-splitting of ties should also happen at the boundaries (ends). This is only used for kind = "S".
Details

3 is Tukey's short notation for running medians of length 3,
3R stands for Repeated 3 until convergence, and
S for Splitting of horizontal stretches of length 2 or 3.

Hence, 3RS3R is a concatenation of 3R, S and 3R; 3RSS similarly; whereas 3RSR means first 3R and then (S and 3) Repeated until convergence -- which can be bad.
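As an illustration of this notation, here is a minimal sketch in Python (not R's implementation; for simplicity it copies the endpoints, i.e., behaves like endrule = "copy" rather than the default Tukey end rule):

```python
def med3(a, b, c):
    # Median of three values.
    return sorted((a, b, c))[1]

def smooth3(x):
    # One pass of running medians of length 3 ("3"); endpoints are copied.
    y = list(x)
    for i in range(1, len(x) - 1):
        y[i] = med3(x[i - 1], x[i], x[i + 1])
    return y

def smooth3R(x):
    # "3R": repeat "3" until the result stops changing.
    prev, cur = list(x), smooth3(x)
    while cur != prev:
        prev, cur = cur, smooth3(cur)
    return cur

def twice(smoother, x):
    # Twicing: S(y) + S(y - S(y)), i.e. smooth the residuals and add them back.
    s = smoother(x)
    r = smoother([xi - si for xi, si in zip(x, s)])
    return [si + ri for si, ri in zip(s, r)]

x1 = [4, 1, 3, 6, 6, 4, 1, 6, 2, 4, 2]  # the x1 from the examples below
print(smooth3R(x1))  # [4, 3, 3, 6, 6, 4, 4, 4, 2, 2, 2]
```

The run converges after two passes of "3", matching the `# 2 iterations of "3"` comment in the examples; because of the simplified end rule, the boundary values need not match R's output.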
Value

An object of class "tukeysmooth" (which has print and summary methods) and is a vector or time series containing the smoothed values with additional attributes.
References

Tukey, J. W. (1977). Exploratory Data Analysis. Reading, Massachusetts: Addison-Wesley.
Note

S and S-PLUS use a different (somewhat better) Tukey smoother in smooth(*). Note that there are other smoothing methods which provide rather better results. These were designed for hand calculations and may be used mainly for didactical purposes.

Since R version 1.2, smooth does really implement Tukey's end-point rule correctly (see argument endrule).

kind = "3RSR" has been the default till R-1.1, but it can have very bad properties, see the examples.

Note that repeated application of smooth(*) does smooth more, for the "3RS*" kinds.
Examples

## see also demo(smooth) !
x1 <- c(4, 1, 3, 6, 6, 4, 1, 6, 2, 4, 2) # very artificial
(x3R <- smooth(x1, "3R")) # 2 iterations of "3"
smooth(x3R, kind = "S")
sm.3RS <- function(x, ...)
smooth(smooth(x, "3R", ...), "S", ...)
y <- c(1, 1, 19:1)
plot(y, main = "misbehaviour of \"3RSR\"", col.main = 3)
lines(smooth(y, "3RSR"), col = 3, lwd = 2) # the horror
x <- c(8:10, 10, 0, 0, 9, 9)
plot(x, main = "breakdown of 3R and S and hence 3RSS")
matlines(cbind(smooth(x, "3R"), smooth(x, "S"), smooth(x, "3RSS"), smooth(x)))
presidents[is.na(presidents)] <- 0 # silly
summary(sm3 <- smooth(presidents, "3R"))
summary(sm2 <- smooth(presidents,"3RSS"))
summary(sm <- smooth(presidents))
all.equal(c(sm2), c(smooth(smooth(sm3, "S"), "S"))) # 3RSS === 3R S S
all.equal(c(sm), c(smooth(smooth(sm3, "S"), "3R"))) # 3RS3R === 3R S 3R
plot(presidents, main = "smooth(presidents0, *) : 3R and default 3RS3R")
lines(sm3, col = 3, lwd = 1.5)
lines(sm, col = 2, lwd = 1.25)
Documentation reproduced from R 3.0.2. License: GPL-2.
Mathematics Courses
MATH 099 Basic Mathematics
Basic Mathematics: This is a developmental mathematics course including topics generally found in high school algebra II. Credit for this course may not be used toward graduation. 3 hrs.
MATH 104 Mathematical Ideas
Mathematical Ideas: The course will explore topics selected from number theory, probability theory, topology and set theory. In addition, such areas as logic, modern geometries, chaos theory and
fractals may be addressed. Throughout, unifying concepts of investigation, conjecture, counter examples and applications will be stressed. Note: This course does not prepare the student for
subsequent enrollment in Introductory Statistics or Calculus. 3 hrs.
MATH 105 Finite Mathematics
Finite Mathematics: Systems of equations, matrices, coordinate systems and graphs, and an introduction to linear programming, stochastic matrices, the mathematics of finance, difference equations and
mathematical models. Topics are applied to illustrate the uses of finite mathematics in other disciplines. MATH 105 and MATH 110 may not both be counted towards graduation credit. 3 hrs.
MATH 106 Introductory Statistics
Introductory Statistics: An introduction to data analysis and statistical concepts. Interpretation and calculations for description of single variables and simple regression, basic probability,
random variables, confidence intervals and tests of hypotheses. Students may be required to have access to a TI-83 or TI-83 Plus calculator or be required to work with statistics software on the
computer. 3 hrs.
MATH 110 Quantitative Business Methods
Quantitative Business Methods: A study of several key topics in mathematics that are applicable to other fields of study. Topics include linear and nonlinear functions, rates of change, mathematical
systems, and finding maxima and minima. Emphasis will be placed on critical thinking and problem solving through the study of hypothetical cases and real-world problems. Math 110 and Math 105 may not
both be counted toward graduation credit. 3 hrs.
MATH 111 Precalculus
Precalculus: Topics will include functions and their graphs, exponential and logarithmic functions,
trigonometric functions and equations, the theory of polynomials, systems of equations, sequences and
analytic geometry. 3 hrs.
MATH 121 Calculus I
Calculus I: An integrated treatment of analytic geometry and limits, continuity of functions, differentiation of algebraic and trigonometric functions, applications of derivatives to extreme values
and curve sketching, the fundamental theorems of integral calculus, common integration techniques and applications. Prerequisite: MATH 111 or consent of the mathematics department. 4 hrs.
MATH 122 Calculus II
Calculus II: The calculus of transcendental functions, indeterminate forms and l'Hopital's rule, improper
integrals, infinite sequences and series, further techniques and applications of integration, inverse trigonometric functions, polar coordinates and plane curves. Students will be required to have
access to a TI-83 or TI-83 Plus calculator for this course. Prerequisite: MATH 121. 4 hrs.
MATH 209 Discrete Mathematics
Discrete Mathematics: An introductory treatment of relations, functions, set theory, Boolean algebra, logic and methods of proof, the binomial theorem, combinatorics, elements of number theory and
recursion. 3 hrs.
MATH 220 Calculus II
Calculus II: The calculus of transcendental functions, indeterminate forms and l'Hopital's rule, improper
integrals, infinite sequences and series, further techniques and applications of integration, inverse
trigonometric functions, polar coordinates and plane curves. Students will be required to have access to a TI-83 or TI-83 Plus calculator for this course. Prerequisite: MATH 121. 4 hrs.
MATH 221 Calculus III
Calculus III: Vector-valued functions, multi-variable functions and their derivatives, multiple integral, alternate coordinate systems, and the theorems of Green and Stokes. Students will be required
to have access to a TI-83 or a TI-83 Plus calculator for this course. Prerequisite: MATH 122. 3 hrs.
MATH 303 Linear Algebra
Linear Algebra: Coordinate geometry, vectors and three-dimensional geometry, matrices and determinants, linear transformations. Prerequisite: MATH 121. 3 hrs.
MATH 304 Geometry
Geometry: Modern approach to the study of Euclidean and non-Euclidean geometries, including incidence and affine geometries; designed especially for students preparing to teach. Prerequisite: MATH
121. 3 hrs.
MATH 311 Differential Equations
Differential Equations: This course is designed to introduce the student to differential equations and their applications. In addition to studying the theoretical aspects of differential equations,
students will also examine numerical methods and solutions with the use of the existing computer facilities and software. Prerequisite: MATH 220. 3 hrs.
MATH 320 Probability & Statistics
Probability and Statistics: A first course in both probability and statistics. Topics include a rigorous mathematical approach to probability laws, conditional probability, the concept of
independence, random variables, discrete and continuous distributions, sampling, statistical inference, central limit theorem, and hypothesis testing. Practical application will be explored using the
SPSS computer statistics package. Prerequisites: MATH 220. 3 hrs.
MATH 380 Mathematical Finance
Mathematical Finance: A mathematical approach to the theory of finance including theory of interest, risk and return, insurance models, valuing investments and financial derivatives. Prerequisites:
MATH 220. 3 hrs.
MATH 401 Modern Algebra I
Modern Algebra I: Introduction to abstract algebra. Topics include basic properties of integers, permutation groups, subgroups, quotient groups, group isomorphisms and homomorphism, rings and ideals.
This course fulfills the writing-intensive course requirement. Prerequisite: MATH 303. 4 hrs.
MATH 410 Mathematical Probability
Mathematical Probability: This course investigates both the mathematical theory behind the study of probability as well as its use in modeling stochastic behavior. Topics include moment generating
functions, expectation, stochastic processes, multivariate distributions, variance and covariance. Prerequisites: MATH 221, MATH 320. 3 hrs.
MATH 415 Intro to Real Variable Theory
Introduction to Real Variable Theory: Careful axiomatic development of the real number system, limits of functions and real sequences, continuity, theory of differentiation and integration.
Prerequisite: MATH 221. 3 hrs.
MATH 417 Introduction to Complex Analysis
Introduction to Complex Analysis: Development of the theory of complex numbers and functions, power series, Cauchy's theorem and applications, harmonic functions and conformal mapping. Prerequisite:
MATH 221. 3 hrs.
MATH 420 Mathematical Statistics
Mathematical Statistics: This course investigates both the mathematical theory behind the study of statistics as well as its use as a tool of inference. Topics include curve fitting, multiple
regression, ANOVA, inference tests, properties of estimators and further exploration of the SPSS statistics package. Prerequisites: MATH 221, MATH 320. 3 hrs.
MATH 430 Topics in Mathematics
Topics in Mathematics: This course provides the opportunity for a faculty member and a group of interested students to study a subject that is not offered on a regular basis in the curriculum. Topics
are announced annually. May be repeated for credit under different subtitles. May vary with topic. Lecture hours vary with the hours credit and the course taught. 1-4 hrs.
MATH 450 Independent Study
Independent Study: MATH 450 Independent Studies in Mathematics Research projects in the area of the
student's interest; written report will be required. May be repeated for credit. Prerequisite: Open to mathematics majors with at least junior standing. 1-3 hrs.
MATH 490 Actuarial Science
Actuarial Science: This course connects the topics studied in other courses to the discipline of actuarial
science by studying various applications. In addition, the course will help to prepare the student for the entry-level actuarial science exam. Prerequisites: MATH 410, MATH 420. 3 hrs.
MATH 496 Senior Thesis Preparation
Senior Thesis Preparation: Each student will be assigned an advisor from the mathematics faculty, with whose guidance the student will read in a mathematical area and select the topic for the paper
or project to be completed in the following semester. A grade of K will be assigned for MATH 496 until MATH 497 has been completed. Prerequisite: Junior or senior standing. 1 hr.
MATH 497 Senior Thesis
Senior Thesis: Each student will complete the project for which preparatory work was done during the previous semester. The project will be presented to an audience of peers and instructors and a
written version of the paper may be housed in the library. Prerequisite: MATH 496. 2 hrs.
search results
Results 1 - 3 of 3
1. CJM 2012 (vol 66 pp. 102)
Continuity of convolution of test functions on Lie groups
For a Lie group $G$, we show that the map $C^\infty_c(G)\times C^\infty_c(G)\to C^\infty_c(G)$, $(\gamma,\eta)\mapsto \gamma*\eta$ taking a pair of test functions to their convolution is continuous
if and only if $G$ is $\sigma$-compact. More generally, consider $r,s,t \in \mathbb{N}_0\cup\{\infty\}$ with $t\leq r+s$, locally convex spaces $E_1$, $E_2$ and a continuous bilinear map $b\colon
E_1\times E_2\to F$ to a complete locally convex space $F$. Let $\beta\colon C^r_c(G,E_1)\times C^s_c(G,E_2)\to C^t_c(G,F)$, $(\gamma,\eta)\mapsto \gamma *_b\eta$ be the associated convolution map.
The main result is a characterization of those $(G,r,s,t,b)$ for which $\beta$ is continuous. Convolution of compactly supported continuous functions on a locally compact group is also discussed,
as well as convolution of compactly supported $L^1$-functions and convolution of compactly supported Radon measures.
Keywords: Lie group, locally compact group, smooth function, compact support, test function, second countability, countable basis, sigma-compactness, convolution, continuity, seminorm, product
Categories: 22E30, 46F05, 22D15, 42A85, 43A10, 43A15, 46A03, 46A13, 46E25
2. CJM 2012 (vol 65 pp. 1043)
Convolution of Trace Class Operators over Locally Compact Quantum Groups
We study locally compact quantum groups $\mathbb{G}$ through the convolution algebras $L_1(\mathbb{G})$ and $(T(L_2(\mathbb{G})), \triangleright)$. We prove that the reduced quantum group $C^*$-algebra $C_0(\mathbb{G})$ can be recovered from the convolution $\triangleright$ by showing that the right $T(L_2(\mathbb{G}))$-module $\langle K(L_2(\mathbb{G})) \triangleright T(L_2(\mathbb{G}))\rangle$ is equal to $C_0(\mathbb{G})$. On the other hand, we show that the left $T(L_2(\mathbb{G}))$-module $\langle T(L_2(\mathbb{G}))\triangleright K(L_2(\mathbb{G}))\rangle$ is isomorphic to the reduced crossed product $C_0(\widehat{\mathbb{G}}) \,_r\!\ltimes C_0(\mathbb{G})$, and hence is a much larger $C^*$-subalgebra of $B(L_2(\mathbb{G}))$. We establish a natural isomorphism between the completely bounded right multiplier algebras of $L_1(\mathbb{G})$ and $(T(L_2(\mathbb{G})), \triangleright)$, and settle two invariance problems associated with the representation theorem of Junge-Neufang-Ruan (2009). We characterize regularity and discreteness of the quantum group $\mathbb{G}$ in terms of continuity properties of the convolution $\triangleright$ on $T(L_2(\mathbb{G}))$. We prove that if $\mathbb{G}$ is semi-regular, then the space $\langle T(L_2(\mathbb{G}))\triangleright B(L_2(\mathbb{G}))\rangle$ of right $\mathbb{G}$-continuous operators on $L_2(\mathbb{G})$, which was introduced by Bekka (1990) for $L_{\infty}(G)$, is a unital $C^*$-subalgebra of $B(L_2(\mathbb{G}))$. In the representation framework formulated by Neufang-Ruan-Spronk (2008) and Junge-Neufang-Ruan, we show that the dual properties of compactness and discreteness can be characterized simultaneously via automatic normality of quantum group bimodule maps on $B(L_2(\mathbb{G}))$. We also characterize some commutation relations of completely bounded multipliers of $(T(L_2(\mathbb{G})), \triangleright)$ over $B(L_2(\mathbb{G}))$.
Keywords:locally compact quantum groups and associated Banach algebras
Categories:22D15, 43A30, 46H05
3. CJM 1997 (vol 49 pp. 1117)
The von Neumann algebra $\VN(G)$ of a locally compact group and quotients of its subspaces
Let $\VN(G)$ be the von Neumann algebra of a locally compact group $G$. We denote by $\mu$ the initial ordinal with $\abs{\mu}$ equal to the smallest cardinality of an open basis at the unit of $G$
and $X= \{\alpha; \alpha < \mu \}$. We show that if $G$ is nondiscrete then there exist an isometric $*$-isomorphism $\kappa$ of $l^{\infty}(X)$ into $\VN(G)$ and a positive linear mapping $\pi$ of
$\VN(G)$ onto $l^{\infty}(X)$ such that $\pi\circ\kappa = \id_{l^{\infty}(X)}$ and $\kappa$ and $\pi$ have certain additional properties. Let $\UCB (\hat{G})$ be the $C^{*}$-algebra generated by
operators in $\VN(G)$ with compact support and $F(\hat{G})$ the space of all $T \in \VN(G)$ such that all topologically invariant means on $\VN(G)$ attain the same value at $T$. The construction of
the mapping $\pi$ leads to the conclusion that the quotient space $\UCB (\hat{G})/F(\hat{G})\cap \UCB(\hat{G})$ has $l^{\infty}(X)$ as a continuous linear image if $G$ is nondiscrete. When $G$ is
further assumed to be non-metrizable, it is shown that $\UCB(\hat{G})/F (\hat{G})\cap \UCB(\hat{G})$ contains a linear isomorphic copy of $l^{\infty}(X)$. Similar results are also obtained for
other quotient spaces.
Categories:22D25, 43A22, 43A30, 22D15, 43A07, 47D35
Yahoo Groups
Re: [loopantennas] fs loop article
Litz wire reduces the impact of the skin effect and the proximity effect. So for a practical explanation: I have just wound an 18-turn coil with standard solid 18-gauge wire, close wound, on a 7.5"
FSL, and I find the tuning range (using a 10 to 381 pF variable capacitor) is 1200 to 400 kHz. Now I rewind an 18-turn coil, close wound, on the same 7.5" FSL using 660/46 Litz wire, and I now find
the tuning range is 1850 to 405 kHz. So my question to you is: if it is not higher distributed capacitance in the solid wire compared with the Litz wire, then what is causing the
frequency at the high end of the band to be lower, giving less tuning range? This is not theory; this is what I see from a practical standpoint. I am sure that if you want the theory behind
it you can find it somewhere here on the web. (I am not skeptical, I am a believer.)
In a message dated 1/5/2013 11:25:00 P.M. Central Standard Time, jpopelish@... writes:
On 01/05/2013 09:15 PM, everettsharp@... wrote:
> Hi John,
> It probably does not mean anything if you have enough space between the
> turns, but when you have the windings next to each other, then distributed
> capacitance does become a major factor. Litz wire does have less distributed
> capacitance when the windings are butted up next to each other than does
> standard wire.
I didn't know that. Is this a fact you have measured, or a
claim you have read, somewhere? (I'm sceptical.)
John Popelish
On 01/06/2013 01:08 PM, everettsharp@... wrote:
> jpopelish@... writes:
>> everettsharp@... (mailto:everettsharp@...) wrote:
>>> Litz wire reduces the impact of the skin effect and
>>> the proximity effect. So for a practical explanation
>>> I have just wound a 18 turn coil with standard solid
>>> 18 gage wire, close wound, on a 7.5" FSL and I find
>>> the tuning range (using a 10 to 381 pf variable
>>> capacitor) is 1200 to 400 KHz. Now I rewind a 18 turn
>>> coil, close wound, on the same 7.5" FSL using 660/46
>>> Litz wire, and I now find the tuning range is 1850
>>> to 405 KHz. So my question to you is if it is not
>>> higher distributed capacitance using the solid wire
>>> over that of the Litz wire, then what is it that is
>>> causing the frequency, at the high end of the band
>>> to be lower and having less tuning range? This is
>>> not theory, this is what I see from a practical stand
>>> point. I am sure that if you want the theory behind
>>> it you can find it somewhere here on the web. (I am
>>> not skeptical, I am a believer)
>> One more question. Was the length (across the turns)
>> of the 18 turn Litz coil the same as the length of the
>> 18 turn solid wire coil? In other words, was the 18
>> gauge solid wire the same width as the (as wound) Litz
>> wire?
> I don't know what the length was of the two wires, as I
> did not measure them. However, the cross section area
> of 660/46 Litz wire is nearly the same as solid 18 gage
> wire.
Here is what I am thinking about, with regard to comparing
Litz to solid (or any other two choices of winding conductor).
As long as the resonant circuit formed by the loop's
inductance, its own stray capacitance, and a tuning capacitor
allows some actual math solutions, then if two tuning cap
values and two resonant frequencies are available from an
experiment, it should be possible to calculate the actual
coil inductance and the effective, parallel, stray
capacitance. The additional requirement is that either the
resonant Q is about 10 or higher, so the losses do not
appreciably affect the resonant frequencies, or the effective
losses must be calculable for the loop at the two resonant
frequencies.
The point being, that with a given coil, and two resonant
frequencies, with two values of tuning capacitance, it
should be possible to calculate the loop inductance and loop
stray capacitance, by solving a pair of simultaneous
equations for each coil.
So, with these two resonant frequencies per coil, at two
values of tuning capacitance, we should be able to actually
solve for any coil's actual inductance and stray capacitance.
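John's pair of simultaneous equations can be sketched directly. With f = 1/(2π√(L(C_tune + C_stray))), the quantity 1/(2πf)² is linear in C_tune, so two (capacitance, frequency) measurements on one coil determine L and C_stray. A minimal Python sketch of that algebra (the function name and encoding are mine; the example numbers are just the Litz-coil tuning range quoted above, and losses are ignored, i.e. the Q ≳ 10 assumption John mentions):

```python
import math

def solve_loop(f1_hz, c1_f, f2_hz, c2_f):
    """Solve f = 1 / (2*pi*sqrt(L*(C_tune + C_stray))) for L and C_stray,
    given two resonant frequencies measured at two known tuning capacitances."""
    # y = 1/(2*pi*f)^2 = L*C_tune + L*C_stray  (linear in C_tune)
    y1 = 1.0 / (2 * math.pi * f1_hz) ** 2
    y2 = 1.0 / (2 * math.pi * f2_hz) ** 2
    inductance = (y1 - y2) / (c1_f - c2_f)   # henries
    c_stray = y1 / inductance - c1_f         # farads
    return inductance, c_stray

# Illustrative numbers only: 381 pF resonates at 405 kHz, 10 pF at 1850 kHz
L, cs = solve_loop(405e3, 381e-12, 1850e3, 10e-12)
print(f"L = {L * 1e6:.0f} uH, C_stray = {cs * 1e12:.1f} pF")
```

The same two-point measurement, repeated for each candidate coil, gives the comparison John describes without ever measuring the wire itself.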
This technique should be handy to compare any two different
construction choices (diameter, turn spacing, conductor
strands, wire insulation, form material, etc.) to work
toward an optimum (widest tuning range per dollar, highest Q
per dollar, or whatever figure of merit you choose) design
for any available coil conductor material and construction.
This solution rapidly takes coil design optimization out of
the opinion phase and moves it into experimental science.
John Popelish
Proposition 74
If from a medial straight line there is subtracted a medial straight line which is commensurable with the whole in square only and which contains with the whole a rational rectangle, then the
remainder is irrational; let it be called first apotome of a medial straight line.
From the medial straight line AB let there be subtracted the medial straight line BC which is commensurable with AB in square only and with AB makes the rectangle AB by BC rational.
I say that the remainder AC is irrational, and let it be called a first apotome of a medial straight line.
Since AB and BC are medial, the squares on AB and BC are also medial. But twice the rectangle AB by BC is rational, therefore the sum of the squares on AB and BC is incommensurable with twice the
rectangle AB by BC.
Therefore twice the rectangle AB by BC is also incommensurable with the remainder, the square on AC, since, if the whole is incommensurable with one of the magnitudes, then the original magnitudes
are also incommensurable.
But twice the rectangle AB by BC is rational, therefore the square on AC is irrational, therefore AC is irrational. Let it be called a first apotome of a medial straight line.
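In modern notation, the heart of the argument runs as follows (a paraphrase; Euclid of course works purely geometrically):

```latex
% AB, BC medial, commensurable in square only, with AB.BC rational.
\[
AC^2 = (AB - BC)^2
     = \underbrace{AB^2 + BC^2}_{\text{a medial area}}
       \;-\; \underbrace{2\,AB\cdot BC}_{\text{rational}} .
\]
% A medial area is incommensurable with any rational area, so the
% difference AC^2 is incommensurable with the rational area 2 AB.BC,
% hence AC^2 is irrational, and therefore so is AC.
```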
Constructing a ring from an abelian group in a minimal way
I'm looking for a specific construction, taking an abelian group (with designated element) $(G,+,1)$ to a commutative ring $(R,+,\cdot,1)$, where $G\subset R$ as a pointed abelian group, and
which is universal in the following sense: for any commutative ring $(S,+,\cdot,1)$ and any map $f:G\rightarrow S$ preserving $+$ and $1$, there is an extension $g:R\rightarrow S$.
The idea was to give a general construction of rings from pointed abelian groups, which in particular constructs $(\mathbb Z, +, \cdot, 1)$ from $(\mathbb Z, +, 1)$. So, while it may add new elements
in some cases, it is not adding more than necessary.
The reason for adding the requirement that the groups be pointed is that there are conceivably many choices for $1$ (especially in, say, $(\mathbb Q, +)$, where every nonzero element is equally valid a choice).
ra.rings-and-algebras ct.category-theory reference-request
Am I correct in understanding that you want to specify an element of the abelian group to be the multiplicative inverse? Otherwise I think you are looking for the right adjoint to the forgetful
functor from a ring to its underlying group. – B. Bischof Nov 6 '11 at 17:42
I don't want multiplicative inverses in general, but am looking for a sort of universal object, in that, for any abelian group A, there is a ring R, such that if A embeds into a ring S, then that
embedding extends to an embedding of R. I'm not sure how the right adjoint to the forgetful functor gives this, though I may be missing something obvious (if so, I apologize) – Richard Rast Nov 6
'11 at 18:17
3 The correct way to abstractly construct $\mathbb{Z}$ the ring from $\mathbb{Z}$ the additive group is as $\text{End}(\mathbb{Z})$. This construction is canonical although it is not functorial. –
Qiaochu Yuan Nov 6 '11 at 19:07
I think the question was a bit unclear, as written; I've rephrased it to more precisely (and concisely) get at what an answer would be. – Richard Rast Nov 6 '11 at 22:34
I meant identity, not inverse. – B. Bischof Nov 7 '11 at 3:47
2 Answers
Given an abelian group $A$ with a fixed element $e\in A$, you can construct the universal map $f$ from $A$ to a (commutative or noncommutative, as you prefer) ring $R=R(A,e)$ such that $f(e)$
is the unit element in $R$. Just take the symmetric algebra $S(A)$ (if you want a commutative ring) or the tensor algebra $T(A)$ (if you allow your ring to be noncommutative) of your
group $A$ considered as a module over $\mathbb Z$, and take its quotient by the ideal generated by the element $e-1$, where $e\in A$ is your given element and $1\in S(A)$ or $T(A)$ is
the unit element of the symmetric or tensor algebra over $\mathbb Z$, to obtain the ring $R$.
up vote 8 down vote accepted
The natural map $f\colon A\to R$ will not be in general injective, though (i.e., not every abelian group with a fixed element can be embedded into a ring so that the fixed element becomes the unit element). E.g., if $e=0$ in $A$, then $R(A,e)=0$. If $A=\mathbb Z/4\mathbb Z$ and $e=2 \bmod 4$, then $R(A,e)=0$. Still, when $A=\mathbb Z$ and $e=1$, you will get $R=\mathbb Z$ with $f\colon A\to R$ being the identity map, just as you wished.
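In symbols, the construction and its universal property described in this answer can be summarized as (my notation, not the answerer's):

```latex
% R(A,e): quotient of the symmetric algebra over Z by the ideal (e - 1)
\[
R(A,e) \;=\; S(A)\big/\langle e-1\rangle ,
\]
% Universal property: ring homomorphisms out of R(A,e) correspond to
% additive maps out of A that send e to the unit of the target ring
\[
\operatorname{Hom}_{\mathbf{CRing}}\bigl(R(A,e),\,S\bigr)\;\cong\;
\bigl\{\,f\in\operatorname{Hom}_{\mathbf{Ab}}(A,S)\;:\;f(e)=1_S\,\bigr\}.
\]
```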
This seems like a good best-possible answer, in the cases where it works (and as you noted, there are cases where it can't work). – Richard Rast Nov 6 '11 at 22:33
up vote 1 down vote
In other words, you ask: when is an Abelian group the additive group of a ring? This is a well-known problem. See
Fuchs, L. Infinite abelian groups. Vol. II. Academic Press, 1973, chapt. 17, "Additive groups of rings".
1 I don't think the map the OP is looking for is required to be an isomorphism, just injective. – Qiaochu Yuan Nov 6 '11 at 19:19
@Qiaochu Yuan: Sorry, I don't understand your comment. – Boris Novikov Nov 6 '11 at 20:16
@Boris: the OP wants to construct from an abelian group $A$ and an element $e \in A$ a ring $R$ together with an injection $f : A \to R$ of additive groups such that $e$ is sent to the
multiplicative identity of $R$, and furthermore such that $R$ is universal with this property. There's no reason $f$ needs to be an isomorphism. – Qiaochu Yuan Nov 6 '11 at 20:33
@Qiaochu Yuan: Richard Rast writes: "Such a construction would have to not add any new elements". So $A$ must to be the additive group of a ring, isn't it? – Boris Novikov Nov 6 '11 at
why do you require the map into the "ringified group" to be injective? – Arno Kret Nov 6 '11 at 22:12
Any good math websites that gives answers to problems? - Computers, Math, Science, and Technology
Homer_Bob Posted: Thu Feb 26, 2009 9:06 am Post subject: Any good math websites that gives answers to problems?
I wonder what's for dinner?
I'm someone who is horrible at math and can't do it to save my life. I'm getting desperate. I want to find a good math website that will give me all the answers to any math equations. I'm only doing algebra, so there should be sites out there. Does anyone know any good math websites that not only give all the answers to the problems but show ALL the work you have to do to solve them?
Joined: Jan 06, 2009
Age: 25
Posts: 1350
Location: New England
whitetiger Posted: Thu Feb 26, 2009 9:10 am Post subject:
Passionate Advocate
I've posted on here about this before, but I work for Brainfuse.com. You get an individual on-line tutor who can show you, step by step, how to do each algebra problem and let you know whether or not you got the correct answer. They won't do the work for you, though.
I'm not a math tutor. I work in the writing lab. Still, there are excellent, qualified algebra tutors that work with us.
Joined: Feb 04, 2009
Age: 45
Posts: 1702
Location: Oregon
Dussel Posted: Thu Feb 26, 2009 9:13 am Post subject:
Phoenix
Joined: Jan 20, 2009
Posts: 1788
Location: London (UK)
I would recommend instead buying a good mathematical handbook. My recommendation is Bronshtein, Handbook of Mathematics. You will never need to buy a general handbook of mathematics in your life again!
ruveyn Posted: Thu Feb 26, 2009 9:28 am Post subject: Re: Any good math websites that gives answers to problems?
Phoenix
Joined: Sep 22, 2008
Age: 77
Posts: 31331
Location: New Jersey
Homer_Bob wrote:
I'm someone who is horrible at math and can't do it to save my life. I'm getting desperate. I want to find a good math website that will give me all the answers to any Math equations. I'm only doing algebra so there should be sites out there. If anyone knows any good math websites that not only gives all the answers to the problems but shows ALL the work you have to do to solve them.
Buy or borrow one of the items in Schaum's Outline Series. They are inexpensive (15 dollar range) and they feature worked out problems. Just what you need.
Homer_Bob Posted: Thu Feb 26, 2009 9:44 am Post subject:
I wonder what's for dinner?
So there basically aren't any websites where I can simply type an equation in and it shows me all the work and the answers? Why can't it be that easy?
Joined: Jan 06, 2009
Age: 25
Posts: 1350
Location: New England
TokenX Posted: Thu Feb 26, 2009 12:30 pm Post subject:
Yellow-bellied
Joined: Feb 19, 2008
Age: 23
Posts: 59
Location: California
Homer_Bob wrote:
So there basically aren't any websites where I can simply type an equation in and it shows me all the work and answers? Why can't it be that easy?
I like to use http://www.algebrahelp.com/calculators/expression/factoring/ to factor equations, even though I'm far past algebra by now.
What type of problem do you need help on?
If I recall correctly, at least 4/5 of the stuff you do in algebra is calculable on a graphing calculator.
Death_of_Pathos Posted: Thu Feb 26, 2009 1:08 pm Post subject:
Velociraptor
Joined: Nov 08, 2008
Age: 27
Posts: 417
TokenX wrote:
Homer_Bob wrote:
So there basically aren't any websites where I can simply type an equation in and it shows me all the work and answers? Why can't it be that easy?
I like to use http://www.algebrahelp.com/calculators/expression/factoring/ to factor equations, even though I'm far past algebra by now.
What type of problem do you need help on?
If I recall correctly, at least 4/5 of the stuff you do in algebra is calculable on a graphing calculator.
QuickMath Automatic Solutions works nicely for systems of equations. I remember recently running across a website that did what you wanted (step by step examples)... it had a
high Google rank. Maybe I was searching for a quadratic equation solver?
Algebra.com has a nifty setup too. They seem to even generate animated gifs.
richie Posted: Thu Feb 26, 2009 4:53 pm Post subject:
Ye Olde Bookwyrme
Joined: Jan 10, 2007
Age: 55
Posts: 31261
Location: Lake Whoop-Dee-Doo, Pennsylvania
Have you considered MathWorld by Wolfram Research? I go there to get a lot more than just fun number facts.
And SourceForge.net has many math apps available, such as Maxima: http://maxima.sourceforge.net/
_________________
Life! Liberty!...and Perseveration!!.....
Weiner's Law of Libraries: There are no answers, only cross references.....
My Blog: http://richiesroom.wordpress.com/
Treehugger Posted: Fri Jun 25, 2010 5:38 pm Post subject:
Yellow-bellied
Joined: May 26, 2010
Age: 51
Posts: 53
Dussel wrote:
I would recommend more to buy a good mathematical handbook. My recommendation is Bronshtein, Handbook of Mathematics. You will never need to buy a general handbook of mathematics in your life again!
Glad I was browsing and found your recommendation here. I am also horrible at math, so this is a huge help to me. Thank you!
CJame Posted: Mon Jul 05, 2010 7:34 am Post subject:
Blue Jay
Check if your math textbook is on this:
Joined: Jun 27, 2010
Age: 30
Posts: 84
Location: Southern
slave Posted: Wed Mar 07, 2012 6:50 pm Post subject:
Always stuck between 13-38Hz and tired of it.
Stephen Wolfram's sites are second to none.
_________________
Since the birth of civilization, masters have controlled the masses. Our Masters rule over every nation and no one can defy them. They will attain Absolute Power as we reach the Singularity. Any who resist will be destroyed. I will not resist.
Joined: Feb 29, 2012
Age: 101
Posts: 1554
Location: Dystopia
scubasteve Posted: Fri Mar 09, 2012 12:34 pm Post subject:
Phoenix
Why one that shows all the work? If you're looking to cheat on college work, I won't support that. Ask about free tutoring services if you're a student.
Joined: Dec 18, 2009
Age: 29
Posts: 997
Location: Ann Arbor,
Formula for Setting and Corner Triangles
When we take a square block and turn it 45 degrees so the corners of the block are at the top, bottom and sides, we’ve created an “on-Point” block.
Many quilts take on a marvellous new look when these blocks are placed on point. But how do you calculate the setting triangles that are needed around the edge of the quilt, and what about those four
corner triangles, how do you work them out?
When blocks are placed On Point, setting triangles and corner triangles are needed to complete the quilt top.
Sewing On Point blocks together to make a quilt top requires a different technique to the normal way a quilt top is sewn together. First you’ll notice that the rows now run diagonally, and that is
the way they need to be sewn together. You’ll also notice that setting triangles are needed to make the quilt square and smaller corner triangle pieces are required for the four corners of the quilt
to complete the square.
When making setting and corner triangles, it is very important to observe the straight of grain of the fabric. It is essential that the straight of grain be placed to the outside or border edge of
the quilt top. If the bias were to be placed to the outside or border edge, there would be certain stretching, and the quilt top would not be square. The following technique takes the straight of
grain issue into account for both the setting triangles and the corner triangles.
To cut both Setting Triangles and Corner Triangles, a large primary square is calculated. This square is then cut either into four for Setting Triangles or into two for Corner Triangles.
Simple formulas are used to calculate the dimensions for the primary square for both the Setting Triangles and the Corner Triangles.
To calculate Setting Triangles:
Take the finished block size and multiply it by 1.41.
This results in the finished diagonal measurement for the primary square.
Add 1 ¼” to the final diagonal measurement (seam allowances).
Cut the primary square to these dimensions.
Cut diagonally twice to produce four setting triangles.
Formula to calculate the primary square for Setting Triangles:
FBS x 1.41 = FD + 1 ¼” = Primary Square measurement.
Cut diagonally twice.
FBS (finished block size) x 1.41 = FD (finished diagonal) + 1 ¼” = primary square to cut (round off to the nearest 1/8”). Cut diagonally twice to yield four setting triangles.
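For readers who like to check the arithmetic programmatically, here is a small sketch (the function name and the round-to-the-nearest-1/8" step are my own choices; the article's constant 1.41 is √2 rounded):

```python
import math

def setting_square(finished_block_in):
    """Side length of the primary square that is cut diagonally twice
    into four setting triangles. All sizes in inches."""
    diagonal = finished_block_in * math.sqrt(2)  # the article rounds sqrt(2) to 1.41
    raw = diagonal + 1.25                        # add 1 1/4" for seam allowances
    return round(raw * 8) / 8                    # round to the nearest 1/8"

print(setting_square(6))   # a 6" finished block needs a 9.75" primary square
```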
To Calculate Corner Triangles
As before, a large primary square is calculated and then cut into two (diagonally, once). This produces two corner triangles.
Simple formulas are used to calculate the dimensions for the primary square.
To calculate Corner Triangles:
Take the finished block size and multiply it by 1.41.
This results in the finished diagonal measurement for the primary square.
Divide this measurement by 2.
Add .875 or 7/8” (for seam allowances).
Cut the primary square to these dimensions.
Cut diagonally once to produce two corner triangles.
Formula to calculate the primary square for Corner Triangles:
FBS x 1.41 = FD / 2 + 7/8” = Primary Square measurement.
Cut diagonally once.
FBS (finished block size) x 1.41 = FD (finished diagonal) / 2 + 7/8” = primary square to cut (round off to the nearest 1/8”). Cut diagonally once to yield two corner triangles.
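The corner-triangle arithmetic can be sketched the same way (again, the function name and the rounding choice are mine):

```python
import math

def corner_square(finished_block_in):
    """Side length of the primary square that is cut diagonally once
    into two corner triangles. All sizes in inches."""
    half_diagonal = finished_block_in * math.sqrt(2) / 2
    raw = half_diagonal + 0.875                  # add 7/8" for seam allowances
    return round(raw * 8) / 8                    # round to the nearest 1/8"

print(corner_square(6))   # a 6" finished block needs a 5.125" primary square
```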
Mr. Spock is Not Logical (book draft excerpt)
As I mentioned, I'll be posting drafts of various sections of my book here on the blog. This is a rough draft of the introduction to a chapter on logic. I would be extremely grateful for comments,
critiques, and corrections.
I'm a big science fiction fan. In fact, my whole family is pretty
much a gaggle of sci-fi geeks. When I was growing up, every
Saturday at 6pm was Star Trek time, when a local channel showed
re-runs of the original series. When Saturday came around, we
always made sure we were home by 6, and we'd all gather in front of
the TV to watch Trek. But there's one thing about Star Trek for
which I'll never forgive Gene Roddenberry or Star Trek:
"Logic". As in, Mr. Spock saying "But that would
not be logical.".
The reason that this bugs me so much is because it's taught a
huge number of people that "logical" means the same
thing as "reasonable". Almost every time I hear anyone
say that something is logical, they don't mean that it's logical -
in fact, they mean something almost exactly opposite - that it
seems correct based on intuition and common sense.
If you're being strict about the definition, then saying that
something is logical by itself is an almost meaningless
statement. Because what it means for some statement to be
logical is really that that statement is inferable
from a set of axioms in some formal reasoning system. If you don't
know what formal system, and you don't know what axioms, then the
statement that something is logical is absolutely meaningless. And
even if you do know what system and what axioms you're talking
about, the things that people often call "logical" are
not things that are actually inferable from the axioms.
Logic, in the sense that we generally talk about it, isn't
really one thing. Logic is a name for the general family of formal
proof systems with inference rules. There are many logics, and a
statement that is a valid inference (meaning that it is logical)
in one system may not be valid in another. To give you a very
simple example, think about a statement like "The house on the corner
is red". Most people would say that it's logical
that that statement is either true or false: after all, either the house
is red, or the house isn't red. In fact, most
people would agree that the statement "Either the house is red, or it
isn't red" must be true.
In the most common logic, called predicate
logic, that's absolutely correct. The original
statement is either true or false; the statement with an "or" in
it must be true. But in another common logic, called
intuitionistic logic, that's not
true. In intuitionistic logic, there are three possible truth
values: something can be true (which means that there is a proof
that it is true); something can be false (which means that there
is a proof that it is false); and something can be unknown so far
(which means that there's no proof either way).
In addition to having different ways of defining what's true
or provable, different logics can describe different things. Our
good old familiar predicate logic is awful at describing things
involving time - there's really no particularly good way in
predicate logic to say "I'm always hungry at 6pm". But
there are other logics, called temporal
logics which are designed specifically for making
statements about time. We'll look at temporal logics later. For
now, we'll stick with simple familiar logics.
So What is Logic?
A logic is a formal symbolic system, which consists of:
1. A set of atoms, which are the objects
that the logic can reason about.
2. A set of rules describing how you can form statements
in the logic (the syntax of the logic).
3. A system of inference rules
for mechanically discovering
new true statements using known true statements.
4. A model which describes how the atoms and predicates
in the logic map onto a real, consistent set of objects and properties.
The key part of that definition is the
mechanical nature of inference. What logic does is
provide a completely mechanical system for determining the truth
or falsehood of a statement given a set of known truths. In
logic, you don't need to know what something means in order to
determine if it's true! As long as the logic has a
valid model, you don't need to know what the model is to
be able to do valid reasoning in that logic.
The easiest way to get a sense of how that can possibly work
is to use an example. We'll start with one simple logic, and show
how it can be used in a mechanical fashion to deduce true statements
- without knowing what those statements mean. For now, we won't even
really define the logic formally, but instead just rely on intuition.
Most arguments that we hear day to day are based informally on a logic
called predicate logic; to be more specific, they're
mostly first order predicate logic.
In predicate logic, we've got a collection of objects which we
can reason about, which we usually call
atoms. To say anything about objects, we use
predicates. Predicates are statements that assert some property
about an object, or some relationship between objects. For
example, if I had a pet dog named Joe, we could make statements
about him like Dog("Joe"),
which would say "Joe is a dog.". Or we could form statements about
specific relationships: Likes("Joe",
"Rex") is a logical way of saying "Joe
likes Rex".
We can also form general statements. For example, if Joe likes
all other dogs, we can say that in logic: (∀x)
Dog(x) ⇒ Likes("Joe", x). The
upside down "A" stands for "for all"; the statement
says "For all x, if x is a dog, then Joe likes x."
Inference Rules
Where things get interesting is the inference rules. Inference
rules describe how to perform reasoning in the logic - which is another
way of saying that they describe how the logic can allow you to figure
out what's true or false, based on reasoning starting from an initial
set of given facts.
Inference rules are usually written as sequents,
which we'll get to in another section; for now, we'll
stick with informal descriptions.
The simplest inference rules allow you to just manipulate
simple statements. For example, if you know A ∧
B is true, then you know A is true.
Another group of inference rules combine similar
statements to derive new facts. For example, the most famous rule
of logic is called modus ponens: if you know
that the statement P ⇒ Q is true,
and you also know that P is true, then you
can infer that Q must be true.
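Since the rule is purely mechanical, it can be sketched in a couple of lines of code. This is my own illustration, not part of the chapter's formal machinery; statements are just opaque strings:

```python
def modus_ponens(implications, known_true):
    """Given 'P => Q' implications as (P, Q) pairs and a set of statements
    already known to be true, return every Q that modus ponens licenses."""
    return {q for (p, q) in implications if p in known_true}

rules = {("it is raining", "the ground is wet")}
print(modus_ponens(rules, {"it is raining"}))   # {'the ground is wet'}
```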
More interesting rules allow you to do things like work from
the general to the specific: if you know that ∀
x: P(x), and "A" is an atom, then you can
infer P("A").
Yet other rules allow you to transform statements. For example,
if you know that ∃x : P(x), then
you can infer that ¬∀x: ¬P(x).
With the rules we've looked at so far, we can build an example of
what I meant by totally mechanical inference.
Let's suppose we have a bunch of atoms, "a",
"b", "c", ..., and two predicates, P
and Q.
We know a few simple facts:
• P("a", "b")
• P("b", "c")
• ∀x, y, z: P(x,y) ∧ P(y,z)
⇒ Q(x,z)
What can we infer using this? Using a general-to-specific
inference, we can say P("a",
"b") ∧ P("b", "c")
⇒ Q("a", "c")
Then, we can combine P("a",
"b") and P("b",
"c") to infer
P("a", "b") ∧
P("b", "c"). (Remember, we're
being totally mechanical, so if we want to use the implication, we
need to exactly match its left-hand side, so we need to do an
inference to get the "and" statement.)
Finally, we can now use modus ponens to infer
Q("a","c"). We
have no idea what the atoms a, b, and c are; we have
no idea what the predicates P and Q mean. But we've been able to
infer true statements.
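To make the "totally mechanical" point concrete, here is a minimal sketch of such an engine in Python. The representation (tuples of strings) and the function name are mine, not part of the original example; the point is only that the machine matches symbols without knowing what P, Q, "a", "b", or "c" mean.

```python
# A toy forward-chaining step for the example above. It is purely
# mechanical: it matches symbol patterns and has no idea what the
# predicates or atoms actually stand for.
facts = {("P", "a", "b"), ("P", "b", "c")}

def apply_rule(facts):
    """Apply the rule (for all x, y, z: P(x,y) and P(y,z) implies Q(x,z))
    to every matching pair of facts, returning the enlarged fact set."""
    derived = set(facts)
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == "P" and p2 == "P" and y1 == y2:
                derived.add(("Q", x, z))
    return derived

print(apply_rule(facts))  # the set now also contains ("Q", "a", "c")
```

Note that the only new fact it can derive here is Q("a", "c"), exactly as in the hand-worked inference.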
So what do the statements mean? That depends on the model. For a given set of symbolic statements, you can use more than one model - so long as each model is valid, the meanings of the inferences
will be valid in all models assigned to the statements. (We'll talk more about models in section ...) In this case, we could use several different models; I'll show two examples:
1. "a" could be 1, "b" could be 2,
and "c" could be 3, with P(x,y) meaning "x is
1 plus y", and Q(x,y) meaning "x is 2 plus y".
Then we would have used the fact that 2 is 1+1 and 3 is 1+2 to infer
that 3 is 2+1.
2. "a" could be my father, Irving;
"b" could be me, and "c" could be my
son Aaron, with P(x,y) meaning "x is the father of
y" and Q(x,y) meaning "x is the grandfather of
y". Then we would have used the fact that Irving is my
father, and I am Aaron's father to infer that Irving is
Aaron's grandfather.
Looks good.
I would say add a quick explanation of the symbols for "not" and "there exists" just as you did for the symbol for "for all". To be consistent, you should either explain every symbol you use, or
assume that your readers already know them all (IMO).
I agree with SciencePundit about defining the symbols. You also use the AND and IF symbol without really saying what they are, I don't think.
Also, this statement:
What can we infer using this? Using a general-to-specific inference, we can say P("a", "b") ∧ P("a", "b") ⇒ Q("a", "c")
Don't you mean P("a", "b") ∧ P("b", "c") ⇒ Q("a", "c")?
New reader, but I like the blog a lot. Hope this isn't overkill, but I had a bunch of feedback to give on this post. Some of it's nit-pickery, but I did notice a couple of typos in there that you
will want to correct. Sorry if it runs long; my old advisors were terribly brutal about the details when I presented logical work, and it has stuck hard with me.
(In the following, "PX" refers to paragraph X, and "SY" to sentence Y.)
P1, S2: "local channel show" -> "local channel showed"
Throughout: following punctuation should almost always appear inside quotes, as, e.g., P1, penultimate sentence: "Logic." (instead of "Logic".) This is especially true at the end of sentences, where
you should never have two punctuation marks, as in the last sentence of P1 (just end it with "logical.") This doesn't hold when you're using something that is structural to the sentence, like a
semi-colon between parts, or when you are doing something like asking a question about a quote, like
Did he say "I will do it"?
A small nit-pick that may not help at all in a general book: in P4, you talk about logic in terms of axioms, but of course we can also present systems in non-axiomatic form (i.e., natural-deduction
presentations). While these are equivalent to axiomatic formulations, importantly, we don't necessarily need axioms, we just need a formal system that takes us from premisses to conclusions. I'm not
sure if there's a nice way to make that distinction, or if it's even worth it, especially in an introduction, however. I do note that in your list of the 4 components of a logic, in the "So What is
Logic?" section, you don't include axioms there, so it might be confusing to the reader who doesn't know much about this stuff.
In P5, you distinguish between predicate and intuitionistic logic. The point you make is correct, but the former label is a bit misleading. I would simply distinguish classical from intuitionistic
logic. The law of excluded middle holds in classical logic, whether it be propositional or predicate (or otherwise) in style; further, you can always create a predicate logic that is intuitionistic
(it just adds quantification to an intuitionist base, just as classical predicate logic is built on classical propositional logic). And you don't actually need predicates to make the point anyway:
you can treat the statement "the house is red" as a simple propositional atom, r, and the law holds as (r \or \not r). Of course, in your four things a logic will have, you include mappings for the
predicates, so maybe you want to ignore this complication, too.
In P6, S3, you should have a comma after "temporal logics" and before "which". Grammar is mutable, but a basic rule is that "which" always has a comma before it, and if you don't want the comma, use
"that" instead.
In the description of what logic is, you talk about inference rules as taking us from known truths to more truths. However, inference rules actually do a little more than that: they take us from some
sentences to some other sentences. Now, IF the previous sentences are true, then so are the latter; but nothing guarantees or requires that the prior sentences ARE true. We can reason logically from
false premisses just as well (only we end up with conclusions that are often false as well). In fact, we can even reason from flatly inconsistent premisses: it's just that they lead to the "anything
goes" result that everything follows!
For the example about the universal instantiation rule (\forall x P(x) => P(a)), you might want to include a simple gloss in words on what that means.
In your explanation of the numerical interpretation of your example, you have the predicates a bit backward. For this to work, P(x,y) would have to mean "x is y minus 1" and Q(x,y) would have to mean
"x is y minus 2" (or reverse the order of what "a," "b," and "c" mean to get it to work).
To put it bluntly: what's the point of this introduction? You certainly make an interesting point about the difference between "logical" and "reasonable", but your example is quite long relative to
your description of what logic is, and it seems to run out of steam by the end (with very little wrap up).
Furthermore, your example introduces a bunch of ideas along the way without any visual cues, so it's very difficult to scan the introduction without getting lost in the symbols. Also, I agree with #1
that you should define your symbols or assume readers know them already. Given the content, I imagine that readers don't know all the symbols, which brings me to my second point...
I think there is absolutely no reason to introduce any sort of formal syntax in your introduction. I can't imagine any reader that is learning about logic would have an easy time understanding modus
ponens without any discussion of material implication. Keeping the discussion informal (and this means NO symbols) forces descriptions that avoid ambiguity created by using syntax that an uninformed
reader would not understand. To use another example, you introduce predicates in terms of the common functional syntax, e.g. P( a, b, c ), but you never describe what this syntax means and why we
would want to use it.
Also, what I feel this introduction lacks most is a motivation for why logic is useful/important. Great, so we have a mechanical way of deriving truths, but why do I care? If anything, this seems
rather inefficient at first glance. Given how much you think the mechanical nature of logic is important, I'm surprised there is no mention whatsoever about the history of logic such as Hilbert's
program or examples such as axioms in Euclid's geometry.
You don't really talk about proofs (which is certainly a key motivator) and it seems like some discussion of consistency is in order (otherwise logic is rather pointless).
The first few paragraphs are great, but I think the readers you seem to be targeting would lose interest rather quickly. I am personally very interested in logic and what generally makes me retain
interest in your blog posts is the intriguing topic, the clarification you provide, and pointing out common [interesting] mistakes people make. Your first point about reasonable vs. logical does
exactly that, but unfortunately the intro goes somewhat downhill from there.
Hope that helps-- I think your writing style has potential for good material about logic, but you sort of betray your own style.
What Martin said in his #4. Also, when you say:
"A set of atoms, which are the objects that the logic can reason about."
someone who doesn't already know what you're talking about (which is your audience, right?) would probably think you were talking about (what we would call) the domain (which is actually part of the
model). You might want to say more about how you are using the term "object" here, and in what sense "logic can reason about" such things, which is not at all clear from what you say here.
Oh, about Mr. Spock. Of course you're right that that use of "logical" is unfortunate. But it's worse than that, as I recall, given that he often wasn't even being reasonable. Like when he would say
something like "Logically, the probability of that is a mere 7.2 percent" about something which was in no way quantifiable to that degree of precision. I guess he never read your post on significant figures.
a) Words very often have multiple valid meanings: 'logical' does not just mean complying with a formal system of logic. Taking a quick look at the OED, it has been in use since the 17th century to mean
'Characterized by reason; rational, reasonable'. The term logical has always had both a formal and a colloquial meaning, and neither is particularly older than the other. Saying that Spock is using
it wrong is sort of like complaining that the Civil Rights Movement used the word 'integration', when we all know it took very few sums under curves.
b) To get extremely nerdy: When Spock talks about 'logic', he is talking about a particular religious concept that is deeply ingrained in Vulcan culture. Vulcans are prone to fits of emotion, and are
far stronger than humans, and as a result their religion emphasizes the suppression of emotion and the importance of good justification for actions. The idea of 'logic' corresponds to something like
a purity of thought, reason untainted by emotion. When Spock talks about 'Logic', imagine it like a Muslim talking about 'Justice', which is them attempting to translate 'Adalah'.
@ Lowk: You certainly make a valid point about the dictionary definition, but it seems there should be some distinction between the two, since MarkCC's original statement seems intuitively right in
this context. However, the dictionary definition for reasonable (and the example given) create a bit of confusion.
1. agreeable to reason or sound judgment; logical: a reasonable choice for chairman.
Clearly, reasonable and logical are synonyms, but now that I look at what MarkCC wrote again, he thinks that reasonable means "correct based on intuition and common sense" (which is the same as this definition).
I'm going to go further than Lowk and say that the issue is that the formal/mathematical meaning is very much subsumed by the other meaning. The logic that is implied by the primary meaning of
"logical" is whatever logic people tend to reason with (i.e. intuition, which is almost certainly not even a logic).
...which kind of brings up a bigger point: I don't think there are any words in the English language that commonly mean (to a non-logician) what you here mean by "logical." That's why I (and clearly
others too) fell into the same trap you fell into-- you are writing about formal logic and use the word logical, so we assume that the two are related, but they're not in their common usage, which is
why common usage is so dangerous when trying to be "logical."
This, in retrospect, makes my previous comment about motivation for logic that much more valid. One of the reasons you want to be able to do things mechanically is because that is more verifiable and
you are less likely to make mistakes that go unnoticed. By breaking down a complex idea into simple steps, it's much easier to tell if it's correct or not (i.e. derivable from your axioms).
Thanks for all the comments!
I am working to try to restructure it and reduce the dryness of it. This is just a first draft, and my hope was that I could get some hints from the comments - and that's definitely working!
Some of the problems that you all have complained about are a result of seeing this out of context. In the actual book, the previous chapter goes through an informal mathematical construction -
showing how you build the objects that you're working with using sets, and how you describe their behavior using logic. The motivation for why you should care about logic is presented primarily in
that chapter - but your comments are all absolutely correct that I should reiterate it here, albeit in a shorter form.
Anyway.. Thanks for all the comments, and keep 'em coming!
Great book project. I am going to follow it from now on - it looks interesting!
I think the confusion you describe here goes back to the ancient times. For example, the lists of "logical fallacies," frequently used in our days to fuel flame wars, typically include a bizarre
mixture of formal logic and human reasonableness. The ancients can be forgiven for thinking there can be only One Right Logic, or One Right Geometry for that matter - Euclid's, of course. You
may want to mention the ancient roots of equating one's set of math axioms and conclusions with reasonableness, justice, and other human virtues.
I think it's unfortunate that you criticize the use of "logical" to mean "reasonable", which, as pointed out by others, is perfectly correct and has been done for many years. It seems that the use of
logic as you define it would require that your introduction actually be true.
Your quibbling about the common use of the word "logical" to connote that which is reasonable reminds me of the adage from Lowry's The Giver, and I am paraphrasing despite the quotation marks, ~"We
must have precision in language."
Constructive criticism: Unless you get some philosophy professors teaching symbolic logic to adopt it as required reading, I don't anticipate your sales will be strong.
Re #12:
When I started this blog, I thought that it probably wouldn't last more than a week or two, and that I'd be lucky if I got a couple of dozen people to read it. Now, I consistently get 3000 readers
per day. I never would have dreamed that there was an audience that size for my math ramblings!
So I don't even attempt to predict whether the book will sell well or not. The publisher clearly thinks it's got a chance. And I'm working very hard to try to produce something that people will enjoy.
This is a very rough draft of one of the hardest parts of the book to write. The whole reason that I posted it was to get exactly the kinds of negative feedback that many of the commenters have
provided. I knew that this section wasn't flowing the way I want it to, and I'm still not sure of how to fix it.
As far as quibbling about the meaning of logical... I've got my own strong opinion about how the word should be used. You don't have to agree with me. But I *still* hate the way that Star Trek uses
"logical" to mean things that are anything but logical. And I think that using that as a starting point is an engaging way of starting the chapter. If I can make the rest of the chapter read as
entertainingly as the first couple of paragraphs, I'll be incredibly happy, even if people think I'm being overly strict about definitions.
following punctuation should almost always appear inside quotes
I always find that rule difficult to follow, I suspect as the result of years of typing string literals. If I'm quoting something, and the original didn't have a period at the end, it just seems
wrong to put one in, even if it comes at the end of my sentence.
Star Trek uses "logical" to mean approximately what Sherlock Holmes meant by "logical." It is a character- and plot-driven device. Not that I'm saying that "Bones" = Dr. Watson or that Uhura = Irene,
or that Starfleet Command is on Baker Street.
My suggestion being to NOT apply Law of the Excluded Middle to "logical", but instead to consider its casual and imprecise use in entertainment genres including Science Fiction and Mystery. And that
SOME kind of "logical" is what divides Science Fiction from Sci-Fi.
I agree with the comments about Spock and logic made in the comment section. It's really off-putting to say that the FICTIONAL show Star Trek used the word "logical" incorrectly, when it really just
did so for theatrical reasons, and the word does mean "reasonable" in everyday usage. Words have more than one meaning. In my opinion, they SHOULD have more than one meaning. To say that they
shouldn't have used the word "logical" on Star Trek in that way, basically indicates a condescending, holier-than-thou attitude to people who do use it that way.
[Logic is a name for the general family of formal proof systems with inference rules. ]
Even in the technical sense of the word, that doesn't quite work. Fuzzy logic falls under the technical sense of the word "logic" and very rarely deals with formal proof systems (Zadeh even wrote a
paper where he explained that in fuzzy maths, the notion of proof comes as a secondary notion... unlike crisp maths). Does a fuzzy logic expert system formally prove things? No. Do most papers on fuzzy
logic concern formal proofs? Well, they may prove something, but they don't generally aim at developing a structure for proofs.
[To give you a very simple example, think about a statement like "The house on the corner is red". Most people would say that it's logical that that statement is either true or false: after all,
either the house is red, or the house isn't red.]
I don't think most people would actually agree here. Foregoing the physics definition of red as a certain wavelength of light, most people, I believe, would view the color red as a perception. There
exist different shades of red. So, it doesn't make sense to say that a red shirt is as red as red hair or a darker red house. Plus, one side of the house could be red and another side white. So, is
the house red? I think a better example of a statement working out as either true or false would come as something like "The batter hit a homerun." Or "She went shoe shopping." Or something where
"shades" of it at least come as much harder to think of than examples like "The brick is red."
I think that this is a logical way of explaining
I mean reasonable!
For the critics: I think opening about Spock is just a way to draw readers into it. Is there any other logical (reasonable, I mean) reason to delve into a book on logic? And in math, distinction in
words is key...
Reviews of books, websites, poster sets, movies, and other resources for learning and teaching the history of mathematics.
A collection of articles on mathematics in Europe from the twelfth to the fifteenth century.
A collection of 24 mini-posters, each containing a quotation about mathematics.
A biography of the 18th century author of an early calculus text, with some translations from the text.
Essays on various aspects of Greek science and mathematics, which help give a context for those aspects of Greek culture.
A math history class visits the 'Beautiful Science' exhibit at the Huntington Library in Southern California.
A CD with eleven modules, each containing numerous activities designed to help secondary teachers use the history of mathematics to teach mathematics.
Our reviewer praises the selection of excerpts, the use of facsimiles rather than transcriptions, and the commentary and English translation in this collection.
How does geometry begin? This work explores the origins of geometry in the work of artisans.
This resource consists of a series of 61 worksheets, each focused on a particular problem and related to a particular historical mathematical personality.
A history of algebra from its early beginnings to the twentieth century.
Homework Help
Posted by Lana Von Monroe on Tuesday, April 10, 2012 at 10:07pm.
Which number serves as a counterexample to the statement below:
All positive integers are divisible by 2 or 3
I need as much help as possible.
• Math 8th Grade - Lana Von Monroe, Tuesday, April 10, 2012 at 10:45pm
Here are the options:
A: 100
B: 57
C: 30
D: 25
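One way to see the answer (a quick sketch, using the options listed above): a counterexample must falsify the claim, i.e. be a positive integer divisible by neither 2 nor 3.

```python
# A counterexample to "all positive integers are divisible by 2 or 3"
# must be divisible by neither 2 nor 3.
options = {"A": 100, "B": 57, "C": 30, "D": 25}
hits = [(label, n) for label, n in options.items() if n % 2 != 0 and n % 3 != 0]
print(hits)  # [('D', 25)]
```

100 and 30 are divisible by 2, and 57 and 30 are divisible by 3, so only 25 falsifies the statement.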
Derivative of the Airy function Bi: Introduction to the Airy functions (subsection Airys/05)
The best-known properties and formulas for Airy functions
Real values for real arguments
For real values of the argument z, the values of the Airy functions Ai(z) and Bi(z), and of their derivatives Ai′(z) and Bi′(z), are real.
Simple values at zero
The Airy functions Ai(z) and Bi(z), and their derivatives Ai′(z) and Bi′(z), have rather simple values for the argument z = 0:
$\operatorname{Ai}(0)=\frac{1}{3^{2/3}\,\Gamma (2/3)},\quad \operatorname{Bi}(0)=\frac{1}{3^{1/6}\,\Gamma (2/3)},\quad \operatorname{Ai}'(0)=-\frac{1}{3^{1/3}\,\Gamma (1/3)},\quad \operatorname{Bi}'(0)=\frac{3^{1/6}}{\Gamma (1/3)}$
The Airy functions Ai(z) and Bi(z), and their derivatives Ai′(z) and Bi′(z), are defined for all complex values of z and are analytic functions of z over the whole complex z-plane; they do not have branch cuts or branch points. These
functions are entire, with an essential singular point at z = ∞.
The Airy functions Ai(z) and Bi(z), and their derivatives Ai′(z) and Bi′(z), are not periodic functions.
Parity and symmetry
The Airy functions Ai(z) and Bi(z), and their derivatives Ai′(z) and Bi′(z), have mirror symmetry: $\operatorname{Ai}(\bar{z})=\overline{\operatorname{Ai}(z)}$, and similarly for Bi, Ai′, and Bi′.
Series representations
The Airy functions Ai(z) and Bi(z), and their derivatives Ai′(z) and Bi′(z), have rather simple series representations at the origin. These series converge in the whole z‐plane, and their symbolic forms can be written in terms of the hypergeometric function ${}_0F_1$:
$\operatorname{Ai}(z)=\frac{1}{3^{2/3}\,\Gamma (2/3)}\,{}_0F_1\!\left( ;\tfrac{2}{3};\tfrac{z^3}{9} \right)-\frac{z}{3^{1/3}\,\Gamma (1/3)}\,{}_0F_1\!\left( ;\tfrac{4}{3};\tfrac{z^3}{9} \right)$
$\operatorname{Bi}(z)=\frac{1}{3^{1/6}\,\Gamma (2/3)}\,{}_0F_1\!\left( ;\tfrac{2}{3};\tfrac{z^3}{9} \right)+\frac{3^{1/6}\,z}{\Gamma (1/3)}\,{}_0F_1\!\left( ;\tfrac{4}{3};\tfrac{z^3}{9} \right)$
Interestingly, closed-form expressions for truncated versions of the Taylor series at the origin can be expressed in terms of generalized hypergeometric functions.
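As an illustration, Ai(z) and Bi(z) can be evaluated directly from their Maclaurin series. The sketch below is pure Python (the function name is mine); it builds the two standard power-series solutions f and g of w″ = z·w and combines them with the known values of Ai and Bi at the origin.

```python
import math

def airy_ai_bi(z, terms=30):
    """Evaluate Ai(z) and Bi(z) from the Maclaurin series of the two
    independent solutions f, g of w'' = z*w (f(0)=1, f'(0)=0; g(0)=0,
    g'(0)=1), combined with the values at the origin:
        Ai(z) = c1*f(z) - c2*g(z),  Bi(z) = sqrt(3)*(c1*f(z) + c2*g(z))."""
    c1 = 1.0 / (3.0 ** (2.0 / 3.0) * math.gamma(2.0 / 3.0))  # Ai(0)
    c2 = 1.0 / (3.0 ** (1.0 / 3.0) * math.gamma(1.0 / 3.0))  # -Ai'(0)
    f = g = 0.0
    tf, tg = 1.0, z  # current terms of the two series
    for k in range(terms):
        f += tf
        g += tg
        # w'' = z*w forces the coefficients to advance in steps of z^3
        tf *= z ** 3 / ((3 * k + 2) * (3 * k + 3))
        tg *= z ** 3 / ((3 * k + 3) * (3 * k + 4))
    return c1 * f - c2 * g, math.sqrt(3.0) * (c1 * f + c2 * g)
```

For |z| of order one, 30 terms already reach machine precision; for large |z| the asymptotic expansions of the next subsection are the practical route.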
Asymptotic series expansions
The asymptotic behavior of the Airy functions Ai(z) and Bi(z) can be described through formulas that depend on the sector of the complex plane (the Stokes phenomenon). For example, the standard asymptotic expansions of Ai(z) and Ai′(z)
are valid for |arg(z)| < π, while those of Bi(z) and Bi′(z) are valid only in the smaller sector |arg(z)| < π/3.
By using discontinuous functions (such as the unit step function), it is possible to write single expansions that are valid for all directions.
Integral representations
The Airy functions Ai(z) and Bi(z), and their derivatives Ai′(z) and Bi′(z), have rather simple integral representations through sine, cosine, and power functions; for real x, for example:
$\operatorname{Ai}(x)=\frac{1}{\pi }\int_{0}^{\infty }{\cos \left( \frac{{{t}^{3}}}{3}+xt \right)dt},\qquad \operatorname{Bi}(x)=\frac{1}{\pi }\int_{0}^{\infty }{\left[ \exp \left( -\frac{{{t}^{3}}}{3}+xt \right)+\sin \left( \frac{{{t}^{3}}}{3}+xt \right) \right]dt}$
The arguments of the Airy functions Ai(z) and Bi(z), and of their derivatives Ai′(z) and Bi′(z), can be simplified when they involve the third roots of unity, i.e., at the rotated points $z\,e^{\pm 2\pi i/3}$.
Simple representations of derivatives
The derivatives of the Airy functions Ai(z) and Bi(z), and of Ai′(z) and Bi′(z), have simple representations that can also be expressed through Airy functions; in particular, from the defining differential equation, $\frac{d}{dz}\operatorname{Ai}'(z)=z\operatorname{Ai}(z)$ and $\frac{d}{dz}\operatorname{Bi}'(z)=z\operatorname{Bi}(z)$.
The symbolic $\nu$-order derivatives have more complicated representations, in terms of regularized generalized hypergeometric functions.
Differential equations
The Airy functions Ai(z) and Bi(z) appeared as special solutions of the simple-looking linear second-order differential equation
$w''(z)-z\,w(z)=0,$
whose general solution is $w(z)={{c}_{1}}\operatorname{Ai}(z)+{{c}_{2}}\operatorname{Bi}(z)$, where ${{c}_{1}}$ and ${{c}_{2}}$ are arbitrary constants. Additional restrictions on ${{c}_{1}}$ and ${{c}_{2}}$ lead to the corresponding Airy functions.
Similar properties are valid for the derivatives of the Airy functions.
[plt-scheme] best way to printf "%.2f" ?
From: Jens Axel Søgaard (jensaxel at soegaard.net)
Date: Sat Dec 9 12:28:39 EST 2006
gknauth at sunlink.net wrote:
> I had a number 6.666666666666666e-10 but I just wanted to print 6.67e-10. After looking at DrScheme's printf, SLIB's printf, some do-it-yourself formatting folks had posted, SRFI-54, SRFI-48, and the Cookbook, it looked as though SRFI-48 might be the most straightforward. So I tried it:
From (lib "string.ss"):
(real->decimal-string n [digits-after-decimal-k]) PROCEDURE
Prints n into a string and returns the string. The printed form of n
shows exactly digits-after-decimal-k digits after the decimal point,
where digits-after-decimal-k defaults to 2.
Before printing, n is converted to an exact number, multiplied by
(expt 10 digits-after-decimal-k), rounded, and then divided again by
(expt 10 digits-after-decimal-k). The result of this process is an exact
number whose decimal representation has no more than
digits-after-decimal-k digits after the decimal point (and it is padded with
trailing zeros if necessary). The printed form uses a minus sign if n is
negative, and it does not use a plus sign if n is positive.
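The documented algorithm is easy to sketch in another language. Below is a hypothetical Python analogue (mirroring the described behavior, not Racket's actual implementation), using exact rationals for the scale-round-format steps:

```python
from fractions import Fraction

def real_to_decimal_string(n, digits=2):
    """Mirror the documented algorithm: convert n to an exact number,
    scale by 10^digits, round, and print with exactly `digits` digits
    after the decimal point (padding with trailing zeros)."""
    scaled = round(Fraction(n) * 10 ** digits)
    sign = "-" if scaled < 0 else ""
    s = str(abs(scaled)).rjust(digits + 1, "0")
    return f"{sign}{s[:-digits]}.{s[-digits:]}" if digits else sign + s
```

Note that this gives "0.00" for 6.666666666666666e-10; for the scientific-notation form the original poster wanted (6.67e-10), Python's own format spec f"{x:.2e}" is the closer analogue of a C-style "%.2e".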
Jens Axel Søgaard
Posted on the users mailing list. | {"url":"http://lists.racket-lang.org/users/archive/2006-December/015621.html","timestamp":"2014-04-19T15:30:59Z","content_type":null,"content_length":"6441","record_id":"<urn:uuid:c38409b1-a1aa-41ff-8824-b67e974d65ed>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00621-ip-10-147-4-33.ec2.internal.warc.gz"} |
Natural convection on cylinders and spheres
From Thermal-FluidsPedia
Vertical Cylinder
Natural convection on a vertical cylinder can find its applications in the cooling of nuclear reactors, large-current bus in power plants, as well as in the human body. A vertical cylinder has three
surfaces: the top surface, the bottom surface and the vertical surface. The natural convection from the top and bottom surfaces can be calculated using the approaches discussed in the previous
subsection. For the vertical surface of the cylinder, the development of the boundary layer is shown in Fig. 1. If the ratio of diameter to height, D/L, is large enough, the thermal boundary layer
thickness will be much smaller than the radius of the cylinder and the correlations for natural convection over a vertical flat plate can be used to calculate the natural convection over the vertical
cylinder. For fluid with Prandtl number greater than 1, the condition under which the correlation for flat plate is applicable is (Bejan, 2004):
$\frac{D}{L}>\text{Ra}_{L}^{-1/4}$ (1)
This condition is referred to as the “thick cylinder limit.” If D/L is less than $\text{Ra}_{L}^{-1/4}$, the boundary layer thickness will be comparable to the radius of the cylinder and the effect
of surface curvature cannot be neglected. For this case – referred to as “thin cylinder limit” – the governing equations must be given in a cylindrical coordinate system:
$\frac{\partial u}{\partial x}+\frac{1}{r}\frac{\partial (vr)}{\partial r}=0$ (2)
$u\frac{\partial u}{\partial x}+\frac{v}{r}\frac{\partial (ur)}{\partial r}=\nu \frac{1}{r}\frac{\partial }{\partial r}\left( r\frac{\partial u}{\partial r} \right)+g\beta (T-{{T}_{\infty }})$ (3)
$u\frac{\partial T}{\partial x}+\frac{v}{r}\frac{\partial (Tr)}{\partial r}=\alpha \frac{1}{r}\frac{\partial }{\partial r}\left( r\frac{\partial T}{\partial r} \right)$ (4)
The boundary conditions for eqs. (2) – (4) are
$u=v=0,\text{ }T={{T}_{w}},\text{ at }r=R$ (5)
$u=v=0,\text{ }T={{T}_{\infty }}\text{, at }r\to \infty$ (6)
The integral momentum and energy equations can be obtained by integrating eqs. (2) – (4) with respect to r in the interval of (R, R + δ):
$\frac{d}{dx}\int_{R}^{R+\delta }{{{u}^{2}}}rdr=-\nu R{{\left. \frac{\partial u}{\partial r} \right|}_{r=R}}+g\beta \int_{R}^{R+\delta }{(T-{{T}_{\infty }})rdr}$ (7)
$\frac{d}{dx}\int_{R}^{R+\delta }{u(T-{{T}_{\infty }})}rdr=-\alpha R{{\left. \frac{\partial T}{\partial r} \right|}_{r=R}}$ (8)
In order to simplify the solution procedure, it is assumed that the velocity and temperature profiles that were used in the integral solution of natural convection over a vertical flat plate, eqs.
(23) and (24), are still valid for natural convection over a thin cylinder – except to replace y with r − R. Substituting the following velocity and temperature profiles:
$\frac{u}{U}=\frac{y}{\delta }{{\left( 1-\frac{y}{\delta } \right)}^{2}}$
$\frac{T-{{T}_{\infty }}}{{{T}_{w}}-{{T}_{\infty }}}={{\left( 1-\frac{y}{{{\delta }_{t}}} \right)}^{2}}$
into eqs. (7) and (8), one obtains the following integral equations:
$\frac{d}{dx}\left[ (8R+3\delta )\delta {{U}^{2}} \right]=-840\nu R\frac{U}{\delta }+70g\beta ({{T}_{w}}-{{T}_{\infty }})\delta (4R+\delta )$ (9)
$\frac{d}{dx}[(7R+2\delta )U\delta ]=\frac{420\alpha R}{\delta }$ (10)
Unlike the case of natural convection over a vertical flat plate, eqs. (9) and (10) do not suggest that U and δ can be expressed as a simple function of x. Le Fevre and Ede (1956) assumed that:
$U={{U}_{0}}+\frac{{{U}_{1}}}{R}+\frac{{{U}_{2}}}{{{R}^{2}}}+\ldots \text{ and }\delta ={{\delta }_{0}}+\frac{{{\delta }_{1}}}{R}+\frac{{{\delta }_{2}}}{{{R}^{2}}}+\ldots$ (11)
Substituting the above expression into eqs. (9) and (10) and grouping the terms for the same order of R, equations for ${{U}_{0}}$, ${{\delta }_{0}}$, ${{U}_{1}}$, ${{\delta }_{1}}$, and so on, can be obtained:
$8{{\delta }_{0}}\frac{d}{dx}(U_{0}^{2}{{\delta }_{0}})=-840\nu {{U}_{0}}+280g\beta ({{T}_{w}}-{{T}_{\infty }})\delta _{0}^{2}$ (12)
$7{{\delta }_{0}}\frac{d}{dx}({{U}_{0}}{{\delta }_{0}})=420\alpha$ (13)
${{\delta }_{0}}\frac{d}{dx}(8\delta _{0}^{2}U_{0}^{2}+8{{\delta }_{1}}U_{0}^{2}+16{{\delta }_{0}}{{U}_{0}}{{U}_{1}})+8{{\delta }_{1}}\frac{d}{dx}({{\delta }_{0}}U_{0}^{2})=-840\nu {{U}_{1}}+70g\beta ({{T}_{w}}-{{T}_{\infty }})(8{{\delta }_{0}}{{\delta }_{1}}+\delta _{0}^{3})$ (14)
${{\delta }_{0}}\frac{d}{dx}(2\delta _{0}^{2}{{U}_{0}}+7{{\delta }_{1}}{{U}_{0}}+7{{\delta }_{0}}{{U}_{1}})+7{{\delta }_{1}}\frac{d}{dx}({{\delta }_{0}}{{U}_{0}})=0$ (15)
It is assumed that:
${{U}_{0}}={{A}_{0}}{{x}^{m}},\text{ }{{\delta }_{0}}={{B}_{0}}{{x}^{n}}$
and one can use eqs. (12) and (13) to obtain m = 1 / 2 and n = 1 / 4 and:
${{A}_{0}}=\frac{80\alpha }{B_{0}^{2}},\text{ }B_{0}^{4}=\frac{80(20+21\Pr ){{\alpha }^{2}}}{7g\beta ({{T}_{w}}-{{T}_{\infty }})}$ (16)
By following a similar procedure, one can use eqs. (14) and (15) to obtain:
${{U}_{1}}={{A}_{1}}{{x}^{3/4}},\text{ }{{\delta }_{1}}={{B}_{1}}{{x}^{1/2}}$
${{A}_{1}}=-\frac{4(656+315\Pr )\alpha }{7(64+63\Pr ){{B}_{0}}},\text{ }{{B}_{1}}=-\frac{(272+315\Pr )B_{0}^{2}}{35(64+63\Pr )}$ (17)
If one only considers the first two terms in eq. (11), the boundary layer thickness becomes:
$\delta \approx {{B}_{0}}{{x}^{1/4}}+\frac{{{B}_{1}}{{x}^{1/2}}}{R}$ (18)
The local heat transfer coefficient at the surface of the vertical cylinder is:
${{h}_{x}}=-\frac{k}{{{T}_{w}}-{{T}_{\infty }}}{{\left( \frac{\partial T}{\partial r} \right)}_{r=R}}=\frac{2k}{\delta }$
and the local Nusselt number is:
$\text{N}{{\text{u}}_{x}}=\frac{{{h}_{x}}x}{k}=\frac{2x}{\delta }$
Substituting eq. (18) into the above expression, one obtains
$\text{N}{{\text{u}}_{x}}=\frac{2{{\left[ \frac{7\text{G}{{\text{r}}_{x}}{{\Pr }^{2}}}{80(20+21\Pr )} \right]}^{1/4}}}{1-{{\left[ \frac{80(20+21\Pr )}{7\text{G}{{\text{r}}_{x}}{{\Pr }^{2}}} \right]}^{3/4}}\frac{272+315\Pr }{35(64+63\Pr )}\frac{x}{R}}$
Since the second term in the denominator of the above expression is much less than 1, Le Fevre and Ede (1956) suggested that the above expression for local Nusselt number can be simplified to:
$\text{N}{{\text{u}}_{x}}={{\left[ \frac{7\text{G}{{\text{r}}_{x}}{{\Pr }^{2}}}{5(20+21\Pr )} \right]}^{1/4}}+\frac{2(272+315\Pr )}{35(64+63\Pr )}\frac{x}{R}$ (19)
The average Nusselt number can then be obtained as:
${{\overline{\text{Nu}}}_{L}}=\frac{4}{3}{{\left[ \frac{7\text{R}{{\text{a}}_{L}}\Pr }{\text{5(20+21Pr)}} \right]}^{1/4}}+\frac{4(272+315\Pr )L}{35(64+63\Pr )D}$ (20)
where the characteristic length in ${{\overline{\text{Nu}}}_{L}}$ and RaL is the height of the cylinder. It can be seen that the first term on the right-hand side of eq. (20) has a form similar to
eq. (28), and that the second term represents the effect of the aspect ratio of the vertical cylinder. The second term does not depend on the Rayleigh number.
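The two contributions to eq. (20) are straightforward to evaluate numerically. The sketch below is illustrative only (the function name and sample values are not part of the original analysis):

```python
def avg_nusselt_vertical_cylinder(ra_l, pr, l_over_d):
    """Average Nusselt number from eq. (20); the characteristic length in
    Nu and Ra_L is the cylinder height L, and l_over_d is the aspect ratio L/D."""
    # First term: flat-plate-like contribution, grows with Ra_L
    flat_plate_term = (4.0 / 3.0) * (7.0 * ra_l * pr / (5.0 * (20.0 + 21.0 * pr))) ** 0.25
    # Second term: curvature (aspect-ratio) correction, independent of Ra_L
    curvature_term = 4.0 * (272.0 + 315.0 * pr) * l_over_d / (35.0 * (64.0 + 63.0 * pr))
    return flat_plate_term + curvature_term

# Example: air (Pr = 0.7), Ra_L = 1e6, aspect ratio L/D = 10
print(avg_nusselt_vertical_cylinder(1e6, 0.7, 10.0))
```

Evaluating the two terms separately makes visible the point made above: raising the Rayleigh number changes only the first term, while changing L/D shifts the result by a Rayleigh-independent amount.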
Horizontal Cylinder and Sphere
When a horizontal cylinder or sphere with temperature T[w] is immersed in a fluid with temperature T[∞] (${{T}_{w}}>{{T}_{\infty }}$), a boundary layer develops along the curved surface. The boundary
layer thickness is a function of the angle φ ($\phi ={{0}^{\circ }}$ is at the bottom of the cylinder or sphere), as shown in Fig. 2. The similarity solution that worked for the case of the vertical
plate does not work for natural convection over a horizontal cylinder or sphere. Merk and Prins (1953, 1954a, b) obtained an integral solution by assuming the momentum and thermal boundary layer
thicknesses are identical. The results show that the local Nusselt number is highest at the bottom where the boundary layer is thinnest. As the angle φ increases, the thickness of the boundary layer
increases and the local Nusselt number decreases. Although the integral solution can yield results all the way to the top, where $\phi ={{180}^{\circ }}$ and $\text{N}{{\text{u}}_{\phi }}=0$, the result beyond $\phi ={{165}^{\circ }}$ is no longer applicable because boundary layer separation occurs and plume flow takes place. Based on the integral solution, Merk and Prins (1954b) recommended the following empirical
correlation for natural convection over a horizontal cylinder and sphere:
${{\overline{\text{Nu}}}_{D}}=C\text{R}{{\text{a}}_{D}}^{1/4}$ (21)
where the characteristic length in the average Nusselt number and Rayleigh number is the diameter of the cylinder or sphere. The constant C is a function of Prandtl number and can be found from the
following table.
Constant in the empirical correlation for natural convection over horizontal cylinder and sphere
│ Prandtl number │ 0.7   │ 1.0   │ 10.0  │ 100.0 │ ∞     │
│ Cylinder       │ 0.436 │ 0.456 │ 0.520 │ 0.523 │ 0.523 │
│ Sphere         │ 0.474 │ 0.497 │ 0.576 │ 0.592 │ 0.595 │
Practically, the empirical correlations based on experimental results are more useful. Churchill and Chu (1975) recommended the following correlation for horizontal cylinders:
${{\overline{\text{Nu}}}_{D}}={{\left\{ 0.6+\frac{\text{0}\text{.387Ra}_{D}^{1/6}}{{{[1+{{(0.559/\Pr )}^{9/16}}]}^{8/27}}} \right\}}^{2}}$ (22)
which has the same form as the correlation for vertical plate, eq. (29), except the characteristic length has been changed from vertical plate height to the diameter of the cylinder. Equation (22)
covers all Prandtl numbers and Rayleigh numbers between 0.1 and 10^12.
For natural convection over a sphere, Churchill (1983) suggested that the following correlation can best fit the available experimental data:
${{\overline{\text{Nu}}}_{D}}=2+\frac{\text{0}\text{.589Ra}_{D}^{1/4}}{{{[1+{{(0.469/\Pr )}^{9/16}}]}^{4/9}}}$ (23)
where the average Nusselt number, ${{\overline{\text{Nu}}}_{D}}$, and the Rayleigh number, RaD, are based on the diameter of the sphere. Equation (23) is valid for the case where $\Pr \ge 0.7$ and the
Rayleigh number is less than 10^11.
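Both empirical correlations can be coded in a few lines; the sketch below is illustrative (function names are not from the references), with the validity ranges stated above noted in the docstrings:

```python
def avg_nusselt_horizontal_cylinder(ra_d, pr):
    """Churchill-Chu correlation, eq. (22); valid for 0.1 <= Ra_D <= 1e12, any Pr."""
    bracket = (1.0 + (0.559 / pr) ** (9.0 / 16.0)) ** (8.0 / 27.0)
    return (0.6 + 0.387 * ra_d ** (1.0 / 6.0) / bracket) ** 2

def avg_nusselt_sphere(ra_d, pr):
    """Churchill correlation, eq. (23); valid for Pr >= 0.7 and Ra_D < 1e11."""
    bracket = (1.0 + (0.469 / pr) ** (9.0 / 16.0)) ** (4.0 / 9.0)
    return 2.0 + 0.589 * ra_d ** 0.25 / bracket

# Example: air (Pr = 0.7) at Ra_D = 1e6
print(avg_nusselt_horizontal_cylinder(1e6, 0.7))
print(avg_nusselt_sphere(1e6, 0.7))
```

Note that the additive constant 2 in eq. (23) is the pure-conduction limit for a sphere, recovered as RaD approaches zero.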
Bejan, A., 2004, Convection Heat Transfer, 3rd ed., John Wiley & Sons, New York.
Churchill, S.W., 1983, “Free Convection around Immersed Bodies,” Heat Exchanger Design Handbook, Schlunder, E.U., ed., Hemisphere Publishing, New York, NY.
Churchill, S. W., and Chu, H.H.S., 1975, “Correlating Equations for Laminar and Turbulent Free Convection from a Vertical Plate,” Int. J. Heat Mass Transfer, Vol. 18, pp. 1323-1329.
Faghri, A., Zhang, Y., and Howell, J. R., 2010, Advanced Heat and Mass Transfer, Global Digital Press, Columbia, MO.
Le Fevre, E.J., and Ede, A.J., 1956, “Laminar Free Convection from the Outer Surface of a Vertical Circular Cylinder,” Proceedings of the 9th International Congress of Applied Mechanics, Brussels, Vol. 4, pp. 175-183.
Merk, H.J., and Prins, J.A., 1953, “Thermal Convection in Laminar Boundary Layers I,” Applied Scientific Research, Vol. A4, pp. 11-24.
Merk, H.J., and Prins, J.A., 1954a, “Thermal Convection in Laminar Boundary Layers II,” Applied Scientific Research, Vol. A4, pp. 195-206.
Merk, H.J., and Prins, J.A., 1954b, “Thermal Convection in Laminar Boundary Layers III,” Applied Scientific Research, Vol. A4, pp. 207-221.
Apache Junction Calculus Tutor
Find an Apache Junction Calculus Tutor
...My experience includes 8 years of teaching SAT prep for the State College Area School District, as well as teaching geometry online for a large cyberschool in PA. I am certified to teach
secondary mathematics in both Pennsylvania and Arizona. I take pride and satisfaction in being able to get s...
12 Subjects: including calculus, physics, geometry, ASVAB
...I have worked with both high school and college students in their pre-algebra, algebra, and advanced math courses. I would welcome the opportunity to work with you! I am a highly qualified and
state certified high school math teacher.
15 Subjects: including calculus, geometry, statistics, algebra 1
...I have been tutoring students in physics for over ten years. I got my BS in Physics from SUNY Stonybrook, which included time as a TA in the helproom. I was also inducted into the National
Physics Honors Society, Sigma Pi Sigma, for my academic performance.
14 Subjects: including calculus, chemistry, physics, geometry
...C. I have a BS, an MA and a PhD in mathematics. I have been teaching at the college level for 15 years.
9 Subjects: including calculus, geometry, algebra 1, GED
...My attendance numbers were high because people needed help, and because I am open and engaging, instructive, and able to pose questions so that students understand, rather than simply
explaining the material. I am also able to adapt different ways of presenting material and making it engaging to...
17 Subjects: including calculus, reading, algebra 1, algebra 2
What is the slope of the given line?
Sorry, huh?
The given line, can you tell me what the slope is of it
What is the slope of y + 2 = 1/3(x - 5)
one sec let me see
If you're not sure, the slope of y - y1 = m(x - x1) is m
Another alternative: The slope of \(y = mx + b\) is \(m\).
so would it be 1/3?
In the slope-intercept form: y = mx + b you should know the slope of the line is the coefficient m of the x term, right?
yep, the slope is 1/3 The slope of ANY perpendicular line is -3/1 (flip the fraction and flip the sign)
-3/1 reduces to -3, so that means that the answer is either choice A or B this is because choice A and B have slopes of -3
Where flip the fraction means find the reciprocal and flip the sign means use the opposite sign.
yes! I remembered that!! Alright how do we know between a and b though?
Now use this to find the value of b in y = mx + b:
y = -3x + b ... plug in m = -3 (the perpendicular slope)
3 = -3(-4) + b ... plug in x = -4 and y = 3
3 = 12 + b
Keep going to solve for b
b = 4?
3 = 12 + b
3 - 12 = 12 + b - 12
-9 = b
b = -9
not sure how you got b = 4
y = mx + b
y = -3x + b ... plug in m = -3
y = -3x - 9 ... plug in b = -9
So that's your perpendicular equation
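The algebra worked through above can be sanity-checked in a few lines of code (a sketch added for illustration; exact fractions avoid floating-point surprises):

```python
from fractions import Fraction

# Given line: y + 2 = (1/3)(x - 5), so the slope is m = 1/3
m = Fraction(1, 3)

# Perpendicular slope: negative reciprocal ("flip the fraction, flip the sign")
m_perp = -1 / m
assert m * m_perp == -1   # perpendicular slopes multiply to -1

# Perpendicular line through (-4, 3): solve 3 = m_perp * (-4) + b for b
x0, y0 = -4, 3
b = y0 - m_perp * x0
print(m_perp, b)   # -3 -9, i.e. y = -3x - 9
```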
woah! haha I have no idea either...is it b?
yeah, that's what I got too. thanks!
ok great, just a silly math error then I guess
Help! Motherboard problems!
hello! So i have an Asus M5A99x Evo R 2.0, motherboard and i have some questions. Also i am using the NZXT Source 210 Case.
So first question is when i got the motherboard in the case i put in the 2 screws in the corner like it say's to do. But now the middle of the motherboard is like pushed up and when i try to put a
screw in i have to push really hard so i dont, and im scared to push down to get that screw in. And when i do push there is crackling sounds. And i just dont know what to do. Also i have 5 of the 9
screws in, and there all the outer ones. So is that stable? And i have it upright and its holding up im just not sure if its good for the MoBo.
Question two, the motherboard came without an I/O Shield that fits (Completely wrong shield) is it ok if i go on without one for now?
On the top of the MoBo where it says EPU and has an 8-Pin power connector slot and i plug it in then turn on my power supply to see if it works, nothing happens, but when i unplug it everything turns on.
PLEASE HELP!!!
May 5, 2013 2:20:28 PM
First question to you: Was this m/b purchased new, and if so was the box sealed and what "parts" came with it?
The board should fit just fine in the 210 and NOT require any force to put the screws in. I know they say to put the corners in first, but did you do a test "eyeball" alignment before you put in any
screws? Did you install the standoffs?
When you are installing the board, be sure you lay the case on the side to work on it.
As far as the i/o is concerned, you can operate without it, but you have to be careful that when using the back panel connections that you do not press the board down and ground it.
Which connector are you using on the EPU. Your power supply should have one lead that has two four pin connectors together for the m/b. Do not use one of the PCIe 6+2 connectors.
elfmaslana said:
I don't believe you need the EPU power connector as it is for energy saving.
The I/O shield is to hold the port and motherboard in place.
Once again I believe the extra spots are for standoffs, only if the motherboard is too close to the case.
markwp said:
First question to you: Was this m/b purchased new, and if so was the box sealed and what "parts" came with it?
The board should fit just fine in the 210 and NOT require any force to put the screws in. I know they say to put the corners in first, but did you do a test "eyeball" alignment before you put in any
screws? Did you install the standoffs?
When you are installing the board, be sure you lay the case on the side to work on it.
As far as the i/o is concerned, you can operate without it, but you have to be careful that when using the back panel connections that you do not press the board down and ground it.
Which connector are you using on the EPU. Your power supply should have one lead that has two four pin connectors together for the m/b. Do not use one of the PCIe 6+2 connectors.
Thank you for the response, and yes it was, i got it off of NewEgg for 104.99 And it was new, but the box it came in was white and in it was a CrossFire Sli Bridge cable, 2 Sata 6 Gb Cables (Black),
1 White sata cable, screws, I/O shield (Wrong one), 4 Manuals, the wifi adapter and everything. Also The EPU Im not sure if its PCIe 6+2 But this is what it is.
May 5, 2013 2:39:30 PM
The white box indicates that your m/b was a refurbished unit, i.e., you bought one that someone else returned for whatever reason. You seem to have gotten all the parts (was there a driver disc?).
The EPU is power directly to the cpu and is most definitely needed. It will not run without it.
What power supply do you have?
markwp said:
The white box indicates that your m/b was a refurbished unit, i.e., you bought one that someone else returned for whatever reason. You seem to have gotten all the parts (was there a driver disc?).
The EPU is power directly to the cpu and is most definitely needed. It will not run without it.
What power supply do you have?
Oh sorry yes ther was a driver disk, and here is my power supply. Also when i bought it, it says New for Asus...?
Im also going to take some pictures and post them.
May 5, 2013 3:15:17 PM
OK - couple of things
Asus boxes are not white. The link you provided says "open box", which is NOT new - it is a return which could be why your i/o is incorrect.
If you look in the bundle of cables coming out of the psu, there should be one lead that has either (2) four pin connectors and nothing else, or (1) eight pin connector like the one you linked, and
nothing else.
Your power supply may say 575w, but in reality it is probably more like 300. If you look at photos for that supply on Newegg's site, you will see one of the label on the psu. Notice that (a) the
label says "average output 450w", but more importantly, notice that the +12v portion (called the 12v rail) is rated at 25a (25 amps). 25 amps on a 110v line will provide ~275w max. The ratings for
the +3 and +5 rails, while important for the pc, are not important when it comes to knowing if your psu will be good.
For instance -
this psu
says it is 530w, but note that the 12v rail on this one is 41a as opposed to your 25. A good, solid power supply is an absolute must for your computer. There is a reason cheap power supplies are
cheap - they use substandard components and construction.
Regarding the "crackling" sound - does this occur when you are trying to get the board seated, or does it occur when you turn the power on?
markwp said:
OK - couple of things
Asus boxes are not white. The link you provided says "open box", which is NOT new - it is a return which could be why your i/o is incorrect.
If you look in the bundle of cables coming out of the psu, there should be one lead that has either (2) four pin connectors and nothing else, or (1) eight pin connector like the one you linked, and
nothing else.
Your power supply may say 575w, but in reality it is probably more like 300. If you look at photos for that supply on Newegg's site, you will see one of the label on the psu. Notice that (a) the
label says "average output 450w", but more importantly, notice that the +12v portion (called the 12v rail) is rated at 25a (25 amps). 25 amps on a 110v line will provide ~275w max. The ratings for
the +3 and +5 rails, while important for the pc, are not important when it comes to knowing if your psu will be good.
For instance -
this psu
says it is 530w, but note that the 12v rail on this one is 41a as opposed to your 25. A good, solid power supply is an absolute must for your computer. There is a reason cheap power supplies are
cheap - they use substandard components and construction.
Regarding the "crackling" sound - does this occur when you are trying to get the board seated, or does it occur when you turn the power on?
Again thanks, And it is when i try to screw in the remaining 4 screws i have to push and when i push the tiniest bit, i hear crackling, and i just looked on the Psu it says maximum wattage, 475 W So
thank you for that. And i just put in the 2x4 Pin Power connector in and now the red light indacating the ram or Cpu arent working turned off! So yay!
May 5, 2013 3:24:01 PM
markwp said:
Read my earlier response carefully - that psu is NOT capable of more than about 300w, and it is a substandard unit.
Quite odd, i read it and thought oh wow, went to go look at it and it said 475 w maximum, but ok. I will look into getting a new PSU very very soon.
But overall i will get a new psu, thank you for this, but back to the main question, should i attempt getting those 4 screws in. And force it down, or leave it without the ones there.
May 5, 2013 3:45:29 PM
What I said was YOUR 12v rail was only good for about 300w.
The power available from the 12v rail(s) - some psu's have multiple 12v rails, but that's another story, is calculated by P(ower) = I (amps/current) x V(olts). The power a psu can produce is a
function of several different components. The better (and sometimes larger) components are capable of more output for a given input, hence some supplies may be 300w and some supplies can be 1500w or
Back to your main question. Did you do an eyeball test fit to see if the holes aligned properly and did you install the standoffs to the motherboard tray?
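The P = I × V arithmetic above can be written out as a short sketch (the function name is illustrative; the rail ratings are the ones quoted in this thread):

```python
def rail_watts(volts, amps):
    # Continuous power a single rail can deliver: P = I * V
    return volts * amps

print(rail_watts(12, 25))  # the OP's 25 A, 12 V rail: 300 W
print(rail_watts(12, 41))  # the suggested 41 A, 12 V rail: 492 W
```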
markwp said:
What I said was YOUR 12v rail was only good for about 300w.
The power available from the 12v rail(s) - some psu's have multiple 12v rails, but that's another story, is calculated by P(ower) = I (amps/current) x V(olts). The power a psu can produce is a
function of several different components. The better (and sometimes larger) components are capable of more output for a given input, hence some supplies may be 300w and some supplies can be 1500w or
Back to your main question. Did you do an eyeball test fit to see if the holes aligned properly and did you install the standoffs to the motherboard tray?
Yes thank you, i did and all the holes are aligned perfectly, just about 1/2 of a centimeter raised.
May 5, 2013 3:59:59 PM
OK - So, standoffs are in place and the holes on the m/b line up. A positive sign.
The way I install a m/b is to insert the centermost screw first and just turn it enough so it won't fall out. Then I do the same with the corners, and finally the other four.
To secure it, I tighten the center first, just until it barely snugs in, then the corners, then the final four.
Finally I tighten all screws (in the same order) so that they snug, but not overly tight.
If you try it this way and still hear crackling, I would be concerned that the plies of the m/b are separating which might be why the board was returned initially and became an "open box".
Just out of curiosity, what are you planning to use this computer for? Gaming, web, video, ? What cpu are you using? What graphics card (gpu) are you using?
I ask, because those answers will help determine the wattage you need in a psu.
Read this about Logisys psu
markwp said:
OK - So, standoffs are in place and the holes on the m/b line up. A positive sign.
The way I install a m/b is to insert the centermost screw first and just turn it enough so it won't fall out. Then I do the same with the corners, and finally the other four.
To secure it, I tighten the center first, just until it barely snugs in, then the corners, then the final four.
Finally I tighten all screws (in the same order) so that they snug, but not overly tight.
If you try it this way and still hear crackling, I would be concerned that the plies of the m/b are separating which might be why the board was returned initially and became an "open box".
Just out of curiosity, what are you planning to use this computer for? Gaming, web, video, ? What cpu are you using? What graphics card (gpu) are you using?
I ask, because those answers will help determine the wattage you need in a psu.
Read this about Logisys psu
Im a gamer.. I have a Radeon 7970 Ghz Edition (2x6 Pin) And a AMD fx 8350 Cpu, 8 gb ram.
Also so i should unscrew everything then start with the middle screw.. or push down the middle screw and not care about the crackling noises.
May 5, 2013 6:54:47 PM
Unscrew everything and start from scratch. Go slowly - you will feel if there is resistance to m/b.
As you sit now, a solid 500w psu will be fine. The 7970 will pull ~240w and the rest of the system ~150. 500w will give you a little headroom. However, if you will be overclocking and/or adding a
second card in crossfire, you'll need to step up to ~750w. I personally like to have 20% headroom - that may be overkill, but I would rather spend an extra $50 to help insure that my $200 m/b and
$300 cpu and $800 gpu's didn't go up in smoke.
Look at seasonic, corsair and antec. Make sure they are 80+ certified. The best certification is platinum, then gold, silver and bronze.
markwp said:
Unscrew everything and start from scratch. Go slowly - you will feel if there is resistance to m/b.
As you sit now, a solid 500w psu will be fine. The 7970 will pull ~240w and the rest of the system ~150. 500w will give you a little headroom. However, if you will be overclocking and/or adding a
second card in crossfire, you'll need to step up to ~750w. I personally like to have 20% headroom - that may be overkill, but I would rather spend an extra $50 to help insure that my $200 m/b and
$300 cpu and $800 gpu's didn't go up in smoke.
Look at seasonic, corsair and antec. Make sure they are 80+ certified. The best certification is platinum, then gold, silver and bronze.
THANK YOU FOR EVERYTHING, IS THERE SOME WAY WE CAN STAY IN CONTACT!?!!
May 6, 2013 4:57:21 AM
Math Help
September 17th 2008, 06:09 AM #1
Prove that:
a) In a group (ab)^-1=b^-1 * a^-1
b) A group G is Abelian iff (ab)^-1=a^-1 * b^-1
c) In a group (a^-1)^-1=a for all a
Multiply it on the right by $(ab)$.
Since it is a group, * is associative.
So we can write:
$(b^{-1}*a^{-1})*(ab)=b^{-1}*(a^{-1}*a)*b=b^{-1}*b=1$
So we have $(b^{-1}*a^{-1})*(ab)=1$, which means that $b^{-1}*a^{-1}$ is the inverse of $ab$.
b) A group G is Abelian iff (ab)^-1=a^-1 * b^-1
Abelian means that * is commutative.
If it is Abelian, $(ab)^{-1}=(ba)^{-1}=a^{-1}*b^{-1}$ by the previous relation.
If $(ab)^{-1}=a^{-1}*b^{-1}$, then since $(ab)^{-1}=b^{-1}*a^{-1}$, we have $b^{-1}*a^{-1}=a^{-1}*b^{-1}$
Thus it is commutative --> Abelian
The equivalence is proved.
c) In a group (a^-1)^-1=a for all a
$(a^{-1})^{-1}=a \Longleftrightarrow \underbrace{(a^{-1})^{-1}*(a^{-1})}_{1}=\underbrace{a*(a^{-1})}_{1}$
Sorry for the wording & the method... just trying myself to explain
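As a quick sanity check (added for illustration, not part of the proof), all three identities can be verified exhaustively in the symmetric group S_3, which is small and non-abelian:

```python
from itertools import permutations

# Permutations of {0, 1, 2} as tuples; composition as functions: (p*q)(i) = p(q(i))
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
for a in S3:
    assert inverse(inverse(a)) == a                                       # part (c)
    for b in S3:
        assert inverse(compose(a, b)) == compose(inverse(b), inverse(a))  # part (a)

# S_3 is non-abelian, so by part (b) the rule (ab)^-1 = a^-1 * b^-1 must fail somewhere:
assert any(inverse(compose(a, b)) != compose(inverse(a), inverse(b))
           for a in S3 for b in S3)
print("all identities verified in S_3")
```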
Thank you so much!!! This helps me a lot!!
I like your animated cow by the way!!!
The Encyclopedic Entry of Alan Turing
Alan Mathison Turing, OBE, FRS (23 June 1912 – 7 June 1954) was an English mathematician, logician and cryptographer.
Turing is often considered to be the father of modern computer science. He provided an influential formalisation of the concept of the algorithm and computation with the Turing machine. With the
Turing test, meanwhile, he made a significant and characteristically provocative contribution to the debate regarding artificial intelligence: whether it will ever be possible to say that a machine
is conscious and can think. He later worked at the National Physical Laboratory, creating one of the first designs for a stored-program computer, the ACE, although it was never actually built in its
full form. In 1948, he moved to the University of Manchester to work on the Manchester Mark I, then emerging as one of the world's earliest true computers.
During the Second World War Turing worked at Bletchley Park, the UK's codebreaking centre, and was for a time head of Hut 8, the section responsible for German naval cryptanalysis. He devised a
number of techniques for breaking German ciphers, including the method of the bombe, an electromechanical machine that could find settings for the Enigma machine.
Childhood and youth
Turing was conceived in Chatrapur, India. His father, Julius Mathison Turing, was a member of the Indian Civil Service. Julius and his wife Sara (née Stoney; 1881 – 1976, daughter of Edward Waller Stoney, chief engineer of the Madras Railways) wanted Alan to be brought up in England, so they returned to Maida Vale, London, where Alan Turing was born on 23 June 1912, as recorded by a blue plaque on the outside of the building, now the Colonnade Hotel. He had an elder brother, John. His father's civil service commission was still active, and during Turing's childhood years his parents travelled between Hastings, England and India, leaving their two sons to stay with friends in England. Very early in life, Turing showed signs of the genius he was to display more prominently later.
His parents enrolled him at St Michael's, a day school, at the age of six. The headmistress recognised his genius early on, as did many of his subsequent educators. In 1926, at the age of 14, he went
on to Sherborne School, a famous and expensive public school in Dorset. His first day of term coincided with the General Strike in England, but so determined was he to attend his first day that he rode his bicycle unaccompanied more than 60 miles (97 km) from Southampton to school, stopping overnight at an inn.
Turing's natural inclination toward mathematics and science did not earn him respect with some of the teachers at Sherborne, whose definition of education placed more emphasis on the classics. His
headmaster wrote to his parents: "I hope he will not fall between two schools. If he is to stay at public school, he must aim at becoming educated. If he is to be solely a Scientific Specialist, he
is wasting his time at a public school".
Despite this, Turing continued to show remarkable ability in the studies he loved, solving advanced problems in 1927 without having even studied elementary calculus. In 1928, aged 16, Turing
encountered Albert Einstein's work; not only did he grasp it, but he extrapolated Einstein's questioning of Newton's laws of motion from a text in which this was never made explicit.
Turing's hopes and ambitions at school were raised by the close friendship he developed with a slightly older fellow student, Christopher Morcom, who was Turing's first love interest. Morcom died
suddenly only a few weeks into their last term at Sherborne, from complications of bovine tuberculosis, contracted after drinking infected cow's milk as a boy. Turing's religious faith was shattered
and he became an atheist. He adopted the conviction that all phenomena, including the workings of the human brain, must be materialistic.
University and his work on computability
Turing's unwillingness to work as hard on his classical studies as on science and mathematics meant he failed to win a scholarship to Trinity College, Cambridge, and went on to the college of his second choice, King's College, Cambridge. He was an undergraduate there from 1931 to 1934, graduating with a distinguished degree, and in 1935 he was elected a fellow at King's on the strength of a dissertation on the central limit theorem.
In his momentous paper "On Computable Numbers, with an Application to the Entscheidungsproblem" (submitted on 28 May 1936), Turing reformulated Kurt Gödel's 1931 results on the limits of proof and
computation, replacing Gödel's universal arithmetic-based formal language with what are now called Turing machines: formal, simple devices. He proved that such a machine would be capable of solving any conceivable mathematical problem representable as an algorithm, even though no actual Turing machine would be likely to have practical applications, being much slower than practically realisable alternatives.
Turing machines are to this day the central object of study in theory of computation. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting
problem for Turing machines is undecidable: it is not possible to decide, in general, algorithmically whether a given Turing machine will ever halt. While his proof was published subsequent to Alonzo
Church's equivalent proof in respect to his lambda calculus, Turing's work is considerably more accessible and intuitive. It was also novel in its notion of a 'Universal (Turing) Machine', the idea
that such a machine could perform the tasks of any other machine. The paper also introduces the notion of definable numbers.
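The kind of machine Turing described can be sketched in a few lines of modern code. The simulator and the sample machine below are illustrative only (a bit-flipping machine, not one from the 1936 paper); the step bound is a practical concession to the undecidability of halting discussed above:

```python
# A minimal Turing-machine simulator: a finite table of transitions acting
# on an unbounded tape, in the spirit of Turing's 1936 formalisation.
def run(transitions, tape, state="q0", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape)).strip(blank)
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    # We cannot decide in general whether a machine halts (the halting problem),
    # so a simulator must simply give up after some number of steps.
    raise RuntimeError("no halt within step bound")

# Example machine: flip every bit of the input, then halt at the first blank.
flip = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
print(run(flip, "1011"))   # 0100
```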
Most of 1937 and 1938 he spent at Princeton University, studying under Alonzo Church. In 1938 he obtained his Ph.D. from Princeton; his dissertation introduced the notion of relative computing, in which
Turing machines are augmented with so-called oracles, allowing the study of problems that cannot be solved by a Turing machine.
Back in Cambridge in 1939, he attended lectures by Ludwig Wittgenstein about the foundations of mathematics. The two argued and disagreed, with Turing defending formalism and Wittgenstein arguing
that mathematics does not discover any absolute truths but rather invents them.
During the Second World War, Turing was a main participant in the efforts at Bletchley Park to break German ciphers. Building on cryptanalysis work carried out in Poland by Marian Rejewski, Jerzy
Różycki and Henryk Zygalski from Cipher Bureau before the war, he contributed several insights into breaking both the Enigma machine and the Lorenz SZ 40/42 (a Teletype cipher attachment codenamed
"Tunny" by the British), and was, for a time, head of Hut 8, the section responsible for reading German naval signals.
Since September 1938, Turing had been working part-time for the Government Code and Cypher School (GCCS), the British code breaking organisation. He worked on the problem of the German Enigma
machine, and collaborated with Dilly Knox, a senior GCCS codebreaker. On 4 September 1939, the day after the UK declared war on Germany, Turing reported to Bletchley Park, the wartime station of GCCS.
The Turing-Welchman bombe
Within weeks of arriving at Bletchley Park, Turing had designed an electromechanical machine which could help break Enigma faster than the 1932 Polish bomba: the bombe, named after and building upon the
original Polish design. The bombe, with an enhancement suggested by mathematician Gordon Welchman, became one of the primary tools, and the major automated one, used to attack
Enigma-protected message traffic.
Professor Jack Good, a cryptanalyst working at the time with Turing at Bletchley Park, later said: "Turing's most important contribution, I think, was of part of the design of the bombe, the
cryptanalytic machine. He had the idea that you could use, in effect, a theorem in logic which sounds to the untrained ear rather absurd; namely that from a contradiction, you can deduce everything."
The bombe searched for possibly correct settings used for an Enigma message (i.e., rotor order, rotor settings, etc.), and used a suitable "crib": a fragment of probable plaintext. For each possible
setting of the rotors (which had of the order of 10^19 states, or 10^22 for the U-boat Enigmas which eventually had four rotors, compared to the usual Enigma variant's three), the bombe performed a
chain of logical deductions based on the crib, implemented electrically. The bombe detected when a contradiction had occurred, and ruled out that setting, moving onto the next. Most of the possible
settings would cause contradictions and be discarded, leaving only a few to be investigated in detail. Turing's bombe was first installed on 18 March 1940. Over two hundred bombes were in operation
by the end of the war.
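The elimination loop described here can be sketched in miniature. The toy below is not the bombe's actual method — the real machine tested rotor settings electrically, and also exploited the fact that Enigma never enciphered a letter to itself — but it shows the same discard-on-contradiction idea, with a Caesar shift standing in for Enigma's enormously larger setting space:

```python
# Toy version of the bombe's discard-on-contradiction loop. A Caesar shift
# stands in for the Enigma rotor settings; all names here are illustrative.
def encipher(text, key):
    """Shift each letter A-Z forward by `key` places (a stand-in cipher)."""
    return "".join(chr((ord(c) - 65 + key) % 26 + 65) for c in text)

ciphertext = encipher("WEATHERREPORTFOLLOWS", 7)   # intercepted message
crib = "WEATHER"                                   # probable plaintext

# Try every "setting"; a setting survives only if it produces no
# contradiction with the crib.
surviving = [k for k in range(26)
             if encipher(crib, k) == ciphertext[:len(crib)]]
print(surviving)   # -> [7]: every other setting contradicted the crib
```

Only the setting consistent with the crib survives; the bombe likewise discarded contradictory rotor settings automatically, leaving a handful of candidates to be tested by hand.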
Hut 8 and Naval Enigma
In December 1940, Turing solved the naval Enigma indicator system, which was more mathematically complex than the indicator systems used by the other services. Turing also invented a
statistical technique termed "Banburismus" to assist in breaking Naval Enigma. Banburismus could rule out certain orders of the Enigma rotors, reducing the time needed to test settings on the bombes.
In the spring of 1941, Turing proposed marriage to Hut 8 co-worker Joan Clarke, although the engagement was broken off by mutual agreement in the summer.
In July 1942, Turing devised a technique termed Turingismus or Turingery for use against the Lorenz cipher used in the Germans' new Geheimschreiber machine ("secret writer") which was one of those
codenamed "Fish". He also introduced the Fish team to Tommy Flowers who under the guidance of Max Newman, went on to build the Colossus computer, the world's first programmable digital electronic
computer, which replaced simpler prior machines (including the "Heath Robinson") and whose superior speed allowed the brute-force decryption techniques to be applied usefully to the daily-changing
cyphers. A frequent misconception is that Turing was a key figure in the design of Colossus; this was not the case.
Turing travelled to the United States in November 1942 and worked with U.S. Navy cryptanalysts on Naval Enigma and bombe construction in Washington, and assisted at Bell Labs with the development of
secure speech devices. He returned to Bletchley Park in March 1943. During his absence, Hugh Alexander had officially assumed the position of head of Hut 8, although Alexander had been de facto head
for some time — Turing having little interest in the day-to-day running of the section. Turing became a general consultant for cryptanalysis at Bletchley Park.
In the latter part of the war, while teaching himself electronics at the same time, and assisted by engineer Donald Bayley, Turing undertook the design of a portable machine codenamed Delilah to
allow secure voice communications. It was intended for different applications, lacking capability for use with long-distance radio transmissions, and in any case, Delilah was completed too late to be
used during the war. Though Turing demonstrated it to officials by encrypting/decrypting a recording of a Winston Churchill speech, Delilah was not adopted for use.
In 1945, Turing was awarded the OBE for his wartime services, but his work remained secret for many years. A biography published by the Royal Society shortly after his death recorded:
"Three remarkable papers written just before the war, on three diverse mathematical subjects, show the quality of the work that might have been produced if he had settled down to work on some big
problem at that critical time. For his work at the Foreign Office he was awarded the OBE."
Early computers and the Turing Test
From 1945 to 1947 he was at the National Physical Laboratory, where he worked on the design of the ACE (Automatic Computing Engine). He presented a paper on 19 February 1946 which was the first
detailed design of a stored-program computer. Although ACE was a feasible design, the secrecy surrounding the wartime work at Bletchley Park led to delays in starting the project, and he became
disillusioned. In late 1947 he returned to Cambridge for a sabbatical year. While he was at Cambridge, the Pilot ACE was built in his absence. It executed its first program on 10 May 1950.
In 1948 he was appointed Reader in the Mathematics Department at Manchester and in 1949 became deputy director of the computing laboratory at the University of Manchester, and worked on software for
one of the earliest true computers — the Manchester Mark I. During this time he continued to do more abstract work, and in "Computing machinery and intelligence" (Mind, October 1950), Turing
addressed the problem of artificial intelligence, and proposed an experiment now known as the Turing test, an attempt to define a standard for a machine to be called "intelligent". The idea was that
a computer could be said to "think" if it could fool an interrogator into thinking that the conversation was with a human.
In 1948, Turing, working with his former undergraduate colleague, D.G. Champernowne, began writing a chess program for a computer that did not yet exist. In 1952, lacking a computer powerful enough
to execute the program, Turing played a game in which he simulated the computer, taking about half an hour per move. The game was recorded; the program lost to Turing's colleague Alick Glennie,
although it is said that it won a game against Champernowne's wife.
Pattern formation and mathematical biology
Turing worked from 1952 until his death in 1954 on mathematical biology, specifically morphogenesis. He published one paper on the subject, "The Chemical Basis of Morphogenesis" (1952), putting
forth the Turing hypothesis of pattern formation. His central interest in the field was Fibonacci phyllotaxis, the existence of Fibonacci numbers in plant structures. He used reaction-diffusion
equations, which are now central to the field of pattern formation. Later papers went unpublished until 1992, when Collected Works of A.M. Turing was published.
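The mechanism of "The Chemical Basis of Morphogenesis" can be sketched numerically. The fragment below is an illustrative one-dimensional two-chemical system with Gray-Scott-style kinetics — a later model in the same reaction-diffusion family, not Turing's own equations — and the parameter values are arbitrary demo choices. It shows the core idea: two substances diffusing at different rates can destabilise a uniform state into spatial structure.

```python
import numpy as np

# Illustrative 1-D reaction-diffusion system (Gray-Scott-style kinetics;
# parameters are common demo values, not from Turing's paper).
def laplacian(x):
    # discrete second derivative on a periodic 1-D grid (dx = 1)
    return np.roll(x, 1) + np.roll(x, -1) - 2 * x

n, Du, Dv, F, k, dt = 200, 0.16, 0.08, 0.035, 0.060, 1.0
u = np.ones(n)                       # "substrate", initially uniform
v = np.zeros(n)                      # "activator", initially absent
v[n // 2 - 5 : n // 2 + 5] = 0.25    # small local disturbance

for _ in range(2000):
    uvv = u * v * v
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)

print("v spread after 2000 steps:", round(float(v.std()), 4))
```

The faster-diffusing substrate and slower-diffusing activator interact so that the initial disturbance does not simply smooth away, which is the qualitative point of Turing's instability argument.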
Prosecution and death
Homosexual acts were illegal in the United Kingdom at the time, and homosexuality was regarded as a mental illness. In 1952, Arnold Murray, a 19-year-old recent acquaintance of Turing's, helped an
accomplice break into Turing's house, and Turing reported the crime to the police. During the police investigation Turing acknowledged a sexual relationship with Murray, and both men were charged
with gross indecency under Section 11 of the Criminal Law Amendment Act 1885. Turing was unrepentant and was convicted of the same crime Oscar Wilde had been convicted of more than fifty years
before.
Turing was given a choice between imprisonment and probation, conditional on his undergoing hormonal treatment designed to reduce libido. He accepted the estrogen hormone injections, which lasted for
a year, to avoid jail. Side effects included gynecomastia (breast enlargement). His conviction led to a removal of his security clearance and prevented him from continuing consultancy for GCHQ on
cryptographic matters. At the time, there was acute public anxiety about spies and homosexual entrapment by Soviet agents, heightened by the recent exposure of the first two members of the
Cambridge Five, Guy Burgess and Donald Maclean, as KGB double agents. (Turing was never accused of espionage.)
On 8 June 1954, his cleaner found him dead; the previous day, he had died of cyanide poisoning, apparently from a cyanide-laced apple he left half-eaten beside his bed. The apple itself was never
tested for contamination with cyanide, but a post-mortem established that the cause of death was cyanide poisoning. Most believe that his death was intentional, and the death was ruled a suicide. His
mother, however, strenuously argued that the ingestion was accidental due to his careless storage of laboratory chemicals. Biographer Andrew Hodges suggests that Turing may have killed himself in
this ambiguous way quite deliberately, to give his mother some plausible deniability. Others suggest that Turing was re-enacting a scene from 'Snow White', his favourite fairy tale. Because Turing's
homosexuality would have been perceived as a security risk, the possibility of assassination has also been suggested. His remains were cremated at Woking crematorium on 12 June 1954.
Posthumous recognition
Since 1966, the Turing Award has been given annually by the Association for Computing Machinery to a person for technical contributions to the computing community. It is widely considered to be
the computing world's equivalent of the Nobel Prize.
Various tributes to Turing have been made in Manchester, the city where he worked towards the end of his life. In 1994 a stretch of the A6010 road (the Manchester city intermediate ring road) was
named Alan Turing Way. A bridge carrying this road was widened, and carries the name 'Alan Turing Bridge'.
A statue of Turing was unveiled in Manchester on 23 June 2001. It is in Sackville Park, between the University of Manchester building on Whitworth Street and the Canal Street 'gay village'. A
celebration of Turing's life and achievements arranged by the British Logic Colloquium and the British Society for the History of Mathematics was held on 5 June 2004 at the University of Manchester;
the Alan Turing Institute was initiated in the university that summer. The building housing the School of Mathematics, the Photon Sciences Institute and the Jodrell Bank Centre for Astrophysics is
named the Alan Turing Building and was opened in July 2007.
On 23 June 1998, on what would have been Turing's 86th birthday, Andrew Hodges, his biographer, unveiled an official English Heritage Blue Plaque on his childhood home in Warrington Crescent, London,
now the Colonnade hotel. To mark the 50th anniversary of his death, a memorial plaque was unveiled on 7 June 2004 at his former residence, Hollymeade, in Wilmslow.
For his achievements in computing, various universities have honoured him. On 28 October 2004 a bronze statue of Alan Turing sculpted by John W Mills was unveiled at the University of Surrey in
Guildford. The statue marks the 50th anniversary of Turing's death. It portrays him carrying his books across the campus. Turing Road in the University's Research Park predates this.
The Polytechnic University of Puerto Rico and Los Andes University in Bogotá, Colombia, both have computer laboratories named after Turing. The University of Texas at Austin has an honours computer
science programme named the Turing Scholars. Istanbul Bilgi University organizes an annual conference on the theory of computation called Turing Days. The computer room in King's College, Cambridge
is named the "Turing Room" after him. Carnegie Mellon University has a granite bench, situated in The Hornbostel Mall, with the name "A. M. Turing" carved across the top, "Read" down the left leg,
and "Write" down the other. The Boston GLBT pride organization named Turing their 2006 Honorary Grand Marshal.
On 13 March 2000, St Vincent & The Grenadines issued a set of stamps to celebrate the greatest achievements of the twentieth century, one of which carries a recognisable portrait of Turing against a
background of repeated 0s and 1s, and is captioned '1937: Alan Turing's theory of digital computing'.
A 1.5-ton, life-size statue of Turing was unveiled on 19 June 2007 at Bletchley Park. Built from approximately half a million pieces of Welsh slate, it was sculpted by Stephen Kettle, having been
commissioned by the late American billionaire Sidney Frank.
The Turing Relay is a six-stage relay race on riverside footpaths from Ely to Cambridge and back. These paths were used for running by Turing while at Cambridge; his marathon best time was 2 hours,
46 minutes.
Experimental music duo Matmos, whose members are a homosexual couple, released a limited edition EP in 2006 entitled For Alan Turing.
Descent of Morphisms of Sheaves
While reading Brylinski I am trying to understand the descent of morphisms of sheaves.
In trying to form a new definition of a presheaf $A$ over a space $X$, we associate to each surjective local homeomorphism $f:Y \to X$ a set, denoted $A(Y\xrightarrow{f}X)$. The "restriction"
condition of a presheaf amounts to: given a surjective local homeomorphism $g:Z \to Y$, we have a pullback map $g^{-1}:A(Y\xrightarrow{f}X) \to A(Z \xrightarrow{fg}X)$. The transitivity property for
these "restriction" (pullback) maps is that, given any diagram $$W \xrightarrow{h} Z \xrightarrow{g} Y \xrightarrow{f} X,$$ we have $(gh)^{-1} = h^{-1} \circ g^{-1}$ as pullbacks
$A(Y\xrightarrow{f}X) \to A(W \xrightarrow{fgh} X)$.
If $A$ is already a presheaf, in the good 'ol fashioned sense, then we can define our assignment $A(Y\xrightarrow{f}X)$ to be the global sections of $Y$ given by the inverse image of $A$ on $X$, i.e.
$\Gamma(Y, f^{-1}A)$
I have 2 questions:
1. Is it true that if $A$ is already a sheaf in the good 'ol fashioned sense, then the above property (transitivity of the "restriction") is satisfied? My proof feels trivial, hence my worry. Also,
I am uneasy since Brylisnki doesn't state this fact but instead says it "should" be true.
2. He later comments that as functors from the category of sheaves on $Y$ to the category of sheaves on $W$ , $h^{-1}\circ g^{-1}$ and $(gh)^{-1}$ are NOT equal; but there is a natural
transformation. Why are these two functors not equal? It seems like they send the same sheaves to the same places, unless of course I am making identifications of categories that I don't realize?
descent sheaf-theory ct.category-theory higher-category-theory
First; are you aware of the notions of pseudo-functors and stacks? – Martin Brandenburg May 9 '12 at 21:21
No, to be honest, I am really trying to do this all without talking about stacks (or schemes, etc). The goal is to define these descent properties in a "hands-on" way so that I can understand the
definition of a "gerbe" in a "hands-on" way. – cheyne May 9 '12 at 22:38
2 If you want anyone to attempt to answer this question, I suggest you say what this "new" definition of a sheaf is. – David Carchedi May 9 '12 at 23:42
@David Above is the "new" definition for a presheaf, which is all I am concerned about at the moment. I will be more clear now. @Martin: Turns out the book is using all of this exposition to
define a stack in this context, hence why I am unfamiliar with it up to this point. – cheyne May 10 '12 at 12:53
@Cheyne: Your notation is bad. The functor from $\textbf{Sh}(W)$ to $\textbf{Sh}(Z)$ should be denoted $h_*$, etc. The notation $h^{-1}$ (or $h^*$) is reserved for the one going in the opposite
direction. And it is true that this "inverse image" functor does not compose strictly: $h^{-1} g^{-1} \ne (g \circ h)^{-1}$. There is, however, a natural isomorphism. – Zhen Lin May 10 '12 at
1 Answer
Yes, the property is satisfied (note that $\Gamma(Y,f^{-1}A)=A(f(Y))$, since $f$ is open). I didn't read the book, but I imagine that the point is not to give a strange definition of a
sheaf on a topological space, but rather to motivate the generalisation to situations where you know what maps you wish to consider to be local homeomorphisms, but they are not actually
local homeomorphisms for any (classical) topology. The classical example is the etale (Grothendieck) topology on schemes. The book "Sheaves in Geometry and Logic" has a good exposition
of these ideas.
OK thanks. Now, I am thinking about my second question. Is it a bad idea to call yours an "answer" before the second part is answered? Then people won't offer answers to the second
question? I'm new to this forum. And agree with you that the idea is to motivate more general concepts. – cheyne May 10 '12 at 16:28
I'm not sure about the etiquette, I'm also not a very active participant... As for the second part, this is the same as saying that $(A\times B)\times C$ is not equal to
$A\times(B\times C)$. This is formally true, but mostly irrelevant, since there is a canonical isomorphism. The same happens with pullbacks of sheaves, but in the abstract framework you have to
be given this isomorphism, and I guess the book is trying to motivate this. In my personal opinion, this point is often stressed way beyond its importance... – Moshe May 10 '12 at
thank you for confirming all of my beliefs/suspicions!! – cheyne May 10 '12 at 20:07
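Spelled out, the product analogy runs as follows: $(A\times B)\times C$ and $A\times(B\times C)$ are genuinely different sets — their elements are $((a,b),c)$ and $(a,(b,c))$ respectively — yet there is a canonical isomorphism
$$\alpha\colon (A\times B)\times C \xrightarrow{\ \sim\ } A\times(B\times C), \qquad \alpha\bigl((a,b),c\bigr) = \bigl(a,(b,c)\bigr).$$
In the same way, $h^{-1}(g^{-1}A)$ and $(g\circ h)^{-1}A$ arise from different universal-property constructions, so they are not literally equal as sheaves, but those universal properties supply a canonical natural isomorphism $h^{-1}\circ g^{-1}\cong (g\circ h)^{-1}$. Keeping track of these isomorphisms (and their coherence) is what makes the inverse-image assignment a pseudo-functor rather than a strict functor.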
Atwood machine
Hi physics_geek,
1. The problem statement, all variables and given/known data
In the Atwood machine shown below, m1 = 2.00 kg and m2 = 7.50 kg. The masses of the pulley and string are negligible by comparison. The pulley turns without friction and the string does not stretch.
The lighter object is released with a sharp push that sets it into motion at vi = 2.80 m/s downward.
(a) How far will m1 descend below its initial level?
(b) Find the velocity of m1 after 1.80 s.
2. Relevant equations
f= ma
vf^2 = v^2 + 2ad
3. The attempt at a solution
I think for part (a) you use the equation above, but I don't know how to figure out the acceleration; I think it's split between the two objects.
To find the acceleration, start by drawing force diagrams for each of the objects. Using [itex]\sum F = m a[/itex] for each of the diagrams then gives two equations with two unknowns.
(You can also use an energy approach here.)
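Following that advice — the two force equations T − m1·g = m1·a and m2·g − T = m2·a, added to eliminate the tension T — the numbers work out as below. This is a quick sketch assuming g = 9.8 m/s² and taking downward as positive for m1:

```python
g = 9.8                      # m/s^2 (assumed value)
m1, m2 = 2.00, 7.50          # kg
vi = 2.80                    # m/s, m1's initial downward speed

# Adding T - m1*g = m1*a and m2*g - T = m2*a gives the common acceleration:
a = (m2 - m1) * g / (m1 + m2)        # magnitude; directed upward for m1

# (a) m1 descends until its downward speed is zero: 0 = vi^2 - 2*a*d
d = vi**2 / (2 * a)
print(f"descent: {d:.3f} m")         # descent: 0.691 m

# (b) velocity of m1 after t = 1.80 s (downward positive for m1)
t = 1.80
v = vi - a * t
print(f"velocity: {v:.2f} m/s")      # velocity: -7.41 m/s, i.e. 7.41 m/s upward
```

The lighter mass first decelerates, stops about 0.69 m below its starting point, and is moving upward at about 7.4 m/s after 1.80 s.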
Reviews of books, websites, poster sets, movies, and other resources for learning and teaching the history of mathematics.
A collection of articles on mathematics in Europe from the twelfth to the fifteenth century.
A collection of 24 mini-posters, each containing a quotation about mathematics.
A biography of the 18th century author of an early calculus text, with some translations from the text.
Essays on various aspects of Greek science and mathematics, which help give a context for those aspects of Greek culture.
A math history class visits the 'Beautiful Science' exhibit at the Huntington Library in Southern California.
A CD with eleven modules, each containing numerous activities designed to help secondary teachers use the history of mathematics to teach mathematics.
Our reviewer praises the selection of excerpts, the use of facsimiles rather than transcriptions, and the commentary and English translation in this collection.
How does geometry begin? This work explores the origins of geometry in the work of artisans.
This resource consists of a series of 61 worksheets, each focused on a particular problem and related to a particular historical mathematical personality.
A history of algebra from its early beginnings to the twentieth century.
All angles in the figure below are right angles. What is the area of the figure?
While I agree with the concerns of many parents that over-testing is damaging to children and subverts the purpose of education, I don't believe that the Common Core has set the bar too high, at
least in math, my area of expertise. I know from direct experience with children that we can expect far more thinking of them than is commonly held. That is the Core Belief of my blog. The problem is
that teachers have not received the necessary preparation and the testing has been rushed and lacking in quality control. We're trying to set the bar higher for children without raising the bar…
Clayton, PA Math Tutor
Find a Clayton, PA Math Tutor
My knowledge of economics and mathematics stems from my master's degree in economics from Lehigh University. I specialize in micro- and macroeconomics, from an introductory level up to an
advanced level. I have master's degree work in labor economics, financial analysis and game theory.
19 Subjects: including calculus, Microsoft Excel, precalculus, statistics
...We reviewed the essay, math, and language arts. He was also a child with autism. My experience with this test gives me the knowledge to assist other students.
45 Subjects: including logic, probability, prealgebra, geometry
I am an experienced tutor. I have helped students with math, study skills, writing, language arts, and more. I teach a GED prep class so I am experienced with all subjects.
19 Subjects: including algebra 1, ACT Math, SAT math, geometry
...My name is Nick and I graduated as a math major at the end of the spring of 2009 from a local college. I tutored students while I was at the college so I have experience in tutoring. I
graduated at the top of my class and had the highest honors possible so I know the subject material well.
15 Subjects: including prealgebra, calculus, ACT Math, algebra 1
...I promote using some imagination when looking at these topics, especially in physics. When someone can understand how a concept is working then they can apply it to solve a whole range of
problems and most memorization will be unnecessary. This approach will help aid students to achieve a higher understanding of these subjects and it will promote critical thinking.
16 Subjects: including precalculus, algebra 1, algebra 2, calculus
Related Clayton, PA Tutors
Clayton, PA Accounting Tutors
Clayton, PA ACT Tutors
Clayton, PA Algebra Tutors
Clayton, PA Algebra 2 Tutors
Clayton, PA Calculus Tutors
Clayton, PA Geometry Tutors
Clayton, PA Math Tutors
Clayton, PA Prealgebra Tutors
Clayton, PA Precalculus Tutors
Clayton, PA SAT Tutors
Clayton, PA SAT Math Tutors
Clayton, PA Science Tutors
Clayton, PA Statistics Tutors
Clayton, PA Trigonometry Tutors
Nearby Cities With Math Tutor
Bally Math Tutors
Barto Math Tutors
Congo, PA Math Tutors
Eshbach, PA Math Tutors
Fredericksville, PA Math Tutors
Hancock, PA Math Tutors
Klines Corner, PA Math Tutors
Lobachsville, PA Math Tutors
Lower Longswamp, PA Math Tutors
Manatawny, PA Math Tutors
Oreville, PA Math Tutors
Pikeville, PA Math Tutors
Schofer, PA Math Tutors
Schultzville, PA Math Tutors
Shamrock Station, PA Math Tutors
West Collingswood, NJ Calculus Tutor
Find a West Collingswood, NJ Calculus Tutor
...I thought back on my first teaching experience back when I was in college. I took part in this program where students from my university taught a group of public school children how to make a
model rocket and how it worked. I remember I got the chance to instruct a small group of children on the names of all the parts of the rocket and a basic explanation of how they functioned.
16 Subjects: including calculus, Spanish, physics, algebra 1
...If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don’t despair! Learning new disciplines keeps me very aware of
the struggles all students face.
14 Subjects: including calculus, physics, geometry, ASVAB
...It's no wonder that my English skills exceed those of most of today's English teachers. Unlike most coaches, who specialize in only one section of the SAT, I have long experience and expertise
in all three parts of the test. The reading section of the SAT, like the math section, is much more challenging than it used to be.
23 Subjects: including calculus, English, geometry, statistics
...I have a B.A. in scientific illustration. I have been drawing all my life and regard drawing skills as one of the most basic and fundamental skills in my "toolbox." Over the years and through
my degree program, I have improved my drawing and experienced many critiques. Drawing is a skill developed from observation and understanding of form, line, shape, and volume.
19 Subjects: including calculus, geometry, algebra 2, trigonometry
...I have prepared high school students for the AP Calculus exams (both AB and BC), undergraduate students for the math portion of the GRE, and have helped many other students with math skills
ranging from basic arithmetic all the way up to Calculus 3 and basic linear algebra. In my free time, I en...
22 Subjects: including calculus, geometry, GRE, ASVAB
Related West Collingswood, NJ Tutors
West Collingswood, NJ Accounting Tutors
West Collingswood, NJ ACT Tutors
West Collingswood, NJ Algebra Tutors
West Collingswood, NJ Algebra 2 Tutors
West Collingswood, NJ Calculus Tutors
West Collingswood, NJ Geometry Tutors
West Collingswood, NJ Math Tutors
West Collingswood, NJ Prealgebra Tutors
West Collingswood, NJ Precalculus Tutors
West Collingswood, NJ SAT Tutors
West Collingswood, NJ SAT Math Tutors
West Collingswood, NJ Science Tutors
West Collingswood, NJ Statistics Tutors
West Collingswood, NJ Trigonometry Tutors
Nearby Cities With calculus Tutor
Ashland, NJ calculus Tutors
Audubon, NJ calculus Tutors
Center City, PA calculus Tutors
East Camden, NJ calculus Tutors
East Haddonfield, NJ calculus Tutors
Echelon, NJ calculus Tutors
Erlton, NJ calculus Tutors
Middle City East, PA calculus Tutors
Middle City West, PA calculus Tutors
Oaklyn calculus Tutors
South Camden, NJ calculus Tutors
West Collingswood Heights, NJ calculus Tutors
Westmont, NJ calculus Tutors
Westville Grove, NJ calculus Tutors
Woodlynne, NJ calculus Tutors
Mathematics and Statistics
Department of Mathematics and Statistics
M. Knee, Bibliographer
I. General Purpose
The University Libraries' collection is intended to support teaching and research to the Ph.D. level, as well as individual faculty research projects. The department offers B.A., B.S., combined B.A./
M.A., B.S./M.A., B.S./M.B.A., M.A., and Ph.D. degrees. The department also offers a B.S. in actuarial science. Undergraduates are offered four options for specialization; they are: liberal arts,
graduate school preparation, applied mathematics, and statistics. An honors program is available in each of these areas. There is also an interdisciplinary major with computer science. The Department
of Mathematics and Statistics' areas of specialization are algebra, algebraic geometry, analysis, combinatorics, complex analysis, computational mathematics, dynamical systems, function theory,
geometry, graph theory, group theory, information theory, number theory, ordinary differential equations, operator theory, probability, statistics, and topology.
II. Subject and Language Modifiers
Languages: Materials are acquired primarily in English. Materials in French, German, Russian, Spanish, and Italian are acquired very selectively. Items in other languages are rarely acquired.
Geographical Areas: This is not a consideration for mathematics and statistics.
Chronological Periods: Emphasis is placed on acquiring recently published materials. However, reprints and older materials that have been digitized and made available on the Internet are also acquired.
III. Description of Materials Collected
Types of Materials Collected: The primary materials collected are monographs (including graduate level texts), serials, periodicals, abstracts, indexes, directories, handbooks, dictionaries,
encyclopedias, conference proceedings, tutorials, videotapes and videodiscs, electronic media, electronic databases, and Internet resources.
Types of Materials Excluded: Dissertations, documents, manuscripts, and reports are acquired only in response to specific requests. Undergraduate texts are only acquired as supplementary reserve materials.
Interdisciplinary Factors: Since there is an interdisciplinary major in computer science and applied mathematics, the collection development statement for computer science is closely related to this
statement. The Department of Biometry and Statistics in the School of Public Health is the University's center for graduate work in statistics. They offer M.S. and Ph.D. degrees.
IV. Subject and Collection Levels [Collection Level Descriptions]
Most of the materials for mathematics and statistics are classified in the Library of Congress QAs (except QA 76); some materials are classified in HA 1 - HA 20 (statistics) and T 57 (applied
mathematics). The overall collection level for mathematics and statistics is at the Advanced Instructional Support Level.
The following subjects are collected at or near the Research Level: algebra, algebraic geometry, algebraic topology, combinatorics, complex analysis, differential topology, dynamical systems,
function theory, functional analysis, geometric topology, geometry, graph theory, group theory, harmonic analysis, mathematical statistics, number theory, numerical analysis, operator theory,
ordinary differential equations, probability, and topology.
The following subjects are collected at or near the Instructional Support Level: biography, differential equations, differential geometry, game theory, history, linear and multilinear algebra,
mathematical logic, philosophy of mathematics, and puzzles and recreations.
V. Other Significant Collections and Resource Sharing
The collections of Rensselaer Polytechnic Institute (RPI) and, to a certain extent, the New York State Library (NYSL) augment the University's collections in mathematics and statistics. Since the
University Libraries have only a modest number of journal subscriptions in mathematics and statistics, we must rely on interlibrary loan (ILL) and document delivery services to obtain requested
articles. We also rely on ILL and direct borrowing to obtain books and other research materials. Locally, this includes RPI; in New York, the other SUNY University Centers; and nationally, other
research libraries.
VI. Internal Notes
The BNA approval plan provides adequate coverage for mathematics and statistics. There are, however, several presses and associations that are not covered. Materials from these publishers are
reviewed and selected by the bibliographer for mathematics and statistics. Since numerous mathematics monographs are published as parts of series, standing orders are essential. The University
Libraries maintains about 70 standing orders in mathematics and statistics. The journal is the most important type of publication for mathematics and statistics. E-prints (electronic preprints) are
emerging as important resources. E-print sources are generally free and are made available via the Internet Resources in Mathematics and Statistics Web page.
Collecting materials on the application of mathematics or statistics in a specific subject area is the responsibility of the bibliographer for that subject discipline.
October 2003
Cocoa butter and cocoa solids
Hans has a very thought-provoking article in Cocoa Content called "Why cocoa content matters". In it he shows a very insightful way to determine the amount of cocoa butter. Here's the essence of it:
Cocoa content only tells you how much of the bar’s weight is comprised of cocoa solids. Now, it’s important to understand that “cocoa solids” refers to the chocolate’s combined weight of cocoa butter
and dry cocoa particles (i.e. cocoa powder). You can find the amount of cocoa butter from the amount of fat, though. Once you have that you can determine the percentage of the rest of the solids.
Follow these steps from the nutrition label:
1. Note the serving size, since it varies.
2. Note the Total Fat. The fat is from cocoa butter.
3. Divide the Total Fat by the Serving size (Fat/Size), then multiply by 100 to get the percentage of fat
4. Subtract the percentage of fat from the cacao percentage and the difference will tell you what percentage of the bar consists of dry cocoa solids. Cocoa butter percentage + cocoa solids percentage
= Total cacao percentage.
For example, consider a bar of Lindt Excellence 70%. The Nutrition Facts show the serving size as 42g, with 17g of fat. Divide 17 by 42 and multiply the result by 100, and you’ll get 40. This means
there’s 40% cocoa butter. Subtract that number from 70 to get 30% dry cocoa solids. (40 + 30 = 70)
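The steps above can be sketched in a few lines (the variable names are mine; the assumption that all the fat is cocoa butter comes from the post):

```python
# Cocoa butter vs. dry cocoa solids from a nutrition label,
# using the Lindt Excellence 70% numbers from the post.
serving_size_g = 42.0   # serving size on the label
total_fat_g = 17.0      # total fat; the post assumes it is all cocoa butter
cacao_pct = 70.0        # cocoa content printed on the wrapper

butter_pct = total_fat_g / serving_size_g * 100   # step 3
solids_pct = cacao_pct - butter_pct               # step 4
print(round(butter_pct), round(solids_pct))       # 40 30
```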
What do you think of this?
Design Principles for Practical Self-Routing Nonblocking Switching Networks with O(N · log N) Bit-Complexity
October 1997 (vol. 46 no. 10)
pp. 1057-1069
Ted H. Szymanski, "Design Principles for Practical Self-Routing Nonblocking Switching Networks with O(N · log N) Bit-Complexity," IEEE Transactions on Computers, vol. 46, no. 10, pp.
1057-1069, October, 1997.
Abstract—Principles for designing practical self-routing nonblocking N×N circuit-switched connection networks with optimal θ(N· log N) hardware at the bit-level of complexity are described. The
overall principles behind the architecture can be described as "Expand-Route-Contract." A self-routing nonblocking network with w-bit wide datapaths can be achieved by expanding the datapaths to w +
z independent bit-serial connections, routing these connections through self-routing networks with blocking, and by contracting the data at the output and recovering the w-bit wide datapaths. For an
appropriate redundancy z, the blocking probability can be made arbitrarily small and the fault tolerance arbitrarily high. By using efficient space domain concentrators, the architecture yields
self-routing nonblocking switching networks with an optimal O(N· log N) bits of memory or O(N· log N· log log log N) logic gates. By using a linear-cost time domain concentrator, the architecture
yields self-routing nonblocking switching networks with an optimal θ(N· log N) bits of memory or logic gates. These designs meet Shannon's lower bound on memory requirements, established in the
1950s. The number of stages of crossbars can match the theoretical minimum, which has not been achieved by previous self-routing networks. The architecture is feasible with existing electrical or
optical technologies. The designs of electrical and optical switch cores with Terabits of bisection bandwidth for Networks-of-Workstations (NOWs) are described.
[1] M. Ajtai,J. Komlos,W.L. Steiger, and E. Szemeredi,"An O(n log n) sorting network," Proc. Ann. ACM Symp. Theory of Computing, pp. 1-9, 1983.
[2] ARPA/COOP/AT&T Hybrid-SEED Workshop Notes, George Mason Univ., July 1995.
[3] S. Arora, T. Leighton, and B. Maggs, "On-line Algorithms for Path Selection in a Nonblocking Network," Proc. 22nd Ann. ACM Symp. Theory of Computing, pp. 149-158, 1990.
[4] B.D. Alleyne and I. Scherson, "Expanded Delta Networks for Very Large Parallel Computers," Proc. Int'l Conf. Parallel Processing, pp. 127-131, 1992.
[5] A. Bassalygo and M.S. Pinsker, "Complexity of Optimum Nonblocking Switching Network without Reconnections," Problems of Information Transmission, vol. 9, pp. 64-66, 1974.
[6] K.E. Batcher, "Sorting Networks and Their Applications," Proc. 1968 Spring Joint Computer Conf., 1968.
[7] M.V. Chien and A.Y. Oruc, "Adaptive Binary Sorting Schemes and Associated Interconnection Networks," Proc. Int'l Conf. Parallel Processing, pp. 289-293, 1992.
[8] T.J. Cloonan, G.W. Richards, A.L. Lentine, F.B. McCormick, and J.R. Erickson, "Free-Space Photonic Switching Architectures Based on Extended Generalized Shuffles," Applied Optics, vol. 31, no.
35, pp. 7,471-7,492, Dec. 1992.
[9] T.J. Cloonan, "Comparative Study of Optical and Electronic Interconnection Technologies for Large Asynchronous Transfer Mode Packet Switching Applications," Optical Eng., vol. 33, no. 5, pp.
1,512-1,523, May 1994.
[10] G.A. De Biase, C. Ferrone, and A. Massini, "An O(logN) Depth Asymptotically Nonblocking Self Routing Permutation Network," IEEE Trans. Computers, vol. 44, no. 8, pp. 1,047-1,051, Aug. 1995.
[11] B.G. Douglass,"Rearrangeable Three-Stage Interconnection Networks and Their Routing Properties," IEEE Trans. Computers, vol. 42, no. 5, pp. 559-567, May 1993.
[12] H.S. Hinton, T.J. Cloonan, F.B. McCormick, A.L. Lentine, and F.A.P. Tooley, "Free-Space Digital Optical Systems," Proc. IEEE, vol. 82, no. 11, pp. 1,632-1,649, Nov. 1994.
[13] W. Hoeffding, "On the Distribution of the Number of Successes in Independent Trials," Annals of Math. Statistics, vol. 27, pp. 713-721, 1956.
[14] A. Huang and S. Knauer, "Starlite: A Wideband Digital Switch," Proc. Globecom, Dec. 1988.
[15] C.Y. Jan and A.Y. Oruç, “Fast Self-Routing Permutation Switching on an Asymptotically Minimum Cost Network,” IEEE Trans. Computers, vol. 42, no. 12, pp. 1,469-1,479, Dec. 1993.
[16] R. Kannan, H.F. Jordan, K.Y. Lee, and C. Reed, "A Bit-Controlled MultiChannel Time Slot Permutation Network," Proc. Second Int'l Conf. Massively Parallel Processing Using Optical Interconnects,
pp. 271-278, 1995.
[17] D.M. Koppelman and A.Y. Oruç, “A Self-Routing Permutation Network,” J. Parallel and Distributed Computing, vol. 10, no. 10, pp. 140-151, Oct. 1990.
[18] A.V. Krishnamoorthy and D.A.B. Miller, "Scaling Optoelectronic-VLSI Circuits into the 21st Century: A Technology Roadmap," IEEE J. Selected Topics in Quantum Electronics, vol. 2, no. 1, pp.
55-76, Apr. 1996.
[19] C.P. Kruskal and M. Snir, "The Performance of Multistage Interconnection Networks for Multiprocessors," IEEE Trans. Computers, vol. 32, no. 12, pp. 1,091-1,098, Dec. 1983.
[20] F.T. Leighton, Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes. San Mateo, Calif.: Morgan Kaufmann, 1992.
[21] A.L. Lentine et al., "700 Mb/s Operation of Optoelectronic Switching Nodes Comprised of Flip-Chip-Bonded GaAs/AlGaAS MQW Modulators and Detectors on Silicon CMOS Circuitry," Proc. Conf. Lasers
and Electrooptics, 1995.
[22] T. Lewis, "The Next $10,000_2$ Years," Computer, pp. 64-70, May 1996.
[23] D. Mitra and R.A. Cieslak, "Randomized Parallel Communications on an Extension of the Omega Network," J. ACM, vol. 34, pp. 802-824, 1987.
[24] Motorola, "OPTOBUS Data Sheet," Logic Integrated Circuits Division, 1995.
[25] D. Nassimi and S. Sahni, “Parallel Permutation and Sorting Algorithms and a New Generalized Connection Network,” J. ACM, vol. 29, no. 3, pp. 642-667, July 1982.
[26] J.H. Patel, "Performance of Processor-Memory Interconnections for Multiprocessors," IEEE Trans. Computers, vol. 30, no. 10, pp. 771-780, Oct. 1981.
[27] J.L. Hennessey and D.A. Patterson, Computer Architecture, A Quantatative Approach, second edition. San Francisco: Morgan-Kauffman, 1995.
[28] Semiconductor Industry Association, "The National Technology Roadmap for Semiconductors,"San Jose, Calif.: SIA, 1994.
[29] C.E. Shannon, "Memory Requirements in a Telephone Exchange," Bell. Systems Technical J., 1953.
[30] S. Sherif, T.H. Szymanski, and H.S. Hinton, "Design and Implementation of a Field Programmable Smart Pixel Array," Proc. LEOS 96 Conf. Smart Pixels,Keystone, Colo., Aug. 1996.
[31] B. Supmonchai and T.H. Szymanski, "Fast Self-Routing Concentrators for Optoelectronic Systems," submitted.
[32] T.H. Szymanski and V.C. Hamacher, "On the Universality of Multipath Multistage Interconnection Networks," Interconnection Networks, I. Scherson and Youseff, eds., IEEE CS Press, 1994.
[33] T.H. Szymanski and C. Fang, "Randomized Routing of Virtual Connections in Essentially Nonblocking log N-Depth Networks," IEEE Trans. Comm., pp. 2,521-2,531, Sept. 1995.
[34] T.H. Szymanski and H.S. Hinton, "Reconfigurable Intelligent Optical Backplane for Parallel Computing and Communications," Applied Optics, pp. 1,253-1,268, Mar. 1996.
[35] C.D. Thompson, "Generalized Connection Networks for Parallel Processor Intercommunication," IEEE Trans. Computers, vol. 27, no. 12, pp. 1,119-1,125, Dec. 1978.
[36] U.S. National Science Foundation, "Research Priorities in Networking and Communications," Report to the NSF Division of Networking and Communications Research and Infrastructure, May 12-14, 1994, Arlington, Va.
[37] E. Upfal, S. Felperin, and M. Snir, "Randomized Routing with Shorter Paths," IEEE Trans. Parallel and Distributed Systems, vol. 7, no. 4, pp. 356-362, Apr. 1996.
[38] L.G. Valiant and G.J. Brebner,"Universal Schemes for Parallel Communication," Proc. 13th Ann. ACM Symp. Theory of Computing, pp. 263-277, May 1981.
[39] M. Yamaguchi and K-I Yukimatsu, "Recent Free-Space Photonic Switches," IEICE Trans. Comm., vol. E77B, no. 2, Feb. 1994.
Index Terms:
Multistage, networks, self-routing, nonblocking, circuit-switching, scalable, randomization, electrical, optical.
Ted H. Szymanski, "Design Principles for Practical Self-Routing Nonblocking Switching Networks with O(N · log N) Bit-Complexity," IEEE Transactions on Computers, vol. 46, no. 10, pp. 1057-1069, Oct.
1997, doi:10.1109/12.628391
Linear transformation and independence problem
January 29th 2008, 10:40 PM #1
Linear transformation and independence problem
I am having trouble with a problem and am in need of some help.
V & W are vector space, and T:V->W is linear.
T is 1 to 1 and S is a subset of V. Prove that S is linearly independent if and only if T(S) is linearly independent.
I have started the problem but am really uncertain of my approach. So far I have
$\sum_{i=1}^{n} b_i T(a_i) = 0$
$T\left(\sum_{i=1}^{n} b_i a_i\right) = 0$
therefore there should exist some c such that
$\sum_i b_i a_i = \sum_i c_i a_i$
Is what I have done even correct?
Let $S = \{ a_1, ... a_n\}$ and suppose this set is linearly independent. Then $T(S) = \{T(a_1),...,T(a_n)\}$. Suppose there exists $c_1,...,c_n$ (in the field) such that $c_1T(a_1) + ... + c_nT(a_n) = 0$, thus $T(c_1a_1+...+c_na_n) = 0$. But since $T$ is one-to-one its kernel is the zero vector, thus $c_1a_1+...+c_na_n = 0 \implies c_1 = c_2 = ... = c_n = 0$. Thus, $\{T(a_1),...,T(a_n) \}$ are linearly independent.
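That covers one direction of the "if and only if". For the converse, suppose $T(S)$ is linearly independent and $c_1a_1 + ... + c_na_n = 0$. By linearity, $c_1T(a_1) + ... + c_nT(a_n) = T(0) = 0$, and the independence of $T(S)$ forces $c_1 = c_2 = ... = c_n = 0$, so $S$ is linearly independent. Note that this direction does not even need $T$ to be one-to-one.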
Would it be a similar proof if S were a basis instead of a subset?
January 30th 2008, 06:48 AM #2
Global Moderator
Nov 2005
New York City
January 30th 2008, 08:13 PM #3 | {"url":"http://mathhelpforum.com/advanced-algebra/27087-linear-transformation-indepence-problem.html","timestamp":"2014-04-23T16:31:32Z","content_type":null,"content_length":"38320","record_id":"<urn:uuid:35e6fdbb-07fd-4f33-aa9f-05b8120ecd9e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - product and chain rule
i am having trouble with this one problem. maybe you can tell me where i am going wrong.
find h'(t) if h(t)=(t^6-1)^5(t^5+1)^6
so i am using product rule and to find the derivatives of each expression i am using chain rule...
so i get h'(t)=30t^4(t^6-1)^4(t^5+1)^6+30t^4(t^5+1)^5(t^6-1)^5
is that right, or what is wrong with it?
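One way to check an answer like this is against a finite-difference estimate (a sketch, not from the original thread; `h_prime` is the product/chain-rule derivative redone by hand, `h_prime_posted` is the expression from the post):

```python
def h(t):
    # h(t) = (t^6 - 1)^5 * (t^5 + 1)^6
    return (t**6 - 1)**5 * (t**5 + 1)**6

def h_prime(t):
    # product rule plus chain rule:
    # d/dt (t^6-1)^5 = 5(t^6-1)^4 * 6t^5 = 30 t^5 (t^6-1)^4
    # d/dt (t^5+1)^6 = 6(t^5+1)^5 * 5t^4 = 30 t^4 (t^5+1)^5
    return (30 * t**5 * (t**6 - 1)**4 * (t**5 + 1)**6
            + 30 * t**4 * (t**6 - 1)**5 * (t**5 + 1)**5)

def h_prime_posted(t):
    # the expression from the post (first term has t^4 where t^5 belongs)
    return (30 * t**4 * (t**6 - 1)**4 * (t**5 + 1)**6
            + 30 * t**4 * (t**6 - 1)**5 * (t**5 + 1)**5)

def central_diff(f, t, eps=1e-6):
    # two-sided finite-difference estimate of f'(t)
    return (f(t + eps) - f(t - eps)) / (2 * eps)

t = 1.5
numeric = central_diff(h, t)
print(abs(h_prime(t) - numeric) / abs(numeric) < 1e-6)         # True
print(abs(h_prime_posted(t) - numeric) / abs(numeric) < 1e-6)  # False
```

The check flags the first term: differentiating (t^6-1)^5 brings down 5(t^6-1)^4 times 6t^5, so the first factor should be 30t^5, not 30t^4.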
The Golden Proportion Hypothesis Defended by Rachel Fletcher for the Nexus Network Journal vol.3 no.4 Autumn 2001
Palladio's Villa Emo:
The Golden Proportion Hypothesis Defended
Rachel Fletcher
113 Division St.
Great Barrington, MA 01230 USA
According to conventional wisdom, the Villa Emo at Fanzolo could never have been based on Golden proportions. I could not believe this myself; not, that is, until I saw the entire mathematical scheme for Palladio's elegant Renaissance buildings, which sit on a flat, fertile plain in Treviso, in northern Italy [Fletcher 2000].
In "Palladio's Villa Emo:The Golden Proportion Hypothesis Rebutted" [March 2001], Lionel March argues that the Golden Section, or extreme and mean ratio, is nowhere to be found in the Villa Emo as
described in I quattro libri dell'archittetura. Palladio, he says, "has given the actual measurements" and they simply do not add to a scheme of Golden proportions. He is absolutely right. The
extreme and mean ratio is not observed in the Emo plan as it was published. But the villa Palladio described in that publication is not the villa he built and that survives today.
The discrepancy between the two versions was known as early as the 1770s. That was when Bertotti Scamozzi published Le fabbriche e i disegni di Andrea Palladio, in which he struggled to reconcile
numerous inconsistencies between built and published versions of Palladio's works [Scamozzi 1976: 75-76]. Alas, Scamozzi's measurements were not as accurate as we would have liked. Fortunately, a
more definitive survey was performed in 1967 by the architects Mario Zocconi and Andrzej Pereswiet Soltan for the Centro Internazionale di Studi di Architettura "Andrea Palladio" (C.I.S.A.) [Rilievi
1972; Favero 1972: 29-32 and scale drawings a-m].
Many believe Palladio's published plans present idealized versions of his buildings, permitting him to make adjustments for the special conditions of specific sites. But perhaps, in some instances,
different versions provided options for design and proportional schemes. For example, the published plan for the Villa Emo presents a conventional set of stairs that leads to a south-facing portico.
In fact, a unique, elongated ramp was built. Members of the Emo family today believe it served as both an entryway and a threshing floor to meet the villa's agricultural needs. Does it correct the
building's proportions to substitute the ramp with shorter conventional stairs? The Emo family thinks not, and perhaps Palladio did not think so, either, for a corrected set of measurements is not
Different measures are specified, however, for the plan of rooms on the main floor of the central block, and these are the stuff of musical and mathematical harmonies, as Lionel March so brilliantly
demonstrates. The discrepancy is subtle, perhaps too subtle to reflect real versus ideal conditions, but sufficient to suggest a different mathematical interpretation.
Overall length and width of the main floor plan
Compare the built and published versions of the main floor plan, beginning with overall length and width, as published, in Vicentine feet (the Vicentine foot corresponds to 34.75 centimeters [Favero,
p. 18]). (Figure 1)
Lionel March calculates total length by adding individual measures along a north-south axis of length, including the lengths of three individual rooms and the thickness of four walls. For the moment,
the thickness of any given wall is called x.
Total length = x + 27 + x + 12 + x + 16 + x = 55 + 4x.
Total width is calculated in similar fashion:
Total width = x + 16 + x + 27 + x + 16 + x = 59 + 4x.
Taking x = 1 as an initial choice for the wall thickness, following March, results in the ratio of total length to total width of 59:63, or 1:1.067.
How does this compare with the ratio of overall length to width in the villa's plan, as it was built (Figure 2)? According to the C.I.S.A. survey, total length and total width are 20.56 meters and
22.35 meters, respectively. Therefore, the ratio of length to width is 20.56:22.35, or 1:1.087. A subtle difference from the 1:1.067 ratio of the published plan, but one that may be viewed at a glance.
Since the published plan does not provide a measure for the thickness of walls, we cannot assume it is equal to 1. But applying a variation of Lionel March's method, we can determine what the
thickness must be for the published plan to match the built plan's 1:1.087 ratio of length to width. In other words: (55 + 4x) : (59 + 4x) :: 1:1.087. To satisfy the proportion, the thickness of a
wall x must be approximately equal to -2.2557 feet. Not quite as impossible as the negative wall thickness of over five feet required for a Golden Mean scheme to match the plan's published measures,
but impossible nevertheless.
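The algebra behind that figure is quick to verify (a sketch; `r` is the rounded length-to-width ratio of the built plan cited above):

```python
# Solve (55 + 4x) : (59 + 4x) :: 1 : r for the wall thickness x.
r = 1.087                        # built ratio of length to width (rounded)
# 59 + 4x = r * (55 + 4x)  =>  x = (59 - 55r) / (4r - 4)
x = (59 - 55 * r) / (4 * r - 4)
print(round(x, 4))               # -2.2557, a negative, impossible thickness
```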
Overall length and width of the geometric scheme
Did I quattro libri "correct" the measures of the villa as built, or was a different proportional scheme presented? Consider the proposed geometric scheme of Golden proportions, which is based on a
rectangle that results from inscribing a double square within a circle (Figure 3). The length of the rectangle equals the long edge of the double square. The width equals the diameter of the circle.
The ratio of length to width is 2:√5, or approximately 1:1.118.[1] Once this adjustment is made, extreme and mean divisions align with one face or another of the remaining interior walls (Figure 4 and Figure 5).[2]
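The numbers behind the double-square construction can be checked directly (a sketch; `s` is an arbitrary unit for the side of each square):

```python
import math

s = 1.0                          # side of each square in the double square
length = 2 * s                   # long edge of the double square
width = math.sqrt(5) * s         # diameter of the circumscribing circle
print(round(width / length, 5))  # 1.11803

# The same number lives inside the Golden ratio: phi - 1/2 = sqrt(5)/2.
phi = (1 + math.sqrt(5)) / 2
print(round(phi - 0.5, 5))       # 1.11803
```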
Length and width of individual rooms
To further illuminate the difference between the two plans, compare the dimensions of individual rooms. In meters, the central hall in the published plan is 9.38 x 9.38, but the hall that was built
measures 9.44 x 9.33. The northeast and northwest bedrooms, as published, are each 9.38 m. x 5.56 m., whereas the rooms as built measure 9.44 m. x 5.66 m. The small southeast and southwest rooms, as
published, are each 5.56 m. x 5.56 m., while the rooms as built measure 5.62 m. x 5.62 m. Excluding the thickness of the walls, the total width of the central block, as published, is 20.50 meters.
The same, as built, is 20.66 meters [Favero 1972: 31].
This does not mean that the inside measurements of the rooms as built convey extreme and mean ratios, either within themselves or in relation to others. But when the thickness of walls is factored to
one side or another, a scheme emerges in which the overall plan and its principal subdivisions observe extreme and mean proportions.
Given the evidence of the plan as it was built, perhaps Lionel March will reconsider whether "the visually gratifying result is so very wrong when tested by the numbers."
The question remains: If Palladio designed with extreme and mean ratios, why didn't he publish a relevant construction in I quattro libri? Lionel March argues that Palladio never published a
construction that produced the extreme and mean ratio. He grants that Alberti described an exact construction for a decagon and that in the 1540s, Serlio illustrated Dürer's exact construction for a
pentagon. But as late as 1569, Barbaro presented only Dürer's approximate construction, even though an exact construction is required to produce the Golden Mean. And while Pacioli spiritualized the
Divine Proportion and Kepler connected it to planetary motions, the extreme and mean ratio lay dormant essentially until the nineteenth century, when it was born again as the Golden Section.
Lionel March further cites the ancient theatres, which are based, Vitruvius tells us, on arrangements of squares and triangles and their inherent ratios. Never mind that the two sections of Epidaurus's
theatron contain 21 and 34 rows and merely approximate a true extreme and mean division. I wouldn't consider it, either, were it not for a study by German scholars Gerkan and Müller-Wiener [1961: Pl.
3] that relates the theatre's skene, orchestra and theatron through a regular pentagon and its inscribed and circumscribing circles.[3]
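The 21- and 34-row split is a Fibonacci approximation of the extreme and mean ratio, as a quick check shows (an added sketch, not from the article):

```python
import math

phi = (1 + math.sqrt(5)) / 2     # the extreme and mean ratio
print(round(phi, 5))             # 1.61803
print(round(34 / 21, 5))         # 1.61905 -- close, but not exact
```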
Let's face it. From as early as Euclid through the Renaissance and beyond, the extreme and mean ratio was not unknown. Beside the examples already cited, as early as 1726, well before the nineteenth
century, mathematicians, builders and architects published exact geometric constructions based on the Golden ratio. Peter Nicholson, Batty Langley and others illustrated its use for architects and
builders [Nicholson 1827: Pl. 13 and problem XXII; Langley 1726: Pl. I, fig. XXVII and p. 41]. Mathematicians such as Sébastien Le Clerc demonstrated numerous constructions in elementary texts [Le
Clerc 1742: 112-113, 180-181]. In at least one instance, Ephriam Chambers linked the extreme and mean ratio to the pentagon's exact construction [Chambers 1738: opp. 142].
None of this proves that Palladio favored the extreme and mean ratio. He did not publish an exact construction, but neither did he produce a book on geometry comparable to Serlio's Book I. Had he
written such a book, might it contain an extreme and mean construction? Unfortunately, we probably will never know.
It is true that the extreme and mean division does not rank among Palladio's ratios for shapes for rooms. All but one, in fact, are comprised of ratios in whole numbers.[4] But these address
individual rooms, not the plan as a whole, nor the rooms as they relate to one another.[5] The beauty of the Golden ratio, as it adorns the Villa Emo, is that it distinguishes the plan as a whole and
persists through every level of subdivision. "Proportion" is defined conventionally as the relationship of parts to one another and to the greater whole. One would be hard pressed to find a better example.
Finally, Lionel March's most compelling argument is the practical one. "Buildings" he says, "have to be set out," and triangulation has been the method of choice "since time immemorial". Certainly,
the Pythagorean 3:4:5 triangle is well suited to achieving the right angle, but surveyors may use triangles for many purposes. Consider the simple right triangle of sides one and one-half: it does not ensure the right angle, but its hypotenuse of √5/2, added to the short side of 1/2, lays out the extreme and mean ratio (1/2 + √5/2 = φ).
We base our understanding of the past on precious little evidence and so it is prudent, from time to time, to revisit what we know with a new and open mind. Without doubt, the Golden proportion
hypothesis is filled with speculation, for we cannot prove that Palladio applied it with deliberate intent. And yet, given its persistence throughout the plan of Villa Emo, it may be time to consider
if all the relevant evidence is in.
Lionel March is to be thanked for illuminating the many rich and wonderful mathematical techniques that grace the Villa Emo, from its 3:4:5 triangles to elaborate musical harmonies. Is it so hard to
imagine that extreme and mean ratios occupied the Renaissance mind as well?
[1] One justification is that the columns along the south wall relate to the ramped entry, with the reduction repeated on the opposing north wall.
[2] To be precise, the tolerance throughout is within 1 cm., with the exception of a single 9 cm. deviation.
[3] The circumscribing circle traces the inside face of the theatron, or auditorium; the inscribed circle traces the inside edge of the orchestra perimeter; and the base of the pentagon locates the
front edges of the paraskenia, or the skene's projected wings [Gerkan and W. Müller-Wiener 1961: Pl. 3].
Vitruvius's brief but evocative description tells but part of the actual story. Roman theatres, he says, emerged from a twelve-fold arrangement of four triangles, while the theatres of Greece
followed a twelve-fold arrangement comprised of three squares. Both geometries are inscribed within the orchestra circle and locate elements of the different stage buildings. They also distinguish
the half-round Roman theatron from the Greek auditorium's fuller expanse through eight of the circle's twelve divisions [Vitruvius 1999: 68-70, 247-248].
In fact, the Hellenistic Epidaurus appears to have adapted elements of both geometries to the situation at hand. The orchestra perimeter may be divided into twelve equal arcs, locating the apexes for
a regular pattern of three inscribed squares. Eight of the twelve apexes roughly define the extent of the theatron, but the geometry isn't precise until axes taken from the center through the first
and eighth apexes meet the outer edge of the lowest auditorium level. The remaining four apexes define the size of the skene, in the sense that axes taken from the center through the ninth and
twelfth apexes mark the inside corners of the paraskenia. Meanwhile, the base of an equilateral triangle that is circumscribed by the orchestra circle locates the theatron at its innermost edge
[Fletcher 1991: 100-103].
[4] The one exception is a room in the ratio of 1: root-2 [Palladio 1997: 59].
[5] A simple whole number ratio may suffice for an individual room to express grace and harmony. But Jay Hambidge [1967] explains that incommensurable ratios such as the Golden Section permit a
"dynamic symmetry" in which the same ratio persists through endless levels of subdivision. return to text
Chambers, Ephraim. 1738. Cyclopaedia: or, An Universal Dictionary of Arts and Sciences, vol. 1. London: D. Midwinter (etc.).
Favero, Giampaolo B. 1972. The Villa Emo at Fanzolo. Douglas Lewis, trans. University Park, PA: The Pennsylvania State University Press.
Fletcher, Rachel. 2000. Golden Proportions in a Great House: Palladio's Villa Emo. Pp. 73-85 in Nexus III: Architecture and Mathematics. Kim Williams, ed. Pisa: Pacini Editore.
Fletcher, Rachel. 1991. Ancient Theatres as Sacred Spaces. Pp. 88-106 in The Power of Place. James A. Swan, ed. Wheaton, IL.: Quest Books.
Gerkan, A. von and W. Müller-Wiener. 1961. Das Theater von Epidauros. Stuttgart: Kohlhammer.
Hambidge, Jay. 1967. The Elements of Dynamic Symmetry. New York: Dover.
Langley, Batty. 1726. Practical Geometry Applied to the Useful Arts. London: Printed for W. & J. Innys, J. Osborn and T. Longman, B. Lintot (etc.).
Le Clerc, Sébastien. 1742. Practical Geometry: or, A New and Easy Method of Treating that Art. Trans. from the French. London: T. Bowles and J. Bowles.
March, Lionel. 2001. "Palladio's Villa Emo: The Golden Proportion Hypothesis Rebutted," Nexus Network Journal 3, 4 (Autumn 2001).
Nicholson, Peter. 1827. Principles of Architecture; Containing the Fundamental Rules of the Art.... vol. 1. London: J. Barfield.
Palladio, Andrea. 1997. The Four Books on Architecture. Robert Tavernor and Richard Schofield, trans. Cambridge, MA: MIT Press.
Rilievi delle Fabbriche de Andrea Palladio, vol. 1: La Villa Emo di Fanzolo. 1972. Vicenza: Centro Internazionale di Studi di Architettura "Andrea Palladio".
Scamozzi, Ottavio Bertotti. 1976. The Buildings and the Designs of Andrea Palladio. 1776. Howard Burns, trans. Trent: Editrice La Roccia.
Vitruvius. 1999. Ten Books on Architecture. Ingrid D. Rowland, trans. Cambridge: Cambridge University Press.
Rachel Fletcher is a theatre designer and geometer living in Massachusetts, with degrees from Hofstra University, SUNY Albany and Humboldt State University. She is the creator/curator of two museum
exhibits on geometry, "Infinite Measure" and "Design By Nature". She is the co-curator of the exhibit "Harmony by Design: The Golden Mean" and author of its exhibition catalog. In conjunction with
these exhibits, which have traveled to Chicago, Washington, and New York, she teaches geometry and proportion to design practitioners. She is an adjunct professor at the New York School of Interior
Design. Her essays have appeared in numerous books and journals, including "Design Spirit", "Parabola", and "The Power of Place". Her design/consulting credits include the outdoor mainstage for
Shakespeare & Co. in Lenox, Massachusetts and the Marston Balch Theatre at Tufts University.
The correct citation for this article is:
Rachel Fletcher, "Palladio's Villa Emo: The Golden Proportion Hypothesis Defended", Nexus Network Journal, vol. 3, no. 4 (Autumn 2001), http://www.nexusjournal.com/Fletcher.html
Copyright ©2001 Kim Williams
Program to find out the square and cube of a number
This is an example of a simple program to find out the square and cube of a number.
This program can be extended to find out any power of a number using a recursive function.
//This is a program to find out Square and Cube of a given number
#include <stdio.h>

int main()
{
    int num;
    printf("\nEnter a number to Square and Cube ");
    scanf("%d", &num);
    printf("\nThe Square of the given number is %d\nand the cube of the given number is %d", num * num, num * num * num);
    return 0;
}
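The "recursive function" extension mentioned above can be sketched as follows (in Python rather than C, for brevity; the C version would follow the same pattern):

```python
def power(base, exp):
    """Return base raised to a non-negative integer power, recursively."""
    if exp == 0:
        return 1
    return base * power(base, exp - 1)

print(power(5, 2))  # square: 25
print(power(5, 3))  # cube: 125
```

A loop would work just as well; the recursive form simply mirrors the definition base^exp = base * base^(exp-1).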
Logarithms resources
Indicial equations
An indicial equation is one in which the power is unknown. Such equations often occur in the calculation of compound interest.
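As a concrete (hypothetical) illustration of an indicial equation from compound interest, consider how many years it takes money to double at 5% annual interest; the unknown is the power n in (1.05)^n = 2, and taking logarithms of both sides solves it:

```python
import math

# Solve (1.05)**n = 2 for the unknown power n:
# n * log(1.05) = log(2), so n = log(2) / log(1.05)
n = math.log(2) / math.log(1.05)
print(round(n, 2))  # about 14.21 years
```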
We use logarithms to write expressions involving powers in a different form. If you can work confidently with powers, you should have no problems handling logarithms
Logarithms - changing the base
Sometimes it is necessary to find logs to bases other than 10 and e. There is a formula which enables us to do this. This leaflet states and illustrates the use of this formula.
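The change-of-base formula the leaflet describes, log_b(x) = log(x) / log(b), can be checked numerically (a sketch using Python's natural log):

```python
import math

def log_base(x, b):
    """Change-of-base formula: log_b(x) = ln(x) / ln(b)."""
    return math.log(x) / math.log(b)

print(log_base(8, 2))   # approximately 3
print(log_base(81, 3))  # approximately 4
```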
The exponential constant e
The letter e is used in many mathematical calculations to stand for a particular number known as the exponential constant. This leaflet provides information about this important constant, and the
related exponential function.
The laws of logarithms
This leaflet explains and illustrates the laws governing the manipulation of logarithms. (Engineering Maths First Aid Kit 2.20)
The laws of logarithms
There are rules, or laws, which are used to rewrite expressions involving logs in different forms. This leaflet states and illustrates these rules.
The laws of logarithms
There are a number of rules known as the laws of logarithms. These allow expressions involving logarithms to be rewritten in a variety of different ways. The laws apply to logarithms of any base, but
the same base must be used throughout a calculation.
The logarithm function
This leaflet provides a table of values and a graph of the logarithm function. (Engineering Maths First Aid Kit 3.7)
What is a logarithm?
Logarithms can be used to write expressions involving powers in alternative forms. This leaflet explains how.
Orbital Shape, Shape Of S, P, D, F Orbitals - Transtutors
Orbital Shape Assignment Help
(i) For ‘s’ orbital l=0 & m=0 so ‘s’ orbital have only one unidirectional orientation i.e. the probability of finding the electrons is same in all directions.
(ii) The size and energy of ‘s’ orbital with increasing ‘n’ will be 1s < 2s < 3s < 4s
(ii) Shape of ‘p’ orbital is dumb bell in which the two lobes on opposite side separated by the nodal plane.
(iii) p-orbital has directional properties.
(2) Shape of ‘p’ orbitals
(i) For ‘p’ orbital l=1, & m=+1,0,–1 means there are three ‘p’ orbitals, which is symbolised as p[x], p[y], p[z]
(ii) Shape of ‘p’ orbital is dumb bell in which the two lobes on opposite side separated by the nodal plane.
-orbital has directional properties.
(3) Shape of ‘d’ orbital
(i) For the ‘d’ orbital l =2 then the values of ‘m’ are –2, –1, 0, +1, +2. It shows that the ‘d’ orbitals has five orbitals as d[xy], d[yz], d[zx], d[x2–y2], d[z2]
(ii) Each ‘d’ orbital identical in shape, size and energy.
(iii) The shape of d orbital is double dumb bell .
(iv) It has directional properties.
(i) For the ‘f’ orbital l=3 then the values of ‘m’ are –3, –2, –1,0,+1,+2,+3. It shows that the ‘f’ orbitals have seven orientation as f[x(x^2–y^2)], f[y(x^2–y^2)], f[xyz], f[z^3], f[yz^2] and f[zx^
(ii) The ‘f’ orbital is complicated in shape.
Email Based Homework Assignment Help in Orbital Shape
Transtutors is the best place to get answers to all your doubts regarding orbital shape, shape of s orbital, shape of p orbital, shape of d orbital and shape of f orbital with solved examples. You
can submit your school, college or university level homework or assignment to us and we will make sure that you get the answers you need which are timely and also cost effective. Our tutors are
available round the clock to help you out in any way with chemistry.
Live Online Tutor Help for Orbital Shape
Transtutors has a vast panel of experienced chemistry tutors who specialize in orbital shape and can explain the different concepts to you effectively. You can also interact directly with our
chemistry tutors for a one to one session and get answers to all your problems in your school, college or university level atomic structure homework. Our tutors will make sure that you achieve the
highest grades for your chemistry assignments. We will make sure that you get the best help possible for exams such as the AP, AS, A level, GCSE, IGCSE, IB, Round Square etc.
Basketball Cards....
June 10th 2006, 03:24 AM
Captain, can you help me with this:
The eight corners of a cube are numbered 1 to 8 (as an example 'Diagram 1'). Its six face sums (the totals of the numbers on the four vertices of each face) are equal to 10, 14, 18, 22 and 26.
Diagram 2 displays the cube with the same information in a different way. Also, five of the face sums are shown, in bold. The 'Front' face sum of 14 is not shown, but can be found by adding the
numbers on the four outer vertices in this diagram.
An example of an opposite vertex pair in this numbered cube is 1/7.
A numbered cube is prime-faced if all six face sums are prime numbers.
a) Draw a diagram showing a prime faced cube with more than two different face sums
b) Find a prime faced cube with opposite vertex pairs 1/2, 3/4, 5/6, 7/8.
c) Some prime numbers arise as a face sum of a prime faced cube and others never do. Find the ones that are possible; your answer must, in particular, make it clear why numbers not in your list
are impossible.
d) Show that the opposite vertex pairs of a prime cube are one odd and one even.
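Not part of the original thread: part (a) can also be attacked by brute force. The sketch below indexes the cube's vertices 0–7 by their coordinate bits (an assumption about the labeling, not from the problem) and searches all 8! assignments of the numbers 1–8 for one whose six face sums are all prime:

```python
from itertools import permutations

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Index cube vertices 0..7 by coordinate bits (x, y, z); each face
# fixes one coordinate, giving six faces of four vertices each.
FACES = [[v for v in range(8) if (v >> axis) & 1 == side]
         for axis in range(3) for side in (0, 1)]

def prime_faced_labelings():
    """Yield assignments (vertex -> number 1..8) with all six face sums prime."""
    for lab in permutations(range(1, 9)):
        if all(is_prime(sum(lab[v] for v in face)) for face in FACES):
            yield lab

first = next(prime_faced_labelings())
print(first, [sum(first[v] for v in face) for face in FACES])
```

Since the eight labels sum to 36, the two opposite faces of any labeling always sum to 36, which already restricts the possible prime face sums.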
June 10th 2006, 06:12 AM
Please post new questions in new threads, it makes it easier
for everyone to follow what is going on.
Also in questions which refer to diagrams it is advantageous if we
can see the diagrams, even though with a bit of effort we can
often reconstruct what they must have been :(
Synthesize FIR filters using high-school algebra (Part 2) | EE Times
The story so far
In Part 1 we took a regular FIR filter design and wrote down the filter coefficients in polynomial form to get equation [1]:
Then we found all the roots of this polynomial, and used them to write down the
factorized form of the polynomial:
As a parting shot, I pointed out that there are three quadratic terms there with unity coefficients of z^0 – and there are three deep nulls in the gain response of the filter, as was shown in the
figures from Part 1. Let's take a deep breath and examine the responses of all these individual linear and quadratic factors, to see if there are some clues there.
Figure 1 shows the individual responses, treated as two- or three-tap FIR filters, of each of the factors in parentheses in equation [2]: the five quadratic factors marked as q1 to q5 and the four
linear factors as L1 to L4. It's quite a jumble of a graph, but you don't have to be very awake to see the major salient detail: three of those quadratic factors have deep notches in the frequency
response. These are indeed the three factors whose constant (z^0) coefficient is unity!
So, here's the first takeaway. In an FIR filter whose stopband contains a number of sharp nulls, each one of them comes into being because the response of one of the polynomial's quadratic factors
falls to zero at one frequency. Just so that we don't jump to conclusions about the particular form the factor needs to have, let's do some more algebra to make sure. Are we having fun yet?
Figure 1: The frequency response of all the quadratic and linear factors of equation [2]
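The takeaway can be checked numerically (a sketch, not code from the article). A quadratic factor with unity z^0 and z^-2 coefficients can be written 1 − 2cos(θ)z^-1 + z^-2, which places both zeros on the unit circle at e^(±jθ) and so forces a null at ω = θ:

```python
import cmath
import math

def fir_gain(coeffs, w):
    """Magnitude response of an FIR filter at angular frequency w (rad/sample)."""
    z = cmath.exp(1j * w)
    return abs(sum(c * z ** (-k) for k, c in enumerate(coeffs)))

# Quadratic factor 1 - 2*cos(theta)*z^-1 + z^-2: zeros at exp(+/- j*theta)
theta = 0.6 * math.pi
factor = [1.0, -2.0 * math.cos(theta), 1.0]
print(fir_gain(factor, theta))           # essentially zero: the notch
print(fir_gain(factor, 0.3 * math.pi))   # nonzero away from the notch
```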
A weird function related to the denominators of rational squares
Between any consecutive integers $a$ and $a+1$ there are infinitely many rational squares of the form $t^2 / s^2$. I have been working to understand the following question: How small can $t$ and $s$
be? That is, let $\sigma (a)$ denote the least natural number $s$ for which there exists a natural number $t$ such that $(s^2)a < t^2 < (s^2)(a + 1)$. What are the properties of the function $\sigma$?
It's not hard to calculate $\sigma (a)$ numerically, and the graph of the function is weird. Full of crazy fluctuations but bounded by a smooth curve both above and below. I've been working on this
for a while now and have found some nice partial results, including sharp formulas for the lower and upper bounds, a criterion for when the upper bound is attained, and several special cases. (I'll
be happy to share those with anybody who asks.) I am at the point where I think I need to find out if anybody else has worked on this sort of thing before. Standard websearches haven't turned
anything up. Does anybody know if this question has been previously investigated?
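For readers who want to reproduce the graph, a direct computation of $\sigma(a)$ (a sketch; `math.isqrt` needs Python 3.8+):

```python
import math

def sigma(a):
    """Least s for which some integer t satisfies s*s*a < t*t < s*s*(a+1)."""
    s = 1
    while True:
        t = math.isqrt(s * s * a) + 1   # smallest t with t*t > s*s*a
        if t * t < s * s * (a + 1):
            return s
        s += 1

print([sigma(a) for a in range(1, 13)])
```

This agrees with the special cases quoted in the comments below, e.g. $\sigma(n^2) = 2n+1$ and $\sigma(n^2+n) = 2$.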
nt.number-theory ag.algebraic-geometry
I haven't graphed it, but here is a perspective which may help. Set d=2. For positive integers t, tabulate (t/d)^2, and note for what t you will get sigma(a)=d. The answer will depend primarily
whether t is odd. Increase d by one and repeat. This time, the t that matter will depend on the previous d, when t/(d+q) is near to s/d and if gcd(t,d+q)=1 or not. You might find Moebius inversion
useful in representing sigma. Gerhard "Jacobsthal's Function Might Be Related" Paseman, 2012.10.12 – Gerhard Paseman Oct 12 '12 at 15:26
There is also a suggestion of the logistic map and perhaps a discrete dynamical system. However, I am not an expert on this, so waste only a little time on this idea. Gerhard "Manage Your Time With
Ideas" Paseman, 2012.10.12 – Gerhard Paseman Oct 12 '12 at 15:32
Using the perspective of tabulation, I suspect sigma(a) <= 2k for k^2 <= a <= (k+1)^2 - 1. I'll have to think about sigma(k^2+2k) for a bit. Gerhard "Remember Jacobi: When Possible, Invert"
Paseman, 2012.10.12 – Gerhard Paseman Oct 12 '12 at 15:40
Oops. Arithmetic error. Likely sigma(k^2 + 2k)=k+1, so you will be interested in the "holes" created by mapping the kth Farey sequence by the square function. Gerhard "I'll Stop Here For Now"
Paseman, 2012.10.12 – Gerhard Paseman Oct 12 '12 at 15:47
Maybe I should give some more detail on what I have already found so that people don't waste their time re-inventing my wheel. (1) For all $a$, $\sigma (a) \leq \overline{\frac{8a\sqrt{a}}{4a-1}}$,
where the overbar denotes the ceiling function. (2) When $a=n^2$ the upper bound above is sharp, and simplified to $\sigma (n^2) = 2n+1$. (3) When $a=n^2 + n$ we have $\sigma(n^2 + n) = 2$. –
mweiss Oct 12 '12 at 18:31
2 Answers
I found this on the OEIS, but it doesn't list much information, so I don't know whether it's been studied before. One way to look at it is that you're looking for the rational number with the
smallest denominator between $\sqrt{n}$ and $\sqrt{n+1}$. There are algorithms that use continued fractions to calculate the "best" rational in any given interval; see here. Square roots have
particularly nice continued-fraction representations, so you might even be able to get some sort of formula for $\sigma$.
Thanks -- those citations will be very helpful, I think. I am a total newbie when it comes to the OEIS; is there a way to search for papers / pages that discuss or reference a
specific OEIS sequence? – mweiss Oct 12 '12 at 22:07
Not really. Usually, if there's a paper about a sequence, there'll be a reference in the entry for the sequence. – Robert Young Oct 13 '12 at 4:39
More and more papers are using the modern numbering when referring to an OEIS entry. You can try a web search on that number, but I will be suprised if you find any published more
than 8 years ago, if you find any at all. Also, the OEIS organization did a renumbering some years ago; you may have to consider that. Gerhard "Ask Me About System Design" Paseman,
2012.10.15 – Gerhard Paseman Oct 15 '12 at 18:09
I think pulling the parts of the range of the square function back to an interval decorated with Farey fractions will help not only with sigma, but also with the question with chi
corresponding to the cube or other monotonic polynomial on the positive integers in place of sigma. Sigma is nice to study because there are about (k^2)/3 Farey fractions with denominator
at most k, almost ensuring a bound of k for sigma(a) when k^2< a < (k+1)^2. I'll leave a similar bound for higher orders for you to derive.
Even just looking at (k +/- 1/t)^2 for positive integers t helps one understand the behavior of sigma near sigma(k^2); a similar idea was what motivated my comments below.
I do not know where your function has been studied, but I see connections to Farey fractions, rational approximation, and even discrete dynamical systems. Perhaps some of those areas, combined
with a suggestion from elsewhere, will give you what you seek.
Gerhard "Ask Me About System Design" Paseman, 2012.10.12
The encyclopedic entry for "Bézier curve"
In the field of numerical analysis, a Bézier curve is a parametric curve important in computer graphics and related fields. Generalizations of Bézier curves to higher dimensions are called
Bézier surfaces, of which the Bézier triangle is a special case.
Bézier curves were widely publicized in 1962 by the French engineer Pierre Bézier, who used them to design automobile bodies. The curves were first developed in 1959 by Paul de Casteljau using de
Casteljau's algorithm, a numerically stable method to evaluate Bézier curves.
In vector graphics, Bézier curves are an important tool used to model smooth curves that can be scaled indefinitely. "Paths," as they are commonly referred to in image manipulation programs such as
Inkscape, Adobe Illustrator, Adobe Photoshop, and GIMP are combinations of Bézier curves patched together. Paths are not bound by the limits of rasterized images and are intuitive to modify. Bézier
curves are also used in animation as a tool to control motion in applications such as Adobe Flash, Adobe After Effects, and Autodesk 3ds max.
Computer graphics
Bézier curves are widely used in computer graphics to model smooth curves. As the curve is completely contained in the convex hull of its control points, the points can be graphically displayed and
used to manipulate the curve intuitively. Affine transformations such as translation, scaling and rotation can be applied on the curve by applying the respective transform on the control points of
the curve.
Quadratic and cubic Bézier curves are most common; higher degree curves are more expensive to evaluate. When more complex shapes are needed, low order Bézier curves are patched together. This is
commonly referred to as a "path" in programs like Adobe Illustrator or Inkscape. These poly-Bézier curves can also be seen in the SVG file format. To guarantee smoothness, the control point at which
two curves meet and one control point on either side must be collinear.
The simplest method for scan converting (rasterizing) a Bézier curve is to evaluate it at many closely spaced points and scan convert the approximating sequence of line segments. However, this does
not guarantee that the rasterized output looks sufficiently smooth, because the points may be spaced too far apart. Conversely it may generate too many points in areas where the curve is close to
linear. A common adaptive method is recursive subdivision, in which a curve's control points are checked to see if the curve approximates a line segment to within a small tolerance. If not, the curve
is subdivided parametrically into two segments, 0 ≤ t ≤ 0.5 and 0.5 ≤ t ≤ 1, and the same procedure is applied recursively to each half. There are also forward differencing methods, but great care
must be taken to analyse error propagation. Analytical methods where a spline is intersected with each scan line involve finding roots of cubic polynomials (for cubic splines) and dealing with
multiple roots, so they are not often used in practice.
In animation applications, such as Adobe Flash and Adobe Shockwave, or in applications like Game Maker, Bézier curves are used to outline, for example, movement. Users outline the wanted path in
Bézier curves, and the application creates the needed frames for the object to move along the path.
Examination of cases
Linear Bézier curves
Given points $\mathbf{P}_0$ and $\mathbf{P}_1$, a linear Bézier curve is simply a straight line between those two points. The curve is given by
$\mathbf{B}(t)=\mathbf{P}_0 + t(\mathbf{P}_1-\mathbf{P}_0)=(1-t)\mathbf{P}_0 + t\mathbf{P}_1 \mbox{ , } t \in [0,1]$
and is equivalent to linear interpolation.
Quadratic Bézier curves
A quadratic Bézier curve is the path traced by the function $\mathbf{B}(t)$, given points $\mathbf{P}_0$, $\mathbf{P}_1$, and $\mathbf{P}_2$:
$\mathbf{B}(t) = (1 - t)^{2}\mathbf{P}_0 + 2t(1 - t)\mathbf{P}_1 + t^{2}\mathbf{P}_2 \mbox{ , } t \in [0,1].$
A quadratic Bézier curve is also a parabolic segment.
TrueType fonts use Bézier splines composed of quadratic Bézier curves.
Cubic Bézier curves
Four points $\mathbf{P}_0$, $\mathbf{P}_1$, $\mathbf{P}_2$ and $\mathbf{P}_3$ in the plane or in three-dimensional space define a cubic Bézier curve. The curve starts at $\mathbf{P}_0$ going
toward $\mathbf{P}_1$ and arrives at $\mathbf{P}_3$ coming from the direction of $\mathbf{P}_2$. Usually, it will not pass through $\mathbf{P}_1$ or $\mathbf{P}_2$; these points are only there to
provide directional information. The distance between $\mathbf{P}_0$ and $\mathbf{P}_1$ determines "how long" the curve moves in the direction of $\mathbf{P}_1$ before turning towards $\mathbf{P}_3$.
The parametric form of the curve is:
$\mathbf{B}(t) = (1-t)^{3}\mathbf{P}_0 + 3t(1-t)^{2}\mathbf{P}_1 + 3t^{2}(1-t)\mathbf{P}_2 + t^{3}\mathbf{P}_3 \mbox{ , } t \in [0,1].$
Modern imaging systems like PostScript, Asymptote and Metafont use Bézier splines composed of cubic Bézier curves for drawing curved shapes.
The Bézier curve of degree $n$ can be generalized as follows. Given points $\mathbf{P}_0$, $\mathbf{P}_1$, ..., $\mathbf{P}_n$, the Bézier curve is
$\mathbf{B}(t)=\sum_{i=0}^n {n\choose i}(1-t)^{n-i}t^i\mathbf{P}_i =(1-t)^n\mathbf{P}_0+{n\choose 1}(1-t)^{n-1}t\mathbf{P}_1+\cdots+t^n\mathbf{P}_n \mbox{ , } t \in [0,1].$
For example, for $n=5$:
$\mathbf{B}(t)=(1-t)^5\mathbf{P}_0+5t(1-t)^4\mathbf{P}_1+10t^2(1-t)^3\mathbf{P}_2+10t^3(1-t)^2\mathbf{P}_3+5t^4(1-t)\mathbf{P}_4+t^5\mathbf{P}_5 \mbox{ , } t \in [0,1].$
This formula can be expressed recursively as follows: Let $\mathbf{B}_{\mathbf{P}_0\mathbf{P}_1\ldots\mathbf{P}_n}$ denote the Bézier curve determined by the points $\mathbf{P}_0$, $\mathbf{P}_1$, ..., $\mathbf{P}_n$. Then
$\mathbf{B}(t) = \mathbf{B}_{\mathbf{P}_0\mathbf{P}_1\ldots\mathbf{P}_n}(t) = (1-t)\mathbf{B}_{\mathbf{P}_0\mathbf{P}_1\ldots\mathbf{P}_{n-1}}(t) + t\mathbf{B}_{\mathbf{P}_1\mathbf{P}_2\ldots\mathbf{P}_n}(t).$
In words, the degree $n$ Bézier curve is a linear interpolation between two degree $n-1$ Bézier curves.
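The recursive formulation is exactly de Casteljau's algorithm; a minimal sketch:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = [tuple(map(float, p)) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Endpoint interpolation: the curve starts at P0 and ends at Pn.
ctrl = [(0, 0), (1, 2), (3, 3), (4, 0)]
print(de_casteljau(ctrl, 0.0))  # (0.0, 0.0)
print(de_casteljau(ctrl, 1.0))  # (4.0, 0.0)
print(de_casteljau(ctrl, 0.5))  # (2.0, 1.875)
```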
Some terminology is associated with these parametric curves. We have
$\mathbf{B}(t) = \sum_{i=0}^n \mathbf{b}_{i,n}(t)\mathbf{P}_i,\quad t\in[0,1]$
where the polynomials
$\mathbf{b}_{i,n}(t) = {n\choose i} t^i (1-t)^{n-i},\quad i=0,\ldots,n$
are known as Bernstein basis polynomials of degree $n$, defining $t^0 = 1$ and $(1 - t)^0 = 1$.
The points P[i] are called control points for the Bézier curve. The polygon formed by connecting the Bézier points with lines, starting with P[0] and finishing with P[n], is called the Bézier polygon
(or control polygon). The convex hull of the Bézier polygon contains the Bézier curve.
• The curve begins at P[0] and ends at P[n]; this is the so-called endpoint interpolation property.
• The curve is a straight line if and only if all the control points are collinear.
• The start (end) of the curve is tangent to the first (last) section of the Bézier polygon.
• A curve can be split at any point into 2 subcurves, or into arbitrarily many subcurves, each of which is also a Bézier curve.
• Some curves that seem simple, such as the circle, cannot be described exactly by a Bézier or piecewise Bézier curve (though a four-piece cubic Bézier curve can approximate a circle, with a
maximum radial error of less than one part in a thousand, when each inner control point is the distance $\frac{4(\sqrt{2}-1)}{3}$ horizontally or vertically from an outer control point on a unit
circle). More generally, an n-piece cubic Bézier curve can approximate a circle, when each inner control point is the distance $\frac{4}{3}\tan(t/4)$ from an outer control point on a unit circle,
where t is 360/n degrees, and n > 2.
• The curve at a fixed offset from a given Bézier curve, often called an offset curve (lying "parallel" to the original curve, like the offset between rails in a railroad track), cannot be exactly
formed by a Bézier curve (except in some trivial cases). However, there are heuristic methods that usually give an adequate approximation for practical purposes.
• Every quadratic Bézier curve is also a cubic Bézier curve, and more generally, every degree n Bézier curve is also a degree m curve for any m > n. In detail, a degree n curve with control points
$\mathbf{P}_0, \ldots, \mathbf{P}_n$ is equivalent (including the parametrization) to the degree n + 1 curve with control points $\mathbf{P}'_0, \ldots, \mathbf{P}'_{n+1}$, where
$\mathbf{P}'_k=\tfrac{k}{n+1}\mathbf{P}_{k-1}+\left(1-\tfrac{k}{n+1}\right)\mathbf{P}_k$.
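The four-piece circle approximation mentioned in the properties above is easy to verify numerically for one quadrant (a sketch; the 1000-sample grid is an arbitrary choice):

```python
import math

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve from its Bernstein form."""
    u = 1 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# One quadrant of the unit circle with inner control points offset by
# k = 4*(sqrt(2) - 1)/3 from the outer ones:
k = 4 * (math.sqrt(2) - 1) / 3
quad = [(1.0, 0.0), (1.0, k), (k, 1.0), (0.0, 1.0)]
max_radial_error = max(abs(math.hypot(*cubic_bezier(*quad, i / 1000)) - 1.0)
                       for i in range(1001))
print(max_radial_error)  # well under one part in a thousand
```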
Constructing Bézier curves
Linear curves
Animation of a linear Bézier curve, t in [0,1]
The t in the function for a linear Bézier curve can be thought of as describing how far B(t) is from P[0] to P[1]. For example when t=0.25, B(t) is one quarter of the way from point P[0] to P[1].
As t varies from 0 to 1, B(t) describes a straight line from P[0] to P[1].
Quadratic curves
For quadratic Bézier curves one can construct intermediate points $\mathbf{Q}_0$ and $\mathbf{Q}_1$ such that as $t$ varies from 0 to 1:
• Point Q[0] varies from P[0] to P[1] and describes a linear Bézier curve.
• Point Q[1] varies from P[1] to P[2] and describes a linear Bézier curve.
• Point B(t) varies from Q[0] to Q[1] and describes a quadratic Bézier curve.
style="border-bottom: 1px solid blue;" ]
Construction of a quadratic Bézier curve Animation of a quadratic Bézier curve, t in [0,1]
Higher-order curves
For higher-order curves one needs correspondingly more intermediate points. For cubic curves one can construct intermediate points $\mathbf{Q}_0$, $\mathbf{Q}_1$ & $\mathbf{Q}_2$ that describe
linear Bézier curves, and points $\mathbf{R}_0$ & $\mathbf{R}_1$ that describe quadratic Bézier curves:
style="border-bottom: 1px solid blue;" ]
Construction of a cubic Bézier curve Animation of a cubic Bézier curve, t in [0,1]
For fourth-order curves one can construct intermediate points Q[0], Q[1], Q[2] & Q[3] that describe linear Bézier curves, points R[0], R[1] & R[2] that describe quadratic Bézier curves, and points S
[0] & S[1] that describe cubic Bézier curves:
style="border-bottom: 1px solid silver;" ]
Construction of a quartic Bézier curve Animation of a quartic Bézier curve, t in [0,1]
Polynomial form
Sometimes it is desirable to express the Bézier curve as a polynomial instead of a sum of less straightforward Bernstein polynomials. Application of the binomial theorem to the definition of the
curve followed by some rearrangement will yield:
$\mathbf{B}(t) = \sum_{j = 0}^n t^j \mathbf{C}_j$
$\mathbf{C}_j = \frac{n!}{(n - j)!} \sum_{i = 0}^j \frac{(-1)^{i + j} \mathbf{P}_i}{i! (j - i)!} = \prod_{m = 0}^{j - 1} (n - m) \sum_{i = 0}^j \frac{(-1)^{i + j} \mathbf{P}_i}{i! (j - i)!} .$
This could be practical if $\mathbf{C}_j$ can be computed prior to many evaluations of $\mathbf{B}(t)$; however one should use caution as high order curves may lack
numeric stability (de Casteljau's algorithm should be used if this occurs). Note that the product of no numbers is 1.
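Since $\frac{n!}{(n-j)!\, i!\, (j-i)!} = \binom{n}{j}\binom{j}{i}$, the coefficients $\mathbf{C}_j$ above can be computed with binomial coefficients; a sketch for one coordinate of the control points (`math.comb` needs Python 3.8+):

```python
from math import comb

def poly_coeffs(ctrl):
    """Monomial coefficients C_j of a Bezier curve (one coordinate)."""
    n = len(ctrl) - 1
    return [comb(n, j) * sum((-1) ** (i + j) * comb(j, i) * ctrl[i]
                             for i in range(j + 1))
            for j in range(n + 1)]

# x-coordinates of a cubic's control points:
cs = poly_coeffs([0, 1, 3, 4])
print(cs)                                           # [0, 3, 3, -2]
print(sum(c * 0.5 ** j for j, c in enumerate(cs)))  # 2.0 = B(0.5)
```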
Rational Bézier curves
The rational Bézier adds adjustable weights to provide closer approximations to arbitrary shapes. The numerator is a weighted Bernstein-form Bézier curve and the denominator is a weighted sum of
Bernstein polynomials
Given n + 1 control points P[i], the rational Bézier curve can be described by:
$\mathbf{B}(t) = \frac{ \sum_{i=0}^n b_{i,n}(t) \mathbf{P}_{i}w_i } { \sum_{i=0}^n b_{i,n}(t) w_i }$ or simply
$\mathbf{B}(t) = \frac{ \sum_{i=0}^n {n \choose i} t^i (1-t)^{n-i}\mathbf{P}_{i}w_i } { \sum_{i=0}^n {n \choose i} t^i (1-t)^{n-i}w_i }.$
See also
• Paul Bourke: Bézier Surfaces (in 3D), http://astronomy.swin.edu.au/~pbourke/curves/bezier/
• Donald Knuth: Metafont: the Program, Addison-Wesley 1986, pp. 123-131. Excellent discussion of implementation details; available for free as part of the TeX distribution.
• Dr Thomas Sederberg, BYU Bézier curves, http://www.tsplines.com/resources/class_notes/Bezier_curves.pdf
• J.D. Foley et al.: Computer Graphics: Principles and Practice in C (2nd ed., Addison Wesley, 1992)
Lab Activity
1. To use the Z-Matrix converter, you must have your z-matrix created already. This program gives you a blank z-matrix form. Your job is to fill in each field appropriately according to the molecule
that you are working with.
2. The first column asks you to define the atom you are describing. Make sure you use the symbol of the molecule along with its number. Your numbers should run down the page in order.
3. The second column asks for the atom you are referencing. For example, when you are identifying the bond distance between two atoms, you must use the second column to identify the second atom
in the bond. This number must be an integer.
4. The third column asks for the bond distance. This number cannot be an integer. You must use a decimal point and must have a zero before the decimal if the bond distance is a fraction. (Example:
.96 is invalid; 0.96 is valid.)
5. The fourth column again asks for the reference atom for the bond angle. Again this must be an integer.
6. The fifth column asks for the bond angle. This too must be include a decimal.
7. The sixth column asks for the third reference atom for the dihedral angle as an integer.
8. The seventh column asks for the dihedral angle in decimal form.
If you would like to see an example, please look at http://www.shodor.org/chemviz/babelex.html.
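Following the column rules above, here is a minimal sketch that assembles and checks z-matrix rows. The molecule is a hypothetical water example; the 0.96 Å distances and 104.5° angle are illustrative values, not taken from the lab:

```python
def zmatrix_row(symbol, index, ref=None, dist=None, aref=None, angle=None):
    """Build one z-matrix line, enforcing the rules above: reference atoms
    must be integers, distances and angles are written with a decimal point."""
    fields = [f"{symbol}{index}"]
    if ref is not None:
        assert isinstance(ref, int), "reference atom must be an integer"
        fields += [str(ref), f"{dist:.2f}"]   # e.g. 0.96, never .96
    if aref is not None:
        assert isinstance(aref, int), "angle reference must be an integer"
        fields += [str(aref), f"{angle:.1f}"]
    return "  ".join(fields)

# Water: oxygen first, then two hydrogens bonded to atom 1,
# with the bond angle measured against atom 2.
rows = [
    zmatrix_row("O", 1),
    zmatrix_row("H", 2, ref=1, dist=0.96),
    zmatrix_row("H", 3, ref=1, dist=0.96, aref=2, angle=104.5),
]
print("\n".join(rows))
```

The printed rows follow columns 1 through 5 of the form; a full row for a fourth atom would also carry the dihedral reference and dihedral angle from steps 7 and 8.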
Wolfram Demonstrations Project
Lewis Carroll's Bilateral Diagram
Aristotelian logic, or the traditional study of deduction, deals with four so-called categorical or subject-predicate propositions, which can be defined by:
S a P ⇔ All S is P (universal affirmative or A proposition),
S i P ⇔ Some S is P (particular affirmative or I proposition),
S e P ⇔ No S is P (universal negative or E proposition),
S o P ⇔ Some S is not P (particular negative or O proposition).
S is called a subject (or minor) term and P is called a predicate (or major) term of a proposition. We could think of S and P as one-place predicates or sets.
The bilateral diagram is a way to understand relations among categorical propositions. It
is a square divided into four smaller square cells: SP, SP', S'P, S'P'. A red counter (or 1) within a cell means that there is at least one thing in it. A gray counter (or 0) within a cell means
there is nothing in it. So we may not put both counters in the same cell. A red counter in the cell SP means "Some S is P"; a gray counter in the cell SP means "No S is P". The red counter in the
upper rectangle means that there is at least one S, symbolically Ex(S) ⇔ S i S. But if we put the counter on the common line of squares SP and SP', we don't know whether the proposition S i P is
true (1) or false (0), so it has value unknown or undetermined (½). The same holds for S o P. Analogously, the value of the proposition S a P is ½ , unless we put the gray counter in the square SP'
(S a P = 1) or in SP (S a P = 0).
A diagram showing the relations among categorical propositions is known as the traditional square of opposition [2, p. 217]. Two propositions are contradictories if one is the negation of the other.
Propositions A and O, and propositions I and E are contradictories. Two propositions are contraries if they cannot both be true. Two propositions are subcontraries if they cannot both be false.
Proposition A is called superaltern, I is called subaltern, and the corresponding relation is called subalternation. The same definitions are applied to E and O [2, pp. 214-217]. Under the assumption
that class S is not empty, propositions A and E are contraries, propositions I and O are subcontraries, and the superaltern implies the subaltern.
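Treating S and P as sets, as the text suggests, the four propositions and the relations among them can be sketched in a few lines (the example sets are illustrative only):

```python
def prop_A(S, P):  # All S is P (universal affirmative)
    return S <= P

def prop_I(S, P):  # Some S is P (particular affirmative)
    return bool(S & P)

def prop_E(S, P):  # No S is P (universal negative)
    return not (S & P)

def prop_O(S, P):  # Some S is not P (particular negative)
    return bool(S - P)

S = {"socrates", "plato"}
P = {"socrates", "plato", "aristotle"}

# A and O are contradictories, as are I and E: one is the negation of the other.
assert prop_A(S, P) == (not prop_O(S, P))
assert prop_I(S, P) == (not prop_E(S, P))

# With S non-empty, the superaltern A implies the subaltern I (subalternation).
assert (not prop_A(S, P)) or prop_I(S, P)
print("all relations hold")
```

Note that subalternation fails for an empty S: `prop_A(set(), P)` is vacuously true while `prop_I(set(), P)` is false, which is exactly why the traditional square requires the non-emptiness assumption.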
[1] L. Carroll, Symbolic Logic and The Game of Logic, New York: Dover, 1958.
[2] I. M. Copi and C. Cohen, Introduction to Logic, 9th ed., New York: Macmillan, 1994, pp. 214-218.
different answers for derivative?
March 1st 2013, 05:24 AM
different answers for derivative?(Solved)
Take this trigonometric function:
$\frac{d\,(\sin{(\cos{\theta})})^2}{d\theta}$
Using the chain rule I think it becomes:
$-2\theta\sin{(\cos{\theta})}\sin{\theta}$
But if we use the identity:
$\sin^2{\theta} = 1-\cos^2{\theta}$
then the equation becomes
$\frac{d(1-\theta^2)}{d\theta} = -2\theta$
But if
$-2\theta = -2\theta\sin{(\cos{\theta})}\sin{\theta}$
then
$\sin{(\cos{\theta})}\sin{\theta} = 1$
Which doesn't appear so. Why am I getting different answers for the same derivative? Did I do something wrong in the above steps?
I accidentally took:
$\cos({\cos\theta}) = \theta$
P.S. This is my first post and using [tex] is VERY slow and irritating, is there a faster way to insert math expressions :confused:
March 1st 2013, 05:46 AM
Re: different answers for derivative?
This part of the post is correct. But none of the rest makes any sense.
It is the case that $\sin^2(\cos(\theta))=1-\cos^2(\cos(\theta))$. But that cannot be simplified.
They both have the same derivative.
March 1st 2013, 05:58 AM
Re: different answers for derivative?
$\cos(\arccos(x)) = x$, but you have $\cos(\cos(\theta))$, which is NOT $\theta$.
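For what it's worth, the correct chain-rule result, $\frac{d}{d\theta}\sin^2(\cos\theta) = -2\sin(\cos\theta)\cos(\cos\theta)\sin\theta$, is easy to check numerically with a central difference (a quick sketch, not from the thread):

```python
import math

def f(theta):
    return math.sin(math.cos(theta)) ** 2

def f_prime(theta):
    # Chain rule applied twice: d/dθ sin²(cosθ) = 2 sin(cosθ)·cos(cosθ)·(−sinθ)
    return -2 * math.sin(math.cos(theta)) * math.cos(math.cos(theta)) * math.sin(theta)

theta = 0.7
h = 1e-6
numeric = (f(theta + h) - f(theta - h)) / (2 * h)  # central difference
print(abs(numeric - f_prime(theta)) < 1e-8)  # the two agree
```

Running the same check at several values of theta shows the analytic and numeric derivatives agree everywhere, which they would not if cos(cosθ) were really θ.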
March 1st 2013, 06:38 AM
Re: different answers for derivative?
I can't believe I was so stupid. I hadn't been doing trig for a long time, so I messed up when I took:
$\cos({\cos\theta}) = \theta$
By the way, any have a answer to my P.S.:
P.S. This is my first post and using [tex] is VERY slow and irritating, is there a faster way to insert math expressions
March 1st 2013, 03:07 PM
Prove It
Re: different answers for derivative?
I can't believe I was so stupid. I hadn't been doing trig for a long time, so I messed up when I took:
$\cos({\cos\theta}) = \theta$
By the way, any have a answer to my P.S.:
P.S. This is my first post and using [tex] is VERY slow and irritating, is there a faster way to insert math expressions
Yes, get good at copying and pasting the preamble, learn the code and type faster :P
Coaxial Tuners Control Impedances To 65 GHz
Device characterization at precisely controlled source and load impedances provides insights into the nonlinear behavior of both low-noise and power transistors. For transistors used in wireless
amplifiers, this capability can reveal the impedance values needed for optimum linearity and highest efficiency, for example.
Having continuous bandwidth from 10 to 65 GHz allows a wideband look at transistor behavior. Having independent control over impedances at fundamental, second-harmonic, and third-harmonic frequencies
makes it possible to evaluate and optimize nonlinear transistors and the effects of harmonic terminations. Fortunately, with the introduction of CCMT-6510 65-GHz coaxial tuner and a line of
biharmonic combination tuners from Focus Microwaves (Dollard des Ormeaux, Quebec, Canada), engineers can now learn about device characteristics not apparent from even the best models.
The CCMT-6510 coaxial millimeter-wave tuner (Fig. 1) provides a minimum VSWR control range of 10.0:1 (and typically 15.0:1) at frequencies from 10 to 65 GHz. It offers better than 40-dB repeatability
with fine stepper-motor-driven control of phase. The phase tuning resolution is 0.076 deg./step at 10 GHz and 0.49 deg./step at 65 GHz. The CCMT-6510 offers 7 million tunable points at 10 GHz and 1.1
million tunable points at 65 GHz.
The CCMT-6510 utilizes the coaxial 1.85-mm V-connector for continuous frequency coverage, in comparison to a waveguide tuner, which is restricted to waveguide-band frequency coverage, such as 33 to
50 GHz or 50 to 75 GHz. Although limited in power-handling capability compared to a waveguide tuner, the continuous bandwidth of the CCMT-6510 allows harmonic tuning at precisely controlled impedance
/phase states. The use of coaxial lines also allows DC bias to be passed along to an active device under test (DUT), such as a transistor or amplifier.
The CCMT-6510 makes use of a high-quality-factor (high-Q) resonator or probe that slides along a low-loss transmission line, under the control of a programmable stepper motor, to achieve different
impedance (VSWR) states. The tuner achieves a reflection factor or gamma of about 0.9 (a gamma = 1 would represent total reflection, with no power delivered to the load) due to small losses in the
transmission line and the probe, which is equivalent to a VSWR of typically about 15.0:1. The tuning resolution of the CCMT-6510 is simply the smallest possible movement offered by the stepper motor,
which is about 3 µm.
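The equivalence between reflection factor and VSWR quoted above follows from the standard relation $\mathrm{VSWR} = (1+|\Gamma|)/(1-|\Gamma|)$; a quick sketch:

```python
def vswr(gamma):
    """Voltage standing-wave ratio for a reflection factor 0 <= |Γ| < 1."""
    return (1 + gamma) / (1 - gamma)

# A gamma of exactly 0.9 would give a VSWR near 19:1; the typical 15.0:1 the
# article quotes corresponds to gamma ≈ 0.875 once the small line and probe
# losses are included.
print(round(vswr(0.875), 1))  # 15.0
```

The formula also makes clear why gamma = 1 (total reflection, no power delivered to the load) corresponds to an unbounded VSWR.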
The CCMT-6510 probe is calibrated on a vector-network analyzer (VNA) at any number of frequencies in a sweep or list mode. The user can select the number of calibration points from 100 to 800. The
measurement software interpolates with better than 40-dB accuracy to millions of tuning points. The calibration per frequency requires about 4 minutes per frequency. The end result of the precision
calibration is a tuning range that is accurate and predictable, as evidenced by measurements of S[11] forward reflection across the full tuning range (Fig. 2). The repeatability of the CCMT-6510 is
also outstanding, regardless of measurement power level (Fig. 3).
The CCMT-6510 coaxial millimeter-wave impedance tuner, which is ideal for noise and load-pull testing, is supported by a 65-GHz through-reflect-line (TRL) calibration kit for proper setup with a
coaxial millimeter-wave VNA from Agilent Technologies (Santa Rosa, CA) or Anritsu Co. (Morgan Hill, CA). The calibration kit includes a delay line, precision shorts, a 50-Ω line, and loads. In
addition, the computer-controlled-microwave-tuner (CCMT) software allows operators to define their own instrument drivers. As a result, test equipment associated with the CCMT-6510, such as signal
generators and VNAs, can be controlled from a personal computer (PC) running the CCMT software.
Harmonic Tuning
The biharmonic combination tuners are available for testing from S to K band. The tuners contain multiple probes or tuning slugs to control the impedance not only at the fundamental frequency, but
also at the second- and third-harmonic frequencies. The probes are sliding resonant circuits connected in parallel to a low-loss transmission line. Depending upon the Class of bias (A, B, C, etc.),
harmonics may have some effect on the behavior of a DUT. A biharmonic combination tuner allows independent impedance tuning at all three frequencies, allowing characterization as a function of three
different impedance states.
For example, short circuits at second- and third-harmonic frequencies with the right phase at the output of a transistor in saturation can improve its gain and power-added efficiency (PAE). In
addition, a short circuit at the second- and third-harmonic frequencies with the right phase at the input of a transistor can improve the device's linearity. Harmonic tuning has the maximum effect
when the DUT is in saturation and generating high levels of harmonics. A device's PAE can be increased by as much as 10 to 35 percent with load harmonic tuning, while linearity can be improved by as
much as 3 to 8 dB with source harmonic tuning.
The effects of harmonic tuning depend on the type of transistor, the power-saturation level, the frequency, and the bias conditions. Because of these variables, it is almost impossible to create an
accurate nonlinear device model to describe the harmonic behavior of a transistor, and harmonic load-pull testing is required to understand a device's behavior under a specific set of conditions.
The biharmonic tuners are able to generate a high reflection factor (between 0.95 and 0.99) at both harmonic frequencies over a 360-deg. phase tuning range. As with the millimeter-wave tuner, the
biharmonic tuners are calibrated on an automatic VNA. At each position of the fundamental set of resonators, all user-defined impedances of the harmonic resonators are calibrated. After this,
second-order polynomial algorithms interpolate between the calibrated points to provide phase errors typically as low as 0.1 to 0.6 deg. and amplitude errors between −40 and −60 dB.
Model 2608-bH is an example of the new biharmonic tuner line. It covers a fundamental/harmonic range of 8 to 26.5 GHz, with a nominal fundamental frequency of 8 GHz, second-harmonic frequency of 16
GHz, and third-harmonic frequency of 24 GHz. Model 1804-bH has a fundamental/harmonic frequency range of 4 to 18 GHz. With a nominal fundamental frequency of 5.25 GHz, the second-harmonic frequency
is 10.5 GHz and the third-harmonic frequency is 15.75 GHz. The multiple-resonator, stepper-motor-driven tuner (Fig. 4) can be controlled or programmed by means of a PC program. Harmonic load-pull
software controls the tuners for independent tuning at fundamental and harmonic frequencies.
In a measure of frequency response (S[11] forward reflection), the 1804-bH shows levels that are within 0.43 dB of the nominal reference level at the second- and third-harmonic frequencies (Fig. 5). The independent tuning control of the 1804-bH is apparent from a linear polar plot at the fundamental and harmonic frequencies (Fig. 6).
The electromechanical biharmonic tuners are compact and sufficiently light in weight for use in on-wafer harmonic load-pull test setups, with installation close to the wafer under test. In addition
to the frequencies noted, the two models above can be tuned to other frequencies by replacing the harmonic resonators. Focus Microwaves, Inc., 1603 St. Regis, Dollard-des-Ormeaux, Quebec, Canada H9B
3H7; (514) 683-4554, FAX: (514) 684-8581, e-mail: info@focus-microwaves.com, Internet: www.focus-microwaves.com.
Scientific Calculator
μCalc Scientific Calculator
The uCalc calculator has a lot of functions including:
* Run Mode
* Polynomial Equation Solver
* Simultaneous Solver
* Statistic Functions
* Convert between number systems (decimal, hex, binary and octal)
* Graph
* Table
* Simple Calculations ( + - * / )
* Modulo ( % )
* Exponential ( x^n )
* Natural logarithm ( e^n and ln )
* Pi
* Absolute value ( abs )
* Trigonometric functions ( sin, asin, sinh, cos, acos, cosh, tan, atan, tanh )
* Average ( avg )
* Ceil ( ceil )
* Floor ( floor )
* Common logarithm ( log )
* Max ( max )
* Min ( min )
* Random ( random )
* Round ( round )
Finally, a calculator that you actually can use on school days!
Yes!!! I needed an app that had a step by step conversion and this is the only one with it! Would have rated 5 stars if it had a bin/hex/dec/oct calculator too. (But if you plan on implementing that too, make sure to do it with step by step.)
Awesome! Awesome app with a lot of functions!
Good app :o Does what it says but force closes sadly :T
What's New
* The keyboard does not appear on run.
* Minor bug fix in Graph and Table.
★ TOP 2013 USEFUL APPS
Best Scientific Calculator application with latest features! This is the best calculator available on Android!
- Copy and paste
- Advanced mathematical functions: trigonometry, logarithms, parenthesis, exponential,...
- History
and more!
★ ★ ★ ★ ★
What are people saying?
"Finally! An awesome scientific calculator for free!" - Chantel
"Simple and efficient! Best calculator ever!" - Jimmy
"Awesome Very nice simple calculator." - Paul
★ ★ ★ If you have any request or question on the app, send us an email: support@jmtapps.com. We respond personally to every single email we receive ★ ★ ★
--- Keywords ---
add, best, substract, calculator, school, back to school, calculate, calculation, calculator, clear, best calculator, sci calculator, scientific, scientific calculator, log, exp, cos, sin, tan
As a powerful emulator of the HP 11C scientific calculator, Vicinno 11C provides all functions of the world-renowned HP 11C RPN scientific calculator for Android. It uses the same mathematics and calculations as the original to give you the same precision and accuracy.
**** Cool Tip: click the upper right logo to see the settings page. ****
★ Features include:
• Hyperbolic and inverse hyperbolic trig functions
• Trigonometric functions (SIN, COS etc.)
• Trigonometric modes: degrees, radians, grads
• Basic statistics (mean, linear estimate, correlation coefficient, linear regression.)
• Program memory with 203 program steps
• Programming with the requisite set of keys for conditional tests, SST, BST, INT, FRAC, GTO and MEM.
• Probability (combinations and permutations)
• Factorial, % change, and absolute value
• Random number generator
• RPN entry
• Programmable
• Haptic and sound key click feedback
• Comma as decimal point option
• Automatically save/restore settings
• Touch the upper right logo to see all settings
• Direct access to support forum from app
• Support Android Tablet
• More
★ Support:
Feel free to contact us if you need any support by visiting www.vicinno.com or simply email us at support@vicinno.com.
★ Stay tuned:
Like us: www.facebook.com/vicinno
Follow us: www.twitter.com/vicinno
MathScript is one of the most powerful scientific calculators available on Android. It encompasses a very powerful solver that can handle almost everything you throw at it. A smart user-interface system allows easy editing of mathematics. MathScript also has a visually attractive function-plotting capability. With MathScript, you can perform almost all of the mathematical tasks you can imagine. It has an optimized interface that makes it easy to work with complex equations.
Some of the many prominent features include:
* full fledged scientific calculator
* saving and restoring computations/calculations
* complete algebra system using SymPy library (integration, differentiation, polynomials, algebra etc.)
* graphical rendering of equations
* function plotting
* python programming (complete suite)
If you like the app, please give us a good rating. If you are not happy with the app, please mail us. We are working hard to improve the interface and a better rating will motivate us to do the same.
Note:: We have very high regard for customer satisfaction. If you have any difficulty, please let us know. If we are not able to rectify in a short time, we will promptly refund your amount.
You can use MathScript as a calculator, function plotter, polynomial manipulation tool, equation solver (algebraic, differential, linear etc.), calculus tool (integration, differentiation, of
arbitrary functions), matrix solver (linear algebra), units converter, finance tool (PMT,PV,FV. etc.), programming shell (complete python package), etc... It also contains interesting math puzzles
that make you learn math in fun way. Train your brain to use mathematics whenever possible,
Many of the features are available only in the advanced computing tools like Matlab*# and Mathematica*#.
Add even half of the above features and the cost of other similar products in the market would easily cross $20. Our aim is to make the Math tools affordable by all. The pricing is maintained to be
low, and will be maintained that way in near future. Give a try and let us know how you feel (feedback button available in Help).
NOTE1:: Please use the in-built tutorials (Menu->"Learn n solve") to learn how to use the software. More than 10 tutorials are designed to cover all basic topics. More will be added.
NOTE2:: First time installation involves downloading three open source libraries (Python, SymPy and JsMath). Please be patient till the software completes the installation.**
WEBSITE:: Please visit us at
KIND REQUEST:: We implement the user suggestions and reviews as soon as possible. Unfortunately, the Android market platform does not provide a mechanism to reply to the market reviews. Please check
out the latest version software and enable auto-update to obtain latest copy. In case you have any suggestions/complaints, please provide a feedback using the application (Menu->Help->Feedback). We
will try to rectify the problem as soon as possible.
***MathScript collects anonymous usage statistics using google analytics. The statistics will be used to customize the keyboard layout and provide advanced features in future versions. Once we have
adequate data, the usage statistics will be displayed in our website***
Ongoing Development::
* Categorized help system
* Accessible code-snippets for quick computation
* Enhanced editor interaction
Coming Soon::
* 3D plotting
* Kids mode (would be impossible to make mistakes)
* Lessons for school kids with solvable questions
* Math Puzzles ( for learning math the fun way )
Planned features:
** Just too many of them. Keep your auto-update on!!
*# Matlab and Mathematica are registered trademarks of MathWorks and Wolfram
A scientific calculator is a type of electronic calculator, usually but not always handheld, designed to calculate problems in science, engineering, and mathematics. Scientific calculators have almost completely replaced slide rules in traditional applications, and are widely used in both education and professional settings.
Modern scientific calculators generally have many more features than a standard four- or five-function calculator, and the feature set differs between manufacturers and models; however, the defining features of a scientific calculator include the following:
**Engineering Mathematics.
**Engineering Science.
**Engineering Mechanics.
**Scientific notation.
**Scientific Calculations.
**floating point arithmetic
**logarithmic functions, using both base 10 and base e
**trigonometric functions (some including hyperbolic trigonometry)
**exponential functions and roots beyond the square root
**quick access to constants such as pi and e
**In addition, high-end scientific calculators will include:
**hexadecimal, binary, and octal calculations, including basic
Boolean math
**complex numbers
**Complex fractions
**statistics and probability calculations and also in Dynamics
**conversion of units
**physical constants
A few have multilingual displays, with some recent models from
Hewlett-Packard, Texas Instruments, Casio, Sharp, and Canon using
dot matrix displays similar to those found on graphing
Attention please, dear users: we have tried to make this as accurate as we can, but if at any point you encounter an error or inaccuracy, feel free to tell us and we will make it fully accurate.
E-Mail- mipapps.inc@gmail.com
★ TOP 2013 USEFUL APPS
Best Scientific Calculator application with latest features.
★ Result history
★ Traditional algebraic or RPN operation
★ Unit conversions
★ Percentages
★ Physical constants table
★ 10 memories
★ General Arithmetic Functions
★ Trigonometric Functions - radians, degrees & gradients - including hyperbolic option
★ Power & Root Functions
★ Binary, octal, and hexadecimal (can be enabled in Settings)
★ Scientific, engineering and fixed-point display modes
★ Configurable digit grouping and decimal point
★ Fraction calculations and conversion to/from decimal
★ Degrees/minutes/seconds calculations and conversion
★ Landscape mode
★ User-customizable unit conversions
★ User-customizable constants
★ Log Functions
★ Modulus Function
★ Random Number Functions
★ Permutations (nPr) & Combinations (nCr)
★ Highest Common Factor & Lowest Common Multiple
★ Statistics Functions - Statistics Summary (returns the count (n), sum, product, sum of squares, minimum, maximum, median, mean, geometric mean, variance, coefficient of variation & standard
deviation of a series of numbers), Confidence Interval, Digamma Function, Exponential Density, Gamma Function, Hypergeometric Distribution, Normal Distribution, Poisson Distribution
★ Conversion Functions: area, distance, volume, weight, density, speed, pressure, energy, power, frequency, magnetic flux density, dynamic viscosity, temperature, heat transfer coefficient, time,
angles, data size, fuel efficiency & exchange rates
★ ★ ★ ★ ★
"Finally! An awesome scientific calculator for free!" - Chantel
"Simple and efficient! Best calculator ever!" - Jimmy
"Awesome Very nice simple calculator." - Paul
★ ★ ★ If you have any request or question on the app, send us an email: support@jmtapps.com. We respond personally to every single email we receive ★ ★ ★
★ You can also download Scientific Calculator + without ads: http://bit.ly/11vrkro
--- Keywords ---
add, best, substract, calculator, school, back to school, calculate, calculation, calculator, clear, best calculator, sci calculator, scientific, scientific calculator, log, exp, cos, sin, tan, sci
calculator, complex, TI 92
Are you lost in a mathematical illusion...This app will simplify all your work !!
Scientific calculator at its best. A scientific calculator with perfect balance of over-featuring and under-featuring in it.
Pi Scientific Calculator is a simple, easy-to-use, and powerful scientific calculator, perhaps the least complex scientific calculator with the most features. Unlike other scientific calculators on the Play Store, it does not multiplex the functionality of a button, to avoid complexity. And the best part is that it's completely free, without any irritating on-screen ads.
Along with all the features of a scientific calculator, this calculator can also evaluate statistical functions, viz. variance, standard deviation, etc., and includes a programmers' calculator.
This has the best programmers calculator on play store.
Using This calculator you can do the following.
1. Trigonometric function calculation. (including Hyperbolic, inverse trigonometric)
2. Standard functions: (Factorial, power function, sqrt, cube root, log, ln, e, factors etc)
3. Statistical Functions: LCM, HCF, Mean, SD, Variance, Product, Sum, Max, Min etc)
4. Basic Functions: Basic arithmetic, modular division.
Note: % button is for modular division
5. Programmers' Calculator: bitwise operations; hexadecimal, decimal, octal, and binary conversion and operations. You can visually see the binary format of the entered number no matter which number system you are using. This is a very convenient and useful feature for programmers and computer science students.
Along with all these features, it has stunning graphics and rich help content. You won't get confused while using this calculator. It is extremely simple, smart, and equally powerful. It's the best calculator on the Play Store yet.
This is the first version of this app, so we ask you to give us your valuable feedback and suggestions.
You can write us at pi@hdnasoftware.com
Have a great learning and problem-solving experience with this awesome scientific calculator, the best yet on the Play Store.
Keywords: Scientific Calculator, Statistical Calculator, Programmers Calculator.
Graphing calculator with algebra. Essential tool for school and college. Replaces bulky and expensive handheld graphing calculators.
Multiple functions on a graph, polar graphs, graphing of implicit functions, values and slopes, roots, extremes, intersections. Algebra: polynomials, polynomial equation solving, matrices, fractions,
derivatives, complex numbers and more. Shows results as you type. Use menu to switch between modes.
Free version requires internet connection and contains ads! $5.99 in-app upgrade to PRO. Contact us if you need multiple licenses for a school or organization.
Help site with instructions and examples: http://help.mathlab.us
If you have a question, send email to calc@mathlab.us
* arithmetic expressions +,-,*,/,÷
* square root, cube and higher roots (hold root key)
* exponent, logarithms (ln, log)
* trigonometric functions sin π/2, cos 30°, ...
* hyperbolic functions sinh, cosh, tanh, ... (hold "e" key to switch)
* inverse functions (hold direct function key)
* complex numbers, all functions support complex arguments
* derivatives sin x' = cos x, ... (hold x^n key)
* scientific notation (enable in menu)
* percent mode
* save/load history
* multiple functions graphing
* implicit functions up to 2nd degree (ellipse 2x^2+3y^2=1, etc.)
* polar graphs (r=cos2θ)
* parametric functions, enter each on new line (x=cos t, y=sin t)
* function roots and critical points on a graph, tap legend to turn on and off (top left corner), use menu to display as a list
* graph intersections (x^2=x+1)
* tracing function values and slopes
* scrollable and resizable graphs
* pinch to zoom
* fullscreen graphs in landscape orientation
* function tables
* save graphs as images
* save tables as csv
* simple and complex fractions 1/2 + 1/3 = 5/6
* mixed numbers, use space to enter values 3 1/2
* linear equations x+1=2 -> x=1
* quadratic equations x^2-1=0 -> x=-1,1
* approximate roots of higher polynomials
* systems of linear equations, write one equation per line, x1+x2=1, x1-x2=2
* polynomial long division
* polynomial expansion, factoring
* matrix and vector operations
* dot product (hold *), cross product
* determinant, inverse, norm, transpose, trace
* user defined constants and functions (PRO)
* save/load expressions
It is just a simple calculator for students, scientists, or anybody else. It uses Reverse Polish Notation to evaluate formulas.
It also has many mathematical functions like sin, cos, log, etc...
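Reverse Polish Notation places operators after their operands, so `3 4 + 2 *` means `(3 + 4) * 2` and no parentheses are ever needed. A minimal stack-based evaluator (an illustrative sketch, not this app's code):

```python
def eval_rpn(expr):
    """Evaluate a space-separated RPN expression, e.g. "3 4 + 2 *" -> 14.0."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for token in expr.split():
        if token in ops:
            b = stack.pop()          # the second operand sits on top
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(eval_rpn("3 4 + 2 *"))  # 14.0
```

Each number is pushed onto a stack and each operator pops its two operands, which is exactly the entry model of classic RPN calculators.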
Powerful calculator for you:
- Basic arithmetical operations.
- Advanced functions.
- Integrate.
- Solve equation.
- Plot function.
+ Version 1.0.1: change interface.
+ Version 1.0.6: Fix layout, more function, change the way to work, graphics was supported.
+ Version 1.1.0: Change interface and better support multitouch.
+ Version 2.0.0: Solve 3rd order equation, solve 3rd order linear equations, statistic supported.
IF YOU LIKE THIS APP, PLEASE DONATE FOR US THROUGH PAYPAL ACCOUNT myhoangthanh@yahoo.com. We're kindly for all your support.
Absolutely free. Advanced education & financial app. sin, cos, tan values at your fingertips. Now updated with log, binary & random values.
This is a high quality, and very sleek and easy to use advanced scientific calculator designed for the Android phone. It is intended for students taking 1010 and above college-level math (and or
science / engineering) course(s), AP High School Math, or anyone who requires a scientific calculator on hand without the burden of actually carrying one around. Our advanced scientific calculator
includes the following functions:
scientific notation
floating point arithmetic
logarithmic functions, using both base 10 and base e
trigonometric functions (some including hyperbolic trigonometry)
exponential functions and roots beyond the square root
quick access to constants such as pi and e
hexadecimal, binary, and octal calculations, including basic Boolean math
complex numbers
statistics and probability calculations
equation solving
letters that can be used for spelling words or including variables into an equation
conversion of units
physical constants
Don't like ads? Go Pro! A link to our ad-free version can be found on our website listed below!
Scientific Calculator
Good for all basic scientific calculations.
Trigonometric, modulus based calculation.
Simple, fast, efficient calculator that can run on literally all Android devices.
Use it when storage and RAM are at a premium.
The calculator has very low memory and processor usage and performs most of the functions of a scientific calculator.
Feel free to give feedback and suggestions.
Scientific Calculator is the most elegant and powerful calculator ever designed for your devices, combining mathematical precision with a modern, sleek, flat user interface.
Once you try Scientific Calculator, you will wonder why this wasn't done long ago.
Scientific Functions: Degrees, Radians, Gradians & Hyperbolic, Sine, Cosine, Tangents, Secants, Cosecants, Cotangents + Inverse Functions. Base 10 and Natural Logarithms, Power and Root functions, 1/x, Factorials, Memory Functions (MS, M+, M-, MR and MC) + miscellaneous constants (e.g. pi, e, g and c).
√ Elegant & intuitive interface
√ Math Formula Display
√ Displays both the equation & the result at the same time
√ Supports many scientific functions
√ Solving system of linear equations
Online Scientific Calculator software app is an ongoing project and we are working on more features and functions to be added to every update of the app.
*** apps require an Internet connection to work.
Scientific Calculator includes the following features:
* Traditional algebraic or RPN operation
* Result history
* Math functions (sin, cos, tan, etc.)
* Percentages
* Binary, octal, and hexadecimal
* Large input/output display
* Support for PMT Excel function
This is the application for doing all simple mathematical calculations, from basic calculations to complex calculations like trigonometric functions, permutations & combinations, etc.
More from developer
A calculator that can convert any number from decimal, binary, hexadecimal, octal to decimal, binary, hexadecimal or octal. It can also take negative numbers.
The calculator shows you how to convert! Step by step!
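The advertised step-by-step conversion is, at bottom, the repeated-division algorithm. The sketch below is illustrative (it is not the app's actual code, and the names are my own); it converts a decimal integer to any base from 2 to 16 while logging each division step.

```python
DIGITS = "0123456789ABCDEF"

def to_base(n, base):
    """Convert a (possibly negative) integer to base 2..16 by repeated
    division, logging each step the way a 'show your work' converter
    would. Illustrative sketch only, not the app's actual code."""
    if not 2 <= base <= 16:
        raise ValueError("base must be between 2 and 16")
    sign, n = ("-", -n) if n < 0 else ("", n)
    steps, digits = [], []
    if n == 0:
        digits.append("0")
    while n:
        q, r = divmod(n, base)
        steps.append(f"{n} / {base} = {q} remainder {r}")
        digits.append(DIGITS[r])
        n = q
    return sign + "".join(reversed(digits)), steps

value, work = to_base(-26, 2)
print(value)          # -11010
for line in work:
    print(line)       # 26 / 2 = 13 remainder 0, and so on
```

Reading the remainders in reverse order gives the digits of the result, which is exactly the "step by step" trace such a converter displays.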
This application lets you look things up in the Norwegian encyclopedia Wikipedia directly from your mobile phone!
You can save your favorites, so you can easily look them up again the next time you are hungry for knowledge.
This application lets you look things up in the Danish encyclopedia Wikipedia directly from your mobile phone!
You can save your favorites, so you can look them up again the next time you are eager for knowledge.
This application lets you search the Bulgarian encyclopedia Wikipedia directly from your mobile phone!
You can save your favorites, so you can easily find the articles you have read.
Here is Solo's popular training program. For all of you with a sagging belly and small muscles! Come on! Pick up the pace and live life on the front foot! Get started with your training today! It's all about staying active, every day. A little each day goes a long way!
Suitable for both sexes!
uCalc has a lot of functions:
* Run Mode
* Polynomial Equation Solver
* Simultaneous Solver
* Statistic Functions
* Convert between number systems
* Graph
* Table
* Simple Calculations ( + - * / )
* Modulo ( % )
* Exponential ( x^n )
* Natural logarithm ( e^n and ln )
* Pi
* Absolute value ( abs )
* Trigonometric functions
* Average ( avg )
* Ceil ( ceil )
* Floor ( floor )
* Common logarithm ( log )
* Max ( max )
* Min ( min )
* Random ( random )
* Round ( round )
Finally, a calculator that you actually can use on school days!
Give a tip if the service was acceptable. Many employees in service industries have very low salaries. Without tips, they will find it difficult to make ends meet.
The generally accepted tip rate is somewhere between 15% and 20%.
If the service was terrible, you should not give a tip, but rather alert the management. If the service was slow, you should tip 10%. If the service was OK you should tip 15%, and if the service was good you should tip 20%.
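The guideline above maps directly to a small lookup table. The sketch below is illustrative (the names are my own); the thresholds are exactly the ones stated in the text.

```python
def tip_rate(service):
    """Suggested tip rate per the guideline above: terrible -> no tip
    (alert the management instead), slow -> 10%, ok -> 15%, good -> 20%."""
    rates = {"terrible": 0.0, "slow": 0.10, "ok": 0.15, "good": 0.20}
    if service not in rates:
        raise ValueError(f"unknown service rating: {service!r}")
    return rates[service]

def total_with_tip(bill, service):
    """Bill plus tip, rounded to cents."""
    return round(bill * (1 + tip_rate(service)), 2)

print(total_with_tip(40.00, "good"))  # 48.0
```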
Applying to the Police University College (Politihøgskolen)? The Bli Politi application helps you reach your goal by acting as your training partner. The application contains a dynamic training program. Save your results and see how you measure up against the admission requirements.
Bli Politi has programs for strength training, swimming, diving, underwater swimming, and running. It is important that you focus on all the test exercises so that you are able to get through the admission tests! The Bli Politi application will help you with this!
Feel free to consult a personal trainer before you start training. A personal trainer will show you the correct techniques, so you can avoid injuries. Proper warm-up is also a preventive measure against injuries.
All training is at your own risk.
A personal training diary for those applying to the Police University College who want to pass the physical entrance tests.
The application focuses on strength, conditioning, and swimming. The programs stored in the application are dynamic, so they change according to how many weeks you have trained.
Each day has its own program, so you can easily get into good shape for all the different tests:
- Monday focuses on strength training with many repetitions
- Tuesday is long-distance endurance training
- Wednesday is strength training with heavy weights
- Thursday is used for interval training
- Friday you push yourself in the weight room
- Saturday is a day in the pool
- Sunday can be used for a conditioning test
Go through the program with your personal trainer to make any necessary adjustments. It is important to develop good technique from the start, so don't be afraid to ask for advice. Technique is very important for avoiding injuries when the weights get heavier. Remember proper warm-up too!
State on classical space
June 22nd 2011, 10:58 PM
State on classical space
I am looking for an example of a state (positive linear functional of norm 1) on the space $C(\mathbb{T}^2)$ of continuous functions on the torus $\mathbb{T}^2$.
My first guess would be something like
$\omega(f)=\int^{2\pi}_{0}\int^{2\pi}_{0}f(x,y)\,dx\,dy$
with norm
$\|\omega\|=\sup_{f\in C(\mathbb{T}^2)}\left|\int^{2\pi}_{0}\int^{2\pi}_{0}f(x,y)\,dx\,dy\right|$
with $f\in C(\mathbb{T}^2)$. However, if we take the function $h$ to map any element of the torus identically to 1, the above definition would imply that $\omega(h)=4\pi^2\neq 1$, so this idea does not work. Any ideas?
June 23rd 2011, 07:01 PM
Re: State on classical space
What text are you using?
June 23rd 2011, 10:33 PM
Re: State on classical space
Following your ideas, |w(f)| < 4(pi)^2 ||f|| where the last norm is the one in your space so what about v(f)=w(f)/4(pi)^2?
June 23rd 2011, 11:36 PM
Re: State on classical space
I don't think that will work. It will be fine for that particular function, but we can just as well consider a function that maps any element of the torus to say 2, then the integral will have
twice that value. I think I have an idea how to construct such a state, will post it if it works.
June 24th 2011, 10:22 AM
Re: State on classical space
I was assuming there was a typo in your definition of the norm of the functional since by linearity that quantity will never be bounded. So I assumed you meant $\| \omega \| = \sup_{\| f \| =1} | \omega (f)|$ and for this definition my example does work.
Graphs of Trigonometric Functions
We begin by considering the general formula for the graphs of sine and cosine.
The general formula for sine: y = Asin(Bx - C) + D.
The general formula for cosine: y = Acos(Bx - C) + D.
Let’s begin by labeling the various components.
A: amplitude of the function = (max - min)/2 (vertical stretch of the graph)
B: stretch/shrink along the x-axis (it compresses or expands the graph)
B has the following relationship with the period:
B = 2π/P, where P is the period. Solving for P, we have P = 2π/B.
C/B: the phase shift of the graph (a shift left if C/B is negative, or right if C/B is positive)
D: the vertical shift of the graph.
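As a quick check on these relationships, here is a small helper (hypothetical, for illustration only) that extracts the four features from the coefficients of y = Asin(Bx - C) + D:

```python
import math

def sinusoid_features(A, B, C, D):
    """Features of y = A*sin(B*x - C) + D, using the relations above:
    period P = 2*pi/B and phase shift C/B (positive means shift right).
    The same relations hold for cosine."""
    return {
        "amplitude": abs(A),
        "period": 2 * math.pi / B,
        "phase_shift": C / B,
        "vertical_shift": D,
    }

# Example 5 from the text: y = 3cos(2x - pi) + 1
f = sinusoid_features(A=3, B=2, C=math.pi, D=1)
print(f["period"])       # pi: the graph repeats twice on [0, 2*pi]
print(f["phase_shift"])  # pi/2: a shift right by pi/2
```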
Let’s look to see what these different values do to the graph of sine. Below is the graph of y = sin(x) on the interval [0, 2π].
Figure 1: Graph of y = sin(x).
Let us consider what the graph of y = Asin(Bx) on the interval [0, 2π/B] looks like.
Figure 2: Graph of y = Asin(Bx).
Notice that the graph looks precisely the same. The only difference is the range of the function is now [-A, A] instead of [-1, 1] and the period is 2π/B instead of 2π.
Let’s look at some examples of the two graphs imposed on top of each other, to see exactly what the changes described above do to the graphs.
Example 1: Consider the graphs of f(x) = sin(x) and g(x) = sin(2x).
Figure 3: Graphs of f(x) = sin(x) and g(x) = sin(2x).
Here, we notice that the two graphs are the same, except for the fact that g(x) has period equal to 2π/2 = π. Thus, on the interval [0, 2π], the graph of g(x) will repeat twice while f(x) will only
show one period.
Example 2: Consider the graphs of f(x) = sin(x) and g(x) = 2sin(x).
Figure 4: Graphs of f(x) = sin(x) and g(x) = 2sin(x).
Here, we notice that the two graphs are the same, except for the fact that g(x) has an amplitude twice as large as f(x). That translates to a vertical stretch by a factor of 2.
Example 3: Consider the graphs of f(x) = sin(x) and .
Figure 5: Graphs of f(x) = sin(x) and .
Here, we notice that the two graphs are the same, except for the fact that g(x) has an amplitude twice as large as f(x). That translates to a vertical stretch by a factor of 2.
Things get a bit more complicated once we introduce shifts into the graph.
Example 4: How do the graphs of f(x) = sin(x + π/2) and g(x) = cos(x) compare?
We begin by graphing f(x). Notice that this is the same graph as sin(x), except that we have shifted every point to the left by π/2.
The following graph has f(x) (solid blue) and the original graph sin(x) (dashed black).
Figure 6: Graphs of f(x) = sin(x + π/2) and sin(x).
Next, we graph g(x) = cos(x).
Figure 7: Graph of g(x) = cos(x).
Observe that f(x) and g(x) are the same graph. That means that sin(x) and cos(x) are essentially the same graph, and differ only by a phase shift of π/2.
Now, let’s look at a really complicated graph.
Example 5: Graph y = 3cos(2x - π) + 1.
We start by noting what A, B, C and D are in the above equation.
A = 3, B = 2, C = π and D = 1.
So, that means that we are going to graph cos(x), but with some modifications.
(i) The graph is stretched vertically by a factor of 3.
(ii) The period is now 2π/2 = π instead of 2π.
(iii) There is a phase shift of π/2 to the right.
(iv) There is a vertical shift of 1 up.
So, let us handle these in pieces. We begin by graphing 3cos(2x), since it will look the same as cos(x), only with some changes to the labeling of the x- and y-axes.
Figure 8: Graph of y = 3cos(2x).
Now, we take into account the phase shift, which moves the graph to the right by π/2.
Figure 9: Graph of y = 3cos(2x - π).
Lastly, we take into account the vertical shift, which moves the graph up 1, which gives us our final answer.
Figure 10: Graph of y = 3cos(2x - π) + 1.
Example 6: Match the following equations with their graphs. Give reasons for your answers.
(a) corresponds with (i). The graph starts at the origin, has period equal to 2π/(1/2) = 4π, and has no amplitude shift, phase shift, or vertical shift.
(b) corresponds with (v). We see that the graph starts at the origin, the period is equal to 2π/2 = π, and the amplitude is 1/2.
(c) corresponds with (ii). This looks like the graph of sin(x), but that is not an option. Notice, though, that since sin(x) and cos(x) are related by a phase shift of π/2, (ii) is the only likely
choice, since there are no amplitude, vertical or period shifts.
(d) corresponds with (iii). The graph starts at 1/2 (which is the amplitude) and has period equal to 2π/2 = π.
(e) corresponds with (vi). Notice that y = sin(2x - 2π) is the graph of y = sin(2x), shifted to the right by 2π/2 = π. But the period of y = sin(2x) is 2π/2 = π. So, the phase shift does not affect the look of the graph. Thus, we look for the graph that looks like y = sin(2x).
(f) corresponds with (iv). Notice that the graph is not symmetric about the x-axis, so there is a vertical shift. Since more of the graph lies below the x-axis, we have that it is a vertical shift down. The period is unaffected, but the amplitude is equal to (2 - (-4))/2 = 3.
gerstner waves [Archive] - OpenGL Discussion and Help Forums
06-01-2010, 11:21 AM
What I dislike about the Gerstner wave algorithm is that it displaces the point you want to calculate the height of the wave at. It is possible to find such a point that it will be displaced exactly
into the point you want the height to be calculated in, but this is cumbersome. Otherwise, the heightmap grid needs to be irregular. What approach do you use to calc the Gerstner wave heightmap?
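For context, the textbook (trochoid) form of a single Gerstner wave displaces a sample point both horizontally and vertically, which is exactly why querying the height at a fixed world position is awkward. A 2-D sketch follows; the parameter values are made up, and conventions (sign, steepness scaling) vary between implementations:

```python
import math

def gerstner(x0, t, amplitude, wavelength, speed, steepness):
    """One 2-D Gerstner wave in its textbook trochoid form: the
    sample point x0 is displaced horizontally as well as vertically,
    so the returned height h belongs to the displaced x, not to x0."""
    k = 2 * math.pi / wavelength              # wavenumber
    phase = k * (x0 - speed * t)
    x = x0 - steepness * amplitude * math.sin(phase)  # horizontal displacement
    h = amplitude * math.cos(phase)                   # vertical displacement
    return x, h

# Made-up parameters; with steepness 0 the horizontal displacement
# vanishes and the wave degenerates to an ordinary heightfield.
x, h = gerstner(x0=1.0, t=0.0, amplitude=0.5, wavelength=8.0,
                speed=2.0, steepness=0.6)
print(x, h)
```

Note that `(x, h)` is a parametric curve in `x0`: inverting it to get `h` as a function of `x` is the cumbersome step the post complains about.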
The Official Terry Pratchett Forums
"Bridge? You mean that card game for old ladies?" some of you may ask. Yes, that's the feeling one gets when one sees bridge in a movie, yet bridge is much more than that. Tournament bridge actually
is the only card game in which card luck plays no role at all.
"Now how is that possible?" you may ask. "The cards will, after all, be random, won't they?"
Indeed, the cards are randomly dealt, yet everybody plays the same deals.
"Ok, but at a table the cards the players hold are still different, and hence they will favor one pair or the other".
Ah, but here comes the little trick: it is the mode of the tournaments that does it. Basically there are two different kinds of tournament modes: pair tournaments and team tournaments. Let us start
with the team tournaments; it is easier to explain there.
In team tournaments a team consists of two pairs, who play against another team. Let's call them team A and B.
Now pair 1 of team A will play several boards (a board is a single game of bridge; it is called that way because the cards are predealt and put into special boards, so the deal can easily be
transported from table to table) against pair 1 of team B. The second pairs also play against each other, but with cards reversed, so that on the whole both teams get the same cards. Now you only have
to compare the results from the two tables, et voila!
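The team-match comparison described above can be sketched in a few lines. The scores below are illustrative duplicate-bridge style numbers of my own invention, and the conversion of swings to IMPs (how real matches are scored) is omitted:

```python
def team_match_swings(boards):
    """Each entry is (score at table 1, score at table 2), both taken
    from team A's point of view. Because the deal is replayed with the
    cards reversed at the other table, the sum of the two scores is
    A's net swing on that board -- both teams held the same cards, so
    card luck cancels out. (Conversion of swings to IMPs is omitted.)"""
    return [t1 + t2 for t1, t2 in boards]

# Illustrative scores, not from a real match: on board 1, A's pair bid
# and made a game (+620); B's pair at the other table stopped in a
# partscore, so A concedes only -170 there. Board 2 is flat.
print(team_match_swings([(620, -170), (-100, 100)]))  # [450, 0]
```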
With pair tournaments it is a bit different, but basically the same. Since it would usually take too long to have all pairs coupled against each other only a few rounds will be played with a certain
number of boards each, then the pairs will get up, go to another table and face other opponents in the next round.
Now it is clear that the same method cannot quite be applied, so for pair tournaments there are two rankings, and each pair tournament has two winning pairs, one for the N/S holdings and one for the
E/W holdings (the seats the four players sit in are named after the points of the compass). Now all the pairs of the N/S holdings will be compared against each other, and all the pairs of the E/W
holdings too. They will all have held the same cards, and again card luck is completely out of the question.
If that sounds complicated to you: the game is a lot more complicated, as complicated as chess, and indeed many chess players also play bridge.
I've always wanted to play bridge, but there seem to be so many variations of it. Which one is a good one to try first?
What is commonly known as bridge and is played in tournaments is contract bridge. Jean and I are quite good at it; good enough at least to be official commentators of Bridgebase Online, the best
bridge site on the web, where many world-class players play and many tournaments are being broadcast. It is these tournaments that we commentate (among others, of course, and sometimes a
world-class player will be a commentator too). Sometimes 5 or 6 tournaments are being broadcast at the same time, so lots of commentators are needed.
I used to play bridge with my father, from about my 17th to my 26th year (I moved then). We did well together, and quickly became an 'Angstgegner' (a dreaded opponent) to many pairs in the club. It's a great game,
combining logic and a bit of maths with reading people and situations. There's a lot a person's body language can tell you in all phases of the game.
What I like about Bridge, compared to chess and checkers, is that it's fast-moving. In chess, one mistake far in the game often means it's over and done with. If you make a mistake in one hand of
bridge, you can make up your losses by doing well on the next.
Most important is finding a partner that you get along with, with the same goals as you. There were some notorious people at the club, switching partners every other week. Or the opposite: married
couples playing together, criticizing each others' every move.
Card Games ....
I have played whist and bridge; I prefer whist. Mind you, I was taught by my in-laws shortly before I married their daughter. My future father-in-law would jump from "2 hearts" to "4 no trump" just
for the hell of it, I think. Never really did get the hang of it.
Have you ever heard of Tarabish? In Canada it's played mostly on Cape Breton Island. My wife and I were introduced to it while staying at a B&B one summer. Sort of a cross between whist and bridge.
You play with an abridged deck, get dealt HALF your hand, do the bidding, and THEN get the rest of your hand. Very lively game, lots of fun, especially with enough rum.
What ??
So much to take in, isn't there? I'm going to see if I can find a bridge club in my area that is willing to teach me one version of the game. It looks complicated, but I'm willing to give it a shot.
Especially if it involves rum!
1993 New Car Data
Robin H. Lock
St. Lawrence University
Journal of Statistics Education v.1, n.1 (1993)
Copyright (c) 1993 by Robin H. Lock, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author
and advance notification of the editor.
Key Words: Cars; Classroom data; Dataset; Introductory statistics.
The 93CARS dataset contains information on 93 new cars for the 1993 model year. Measures given include price, mpg ratings, engine size, body size, and indicators of features. The 26 variables in the
dataset offer sufficient variety to illustrate a broad range of statistical techniques typically found in introductory courses.
1. Introduction
1 The 1993 New Car data was inspired by a similar dataset for 1989 model cars which has been included among the sample data for the Student Edition of Execustat (PWS-KENT 1990). We have used
Execustat's CARS89 data to demonstrate many points in both introductory and second level courses in applied statistics. In what follows we give a brief description of the updated and expanded 93CARS
dataset and suggest several ways it might be used in class.
2. Data Sources
2 Data were obtained from two sources, The 1993 Cars - Annual Auto Issue from Consumer Reports and the PACE New Car & Truck 1993 Buying Guide . Passenger cars or vans that were included in both
sources were eligible for selection. A random sample of models given in the PACE Buying Guide was chosen and matched to cars covered in the Consumer Reports issue until a desired sample size of 93
was reached. Vehicles in the Pickup Truck and Sport/Utility category were excluded due to incomplete information in the Consumer Reports article. We also avoided multiple inclusions of cars which
were essentially the same model (such as the Dodge Shadow and Plymouth Sundance).
3. Description of the Data
3 Each data case starts with the MANUFACTURER (e.g., Chevrolet, Audi, Honda,...), MODEL (e.g., Caprice, 90, Accord,...), and a TYPE (Small, Sporty, Compact, Midsize, Large, Van). The types were
determined by the Consumer Reports classifications. The other 23 variables are all numeric. Three PRICE variables give a "minimum" cost for a basic model, a "maximum" for a model equipped with lots
of options, and a "midrange" as the average of the two extremes. The EPA fuel efficiency ratings are given as both CITY and HIGHWAY miles per gallon (MPG).
4 Several measures reflect relative size and power of the standard engine. These include the number of CYLINDERS, engine displacement SIZE (in liters), maximum HORSEPOWER, and the revolutions per
minute (RPM) at which the maximum power is achieved. A measure that might be less familiar to most students is the number of REVOLUTIONS of the engine needed for the car to travel one mile in its
highest gear (automatic transmission).
5 Indications of each car's size are the LENGTH, WIDTH, WHEELBASE, U-TURN diameter, REAR seat room, LUGGAGE capacity, and size of the FUEL tank. Car weights differed somewhat between the two data
sources. We used the WEIGHT given by Consumer Reports which included a full gas tank and air-conditioning, if available.
6 Other variables note the presence of standard AIR BAGS (driver or passenger), the type of DRIVETRAIN (front-wheel, rear-wheel, or all-wheel), and an option for a MANUAL transmission. A final
variable categorizes the manufacturer as domestic (U.S.) or foreign, although this distinction is becoming less and less clear.
7 The only missing values are for CYLINDERS in the rotary engine Mazda RX-7, REAR SEAT room for the two-seaters (Corvette and RX-7), and LUGGAGE capacity for the vans and two-seaters.
8 A detailed key to the variables in the file can be found in the Appendix and the 93cars.txt file which is available in the data archives.
4. Pedagogical Uses
9 This is a multi-purpose dataset which can be used at many points in a course. We have often used Execustat's similar CARS89 data as an initial example for demonstrating the statistical package to
students in the second week of an introductory course. This class typically is held in a classroom equipped with a computer and projection system, with the instructor "driving" the software. Despite
having only studied some descriptive techniques, students are easily drawn into a discussion of the interesting features of the data. They tend to be familiar with most of the variables (and specific
car models). They anticipate relationships between the variables, are quick to generate both questions and explanations, and enjoy guessing at the identity of outliers in the plots. Inevitably, the
class period ends long before the stream of questions is exhausted.
10 In addition to numerous good numeric variables, the data provide several interesting options for dividing cars into different comparison groups (e.g., by DOMESTIC, TYPE, AIRBAGS, DRIVETRAIN, or
MANUAL transmission). Most of our early analyses use only basic summary statistics or graphics, yet a discussion of side-by-side boxplots of highway MPG for domestic vs. foreign manufacturers lays a
good foundation for later, more formal, work on testing hypotheses. As those techniques are subsequently developed, we can continue to come back to the car data -- establishing a familiar thread that
can run throughout the course. We don't always have to be "on-line" in a computer session to use the data. Often a few summary statistics may be all that are required to motivate an example. Students
can also be encouraged to do their own independent explorations.
11 As one might expect, there are numerous relationships among the variables which provide excellent examples for discussing scatterplots, correlation, and regression techniques. One can easily find
pairs of variables which demonstrate strong or weak, positive or negative associations. PRICE and MPG variables tend to be popular choices as "dependent" variables in studying regression models,
although students need to exhibit some care in approaching multiple regression situations since many of the potential predictors are often highly correlated among themselves.
12 We conclude by suggesting some specific ways the data may be used to illustrate certain topics. A little time spent exploring the data will quickly stimulate additional possibilities.
13 Box-whisker plot: PRICE or MPG variables give good examples of somewhat skewed data with potential outliers among the upper fences.
14 Small sample confidence interval for a mean: Look at HPW or RPM within one TYPE of car. Different students may be assigned different TYPEs.
15 Difference in means between two independent samples: Compare PRICE levels between DOMESTIC and FOREIGN cars. Also watch out for significant differences in the variances between these two groups.
16 One-way ANOVA for difference in means: Check out city MPG ratings between the three DRIVETRAIN categories.
17 Contingency table: Construct and analyze a two-way table of AIRBAGS by FOREIGN/DOMESTIC.
18 Scatterplot: Plot PRICE by MPG. Identify any unusual points.
19 Regression/correlation: REVOLUTIONS per mile tends to be a less familiar variable to the students. Investigate relationships and/or build models for REVOLUTIONS based on a restricted subset of
other engine or body size variables.
20 An exam question: Provide computer output for investigating the relationship between number of CYLINDERS and MPG using only 4, 6, and 8 cylinder cars. Have students interpret the results of both a
one-way ANOVA and a simple linear regression. Which approach is more appropriate?
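Any of the comparisons above can be run in a stats package; for illustration, here is a dependency-free sketch of Welch's two-sample t statistic (the domestic-vs-foreign price comparison of paragraph 15). The samples below are small made-up numbers, not the actual 93CARS PRICE values:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances (as suggested for the domestic-vs-foreign
    price comparison above)."""
    def mean(x):
        return sum(x) / len(x)
    def var(x):  # sample variance, denominator n - 1
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    se = math.sqrt(var(a) / len(a) + var(b) / len(b))
    return (mean(a) - mean(b)) / se

# Toy midrange prices in $1,000 -- NOT the actual 93CARS values.
domestic = [13.4, 11.1, 15.9, 20.8, 12.1]
foreign = [23.3, 18.8, 26.1, 8.4, 29.5]
print(round(welch_t(domestic, foreign), 2))
```

For a real analysis one would read the full PRICE column from 93cars.dat.txt and compare the statistic to a t distribution with Welch-adjusted degrees of freedom.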
5. Getting the Data
21 The file 93cars.dat.txt contains the raw data. The file 93cars.txt is a documentation file containing a brief description of the dataset.
Appendix - Key to Variables in 93cars.dat.txt
Line 1 Columns
• 1 - 14 Manufacturer
• 15 - 29 Model
• 30 - 36 Type Small, Sporty, Compact, Midsize, Large, Van (as defined in the Consumer Reports article)
• 38 - 41 Minimum Price (in $1,000) - Price for basic version of this model
• 43 - 46 Midrange Price (in $1,000) - Average of Min and Max prices
• 48 - 51 Maximum Price (in $1,000) - Price for a premium version
• 53 - 54 City MPG (miles per gallon by EPA rating)
• 56 - 57 Highway MPG
• 59 - 59 Air Bags standard 0 = none, 1 = driver only, 2 = driver & passenger
• 61 - 61 Drive train type 0 = rear wheel drive 1 = front wheel drive 2 = all wheel drive
• 63 - 63 Number of cylinders
• 65 - 67 Engine size (liters)
• 69 - 71 Horsepower (maximum)
• 73 - 76 RPM (revs per minute at maximum horsepower)
Line 2 Columns
• 1 - 4 Engine revolutions per mile (in highest gear)
• 6 - 6 Manual transmission available 0 = No, 1 = Yes
• 8 - 11 Fuel tank capacity (gallons)
• 13 - 13 Passenger capacity (persons)
• 15 - 17 Length (inches)
• 19 - 21 Wheelbase (inches)
• 23 - 24 Width (inches)
• 26 - 27 U-turn space (feet)
• 29 - 32 Rear seat room (inches)
• 34 - 35 Luggage capacity (cu. ft.)
• 37 - 40 Weight (pounds)
• 42 - 42 Domestic? 0 = non-U.S. manufacturer 1 = U.S. manufacturer
Values are aligned and delimited by blanks.
Missing values are denoted with *.
There are two data lines for each case.
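The fixed-width key above maps directly onto string slices (a 1-indexed inclusive column range `start - end` becomes the Python slice `line[start-1:end]`). A minimal parsing sketch follows; only a few of the 26 variables are extracted, and the two-line record is fabricated for illustration, not a real data row:

```python
def parse_case(line1, line2):
    """Parse one two-line case of 93cars.dat.txt using the column
    key above (1-indexed inclusive columns become Python slices).
    Only a few of the 26 variables are extracted here; '*' marks
    a missing value."""
    def num(field):
        field = field.strip()
        return None if field == "*" else float(field)
    return {
        "manufacturer": line1[0:14].strip(),   # cols 1-14
        "model": line1[14:29].strip(),         # cols 15-29
        "type": line1[29:36].strip(),          # cols 30-36
        "midrange_price": num(line1[42:46]),   # cols 43-46, in $1,000
        "city_mpg": num(line1[52:54]),         # cols 53-54
        "weight": num(line2[36:40]),           # cols 37-40 of line 2
    }

# A fabricated record laid out to the column key (not a real data row).
l1 = "Acme          Roadster       Sporty   9.9 12.5 15.1 25 31"
l2 = "2100 1 13.2 4 172  98 66 37 26.5 12 2650 1"
case = parse_case(l1, l2)
print(case["model"], case["midrange_price"], case["city_mpg"])
```

The same slicing pattern extends to the remaining columns; reading the real file is just a loop over pairs of lines.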
PACE New Car & Truck 1993 Buying Guide (1993), Milwaukee, WI: Pace Publications Inc.
Student Edition of Execustat (1990), Boston, MA: PWS-KENT Publishing Co.
Consumer Reports: The 1993 Cars - Annual Auto Issue (April 1993), Yonkers, NY: Consumers Union.
Robin H. Lock
Mathematics Department
St. Lawrence University
Canton, NY 13617
(315) 379-5960
rlock@stlawu.bitnet
Hawthorne, NJ Algebra Tutor
Find a Hawthorne, NJ Algebra Tutor
I’m patient and effective. I encourage questions since learning is always an interactive process. I’ve taught both high school and college including remedial courses at both levels.
9 Subjects: including algebra 2, algebra 1, geometry, statistics
...I try to present information in a cohesive structure such that the student can easily understand the material and how it fits in to the bigger picture. I have been in a lot of science and math
courses and I have found that often it is the convoluted way in which the information is presented that makes it hard to understand. I also tutor SAT prep and Judaics.
17 Subjects: including algebra 2, algebra 1, chemistry, reading
...In this group, I developed new antimicrobial agents that have the potential to be applied in various industrial applications such as floor tiling, paint coatings, and medical devices, that are
prone to contamination of micro-organisms. This research heavily involved principles in Organic and Ana...
14 Subjects: including algebra 1, chemistry, physics, organic chemistry
...Algebra 1 is a textbook title or the name of a course, but it is not a subject. It is often the course where students become acquainted with symbolic manipulations of quantities. While it can
be confusing at first (eg "how can a letter be a number?"), it can also broaden your intellectual scope.
25 Subjects: including algebra 1, algebra 2, chemistry, physics
...I also have experience working with ADD/ADHD, dyslexia, hearing impairments, speech and language impairments, Autism, and intellectual disabilities. Notes: -In-person tutoring only -Tutoring
Appointment must last for at least 1 hour -Be Aware of the Cancellation Policy and Fee. Must cancel 24 ...
41 Subjects: including algebra 1, algebra 2, writing, biology
Related Hawthorne, NJ Tutors
Hawthorne, NJ Accounting Tutors
Hawthorne, NJ ACT Tutors
Hawthorne, NJ Algebra Tutors
Hawthorne, NJ Algebra 2 Tutors
Hawthorne, NJ Calculus Tutors
Hawthorne, NJ Geometry Tutors
Hawthorne, NJ Math Tutors
Hawthorne, NJ Prealgebra Tutors
Hawthorne, NJ Precalculus Tutors
Hawthorne, NJ SAT Tutors
Hawthorne, NJ SAT Math Tutors
Hawthorne, NJ Science Tutors
Hawthorne, NJ Statistics Tutors
Hawthorne, NJ Trigonometry Tutors
An"gle (#), n. [F. angle, L. angulus angle, corner; akin to uncus hook, Gr. bent, crooked, angular, a bend or hollow, AS. angel hook, fish-hook, G. angel, and F. anchor.]
The inclosed space near the point where two lines meet; a corner; a nook.
Into the utmost angle of the world. Spenser.
To search the tenderest angles of the heart. Milton.
2. Geom. (a)
The figure made by two lines which meet.
(b)
The difference of direction of two lines. If the lines meet, the point of meeting is the vertex of the angle.
3. A projecting or sharp corner; an angular fragment.
Though but an angle reached him of the stone. Dryden.
4. Astrol.
A name given to four of the twelve astrological "houses."
5. [AS. angel.]
A fishhook; tackle for catching fish, consisting of a line, hook, and bait, with or without a rod.
Give me mine angle: we 'll to the river there. Shak.
A fisher next his trembling angle bears. Pope.
Acute angle, one less than a right angle, or less than 90°. -- Adjacent or Contiguous angles, such as have one leg common to both angles. -- Alternate angles. See Alternate. -- Angle bar. (a) Carp.
An upright bar at the angle where two faces of a polygonal or bay window meet. Knight. (b) Mach. Same as Angle iron. -- Angle bead Arch., a bead worked on or fixed to the angle of any architectural
work, esp. for protecting an angle of a wall. -- Angle brace, Angle tie Carp., a brace across an interior angle of a wooden frame, forming the hypothenuse and securing the two side pieces together.
Knight. -- Angle iron Mach., a rolled bar or plate of iron having one or more angles, used for forming the corners, or connecting or sustaining the sides of an iron structure to which it is riveted.
-- Angle leaf Arch., a detail in the form of a leaf, more or less conventionalized, used to decorate and sometimes to strengthen an angle. -- Angle meter, an instrument for measuring angles, esp. for
ascertaining the dip of strata. -- Angle shaft Arch., an enriched angle bead, often having a capital or base, or both. -- Curvilineal angle, one formed by two curved lines. -- External angles, angles
formed by the sides of any right-lined figure, when the sides are produced or lengthened. -- Facial angle. See under Facial. -- Internal angles, those which are within any right-lined figure. --
Mixtilineal angle, one formed by a right line with a curved line. -- Oblique angle, one acute or obtuse, in opposition to a right angle. -- Obtuse angle, one greater than a right angle, or more than
90°. -- Optic angle. See under Optic. -- Rectilineal or Right-lined angle, one formed by two right lines. -- Right angle, one formed by a right line falling on another perpendicularly, or an angle of
90° (measured by a quarter circle). -- Solid angle, the figure formed by the meeting of three or more plane angles at one point. -- Spherical angle, one made by the meeting of two arcs of great
circles, which mutually cut one another on the surface of a globe or sphere. -- Visual angle, the angle formed by two rays of light, or two straight lines drawn from the extreme points of an object
to the center of the eye. -- For Angles of commutation, draught, incidence, reflection, refraction, position, repose, fraction, see Commutation, Draught, Incidence, Reflection, Refraction, etc.
© Webster 1913.
An"gle (#), v. i. [imp. & p. p. Angled (#); p. pr. & vb. n. Angling (#).]
To fish with an angle (fishhook), or with hook and line.
To use some bait or artifice; to intrigue; to scheme; as, to angle for praise.
The hearts of all that he did angle for. Shak.
© Webster 1913.
An"gle, v. t.
To try to gain by some insinuating artifice; to allure.
[Obs.] "He angled the people's hearts." Sir P. Sidney.
© Webster 1913.
Solve the system using the substitution method [Archive] - Free Math Help Forum
06-24-2009, 10:32 PM
Normally these don't get me but I'm stumped.
I just end up with tons of fractions and a screwed up solution.
The answer I'm looking for is (-1/5, 43/5) I just don't know how to get there.
Thanks in advance!
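The poster's equations didn't survive the archive, but the mechanics of the substitution method can be sketched on a hypothetical system (not the one from the thread):

```latex
% Hypothetical 2x2 system, for illustration only:
\begin{aligned}
  y &= 2x + 1 \\
  3x + y &= 11
\end{aligned}
\quad\Longrightarrow\quad
3x + (2x + 1) = 11
\;\Longrightarrow\; 5x = 10
\;\Longrightarrow\; x = 2,\quad y = 2(2) + 1 = 5.
```

When the intersection point has fractional coordinates, as with the (-1/5, 43/5) above, the same steps apply — the trick is to keep every value as an exact fraction rather than switching to decimals, so the "tons of fractions" reduce cleanly at the final step.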
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional
development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
The Stiefel-Whitney and Pontryagin classes of SO(3)-bundles
up vote 3 down vote favorite
Consider $SO(n)$ bundles over smooth manifolds. Then using the fact that the Stiefel-Whitney classes are the modulo 2 reductions of the Chern classes, one can prove $w_{2i}^2(E) = p_i(E) \bmod 2$.
Now consider an $SO(3)$ bundle over a 4-manifold. Since the particular case I am studying concerns K3 surfaces, let us assume that $H^2(M,\mathbb{Z})$ contains no torsion. Then it is stated in
various places that more is true:
$w_2^2(E) = p_1(E) \bmod 4$.
I may be missing something elementary here, but where does the mod 4 comes from?
dg.differential-geometry characteristic-classes
I think the square is the Pontryagin square. See eom.springer.de/p/p073810.htm – BS. Apr 8 '11 at 10:40
See also this MO question and answsers mathoverflow.net/questions/59593/… – BS. Apr 8 '11 at 10:41
A reference is Dold and Whitney's 1959 Annals paper on bundles over 4-dimensional complexes (perhaps someone can consult this paper and post the explanation?). – Tim Perutz Apr 8 '11 at 19:59
@BS, Yes, the assertion is that $P(w_2)=p_1$ mod 4. $P(w_2)=w_2^2$ mod 2 is a property of the Pontriyagin square, and when $H^2(M;Z)$ has no 2-torsion, $w_2$ lifts to a $Z$ class, answering
Manuel's question. – Paul Apr 9 '11 at 19:09
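Condensing the comments into one place (a sketch, not a full proof): the mod 4 refinement comes from the Pontryagin square $\mathfrak{P}\colon H^2(M;\mathbb{Z}/2)\to H^4(M;\mathbb{Z}/4)$, which satisfies

```latex
\mathfrak{P}(w_2(E)) \equiv p_1(E) \pmod 4,
\qquad
\mathfrak{P}(x) \equiv x \smile x \pmod 2 .
```

When $H^2(M;\mathbb{Z})$ has no 2-torsion (as for a K3 surface), $w_2$ lifts to an integral class $c$, and then $\mathfrak{P}(w_2) = c^2 \bmod 4$; the first congruence therefore reads $w_2^2 \equiv p_1 \pmod 4$, with $w_2^2$ interpreted as the square of the integral lift.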
Components as processes: An exercise in coalgebraic modeling
- 1st International Colloquium on Theorectical Aspects of Computing (ICTAC’04 , 2004
"... Abstract. Orchestrating software components, often independently supplied, has assumed a central role in software construction. Actually, as relevant as components themselves, are the ways in
which they can be put together to interact and cooperate in order to achieve some common goal. Such is the r ..."
Cited by 9 (4 self)
Add to MetaCart
Abstract. Orchestrating software components, often independently supplied, has assumed a central role in software construction. Actually, as relevant as components themselves, are the ways in which
they can be put together to interact and cooperate in order to achieve some common goal. Such is the role of so-called software connectors: external coordination devices which ensure both the flow of
data and synchronization restrictions within a component’s network. This paper introduces a new model for software connectors, based on relations extended in time, which aims to provide support for
light inter-component dependency and effective external control. 1
- In 10th Int. Conf. Algebraic Methods and Software Technology (AMAST , 2003
"... This paper characterises refinement of state-based software components modelled as concrete coalgebras for some Set endofunctors. The resulting calculus is parametrized by a specification of the
underlying behaviour model introduced as a strong monad. This provides a basis to reason about (and trans ..."
Cited by 9 (6 self)
Add to MetaCart
This paper characterises refinement of state-based software components modelled as concrete coalgebras for some Set endofunctors. The resulting calculus is parametrized by a specification of the
underlying behaviour model introduced as a strong monad. This provides a basis to reason about (and transform) state-based software designs.
, 2005
"... A partial component is a process which fails or dies at some stage, thus exhibiting a finite, more ephemeral behaviour than expected (e.g. operating system crash). Partiality — which is the rule
rather than exception in formal modelling — can be treated mathematically via totalization techniques. In ..."
Cited by 3 (2 self)
Add to MetaCart
A partial component is a process which fails or dies at some stage, thus exhibiting a finite, more ephemeral behaviour than expected (e.g. operating system crash). Partiality — which is the rule
rather than exception in formal modelling — can be treated mathematically via totalization techniques. In the case of partial functions, totalization involves error values and exceptions. In the
context of a coalgebraic approach to component semantics, this paper argues that the behavioural counterpart to such functional techniques should extend behaviour with try-again cycles preventing
from component collapse, thus extending totalization or transposition from the algebraic to the coalgebraic context. We show that a refinement relationship holds between original and totalized
components which is reasoned about in a coalgebraic approach to component refinement expressed in the pointfree binary relation calculus. As part of the pragmatic aims of this research, we also
address the factorization of every such totalized coalgebra into two coalgebraic components — the original one and an added front-end — which cooperate in a client-server style. Key words: partial
components, try-again cycles, refinement, coalgebra 1
"... Coalgebras of Kripke Polynomial Functors have been widely used in modelling various kinds of systems. In this paper, we give a category Coalg KPF which consists of coalgebras of KPFs. Then we
present a set of constructions like sequential and parallel composition in a subcategory of Coalg KPF for a ..."
Cited by 3 (0 self)
Add to MetaCart
Coalgebras of Kripke Polynomial Functors have been widely used in modelling various kinds of systems. In this paper, we give a category Coalg KPF which consists of coalgebras of KPFs. Then we present
a set of constructions like sequential and parallel composition in a subcategory of Coalg KPF for a restricted family of KPFs by exploiting the canonical operations in category theory. A family of
algebraic laws for the properties being satisfied by these operations is provided. Sun Meng is a Fellow at UNU/IIST on leave from the School of Mathematical Science of Beijing University, China,
where he is a Ph.D candidate. His research interest include category theory, coalgebra theory, Object-Oriented method, formal method in software development, and formal semantics for modeling
languages. His email address is sm@iist.unu.edu.
, 2002
"... We specify the dynamic semantics of an object oriented programming language in an incremental way. We begin with a simple language of arithmetic and boolean expressions. Then, we add functional
abstractions, local declarations, references and assignments obtaining a functional language with impe ..."
Cited by 2 (0 self)
Add to MetaCart
We specify the dynamic semantics of an object oriented programming language in an incremental way. We begin with a simple language of arithmetic and boolean expressions. Then, we add functional
abstractions, local declarations, references and assignments obtaining a functional language with imperative features. We finally add objects, classes and subclasses to obtain a prototypical object
oriented language with dynamic binding.
, 2001
"... This paper describes LPS, a Language Prototyping System that facilitates the modular development of interpreters from semantic building blocks. The system is based on the integration of ideas
from Modular Monadic Semantics and Generic Programming. To define a new programming language, the abstract s ..."
Add to MetaCart
This paper describes LPS, a Language Prototyping System that facilitates the modular development of interpreters from semantic building blocks. The system is based on the integration of ideas from
Modular Monadic Semantics and Generic Programming. To define a new programming language, the abstract syntax is described as the fixpoint of non-recursive pattern functors. For each functor an
algebra is defined whose carrier is the computational monad obtained from the application of several monad transformers to a base monad. The interpreter is automatically generated by a catamorphism
or, in some special cases, a monadic catamorphism. The system has been implemented as a domain-specific language embedded in Haskell and we have also implemented an interactive framework for language
testing. 1
, 2001
"... This paper is an attempt to apply the reasoning principles and calculational style underlying the so-called Bird-Meertens formalism to the design of process calculi, parametrized by a behaviour
model. In particular, basically equational and pointfree proofs of process properties are given, relying o ..."
Add to MetaCart
This paper is an attempt to apply the reasoning principles and calculational style underlying the so-called Bird-Meertens formalism to the design of process calculi, parametrized by a behaviour
model. In particular, basically equational and pointfree proofs of process properties are given, relying on the universal characterisation of anamorphisms and therefore avoiding the explicit
construction of bisimulations. The developed calculi can be directly implemented on a functional language supporting coinductive types, which provides a convenient way to prototype processes and
assess alternative design decisions.
The type implementing this traversable
Definition Classes
A stack implements a data structure which allows one to store and retrieve objects in a last-in-first-out (LIFO) fashion.
type of the elements contained in this stack.
See also
"Scala's Collection Library overview" section on Stacks for more information.
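A minimal sketch of the LIFO discipline, in plain script form rather than the REPL transcripts used below:

```scala
import scala.collection.mutable.Stack

// Elements come back out in the reverse of the order they went in.
val s = Stack[Int]()
s.push(1)
s.push(2)
s.push(3)
println(s.pop()) // 3 -- the most recently pushed element
println(s.pop()) // 2
println(s.top)   // 1 -- top inspects without removing
```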
2. final def !=(arg0: Any): Boolean
Test two objects for inequality.
Test two objects for inequality.
true if !(this == that), false otherwise.
Definition Classes
3. final def ##(): Int
Equivalent to x.hashCode except for boxed numeric types and null.
Equivalent to x.hashCode except for boxed numeric types and null. For numerics, it returns a hash value which is consistent with value equality: if two value type instances compare as true, then
## will produce the same hash value for each of them. For null returns a hashcode where null.hashCode throws a NullPointerException.
a hash value consistent with ==
Definition Classes
AnyRef → Any
4. def +(other: String): String
Implicit information
This member is added by an implicit conversion from Stack[A] to StringAdd performed by method any2stringadd in scala.Predef.
Definition Classes
5. [use case] Returns a new stack containing the elements from the left hand operand followed by the elements from the right hand operand.
[use case]
Returns a new stack containing the elements from the left hand operand followed by the elements from the right hand operand. The element type of the stack is the most specific superclass
encompassing the element types of the two operands.
scala> val a = LinkedList(1)
a: scala.collection.mutable.LinkedList[Int] = LinkedList(1)
scala> val b = LinkedList(2)
b: scala.collection.mutable.LinkedList[Int] = LinkedList(2)
scala> val c = a ++ b
c: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2)
scala> val d = LinkedList('a')
d: scala.collection.mutable.LinkedList[Char] = LinkedList(a)
scala> val e = c ++ d
e: scala.collection.mutable.LinkedList[AnyVal] = LinkedList(1, 2, a)
the element type of the returned collection.
the traversable to append.
a new stack which contains all elements of this stack followed by all elements of that.
Definition Classes
TraversableLike → GenTraversableLike
6. As with ++, returns a new collection containing the elements from the left operand followed by the elements from the right operand.
As with ++, returns a new collection containing the elements from the left operand followed by the elements from the right operand.
It differs from ++ in that the right operand determines the type of the resulting collection rather than the left one. Mnemonic: the COLon is on the side of the new COLlection type.
scala> val x = List(1)
x: List[Int] = List(1)
scala> val y = LinkedList(2)
y: scala.collection.mutable.LinkedList[Int] = LinkedList(2)
scala> val z = x ++: y
z: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2)
This overload exists because: for the implementation of ++: we should reuse that of ++ because many collections override it with more efficient versions.
Since TraversableOnce has no ++ method, we have to implement that directly, but Traversable and down can use the overload.
the element type of the returned collection.
the class of the returned collection. Where possible, That is the same class as the current collection class Repr, but this depends on the element type B being admissible for that class,
which means that an implicit instance of type CanBuildFrom[Repr, B, That] is found.
the traversable to append.
an implicit value of class CanBuildFrom which determines the result class That from the current representation type Repr and the new element type B.
a new collection of type That which contains all elements of this stack followed by all elements of that.
Definition Classes
7. [use case] As with ++, returns a new collection containing the elements from the left operand followed by the elements from the right operand.
[use case]
As with ++, returns a new collection containing the elements from the left operand followed by the elements from the right operand.
It differs from ++ in that the right operand determines the type of the resulting collection rather than the left one. Mnemonic: the COLon is on the side of the new COLlection type.
scala> val x = List(1)
x: List[Int] = List(1)
scala> val y = LinkedList(2)
y: scala.collection.mutable.LinkedList[Int] = LinkedList(2)
scala> val z = x ++: y
z: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2)
the element type of the returned collection.
the traversable to append.
a new stack which contains all elements of this stack followed by all elements of that.
Definition Classes
8. def +:(elem: A): Stack[A]
[use case] A copy of the stack with an element prepended.
[use case]
A copy of the stack with an element prepended.
Note that :-ending operators are right associative (see example). A mnemonic for +: vs. :+ is: the COLon goes on the COLlection side.
Also, the original stack is not modified, so you will want to capture the result.
scala> val x = LinkedList(1)
x: scala.collection.mutable.LinkedList[Int] = LinkedList(1)
scala> val y = 2 +: x
y: scala.collection.mutable.LinkedList[Int] = LinkedList(2, 1)
scala> println(x)
the prepended element
a new stack consisting of elem followed by all elements of this stack.
Definition Classes
SeqLike → GenSeqLike
Full Signature
def +:[B >: A, That](elem: B)(implicit bf: CanBuildFrom[Stack[A], B, That]): That
9. def ->[B](y: B): (Stack[A], B)
Implicit information
This member is added by an implicit conversion from Stack[A] to ArrowAssoc[Stack[A]] performed by method any2ArrowAssoc in scala.Predef.
Definition Classes
10. def /:[B](z: B)(op: (B, A) ⇒ B): B
Applies a binary operator to a start value and all elements of this stack, going left to right.
Applies a binary operator to a start value and all elements of this stack, going left to right.
Note: /: is alternate syntax for foldLeft; z /: xs is the same as xs foldLeft z.
Note that the folding function used to compute b is equivalent to that used to compute c.
scala> val a = LinkedList(1,2,3,4)
a: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2, 3, 4)
scala> val b = (5 /: a)(_+_)
b: Int = 15
scala> val c = (5 /: a)((x,y) => x + y)
c: Int = 15
the result type of the binary operator.
the start value.
the binary operator.
the result of inserting op between consecutive elements of this stack, going left to right with the start value z on the left:
op(...op(op(z, x_1), x_2), ..., x_n)
where x[1], ..., x[n] are the elements of this stack.
Definition Classes
TraversableOnce → GenTraversableOnce
11. def :+(elem: A): Stack[A]
[use case] A copy of this stack with an element appended.
[use case]
A copy of this stack with an element appended.
A mnemonic for +: vs. :+ is: the COLon goes on the COLlection side.
scala> import scala.collection.mutable.LinkedList
import scala.collection.mutable.LinkedList
scala> val a = LinkedList(1)
a: scala.collection.mutable.LinkedList[Int] = LinkedList(1)
scala> val b = a :+ 2
b: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2)
scala> println(a)
the appended element
a new stack consisting of all elements of this stack followed by elem.
Definition Classes
SeqLike → GenSeqLike
Full Signature
def :+[B >: A, That](elem: B)(implicit bf: CanBuildFrom[Stack[A], B, That]): That
12. def :\[B](z: B)(op: (A, B) ⇒ B): B
Applies a binary operator to all elements of this stack and a start value, going right to left.
Applies a binary operator to all elements of this stack and a start value, going right to left.
Note: :\ is alternate syntax for foldRight; xs :\ z is the same as xs foldRight z.
Note that the folding function used to compute b is equivalent to that used to compute c.
scala> val a = LinkedList(1,2,3,4)
a: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2, 3, 4)
scala> val b = (a :\ 5)(_+_)
b: Int = 15
scala> val c = (a :\ 5)((x,y) => x + y)
c: Int = 15
the result type of the binary operator.
the start value
the binary operator
the result of inserting op between consecutive elements of this stack, going right to left with the start value z on the right:
op(x_1, op(x_2, ... op(x_n, z)...))
where x[1], ..., x[n] are the elements of this stack.
Definition Classes
TraversableOnce → GenTraversableOnce
14. final def ==(arg0: Any): Boolean
Test two objects for equality.
Test two objects for equality. The expression x == that is equivalent to if (x eq null) that eq null else x.equals(that).
true if the receiver object is equivalent to the argument; false otherwise.
Definition Classes
15. Appends all elements of this stack to a string builder.
Appends all elements of this stack to a string builder. The written text consists of the string representations (w.r.t. the method toString) of all elements of this stack without any separator
scala> val a = LinkedList(1,2,3,4)
a: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2, 3, 4)
scala> val b = new StringBuilder()
b: StringBuilder =
scala> val h = a.addString(b)
b: StringBuilder = 1234
the string builder to which elements are appended.
the string builder b to which elements were appended.
Definition Classes
16. Appends all elements of this stack to a string builder using a separator string.
Appends all elements of this stack to a string builder using a separator string. The written text consists of the string representations (w.r.t. the method toString) of all elements of this
stack, separated by the string sep.
scala> val a = LinkedList(1,2,3,4)
a: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2, 3, 4)
scala> val b = new StringBuilder()
b: StringBuilder =
scala> a.addString(b, ", ")
res0: StringBuilder = 1, 2, 3, 4
the string builder to which elements are appended.
the separator string.
the string builder b to which elements were appended.
Definition Classes
17. Appends all elements of this stack to a string builder using start, end, and separator strings.
Appends all elements of this stack to a string builder using start, end, and separator strings. The written text begins with the string start and ends with the string end. Inside, the string
representations (w.r.t. the method toString) of all elements of this stack are separated by the string sep.
scala> val a = LinkedList(1,2,3,4)
a: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2, 3, 4)
scala> val b = new StringBuilder()
b: StringBuilder =
scala> a.addString(b, "LinkedList(", ", ", ")")
res1: StringBuilder = LinkedList(1, 2, 3, 4)
the string builder to which elements are appended.
the starting string.
the separator string.
the ending string.
the string builder b to which elements were appended.
Definition Classes
18. def aggregate[B](z: B)(seqop: (B, A) ⇒ B, combop: (B, B) ⇒ B): B
Aggregates the results of applying an operator to subsequent elements.
Aggregates the results of applying an operator to subsequent elements.
This is a more general form of fold and reduce. It has similar semantics, but does not require the result to be a supertype of the element type. It traverses the elements in different partitions
sequentially, using seqop to update the result, and then applies combop to results from different partitions. The implementation of this operation may operate on an arbitrary number of collection
partitions, so combop may be invoked an arbitrary number of times.
For example, one might want to process some elements and then produce a Set. In this case, seqop would process an element and add it to the set, while combop would take the union of two sets from
different partitions. The initial value z would be an empty set.
pc.aggregate(Set[Int]())(_ += process(_), _ ++ _)
Another example is calculating geometric mean from a collection of doubles (one would typically require big doubles for this).
the type of accumulated results
the initial value for the accumulated result of the partition - this will typically be the neutral element for the seqop operator (e.g. Nil for list concatenation or 0 for summation)
an operator used to accumulate results within a partition
an associative operator used to combine results from different partitions
Definition Classes
TraversableOnce → GenTraversableOnce
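A sketch of the Set example described above (data hypothetical). On a sequential collection the partitioning is trivial, so aggregate(z)(seqop, combop) reduces to foldLeft(z)(seqop) and combop only comes into play for parallel collections:

```scala
// Collect the distinct word lengths into a Set.
val words = List("foo", "quux", "ab", "bar")

// Sequential equivalent of:
//   words.aggregate(Set.empty[Int])((acc, w) => acc + w.length, _ ++ _)
val lengths = words.foldLeft(Set.empty[Int])((acc, w) => acc + w.length)

println(lengths) // the distinct lengths 2, 3 and 4 (Set order unspecified)
```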
19. Composes this partial function with a transformation function that gets applied to results of this partial function.
Composes this partial function with a transformation function that gets applied to results of this partial function.
the result type of the transformation function.
the transformation function
a partial function with the same domain as this partial function, which maps arguments x to k(this(x)).
Definition Classes
PartialFunction → Function1
20. def apply(index: Int): A
Retrieve n-th element from stack, where top of stack has index 0.
Retrieve n-th element from stack, where top of stack has index 0.
This is a linear time operation.
the index of the element to return
the element at the specified index
Definition Classes
Stack → SeqLike → GenSeqLike → Function1
Exceptions thrown
if the index is out of bounds
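A sketch of indexed access, where index 0 is the top of the stack:

```scala
import scala.collection.mutable.Stack

val s = Stack[String]()
s.push("bottom")
s.push("middle")
s.push("top")

println(s(0)) // top    -- index 0 is the most recently pushed element
println(s(2)) // bottom -- a linear-time walk to the deepest element
```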
21. def applyOrElse[A1 <: Int, B1 >: A](x: A1, default: (A1) ⇒ B1): B1
Applies this partial function to the given argument when it is contained in the function domain.
Applies this partial function to the given argument when it is contained in the function domain. Applies fallback function where this partial function is not defined.
Note that expression pf.applyOrElse(x, default) is equivalent to
if(pf isDefinedAt x) pf(x) else default(x)
except that applyOrElse method can be implemented more efficiently. For all partial function literals compiler generates applyOrElse implementation which avoids double evaluation of pattern
matchers and guards. This makes applyOrElse the basis for the efficient implementation for many operations and scenarios, such as:
□ combining partial functions into orElse/andThen chains does not lead to excessive apply/isDefinedAt evaluation
□ lift and unlift do not evaluate source functions twice on each invocation
□ runWith allows efficient imperative-style combining of partial functions with conditionally applied actions
For non-literal partial function classes with nontrivial isDefinedAt method it is recommended to override applyOrElse with custom implementation that avoids double isDefinedAt evaluation. This
may result in better performance and more predictable behavior w.r.t. side effects.
the function argument
the fallback function
the result of this function or fallback function application.
Definition Classes
22. final def asInstanceOf[T0]: T0
Cast the receiver object to be of type T0.
Cast the receiver object to be of type T0.
Note that the success of a cast at runtime is modulo Scala's erasure semantics. Therefore the expression 1.asInstanceOf[String] will throw a ClassCastException at runtime, while the expression
List(1).asInstanceOf[List[String]] will not. In the latter example, because the type argument is erased as part of compilation it is not possible to check whether the contents of the list are of
the requested type.
the receiver object.
Definition Classes
Exceptions thrown
if the receiver object is not an instance of the erasure of type T0.
24. def asParSeq: ParSeq[A]
25. def canEqual(that: Any): Boolean
Method called from equality methods, so that user-defined subclasses can refuse to be equal to other collections of the same kind.
Method called from equality methods, so that user-defined subclasses can refuse to be equal to other collections of the same kind.
The object with which this stack should be compared
true, if this stack can possibly equal that, false otherwise. The test takes into consideration only the run-time types of objects but ignores their elements.
Definition Classes
IterableLike → Equals
26. def clear(): Unit
Removes all elements from the stack.
Removes all elements from the stack. After this operation has completed, the stack will be empty.
27. def clone(): Stack[A]
This method clones the stack.
This method clones the stack.
a stack with the same elements.
Definition Classes
Stack → Cloneable → AnyRef
28. [use case] Builds a new collection by applying a partial function to all elements of this stack on which the function is defined.
[use case]
Builds a new collection by applying a partial function to all elements of this stack on which the function is defined.
the element type of the returned collection.
the partial function which filters and maps the stack.
a new stack resulting from applying the given partial function pf to each element on which it is defined and collecting the results. The order of the elements is preserved.
Definition Classes
TraversableLike → GenTraversableLike
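A sketch of the idea: collect fuses a filter (via the partial function's pattern) with a map, shown here on a plain List for brevity:

```scala
val mixed = List[Any](1, "two", 3.0, 4)

// Keep only the Ints, scaling each; elements outside the partial
// function's domain are simply skipped.
val tens = mixed.collect { case n: Int => n * 10 }

println(tens) // List(10, 40)
```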
29. Finds the first element of the stack for which the given partial function is defined, and applies the partial function to it.
Finds the first element of the stack for which the given partial function is defined, and applies the partial function to it.
the partial function
an option value containing pf applied to the first value for which it is defined, or None if none exists.
Definition Classes
1. Seq("a", 1, 5L).collectFirst({ case x: Int => x*10 }) = Some(10)
30. Iterates over combinations.
Iterates over combinations.
An Iterator which traverses the possible n-element combinations of this stack.
Definition Classes
1. "abbbc".combinations(2) = Iterator(ab, ac, bb, bc)
31. def companion: Stack.type
The factory companion object that builds instances of class Stack.
32. def compose[A](g: (A) ⇒ Int): (A) ⇒ A
Composes two instances of Function1 in a new Function1, with this function applied last.
Composes two instances of Function1 in a new Function1, with this function applied last.
the type to which function g can be applied
a function A => T1
a new function f such that f(x) == apply(g(x))
Definition Classes
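A sketch with two plain functions; compose applies its argument first and this function last:

```scala
val double: Int => Int = _ * 2
val addOne: Int => Int = _ + 1

val f = double.compose(addOne) // f(x) == double(addOne(x))
println(f(3)) // 8, i.e. (3 + 1) * 2
```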
33. def contains(elem: Any): Boolean
Tests whether this stack contains a given value as an element.
Tests whether this stack contains a given value as an element.
the element to test.
true if this stack has an element that is equal (as determined by ==) to elem, false otherwise.
Definition Classes
34. def containsSlice[B](that: GenSeq[B]): Boolean
Tests whether this stack contains a given sequence as a slice.
Tests whether this stack contains a given sequence as a slice.
the sequence to test
true if this stack contains a slice with the same elements as that, otherwise false.
Definition Classes
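A sketch of the element test next to the slice test (on a List for brevity); a slice must occur consecutively and in order:

```scala
val xs = List(1, 2, 3, 4, 5)

println(xs.contains(3))              // true
println(xs.containsSlice(Seq(2, 3))) // true  -- 2, 3 appear consecutively
println(xs.containsSlice(Seq(3, 2))) // false -- order matters for a slice
```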
35. def copyToArray(xs: Array[A], start: Int, len: Int): Unit
[use case] Copies elements of this stack to an array.
[use case]
Copies elements of this stack to an array. Fills the given array xs with at most len elements of this stack, starting at position start. Copying will stop once either the end of the current stack
is reached, or the end of the array is reached, or len elements have been copied.
the array to fill.
the starting index.
the maximal number of elements to copy.
Definition Classes
IterableLike → TraversableLike → TraversableOnce → GenTraversableOnce
Full Signature
def copyToArray[B >: A](xs: Array[B], start: Int, len: Int): Unit
36. def copyToArray(xs: Array[A]): Unit
[use case] Copies values of this stack to an array.
[use case]
Copies values of this stack to an array. Fills the given array xs with values of this stack. Copying will stop once either the end of the current stack is reached, or the end of the array is reached.
the array to fill.
Definition Classes
TraversableOnce → GenTraversableOnce
Full Signature
def copyToArray[B >: A](xs: Array[B]): Unit
37. def copyToArray(xs: Array[A], start: Int): Unit
[use case] Copies values of this stack to an array.
[use case]
Copies values of this stack to an array. Fills the given array xs with values of this stack, beginning at index start. Copying will stop once either the end of the current stack is reached, or
the end of the array is reached.
the array to fill.
the starting index.
Definition Classes
TraversableOnce → GenTraversableOnce
Full Signature
def copyToArray[B >: A](xs: Array[B], start: Int): Unit
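The three overloads above differ only in how the target region is bounded. A sketch of the three-argument form (mutable Stack assumed):

```scala
import scala.collection.mutable.Stack

val s  = Stack('a', 'b', 'c')
val xs = Array.fill(5)('-')
// Copy at most 2 elements of the stack into xs, starting at index 1.
s.copyToArray(xs, 1, 2)
val result = xs.mkString
```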
38. def copyToBuffer[B >: A](dest: Buffer[B]): Unit
Copies all elements of this stack to a buffer.
Copies all elements of this stack to a buffer.
The buffer to which elements are copied.
Definition Classes
39. def corresponds[B](that: GenSeq[B])(p: (A, B) ⇒ Boolean): Boolean
Tests whether every element of this stack relates to the corresponding element of another sequence by satisfying a test predicate.
Tests whether every element of this stack relates to the corresponding element of another sequence by satisfying a test predicate.
the type of the elements of that
the other sequence
the test predicate, which relates elements from both sequences
true if both sequences have the same length and p(x, y) is true for all corresponding elements x of this stack and y of that, otherwise false.
Definition Classes
SeqLike → GenSeqLike
40. def count(p: (A) ⇒ Boolean): Int
Counts the number of elements in the stack which satisfy a predicate.
Counts the number of elements in the stack which satisfy a predicate.
the predicate used to test elements.
the number of elements satisfying the predicate p.
Definition Classes
TraversableOnce → GenTraversableOnce
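A brief sketch of corresponds and count together (mutable Stack assumed; the predicate choices are illustrative):

```scala
import scala.collection.mutable.Stack

val s = Stack(1, 2, 3)
// true only if both sequences have the same length and the relation
// holds pairwise: 1*2 == 2, 2*2 == 4, 3*2 == 6.
val doubled = s.corresponds(Seq(2, 4, 6))((x, y) => x * 2 == y)
val odds = s.count(_ % 2 == 1) // number of odd elements
```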
41. def diff(that: GenSeq[A]): Stack[A]
[use case] Computes the multiset difference between this stack and another sequence.
[use case]
Computes the multiset difference between this stack and another sequence.
the sequence of elements to remove
a new stack which contains all elements of this stack except some occurrences of elements that also appear in that. If an element value x appears n times in that, then the first n occurrences of x will not form part of the result, but any following occurrences will.
Definition Classes
SeqLike → GenSeqLike
Full Signature
def diff[B >: A](that: GenSeq[B]): Stack[A]
42. def distinct: Stack[A]
Builds a new stack from this stack without any duplicate elements.
Builds a new stack from this stack without any duplicate elements.
A new stack which contains the first occurrence of every element of this stack.
Definition Classes
SeqLike → GenSeqLike
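The multiset behaviour of diff, contrasted with distinct, in a short sketch (mutable Stack assumed):

```scala
import scala.collection.mutable.Stack

val s = Stack(1, 1, 2, 3, 3)
// Multiset difference: exactly one occurrence each of 1 and 3 is removed.
val d = s.diff(Seq(1, 3)).toList
// distinct keeps only the first occurrence of each element.
val u = s.distinct.toList
```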
43. def drop(n: Int): Stack[A]
Selects all elements except first n ones.
Selects all elements except first n ones.
the number of elements to drop from this stack.
a stack consisting of all elements of this stack except the first n ones, or else the empty stack, if this stack has less than n elements.
Definition Classes
IterableLike → TraversableLike → GenTraversableLike
44. def dropRight(n: Int): Stack[A]
Selects all elements except last n ones.
Selects all elements except last n ones.
The number of elements to take
a stack consisting of all elements of this stack except the last n ones, or else the empty stack, if this stack has less than n elements.
Definition Classes
45. def dropWhile(p: (A) ⇒ Boolean): Stack[A]
Drops longest prefix of elements that satisfy a predicate.
Drops longest prefix of elements that satisfy a predicate.
the longest suffix of this stack whose first element does not satisfy the predicate p.
Definition Classes
TraversableLike → GenTraversableLike
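The three drop variants above, side by side (a hedged sketch, mutable Stack assumed):

```scala
import scala.collection.mutable.Stack

val s = Stack(1, 2, 3, 4, 5)
val a = s.drop(2).toList          // all but the first two elements
val b = s.dropRight(2).toList     // all but the last two elements
val c = s.dropWhile(_ < 3).toList // drop the longest prefix of elements < 3
```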
47. def endsWith[B](that: GenSeq[B]): Boolean
Tests whether this stack ends with the given sequence.
Tests whether this stack ends with the given sequence.
the sequence to test
true if this stack has that as a suffix, false otherwise.
Definition Classes
SeqLike → GenSeqLike
48. def ensuring(cond: (Stack[A]) ⇒ Boolean, msg: ⇒ Any): Stack[A]
Implicit information
This member is added by an implicit conversion from Stack[A] to Ensuring[Stack[A]] performed by method any2Ensuring in scala.Predef.
Definition Classes
49. def ensuring(cond: (Stack[A]) ⇒ Boolean): Stack[A]
Implicit information
This member is added by an implicit conversion from Stack[A] to Ensuring[Stack[A]] performed by method any2Ensuring in scala.Predef.
Definition Classes
50. def ensuring(cond: Boolean, msg: ⇒ Any): Stack[A]
Implicit information
This member is added by an implicit conversion from Stack[A] to Ensuring[Stack[A]] performed by method any2Ensuring in scala.Predef.
Definition Classes
51. def ensuring(cond: Boolean): Stack[A]
Implicit information
This member is added by an implicit conversion from Stack[A] to Ensuring[Stack[A]] performed by method any2Ensuring in scala.Predef.
Definition Classes
52. final def eq(arg0: AnyRef): Boolean
Tests whether the argument (arg0) is a reference to the receiver object (this).
Tests whether the argument (arg0) is a reference to the receiver object (this).
The eq method implements an equivalence relation on non-null instances of AnyRef, and has three additional properties:
□ It is consistent: for any non-null instances x and y of type AnyRef, multiple invocations of x.eq(y) consistently returns true or consistently returns false.
□ For any non-null instance x of type AnyRef, x.eq(null) and null.eq(x) returns false.
□ null.eq(null) returns true.
When overriding the equals or hashCode methods, it is important to ensure that their behavior is consistent with reference equality. Therefore, if two objects are references to each other (o1 eq
o2), they should be equal to each other (o1 == o2) and they should hash to the same value (o1.hashCode == o2.hashCode).
true if the argument is a reference to the receiver object; false otherwise.
Definition Classes
53. def equals(that: Any): Boolean
The equals method for arbitrary sequences.
The equals method for arbitrary sequences. Compares this sequence to some other object.
The object to compare the sequence to
true if that is a sequence that has the same elements as this sequence in the same order, false otherwise
Definition Classes
GenSeqLike → Equals → Any
54. def exists(p: (A) ⇒ Boolean): Boolean
Tests whether a predicate holds for some of the elements of this stack.
Tests whether a predicate holds for some of the elements of this stack.
the predicate used to test elements.
true if the given predicate p holds for some of the elements of this stack, otherwise false.
Definition Classes
IterableLike → TraversableLike → TraversableOnce → GenTraversableOnce
55. def filter(p: (A) ⇒ Boolean): Stack[A]
Selects all elements of this stack which satisfy a predicate.
Selects all elements of this stack which satisfy a predicate.
the predicate used to test elements.
a new stack consisting of all elements of this stack that satisfy the given predicate p. The order of the elements is preserved.
Definition Classes
TraversableLike → GenTraversableLike
56. def filterNot(p: (A) ⇒ Boolean): Stack[A]
Selects all elements of this stack which do not satisfy a predicate.
Selects all elements of this stack which do not satisfy a predicate.
the predicate used to test elements.
a new stack consisting of all elements of this stack that do not satisfy the given predicate p. The order of the elements is preserved.
Definition Classes
TraversableLike → GenTraversableLike
57. def finalize(): Unit
Called by the garbage collector on the receiver object when there are no more references to the object.
Called by the garbage collector on the receiver object when there are no more references to the object.
The details of when and if the finalize method is invoked, as well as the interaction between finalize and non-local returns and exceptions, are all platform dependent.
Definition Classes
@throws( classOf[java.lang.Throwable] )
not specified by SLS as a member of AnyRef
58. def find(p: (A) ⇒ Boolean): Option[A]
Finds the first element of the stack satisfying a predicate, if any.
Finds the first element of the stack satisfying a predicate, if any.
the predicate used to test elements.
an option value containing the first element in the stack that satisfies p, or None if none exists.
Definition Classes
IterableLike → TraversableLike → TraversableOnce → GenTraversableOnce
59. def flatMap[B](f: (A) ⇒ GenTraversableOnce[B]): Stack[B]
[use case] Builds a new collection by applying a function to all elements of this stack and using the elements of the resulting collections.
[use case]
Builds a new collection by applying a function to all elements of this stack and using the elements of the resulting collections.
For example:
def getWords(lines: Seq[String]): Seq[String] = lines flatMap (line => line split "\\W+")
The type of the resulting collection is guided by the static type of stack. This might cause unexpected results sometimes. For example:
// lettersOf will return a Seq[Char] of likely repeated letters, instead of a Set
def lettersOf(words: Seq[String]) = words flatMap (word => word.toSet)
// lettersOf will return a Set[Char], not a Seq
def lettersOf(words: Seq[String]) = words.toSet flatMap (word => word.toSeq)
// xs will be an Iterable[Int]
val xs = Map("a" -> List(11,111), "b" -> List(22,222)).flatMap(_._2)
// ys will be a Map[Int, Int]
val ys = Map("a" -> List(1 -> 11,1 -> 111), "b" -> List(2 -> 22,2 -> 222)).flatMap(_._2)
the element type of the returned collection.
the function to apply to each element.
a new stack resulting from applying the given collection-valued function f to each element of this stack and concatenating the results.
Definition Classes
TraversableLike → GenTraversableLike → FilterMonadic
60. def flatten[B]: Stack[B]
[use case] Converts this stack of traversable collections into a stack formed by the elements of these traversable collections.
[use case]
Converts this stack of traversable collections into a stack formed by the elements of these traversable collections.
The resulting collection's type will be guided by the static type of stack. For example:
val xs = List(Set(1, 2, 3), Set(1, 2, 3))
// xs == List(1, 2, 3, 1, 2, 3)
val ys = Set(List(1, 2, 3), List(3, 2, 1))
// ys == Set(1, 2, 3)
the type of the elements of each traversable collection.
a new stack resulting from concatenating all element stacks.
Definition Classes
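The relationship between flatMap and flatten in a small sketch (mutable Stack assumed; the example values are illustrative):

```scala
import scala.collection.mutable.Stack

val words = Stack("ab", "cd")
// flatMap applies a collection-valued function and concatenates the results.
val chars = words.flatMap(_.toList).toList

// flatten concatenates a stack of collections directly.
val nested = Stack(List(1, 2), List(3))
val flat = nested.flatten.toList
```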
61. def fold[A1 >: A](z: A1)(op: (A1, A1) ⇒ A1): A1
Folds the elements of this stack using the specified associative binary operator.
Folds the elements of this stack using the specified associative binary operator.
The order in which operations are performed on elements is unspecified and may be nondeterministic.
a type parameter for the binary operator, a supertype of A.
a neutral element for the fold operation; may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
a binary operator that must be associative
the result of applying fold operator op between all the elements and z
Definition Classes
TraversableOnce → GenTraversableOnce
62. def foldLeft[B](z: B)(op: (B, A) ⇒ B): B
Applies a binary operator to a start value and all elements of this stack, going left to right.
Applies a binary operator to a start value and all elements of this stack, going left to right.
the result type of the binary operator.
the start value.
the binary operator.
the result of inserting op between consecutive elements of this stack, going left to right with the start value z on the left:
op(...op(z, x_1), x_2, ..., x_n)
where x[1], ..., x[n] are the elements of this stack.
Definition Classes
TraversableOnce → GenTraversableOnce
63. def foldRight[B](z: B)(op: (A, B) ⇒ B): B
Applies a binary operator to all elements of this stack and a start value, going right to left.
Applies a binary operator to all elements of this stack and a start value, going right to left.
the result type of the binary operator.
the start value.
the binary operator.
the result of inserting op between consecutive elements of this stack, going right to left with the start value z on the right:
op(x_1, op(x_2, ... op(x_n, z)...))
where x[1], ..., x[n] are the elements of this stack.
Definition Classes
IterableLike → TraversableOnce → GenTraversableOnce
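Using a non-associative operator such as subtraction makes the direction of association visible (a hedged sketch, mutable Stack assumed):

```scala
import scala.collection.mutable.Stack

val s = Stack(1, 2, 3)
val left  = s.foldLeft(0)(_ - _)  // ((0 - 1) - 2) - 3
val right = s.foldRight(0)(_ - _) // 1 - (2 - (3 - 0))
```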
64. def forall(p: (A) ⇒ Boolean): Boolean
Tests whether a predicate holds for all elements of this stack.
Tests whether a predicate holds for all elements of this stack.
the predicate used to test elements.
true if the given predicate p holds for all elements of this stack, otherwise false.
Definition Classes
IterableLike → TraversableLike → TraversableOnce → GenTraversableOnce
65. def foreach(f: (A) ⇒ Unit): Unit
[use case] Applies a function f to all elements of this stack.
[use case]
Applies a function f to all elements of this stack.
Note: this method underlies the implementation of most other bulk operations. Subclasses should re-implement this method if a more efficient implementation exists.
the function that is applied for its side-effect to every element. The result of function f is discarded.
Definition Classes
Stack → IterableLike → GenericTraversableTemplate → TraversableLike → GenTraversableLike → TraversableOnce → GenTraversableOnce → FilterMonadic
Full Signature
def foreach[U](f: (A) ⇒ U): Unit
66. def formatted(fmtstr: String): String
Returns string formatted according to given format string.
Returns string formatted according to given format string. Format strings are as for String.format (@see java.lang.String.format).
Implicit information
This member is added by an implicit conversion from Stack[A] to StringFormat performed by method any2stringfmt in scala.Predef.
Definition Classes
67. def genericBuilder[B]: Builder[B, Stack[B]]
The generic builder that builds instances of Stack at arbitrary element types.
The generic builder that builds instances of Stack at arbitrary element types.
Definition Classes
68. final def getClass(): Class[_]
A representation that corresponds to the dynamic class of the receiver object.
A representation that corresponds to the dynamic class of the receiver object.
The nature of the representation is platform dependent.
a representation that corresponds to the dynamic class of the receiver object.
Definition Classes
AnyRef → Any
not specified by SLS as a member of AnyRef
69. def groupBy[K](f: (A) ⇒ K): Map[K, Stack[A]]
Partitions this stack into a map of stacks according to some discriminator function.
Partitions this stack into a map of stacks according to some discriminator function.
Note: this method is not re-implemented by views. This means when applied to a view it will always force the view and return a new stack.
the type of keys returned by the discriminator function.
the discriminator function.
A map from keys to stacks such that the following invariant holds:
(xs groupBy f)(k) = xs filter (x => f(x) == k)
That is, every key k is bound to a stack of those elements x for which f(x) equals k.
Definition Classes
TraversableLike → GenTraversableLike
70. def grouped(size: Int): Iterator[Stack[A]]
Partitions elements in fixed size stacks.
Partitions elements in fixed size stacks.
the number of elements per group
An iterator producing stacks of size size, except the last will be truncated if the elements don't divide evenly.
Definition Classes
See also
scala.collection.Iterator, method grouped
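groupBy and grouped partition in different senses: by key versus by position. A short sketch (mutable Stack assumed):

```scala
import scala.collection.mutable.Stack

val s = Stack(1, 2, 3, 4, 5)
// Partition by key: Map(1 -> Stack(1, 3, 5), 0 -> Stack(2, 4)).
val byParity = s.groupBy(_ % 2)
// Partition by position into groups of 2; the last group is truncated.
val pairs = s.grouped(2).map(_.toList).toList
```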
71. def hasDefiniteSize: Boolean
Tests whether this stack is known to have a finite size.
Tests whether this stack is known to have a finite size. All strict collections are known to have finite size. For a non-strict collection such as Stream, the predicate returns true if all
elements have been computed. It returns false if the stream is not yet evaluated to the end.
Note: many collection methods will not work on collections of infinite sizes.
true if this collection is known to have finite size, false otherwise.
Definition Classes
TraversableLike → TraversableOnce → GenTraversableOnce
72. def hashCode(): Int
Hashcodes for Stack produce a value from the hashcodes of all the elements of the stack.
Hashcodes for Stack produce a value from the hashcodes of all the elements of the stack.
the hash code value for this object.
Definition Classes
GenSeqLike → Any
73. def head: A
Selects the first element of this stack.
74. def headOption: Option[A]
Optionally selects the first element.
Optionally selects the first element.
the first element of this stack if it is nonempty, None if it is empty.
Definition Classes
TraversableLike → GenTraversableLike
75. def ifParSeq[R](isbody: (ParSeq[A]) ⇒ R): (TraversableOps[A])#Otherwise[R]
76. def indexOf(elem: A, from: Int): Int
[use case] Finds index of first occurrence of some value in this stack after or at some start index.
[use case]
Finds index of first occurrence of some value in this stack after or at some start index.
the element value to search for.
the start index
the index >= from of the first element of this stack that is equal (as determined by ==) to elem, or -1, if none exists.
Definition Classes
Full Signature
def indexOf[B >: A](elem: B, from: Int): Int
77. def indexOf(elem: A): Int
[use case] Finds index of first occurrence of some value in this stack.
[use case]
Finds index of first occurrence of some value in this stack.
the element value to search for.
the index of the first element of this stack that is equal (as determined by ==) to elem, or -1, if none exists.
Definition Classes
Full Signature
def indexOf[B >: A](elem: B): Int
78. def indexOfSlice[B >: A](that: GenSeq[B], from: Int): Int
Finds first index after or at a start index where this stack contains a given sequence as a slice.
Finds first index after or at a start index where this stack contains a given sequence as a slice.
the sequence to test
the start index
the first index >= from such that the elements of this stack starting at this index match the elements of sequence that, or -1, if no such subsequence exists.
Definition Classes
79. def indexOfSlice[B >: A](that: GenSeq[B]): Int
Finds first index where this stack contains a given sequence as a slice.
Finds first index where this stack contains a given sequence as a slice.
the sequence to test
the first index such that the elements of this stack starting at this index match the elements of sequence that, or -1, if no such subsequence exists.
Definition Classes
80. def indexWhere(p: (A) ⇒ Boolean, from: Int): Int
Finds index of the first element satisfying some predicate after or at some start index.
Finds index of the first element satisfying some predicate after or at some start index.
the predicate used to test elements.
the start index
the index >= from of the first element of this stack that satisfies the predicate p, or -1, if none exists.
Definition Classes
SeqLike → GenSeqLike
81. def indexWhere(p: (A) ⇒ Boolean): Int
Finds index of first element satisfying some predicate.
Finds index of first element satisfying some predicate.
the predicate used to test elements.
the index of the first element of this stack that satisfies the predicate p, or -1, if none exists.
Definition Classes
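The indexOf and indexWhere overloads above, sketched together (mutable Stack assumed):

```scala
import scala.collection.mutable.Stack

val s = Stack(5, 6, 7, 6)
val first = s.indexOf(6)        // first occurrence
val later = s.indexOf(6, 2)     // search from index 2 onwards
val where = s.indexWhere(_ > 6) // first element satisfying the predicate
val none  = s.indexOf(9)        // absent value
```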
82. def indices: Range
Produces the range of all indices of this sequence.
Produces the range of all indices of this sequence.
a Range value from 0 to one less than the length of this stack.
Definition Classes
83. def init: Stack[A]
Selects all elements except the last.
Selects all elements except the last.
a stack consisting of all elements of this stack except the last one.
Definition Classes
TraversableLike → GenTraversableLike
Exceptions thrown
UnsupportedOperationException if the stack is empty.
84. def inits: Iterator[Stack[A]]
Iterates over the inits of this stack.
Iterates over the inits of this stack. The first value will be this stack and the final one will be an empty stack, with the intervening values the results of successive applications of init.
an iterator over all the inits of this stack
Definition Classes
1. List(1,2,3).inits = Iterator(List(1,2,3), List(1,2), List(1), Nil)
85. def intersect(that: GenSeq[A]): Stack[A]
[use case] Computes the multiset intersection between this stack and another sequence.
[use case]
Computes the multiset intersection between this stack and another sequence.
the sequence of elements to intersect with.
a new stack which contains all elements of this stack which also appear in that. If an element value x appears n times in that, then the first n occurrences of x will be retained in the
result, but any following occurrences will be omitted.
Definition Classes
SeqLike → GenSeqLike
Full Signature
def intersect[B >: A](that: GenSeq[B]): Stack[A]
86. def isDefinedAt(idx: Int): Boolean
Tests whether this stack contains given index.
Tests whether this stack contains given index.
The implementations of methods apply and isDefinedAt turn a Seq[A] into a PartialFunction[Int, A].
the index to test
true if this stack contains an element at position idx, false otherwise.
Definition Classes
87. def isEmpty: Boolean
Checks if the stack is empty.
88. final def isInstanceOf[T0]: Boolean
Test whether the dynamic type of the receiver object is T0.
Test whether the dynamic type of the receiver object is T0.
Note that the result of the test is modulo Scala's erasure semantics. Therefore the expression 1.isInstanceOf[String] will return false, while the expression List(1).isInstanceOf[List[String]]
will return true. In the latter example, because the type argument is erased as part of compilation it is not possible to check whether the contents of the list are of the specified type.
true if the receiver object is an instance of erasure of type T0; false otherwise.
Definition Classes
89. def isParIterable: Boolean
91. def isParallel: Boolean
92. final def isTraversableAgain: Boolean
Tests whether this stack can be repeatedly traversed.
93. def iterator: Iterator[A]
Returns an iterator over all elements on the stack.
Returns an iterator over all elements on the stack. This iterator is stable with respect to state changes in the stack object; i.e. such changes will not be reflected in the iterator. The
iterator issues elements in the reversed order they were inserted into the stack (LIFO order).
an iterator over all stack elements.
Definition Classes
Stack → IterableLike → GenIterableLike
(Changed in version 2.8.0) iterator traverses in FIFO order.
94. def last: A
Selects the last element.
Selects the last element.
The last element of this stack.
Definition Classes
TraversableLike → GenTraversableLike
Exceptions thrown
NoSuchElementException if the stack is empty.
95. def lastIndexOf(elem: A, end: Int): Int
[use case] Finds index of last occurrence of some value in this stack before or at a given end index.
[use case]
Finds index of last occurrence of some value in this stack before or at a given end index.
the element value to search for.
the end index.
the index <= end of the last element of this stack that is equal (as determined by ==) to elem, or -1, if none exists.
Definition Classes
Full Signature
def lastIndexOf[B >: A](elem: B, end: Int): Int
96. def lastIndexOf(elem: A): Int
[use case] Finds index of last occurrence of some value in this stack.
[use case]
Finds index of last occurrence of some value in this stack.
the element value to search for.
the index of the last element of this stack that is equal (as determined by ==) to elem, or -1, if none exists.
Definition Classes
Full Signature
def lastIndexOf[B >: A](elem: B): Int
97. def lastIndexOfSlice[B >: A](that: GenSeq[B], end: Int): Int
Finds last index before or at a given end index where this stack contains a given sequence as a slice.
Finds last index before or at a given end index where this stack contains a given sequence as a slice.
the sequence to test
the end index
the last index <= end such that the elements of this stack starting at this index match the elements of sequence that, or -1, if no such subsequence exists.
Definition Classes
98. def lastIndexOfSlice[B >: A](that: GenSeq[B]): Int
Finds last index where this stack contains a given sequence as a slice.
Finds last index where this stack contains a given sequence as a slice.
the sequence to test
the last index such that the elements of this stack starting at this index match the elements of sequence that, or -1, if no such subsequence exists.
Definition Classes
99. def lastIndexWhere(p: (A) ⇒ Boolean, end: Int): Int
Finds index of last element satisfying some predicate before or at given end index.
Finds index of last element satisfying some predicate before or at given end index.
the predicate used to test elements.
the index <= end of the last element of this stack that satisfies the predicate p, or -1, if none exists.
Definition Classes
SeqLike → GenSeqLike
100. def lastIndexWhere(p: (A) ⇒ Boolean): Int
Finds index of last element satisfying some predicate.
Finds index of last element satisfying some predicate.
the predicate used to test elements.
the index of the last element of this stack that satisfies the predicate p, or -1, if none exists.
Definition Classes
101. def lastOption: Option[A]
Optionally selects the last element.
Optionally selects the last element.
the last element of this stack if it is nonempty, None if it is empty.
Definition Classes
TraversableLike → GenTraversableLike
102. def length: Int
The number of elements in the stack
The number of elements in the stack
the number of elements in this stack.
Definition Classes
Stack → SeqLike → GenSeqLike
103. def lengthCompare(len: Int): Int
Compares the length of this stack to a test value.
Compares the length of this stack to a test value.
the test value that gets compared with the length.
A value x where
x < 0 if this.length < len
x == 0 if this.length == len
x > 0 if this.length > len
The method as implemented here does not call length directly; its running time is O(length min len) instead of O(length). The method should be overridden if computing length is cheap.
Definition Classes
104. def lift: (Int) ⇒ Option[A]
Turns this partial function into a plain function returning an Option result.
Turns this partial function into a plain function returning an Option result.
a function that takes an argument x to Some(this(x)) if this is defined for x, and to None otherwise.
Definition Classes
See also
105. def map[B](f: (A) ⇒ B): Stack[B]
[use case] Builds a new collection by applying a function to all elements of this stack.
[use case]
Builds a new collection by applying a function to all elements of this stack.
the element type of the returned collection.
the function to apply to each element.
a new stack resulting from applying the given function f to each element of this stack and collecting the results.
Definition Classes
TraversableLike → GenTraversableLike → FilterMonadic
Full Signature
def map[B, That](f: (A) ⇒ B)(implicit bf: CanBuildFrom[Stack[A], B, That]): That
106. def max: A
[use case] Finds the largest element.
107. def maxBy[B](f: (A) ⇒ B)(implicit cmp: Ordering[B]): A
108. def min: A
[use case] Finds the smallest element.
109. def minBy[B](f: (A) ⇒ B)(implicit cmp: Ordering[B]): A
110. def mkString: String
Displays all elements of this stack in a string.
Displays all elements of this stack in a string.
a string representation of this stack. In the resulting string the string representations (w.r.t. the method toString) of all elements of this stack follow each other without any separator string.
Definition Classes
TraversableOnce → GenTraversableOnce
111. def mkString(sep: String): String
Displays all elements of this stack in a string using a separator string.
Displays all elements of this stack in a string using a separator string.
the separator string.
a string representation of this stack. In the resulting string the string representations (w.r.t. the method toString) of all elements of this stack are separated by the string sep.
Definition Classes
TraversableOnce → GenTraversableOnce
1. List(1, 2, 3).mkString("|") = "1|2|3"
112. def mkString(start: String, sep: String, end: String): String
Displays all elements of this stack in a string using start, end, and separator strings.
Displays all elements of this stack in a string using start, end, and separator strings.
the starting string.
the separator string.
the ending string.
a string representation of this stack. The resulting string begins with the string start and ends with the string end. Inside, the string representations (w.r.t. the method toString) of all
elements of this stack are separated by the string sep.
Definition Classes
TraversableOnce → GenTraversableOnce
1. List(1, 2, 3).mkString("(", "; ", ")") = "(1; 2; 3)"
113. final def ne(arg0: AnyRef): Boolean
Equivalent to !(this eq that).
Equivalent to !(this eq that).
true if the argument is not a reference to the receiver object; false otherwise.
Definition Classes
114. def newBuilder: Builder[A, Stack[A]]
The builder that builds instances of type Stack[A]
115. def nonEmpty: Boolean
Tests whether the stack is not empty.
Tests whether the stack is not empty.
true if the stack contains at least one element, false otherwise.
Definition Classes
TraversableOnce → GenTraversableOnce
116. final def notify(): Unit
Wakes up a single thread that is waiting on the receiver object's monitor.
Wakes up a single thread that is waiting on the receiver object's monitor.
Definition Classes
not specified by SLS as a member of AnyRef
117. final def notifyAll(): Unit
Wakes up all threads that are waiting on the receiver object's monitor.
Wakes up all threads that are waiting on the receiver object's monitor.
Definition Classes
not specified by SLS as a member of AnyRef
118. def orElse[A1 <: Int, B1 >: A](that: PartialFunction[A1, B1]): PartialFunction[A1, B1]
Composes this partial function with a fallback partial function which gets applied where this partial function is not defined.
Composes this partial function with a fallback partial function which gets applied where this partial function is not defined.
the argument type of the fallback function
the result type of the fallback function
the fallback function
a partial function which has as domain the union of the domains of this partial function and that. The resulting partial function takes x to this(x) where this is defined, and to that(x)
where it is not.
Definition Classes
119. def padTo(len: Int, elem: A): Stack[A]
[use case] A copy of this stack with an element value appended until a given target length is reached.
[use case]
A copy of this stack with an element value appended until a given target length is reached.
the target length
the padding value
a new stack consisting of all elements of this stack followed by the minimal number of occurrences of elem so that the resulting stack has a length of at least len.
Definition Classes
SeqLike → GenSeqLike
Full Signature
def padTo[B >: A, That](len: Int, elem: B)(implicit bf: CanBuildFrom[Stack[A], B, That]): That
120. def par: ParSeq[A]
Returns a parallel implementation of this collection.
Returns a parallel implementation of this collection.
For most collection types, this method creates a new parallel collection by copying all the elements. For these collections, par takes linear time. Mutable collections in this category do not
produce a mutable parallel collection that has the same underlying dataset, so changes in one collection will not be reflected in the other one.
Specific collections (e.g. ParArray or mutable.ParHashMap) override this default behaviour by creating a parallel collection which shares the same underlying dataset. For these collections, par
takes constant or sublinear time.
All parallel collections return a reference to themselves.
a parallel implementation of this collection
Definition Classes
121. protected[this] def parCombiner: Combiner[A, ParSeq[A]]
The default par implementation uses the combiner provided by this method to create a new parallel collection.
The default par implementation uses the combiner provided by this method to create a new parallel collection.
a combiner for the parallel collection of type ParRepr
Definition Classes
SeqLike → SeqLike → TraversableLike → Parallelizable
122. def partition(p: (A) ⇒ Boolean): (Stack[A], Stack[A])
Partitions this stack in two stacks according to a predicate.
Partitions this stack in two stacks according to a predicate.
the predicate on which to partition.
a pair of stacks: the first stack consists of all elements that satisfy the predicate p and the second stack consists of all elements that don't. The relative order of the elements in the
resulting stacks is the same as in the original stack.
Definition Classes
TraversableLike → GenTraversableLike
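A brief sketch of partition (shown with Scala 2.13's mutable.Stack). Both result stacks keep the relative order of the original:

```scala
import scala.collection.mutable.Stack

// First stack: elements satisfying the predicate; second: the rest
val (evens, odds) = Stack(1, 2, 3, 4, 5).partition(_ % 2 == 0)
```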
123. def patch(from: Int, that: GenSeq[A], replaced: Int): Stack[A]
[use case] Produces a new stack where a slice of elements in this stack is replaced by another sequence.
[use case]
Produces a new stack where a slice of elements in this stack is replaced by another sequence.
the index of the first replaced element
the number of elements to drop in the original stack
a new stack consisting of all elements of this stack except that replaced elements starting from from are replaced by patch.
Definition Classes
SeqLike → GenSeqLike
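As a sketch (Scala 2.13 semantics assumed), patch drops replaced elements starting at from and splices in the given sequence; the patch need not have the same length as the slice it replaces:

```scala
import scala.collection.mutable.Stack

// Replace 2 elements starting at index 1 with the two-element patch
val patched = Stack(1, 2, 3, 4, 5).patch(1, List(9, 8), 2)
```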
124. Iterates over distinct permutations.
Iterates over distinct permutations.
An Iterator which traverses the distinct permutations of this stack.
Definition Classes
1. "abb".permutations = Iterator(abb, bab, bba)
125. def pop(): A
Removes the top element from the stack.
Removes the top element from the stack.
the top element
Exceptions thrown
126. def prefixLength(p: (A) ⇒ Boolean): Int
Returns the length of the longest prefix whose elements all satisfy some predicate.
Returns the length of the longest prefix whose elements all satisfy some predicate.
the predicate used to test elements.
the length of the longest prefix of this stack such that every element of the segment satisfies the predicate p.
Definition Classes
127. def product: A
[use case] Multiplies up the elements of this collection.
[use case]
Multiplies up the elements of this collection.
the product of all elements in this stack of numbers of type Int. Instead of Int, any other type T with an implicit Numeric[T] implementation can be used as element type of the stack and as
result type of product. Examples of such types are: Long, Float, Double, BigInt.
Definition Classes
TraversableOnce → GenTraversableOnce
Full Signature
def product[B >: A](implicit num: Numeric[B]): B
128. def push(elem1: A, elem2: A, elems: A*): Stack.this.type
Push two or more elements onto the stack.
Push two or more elements onto the stack. The last element of the sequence will be on top of the new stack.
the element sequence.
the stack with the new elements on top.
129. def push(elem: A): Stack.this.type
Push an element on the stack.
Push an element on the stack.
the element to push on the stack.
the stack with the new element on top.
130. Push all elements in the given traversable object onto the stack.
Push all elements in the given traversable object onto the stack. The last element in the traversable object will be on top of the new stack.
the traversable object.
the stack with the new elements on top.
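The basic push/top/pop discipline above can be sketched as follows (using Scala 2.13's mutable.Stack; top inspects without removing, pop removes and returns):

```scala
import scala.collection.mutable.Stack

val s = Stack[Int]()
s.push(1)
s.push(2)
s.push(3)          // 3 is now on top
val t = s.top      // inspect the top element without removing it
val p = s.pop()    // remove and return the top element
```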
131. def reduce[A1 >: A](op: (A1, A1) ⇒ A1): A1
Reduces the elements of this stack using the specified associative binary operator.
Reduces the elements of this stack using the specified associative binary operator.
The order in which operations are performed on elements is unspecified and may be nondeterministic.
A type parameter for the binary operator, a supertype of A.
A binary operator that must be associative.
The result of applying reduce operator op between all the elements if the stack is nonempty.
Definition Classes
TraversableOnce → GenTraversableOnce
Exceptions thrown
if this stack is empty.
132. def reduceLeft[B >: A](op: (B, A) ⇒ B): B
Applies a binary operator to all elements of this stack, going left to right.
Applies a binary operator to all elements of this stack, going left to right.
the result type of the binary operator.
the binary operator.
the result of inserting op between consecutive elements of this stack, going left to right:
op( op( ... op(x_1, x_2) ..., x_{n-1}), x_n)
where x_1, ..., x_n are the elements of this stack.
Definition Classes
Exceptions thrown
if this stack is empty.
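A short sketch of the left-to-right application order (Scala 2.13 semantics assumed); subtraction makes the associativity visible:

```scala
import scala.collection.mutable.Stack

// ((1 - 2) - 3) - 4 == -8: op is inserted between consecutive elements, left to right
val r = Stack(1, 2, 3, 4).reduceLeft(_ - _)
```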
133. def reduceLeftOption[B >: A](op: (B, A) ⇒ B): Option[B]
Optionally applies a binary operator to all elements of this stack, going left to right.
Optionally applies a binary operator to all elements of this stack, going left to right.
the result type of the binary operator.
the binary operator.
an option value containing the result of reduceLeft(op) if this stack is nonempty, None otherwise.
Definition Classes
TraversableOnce → GenTraversableOnce
134. def reduceOption[A1 >: A](op: (A1, A1) ⇒ A1): Option[A1]
Reduces the elements of this stack, if any, using the specified associative binary operator.
Reduces the elements of this stack, if any, using the specified associative binary operator.
The order in which operations are performed on elements is unspecified and may be nondeterministic.
A type parameter for the binary operator, a supertype of A.
A binary operator that must be associative.
An option value containing result of applying reduce operator op between all the elements if the collection is nonempty, and None otherwise.
Definition Classes
TraversableOnce → GenTraversableOnce
135. def reduceRight[B >: A](op: (A, B) ⇒ B): B
Applies a binary operator to all elements of this stack, going right to left.
Applies a binary operator to all elements of this stack, going right to left.
the result type of the binary operator.
the binary operator.
the result of inserting op between consecutive elements of this stack, going right to left:
op(x_1, op(x_2, ..., op(x_{n-1}, x_n)...))
where x_1, ..., x_n are the elements of this stack.
Definition Classes
IterableLike → TraversableOnce → GenTraversableOnce
Exceptions thrown
if this stack is empty.
136. def reduceRightOption[B >: A](op: (A, B) ⇒ B): Option[B]
Optionally applies a binary operator to all elements of this stack, going right to left.
Optionally applies a binary operator to all elements of this stack, going right to left.
the result type of the binary operator.
the binary operator.
an option value containing the result of reduceRight(op) if this stack is nonempty, None otherwise.
Definition Classes
TraversableOnce → GenTraversableOnce
137. def repr: Stack[A]
The collection of type stack underlying this TraversableLike object.
The collection of type stack underlying this TraversableLike object. By default this is implemented as the TraversableLike object itself, but this can be overridden.
Definition Classes
TraversableLike → GenTraversableLike
138. def reverse: Stack[A]
Returns a new stack with elements in reversed order.
Returns a new stack with elements in reversed order.
A new stack with all elements of this stack in reversed order.
Definition Classes
SeqLike → GenSeqLike
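A minimal sketch (Scala 2.13 semantics assumed): reverse builds a new stack and leaves the receiver untouched:

```scala
import scala.collection.mutable.Stack

val original = Stack(1, 2, 3)
val rev = original.reverse   // a new stack; original is unchanged
```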
139. def reverseIterator: Iterator[A]
An iterator yielding elements in reversed order.
An iterator yielding elements in reversed order.
Note: xs.reverseIterator is the same as xs.reverse.iterator but might be more efficient.
an iterator yielding the elements of this stack in reversed order
Definition Classes
140. def reverseMap[B](f: (A) ⇒ B): Stack[B]
[use case] Builds a new collection by applying a function to all elements of this stack and collecting the results in reversed order.
[use case]
Builds a new collection by applying a function to all elements of this stack and collecting the results in reversed order.
Note: xs.reverseMap(f) is the same as xs.reverse.map(f) but might be more efficient.
the element type of the returned collection.
the function to apply to each element.
a new stack resulting from applying the given function f to each element of this stack and collecting the results in reversed order.
Definition Classes
SeqLike → GenSeqLike
Full Signature
def reverseMap[B, That](f: (A) ⇒ B)(implicit bf: CanBuildFrom[Stack[A], B, That]): That
141. def reversed: List[A]
Definition Classes
142. def runWith[U](action: (A) ⇒ U): (Int) ⇒ Boolean
Composes this partial function with an action function which gets applied to results of this partial function.
Composes this partial function with an action function which gets applied to results of this partial function. The action function is invoked only for its side effects; its result is ignored.
Note that expression pf.runWith(action)(x) is equivalent to
if(pf isDefinedAt x) { action(pf(x)); true } else false
except that runWith is implemented via applyOrElse and thus potentially more efficient. Using runWith avoids double evaluation of pattern matchers and guards for partial function literals.
the action function
a function which maps arguments x to isDefinedAt(x). The resulting function runs action(this(x)) where this is defined.
Definition Classes
See also
143. [use case] Checks if the other iterable collection contains the same elements in the same order as this stack.
[use case]
Checks if the other iterable collection contains the same elements in the same order as this stack.
the collection to compare with.
true, if both collections contain the same elements in the same order, false otherwise.
Definition Classes
IterableLike → GenIterableLike
144. def scan[B >: A, That](z: B)(op: (B, B) ⇒ B)(implicit cbf: CanBuildFrom[Stack[A], B, That]): That
Computes a prefix scan of the elements of the collection.
Computes a prefix scan of the elements of the collection.
Note: The neutral element z may be applied more than once.
element type of the resulting collection
type of the resulting collection
neutral element for the operator op
the associative operator for the scan
combiner factory which provides a combiner
a new stack containing the prefix scan of the elements in this stack
Definition Classes
TraversableLike → GenTraversableLike
145. def scanLeft[B, That](z: B)(op: (B, A) ⇒ B)(implicit bf: CanBuildFrom[Stack[A], B, That]): That
Produces a collection containing cumulative results of applying the operator going left to right.
Produces a collection containing cumulative results of applying the operator going left to right.
the type of the elements in the resulting collection
the actual type of the resulting collection
the initial value
the binary operator applied to the intermediate result and the element
an implicit value of class CanBuildFrom which determines the result class That from the current representation type Repr and the new element type B.
collection with intermediate results
Definition Classes
TraversableLike → GenTraversableLike
146. def scanRight[B, That](z: B)(op: (A, B) ⇒ B)(implicit bf: CanBuildFrom[Stack[A], B, That]): That
Produces a collection containing cumulative results of applying the operator going right to left.
Produces a collection containing cumulative results of applying the operator going right to left. The head of the collection is the last cumulative result.
List(1, 2, 3, 4).scanRight(0)(_ + _) == List(10, 9, 7, 4, 0)
the type of the elements in the resulting collection
the actual type of the resulting collection
the initial value
the binary operator applied to the intermediate result and the element
an implicit value of class CanBuildFrom which determines the result class That from the current representation type Repr and the new element type B.
collection with intermediate results
Definition Classes
TraversableLike → GenTraversableLike
(Changed in version 2.9.0) The behavior of scanRight has changed. The previous behavior can be reproduced with scanRight.reverse.
147. def segmentLength(p: (A) ⇒ Boolean, from: Int): Int
Computes length of longest segment whose elements all satisfy some predicate.
Computes length of longest segment whose elements all satisfy some predicate.
the predicate used to test elements.
the index where the search starts.
the length of the longest segment of this stack starting from index from such that every element of the segment satisfies the predicate p.
Definition Classes
SeqLike → GenSeqLike
148. def seq: Seq[A]
A version of this collection with all of the operations implemented sequentially (i.
A version of this collection with all of the operations implemented sequentially (i.e. in a single-threaded manner).
This method returns a reference to this collection. In parallel collections, it is redefined to return a sequential implementation of this collection. In both cases, it has O(1) complexity.
a sequential view of the collection.
Definition Classes
Seq → Seq → GenSeq → GenSeqLike → Iterable → Iterable → GenIterable → Traversable → Traversable → GenTraversable → Parallelizable → TraversableOnce → GenTraversableOnce
149. def size: Int
The size of this stack, equivalent to length.
150. def slice(from: Int, until: Int): Stack[A]
Selects an interval of elements.
Selects an interval of elements. The returned collection is made up of all elements x which satisfy the invariant:
from <= indexOf(x) < until
a stack containing the elements greater than or equal to index from extending up to (but not including) index until of this stack.
Definition Classes
IterableLike → TraversableLike → GenTraversableLike
151. Groups elements in fixed size blocks by passing a "sliding window" over them (as opposed to partitioning them, as is done in grouped.)
Groups elements in fixed size blocks by passing a "sliding window" over them (as opposed to partitioning them, as is done in grouped.)
the number of elements per group
the distance between the first elements of successive groups (defaults to 1)
An iterator producing stacks of size size, except that the last element (and only the last) will be truncated if there are fewer than size elements remaining.
Definition Classes
See also
scala.collection.Iterator, method sliding
152. Groups elements in fixed size blocks by passing a "sliding window" over them (as opposed to partitioning them, as is done in grouped.)
Groups elements in fixed size blocks by passing a "sliding window" over them (as opposed to partitioning them, as is done in grouped.)
the number of elements per group
An iterator producing stacks of size size, except that the last element (and only the last) will be truncated if there are fewer than size elements remaining.
Definition Classes
See also
scala.collection.Iterator, method sliding
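The windowed variant with an explicit step can be sketched as follows (Scala 2.13 semantics assumed); consecutive windows overlap when step is smaller than the window size:

```scala
import scala.collection.mutable.Stack

// Windows of size 3, advancing 2 elements between window starts
val windows = Stack(1, 2, 3, 4, 5).sliding(3, 2).toList
```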
153. def sortBy[B](f: (A) ⇒ B)(implicit ord: math.Ordering[B]): Stack[A]
Sorts this Stack according to the Ordering which results from transforming an implicitly given Ordering with a transformation function.
Sorts this Stack according to the Ordering which results from transforming an implicitly given Ordering with a transformation function.
the target type of the transformation f, and the type where the ordering ord is defined.
the transformation function mapping elements to some other domain B.
the ordering assumed on domain B.
a stack consisting of the elements of this stack sorted according to the ordering where x < y if ord.lt(f(x), f(y)).
Definition Classes
1. val words = "The quick brown fox jumped over the lazy dog".split(' ')
// this works because scala.Ordering will implicitly provide an Ordering[Tuple2[Int, Char]]
words.sortBy(x => (x.length, x.head))
res0: Array[String] = Array(The, dog, fox, the, lazy, over, brown, quick, jumped)
See also
154. def sortWith(lt: (A, A) ⇒ Boolean): Stack[A]
Sorts this stack according to a comparison function.
Sorts this stack according to a comparison function.
The sort is stable. That is, elements that are equal (as determined by lt) appear in the same order in the sorted sequence as in the original.
the comparison function which tests whether its first argument precedes its second argument in the desired ordering.
a stack consisting of the elements of this stack sorted according to the comparison function lt.
Definition Classes
1. List("Steve", "Tom", "John", "Bob").sortWith(_.compareTo(_) < 0) =
List("Bob", "John", "Steve", "Tom")
155. def sorted[B >: A](implicit ord: math.Ordering[B]): Stack[A]
Sorts this stack according to an Ordering.
Sorts this stack according to an Ordering.
The sort is stable. That is, elements that are equal (as determined by lt) appear in the same order in the sorted sequence as in the original.
the ordering to be used to compare elements.
a stack consisting of the elements of this stack sorted according to the ordering ord.
Definition Classes
See also
156. Splits this stack into a prefix/suffix pair according to a predicate.
Splits this stack into a prefix/suffix pair according to a predicate.
Note: c span p is equivalent to (but possibly more efficient than) (c takeWhile p, c dropWhile p), provided the evaluation of the predicate p does not cause any side-effects.
a pair consisting of the longest prefix of this stack whose elements all satisfy p, and the rest of this stack.
Definition Classes
TraversableLike → GenTraversableLike
157. def splitAt(n: Int): (Stack[A], Stack[A])
Splits this stack into two at a given position.
Splits this stack into two at a given position. Note: c splitAt n is equivalent to (but possibly more efficient than) (c take n, c drop n).
the position at which to split.
a pair of stacks consisting of the first n elements of this stack, and the other elements.
Definition Classes
TraversableLike → GenTraversableLike
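The splitAt equivalence noted above, (c take n, c drop n), can be sketched as (Scala 2.13 semantics assumed):

```scala
import scala.collection.mutable.Stack

val (front, rest) = Stack(1, 2, 3, 4, 5).splitAt(2)
```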
158. def startsWith[B](that: GenSeq[B], offset: Int): Boolean
Tests whether this stack contains the given sequence at a given index.
Tests whether this stack contains the given sequence at a given index.
Note: If both the receiver object this and the argument that are infinite sequences, this method may not terminate.
the sequence to test
the index where the sequence is searched.
true if the sequence that is contained in this stack at index offset, otherwise false.
Definition Classes
SeqLike → GenSeqLike
159. def startsWith[B](that: GenSeq[B]): Boolean
Tests whether this stack starts with the given sequence.
Tests whether this stack starts with the given sequence.
the sequence to test
true if this collection has that as a prefix, false otherwise.
Definition Classes
160. def stringPrefix: String
Defines the prefix of this object's toString representation.
Defines the prefix of this object's toString representation.
a string representation which starts the result of toString applied to this stack. By default the string prefix is the simple name of the collection class stack.
Definition Classes
TraversableLike → GenTraversableLike
161. def sum: A
[use case] Sums up the elements of this collection.
[use case]
Sums up the elements of this collection.
the sum of all elements in this stack of numbers of type Int. Instead of Int, any other type T with an implicit Numeric[T] implementation can be used as element type of the stack and as
result type of sum. Examples of such types are: Long, Float, Double, BigInt.
Definition Classes
TraversableOnce → GenTraversableOnce
Full Signature
def sum[B >: A](implicit num: Numeric[B]): B
162. final def synchronized[T0](arg0: ⇒ T0): T0
163. def tail: Stack[A]
Selects all elements except the first.
Selects all elements except the first.
a stack consisting of all elements of this stack except the first one.
Definition Classes
TraversableLike → GenTraversableLike
Exceptions thrown
if the stack is empty.
164. Iterates over the tails of this stack.
Iterates over the tails of this stack. The first value will be this stack and the final one will be an empty stack, with the intervening values the results of successive applications of tail.
an iterator over all the tails of this stack
Definition Classes
1. List(1,2,3).tails = Iterator(List(1,2,3), List(2,3), List(3), Nil)
165. def take(n: Int): Stack[A]
Selects first n elements.
Selects first n elements.
the number of elements to take from this stack.
a stack consisting only of the first n elements of this stack, or else the whole stack, if it has less than n elements.
Definition Classes
IterableLike → TraversableLike → GenTraversableLike
166. def takeRight(n: Int): Stack[A]
Selects last n elements.
the number of elements to take
a stack consisting only of the last n elements of this stack, or else the whole stack, if it has less than n elements.
Definition Classes
167. def takeWhile(p: (A) ⇒ Boolean): Stack[A]
Takes longest prefix of elements that satisfy a predicate.
Takes longest prefix of elements that satisfy a predicate.
the longest prefix of this stack whose elements all satisfy the predicate p.
Definition Classes
IterableLike → TraversableLike → GenTraversableLike
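A small sketch (Scala 2.13 semantics assumed): takeWhile stops at the first failing element, even if later elements would satisfy the predicate again:

```scala
import scala.collection.mutable.Stack

// Stops at 3; the trailing 1 and 2 are not included
val prefix = Stack(1, 2, 3, 1, 2).takeWhile(_ < 3)
```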
168. The underlying collection seen as an instance of Stack.
The underlying collection seen as an instance of Stack. By default this is implemented as the current collection object itself, but this can be overridden.
Definition Classes
SeqLike → IterableLike → TraversableLike
169. def to[Col[_]]: Col[A]
[use case] Converts this stack into another by copying all elements.
[use case]
Converts this stack into another by copying all elements.
The collection type to build.
a new collection containing all elements of this stack.
Definition Classes
TraversableLike → TraversableOnce → GenTraversableOnce
170. def toArray: Array[A]
[use case] Converts this stack to an array.
[use case]
Converts this stack to an array.
an array containing all elements of this stack. A ClassTag must be available for the element type of this stack.
Definition Classes
TraversableOnce → GenTraversableOnce
Full Signature
def toArray[B >: A](implicit arg0: ClassTag[B]): Array[B]
171. def toBuffer[B >: A]: Buffer[B]
Converts this stack to a mutable buffer.
172. A conversion from collections of type Repr to Stack objects.
A conversion from collections of type Repr to Stack objects. By default this is implemented as just a cast, but this can be overridden.
Definition Classes
SeqLike → IterableLike → TraversableLike
173. Converts this stack to an indexed sequence.
174. Converts this stack to an iterable collection.
Converts this stack to an iterable collection. Note that the choice of target Iterable is lazy in this default implementation as this TraversableOnce may be lazy and unevaluated (i.e. it may be
an iterator which is only traversable once).
an Iterable containing all elements of this stack.
Definition Classes
IterableLike → TraversableOnce → GenTraversableOnce
175. def toIterator: Iterator[A]
Returns an Iterator over the elements in this stack.
Returns an Iterator over the elements in this stack. Will return the same Iterator if this instance is already an Iterator.
an Iterator containing all elements of this stack.
Definition Classes
IterableLike → TraversableLike → GenTraversableOnce
176. Creates a list of all stack elements in LIFO order.
Creates a list of all stack elements in LIFO order.
the created list.
Definition Classes
Stack → TraversableOnce → GenTraversableOnce
(Changed in version 2.8.0) toList traverses in FIFO order.
177. [use case] Converts this stack to a map.
[use case]
Converts this stack to a map. This method is unavailable unless the elements are members of Tuple2, each ((T, U)) becoming a key-value pair in the map. Duplicate keys will be overwritten by later
keys: if this is an unordered collection, which key is in the resulting map is undefined.
a map of type immutable.Map[T, U] containing all key/value pairs of type (T, U) of this stack.
Definition Classes
TraversableOnce → GenTraversableOnce
Full Signature
def toMap[T, U](implicit ev: <:<[A, (T, U)]): immutable.Map[T, U]
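The duplicate-key rule above can be sketched as follows (Scala 2.13 semantics assumed; in an ordered collection like a stack, later keys win):

```scala
import scala.collection.mutable.Stack

// The later ("a" -> 3) overwrites the earlier ("a" -> 1)
val m = Stack("a" -> 1, "b" -> 2, "a" -> 3).toMap
```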
178. def toParArray: ParArray[A]
179. Converts this stack to a sequence.
180. Converts this stack to a set.
181. Converts this stack to a stream.
182. def toString(): String
Converts this stack to a string.
Converts this stack to a string.
a string representation of this collection. By default this string consists of the stringPrefix of this stack, followed by all elements separated by commas and enclosed in parentheses.
Definition Classes
SeqLike → TraversableLike → Any
183. Converts this stack to an unspecified Traversable.
Converts this stack to an unspecified Traversable. Will return the same collection if this instance is already Traversable.
a Traversable containing all elements of this stack.
Definition Classes
TraversableLike → TraversableOnce → GenTraversableOnce
184. def toVector: Vector[A]
Converts this stack to a Vector.
185. def top: A
Returns the top element of the stack.
Returns the top element of the stack. This method will not remove the element from the stack. An error is signaled if there is no element on the stack.
the top element
Exceptions thrown
186. def transform(f: (A) ⇒ A): Stack.this.type
Applies a transformation function to all values contained in this sequence.
Applies a transformation function to all values contained in this sequence. The transformation function produces new values from existing elements.
the transformation to apply
the sequence itself.
Definition Classes
187. def transpose[B](implicit asTraversable: (A) ⇒ GenTraversableOnce[B]): Stack[Stack[B]]
Transposes this stack of traversable collections into a stack of stacks.
Transposes this stack of traversable collections into a stack of stacks.
the type of the elements of each traversable collection.
an implicit conversion which asserts that the element type of this stack is a Traversable.
a two-dimensional stack of stacks which has as nth row the nth column of this stack.
Definition Classes
(Changed in version 2.9.0) transpose throws an IllegalArgumentException if collections are not uniformly sized.
Exceptions thrown
if all collections in this stack are not of the same size.
188. [use case] Produces a new sequence which contains all elements of this stack and also all elements of a given sequence.
[use case]
Produces a new sequence which contains all elements of this stack and also all elements of a given sequence. xs union ys is equivalent to xs ++ ys.
Another way to express this is that xs union ys computes the order-preserving multi-set union of xs and ys. union is hence a counterpart of diff and intersect, which also work on multi-sets.
the sequence to add.
a new stack which contains all elements of this stack followed by all elements of that.
Definition Classes
SeqLike → GenSeqLike
189. def unzip[A1, A2](implicit asPair: (A) ⇒ (A1, A2)): (Stack[A1], Stack[A2])
Converts this stack of pairs into two collections of the first and second half of each pair.
Converts this stack of pairs into two collections of the first and second half of each pair.
the type of the first half of the element pairs
the type of the second half of the element pairs
an implicit conversion which asserts that the element type of this stack is a pair.
a pair of stacks, containing the first, respectively second, half of each element pair of this stack.
Definition Classes
190. def unzip3[A1, A2, A3](implicit asTriple: (A) ⇒ (A1, A2, A3)): (Stack[A1], Stack[A2], Stack[A3])
Converts this stack of triples into three collections of the first, second, and third element of each triple.
Converts this stack of triples into three collections of the first, second, and third element of each triple.
the type of the first member of the element triples
the type of the second member of the element triples
the type of the third member of the element triples
an implicit conversion which asserts that the element type of this stack is a triple.
a triple of stacks, containing the first, second, respectively third, member of each element triple of this stack.
Definition Classes
191. def update(n: Int, newelem: A): Unit
Replace element at index n with the new element newelem.
Replace element at index n with the new element newelem.
This is a linear time operation.
the index of the element to replace.
the new element.
Definition Classes
Stack → SeqLike
Exceptions thrown
if the index is not valid
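A minimal sketch of in-place update (Scala 2.13 semantics assumed); update(n, x) is what the assignment syntax s(n) = x desugars to:

```scala
import scala.collection.mutable.Stack

val s = Stack(1, 2, 3)
s.update(1, 9)       // same as s(1) = 9; mutates the stack in place
```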
192. def updated(index: Int, elem: A): Stack[A]
[use case] A copy of this stack with one single replaced element.
[use case]
A copy of this stack with one single replaced element.
the position of the replacement
the replacing element
a copy of this stack with the element at position index replaced by elem.
Definition Classes
SeqLike → GenSeqLike
Full Signature
def updated[B >: A, That](index: Int, elem: B)(implicit bf: CanBuildFrom[Stack[A], B, That]): That
193. Creates a non-strict view of a slice of this stack.
Creates a non-strict view of a slice of this stack.
Note: the difference between view and slice is that view produces a view of the current stack, whereas slice produces a new stack.
Note: view(from, to) is equivalent to view.slice(from, to)
the index of the first element of the view
the index of the element following the view
a non-strict view of a slice of this stack, starting at index from and extending up to (but not including) index until.
Definition Classes
SeqLike → IterableLike → TraversableLike
194. Creates a non-strict view of this stack.
195. final def wait(): Unit
196. final def wait(arg0: Long, arg1: Int): Unit
197. final def wait(arg0: Long): Unit
198. Creates a non-strict filter of this stack.
Creates a non-strict filter of this stack.
Note: the difference between c filter p and c withFilter p is that the former creates a new collection, whereas the latter only restricts the domain of subsequent map, flatMap, foreach, and
withFilter operations.
the predicate used to test elements.
an object of class WithFilter, which supports map, flatMap, foreach, and withFilter operations. All these operations apply to those elements of this stack which satisfy the predicate p.
Definition Classes
TraversableLike → FilterMonadic
199. [use case] Returns a stack formed from this stack and another iterable collection by combining corresponding elements in pairs.
[use case]
Returns a stack formed from this stack and another iterable collection by combining corresponding elements in pairs. If one of the two collections is longer than the other, its remaining elements
are ignored.
the type of the second half of the returned pairs
The iterable providing the second half of each result pair
a new stack containing pairs consisting of corresponding elements of this stack and that. The length of the returned collection is the minimum of the lengths of this stack and that.
Definition Classes
IterableLike → GenIterableLike
200. def zipAll[B](that: collection.Iterable[B], thisElem: A, thatElem: B): Stack[(A, B)]
[use case] Returns a stack formed from this stack and another iterable collection by combining corresponding elements in pairs.
[use case]
Returns a stack formed from this stack and another iterable collection by combining corresponding elements in pairs. If one of the two collections is shorter than the other, placeholder elements
are used to extend the shorter collection to the length of the longer.
the type of the second half of the returned pairs
The iterable providing the second half of each result pair
the element to be used to fill up the result if this stack is shorter than that.
the element to be used to fill up the result if that is shorter than this stack.
a new stack containing pairs consisting of corresponding elements of this stack and that. The length of the returned collection is the maximum of the lengths of this stack and that. If this
stack is shorter than that, thisElem values are used to pad the result. If that is shorter than this stack, thatElem values are used to pad the result.
Definition Classes
IterableLike → GenIterableLike
Full Signature
def zipAll[B, A1 >: A, That](that: GenIterable[B], thisElem: A1, thatElem: B)(implicit bf: CanBuildFrom[Stack[A], (A1, B), That]): That
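The padding behavior can be sketched as follows (Scala 2.13 semantics assumed): the shorter side is extended with its placeholder, here "z" for the shorter that collection:

```scala
import scala.collection.mutable.Stack

// that is shorter, so its placeholder "z" pads the remaining pairs
val zipped = Stack(1, 2, 3).zipAll(List("a"), 0, "z")
```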
201. def zipWithIndex: Stack[(A, Int)]
[use case] Zips this stack with its indices.
[use case]
Zips this stack with its indices.
A new stack containing pairs consisting of all elements of this stack paired with their index. Indices start at 0.
Definition Classes
IterableLike → GenIterableLike
Full Signature
def zipWithIndex[A1 >: A, That](implicit bf: CanBuildFrom[Stack[A], (A1, Int), That]): That
1. List("a", "b", "c").zipWithIndex = List(("a", 0), ("b", 1), ("c", 2))
202. def →[B](y: B): (Stack[A], B)
Implicit information
This member is added by an implicit conversion from Stack[A] to ArrowAssoc[Stack[A]] performed by method any2ArrowAssoc in scala.Predef.
Definition Classes
Implicit information
This member is added by an implicit conversion from Stack[A] to MonadOps[A] performed by method MonadOps in scala.collection.TraversableOnce.
This implicitly inherited member is shadowed by one or more members in this class.
To access this member you can use a type ascription:
(stack: MonadOps[A]).filter(p)
Definition Classes
Implicit information
This member is added by an implicit conversion from Stack[A] to MonadOps[A] performed by method MonadOps in scala.collection.TraversableOnce.
This implicitly inherited member is shadowed by one or more members in this class.
To access this member you can use a type ascription:
(stack: MonadOps[A]).flatMap(f)
Definition Classes
Implicit information
This member is added by an implicit conversion from Stack[A] to MonadOps[A] performed by method MonadOps in scala.collection.TraversableOnce.
This implicitly inherited member is shadowed by one or more members in this class.
To access this member you can use a type ascription:
(stack: MonadOps[A]).map(f)
Definition Classes
Implicit information
This member is added by an implicit conversion from Stack[A] to StringAdd performed by method any2stringadd in scala.Predef.
This implicitly inherited member is ambiguous. One or more implicitly inherited members have similar signatures, so calling this member may produce an ambiguous implicit conversion compiler error.
To access this member you can use a type ascription:
(stack: StringAdd).self
Definition Classes
Implicit information
This member is added by an implicit conversion from Stack[A] to StringFormat performed by method any2stringfmt in scala.Predef.
This implicitly inherited member is ambiguous. One or more implicitly inherited members have similar signatures, so calling this member may produce an ambiguous implicit conversion compiler error.
To access this member you can use a type ascription:
(stack: StringFormat).self
Definition Classes
Implicit information
This member is added by an implicit conversion from Stack[A] to MonadOps[A] performed by method MonadOps in scala.collection.TraversableOnce.
This implicitly inherited member is shadowed by one or more members in this class.
To access this member you can use a type ascription:
(stack: MonadOps[A]).withFilter(p)
Definition Classes
A syntactic sugar for out of order folding. See fold.
scala> val a = LinkedList(1,2,3,4)
a: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2, 3, 4)
scala> val b = (a /:\ 5)(_+_)
b: Int = 15
Definition Classes
(Since version 2.10.0) use fold instead
Implicit information
This member is added by an implicit conversion from Stack[A] to ArrowAssoc[Stack[A]] performed by method any2ArrowAssoc in scala.Predef.
This implicitly inherited member is ambiguous. One or more implicitly inherited members have similar signatures, so calling this member may produce an ambiguous implicit conversion compiler error.
To access this member you can use a type ascription:
(stack: ArrowAssoc[Stack[A]]).x
Definition Classes
(Since version 2.10.0) Use leftOfArrow instead
Implicit information
This member is added by an implicit conversion from Stack[A] to Ensuring[Stack[A]] performed by method any2Ensuring in scala.Predef.
This implicitly inherited member is ambiguous. One or more implicitly inherited members have similar signatures, so calling this member may produce an ambiguous implicit conversion compiler error.
To access this member you can use a type ascription:
(stack: Ensuring[Stack[A]]).x
Definition Classes
(Since version 2.10.0) Use resultOfEnsuring instead
[SOLVED] average value
June 11th 2008, 08:57 PM #1
Let A represent the average value of the function f(x) on the interval [0,6]. Is there a value of c for which the average value of f(x) on the interval [0,c] is greater than A? Why or why not?
I really have no idea how to do this problem. Looking at the graph, I thought that there is a value of c, because the area up to x=7 is greater than the area up to x=6? But that's totally just a guess.
You tell us: the average value of a function $f(x)$ on a general interval $[a,b]$ is given by
$\frac{1}{b-a}\int_a^{b}f(x)dx$. Now think graphically: will there be a value of $c$ such that the average value on $[0,c]$ is greater than $A$?
Umm, no, there isn't a value of c? Because at x=6 it's already at a maximum? I'm not sure if I'm thinking in the right direction.
Your reasoning is fine. For any $c \neq 6$, the average will be smaller, since either more points will be below the maximum (for $c > 6$) or the maximum will be lower (for $c < 6$).
You could make this a bit more formal, but I think a simple explanation is probably all that the question is after.
Validation of Inference Procedures for Gene Regulatory Networks
Curr Genomics. Sep 2007; 8(6): 351–359.
The availability of high-throughput genomic data has motivated the development of numerous algorithms to infer gene regulatory networks. The validity of an inference procedure must be evaluated
relative to its ability to infer a model network close to the ground-truth network from which the data have been generated. The input to an inference algorithm is a sample set of data and its output
is a network. Since input, output, and algorithm are mathematical structures, the validity of an inference algorithm is a mathematical issue. This paper formulates validation in terms of a
semi-metric distance between two networks, or the distance between two structures of the same kind deduced from the networks, such as their steady-state distributions or regulatory graphs. The paper
sets up the validation framework, provides examples of distance functions, and applies them to some discrete Markov network models. It also considers approximate validation methods based on data for
which the generating network is not known, the kind of situation one faces when using real data.
Key Words: Epistemology, gene network, inference, validation.
The construction of gene regulatory networks is among the most important problems in systems biology [1-2]. Network models provide quantitative knowledge concerning gene regulation and, from a
translational perspective, they provide a basis for mathematical analyses leading to systems based therapeutic strategies [3]. Network models run the gamut from coarse-grained discrete networks to
the detailed description of stochastic differential equations. The availability of high-throughput genomic data has motivated the development of numerous inference algorithms. The performance, or
validity, of these algorithms must be quantified. An inference algorithm takes a sample set of data as input and outputs a model network. Its validity must be evaluated relative to its ability to
infer a model network close to the ground-truth network from which the data have been generated. Given a hypothetical model, generate data from the model, apply the inference procedure to construct
an inferred model, and compare the hypothetical and inferred models via some objective function.
This paper mathematically formulates validation in terms of the distance between two networks, or the distance between two structures of the same kind deduced from the networks, such as their
steady-state distributions. As a function from a sample set to a class of network models, an inference procedure is a mathematical operator and its performance must be evaluated within a mathematical
framework, in this case, distance functions. The paper sets up the validation framework in general terms, provides examples of distance functions, and applies them to some basic network models. It
also considers approximate validation methods based on data for which the generating network is not known, the situation one faces when using real data.
It is hoped that this paper will help to motivate the study of network validation procedures. While we believe it describes the general setting and the basic requirements for validation, as will be
pointed out, to date, there has been very little study devoted to network validation. There are many subtle statistical issues. If we are to be able to judge the worth of proposed algorithms, then
these issues need to be addressed within a formal mathematical framework.
Although our aim is to consider network inference from a fairly general perspective, to give concrete examples we require some specific models. Thus, we assume the underlying network structure is
composed of a finite node (gene) set, V = {X[1], X[2],…, X[n]}, with each node taking discrete values in [0, d – 1]. The corresponding state space possesses N = d^n states, which we denote by x[1], x
[2],…, x[N]. We express the state x[j] in vector form by x[j] = (x[j1], x[j2],…, x[jn]). For notational convenience we write vectors in row form but treat them as columns when multiplied by a
matrix. The corresponding dynamical system is based on discrete time, t = 0, 1, 2,…, with the state-vector transition X(t) → X(t + 1) at each time instant. The state X = (X[1], X[2],…, X[n]) is often
referred to as a gene activity profile (GAP).
Markov Chains
We assume that the process X(t) is a Markov chain, meaning that the probability of X(t) conditioned on X at t[1] < t[2] < … < t[s] < t is equal to the probability of X(t) conditioned on X(t[s]). We
also assume the process is homogeneous, meaning that the transition probabilities depend only on the time difference, that is, for any t and u, the u-step transition probability,

$p_{jk}(u) = P(X(t+u) = x_k \mid X(t) = x_j),$

depends only on u. We are not asserting that the Markov property and homogeneity are necessary assumptions for gene regulatory networks. We make these assumptions to facilitate mathematically
tractable modeling for the current study. Under these assumptions, we need only consider the one-step transition probability matrix,

$P = (p_{jk})_{N \times N},$

where the one-step transition probability, p[jk], is given by p[jk] = p[jk](1). We refer to P simply as the transition probability matrix. For t = 0, 1, 2,..., the state probability structure of the
network is given by the t-state probability vector

$p(t) = (p_1(t), p_2(t), \ldots, p_N(t)),$

where p[j](t) = P(X(t) = x[j]). p(0) is the initial-state probability vector.
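As a quick numerical sketch of how the state probability vector evolves under P, consider a hypothetical 3-state chain (the matrix entries below are invented for illustration; each row sums to 1):

```python
import numpy as np

# Hypothetical 3-state transition matrix: row j holds the one-step
# probabilities p[jk] of moving from state x_j to state x_k.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

def propagate(p0, P, t):
    """Return the t-step state probability vector, p(t) = p(0) P^t."""
    p = np.asarray(p0, dtype=float)
    for _ in range(t):
        p = p @ P  # one application of the one-step transition matrix
    return p

p0 = np.array([1.0, 0.0, 0.0])  # start deterministically in state x_1
p5 = propagate(p0, P, 5)        # state distribution after five steps
```

Each multiplication by P advances the distribution one time step; p(t) remains a probability vector because the rows of P sum to 1.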
Besides the state one-step probabilities, we can consider the gene one-step probabilities,

$p_{ij,r} = P(X_i(t+1) = r \mid X(t) = x_j).$

If, given the GAP at t, the conditional probabilities of the genes are independent, then

$p_{jk} = \prod_{i=1}^{n} p_{ij,\,x_{ki}}.$
Suppose that gene X[i] at time t + 1 depends only on values of genes (predictors) in a regulatory set, $R_i \subset V$, at time t, the dependency being independent of t. Then the gene one-step probabilities
are given by

$p_{ij,r} = P\left(X_i(t+1) = r \mid X_l(t) = x_{jl} \text{ for } X_l \in R_i\right).$
In this form, we see that the Markov dependencies are restricted to regulatory genes.
The network has a regulatory graph consisting of the n genes and a directed edge from gene X[i] to gene X[j] if $X_i \in R_j$. There is also a state-transition graph whose nodes are the N state vectors.
There is a directed edge from state x[j] to state x[k] if and only if x[j] = X(t) implies x[k] = X(t + 1).
A homogeneous, discrete-time Markov chain with state space {x[1], x[2],…, x[N]} possesses a steady-state distribution (π[1], π[2],…, π[N]) if, for all pairs of states x[k] and x[j], p[jk](u) → π[k]
as u → ∞. If there exists a steady-state distribution, then, regardless of the state x[k], the probability of the Markov chain being in state x[k] in the long run is π[k]. In particular, for any
initial distribution p(0), p[k](t) → π[k] as t → ∞. Not all Markov chains possess steady-state distributions.
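When a steady-state distribution exists, it can be approximated by iterating the one-step update until the distribution stops changing. This is a sketch only (the 3-state matrix is hypothetical, and power iteration is a generic numerical device, not a procedure from the paper):

```python
import numpy as np

# Hypothetical ergodic 3-state transition matrix.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

def steady_state(P, tol=1e-12, max_iter=100_000):
    """Iterate p <- p P from the uniform distribution until convergence.

    For a chain possessing a steady-state distribution pi, the iterates
    converge to pi, which satisfies pi = pi P.
    """
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        q = p @ P
        if np.abs(q - p).sum() < tol:
            return q
        p = q
    return p

pi = steady_state(P)  # approximate steady-state distribution
```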
Rule-Based Networks
A basic type of regulatory model occurs when the transition X(t) → X(t + 1) is governed by a rule-based structure, meaning there exists a state function f = (f[1], f[2],…, f[n]) such that X[i](t + 1)
= f[i](R[i](t)). A classical example of a rule-based network is a Boolean network (BN), where the values are binary, 0 or 1, and the function f[i] can be defined via a logic expression or a truth
table consisting of 2^n rows, with each row assigning a 0 or 1 as the value for the GAP defined by the row [4-5]. As defined, the BN is deterministic and the entries in its transition probability
matrix are either 0 or 1. The connectivity of the BN is the maximum number of predictors for a gene. If each has the same number of predictors, then we say that the network has uniform connectivity.
The model becomes stochastic if the BN is subject to perturbation, meaning that at any time point, instead of necessarily being governed by the state function f = (f[1], f[2],…, f[n]), there is a
positive probability p < 1 that the GAP may randomly switch to another GAP. There are more refined ways of characterizing perturbations, such as defining perturbations at the gene level rather than
the state level, but state-level perturbation is easy to describe and is sufficient for our purposes here. For a BN with perturbation, the corresponding Markov chain possesses a steady-state distribution.
The long-run behavior of a deterministic BN depends on the initial state: the network will eventually settle down and cycle endlessly through a set of states called an attractor cycle. The set of
all initial states that reach a particular attractor cycle forms the basin of attraction for the cycle. Attractor cycles are disjoint. With perturbation, in the long run the network may randomly
escape an attractor cycle, be reinitialized, and then begin its transition process anew.
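Attractor cycles of a small rule-based network can be found by exhaustive simulation of the noise-free dynamics. A minimal sketch for a hypothetical 2-gene Boolean network (the rules are invented; with perturbation one would additionally allow random state jumps with probability p):

```python
from itertools import product

# Hypothetical 2-gene Boolean network: X1' = X1 OR X2, X2' = X1 AND X2.
def f(state):
    x1, x2 = state
    return (x1 | x2, x1 & x2)

def attractor_cycles(f, n_genes):
    """Follow every initial state until a state repeats; the repeating
    tail is the attractor cycle of that state's basin."""
    cycles = set()
    for start in product((0, 1), repeat=n_genes):
        seen, s = [], start
        while s not in seen:
            seen.append(s)
            s = f(s)
        cycle = seen[seen.index(s):]       # states from the first repeat on
        i = cycle.index(min(cycle))        # rotate to a canonical form
        cycles.add(tuple(cycle[i:] + cycle[:i]))
    return cycles

cycles = attractor_cycles(f, 2)  # here: three singleton attractors
```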
Distance Functions
To discuss validity, we must first discuss the manner in which we are to compare two networks. Given networks H and M, we need a function, µ(M, H), quantifying the difference between them. We require
that µ be a semi-metric, meaning that it satisfies the following four properties:
1. $\mu(M,H) \geq 0$,
2. $\mu(M,M) = 0$,
3. $\mu(M,H) = \mu(H,M)$ (symmetry),
4. $\mu(M,H) \leq \mu(M,N) + \mu(N,H)$ (triangle inequality).
As a semi-metric, µ is called a distance function. If µ should satisfy a fifth condition,
5. $\mu(M,H) = 0 \Rightarrow M = H$,
then it is a metric. A distance function is often defined in terms of some characteristic, by which we mean some structure associated with a network, such as its regulatory graph, steady-state
distribution, or probability transition matrix. This is why we do not require the fifth condition, $\mu(M,H) = 0 \Rightarrow M = H$, for a network distance function.
If we want to approximate one network by another, say for reasons of computational complexity, then a distance function can be used to measure the goodness of the approximation. If M[1] and M[2] are
two approximations of network H, then M[1] is a better approximation than M[2] relative to $\mu$ if $\mu(M_1,H) < \mu(M_2,H)$.
Because a network distance function need only be a semi-metric, one must be careful in applying propositions from the theory of metric spaces. For instance, in a metric space, if a sequence of points
in the space is convergent, then the limit of the sequence is unique. When the points are networks, this is not necessarily true. A sequence of networks can converge to two distinct networks: {H[i]}
can converge to both M and N, with $M \neq N$.
Rule-Based Distance
For Boolean networks (with or without perturbation) possessing the same gene set, a distance is given by the proportion of incorrect rows in the function-defining truth tables. Denoting the state
functions for networks H and M by f = (f[1], f[2],…, f[n]) and g = (g[1], g[2],…, g[n]), respectively, since there are n truth tables consisting of 2^n rows each, this distance is given by

$\mu_{fun}(M,H) = \frac{1}{n2^n} \sum_{i=1}^{n} \sum_{j=1}^{2^n} I\left[f_i(x_j) \neq g_i(x_j)\right],$

where I denotes the indicator function, I[A] = 1 if A is a true statement and I[A] = 0 otherwise [6]. If we wish to give more weight to those states more likely to be observed in the steady state,
then we can weight the inner sums in Eq. 7 by the corresponding terms in the steady-state distribution, π = (π[1], π[2],…, π[N]). For Boolean networks without perturbation, µ[fun] is a metric. If
there is perturbation, then µ[fun] is not a metric because two distinct networks may be identical with regard to the rules but possess different perturbation probabilities.
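The rule-based distance can be computed directly by comparing truth tables row by row. A sketch with two hypothetical 2-gene Boolean networks that differ in one rule:

```python
from itertools import product

def f(state):            # rules of network H (hypothetical)
    x1, x2 = state
    return (x1 | x2, x1 & x2)

def g(state):            # rules of network M: second rule differs (XOR)
    x1, x2 = state
    return (x1 | x2, x1 ^ x2)

def mu_fun(f, g, n_genes):
    """Proportion of disagreeing truth-table rows over all n genes."""
    total = wrong = 0
    for state in product((0, 1), repeat=n_genes):
        for fi, gi in zip(f(state), g(state)):
            total += 1
            wrong += int(fi != gi)
    return wrong / total

d = mu_fun(f, g, 2)  # 3 of the 8 rows disagree
```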
Topology-Based Distance
If one’s focus is on the topology of a network, then a straightforward approach is to construct the adjacency matrix. Given an n-gene network, for i, j = 1, 2,…, n, the (i, j) entry in the matrix is
1 if there is a directed edge from the ith to the jth gene; otherwise, the (i, j) entry is 0. If A = (a[ij]) and B = (b[ij]) are the adjacency matrices for networks H and M, respectively, where H and
M possess the same gene set, then the hamming distance between the networks is defined by

$\mu_{ham}(M,H) = \sum_{i=1}^{n} \sum_{j=1}^{n} \left|a_{ij} - b_{ij}\right|.$
Alternatively, the hamming distance may be computed by normalizing the sum, such as by the number of genes or the number of edges in one of the networks, for instance, when one of the networks is
considered as representing ground truth. The hamming distance is a coarse measure since it contains no steady-state or dynamic information. Two networks can be very different and yet have $\mu_{ham}(M,H) = 0$.
If one of the networks in Eq. 8 is considered as ground truth, then the hamming distance can be reformulated in terms of the numbers of false-negative and false-positive edges. If H is the
ground-truth network, then a false-negative edge is a directed edge not in M that is in H and a false-positive edge is a directed edge in M that is not in H. Letting FN and FP be the numbers of
false-negative and false-positive edges, respectively, the hamming distance is given by FN + FP. Because we are considering directed graphs, an incorrectly oriented edge in M between two genes is
both a false-negative and false-positive edge, although one can slightly alter the definitions to avoid this kind of double counting. If we were to consider undirected graphs, then this anomaly would
not occur because an edge would either be present or absent. In this case, the hamming distance is still defined by Eq. 8 but the adjacency matrix is symmetric.
Since our interest is measuring the closeness of an inferred network to the network generating the data, we concentrate on distance functions, in particular, the hamming distance, which has been used
for this purpose [7, 8]. Non-distance measures related to the hamming distance have been used in the context of regulatory graphs. Again let H denote the ground-truth network. A true-positive edge is
a directed edge in both H and M, and a true-negative edge is a directed edge in neither H nor M (with analogous definitions holding for undirected graphs). Let TP and TN be the numbers of
true-positive and true-negative edges, respectively. The positive predictive value is defined by TP/(TP + FP), the sensitivity is defined by TP/(TP + FN), and the specificity is defined by TN/(TN + FP
). These kinds of measures have been used in several regulatory-graph inference papers [8-12] and a study using these measures has been performed to evaluate a number of inference procedures [13].
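The hamming distance and the edge-count measures follow directly from the adjacency matrices. A sketch with hypothetical 3-gene ground-truth and inferred graphs:

```python
import numpy as np

A = np.array([[0, 1, 0],   # ground truth H: edges 1->2 and 2->3
              [0, 0, 1],
              [0, 0, 0]])
B = np.array([[0, 1, 1],   # inferred M: edges 1->2 and 1->3
              [0, 0, 0],
              [0, 0, 0]])

def edge_counts(A, B):
    """TP, FP, FN, TN for inferred graph B against ground truth A."""
    tp = int(np.sum((A == 1) & (B == 1)))
    fp = int(np.sum((A == 0) & (B == 1)))
    fn = int(np.sum((A == 1) & (B == 0)))
    tn = int(np.sum((A == 0) & (B == 0)))
    return tp, fp, fn, tn

tp, fp, fn, tn = edge_counts(A, B)
hamming = fp + fn                 # number of differing matrix entries
ppv = tp / (tp + fp)              # positive predictive value
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```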
Transition-Probability-Based Distance
Distances for Markov networks can be defined via their probability transition matrices by considering matrix norms. A norm is a function $\|\cdot\|$ on a linear (vector) space, L, such that:
1. $\|v\| \geq 0$,
2. $\|v\| = 0 \Rightarrow v = 0$,
3. $\|av\| = |a| \cdot \|v\|$ (homogeneity),
4. $\|v + w\| \leq \|v\| + \|w\|$ (triangle inequality).
Given a norm on L, a metric is defined on L by $\|v - w\|$.
For an n × n matrix and r ≥ 1, the r-norm is defined by

$\|P\|_r = \left( \sum_{i=1}^{n} \sum_{j=1}^{n} |p_{ij}|^r \right)^{1/r}.$

The supremum norm is defined by

$\|P\|_\infty = \max_{i,j} |p_{ij}|.$
These norms are well-studied in linear algebra. Each yields a metric defined by $\|P - Q\|_r$. If $P = (p_{ij})$ and $Q = (q_{ij})$ are the probability transition matrices for networks H and M, respectively, then a
network distance function is defined by

$\mu_{prob}^{r}(M,H) = \|P - Q\|_r.$

Whereas $\|\cdot\|_r$ defines a matrix metric, $\mu_{prob}^{r}$ is only a network semi-metric because two distinct networks may have the same transition probability matrix.
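As a sketch, the transition-matrix distance can be computed with an entrywise r-norm (one common choice; the exact matrix norm is a modeling decision). The two matrices below are hypothetical:

```python
import numpy as np

P = np.array([[0.9, 0.1],    # transition matrix of network H
              [0.5, 0.5]])
Q = np.array([[0.8, 0.2],    # transition matrix of network M
              [0.5, 0.5]])

def mu_prob(P, Q, r):
    """Entrywise r-norm of P - Q; a semi-metric on networks, since
    distinct networks can share a transition matrix."""
    return float((np.abs(P - Q) ** r).sum() ** (1.0 / r))

def mu_prob_sup(P, Q):
    """Supremum-norm version: largest entrywise difference."""
    return float(np.abs(P - Q).max())

d1 = mu_prob(P, Q, 1)      # 1-norm distance: 0.1 + 0.1 = 0.2
dsup = mu_prob_sup(P, Q)   # supremum distance: 0.1
```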
Long-Run Distance
Since steady-state behavior is of particular interest, for instance, being associated with phenotypes, a natural choice for a network distance is to measure the difference between steady-state
distributions [14]. If π = (π[1], π[2],…, π[N]) is a probability vector, then its r-norm is defined by

$\|\pi\|_r = \left( \sum_{j=1}^{N} |\pi_j|^r \right)^{1/r}$

for r ≥ 1, and its supremum norm is defined by

$\|\pi\|_\infty = \max_{j} |\pi_j|.$

If π = (π[1], π[2],…, π[N]) and ω = (ω[1], ω[2],…, ω[N]) are the steady-state distributions for networks H and M, respectively, then a network distance is defined by

$\mu_{stead}^{r}(M,H) = \|\pi - \omega\|_r.$

Other norms can be used to define the distance function.
Not all networks possess steady-state distributions. The long-run behavior of a deterministic rule-based network, such as a Boolean network, depends on the initial state. A rule-based finite-value
network possesses attractor cycles that characterize its long-run behavior and we can consider comparing this long-run behavior. This can be done by considering the proportion of time spent in a
state once an attractor cycle has been entered. For any initial state x[k], the network eventually enters the attractor cycle, C[k], whose basin contains x[k]. An arbitrary state x[j] either lies in
C[k] or it does not. Let m[k] denote the number of states in C[k] and p[k] be the probability that the initial state is x[k]. We define the long-run probability of x[j] by

$\zeta_j = \sum_{k:\, x_j \in C_k} \frac{p_k}{m_k}.$

Letting ζ = (ζ[1], ζ[2],…, ζ[N]), we can proceed analogously to the steady-state case by replacing π by ζ to define the r-norm, and then define the distance function $\mu_{long}^{r}(M,H)$ in the usual way.
Suppose all attractor cycles are singletons, so that m[k] = 1. Moreover, suppose we do not know the initial-state probabilities and we set p[k] = 1/N. If x[k] is an attractor, let b[k] denote the
number of states in its basin; if x[k] is not an attractor, let b[k] = 0. Then Eq. 15 reduces to ζ[j] = b[j]/N. To this point, ζ = (ζ[1], ζ[2],…, ζ[N]) describes a probability density because its
components sum to 1. Now suppose we ignore the basin sizes so that ζ[j] = 1/N if x[j] is an attractor and ζ[j] = 0 otherwise. If ζ = (ζ[1], ζ[2],…, ζ[N]) and ξ = (ξ[1], ξ[2],…, ξ[N]) correspond to
networks H and M, respectively, then the network distance induced by the 1-norm is given by

$\mu_{att}(M,H) = \frac{\left|A_H \,\Delta\, A_M\right|}{N},$

where A[H] and A[M] are the attractor sets for H and M, respectively,

$A_H \,\Delta\, A_M = (A_H \setminus A_M) \cup (A_M \setminus A_H)$

is the symmetric difference of A[H] and A[M], and $|\cdot|$ denotes the number of elements in a set. The distance μ[att](M, H) compares the attractor sets of the two networks. μ[att](M, H) = 0 if and only if
the attractor sets are the same. We have derived μ[att](M, H) from μ[long](M, H) assuming only singleton attractors, but μ[att](M, H) can be applied to any rule-based discrete network.
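Under the uniform singleton-attractor model just described, the distance reduces to a normalized symmetric difference of attractor sets. A minimal sketch with hypothetical attractor sets on an 8-state space:

```python
N = 8                     # number of states, labelled 0..N-1
A_H = {0, 3, 5}           # singleton attractors of the ground-truth network
A_M = {0, 3, 6}           # singleton attractors of the inferred network

def mu_att(A_H, A_M, N):
    """Size of the symmetric difference of the attractor sets, over N."""
    return len(A_H ^ A_M) / N

d = mu_att(A_H, A_M, N)   # |{5, 6}| / 8 = 0.25
```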
Trajectory-Based Distance
Continuing with rule-based finite-value networks, rather than simply focusing on the long-run probabilities, one can take a more refined perspective by considering differences in the trajectories.
Continue to let m[k] denote the number of states in the cycle C[k] for initial state x[k] and let t[k] be the time it takes x[k] to reach C[k]. The time trajectory of the network is given by X(t) = (
X[1](t), X[2](t),…, X[n](t)). For a given initial state this trajectory is deterministic. For initial state x[k], denote the trajectory by

$x^k(t) = \left(x_1^k(t), x_2^k(t), \ldots, x_n^k(t)\right).$

Given the initial state is x[k], we define the amplitude cumulative distribution of gene X[i] by

$F_i(z \mid k) = \frac{1}{m_k} \sum_{t=t_k}^{t_k + m_k - 1} I\left[x_i^k(t) \leq z\right].$

This increasing function of z counts the fraction of time that $x_i^k(t) \leq z$ in the cycle C[k].
Given two attractor cycles, C[k] and C[j], resulting from initializations x[k] and x[j], respectively, we define a distance between the cycles relative to gene X[i] using the amplitude cumulative
distributions, $F_i(\cdot \mid k)$ and $F_i(\cdot \mid j)$, by

$\delta_i(C_k, C_j) = \left\| F_i(\cdot \mid k) - F_i(\cdot \mid j) \right\|$

for some function norm $\|\cdot\|$. For example, we could use the L[1] norm

$\delta_i(C_k, C_j) = \int \left| F_i(z \mid k) - F_i(z \mid j) \right| dz.$

The L[1] norm possesses an interesting interpretation if gene X[i] has constant amplitude values, a and b, on cycles C[k] and C[j], respectively. In this case, $F_i(\cdot \mid k)$ and $F_i(\cdot \mid j)$ are unit step
functions with steps at a and b, respectively. Hence, in this case the L[1] norm reduces to

$\delta_i(C_k, C_j) = |a - b|$
and gives the distance, in amplitude, between the values of gene X[i] on the two cycles. For a Boolean network, $\delta_i(C_k, C_j) = 0$ if the gene is either ON or OFF on both cycles and $\delta_i(C_k, C_j) = 1$ if X[i] is
ON for one cycle and OFF for the other (assuming X[i] is constant on both cycles).
Considering the full set of genes, we define a distance between two attractor cycles, C[k] and C[j], by

$\delta(C_k, C_j) = \sum_{i=1}^{n} \delta_i(C_k, C_j).$

Now consider two networks, M and H, having the same genes. We define the distance between M and H as the expected distance between attractor cycles over all possible initial states:

$\mu_{traj}(M,H) = \sum_{k=1}^{N} p_k \, \delta\left(C_k^M, C_k^H\right),$

where $C_k^M$ and $C_k^H$ are the attractor cycles corresponding to initialization by state x[k] in networks M and H, respectively [15].
Equivalence Classes of Networks
The previous examples of network distance functions demonstrate a common scenario: a network semi-metric is defined by a metric on some network characteristic, for instance, its regulatory graph, its
transition probability matrix, etc. The metric requirement, $\mu(M,H) = 0 \Rightarrow M = H$, fails because distinct networks possess the same characteristic. To formalize the situation, let λ[M] and λ[H] denote the
characteristic λ corresponding to networks M and H, respectively. If ν is a metric on a space of characteristics (directed graphs, matrices, probability densities, etc.), then a semi-metric μ[ν] is
induced on the network space according to

$\mu_\nu(M,H) = \nu(\lambda_M, \lambda_H).$

This is quite natural if our main interest is with the characteristic, not the specific network itself.
Focus on network characteristics leads to the identification of networks possessing the same characteristic. Given any set, U, a relation ~ between elements of U is called an equivalence relation if
it satisfies the following three properties for $a, b, c \in U$:
1. $a \sim a$ (reflexivity),
2. $a \sim b \Rightarrow b \sim a$ (symmetry),
3. $a \sim b$ and $b \sim c \Rightarrow a \sim c$ (transitivity).
If a ~ b, then a and b are said to be equivalent. An equivalence relation on U induces a partition of U. The subsets forming the partition are defined according to a and b lie in the same subset if
and only if a ~ b. The subsets are called equivalence classes. The equivalence class of elements equivalent to a is denoted by $[a]_\sim$. According to the definitions, $[a]_\sim = [b]_\sim$ if and only if a ~ b.
If ν is a semi-metric on a set U and we define a ~ b if and only if ν(a, b) = 0, then

$\mu([a]_\sim, [b]_\sim) = \nu(a, b)$

defines a metric on the space of equivalence classes because $\mu([a]_\sim, [b]_\sim) = 0 \Leftrightarrow \nu(a,b) = 0 \Leftrightarrow a \sim b \Leftrightarrow [a]_\sim = [b]_\sim$.
If we define M ~ H if λ[M] = λ[H], then this is a network equivalence relation. If we focus on equivalence classes of networks rather than the networks themselves, we are in effect identifying
equivalent networks. For instance, if we are only interested in steady-state distributions, then it may be advantageous to identify networks possessing the same steady-state distribution.
An inference procedure operates on data generated by a network H and constructs an inferred network M to serve as an estimate of H, or it constructs a characteristic to serve as an estimate of the
corresponding characteristic of H. For instance, the data may be used to infer a distribution that estimates the steady-state distribution of H. The data could be dynamical, consisting of time-course
observations, or it might be taken from the steady state, as with microarray measurements assumed to come from the steady state of some phenotypic class. In the latter case, it makes sense to
consider inference accuracy relative to the steady-state distribution of H, rather than H itself. For full network inference, the inference procedure is a mathematical operation, a mapping from a
space of samples to a space of networks, and it must be evaluated as such. There is a generated data set S and the inference procedure is of the form ψ(S) = M. If a characteristic is being estimated,
then ψ(S) is a characteristic, for instance, ψ(S) = F, a probability distribution.
Measuring Inference Performance Using Distance Functions
Focusing on full network inference, the goodness of an inference procedure ψ is measured relative to some distance, μ, specifically, $\mu(M,H) = \mu(\psi(S), H)$, which is a function of the sample S. In fact, S
is a realization of a random set process, Σ, governing data generation from H. In general, there is no assumption on the nature of Σ. It might be directly generated by H or it might result from
directly generated data corrupted by noise of some sort. $\mu(\psi(\Sigma), H)$ is a random variable and the performance of ψ is characterized by the distribution of $\mu(\psi(\Sigma), H)$, which depends on the distribution of
Σ. The salient statistic regarding the distribution of $\mu(\psi(\Sigma), H)$ is its mean, $E_\Sigma[\mu(\psi(\Sigma), H)]$, where the expectation is taken with respect to Σ.
Rather than considering a single network, we can consider a distribution, H, of random networks, where, by definition, the occurrences of realizations H of H are governed by a probability
distribution. This is precisely the situation with regard to the classical study of random Boolean networks. Averaging over the class of random networks, our interest focuses on

$E_{\mathbf{H}} E_\Sigma\left[\mu(\psi(\Sigma), H)\right].$

It is natural to define the inference procedure ψ[1] better than the inference procedure ψ[2] relative to the distance μ, the random network H, and the sampling procedure Σ if

$E_{\mathbf{H}} E_\Sigma\left[\mu(\psi_1(\Sigma), H)\right] < E_{\mathbf{H}} E_\Sigma\left[\mu(\psi_2(\Sigma), H)\right].$
Whether an inference procedure is “good” is not only relative to the distance function, it is relative to how one views the value of the expected distance. Indeed, it is not really possible to
determine an absolute notion of goodness.
In practice, the expectation is estimated by an average,

$\hat{E} = \frac{1}{m} \sum_{i=1}^{m} \mu(\psi(S_i), H_i),$

where S[1], S[2],..., S[m] are sample point sets generated according to Σ from networks H[1], H[2],…, H[m] randomly chosen from H.
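The averaging scheme can be sketched end to end. Everything below is a stand-in chosen only to show the structure: the "network" is a single Bernoulli parameter, the sampling process Σ draws n binary observations, the inference rule ψ is the sample mean, and μ is absolute error:

```python
import random

def random_network(rng):
    """Draw a 'network' H from the random class (here: one parameter)."""
    return rng.random()

def sample_data(theta, n, rng):
    """The sampling process Sigma: n Bernoulli(theta) observations."""
    return [1 if rng.random() < theta else 0 for _ in range(n)]

def infer(data):
    """A stand-in inference rule psi: estimate theta by the sample mean."""
    return sum(data) / len(data)

def mu(model, truth):
    """A stand-in distance between inferred and true 'networks'."""
    return abs(model - truth)

def expected_distance(m=200, n=50, seed=1):
    """Estimate E[mu(psi(Sigma), H)] by averaging over m draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        H = random_network(rng)        # H_i drawn from the network class
        S = sample_data(H, n, rng)     # S_i generated according to Sigma
        total += mu(infer(S), H)       # mu(psi(S_i), H_i)
    return total / m

avg = expected_distance()
```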
The preceding analysis applies virtually unchanged when a characteristic is being estimated. One need only replace H and H by λ and Λ, where λ and Λ are a characteristic and a random characteristic,
respectively, and replace the network distance μ by the characteristic distance.
We next present three examples using previously introduced distance functions to measure inference performance. Algorithm description will be sketchy in order to avoid long digressions from the issue
of distance illustration. We defer to the cited literature for details.
Example 1.
The Boolean network model has been in existence for a long time and various inference procedures have been proposed [16-18]. One proposed method for Boolean networks with perturbation is based on the
observation of a single dynamic realization of the network [6]. This method will be discussed in some detail in Section 5 in regard to consistent inference; for now, we are only concerned with the
distance between the inferred network and the original network generating the data, where the distance function is given by µ[fun](M, H) in Eq. 7. Fig. (1) shows the average (in percentage) of the
distance function using 80 data sequences generated from 16 randomly generated Boolean networks with 7 genes, perturbation probability p = 0.01, uniform connectivity k = 2 or k = 3, and data sequence
lengths varying from 500 to 40,000. The reduction in performance from connectivity 2 to connectivity 3 is not surprising because the number of truth-table lines jumps dramatically.
Fig. (1). Rule-based distance performance for Boolean-network inference for connectivity k = 2, 3.
Example 2.
There have been a number of papers addressing the inference of connectivity graphs using information-theoretic approaches [9, 10, 19]. In a study proposing using the minimum description length (MDL)
principle to infer regulatory graphs [8], the hamming distance was used to compare the performance of the newly proposed algorithm with an earlier information-theoretic algorithm, called REVEAL [9].
Fig. (2) compares the hamming distances between the inferred networks and the corresponding synthetic networks that generated the data relative to increasing sample size. It does so for the REVEAL
algorithm and the MDL algorithm using three different settings for a user-defined parameter. The performance measures are obtained by averaging over 30 randomly generated networks, each containing 20
genes and 30 edges, with the distance function being normalized over 30, the number of edges in the synthetic networks.
Hamming distance performance for inferring regulatory graphs using information theory.
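The Hamming distance between two regulatory graphs counts edge disagreements between their adjacency matrices, and the setup above normalizes by the edge count of the synthetic network. A sketch (the 0/1 matrix encoding is my assumption):

```python
def hamming_distance(A, B):
    """Number of disagreeing entries between two 0/1 adjacency matrices."""
    return sum(
        1
        for row_a, row_b in zip(A, B)
        for x, y in zip(row_a, row_b)
        if x != y
    )

# Tiny 3-gene example: synthetic graph A, inferred graph B.
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]   # 3 edges
B = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]   # misses edge 3->1, adds a spurious edge 1->3
raw = hamming_distance(A, B)
normalized = raw / 3   # normalize by the number of edges in A
```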
Example 3.
A probabilistic Boolean network (PBN) is a network defined as a collection of discrete-valued networks such that at any point in time one of the constituent networks is controlling the network
dynamics [20]. In a context-sensitive PBN there is a binary random variable determining whether there should be a switch of constituent networks at that time point, the modeling assumption being that
there are latent variables outside the model network whose changes induce stochasticity into the PBN [21]. Typically, there is also a probability of perturbation. This example considers a Bayesian
connectivity-based inference procedure for designing PBNs from steady-state data [22]. A synthetic PBN, H, composed of two constituent Boolean networks is used to generate a random sample of size 60
from its steady-state distribution and the inference procedure is used to construct a designed PBN composed of ten constituent BNs (note that the inference procedure does not have input relating to
the number of constituent BNs of the generating network). According to definition, the attractors of a PBN are the attractors of its constituent BNs. H has six singleton attractors, two of which,
call them x[a] and x[b], contain 0.99 of the steady-state mass. The designed PBN has more attractors, which is not uncommon, but x[a] and x[b] appear in all ten constituent networks as singleton
attractors and they contain 0.78 of the steady-state mass. Since, for PBNs with low probability of network switching almost all of the steady-state mass lies in its attractors [21],
$\mu_{\mathrm{stead}}^{1}(\psi(S), H) \leq 0.21$ (or approximately so), the maximum 1-norm being 2.
The greater the amount of data, the better inference one can expect. The hope is that, for large data sets, the inferred network will be close to the generating network. We define an inference
procedure, ψ, to be consistent if $\mu^{*}(\mathbf{H}, \Sigma, \psi) \to 0$ as $\Sigma \to \infty$. We illustrate consistency using Boolean networks with perturbation. We use the inference procedure referred to in Example 1 that applies to a
single observed time series [6] and the distance function µ[fun] of Eq. 7.
Owing to perturbation, the network has a steady-state distribution and all states communicate with each other. Hence, given a long time series we are likely to observe most of the states and their
corresponding state-to-state transitions x[k] → x[(k)], for k = 1, 2,…, N, where x[(k)] denotes the next state following x[k] under the network state function. If we ignore perturbation, then using
the observed state-to-state transitions we can construct a table of state-to-gene transitions of the form x[k] → x[i], for k = 1, 2,…, N and i = 1, 2,…, n. These define the functions f[1], f[2],…, f
[n] accordingly. Because the truth table for function f[i] has 2^n rows of the form f[i](x[k]), some rows may be empty owing to insufficient observations and these rows can be filled in randomly. As
the length of the time series increases, the probability of not observing the state x[k] goes to 0. Indeed, for any positive integer c, if we let η(x[k]) denote the number of times x[k] is observed
in the time series, then $P(\eta(x_k) \geq c) \to 1$ as $\Sigma \to \infty$, where the probability is with respect to the time series Σ.
With perturbation, the state-to-state transitions do not directly define functions because state x[k] may transition to more than one state. However, assuming a perturbation probability less than
0.5, the transitions from x[k] will be dominated by the single transition determined by the state function f, and this dominating choice can be used for inference. Letting η[j](x[k]) denote the number
of times we observe the transition $x_k \to x_j$, if $f(x_k) = x_{(k)}$ is the function-defined transition, then
$$P\left(\eta_{(k)}(x_k) > \max_{j \neq (k)} \eta_{j}(x_k)\right) \to 1 \text{ as } \Sigma \to \infty.$$
Thus, if $\hat{f}$ denotes the inferred state function, then $P(\hat{f} = f) \to 1$ as $\Sigma \to \infty$. Similar asymptotic statements hold for f[1], f[2],…, f[n]. This ensures that, for any $\tau > 0$, $P(\mu_{\mathrm{fun}}(\psi(\Sigma), H) < \tau) \to 1$ as $\Sigma \to \infty$ for any
Boolean network H. Since µ[fun](ψ(Σ), H) ≤ 1, this is equivalent to $E_{\Sigma}[\mu_{\mathrm{fun}}(\psi(\Sigma), H)] \to 0$ as $\Sigma \to \infty$. Finally, if $\mathbf{H}$ is the class of all Boolean networks on n genes with perturbation probability p, then, since $\mathbf{H}$
is a finite set,
$$\mu_{\mathrm{fun}}^{*}(\mathbf{H}, \Sigma, \psi) = E_{\mathbf{H}}\left[E_{\Sigma}\left[\mu_{\mathrm{fun}}(\psi(\Sigma), H)\right]\right] \to 0 \text{ as } \Sigma \to \infty,$$
and the inference procedure is consistent relative to µ[fun].
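The consistency argument can be illustrated in simulation. The sketch below is my own construction (gene count, perturbation rate, and series lengths are arbitrary choices, not values from the paper): it generates a time series from a random Boolean network with perturbation, infers each state's successor by majority vote among observed transitions, and reports the truth-table error:

```python
import random

def simulate_and_infer(n=3, p=0.05, length=20000, seed=1):
    """Simulate a random n-gene Boolean network with perturbation
    probability p, then infer its state function by majority vote.

    Returns the fraction of the 2^n states on which the inferred
    next-state map disagrees with the true one."""
    rng = random.Random(seed)
    N = 2 ** n
    # Random network state function f: state index -> next-state index.
    f = [rng.randrange(N) for _ in range(N)]
    # Generate the time series: with probability p a random gene flips
    # (perturbation), otherwise the state function is applied.
    counts = [[0] * N for _ in range(N)]   # counts[s][t] = # of s -> t
    s = rng.randrange(N)
    for _ in range(length):
        if rng.random() < p:
            t = s ^ (1 << rng.randrange(n))   # flip one random gene
        else:
            t = f[s]
        counts[s][t] += 1
        s = t
    # Majority-vote inference; unobserved states default arbitrarily to 0.
    f_hat = [max(range(N), key=lambda t: counts[s][t]) for s in range(N)]
    return sum(1 for s in range(N) if f_hat[s] != f[s]) / N

err_short = simulate_and_infer(length=200)
err_long = simulate_and_infer(length=20000)
```

With the long series, every state is typically observed many times, so the majority transition is almost surely the function-defined one, mirroring the asymptotic statement above.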
The preceding argument assumes that the perturbation probability is known. A modification of the inference procedure yields an estimator for p [6]; however, if p is also being estimated, then the
model space H is no longer finite and the consistency proof has to be modified. We do not believe this is the proper place to go into such mathematical issues.
Inference performance is evaluated based on the ability of an inference procedure to identify the network from which the data have been derived. This can only be done exactly if the data-generating
network is known. Suppose we do not know the random network, H, generating the data for which we want to evaluate the inference procedure, ψ, but know a network N that we believe to be a good
approximation to the networks in H. We might then compare the inferred network to N. In effect, such a comparison is approximating $\mu^{*}(\mathbf{H}, \Sigma, \psi)$ by $E_{\Sigma}[\mu(\psi(\Sigma), N)]$.
The key issue is approximation accuracy. The triangle inequality implies
$$\mu(\psi(S), N) - \mu(N, H) \leq \mu(\psi(S), H) \leq \mu(\psi(S), N) + \mu(N, H)$$
for any sample set S and $H \in \mathbf{H}$. Hence,
$$E_{\Sigma}\left[\mu(\psi(\Sigma), N)\right] - E_{\mathbf{H}}\left[\mu(N, H)\right] \leq E_{\mathbf{H}}\left[E_{\Sigma}\left[\mu(\psi(\Sigma), H)\right]\right] \leq E_{\Sigma}\left[\mu(\psi(\Sigma), N)\right] + E_{\mathbf{H}}\left[\mu(N, H)\right].$$
If E[H][µ(N, H)] ≈ 0, meaning that E[H][µ(N, H)] is small, then the preceding inequality leads to the approximate inequality
$$\left| E_{\mathbf{H}}\left[E_{\Sigma}\left[\mu(\psi(\Sigma), H)\right]\right] - E_{\Sigma}\left[\mu(\psi(\Sigma), N)\right] \right| \lessapprox E_{\mathbf{H}}\left[\mu(N, H)\right].$$
Thus, if E[H][µ(N, H)] ≈ 0, then
$$E_{\mathbf{H}}\left[E_{\Sigma}\left[\mu(\psi(\Sigma), H)\right]\right] \approx E_{\Sigma}\left[\mu(\psi(\Sigma), N)\right],$$
and it is reasonable to judge the performance of ψ relative to $\mathbf{H}$ by $E_{\Sigma}[\mu(\psi(\Sigma), N)]$. On the other hand, if E[H][µ(N, H)] is not small, then both bounds in Eq. 33 are loose and nothing can be asserted
regarding the performance of ψ relative to the data sets on which it is being applied. Therefore, unless E[H][µ(N, H)] is small, the entire validation procedure is flawed because the approximation of
H by N is confounding the procedure. In addition, if $E_{\mathbf{H}}[\mu(N, H)] \approx 0$, one still has to estimate $E_{\Sigma}[\mu(\psi(\Sigma), N)]$, which generally means that the number of sample sets is sufficiently large that the expectation
is well-estimated by the average distance.
The preceding approximation methodology is common in the literature. A proposed inference procedure is applied to one or more real data sets. The inferred network is compared, not to the unknown
random network generating the data, but to a model network that has been human-constructed from the literature (and implicitly assumed to approximate the data-generating network). For instance, a
directed graph (adjacency matrix), A, is constructed from relations found in the literature and the Hamming distance is used in the approximating expectation, $E_{\Sigma}[\mu_{\mathrm{ham}}(\psi(\Sigma), A)]$, in Eq. 35. The aim is to
compare the result of the inference procedure to some characteristic related to existing biological knowledge. The problem is that the constructed regulatory graph may not be a good approximation to
the regulatory graph for the system generating the data. This can happen because the literature is incomplete, there are insufficiently validated connections reported in the literature, or the
conditions under which connections have been discovered, or not discovered, in certain papers are not compatible with the conditions under which the current data have been derived. As a result of any
of these situations, the overall validation procedure is confounded by the precision (or lack thereof) of the approximation.
Another form of approximation results from using experimental data for validation rather than synthetic data generated from a known, ground-truth model. In this situation, there is a test-data
sampling procedure generating data from which an estimate of the desired characteristic corresponding to the underlying physical network is formed. Validation is then via the random variable
$\mu(\psi(\Sigma), \xi(\Omega))$, where Σ is the training-data sampling procedure used to design the network and Ω is a real-data test-sampling procedure to validate the designed network by direct construction of the
characteristic via independent sampling. To simplify the notation we consider a single underlying network H rather than a random network $\mathbf{H}$. In this situation, $E_{\mathbf{H}}[\mu(N, H)]$ in Eq. 33 is replaced by
$\mu(\xi(\Omega), \lambda_{H})$, where λ[H] is the characteristic for H, and Eq. 33 takes the form
$$\left| E\left[\mu(\psi(\Sigma), \xi(\Omega))\right] - E_{\Sigma}\left[\mu(\psi(\Sigma), \lambda_{H})\right] \right| \leq E_{\Omega}\left[\mu(\xi(\Omega), \lambda_{H})\right].$$
If $E_{\Omega}[\mu(\xi(\Omega), \lambda_{H})] \approx 0$, then
$$E\left[\mu(\psi(\Sigma), \xi(\Omega))\right] \approx E_{\Sigma}\left[\mu(\psi(\Sigma), \lambda_{H})\right].$$
If ξ is a consistent estimator of λ[H], so that $E_{\Omega}[\mu(\xi(\Omega), \lambda_{H})] \approx 0$ for large test samples, then, on average, the approximation is good.
Consider what happens if one only has data to estimate (train) the model, which may happen when data are limited on account of cost or the availability of samples. In this case, one tests on the same
data, thereby having Ω = Σ in Eq. 36 and the resubstitution estimate, $E_{\Sigma}[\mu(\psi(\Sigma), \xi(\Sigma))]$, in Eq. 37. If ξ is a consistent estimator of λ[H] and the single training sample is large, then the conclusion of
Eq. 37 again holds. But we do not have a large sample. Hence, Eq. 36 cannot be used to ensure good average performance. But it also cannot be used to ensure good performance when there is a small
independent test-data sample. In the independent case, we are concerned with the absolute difference
$$\Delta_{\mathrm{test}} = \left| E\left[\mu(\psi(\Sigma), \xi(\Omega))\right] - E_{\Sigma}\left[\mu(\psi(\Sigma), \lambda_{H})\right] \right|.$$
When the same data are used for training and testing, our interest is with
$$\Delta_{\mathrm{train}} = \left| E_{\Sigma}\left[\mu(\psi(\Sigma), \xi(\Sigma))\right] - E_{\Sigma}\left[\mu(\psi(\Sigma), \lambda_{H})\right] \right|.$$
As with classification, where resubstitution error estimation is usually biased low owing to overfitting by the classification rule, in the case of network validation, resubstitution is risky because
the characteristic of the designed network is being compared to a characteristic inferred from the same data with which the network has been designed. According to Eq. 36, as in the case of
classification, this is not a problem for large samples, but it can be a serious problem for small samples because overfitting can cause Δ[train] to be much less than Δ[test]. Whereas substantial
effort has gone into studying these kinds of problems in pattern recognition, there appears to be an absence of the analogous study for network validation.
Example 4.
An attractor-preserving inference method for PBNs based on steady-state data has been proposed and applied to PBNs [23]. A PBN has been designed from cDNA microarray data using 7 genes: WNT5A, pirin,
S100P, RET1, MART1, HADHB, and STC2. The steady-state distribution of the designed network has been compared to the histogram of the data, the histogram serving as an estimate of the steady-state
distribution of the underlying physical network. Fig. (3) illustrates the comparison of the portion of the steady-state distribution corresponding to the data states with the data histogram.
Referring to Eq. 14, the 1-norm and 2-norm yield the resubstitution error distances $\mu_{\mathrm{stead}}^{1}(\psi(S), \xi(S)) = 0.45$ (out of a maximum of 2) and $\mu_{\mathrm{stead}}^{2}(\psi(S), \xi(S)) = 0.1262$, respectively, the latter being the
root-mean-square error.
Comparison of steady state distribution for a designed network and data histogram.
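The two steady-state distances in Example 4 can be written down directly. The sketch below is my own illustration with made-up distributions (Eq. 14 itself is not reproduced in this excerpt; I am reading µ[stead]^1 as a 1-norm and µ[stead]^2 as a root-mean-square error, as the text describes):

```python
import math

def stead_1(p, q):
    """1-norm distance between two distributions on a common state list."""
    return sum(abs(a - b) for a, b in zip(p, q))

def stead_2(p, q):
    """Root-mean-square distance between two distributions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)) / len(p))

p = [0.5, 0.3, 0.2]   # e.g. steady-state of a designed network
q = [0.4, 0.4, 0.2]   # e.g. data histogram
d1 = stead_1(p, q)    # 0.1 + 0.1 + 0.0 = 0.2
d2 = stead_2(p, q)    # sqrt(0.02 / 3)
```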
This paper has proposed a mathematically rigorous framework for the validation of inference procedures for gene regulatory networks and has illustrated this framework employing validation methods
used in the literature. Owing to the central role of regulatory networks in systems biology and the need to apply inference procedures to the massive data sets resulting from high-throughput
technologies, validation cannot be left to ad hoc methods whose own performances are not understood. A formal framework is necessary. As should be clear from the paper, a great deal of work needs to
be done to establish the properties of inference procedures under various conditions, such as the sampling procedure, model class, and validation criterion (distance function). Absent rigorous
results in this regard, proposed inference procedures will remain speculative and the quality of their performances unknown. A sound epistemology will be lacking.
I would like to acknowledge the National Science Foundation (CCF-0514644) and the National Cancer Institute (2R25CA090301-06) for supporting this work. I also wish to thank Wentao Zhao for assisting
with the literature search and Barak Faryabi for proof reading the manuscript.
1. de Jong H. Modeling and simulation of genetic regulatory systems: a literature review. Computat. Biol. 2002;9(1):67–103.
2. Shmulevich I, Dougherty ER. Genomic Signal Processing. Princeton: Princeton University Press; 2007.
3. Datta A, Pal R, Dougherty ER. Intervention in probabilistic gene regulatory networks. Curr. Bioinformat. 2006;1(2):167–184.
4. Kauffman SA. Metabolic stability and epigenesis in randomly constructed genetic nets. Theoret. Biol. 1969;22:437–467.
5. Kauffman SA. The Origins of Order. New York: Oxford University Press; 1993.
6. Marshall S, Yu L, Xiao Y, Dougherty ER. Inference of probabilistic Boolean networks from a single observed temporal sequence. EURASIP J. Bioinformat. Sys. Biol. 2007: Article ID 32454, 15 pages.
7. Bernard A, Hartemink A. Informative structure priors: joint learning of dynamic regulatory networks from multiple types of data. Pacific Sympo. Biocomput. 2005;10:459–470.
8. Zhao W, Serpedin E, Dougherty ER. Inferring gene regulatory networks from time series data using the minimum description length principle. Bioinformatics. 2006;22(17):2129–2135.
9. Liang S, Fuhrman S, Somogyi R. REVEAL, a general reverse engineering algorithm for inference of genetic network architectures. Pacific Sympo. Biocomput. 1998;3:18–29.
10. Margolin A, Nemenman I, Basso K, Klein U, Wiggins C, Stolovitzky G, Favera RD, Califano A. ARACNE: an algorithm for reconstruction of genetic networks in a mammalian cellular context. BMC Bioinformat. 2006;7(Suppl. 1):S7.
11. Chen X, Anantha G, Wang X. An effective structure learning method for constructing gene networks. Bioinformatics. 2006;22(11):1367–1374.
12. Zou M, Conzen SD. A new dynamic Bayesian network (DBN) approach for identifying gene regulatory networks from time course microarray data. Bioinformatics. 2005;21(1):71–79.
13. Bansal M, Belcastro V, Ambesi-Implombato A, di Bernardo D. How to infer gene networks from expression profiles. Mol. Syst. Biol. 2006;3:78.
14. Kim S, Li H, Dougherty ER, Chao N, Chen Y, Bittner ML, Suh EB. Can Markov chain models mimic biological regulation? Biol. Syst. 2002;10(4):447–458.
15. Brun M, Kim S, Choi W, Dougherty ER. Comparison of network models via steady state trajectories. EURASIP J. Bioinformat. Syst. Biol. 2007: Article ID 82702, 11 pages.
16. Akutsu T, Miyano S, Kuhara S. Identification of genetic networks from a small number of gene expression patterns under the Boolean model. Pacific Sympo. Biocomput. 1999;4:17–28.
17. Lahdesmaki H, Shmulevich I, Yli-Harja O. On learning gene regulatory networks under the Boolean network model. Machine Learn. 2003;52:147–167.
18. Shmulevich I, Saarinen A, Yli-Harja O, Astola J. Inference of genetic regulatory networks via best-fit extensions. In: Zhang W, Shmulevich I, editors. Computat. Statis. Approaches Genom. Boston: Kluwer Academic Publishers; 2002. pp. 197–210.
19. Nemenman I. Information theory, multivariate dependence, and genetic network inference. Technical Report NSF-KITP-4-54, KITP, UCSB. 2004.
20. Shmulevich I, Dougherty ER, Kim S, Zhang W. Probabilistic Boolean networks: a rule-based uncertainty model for gene regulatory networks. Bioinformatics. 2002;18:261–274.
21. Brun M, Dougherty ER, Shmulevich I. Steady-state probabilities for attractors in probabilistic Boolean networks. EURASIP J. Signal Process. 2005;85(10):1993–2013.
22. Zhou X, Wang X, Pal R, Ivanov I, Bittner ML, Dougherty ER. A Bayesian connectivity-based approach to constructing probabilistic gene regulatory networks. Bioinformatics. 2004;20(17):2918–2927.
23. Pal R, Ivanov I, Datta A, Dougherty ER. Generating Boolean networks with a prescribed attractor structure. Bioinformatics. 2005;21(21):4021–4025.
Articles from Current Genomics are provided here courtesy of Bentham Science Publishers
MATLAB Help-Sort with Carry
April 23rd 2010, 02:52 AM #1
Problem: Write a function to sort one real array into ascending order while carrying along a second one. Test the function with the following two 9-element arrays:
This problem is asking that you sort array a into ascending order, while simultaneously carrying a along a second array b. In such a sort, each time an element of array a is exchanged with
another element of array a, the corresponding elements of array b are also swapped. When the sort is over, the elements of array a are in ascending order, while the elements of array b that were
associated with particular elements of array a are still associated with them.
This is what I have so far:
nvals = input('Enter number of values to sort for array1: ');
for ii = 1:nvals
    string = ['Enter value ' int2str(ii) ': '];
    array1(ii) = input(string);   % read each value into array1
end
sorted = sort(array1);
fprintf('\nSorted data for array1:\n');
for ii = 1:nvals
    fprintf(' %8.4f\n', sorted(ii));
end
This asks the user to input an amount of number values and from those values it sorts the array into ascending order. I don't know any command that can bind the elements in each of these two
arrays and rearrange array b according to the rearranged array a.
If someone could please help, I would be very grateful.
[AA, Idx]=sort(a);
>> a=[1,11,-6,17,-23,0,5,1,-1]
a =
1 11 -6 17 -23 0 5 1 -1
>> b=[31,101,36,-17,0,10,-8,-1,-1]
b =
31 101 36 -17 0 10 -8 -1 -1
>> [AA,Idx]=sort(a)
AA =
-23 -6 -1 0 1 1 5 11 17
Idx =
5 3 9 6 1 8 7 2 4
>> BB=b(Idx)
BB =
0 36 -1 10 31 -1 -8 101 -17
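The same index-carry idiom works outside MATLAB too. As a sketch (my own translation, not part of the original answer), here is a Python version tested on the thread's arrays; the index list plays the role of MATLAB's Idx:

```python
def sort_with_carry(a, b):
    """Sort a into ascending order and carry b along.

    Returns (AA, BB) where AA is sorted(a) and BB contains the
    elements of b reordered by the same permutation, mirroring
    MATLAB's  [AA, Idx] = sort(a);  BB = b(Idx);
    """
    # Stable argsort: indices of a in ascending order of value.
    idx = sorted(range(len(a)), key=lambda i: a[i])
    AA = [a[i] for i in idx]
    BB = [b[i] for i in idx]
    return AA, BB

a = [1, 11, -6, 17, -23, 0, 5, 1, -1]
b = [31, 101, 36, -17, 0, 10, -8, -1, -1]
AA, BB = sort_with_carry(a, b)
```

Both `sorted` here and MATLAB's `sort` are stable, so tied elements of a keep their relative order and the carried values agree.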
April 23rd 2010, 03:20 AM #2
Grand Panjandrum
Computational Tool for Cyber-Physical Systems
Humberto Gonzalez
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2012-196
September 13, 2012
Cyber–Physical systems, which is the class of dynamical systems where physical and computational components interact in a tight coordination, are found in many applications, from large–scale
distributed systems, such as the electric power grid, to micro–robotic platforms based on legged locomotion, among many others. Due to their mixed nature between physical and computational
components, Cyber–Physical systems are well modeled using hybrid dynamical models, which incorporate both continuous and discrete valued state variables. Also, thanks to the flexibility and great
variety of optimal control formulations, it is natural to apply optimal control algorithms to solve complex problems in the context of Cyber–Physical systems, such as the verification of a given
specification, or the robust identification of parameters under state constraints. This thesis presents three new computational tools that bring the strength of hybrid dynamical models and optimal
control to applications in Cyber-Physical systems. The first tool is an algorithm that finds the optimal control of a switched hybrid dynamical system under state constraints, the second tool is an
algorithm that approximates the trajectories of autonomous hybrid dynamical systems, and the third tool is an algorithm that computes the optimal control of a nonlinear dynamical system using
pseudospectral approximations. These results achieve several goals. They extend widely used algorithms to new classes of dynamical systems. They also present novel mathematical techniques that can be
applied to develop new, computationally efficient, tools in the context of hybrid dynamical systems. More importantly, they enable the use of control theory in new exciting applications, that because
of their number of variables or complexity of their models, cannot be addressed using existing tools.
Advisor: S. Shankar Sastry
BibTeX citation:
Author = {Gonzalez, Humberto},
Title = {Computational Tool for Cyber-Physical Systems},
School = {EECS Department, University of California, Berkeley},
Year = {2012},
Month = {Sep},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-196.html},
Number = {UCB/EECS-2012-196},
Abstract = {Cyber–Physical systems, which is the class of dynamical systems where physical and computational components interact in a tight coordination, are found in many applications, from large–scale distributed systems, such as the electric power grid, to micro–robotic platforms based on legged locomotion, among many others. Due to their mixed nature between physical and computational components, Cyber–Physical systems are well modeled using hybrid dynamical models, which incorporate both continuous and discrete valued state variables. Also, thanks to the flexibility and great variety of optimal control formulations, it is natural to apply optimal control algorithms to solve complex problems in the context of Cyber–Physical systems, such as the verification of a given specification, or the robust identification of parameters under state constraints.
This thesis presents three new computational tools that bring the strength of hybrid dynamical models and optimal control to applications in Cyber-Physical systems. The first tool is an algorithm that finds the optimal control of a switched hybrid dynamical system under state constraints, the second tool is an algorithm that approximates the trajectories of autonomous hybrid dynamical systems, and the third tool is an algorithm that computes the optimal control of a nonlinear dynamical system using pseudospectral approximations.
These results achieve several goals. They extend widely used algorithms to new classes of dynamical systems. They also present novel mathematical techniques that can be applied to develop new, computationally efficient, tools in the context of hybrid dynamical systems. More importantly, they enable the use of control theory in new exciting applications, that because of their number of variables or complexity of their models, cannot be addressed using existing tools.}
EndNote citation:
%0 Thesis
%A Gonzalez, Humberto
%T Computational Tool for Cyber-Physical Systems
%I EECS Department, University of California, Berkeley
%D 2012
%8 September 13
%@ UCB/EECS-2012-196
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-196.html
%F Gonzalez:EECS-2012-196
Intermediate Algebra
ISBN: 9780534419233 | 0534419232
Edition: 3rd
Format: Hardcover
Publisher: Cengage Learning
Pub. Date: 12/8/2004
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
Q. `lim_(x->oo) (sqrt(x+1) - sqrt(x))` - Homework Help - eNotes.com
If you have a limit of the form `lim (f-g) = oo - oo`, you can rewrite it as the product `lim f[1 - g/f]`; the inner limit `lim g/f` can then be evaluated, using L'Hospital's rule if needed (`lim (f/g) = lim (f'/g')`).
`lim_(x->oo)(sqrt(x+1)-sqrt(x))=lim_(x->oo)sqrt(x+1)(1-(sqrtx)/(sqrt(x+1)))`
Now we solve `lim_(x->oo)(sqrtx)/(sqrt(x+1))=1` (we get this by dividing both numerator and denominator by `sqrtx`).
Now our limit has the form `oo cdot 0`, so we rewrite it again as a quotient:
`lim_(x->oo)(1-(sqrtx)/(sqrt(x+1)))/(1/sqrt(x+1))`
Now we have a limit of the form `0/0`, so we can use L'Hospital's rule.
`lim_(x->oo)(sqrt[x]/(2 (1 + x)^(3/2)) - 1/(2 sqrt[x] sqrt[1 + x]))/(-(1/(2 (1 + x)^(3/2))))=lim_(x->oo)1/sqrtx=0`
Hence, your result is: `lim_(x->oo)(sqrt(x+1)-sqrt(x))=0`
Let us write y = 1/x, so that x -> oo corresponds to y -> 0+. Then
`sqrt(x+1) - sqrt(x) = (sqrt(y+1) - 1)/sqrt(y)`
and, multiplying numerator and denominator by the conjugate,
`lim_(y->0){(sqrt(y+1)-1)(sqrt(y+1)+1)}/{sqrt(y)(sqrt(y+1)+1)} = lim_(y->0) sqrt(y)/(sqrt(y+1)+1) = 0`
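As a quick numerical sanity check (my own addition, not part of either answer), one can compare `sqrt(x+1) - sqrt(x)` with the algebraically equivalent conjugate form `1/(sqrt(x+1) + sqrt(x))`, which makes the decay to 0 explicit:

```python
import math

def direct(x):
    # Naive evaluation; suffers cancellation for very large x.
    return math.sqrt(x + 1) - math.sqrt(x)

def conjugate(x):
    # Equivalent form obtained by multiplying by the conjugate.
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

# The difference shrinks toward 0 as x grows.
values = [conjugate(10.0 ** k) for k in range(1, 7)]
```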
Exponent Math Puzzle
Date: 07/09/2006 at 03:23:35
From: Gaurav
Subject: Mathematics Puzzle - with Power and Remainders
What would be the remainder when 3^4^5^6^7 ... so on til infinity is
divided by 17? I'm not sure how to even start the question. Will
power cycles and the remainder theorm work in this?
Date: 07/09/2006 at 05:01:23
From: Doctor Ricky
Subject: Re: Mathematics Puzzle - with Power and Remainders
Hey Gaurav,
Thanks for writing Dr. Math!
The easiest way to attack this problem is to understand what we are
dealing with. We have the number: 3^(4^(5^(6^(7^...))))
Obviously, to find out what the remainder is when divided by 17, we
need to know what the exponent will be. For us to find THAT out, we
need to find out what the exponent of THAT will be. We can actually
stop there because we can make some simple deductions that will
provide us with our answer.
First, let us figure out the exponent of the exponent,
i.e. 5^6^7^...
We notice something immediately: 5^n for any number 'n' ends in a
5. That means that 5^n must be odd for any number 'n'.
Now let's deal with the original exponent: 4^(5^6^7^...)
We notice that since 5^n is odd and a power of 5, we can write this
5^n = 2k+1, for some integer k>=0.
This means we are looking at: 4^(2k+1)
Let's look at some examples to see what this means:
4^1 = 4
4^3 = 64
4^5 = 1024
We see that 4 to an odd power ends in a 4 and will obviously be
divisible by 4, which is even. Numbers like this are 4, 64, etc...
Now we use this to look at our original question: the remainder of 3^(4^(2k+1)) when divided by 17.
Hopefully this has helped you enough that you can solve this
problem, but if you have any more questions, please let me know!
- Doctor Ricky, The Math Forum
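The hint can be checked with modular exponentiation. The sketch below is my addition, not part of the original exchange (note it gives away the final remainder): it truncates the tower at a few depths, and the remainder stabilizes because 4^m is divisible by 16 for every m >= 2, while 3 has multiplicative order 16 mod 17.

```python
# Truncate the tower 3^(4^(5^(6^...))) at increasing depths and reduce
# mod 17.  Python's pow(base, exp, mod) handles even the ~9400-digit
# exponent 4**(5**6) without trouble.
r1 = pow(3, 4, 17)               # 3^4 = 81 -> remainder 13
r2 = pow(3, 4 ** 5, 17)          # exponent 1024 is divisible by 16
r3 = pow(3, 4 ** (5 ** 6), 17)   # one more level of the tower
```

Once the exponent on 4 is at least 2, the outer exponent is divisible by 16 and the remainder no longer changes as the tower grows.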
Prime Number Races in 2 Dimensions
Is the mapping $$f: \ \mathbb{N} \rightarrow \mathbb{Z}[i], \ \ \ n \ \mapsto \sum_{2 < p \leq n \ {\rm prime}} e^{\frac{p-1}{4} \pi i}$$ surjective?
In 1999, when I was an undergraduate student, I thought about writing the thesis for my first degree on this problem. I asked Jörg Brüdern about this, and what he said was essentially that I could do
this and could probably obtain some nice partial results, but that an answer would most likely be out of reach. I decided then rather to specialize in group theory.
Is it nowadays possible to say more on this question?
Plots of the images of the intervals $\{1, \dots, \lfloor e^k \rfloor\}$ for $k \in \{11, \dots, 26\}$ scaled to the same size, and larger plots of the images of the intervals $\{1, \dots, 10^k\}$ for $k \in \{7, 8, 9\}$, accompanied the original question (figures not reproduced here).
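For small $n$ the walk is easy to reproduce directly: for an odd prime $p$ the summand $e^{\frac{p-1}{4}\pi i}$ depends only on $p \bmod 8$ ($1 \mapsto 1$, $3 \mapsto i$, $5 \mapsto -1$, $7 \mapsto -i$), so $f(n)$ is a pair of prime-race counts, exactly as the answer below exploits. A sketch (my own illustration, not from the original post):

```python
def f(n):
    """Partial sum of e^{(p-1)/4 * pi * i} over odd primes p <= n.

    For an odd prime p the term depends only on p mod 8:
        1 -> +1,  3 -> +i,  5 -> -1,  7 -> -i.
    """
    step = {1: 1 + 0j, 3: 1j, 5: -1 + 0j, 7: -1j}
    # Sieve of Eratosthenes up to n.
    is_prime = [False, False] + [True] * (n - 1)
    for q in range(2, int(n ** 0.5) + 1):
        if is_prime[q]:
            for m in range(q * q, n + 1, q):
                is_prime[m] = False
    return sum(step[p % 8] for p in range(3, n + 1) if is_prime[p])
```

For example, the 24 odd primes up to 100 split 5/7/6/6 over the residues 1, 3, 5, 7 mod 8, giving $f(100) = -1 + i$.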
nt.number-theory prime-numbers analytic-number-theory random-walk
The obvious heuristic, which views this as approximately a random walk, says yes. – Will Sawin Feb 1 '13 at 22:43
In fact, there is numerical evidence that the points in this walk 'stay much closer together' than in a random walk. Thus one might be bold to conjecture that even a 3-dimensional version of the prime race is surjective, even though the simple random walk in 3 dimensions is not recurrent. But heuristics are the one thing, rigorous proofs are the other ... . – Stefan Kohl Feb 1 '13 at
Stefan, can you give more details about this numerical evidence? My understanding to date is that the prime number race walk is extremely well modeled by a typical random walk. – Greg Martin Feb 1 '13 at 23:19
@Teo B: The movement in the negative and positive direction comes from the prime number race modulo $8$ between the residues $1$ and $5$. More often than not, there will be more primes congruent to $5$ modulo $8$ than $1$. (To be specific, we need to talk about the logarithmic density and assume GRH and LI.) The first time that the primes congruent to $1$ modulo $8$ pulls ahead in the race is between $10^8$ and $10^9$. – Eric Naslund Feb 1 '13 at 23:38
It's hard to give concrete reasons why the simple $\mathbb Z^2$ random walk is a suitable model for this prime race walk. (One can point to the generalized Riemann hypothesis as implying that the typical distance from the origin is on the order of the square root of the number of steps, for example.) But I feel confident in being able to deflect any specific concerns that they should be different, as I and Eric have done a little of already. Feel free to nominate other such concerns! – Greg Martin Feb 5 '13 at 1:41
1 Answer
This is not an answer, but rather an explanation of why this question is so difficult.
For positive coprime integers $a,q$, let $$\pi(x;q,a) = \# \{p \leq x : p \equiv a \pmod{q}\}.$$ For $k \in \mathbb{Z}$, let $$A_k = \{n \in \mathbb{N} : \pi(n;8,1) - \pi(n;8,5) = k\},$$ and
let $$B_k = \{\pi(n;8,3) - \pi(n;8,7) \in \mathbb{Z} : n \in A_k\}.$$ Then your conjecture that the function $$f(n) = \sum_{p \leq n}{e^{\pi i(p - 1)/4}}$$ is surjective on $\mathbb{Z}[i]$ is
equivalent to the conjecture that $B_k = \mathbb{Z}$ for each $k \in \mathbb{Z}$.
For this to happen, the set $A_k$ must be countably infinite; that is, the equality $\pi(n;8,1) = \pi(n;8,5)$ must occur infinitely often. This is a difficult result, but it is in fact known
unconditionally: it is covered by Theorem 5.1 of "Comparative prime-number theory. II" by S. Knapowski and P. Turán. Apparently, it has now been proven unconditionally by Jason Sneed that $\pi(x;q,a) - \pi(x;q,b)$ changes sign infinitely often for all $q \leq 100$, but this is yet to appear in print (see this paper for a discussion).
If one assumes two strong conjectures, the Grand Riemann hypothesis and the Linear Independence hypothesis (namely that the imaginary parts of the nontrivial zeroes of all Dirichlet $L$-functions are linearly independent over the rationals), then one can say a lot more. Rubinstein and Sarnak's paper on Chebyshev's bias shows that not only are there infinitely many sign changes, but the function $$\left(\frac{\log x}{\sqrt{x}} \left(\pi(x;q,a_1) - \mathrm{Li}(x)\right), \ldots, \frac{\log x}{\sqrt{x}} \left(\pi(x;q,a_r) - \mathrm{Li}(x)\right)\right)$$ has a limiting logarithmic distribution. In particular, they can say roughly how likely $(\log x / \sqrt{x}) \pi(x;8,1)$ and $(\log x / \sqrt{x}) \pi(x;8,5)$ are to be in particular regions; unfortunately, this doesn't really tell you anything about the set $A_k$ for each integer $k$.
Once you have that $A_k$ is countably infinite, you still need to ensure that there is no "conspiracy" happening, in that the other prime number race $\pi(x;8,3) - \pi(x;8,7)$ could avoid
certain configurations whenever $x$ is a zero of the prime number race $\pi(x;8,1) - \pi(x;8,5)$. This seems extremely difficult, and I don't know how one might attempt to analyse this. That
being said, questions peripherally related to this were studied by Knapowski and Turán, so it is possible that there might be something in the literature that can deal with this type of problem.
As an aside, one interesting modification of this conjecture is the following. Let $\chi$ be a Dirichlet character modulo $q$, so that $\chi$ is generated by some root of unity $\zeta_Q$. Is
the function $$f_{\chi}(n) = \sum_{p \leq n}{\chi(p)}$$ surjective on $\mathbb{Z}[\zeta_Q]$?
The specific case of sign changes for $\pi(x;8,1) - \pi(x;8,5)$ might be among the cases proved unconditionally back before 1950 or so. – Greg Martin Feb 4 '13 at 20:37
Regarding your $f_\chi(n)$ function: in general we shouldn't expect this to be surjective. $\mathbb Z[\zeta_q]$ will be a lattice of dimension $\phi(q)$, and the random walk analogy predicts that such a walk won't be surjective once $\phi(q)\ge3$. A related prediction would be that $f_\chi(n)$ takes the value $0$ only finitely often. A test (using a Dirichlet character modulo 11, so that $q=10$ and $\phi(q)=4$) only finds three hits $f_\chi(n)=0$ through the first $100000$ primes. (Comparison: 36 hits for a character modulo 7; 140 hits for a character modulo 5.) – Greg Martin Feb 4 '13 at 20:47
@Greg: You are right about the first part; this was actually proved by Knapowski and Turán in 1962. I'll edit my answer accordingly. – Peter Humphries Feb 4 '13 at 21:34
@Peter: Thank you very much for your discussion of the problem! So, to summarize, there are essentially two things which make the problem difficult: firstly the question of relations
between $\pi(x;8,3) - \pi(x;8,7)$ and $\pi(x;8,1) - \pi(x;8,5)$, and secondly that information on exact numbers rather than just asymptotics is necessary. – Stefan Kohl Feb 4 '13 at 22:22
| {"url":"http://mathoverflow.net/questions/120552/prime-number-races-in-2-dimensions","timestamp":"2014-04-16T07:44:31Z","content_type":null,"content_length":"68695","record_id":"<urn:uuid:f56e5b32-efc7-4979-8f1c-0d2dd1d037f8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
For the functions f(x)=1/(x)^2 g(x)=7x-6 h(x)=1-2x
a) fog(x)
State restrictions where possible i.e for rational function
a) You need to compose the functions f(x) and g(x) such that:
`(fog)(x) = f(g(x)) = 1/(g^2(x))`
You need to substitute 7x - 6 for g(x) such that:
`(fog)(x) = f(g(x)) = 1/((7x - 6)^2)`
The denominator of the fraction needs to be different from zero, hence `x!=6/7` .
Thus, the domain of the function needs to exclude the value 6/7.
Hence, evaluating `(fog)(x)` yields `(fog)(x) = 1/((7x - 6)^2).`
c) You need to compose the functions h(x) and f(x) such that:
`(hof)(x) = h(f(x)) = 1 - 2f(x)`
You need to substitute `1/(x^2)` for f(x) such that:
`(hof)(x) = h(f(x)) = 1 - 2/(x^2)`
Notice that the domain of the function needs to exclude the value 0.
Hence, evaluating `(hof)(x)` yields `(hof)(x) = 1 - 2/(x^2).`
d) `(gofoh)(x) = g(f(h(x))) = g(f(1-2x)) = 7f(1-2x) - 6`
`(gofoh)(x) = 7/((1-2x)^2) - 6`
Notice that the domain of the function needs to exclude the value `1/2.`
Hence, evaluating the `(gofoh)(x` ) yields`(gofoh)(x) = 7/((1-2x)^2) - 6`
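The three compositions can be spot-checked numerically; this quick sketch is my addition, not part of the original answer:

```python
# Numeric spot-check of the compositions worked out above.
f = lambda x: 1 / x**2
g = lambda x: 7 * x - 6
h = lambda x: 1 - 2 * x

fog   = lambda x: f(g(x))      # 1/(7x - 6)^2,      undefined at x = 6/7
hof   = lambda x: h(f(x))      # 1 - 2/x^2,         undefined at x = 0
gofoh = lambda x: g(f(h(x)))   # 7/(1 - 2x)^2 - 6,  undefined at x = 1/2

assert fog(1) == 1.0           # 1/(7 - 6)^2 = 1
assert hof(2) == 0.5           # 1 - 2/4 = 1/2
assert gofoh(0) == 1.0         # 7/1 - 6 = 1
```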
| {"url":"http://www.enotes.com/homework-help/functions-f-x-1-x-2-g-x-7x-6-h-x-1-2x-fog-x-c-hof-344151","timestamp":"2014-04-25T03:02:43Z","content_type":null,"content_length":"26076","record_id":"<urn:uuid:9bc6b932-3980-45ec-a5c6-314a26c98887>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Caribbean Monetary Notation
One helpful reader of my post on Number Delimitation pointed out that some localization data indicates that in several Caribbean countries, in both English and Spanish, although ordinary numbers are
written with the usual grouping into threes, in monetary values only the low group of three is delimited. In other words, an ordinary number looks like this: 123,456,789 but the same number of
dollars is written $123456,789. Can anyone confirm this practice? If so, does anyone know its origin?
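For concreteness, here is a toy formatter (my sketch, not from the post) implementing the two conventions described above — standard grouping for ordinary numbers, and only the low group of three delimited for monetary values:

```python
# Toy formatter for the claimed Caribbean monetary convention.

def ordinary(n):
    """Usual grouping into threes: 123456789 -> '123,456,789'."""
    return f"{n:,}"

def monetary(n):
    """Only the low group of three delimited: 123456789 -> '$123456,789'."""
    s = str(n)
    return "$" + (s[:-3] + "," + s[-3:] if len(s) > 3 else s)

print(ordinary(123456789))  # -> 123,456,789
print(monetary(123456789))  # -> $123456,789
```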
Posted by Bill Poser at June 21, 2007 05:13 PM | {"url":"http://itre.cis.upenn.edu/~myl/languagelog/archives/004627.html","timestamp":"2014-04-20T18:23:43Z","content_type":null,"content_length":"6722","record_id":"<urn:uuid:cafbd114-ca1a-4cc3-abb6-65d021de7c23>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: A Variational Vectorial Mode Solver
O.V. (Alyona) Ivanova, Remco Stoffer, Manfred Hammer and E. (Brenny) van Groesen
MESA+ Institute for Nanotechnology, AAMP group, University of Twente, The Netherlands
A variational method for the fully vectorial mode analysis of lossless dielectric waveguides with piecewise
constant rectangular refractive index distributions is proposed.
An extension of the scalar Mode Expansion Mode Solver [1] to fully vectorial simulations is discussed. The method uses a six component variational formulation of the Maxwell equations [2] together with approximations of each component of the vector field by specific superpositions of some given basis functions, depending on one of the coordinates, times unknown coefficient functions that are defined on the entire second coordinate axis. In principle there is some freedom in choosing the basis functions; we use the vector field components of modes of the constituting vertical slices of the waveguide.
It turns out that with our field template the problem of finding the unknown lateral coefficient-functions reduces to finding those which correspond to only two field components; the computational effort becomes comparable to the scalar case [1] or the Film Mode Matching method [3].
Just like the scalar approach this method can be used with only a few terms in the expansion for rough approximations; the technique can then be compared to a semi-vectorial Effective Index Method. With a suitable choice of basis fields a reasonable approximation can be achieved with a rel-
In The Following Problem, Can I Set The X- And ... | Chegg.com
In the following problem, can I set the x- and y- axis up with x as a-a' and y as the normal to the object, or do I need to account for the 20 degree rise? If it is the latter, how would I consider
the additional angle? Complete explanation will be greatly appreciated as understanding is more important than resulting answer for me!
Knowing that \(\alpha = 40 degrees\) , determine the resultant of the three forces shown:
Mechanical Engineering | {"url":"http://www.chegg.com/homework-help/questions-and-answers/following-problem-set-x-y-axis-x-y-normal-object-need-account-20-degree-rise-latter-would--q4262504","timestamp":"2014-04-18T18:17:17Z","content_type":null,"content_length":"21507","record_id":"<urn:uuid:601f0d25-60a7-43ea-b505-e1ee3f6aed27>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
As mentioned briefly in the previous section, there are multiple ways of constructing a hash function. Remember that a hash function takes the data as input (often a string) and returns an integer in the range of possible indices into the hash table. Every hash function must do that, including the bad ones. So what makes for a good hash function?
Characteristics of a Good Hash Function
There are four main characteristics of a good hash function:

1) The hash value is fully determined by the data being hashed.
2) The hash function uses all the input data.
3) The hash function "uniformly" distributes the data across the entire set of possible hash values.
4) The hash function generates very different hash values for similar strings.

Let's examine why each of these is important:

Rule 1: If something else besides the input data is used to determine the hash, then the hash value is not as dependent upon the input data, thus allowing for a worse distribution of the hash values.
Rule 2: If the hash function doesn't use all the input data, then slight variations to the input data would cause an inappropriate number of similar hash values, resulting in too many collisions.
Rule 3: If the hash function does not uniformly distribute the data across the entire set of possible hash values, a large number of collisions will result, cutting down on the efficiency of the hash table.
Rule 4: In real world applications, many data sets contain very similar data elements. We would like these data elements to still be distributable over a hash table.
So let's take as an example the hash function used in the last section:
int hash(char *str, int table_size)
{
    int sum = 0; // must be initialized before accumulating

    // Make sure a valid string passed in
    if (str == NULL) return -1;

    // Sum up all the characters in the string
    for ( ; *str; str++) sum += *str;

    // Return the sum mod the table size
    return sum % table_size;
}
Which rules does it break and satisfy?

Rule 1: Satisfies. The hash value is fully determined by the data being hashed. The hash value is just the sum of all the input characters.
Rule 2: Satisfies. Every character is summed.
Rule 3: Breaks. From looking at it, it isn't obvious that it doesn't uniformly distribute the strings, but if you were to analyze this function for a large input you would see certain statistical properties bad for a hash function.
Rule 4: Breaks. Hash the string "bog". Now hash the string "gob". They're the same. Slight variations in the string should result in different hash values, but with this function they often don't.
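To make the rule-4 failure concrete, here is a small Python sketch (my addition — ports of the C code in this section, not part of the original tutorial) showing that the character-sum hash collides on anagrams while djb2 does not:

```python
# Python ports (illustrative) of the character-sum hash and djb2 from the text.

def bad_hash(s, table_size):
    """Sum-of-characters hash: breaks rule 4, since anagrams always collide."""
    return sum(ord(c) for c in s) % table_size

def djb2(s):
    """djb2: hash = hash*33 + c, truncated here to 32 bits."""
    h = 5381
    for c in s.encode():
        h = ((h << 5) + h + c) & 0xFFFFFFFF
    return h

print(bad_hash("bog", 101), bad_hash("gob", 101))  # -> 9 9 (collision)
print(djb2("bog") == djb2("gob"))                  # -> False (order matters)
```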
There are many possible ways to construct a better hash function (doing a web search will turn up hundreds) so we won't cover too many here except to present a few decent examples of hash functions:
/* Peter Weinberger's */
int hashpjw(char *s)
{
    char *p;
    unsigned int h, g;

    h = 0;
    for (p = s; *p != '\0'; p++) {
        h = (h << 4) + *p;
        if (g = h & 0xF0000000) {
            h ^= g >> 24;
            h ^= g;
        }
    }
    return h % 211;
}
Another one:
/* UNIX ELF hash
 * Published hash algorithm used in the UNIX ELF format for object files
 */
unsigned long hash(char *name)
{
    unsigned long h = 0, g;

    while ( *name ) {
        h = ( h << 4 ) + *name++;
        if ( g = h & 0xF0000000 )
            h ^= g >> 24;
        h &= ~g;
    }
    return h;
}
or possibly:
/* This algorithm was created for the sdbm (a reimplementation of ndbm)
 * database library and seems to work relatively well in scrambling bits
 */
static unsigned long sdbm(unsigned char *str)
{
    unsigned long hash = 0;
    int c;

    while (c = *str++) hash = c + (hash << 6) + (hash << 16) - hash;
    return hash;
}
or possibly:
/* djb2
 * This algorithm was first reported by Dan Bernstein
 * many years ago in comp.lang.c
 */
unsigned long hash(unsigned char *str)
{
    unsigned long hash = 5381;
    int c;

    while (c = *str++) hash = ((hash << 5) + hash) + c; // hash*33 + c
    return hash;
}
or another:
char XORhash(char *key, int len)
{
    char hash;
    int i;

    for (hash = 0, i = 0; i < len; ++i) hash = hash ^ key[i];
    return (hash % 101); /* 101 is prime */
}
You get the idea... there are many possible hash functions. For coding up a hash function quickly, djb2 is usually a good candidate as it is easily implemented and has relatively good statistical properties. | {"url":"http://www.sparknotes.com/cs/searching/hashtables/section2.rhtml","timestamp":"2014-04-17T12:52:56Z","content_type":null,"content_length":"54245","record_id":"<urn:uuid:61332dfa-7cf3-41c1-b716-32fec2b042c0>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
The architecture of MiMo2
Next: Other constraint-based approaches to Up: Reversible Machine Translation Previous: The subset problem
The architecture of MiMo2
In the architecture of MiMo2 to be proposed here, (monolingual) relations between phonological representations and semantic representations are defined by constraint-based grammars of the type
introduced in chapter 2. However, constraint-based grammars can also be used to define other relations between (parts of) linguistic signs. In particular it is possible, as discussed by [44] for FUG,
to use constraint-based grammars to define transfer rules. In the model I propose a translation relation between two languages is defined as the composition of three reversible relations. Each of
these relations is defined by a constraint-based grammar. The first grammar defines the relation between source language utterances and source language dependent semantic representations. The second
grammar defines the relation between source language dependent semantic representations and target language dependent semantic representations, the third grammar defines the relation between target
language dependent semantic representations and target language utterances. The resulting MT system is reversible iff each of the grammars is reversible, as I showed in section 1.3.
For example, to compute the relation between Dutch and Spanish phonological representations construct the series of the programs for the Dutch grammar, the Dutch-Spanish transfer grammar and the
Spanish grammar. Each translation relation that can be defined is necessarily reversible if each of the grammars that are used defines an reversible relation. See figure 5.3 for an illustration.
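The composition just described can be illustrated with a toy sketch (my own, not from the thesis; the miniature lexicon and relation names are invented), treating each component as a reversible relation, i.e. a set of pairs:

```python
# Toy illustration: a translation relation as the composition of three relations.

def compose(R, S):
    """Relational composition: (a, c) whenever (a, b) in R and (b, c) in S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def invert(R):
    return {(b, a) for (a, b) in R}

# Invented miniature example: Dutch -> Dutch semantics -> Spanish semantics -> Spanish.
analysis   = {("huis", "HOUSE'")}
transfer   = {("HOUSE'", "CASA'")}
generation = {("CASA'", "casa")}

nl_to_es = compose(compose(analysis, transfer), generation)
print(nl_to_es)  # -> {('huis', 'casa')}

# The composite is reversible because each component is: invert each component
# and compose in the opposite order.
es_to_nl = compose(compose(invert(generation), invert(transfer)), invert(analysis))
assert es_to_nl == invert(nl_to_es)
```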
The reasons for a constraint-based formalism for transfer rules are the following.
• A constraint-based implementation provides a declarative characterization of the transfer relation. This characterization is independent of the actual way in which the relation is computed.
• A single characterization can then be used to compute transfer relations in both directions. The constraint-based formalism thus provides for a reversible transfer component.
• A constraint-based formalism constitutes a simple, yet very powerful language for the statement of such transfer relations. In section 5.6 I show that transfer relations can be defined to analyze
certain non-compositional translations which are problematic for other transfer systems.
In section 5.5 I discuss how to ensure that a transfer grammar is reversible. We show that, as long as translation is compositional in a sense to be made precise, it is possible to guarantee that
transfer grammars are reversible. On the other hand such grammars are still powerful enough to handle certain non-compositional translations. For this reason we argue that reversible constraint-based
grammars provide for an interesting compromise between expressive power and computability.
As far as the system is concerned, there may be a different logic for natural language semantics for each language. A transfer component for two languages thus functions as an interface to relate the
two logics used to define semantic representations with. This makes it possible that grammars are developed quite independently of each other. On the other hand, if languages define similar semantic
representations the transfer grammars will generally be simpler and easier to write.
In a transfer model the subset problem, as discussed in the previous section, is in principle present in a slightly different format, because the transfer component need not be `complete' (cf. figure
5.4). The point at which grammars are connected gives in principle rise to an instantiation of the subset problem. However, in practice it turns out that in the proposed architecture the problem
hardly surfaces at all. This is so, because the transfer grammars are explicitly tuned to each of the monolingual grammars. Clearly, that was the reason to have transfer grammars in the first place.
Therefore, it seems warranted to neglect this problem.
Next: Other constraint-based approaches to Up: Reversible Machine Translation Previous: The subset problem Noord G.J.M. van | {"url":"http://www.let.rug.nl/~vannoord/papers/diss/diss/node81.html","timestamp":"2014-04-19T17:40:43Z","content_type":null,"content_length":"8589","record_id":"<urn:uuid:58d20130-2340-4b3b-8f27-62093f2c555f>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00230-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can someone explain how to graph this sum in MATLAB using contourf?
I'm going to start off by stating that, yes, this is homework (my first homework question on stackoverflow!). But I don't want you to solve it for me, I just want some guidance!
The equation in question is this (it was posted as an image, which is not preserved in this extract):
I'm told to take N = 50, phi1 = 300, phi2 = 400, 0<=x<=1, and 0<=y<=1, and to let x and y be vectors of 100 equally spaced points, including the end points.
So the first thing I did was set those variables, and used x = linspace(0,1) and y = linspace(0,1) to make the correct vectors.
The question is Write a MATLAB script file called potential.m which calculates phi(x,y) and makes a filled contour plot versus x and y using the built-in function contourf (see the help command in
MATLAB for examples). Make sure the figure is labeled properly. (Hint: the top and bottom portions of your domain should be hotter at about 400 degrees versus the left and right sides which should be
at 300 degrees).
However, previously, I've calculated phi using either x or y as a constant. How am I supposed to calculate it where both are variables? Do I hold x steady, while running through every number in the
vector of y, assigning that to a matrix, incrementing x to the next number in its vector after running through every value of y again and again? And then doing the same process, but slowly
incrementing y instead?
If so, I've been using a loop that increments to the next row every time it loops through all 100 values. If I did it that way, I would end up with a massive matrix that has 200 rows and 100 columns.
How would I use that in the linspace function?
If that's correct, this is how I'm finding my matrix:
format compact
x = linspace(0,1);
y = linspace(0,1);
N = 50;
phi1 = 300;
phi2 = 400;
phi = 0;
sum = 0;
for j = 1:100
for i = 1:100
for n = 1:N
sum = sum + ((2/(n*pi))*(((phi2-phi1)*(cos(n*pi)-1))/((exp(n*pi))-(exp(-n*pi))))*((1-(exp(-n*pi)))*(exp(n*pi*y(i)))+((exp(n*pi))-1)*(exp(-n*pi*y(i))))*sin(n*pi*x(j)));
phi(j,i) = phi1 - sum;
for j = 1:100
for i = 1:100
for n = 1:N
sum = sum + ((2/(n*pi))*(((phi2-phi1)*(cos(n*pi)-1))/((exp(n*pi))-(exp(-n*pi))))*((1-(exp(-n*pi)))*(exp(n*pi*y(j)))+((exp(n*pi))-1)*(exp(-n*pi*y(j))))*sin(n*pi*x(i)));
phi(j+100,i) = phi1 - sum;
This is the definition of contourf. I think I have to use contourf(X,Y,Z):
contourf(X,Y,Z), contourf(X,Y,Z,n), and contourf(X,Y,Z,v) draw filled contour plots of Z using X and Y to determine the x- and y-axis limits. When X and Y are matrices, they must be the same size as
Z and must be monotonically increasing.
Here is the new code:
N = 50;
phi1 = 300;
phi2 = 400;
[x, y, n] = meshgrid(linspace(0,1),linspace(0,1),1:N)
f = phi1-((2./(n.*pi)).*(((phi2-phi1).*(cos(n.*pi)-1))./((exp(n.*pi))-(exp(-n.*pi)))).*((1-(exp(-1.*n.*pi))).*(exp(n.*pi.*y))+((exp(n.*pi))-1).*(exp(-1.*n.*pi.*y))).*sin(n.*pi.*x));
g = sum(f,3);
[x1,y1] = meshgrid(linspace(0,1),linspace(0,1));
matlab plot vectorization graphing
2 Answers
Vectorize the code. For example you can write f(x,y,n) with:
[x y n] = meshgrid(-1:0.1:1,-1:0.1:1,1:10);
f=exp(x.^2-y.^2).*n ;
f is a 3D matrix; now just sum over the right dimension...

g=sum(f,3);

in order to use contourf, we'll take only the 2D part of x,y:

[x1 y1] = meshgrid(-1:0.1:1,-1:0.1:1);
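For comparison only (my addition — the thread itself is MATLAB, and the variable names here are my own), the same vectorization carries over directly to NumPy; the boundary behavior matches the hint in the question (the $y=0$ edge sits near 400 while the $x=0$ edge stays at exactly 300, since $\sin(n\pi \cdot 0) = 0$):

```python
# NumPy port (illustrative) of the vectorized approach above.
import numpy as np

N = 50
phi1, phi2 = 300.0, 400.0
x = np.linspace(0.0, 1.0, 100)
y = np.linspace(0.0, 1.0, 100)
X, Y, n = np.meshgrid(x, y, np.arange(1, N + 1))  # each of shape (100, 100, 50)

term = ((2.0 / (n * np.pi))
        * ((phi2 - phi1) * (np.cos(n * np.pi) - 1))
        / (np.exp(n * np.pi) - np.exp(-n * np.pi))
        * ((1 - np.exp(-n * np.pi)) * np.exp(n * np.pi * Y)
           + (np.exp(n * np.pi) - 1) * np.exp(-n * np.pi * Y))
        * np.sin(n * np.pi * X))
phi = phi1 - term.sum(axis=2)  # sum over the n-axis; phi has shape (100, 100)

# To plot: import matplotlib.pyplot as plt; plt.contourf(x, y, phi); plt.colorbar()
```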
So I understand the meshgrid, and how it makes every combination of the two matrices, and fast. But I'm not sure how to put my sum into the format to use it like this. I have the
sum = sum + blah loop, and then have to do the phi1 - sum. The ways you guys are showing me seem so much easier than how I'm doing it, but I'm not sure how to adapt my problem to
your method. – TheTreeMan Jan 25 '13 at 5:08
So wait, do I do the loop as said, but I'm using the x and y as a result of meshgrid, and using the dot operators to make the operations performed in the equation happen for every
single part of the matrix at once? – TheTreeMan Jan 25 '13 at 5:11
see my edited answer, also what didn't work? by the way, don't use the word sum for your variable, you are overwriting an important function of matlab... – natan Jan 25 '13 at 5:24
yes this should work... I've edited again the answer to be more educational... – natan Jan 25 '13 at 5:40
:-) welcome to Matlab my friend... – natan Jan 25 '13 at 6:12
show 4 more comments
The reason your code takes so long to calculate the phi matrix is that you didn't pre-allocate the array. The error about size happens because phi is not 100x100. But instead of fixing
those things, there's an even better way...
MATLAB is a MATrix LABoratory so this type of equation is pretty easy to compute using matrix operations. Hints:
1. Instead of looping over the values, rows, or columns of x and y, construct matrices to represent all the possible input combinations. Check out meshgrid for this.
2. You're still going to need a loop to sum over n = 1:N. But for each value of n, you can evaluate your equation for all x's and y's at once (using the matrices from hint 1). The key to making this work is using element-by-element operators, such as .* and ./.
Using matrix operations like this is The Matlab Way. Learn it and love it. (And get frustrated when using most other languages that don't have them.)
Good luck with your homework!
This is a stupid question, but if I pre-allocate the tray like that, will my matrix stay that huge, even after inserting values? If I try to do the controurf, will it also run through
all of those zeroes as well? I'm impressed that it cut down the time to calculate that array by 2/3 though! That's crazy. I'm having a hard time understanding how to use the meshgrid
function for this purpose. Do you think you can explain it really fast? It's difficult to read and understand what it's doing, since it's a little beyond my programming level. –
TheTreeMan Jan 25 '13 at 4:41
a 200 by 100 element matrix is not huge. it only takes ~160 KB memory... vectorize your code following the advice from @Shoelzer – natan Jan 25 '13 at 4:53
@TheTreeMan natan's answer is a good example of using meshgrid. – shoelzer Jan 25 '13 at 5:14
add comment
| {"url":"http://stackoverflow.com/questions/14515051/can-someone-explain-how-to-graph-this-sum-in-matlab-using-contourf?answertab=oldest","timestamp":"2014-04-19T13:33:56Z","content_type":null,"content_length":"81997","record_id":"<urn:uuid:4cbded2d-1b0e-488f-8e0d-e3590d92f402>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
September 1st 2010, 10:02 AM
Stuck Man
I can't do this question:
Find the number of different selections of 5 letters which can be made from the letters of the word syllabus.
September 1st 2010, 10:12 AM
Well, there are 8 letters in the word syllabus. You are being asked to find the number of different selections of 5 letters which can be made from the letters of the word syllabus which is the
same as the number of ways one can select 5 letters from the 8 letter set comprised of the 8 letters that form the word "syllabus". So, you can ignore the word "syllabus" and just treat the
problem as any other set counting problem.
Since combinations is in your title, I am sure you know how to compute the number of ways to select 5 elements from an 8 element set.
September 1st 2010, 10:28 AM
Stuck Man
8C5 is 56. It is possible to select S twice or L twice or two S's and two L's. I think I need to subtract something from 56. The answer in the book is 30.
September 1st 2010, 10:43 AM
The correct answer is some what more complicated than the first reply would suggest.
The letters in $syllabus$ are only seven difference letters.
The double $ll$ complicates matters.
So count all five letter combinations that contain at most one $l$. Then add to it the combinations that contain two l’s.
September 1st 2010, 10:57 AM
Stuck Man
I can't see why you think the order of the letters in syllabus is important.
September 1st 2010, 11:39 AM
The correct answer is some what more complicated than the first reply would suggest.
The letters in $syllabus$ are only seven difference letters.
The double $ll$ complicates matters.
So count all five letter combinations that contain at most one $l$. Then add to it the combinations that contain two l’s.
There are also two of the letter s; only 6 distinct letters.
I will use uppercase L because lowercase looks like 1.
Case by case like this will give the answer:
no L and 1 s
no L and 2 s's
1 L and no s
1 L and 1 s
1 L and 2 s's
2 L's and no s
2 L's and 1 s
2 L's and 2 s's
Can be made faster realising that 1 L and no s gives the same count as 1 s and no L, etc.
Nobody claimed that order was important.
September 1st 2010, 11:43 AM
Archie Meade
Your options are....
(a) the selection contains $ssll$
You need to choose 1 letter from the 4 distinct letters y,a,b,u. The number is $\binom{4}{1}=4$
(b) the selection contains $ssl$, only one $l$
This time choose 2 from the 4 distinct letters. $\binom{4}{2}=6$
(c) the selection contains $sll$. $\binom{4}{2}=6$
(d) the selection contains $sl$, only one $s$ and only one $l$.
Choose 3 from the 4 distinct letters. $\binom{4}{3}=4$
(e) the selection contains $s$, only one $s$ and no $l$.
You need to choose all 4 distinct letters. $\binom{4}{4}=1$
(f) the selection contains one $l$ and no $s$. $\binom{4}{4}=1$
(g) the selection contains $ss$ and no $l$. $\binom{4}{3}=4$
(h) the selection contains $ll$ and no $s$. $\binom{4}{3}=4$
The selection must contain an $s$ or an $l$.
Sum all of these.
EDIT: Hi "undefined", we were on the same wavelength!
Now it's broken down, faster solutions can be given and maybe some neat closed-form options..
September 1st 2010, 01:17 PM
A slightly simpler way:
Start by only considering the case that there is only 1 L and 1 S. Hence the letters to select from are A, B, L, S, U and Y. How many ways can you pick five of these at a time? Answer = C(6,5) = 6.
This is the number of ways you can select letters with either 0 or 1 L and with either 0 or 1 S.
Now consider combinations that have 2 L's. How many ways can you select three from the remaining 5 letters A,B,S,U, and Y?: C(5,3)=10. This is the number of selections that have 2 L's and either
0 or 1 S.
Ditto for combinations that have 2 S's: C(5,3) = 10 is the number of selections that have 2 S's and either 0 or 1 L.
Finally add the number of ways you can have 2 L's and 2 S's: you have to select only one letter left from the 4 remaining: C(4,1)=4.
Add them up and you get 30.
September 1st 2010, 01:31 PM
Here is the easiest way.
The coefficient of $x^5$ in the expansion of $\displaystyle \left( {\sum\limits_{k = 0}^1 {x^k } } \right)^4 \left( {\sum\limits_{k = 0}^2 {x^k } } \right)^2$ is $30$. | {"url":"http://mathhelpforum.com/discrete-math/154949-combinations-print.html","timestamp":"2014-04-17T18:48:38Z","content_type":null,"content_length":"17263","record_id":"<urn:uuid:73e08bde-3b9b-45aa-b3e3-0f22889a760c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
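A quick brute-force check (my addition, not part of the thread) confirms both the count of 30 and the generating-function coefficient from the last post:

```python
# Count distinct 5-letter multiset selections from the letters of "syllabus".
from itertools import combinations

letters = sorted("syllabus")  # ['a', 'b', 'l', 'l', 's', 's', 'u', 'y']
# combinations over a sorted sequence yields sorted tuples, so a set
# deduplicates repeated multisets (from the duplicate l's and s's).
selections = set(combinations(letters, 5))
print(len(selections))  # -> 30

# Coefficient of x^5 in (1+x)^4 * (1+x+x^2)^2, as in the generating-function post:
p = [1, 4, 6, 4, 1]  # (1+x)^4
q = [1, 2, 3, 2, 1]  # (1+x+x^2)^2
coeff5 = sum(p[i] * q[5 - i] for i in range(len(p)) if 0 <= 5 - i < len(q))
print(coeff5)  # -> 30
```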
Newest 'co.combinatorics computer-science' Questions
I'll ask you to consider a situation wherein one has a series of edges for a graph, $(e_1, e_2, ..., e_N) \in E$, each with a specifiable length $(l_1, l_2, ..., l_N) \in L$, and the goal is to
insure ...
asked Nov 24 '10 at 2:32 | {"url":"http://mathoverflow.net/questions/tagged/co.combinatorics+computer-science","timestamp":"2014-04-23T15:38:10Z","content_type":null,"content_length":"170549","record_id":"<urn:uuid:b81e7d0a-c467-4d55-aa00-c3bf922b5dc4>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elmhurst, New York, NY
New York, NY 10016
GRE, GMAT, SAT, NYS Exams, and Math
...I have over three years of experience tutoring geometry for elementary school through high school. The subjects I cover include area, perimeter, volume, surface area, lines and angles, polygons,
triangles, circles, geometry proofs and logic, coordinate geometry,...
Offering 10+ subjects including algebra 1 | {"url":"http://www.wyzant.com/Elmhurst_New_York_NY_Algebra_1_tutors.aspx","timestamp":"2014-04-19T10:21:09Z","content_type":null,"content_length":"62675","record_id":"<urn:uuid:7edcd4de-e762-43d0-838d-eb8ffbb59ef3>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00201-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] limit of function exists, but limit of its derivative doesn't.
November 15th 2009, 04:15 PM #16
Oct 2009
$\lim_{x\to\infty}\frac{-1}{x}\leq\lim_{x\to\infty}\frac{\sin(x^2)}{x}\leq\lim_{x\to\infty}\frac{1}{x}$
So the Squeeze Theorem says $\lim_{x\to\infty}\frac{\sin(x^2)}{x}=0$
The derivative of $\frac{\sin x}{x^2}$ is $\frac{\cos x}{x^2}-\frac{2\sin x}{x^3}$, which clearly goes to $0$ as $x\to\infty$.
This is a mess, but everyone understands what she/he wants: $\frac{\sin x}{x}$. I forgot to square the denominator when differentiating and thus got a wrong derivative.
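The thread's example can be checked numerically: f(x) = sin(x²)/x is squeezed to 0, while its derivative f′(x) = 2cos(x²) − sin(x²)/x² keeps oscillating with amplitude about 2. A quick sketch (my own illustration, not from the thread):

```python
import math

def f(x):
    return math.sin(x * x) / x

def fprime(x):
    # d/dx [sin(x^2)/x] = 2*cos(x^2) - sin(x^2)/x^2
    return 2.0 * math.cos(x * x) - math.sin(x * x) / (x * x)

# |f(x)| <= 1/x, so the function itself is squeezed to 0:
print([abs(f(10.0 ** k)) for k in range(1, 6)])

# ... but f'(x) ~ 2*cos(x^2) keeps swinging between -2 and 2,
# so lim f'(x) does not exist even though lim f(x) = 0:
print(max(abs(fprime(100.0 + k * 0.001)) for k in range(200)))
```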
November 15th 2009, 04:18 PM #17 | {"url":"http://mathhelpforum.com/differential-geometry/114740-solved-limit-function-exists-but-limit-its-derivative-doesn-t-2.html","timestamp":"2014-04-24T10:11:50Z","content_type":null,"content_length":"41143","record_id":"<urn:uuid:1eb8bbeb-8452-4247-91fa-ad1c55c36e20>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
Functions s.t. f(x+y) = f(x) + f(y)?
February 3rd 2010, 08:00 PM
Functions s.t. f(x+y) = f(x) + f(y)?
How can find all real continuous functions s.t. f(x+y) = f(x) + f(y), and prove it? So far I've tried to do it by a process of elimination, leaving only functions of the form f(x) = ax and the
trivial case f(x) = 0, but I think I'm missing a trick here as I don't think it's practical to systematically eliminate every possible function.
February 3rd 2010, 08:08 PM
How can find all real continuous functions s.t. f(x+y) = f(x) + f(y), and prove it? So far I've tried to do it by a process of elimination, leaving only functions of the form f(x) = ax and the
trivial case f(x) = 0, but I think I'm missing a trick here as I don't think it's practical to systematically eliminate every possible function.
Claim: The only functions that satisfy this are $f(x)=f(1)x$
Proof: We make the following observations
1. $f(0)=f(0+0)=f(0)+f(0)\implies f(0)=0$
2. $0=f(0)=f(x+(-x))=f(x)+f(-x)\implies -f(x)=f(-x)$
3. It follows by induction that $f(zx)=zf(x),\text{ }z\in\mathbb{Z}$
4. $f(1)=f\left(\frac{q}{q}\right)=qf\left(\frac{1}{q} \right)\implies f\left(\frac{1}{q}\right)=\frac{f(1)}{q},\text{ }q\in\mathbb{Z}$
5. $f\left(\frac{p}{q}\right)=pf\left(\frac{1}{q}\right)=f(1)\frac{p}{q},\text{ }\frac{p}{q}\in\mathbb{Q}$
So, clearly we have that $f(x)=f(1)x,\text{ }x\in\mathbb{Q}$
and since $\mathbb{Q}$ is dense in $\mathbb{R}$ we see that given any $x\in\mathbb{R}-\mathbb{Q}$ there exists a sequence of rational numbers $\left\{q_n\right\}_{n\in\mathbb{N}}$ such that $q_n\to x$, and since $f$ is continuous $f(x)=\lim\text{ }f(q_n)=\lim\text{ }f(1)q_n=f(1)\lim\text{ }q_n=f(1)x$.
The conclusion follows.
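Steps 3–5 of the proof can be exercised numerically: pretending f is known only at 1 but obeys additivity, the rational-scaling argument pins down f on every rational. A toy check (the stand-in function f(x) = πx is my assumption, used only as a black box):

```python
import math
from fractions import Fraction

# The "unknown" additive function; we pretend to know only f(1).
def f(x):
    return math.pi * x

def f_from_additivity(p, q, f1):
    """Reproduce steps 4-5: f(1) = q*f(1/q) forces f(1/q) = f(1)/q,
    and then f(p/q) = p*f(1/q) = f(1)*p/q."""
    return p * (f1 / q)

for p, q in [(5, 7), (-3, 11), (22, 4)]:
    assert math.isclose(f_from_additivity(p, q, f(1)), f(Fraction(p, q)))
print("f(p/q) = f(1)*p/q verified on sample rationals")
```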
February 4th 2010, 05:13 AM
Excellent, Drexel28!
By the way, tunaaa, there exist non-continuous functions that satisfy f(x+y) = f(x) + f(y), but they are really nasty! Not only are they discontinuous, they are not bounded in any interval, no matter how small. In fact, their graph is dense in the plane: no matter what x, y, and $\epsilon> 0$ are, there exists a point of the graph within distance $\epsilon$ of (x, y). | {"url":"http://mathhelpforum.com/differential-geometry/127073-functions-s-t-f-x-y-f-x-f-y-print.html","timestamp":"2014-04-18T09:16:51Z","content_type":null,"content_length":"9232","record_id":"<urn:uuid:bcd6afcd-af2a-47e5-be56-f4b03c195c5c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
wffs vs graphs
Date: Fri, 25 Feb 1994 13:46:14 -0500
From: schubert@cs.rochester.edu
Message-id: <199402251846.NAA15811@cherry.cs.rochester.edu>
To: boley@dfki.uni-kl.de, cg@cs.umn.edu, fritz@rodin.wustl.edu,
interlingua@ISI.EDU, phayes@cs.uiuc.edu, schubert@cs.rochester.edu
Subject: wffs vs graphs
> But look. Suppose I were to insist that since graphs must be printed or
> displayed, any perceptible 'token' graph will have its nodes and lines
> displayed in some position on the screen; therefore, the problem of graph
> matching is the problem of matching bitmapped pictures of graphs. This
> would clearly be a perverse position ...
Well, keystrokes are (at least at present) more tractable tokens for
on-line processing than bitmaps. But we *do* have to start with tokens.
You mention the possibility of wff-matching aided by a kind of graphical
representation of variable bindings, and remark,
> quantifier binding structure does not naturally 'fit' into the usual
> recursive way of parsing syntax, and that bound variables are only
> a device, and perhaps a rather artificial one, for making it fit
One way of making it fit is by skolemization, and I suppose that may be
why network theorists (including me) have tended to use some sort of
skolem-like form of implicit quantification. But as far as I can tell,
this doesn't help with the matching problem. As you say,
> It will be necessary to consider permutations of &, of course, and that
> seems to me to be the real difficulty. Unfortunately it does not seem that
> anything is going to be able to help with the worst case ...
I've talked with a colleague about this, and he says
1. This does look like DAG-isomorphism, which is equivalent to graph
isomorphism in general, for which it is still not known whether
it's polynomial, just as Bob McGregor surmised (except for bounded
degree graphs, which we can't restrict ourselves to if we want
polyadic conjunction and disjunction)
2. A graphical representation of a formula could be exponentially
more compact than a string representation, by avoiding subexpression repetition.
The latter is an interesting point. But as I noted in my article in
John's book, one can augment standard logical syntax with subexpression
names, getting something even closer to network syntax. With these
names, the wff-syntax can completely avoid subexpression repetition,
and also allows cycles (e.g., Liar sentences, illustrated in the paper).
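Point 2 above — and the shared-name device just described — is easy to illustrate with hash-consing: the formula tower t_n = t_{n−1} ∧ t_{n−1} needs exponentially many symbols as a string or tree, but only linearly many nodes as a DAG with named, shared subexpressions. A sketch (the encoding is mine, not from the thread):

```python
# Hash-consing: build formulas as a DAG so each distinct
# subexpression is stored exactly once.

table = {}          # (op, child ids) -> node id
nodes = []          # node id -> (op, child ids)

def mk(op, *kids):
    key = (op, kids)
    if key not in table:
        table[key] = len(nodes)
        nodes.append(key)
    return table[key]

def tree_size(nid):
    """Node count if the DAG were expanded back into a tree."""
    op, kids = nodes[nid]
    return 1 + sum(tree_size(k) for k in kids)

t = mk("p")                 # atomic proposition
for _ in range(10):
    t = mk("&", t, t)       # t := t & t

print(len(nodes))      # → 11 DAG nodes ...
print(tree_size(t))    # → 2047 nodes as an explicit tree
```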
Feature logics, whose relevance to this discussion I previously
mentioned, also use names for shared functional expressions, avoiding
repetition. (Our input syntax for episodic logic in the EPILOG system
also allows subexpression naming.) | {"url":"http://www-ksl.stanford.edu/email-archives/interlingua.messages/524.html","timestamp":"2014-04-20T13:42:46Z","content_type":null,"content_length":"3645","record_id":"<urn:uuid:0b82a5c3-8b49-4051-89db-20ee86499b6b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spring 2000 LACC Math Contest - Problem 3
Problem 3.
Of a group of students who were entered in a math contest:
14 had taken algebra
(some of these may have also taken geometry, logic, or geometry and logic),
11 had taken geometry
(some of these may also have taken algebra, logic, or algebra and logic),
9 had taken logic
(some of these many also have taken algebra, geometry, or algebra and geometry),
6 students had taken both algebra and geometry
(some of these may also have taken logic),
5 students had taken both algebra and logic
(some of these may also have taken geometry),
10 students had taken exactly two of these courses, and 2 students had taken all three courses.
How many students had taken exactly one of these courses?
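Reading the fourth datum as |algebra ∩ geometry| = 6 (as its parenthetical suggests), the problem yields to inclusion–exclusion; a sketch of the arithmetic (my solution, not part of the contest page):

```python
A, G, L = 14, 11, 9        # took algebra / geometry / logic (overlaps allowed)
AG, AL = 6, 5              # pairwise overlaps given in the problem
exactly_two, all_three = 10, 2

# "exactly two" = (AG - all3) + (AL - all3) + (GL - all3), so:
GL = exactly_two - (AG - all_three) - (AL - all_three) + all_three

# Inclusion-exclusion gives the total number of students:
union = A + G + L - AG - AL - GL + all_three

exactly_one = union - exactly_two - all_three
print(GL, union, exactly_one)   # → 5 20 8
```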
[Problem submitted by Robert Hart, LACC Associate Professor of Computer Science.] | {"url":"http://lacitycollege.edu/academic/departments/mathdept/samplequestions/2000prob3.html","timestamp":"2014-04-18T03:01:04Z","content_type":null,"content_length":"3093","record_id":"<urn:uuid:09c50430-23d4-4067-b82e-d4efe68fc9f1>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Trip
From Progteam
This problem has been solved by kerry.
The Trip is problem number 2646 on the Peking University ACM site.
C answer. Using float for the values leads to rounding errors, so you need a (long) double.
#include <stdio.h>
#include <math.h>

int main(void) {
    long double current, total, avg, answer, input[1001];
    int i, num, upper_num;

    scanf("%d", &num);
    while (num != 0) {
        total = 0;
        for (i = 0; i < num; i++) {
            scanf("%Lf", &current);
            input[i] = current;
            total += current;
        }
        /* truncate the mean down to whole cents */
        avg = floorl((total / num) * 100) / 100;

        answer = 0;
        upper_num = 0;
        for (i = 0; i < num; i++) {
            if (input[i] < avg)
                answer += avg - input[i];   /* money owed by those below the mean */
            else if (input[i] > avg)
                upper_num++;                /* people who can absorb a leftover cent each */
        }
        /* the cents lost to flooring still change hands,
           minus one cent per above-average person */
        if ((total - avg * num) > (upper_num * 0.01L))
            answer += total - avg * num - upper_num * 0.01L;

        printf("$%.2Lf\n", answer);
        scanf("%d", &num);
    }
    return 0;
} | {"url":"http://cs.nyu.edu/~icpc/wiki/index.php?title=The_Trip&oldid=6905","timestamp":"2014-04-16T07:19:25Z","content_type":null,"content_length":"15587","record_id":"<urn:uuid:ea003a7a-548a-4d49-b5c8-1116d29424ba>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Superconvergence Recovery in Generalized Finite Element Method
Uday Banerjee
Department of Mathematics
Syracuse University
Superconvergence is an important and well known feature of the classical finite element method. This feature allows accurate approximation of the derivatives of the solution of the underlying
boundary value problem, which can be used in post-processing. In this talk, we will briefly describe the Generalized Finite Element Method and show that it also has the superconvergence property. We
will also present some computational results, which will indicate that the ``recovered derivatives'' -- constructed from the computed solution -- yield very good approximation of the derivatives of
the exact solution. | {"url":"http://www.math.psu.edu/ccma/seminar/abstracts/Spring06/UdayBannerjee.html","timestamp":"2014-04-21T00:02:41Z","content_type":null,"content_length":"951","record_id":"<urn:uuid:dd3974d9-745b-41ca-bae9-4c7c344e30ec>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
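The superconvergence phenomenon described in this abstract is already visible in the simplest one-dimensional setting: the derivative of the piecewise-linear interpolant of a smooth function is O(h)-accurate at mesh nodes but O(h²)-accurate at element midpoints. A toy illustration (mine, not from the talk):

```python
import math

# P1 interpolant of u(x) = sin(x) on a uniform mesh of [0, pi]:
# the element slope approximates u'(x) = cos(x) to O(h) at the
# nodes but to O(h^2) at element midpoints (the superconvergence
# points).

def max_err(n, at_midpoint):
    h = math.pi / n
    worst = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        slope = (math.sin(b) - math.sin(a)) / h   # derivative of interpolant
        x = (a + b) / 2 if at_midpoint else a
        worst = max(worst, abs(slope - math.cos(x)))
    return worst

for n in (10, 20, 40):
    print(n, max_err(n, True), max_err(n, False))
# Halving h cuts the midpoint error by ~4x (O(h^2)) but the
# nodal error only by ~2x (O(h)).
```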
Union City, GA Statistics Tutor
Find an Union City, GA Statistics Tutor
...And now, in semi-retirement, I find immense joy tutoring students who need an extra boost. I tutor students from elementary school through Algebra II and Trigonometry. I get to know students,
their strengths and their challenges.
8 Subjects: including statistics, algebra 1, trigonometry, algebra 2
...I am an IBM Certified trainer in SPSS and Modeler and teach SPSS and Modeler classes part-time for IBM. I retired from IRS in Dec. 2007, after 32 years with the U.S. Federal Government.
2 Subjects: including statistics, SPSS
...While enjoying the classroom again, I also passed 6 actuarial exams covering Calculus (again), Probability, Applied Statistics, Numerical Methods, and Compound Interest. It's this spectrum of
mathematics, from high school through post baccalaureate, which I feel most comfortable tutoring. I also became even more proficient with Microsoft Excel, Word, and PowerPoint.
21 Subjects: including statistics, calculus, geometry, algebra 1
...Basic algebra skills are crucial for developing proficiency at higher levels on mathematics. I can help any student strengthen their basic skills, setting a solid foundation for success in
their current middle or high school math curriculum, as well as college work - now or down the road. I presently teach Precalculus at Chattahoochee Tech.
13 Subjects: including statistics, calculus, geometry, algebra 1
...I have tutored college and high school students for 15 years in Algebra. I have developed multiple ways to teach any and every subject related to algebra in order to help the individual
student learn. I received an A at Georgia State in this class.
15 Subjects: including statistics, chemistry, calculus, geometry
| {"url":"http://www.purplemath.com/Union_City_GA_Statistics_tutors.php","timestamp":"2014-04-19T17:29:14Z","content_type":null,"content_length":"24086","record_id":"<urn:uuid:f1042898-4606-4900-91fc-28ee98c1b058>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Arcadian Functor
Battle Weary
Recent posts by bloggers
suggest a mood of weariness in the physics blogosphere. The latter cannot even be bothered to write anymore and has resorted to phrases like, "blah, blah, blah, this pseudo-science is on hep-th
because of blah, blah, blah." But the demise of the arxiv is nothing new and the game is just beginning! Heck, I've barely warmed up.
17 Comments:
Mahndisa S. Rigmaiden said...
Hey Kea:
Thanks for paying me a visit today. Regarding the weariness of the physics blogosphere, a coupla folks who you have cited never said anything of consequence anyway so screw them!!!
Er ah, I wanted to thank you for linking to the Everything Seminar. I find it to be an enjoyable breath of fresh air. BTW Marek Wolf is the man that connects number theory to physics ala BL
Hi Mahndisa. Good point. Yeah, the Everything Seminar is just amazing. OK, now I see who Wolf is - thanks.
Kea, I just atttempted to post a comment on Peter Woit's blog that I think might also be relevant here. Here is my coment:
D R Lunsford said "... What a mess. I still can’t believe this is happening. This entire scenario needs to be exposed by a deep historical analysis of what went wrong in academia. ...".
Over on Sean Carroll's blog Count Iblis said that in the hep-th paper 0708.2743 Albrecht and Inglesias (of UC Davis)
"... point out that by messing with time you can map a particular set of laws of physics to any other laws of physics. ...".
In their paper, Albrecht and Inglesias say:
"... We are used to doing physics by stating the physical laws which we believe may be true, and then calculating predictions based on those laws in order to test them against observations of the
physical world.
The clock ambiguity appears to completely undermine this approach to physics. ...
This work was supported in part by DOE Grant DE-FG03-91ER40674 ...".
Therefore, my tax money is being used by DOE to fund a paper saying that efforts to do physics by:
1 - constructing physical-law models
2 - and calculating predictions based on those laws
3 - in order to test them against observations
is "completely undermine[d]"
Although they do have some fine print (in the body of the paper but not in its abstract) involving "the continua we use to construct theories of fundamental physics" and their use of "freedom to
choose a clock subsystem arbitrarily" and "use of the covariant approach",
it seems clear to me that an attack on the scientific methods used by scientists from Kepler to Feynman as being "completely undermined" is
the basic thrust and purpose of their paper.
I think that it is a shame that their attack is accepted by the Cornell arXiv as OK for hep-th, while I am blacklisted from posting new results of my physical-law model which allows computation
of particle masses, force constants, etc, that are testable against observations.
Tony Smith
PS - Albrecht and Inglesias are not alone in attacking the Kepler-Feynman way of doing physics. The Resonaances blog in a 1 July 2007 post entitled “Nima’s Marmoset” said:
“… Nima Arkani-Hamed [formerly at Harvard and now at Princeton IAS]… gave another talk …[at]… CERN … advertising his MARMOSET … a new tool for reconstructing the fundamental theory from the LHC
data ... Nima pointed out …[that]… at the dawn of the LHC era we have little idea which underlying theory and which lagrangian will turn out relevant …".
In short,
Arkani-Hamed says that the Standard Model Lagrangian should be ignored because
“… we have little idea which underlying theory and which lagrangian will turn out relevant …”
even though the Standard Model has passed EVERY experimental test for over 30 years, and there is NO experimental observation whatsoever indicating that the Standard Model is not the relevant “…
underlying theory and … lagrangian …” for physics at the LHC.
I would add Harvard and Princeton IAS to the list of institutions that should hang their heads in shame
supporting attacks on the process of building physics models in the old-fashioned way of requiring inclusion of the Standard Model and Gravity as subsets and demanding calculability of observable
quantities such as particle masses, force strength constants, etc.
Mahndisa and Kea, thanks for the link to the Marek Wolf stuff.
Also, as I said in my immediately previous comment, "... I just atttempted to post a comment on Peter Woit's blog ...".
I just now got a message saying:
"... Your comment is awaiting moderation ...".
It is interesting that a comment from me seems to be thrown into the "subject to moderation" pile (sort of reminds me of the Cornell arXiv),
it might be interesting to see how it is "moderat[ed]".
Tony Smith
Hello Kea and Tony:
Kea, if you peruse his papers, you will love every minute of them! Tony, hopefully you will also find his papers a nice treat, although you may far exceed the knowledge base...
Anyhow, as I get on Matti sometime, physics is the most important thing, along with family and dog;) So no need to waste time passing pearls to swine, as Grandma always said;) Have a nice weekend
My apologies for misspelling the name "Iglesias" in my comment here.
Tony Smith
PS - My comment was allowed on Peter's blog.
My comment was allowed on Peter's blog.
Yes, Tony, so long as you don't say anything critical of Woit or his ideas (and in fact what you said supports him) he will usually allow the comment.
But the demise of the arxiv is nothing new and the game is just beginning!
It is tempting to wish ill things of the arxiv, but it appears to be self-destructing anyway.
Patience with humans is a virtue. One author of the paper Tony cites once threw me out of his office for suggesting that c slows down, but more recently walked up to me, said "Hi my name is
Andy" and proceeded to be very nice.
you may be right that I should not have "waste[d] time passing pearls" by posting on Peter Woit's blog.
Although he did allow my comment to be posted,
it was followed by these two comments:
by anonymous "european observer" who said:
"Tony ... Where do you believe that ... the marmoset people ... said to forget the SM?
... I think you are misreading the Lagrangian comment,
and it should be interpreted as the beyond the SM theory/lagrangian that is unknown.
... be more polite
not make grandiose statements about institutions based on remarks in an anonymous blog."
by Peter Woit who said:
Please don’t post such far off-topic comments.
Marmoset has nothing to do with the posting,
and a tendentious discussion of it here is way out of place.".
So, an anonymous "european observer" criticizes me for commenting based
on the "anonymous blog" Resonaances,
tells me to "be more polite"
"not make grandoise statements" critical of Harvard and Princeton IAS
tells me that marmoset is based on the SM lagrangian,
Peter Woit (who controls his blog) tells me to shut up about marmoset etc,
thus preventing me from replying to the attack on me by the anonymous "european observer".
By doing so, Peter Woit and the anonymous "european observer" manage to characterize
me as impolite, gullible, ignorant, and wrong.
if Peter Woit had not tied my hands by telling me to shut up, here are some points that I could have made on his blog:
1 - Peter Woit himself, on his blog, described the Resonaances blog favorably, saying:
"... There’s an interesting new particle theory blog, called Resonaances, and written by someone in the CERN Theory Group (who for now is operating anonymously as “Jester”, also commenting here).
2 - An anonymous commenter has poor grounds to complain that I refer to an anonymous blog.
3 - I don't think that any institution (including Harvard and Princeton IAS) or any individual (including Nima Arkani-Hamed) should be exempt from criticism.
4 - As to my position that marmoset is not based on the Standard Model Lagrangian approach,
here are quotes from Resonannces (dated 1 July 2007):
"... The new framework is called an On-Shell Effective Theory (OSET). The idea is to study physical processes using only kinematic properties of the particles involved.
Instead of the lagrangian,
one specifies the masses, production cross sections and decay modes of the new particles. ... MARMOSET is a package allowing OSET-based Monte Carlo simulations of physical processes. ...".
here are quotes from the MARMOSET paper by Nima Arkani-Hamed et al at hep-ph0703088:
"... We propose and develop ... a coherent strategy for going from data to a still-unknown theory
... using On-Shell Effective Theories (OSETs) as an intermediary
between LHC data and the underlying Lagrangian ...
Instead of attempting to reconstruct the TeV-scale effective Lagrangian directly from LHC data, we propose an On-Shell Effective Theory (OSET) characterization of the new physics in terms of new
particle masses, production cross sections, and branching ratios as a crucial intermediate step. ...".
In other words,
the anonymous "european obsersver" is correct that the aim of marmoset is to describe new physics beyond the Standard Model,
I think that I am correct in that marmoset explicitly avoids working with Lagrangians that include the Standard Model as a subset
therefore marmoset rejects physical intuition based on the Lagrangian point of view that uses the Standard Model Lagrangian as a starting point.
My complaint is against such an abdication of physical intuition,
which in my view is similar to the abdication of physical intuition that is present in the Landscape approach to superstring theory and
in the Albrect-Iglesias "clock ambiguity" attack on the Kepler-Feynman approach to physics which is:
constructing physical-law models and calculating predictions based on those laws in order to test them against observations.
5 - As to whether I should be "more polite", I contend that I am indeed much "more polite" than Resonaances (a blog, although anonymous, that has been cited favorably by Peter Woit) which said:
"... When you ask phenomenologists their opinion about MARMOSET,
officially they just burst out laughing.
Off the record, you could hear something like
"...little smartass trying to teach us how to analyze data..." often followed by *!%&?#... ...".
Tony Smith
PS - As Louise says, "... the game is just beginning! ...".
Tony, you are most welcome to post your thoughts on my blog. Although I'm sure Mahndisa is right, as Matti and you and many of us have found, it seems necessary to protest sometimes, simply
because somebody has to be willing to protest.
In fact, would you like to write a guest post? If you email me a post at mds67@it.canterbury.ac.nz I will post it.
Disregarding the details, I heartily approve of the concept of getting away from Lagrangian formalism.
I don't think a Lagrangian underlies anything physical. I think that the usual Lagrangian is a guess at the conserved quantities of nature, and that one would do better by guessing the equations
of motion of nature, and from that guess, calculating the Lagrangian.
I don't believe in "laws of nature". I believe in "nature", and that she can be described by some simple differential equation, and from that equation, and its solutions, we will be able to find
conserved quantities, approximately conserved quantities, etc., but at the fundamental level, nature is not a bunch of laws any more than the waves on the ocean are a bunch of laws.
Well said, Carl, although I'd like to note that the idea of 'equation' should probably be weakened to 'categorical relation' or 'coherence law' which are really just kinds of equation using
Carl Brannen said that he "... heartily approve[s] of the concept of getting away from Lagrangian formalism ...".
My physics model is fundamentally based on a generalized hyperfinite II1 von Neumann algebra factor whose basic building block is the real Clifford algebra Cl(8),
so it is really an Algebraic Quantum Field Theory with a nested category-type structure that comes from the generalized hyperfinite II1 von Neumann algebra factor.
As Kea said on Peter Woit's blog (in conversation with Bert Schroer on 27 May 2006, where Bert Schroer had used the term "monade" for the hyperfinite von Neumann factor algebra that he used for
"... certain instances of the category theoretic monad share many of the deep properties that you attribute to your monade. I do not think that this is a coincidence. ...".
Also on Peter Woit's blog (on 2 November 2006) Arun said that if AQFT (or anything else) “… claims to supplant the Lagrangian/ path-integral/ perturbation-theory Standard Model of electro-weak &
strong interactions, which is our most successful theory so far,
[ then it ] needs to show how, in the appropriate limit, the Standard Model emerges. …”.
Since the Standard Model is written in Lagrangian formalism with path integral quantum theory,
it seems necessary,
not to "get... away from Lagrangian formalism"
to show how the Standard Model Lagrangian formalism with path integrals fits inside the more fundamental AQFT based on a hyperfinite von Neumann factor algebra "monade".
Here is how I think it works in my model:
My idea about the physical utility of hyerfinite II_1R is that each little basic component Cl(8;R) describes Lagrangian physics in one little location.
Each Cl(8;R) contains the basic structures used to formulate local Lagrangian physics:
a 0-grade 1-dim scalar;
a 1-grade 8-dim vector spacetime base manifold;
a 2-grade 28-dim bivector gauge Lie algebra;
a +half-spinor 8-dim spinor fermion particle representation; and
a -half-spinor 8-dim spinor fermion particle representation.
Given those structures, it seems clear (at least to me) how they fit together to form a Lagrangian
and its related path integral quantum theory.
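Whatever one makes of the model itself, the dimension counts quoted for Cl(8) are plain Clifford-algebra combinatorics — the grade-k subspace has dimension C(8,k), and each half-spinor space has dimension 2^(8/2−1) = 8 — and quick to check:

```python
from math import comb

n = 8
grades = [comb(n, k) for k in range(n + 1)]
print(grades)        # → [1, 8, 28, 56, 70, 56, 28, 8, 1]
print(sum(grades))   # → 256 = 2^8, total dimension of Cl(8)

# Dirac spinor dimension for Cl(8) is 2^(8/2) = 16, splitting into
# two 8-dimensional half-spinor representations:
spinor = 2 ** (n // 2)
print(spinor // 2)   # → 8
```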
If you consider the 8-dim spacetime to only exist at high energies, and at low energies (where we do experiments) "freeze out" a preferred quaternionic submanifold that splits the 8-dim spacetime
into a Kaluza-Klein type 4-dim physical spacetime and a 4-dim CP2 internal symmetry compact space, then (in a complicated but in my opinion realistic way using some gauge field geometry of
Meinhard Mayer)
you end up with the Standard Model plus Higgs plus MacDowell-Mansouri gravity
using some geometric techniques similar to those pioneered by Armand Wyler, along with some simple combinatorics, you can calculate realistic particle masses, force strengths, K-M parameters,
etc. They turn out to be pretty much realistic.
I think that a Lagrangian formalism is implicit in a physically realistic AQFT (and related category theory),
and it can be shown (at least in the concrete example of my model) how the Lagrangian fits inside the AQFT in a consistent way,
so the AQFT viewpoint and the category viewpoint and the Lagrangian viewpoint are ALL useful ways of describing the SAME fundamental physics of our world.
Tony Smith
Yes, Tony, so long as you don't say anything critical of Woit or his ideas (and in fact what you said supports him) he will usually allow the comment.
I must say that I am most impressed with the way that Woit allows unsubstantiated, unsupported, willfully ignorant shots at people like Brandon Carter, but won't allow anybody to hit these
losers back for it.
I doubt that asshole will let me say anything after the crap that I called him today... ;)
Welcome, Island! Oh dear, I think you just ruined my G rating, but that's OK. Lol.
Hey, at least I didn't write down the "crap" that I really called him... ;)
Sorry, Kea, but I've been a supporter of Peter's basic position for a long time, (years), before he ever officially began his "revolution" against string theory. And now I see that he's just a
politician like most everybody else doing "science" these days.
Yes, Island, revolutions in science require scientific ideas, not mud slinging. | {"url":"http://kea-monad.blogspot.com/2007/08/battle-weary.html","timestamp":"2014-04-20T00:42:48Z","content_type":null,"content_length":"56445","record_id":"<urn:uuid:87b81388-c804-4893-8ab0-6c9e6b0fcdd2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
Galois Theory: Morphisms
it is somewhat misleading to say the values of f depend on the values of f(2^1/4), f(2^1/2) and f(2^3/4).
f is a field automorphism, so (for example) f(2^3/4) = f((2^1/4)^3) = (f(2^1/4))^3,
so f ONLY depends on the value of f(2^1/4).
while it is true that f permutes the roots of x^4-2, it is not true that any such permutation yields an "f". f takes conjugate pairs to conjugate pairs, as well (the square of a fourth root of 2 must
be a square root of 2).
so if one regards the 4 roots of x^4-2 as α[1],α[2],α[3],α[4], where:
α[j] = α·e^((j-1)πi/2)
then (α[2] α[4]) yield a member of Aut(K/Q), but (α[2] α[3]) does not. | {"url":"http://www.physicsforums.com/showthread.php?t=546932","timestamp":"2014-04-20T16:01:28Z","content_type":null,"content_length":"27257","record_id":"<urn:uuid:207b81d2-0620-40de-8599-612d22c50113>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
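The conjugate-pair constraint can be checked numerically: writing the roots as α·iᵏ for k = 0..3 (so α₁ = α, α₂ = iα, α₃ = −α, α₄ = −iα), a candidate automorphism must satisfy f(r)² = f(r²) for every root r, and the value f assigns to √2 is forced to be f(α)². A quick sketch (the permutation encoding is mine):

```python
# Fourth roots of 2: each squares to +sqrt(2) or -sqrt(2), and a
# field automorphism must respect squaring.

alpha = 2 ** 0.25
roots = [alpha * 1j ** k for k in range(4)]   # alpha_1 .. alpha_4

def consistent(perm):
    """perm maps root index -> root index; check f(r)^2 = f(r^2)."""
    f = {k: roots[perm[k]] for k in range(4)}
    f_sqrt2 = f[0] ** 2                        # forced image of sqrt(2)
    for k in range(4):
        target = f_sqrt2 if abs(roots[k] ** 2 - alpha ** 2) < 1e-9 else -f_sqrt2
        if abs(f[k] ** 2 - target) > 1e-9:
            return False
    return True

swap_2_4 = [0, 3, 2, 1]   # (alpha_2 alpha_4): complex conjugation
swap_2_3 = [0, 2, 1, 3]   # (alpha_2 alpha_3)
print(consistent(swap_2_4), consistent(swap_2_3))  # → True False
```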
Superintendent’s Memo #303-11
Department of Education
November 4, 2011
TO: Division Superintendents
FROM: Patricia I. Wright, Superintendent of Public Instruction
SUBJECT: K-12 Mathematics Standards of Learning Institutes and Professional Development Resources
Over the past three years, the Virginia Department of Education held statewide K-12 Mathematics Standards of Learning (SOL) Institutes providing targeted professional development for representatives
from each of Virginia’s school divisions in support of the implementation of the 2009 Mathematics Standards of Learning. Each institute was designed in a train-the-trainer professional development
model to facilitate the development of local expertise and increase local leadership and professional development capacity. Institute attendees are strongly encouraged to conduct analogous
professional development in their local school divisions. Supporting professional development resources associated with each K-12 Mathematics SOL Institute are available online at http://
The 2009 and 2010 K-12 Mathematics SOL Institutes:
• outlined the content standard changes from the 2001 Mathematics SOL to the 2009 Mathematics SOL (2009);
• supported district leaders and teachers in the implementation of the 2009 Mathematics SOL (2009 and 2010);
• provided training in the vertical progression of content and pedagogy (2010); and
• provided instructional guidance in content areas of greatest challenge (2010).
The 2011 K-12 Mathematics SOL Institutes:
• provided support in the implementation of the 2009 Mathematics Standards of Learning and new assessments, framed by the five goals for students becoming mathematical problem solvers,
communicating mathematically, reasoning mathematically, making mathematical connections, and using mathematical representations to model and interpret practical situations; and
• provided professional development to assist teachers in facilitating students' mathematical understanding through problem solving, communication, and reasoning.
Over 570 participants from 126 school divisions and 4 state-operated programs attended one of the 2011 K-12 Mathematics SOL Institutes in Abingdon, Roanoke, Richmond, or Fredericksburg during
September and October. Attendees included division-level leaders, mathematics supervisors, instructional supervisors, professional development personnel, building administrators, mathematics
specialists, and classroom teachers. School divisions should consider asking their representatives to the institutes to share the information provided with other mathematics faculty in the division.
Questions regarding the K-12 Mathematics SOL Institutes or associated professional development resources should be directed to Michael Bolling, mathematics coordinator, via e-mail at
Michael.Bolling@doe.virginia.gov or by telephone at (804) 786-6418; or Deborah Wickham, mathematics specialist K-5, via e-mail at Deborah.Wickham@doe.virginia.gov or by telephone at (804) 786-7986.
DGA 2013 - Workshop on Distance Geometry and Applications
Short description of the event:
The aim of the Workshop is to bring together researchers from multiple disciplines, with an emphasis on Distance Geometry (DG). Distance Geometry is a well-established research area, with
Mathematics and Computer Science as the key fields at its foundation. The concept of distance is essential to the human experience, and DG makes this concept the main object in a given geometric
structure. The fundamental problem of DG is to find a set of points in a given geometric space such that the distances between some of these points match given values.
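As a toy illustration of that fundamental problem (not taken from the workshop materials), here is a minimal instance solved in Python: given the three pairwise distances 3, 4, and 5, recover coordinates for three points in the plane.

```python
import math

# Tiny Distance Geometry instance: place three points A, B, C in the plane
# so that d(A,B) = 3, d(A,C) = 4, d(B,C) = 5.
d_ab, d_ac, d_bc = 3.0, 4.0, 5.0

A = (0.0, 0.0)
B = (d_ab, 0.0)  # fix the first distance along the x-axis

# C follows from intersecting the circle of radius d_ac around A
# with the circle of radius d_bc around B:
x = (d_ab**2 + d_ac**2 - d_bc**2) / (2 * d_ab)
y = math.sqrt(d_ac**2 - x**2)
C = (x, y)

print(C)  # (0.0, 4.0): the familiar 3-4-5 right triangle
```

Solutions are only unique up to rotation, reflection, and translation, which is why the sketch pins A at the origin and B on the x-axis.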
Topics of interest include, but are not limited to, the theory, algorithms, and applications of Distance Geometry related to the following research areas:
Coding Theory
Computer Networks
Data Visualization
Geometric Structures
Graph Theory
Molecular Structure
Pattern Recognition
Re: [css3-transforms] neutral element for addition - by animation
From: Dirk Schulze <dschulze@adobe.com>
Date: Mon, 4 Jun 2012 14:16:37 -0700
To: Cyril Concolato <cyril.concolato@telecom-paristech.fr>
CC: "Dr. Olaf Hoffmann" <Dr.O.Hoffmann@gmx.de>, "www-svg@w3.org" <www-svg@w3.org>, "public-fx@w3.org" <public-fx@w3.org>
Message-ID: <B6A713AD-5C5F-446A-A5AC-056CC3322957@adobe.com>
On Jun 4, 2012, at 12:21 PM, Cyril Concolato wrote:
> Hi Dirk,
> Le 6/4/2012 9:06 PM, Dirk Schulze a écrit :
>> On Jun 4, 2012, at 11:58 AM, Cyril Concolato wrote:
>>> Hi all,
>>> This thread is just too long for me at the moment, but I repeat what I
>>> said. I think that the neutral element for by animations should be the
>>> identity matrix. There is no mathematical problem. Adding a zero scale
>>> (or rotate, or translate, or skew, ...) is equivalent to post
>>> multiplying with the identity matrix.
>> I thought it was a typo the last time. But scale(0) is definitely not the identity matrix.
>> Identity transform is:
>> | 1 0 0 |
>> | 0 1 0 |
>> | 0 0 1 |
>> While scale(0) or scale(0,0) is equivalent to:
>> | 0 0 0 |
>> | 0 0 0 |
>> | 0 0 1 |
>> according to SVG 1.1.
> Up to this point, I agree with you.
>> The identity transform is the neutral element for multiplication, not for addition.
> That's where I differ. The neutral element for addition of a scale
> transform is 'e' such that for all X:
> scale(X+e)=scale(X)
> So yes e=0 but this does not mean a matrix of zeros.
No it does not. Olaf's point was that you shouldn't compare it to matrices. Because, like you said, it depends on the value of 'e'. 'e' is zero for all transformation functions. This doesn't mean that it results in an identity matrix or anything in particular. For translate, rotate, skewX and skewY it does result in the identity. For scale it doesn't (you already agreed to that previously in this mail).
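A minimal numeric sketch of this distinction, using the SVG 1.1 matrix forms quoted earlier in this message (plain Python, nothing SVG-specific):

```python
# Apply a 3x3 homogeneous 2D transform matrix to a point (x, y).
def apply(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

identity = [[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1]]

scale0 = [[0, 0, 0],  # scale(0) per SVG 1.1: collapses the plane
          [0, 0, 0],
          [0, 0, 1]]

print(apply(identity, (3, 4)))  # (3, 4) -- the point is unchanged
print(apply(scale0, (3, 4)))    # (0, 0) -- every point maps to the origin
```

So scale(0) is clearly not a representation of the identity transform, even though 0 is the neutral element for adding the scalar arguments.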
With a view to the second part of your comment.
Hm. That is what SMIL says about by-animation:
Specifying only a by value defines a simple animation in which the animation function is defined to offset the underlying value for the attribute, using a delta that varies over the course of the simple duration, starting from a delta of 0 and ending with the delta specified with the by attribute. This may only be used with attributes that support additive animation.
And further:
Normative: A by animation with a by value vb is equivalent to the same animation with a values list with 2 values, the neutral element for addition for the domain of the target attribute (denoted 0) and vb, and additive="sum". Any other specification of the additive attribute in a by animation is ignored.
This is what SVG Animation says about additive behavior:
The animation effect for ‘animateTransform’ is post-multiplied to the underlying value for additive ‘animateTransform’ animations ...
Let's take an example with 'animateTransform':
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<rect width="100" height="100" transform="scale(1)">
<animateTransform attributeName="transform" type="scale" by="1" dur="3s"/>
</rect>
</svg>
According to the normative sentence on SMIL 3 Animation, this is equivalent to:
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<rect width="100" height="100" transform="scale(1)">
<animateTransform attributeName="transform" type="scale" values="0;1" additive="sum" dur="3s"/>
</rect>
</svg>
The underlying value is scale(1). The value of the animation gets post multiplied. So we get an animation of:
scale(1)*scale(0) is equal to
| 1 0 0 | | 0 0 0 | | 0 0 0 |
| 0 1 0 | * | 0 0 0 | = | 0 0 0 |
| 0 0 1 | | 0 0 1 | | 0 0 1 |
scale(1)*scale(1) (which is equal to the product of two identity transforms and is therefore an identity transform as well)
> If you decompose scale(X+e) with a product of scales:
> scale(X+e)=scale(X)scale(e')=scale(X) for all X, then e'=1.
> Applying a by animation of scale is post-multiplying with a scale
> transform and the neutral element is the identity.
Now I don't know what you mean by X. If X is the value of the 'by' attribute, then I don't know what you mean by this formula. If X is the underlying value, then it is not what SMIL Animation and SVG Animation say.
An example for length values would be:
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<rect width="100" height="100">
<animate attributeName="width" by="100" dur="3s"/>
</rect>
</svg>
which is equivalent to:
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<rect width="100" height="100">
<animate attributeName="width" values="0;100" additive="sum" dur="3s"/>
</rect>
</svg>
If the initial value is X and the by value is Y, it leads to an animation from X + 0 to X + Y. In total 100 to 200.
But 'animateTransform' (and animation of transforms in general) is different. The additive behavior is defined by post-multiplying the resulting transforms. Therefore it is not scale(X+0) to scale(X+Y), but scale(X)*scale(0) to scale(X)*scale(Y) (if the initial transform is scale(X)).
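A sketch of that difference for uniform scales (the function names here are illustrative, not from SMIL or SVG; for uniform scales a matrix post-multiplication reduces to multiplying the scale factors):

```python
# Additive 'by' animation on a length: the delta is added to the
# underlying value, with t in [0, 1] over the simple duration.
def by_anim_length(underlying, by, t):
    return underlying + t * by

# Additive 'by' animation on a uniform scale: transforms post-multiply,
# so the factors multiply instead of adding.
def by_anim_scale(underlying, by, t):
    return underlying * (t * by)

# width: underlying 100, by="100" -> animates from 100 to 200
print(by_anim_length(100, 100, 0.0), by_anim_length(100, 100, 1.0))

# scale: underlying scale(1), by="1" -> scale(1)*scale(0) to scale(1)*scale(1)
print(by_anim_scale(1.0, 1.0, 0.0), by_anim_scale(1.0, 1.0, 1.0))
```

The scale animation starts at 0, which is exactly why the element collapses at the beginning of the simple duration.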
'animateTransform' just takes a list of scalars as values. That is why SVG Animation doesn't need to specify more. This is a bit different for 'animate', where you have transform functions as values. Here is an example with 'animate' (note that this does not work in current UAs):
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<rect width="100" height="100" transform="scale(1)">
<animate attributeName="transform" by="scale(1)" dur="3s"/>
</rect>
</svg>
Transforming this into the values wording it could look like:
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<rect width="100" height="100" transform="scale(1)">
<animate attributeName="transform" values="X;scale(1)" dur="3s"/>
</rect>
</svg>
And again: because additive behavior is specified by post-multiplying transforms, it would be scale(1)*X to scale(1)*scale(1). But what does X (the neutral element for addition) look like? We don't have scalars as values, but transform lists. So it needs to be defined. CSS Transforms tries to imitate the behavior of 'animateTransform' by defining the neutral element explicitly for each transform function, using the neutral element of addition for the scalars in it. Therefore, the neutral element above is scale(0).
>> For addition it is the zero matrix:
>> | 0 0 0 |
>> | 0 0 0 |
>> | 0 0 0 |
>> But that isn't even the point of Olaf and SMIL. SMIL says that the neutral element for addition is taken with respect to the scalar values.
> That's not really contradictory with what I said above.
>> That is why animateTransform just takes scalars as values. So according to SMIL, it should really be 0 for translate, scale, rotate, skewX and skewY.
> Yes, 0 as in scale(X+0)=scale(X).
>> But that does not prevent us from changing it. Would just be one more difference to SMIL :).
> I don't think this would be the case.
> Cyril
>> Greetings,
>> Dirk
>>> Cyril
>>> Le 6/2/2012 12:07 AM, Dirk Schulze a écrit :
>>>> On Jun 1, 2012, at 1:51 AM, Dr. Olaf Hoffmann wrote:
>>>>> Cyril Concolato:
>>>>>> [CC] Adding 1 in the scale transformation means going from scale(X) to
>>>>>> scale(X+1), therefore the neutral element is scale(0) which is the identity
>>>>>> matrix.
>>>>> scale(0) is not the identity matrix, this is obviously scale(1,1),
>>>>> because
>>>>> (0,0) = scale(0,0) * (x,y) and for arbitrary x,y it is of course in most
>>>>> cases (x, y)<> (0,0); scale(0,0) is no representation of the identity matrix.
>>>>> but
>>>>> (x,y) = scale(1,1) * (x,y); scale(1,1) is a representation of the identity
>>>>> matrix.
>>>>> On the other hand the identity matrix has nothing to do with additive
>>>>> animation or the neutral element of addition, therefore there is no
>>>>> need that it be the same. The identity matrix is the neutral element
>>>>> of matrix multiplication, which is a completely different operation.
>>>> Like Cyril wrote, it was just a typo from him.
>>>>> For the operation of addition of matrices M: 0:=scale(0,0) represents
>>>>> a neutral element M = M + 0 = 0 + M, but typically this is not very
>>>>> important for transformations in SVG or CSS.
>>>> I added a first draft of the definition for the 'neutral element of addition' to CSS Transforms [1]. The only problem that I see is with 'matrix', 'matrix3d' and 'perspective'. According to the definition of SMIL the values should be 0 (list of 0) as well. This would be a non-invertible matrix for 'matrix' and 'matrix3d' and an undefined matrix for 'perspective'. The interpolation chapter for matrices does not allow interpolation with non-invertible matrices [2]. Therefore 'by' animations on these transform functions will fall back to discrete animations and cause the element not to be displayed for half of the animation [3].
>>>> Of course it would be possible to linearly interpolate every component of a matrix, but since this is not the desired effect for most use cases, we decompose matrices before interpolation.
>>>> [1] http://dev.w3.org/csswg/css3-transforms/#neutral-element
>>>> [2] http://dev.w3.org/csswg/css3-transforms/#matrix-interpolation
>>>> [3] http://dev.w3.org/csswg/css3-transforms/#transform-function-lists
>>>>> The scale function could have been defined in the past in
>>>>> such a way that the identity matrix results from the neutral
>>>>> element of addition; this works, for example, in this way:
>>>>> scale(a,b) means scaling factors exp(a) and exp(b).
>>>>> But this would exclude mirroring and is maybe more
>>>>> difficult to estimate the effect for some authors.
>>>>> A Taylor expansion approximation by replacing
>>>>> exp(a) by (a+1) could save the mirroring, but not the
>>>>> intuitive understanding of scaling.
>>>>> Therefore there is no simple and intuitive solution to
>>>>> satisfy all expectations - and too late to change the
>>>>> definition anyway.
>>>> I would also think it gets too complicated for most authors.
>>>>> Olaf
>>>> Greetings,
>>>> Dirk
Received on Monday, 4 June 2012 21:20:09 GMT
Help Me Get My Math Back?
kdawson posted about 4 years ago | from the we-can-forget-it-for-you-wholesale dept.
nwm writes "I am trying to refresh my math skills back to the point that I can take college-level statistics and calculus courses. I took everything through AP calculus in high school, had my butt
kicked by college calculus, and dropped out shortly thereafter. Twenty+ years later, I need to take a few math courses to wrap up a degree. I've dug around some and found a few sites with useful
information, but I'm hoping the Slashdot crowd can offer some good resources — sites, books, programs, online tutors, etc. I really don't want to have to take a series of algebra-geometry-trig
'pre-college' level courses (each at full cost and each a semester long) just to warm my brain up; I'd much rather find some resources, review, cram, and take the placement test with some confidence.
Any suggestions?"
If you can't handle calculus, science isnt for you (4, Funny)
elucido (870205) | about 4 years ago | (#31720466)
They want you to pass calculus for a reason. No matter what kind of scientist you plan to be, your knowledge of calculus will be essential. You'll never use statistics but you will need to use
calculus every day.
Re:If you can't handle calculus, science isnt for (5, Informative)
| about 4 years ago | (#31720492)
Working scientist here. Ph.D. I've been working 20+ years doing scientific research, getting grants, publishing papers in peer-reviewed journals.
I haven't done ANY calculus since I was an undergrad.
Re:If you can't handle calculus, science isnt for (4, Funny)
| about 4 years ago | (#31720648)
You must be a scientist, because apparently you have no sense of humor.
Re:If you can't handle calculus, science isnt for (1)
zach_the_lizard (1317619) | about 4 years ago | (#31720668)
And what field might that be in? Not all fields will have much use for calculus in the real world, but I am still curious.
Re:If you can't handle calculus, science isnt for (4, Funny)
Anonymous Cowar (1608865) | about 4 years ago | (#31720768)
And what field might that be in? Not all fields will have much use for calculus in the real world, but I am still curious.
Ah biology, the humanities of the science world...
Re:If you can't handle calculus, science isnt for (0)
| about 4 years ago | (#31720840)
Yeah, bleh. All that useless stuff like discovering the causes and cures of disease.
Re:If you can't handle calculus, science isnt for (4, Insightful)
LurkerXXX (667952) | about 4 years ago | (#31720892)
I do a lot of molecular biology. I've never thought of it as like the humanities at all. It's always seemed a lot more like computer programming to me.
Re:If you can't handle calculus, science isnt for (3, Funny)
Gorobei (127755) | about 4 years ago | (#31720908)
I compute derivatives every day. That's why my compute farm draws a couple of megawatts when I want a number.
Glad to hear people are still doing it by hand. Arts and crafts should be encouraged, even in the modern age.
suggestions: (-1, Troll)
goombah99 (560566) | about 4 years ago | (#31720544)
1) get a good test like this one [amazon.com] on biscuit topology
2) and learn a major turing complete programing language preferred by mathematicians use [muppetlabs.com]
Re:If you can't handle calculus, science isnt for (5, Interesting)
Garridan (597129) | about 4 years ago | (#31720562)
Oh bullshit. Those are both overt and ridiculous generalizations. First off, many scientists use statistics every day (at the least, much more than "never"). Second, not all scientists use calculus
"every day", and many use it almost never.
As a calculus teacher, I can tell you this: you need skills in symbolic manipulation. Your algebra needs to be rock solid before you attempt college level calculus. In my experience, you need dozens
of hours of practice before you get it. Buy an algebra textbook, and do every odd problem in every section until you are reliably getting everything right. My experience = flunked high school math
and went back to college 10 years later, and am now working towards a PhD in math.
Re:If you can't handle calculus, science isnt for (3, Informative)
| about 4 years ago | (#31720614)
I was in the same situation as submitter. In fact, it was the reason why I switched majors from CompSci - being in a hurry to get a degree in a science and too much bullshit math I'd never use. I'll
go back for Compsci when I can learn on my own terms, for fun.
However, you were spot-on about this: Calc 1 is 90% algebra (with 20-30% of the problems involving trig) and you're gonna be fucked if you don't have a good grasp of algebraic manipulation. My
recommendation to the submitter is to take online calculus (where available) at an accredited junior college and use a computer algebra system to help them through the homework visually, especially
with regards to roots and asymptotes.
Constructing Maple worksheets gives one a good step-by-step process for visualizing the steps necessary to solve the problems. Iterative methods like Newton's, Simpson's, Trapezoid rule etc. would
come naturally to a programmer.
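For instance, the Newton's method mentioned above reduces to a few lines (a generic sketch, not tied to any particular course or textbook):

```python
# Newton's method: iterate x <- x - f(x)/f'(x) until the step is tiny.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Find sqrt(2) as a root of f(x) = x^2 - 2, starting from x0 = 1.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)  # ~1.41421356...
```

Seeing the derivative drive the step size is a nice bridge between the programming mindset and the calculus being studied.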
Submitter - stats is just arithmetic and basic algebra, it's the concepts and knowing what to do with the data that are the hard part. Again get a T.I. and learn all of the functions, there is a LOT
of tedium. Don't be afraid of the weird greek variables and big formulae...it's just arithmetic and algebra 1, you will hate it when you take it, but you will love it when you pass it.
Re:If you can't handle calculus, science isnt for (-1, Flamebait)
Brian Gordon (987471) | about 4 years ago | (#31720628)
Still, if you can't even pass calculus then there's something wrong. And that's not even the problem- he's looking for help preparing for the placement test. If he's let his skills deteriorate so far
that he forgets algebra, then he has no business getting a degree in anything.
Re:If you can't handle calculus, science isnt for (4, Insightful)
fruitbane (454488) | about 4 years ago | (#31720734)
Really... No business getting a degree in ANYTHING? That's a rather closed and inappropriate (IMO) view. If he's worked in a field for years that doesn't require he use any algebra how's he supposed
to keep up with his skills other than doing algebra problems in his spare time? He never indicated the degree he's completing was heavily math-biased or math-dependent. Stats and Calc may be akin to
When you paint such ridiculously broad statements you risk your own image before anyone else's.
Still, if you can't even pass calculus then there's something wrong. And that's not even the problem- he's looking for help preparing for the placement test. If he's let his skills deteriorate so far
that he forgets algebra, then he has no business getting a degree in anything.
Re:If you can't handle calculus, science isnt for (2, Insightful)
| about 4 years ago | (#31720816)
It's the same deal as people who say others can't learn to do art. Their skill makes them feel special and if someone else can learn to do it also, their specialness is threatened.
And saying that someone should not be awarded a degree because they don't know algebra is extremely arrogant and ignorant. There's a reason why they TEACH algebra in colleges.
Re:If you can't handle calculus, science isnt for (4, Informative)
zach_the_lizard (1317619) | about 4 years ago | (#31720690)
Another thing that you might want to brush up, in addition to those things the parent post mentions, would be trigonometry. A healthy portion of the various calc courses I've taken have used trig
identities fairly heavily. It also helps to remember the values of trig functions for common angles. Depending on the college, you may have to be decent at mental arithmetic. My school frowned upon
using calculators in class.
Re:If you can't handle calculus, science isnt for (4, Interesting)
shdo (145775) | about 4 years ago | (#31720834)
I would mod you up if I had any points. Sad as it may seem, calculus was where I *learned* trig. For me, trig is one of those subjects that you beat your head against for months and years and one day
*POOF* it makes sense. My first semester of college-level calculus was where I learned trig. The second time I took that first semester of calculus - man, I got it.
Don't forget to brush up on the basics - algebra, trig, analytical geometry as well as your calculus.
goes looking for an old text book just to tinker around with it.......
Re:If you can't handle calculus, science isnt for (2, Insightful)
oldhack (1037484) | about 4 years ago | (#31720828)
Many scientists misuse stats.
Practice, practice, practice (3, Insightful)
| about 4 years ago | (#31720886)
The parent is absolutely right. You need practice. Actually, you need what Anders Ericsson calls 'deliberate practice'. Solve every example in the book as follows:
Write down the problem. Close the book and try to solve the problem. If you got it right, go on to the next problem. If you didn't get it, look at how the example is solved. Close the book and try
again until you get it right. Repeat until you have solved every example in the text.
Check out this article: http://www.conestogac.on.ca/~bcoons/readings.html [conestogac.on.ca]
BTW, Jaime Escalante, http://en.wikipedia.org/wiki/Jaime_Escalante [wikipedia.org], just died. He was the real-life teacher who proved that you can teach calculus to just about anybody. They made the
movie 'Stand and Deliver' about his life. Ability is highly over-rated. Most people can, as Escalante proved, learn math to quite a high level of accomplishment.
Most people think math is some magic thing that some people just can't get. They are wrong. Almost everyone is wired to learn math. If you are missing some important skills, go back to the level
where you were good and start from there. John Mighton points out that most people discover that they have no math ability the same year they have a bad math teacher. ;-)
If you want, you can learn math as long as you practice, practice, practice.
Re:If you can't handle calculus, science isnt for (1)
rubycodez (864176) | about 4 years ago | (#31720610)
you can't say such a thing without knowing what specialization a person would have. Statistics is the bread and butter of some work, for others just plugging numbers into formulas that have been
known for a century or two (my job at national lab was like that for 10 years!), for others the heavy duty tensor calculus or partial diffy-Qs. Same situation in engineering.
Re:If you can't handle calculus, science isnt for (2)
wonderboss (952111) | about 4 years ago | (#31720632)
Where did they say they were getting a science degree? Needing to take a few math courses to wrap up a degree implies that most of the course work is done. I can't think of a science or engineering
major that would allow you take the required courses without having completed calculus first.
Re:If you can't handle calculus, science isnt for (1)
zach_the_lizard (1317619) | about 4 years ago | (#31720724)
My dad got a degree in a technical field--CS or something related, IIRC--and he never even had to take a calculus class at all. He took classes overseas while in the military through UMUC [umuc.edu].
It does happen.
Re:If you can't handle calculus, science isnt for (4, Insightful)
GAATTC (870216) | about 4 years ago | (#31720636)
As a scientist I learned a long time ago not to make general and unsubstantiated claims like "No matter what kind of scientist you plan to be, your knowledge of calculus will be essential." As a
practicing molecular geneticist and cell biologist I use statistics quite often. I cannot remember ever having to (directly) use calculus in the last 20 years for any of my research. I really enjoyed
all of the calculus (and linear and set theory and ...) that I took a long time ago. When I look back at it, what I really got out of all my math classes (and O-Chem too, for that matter) was the
knowledge that I could learn anything I really set my mind to - if I have to.
Re:If you can't handle calculus, science isnt for (1)
vilain (127070) | about 4 years ago | (#31720864)
If you had to do any linear regression or error analysis, knowledge of statistics is important (e.g. being able to answer questions like "Is this a good datapoint or an outlier"). And Calculus is
used to derive the formula for linear regression. I didn't touch it since I was an undergrad, but I still know and can use it. My sister-in-law who got the same B.S. in chemistry asked me why I
remember this stuff when she was studying for a nursing degree. It trained my mind. Being able to do algebraic manipulation should be second nature to you. Do whatever you need to do to learn that
cold. You'll need it for calculus and statistics.
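To the point above about calculus deriving linear regression: setting the partial derivatives of the squared-error sum to zero gives the closed-form fit below (a generic sketch, not from any specific textbook):

```python
# Least-squares line fit y = a*x + b: minimizing sum((y - (a*x + b))^2)
# with calculus (zeroing the partials w.r.t. a and b) yields these formulas.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx       # slope
    b = my - a * mx     # intercept
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data lying exactly on y = 2x + 1
print(a, b)  # 2.0 1.0
```

The statistics (covariance over variance) and the calculus (optimizing the error) meet in the same four lines of arithmetic.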
Re:If you can't handle calculus, science isnt for (-1, Flamebait)
SlappyBastard (961143) | about 4 years ago | (#31720676)
Why do I never have mod points when I need them . . . good points all around. I mean, calculus is the core of our understanding of change. Without calculus, you might as well just say things "improved"
or "declined" and leave it at that. And stats is pretty useless, especially in light of the fact it exists mostly to provide the cover of math to people who wish numbers weren't being used against them.
Re:If you can't handle calculus, science isnt for (0)
| about 4 years ago | (#31720812)
Translation: I'm so smart, look at me. If you didn't do things the way I did, you are not smart and should be a ditch digger.
Seriously, mods, can we start modding down these ego masturbation posts? They are way too common here.
Re:If you can't handle calculus, science isnt for (4, Insightful)
Vellmont (569020) | about 4 years ago | (#31720830)
No matter what kind of scientist you plan to be, your knowledge of calculus will be essential. You'll never use statistics
This has to be about the worst piece of advice about a science education I've ever seen. Like anything, it depends. Calculus is extraordinarily useful to someone in physics, but less so in biology.
Statistics is insanely important in an experimental science (actually it's insanely important in just about any science I can think of). Hell, statistics should be a mandatory class taught in High
School. It's far more applicable to everyday life than trig is.
Re:If you can't handle calculus, science isnt for (0)
| about 4 years ago | (#31720838)
Another working scientist PhD here. Unless you are going to be involved in hardcore theoretical physics or math work then calculus won't crop up very often. However, you DO need to know what it means
and how it works - software solutions generally can do the hard yards after that. Statistics crops up a LOT more often and that really pays off.
Just my experience guys.
Re:If you can't handle calculus, science isnt for (1)
izomiac (815208) | about 4 years ago | (#31720878)
That's rather amusing. What I've noticed is that in the life sciences, it's very rare to see someone who didn't struggle with physics and calculus. Conversely, statistics are used all the time. There
are two main reasons for this. First, biology is more memorization and less mathematical, and requires a different skill set than math or physics. Second, biology is messy, most numbers are inexact,
and everything follows a normal distribution.
That said, not being proficient in math means that you'll also likely struggle with statistics. I've heard estimates that more than half of research articles in the life sciences have at least one
statistical error. When I say "everything follows a normal distribution", that's because everything is assumed to do so. I have yet to see a research article actually verify or check to see if that
assumption is true. Last year I read a paper concerning some high profile discovery in medicine that actually reported a negative p value (reminder: probabilities range from zero to one).
Re:If you can't handle calculus, science isnt for (5, Insightful)
thenextstevejobs (1586847) | about 4 years ago | (#31720882)
They want you to pass calculus for a reason. No matter what kind of scientist you plan to be, your knowledge of calculus will be essential. You'll never use statistics but you will need to use
calculus every day.
Are you wooshing me here?
Having an understanding of what a derivative or integral of a function is a good insight to have, no doubt.
But I would argue that statistics is much more broadly applicable, and extremely important for a clear understanding of scientific discourse and all the 'facts' that the poster will encounter.
In reply to the original query, what you're going to need to do is a lot of problems. You need to look at this like getting in shape--you can't do it overnight.
I returned to college after about 5 years off and needed to take placement exams myself. Turned out the test allowed using a Ti-89. I cheated myself out of really 'placing' myself by being able to
approximate/calculate all the multiple choice answers and placed highly.
After a few attempts in the classes I was placed in, in the end, I re-took precal and calculus.
I could have avoided that if I had actually done a large volume of problems rather than skimming some books and looking at the answers and deciding that it was 'easy enough'.
Never look at the answers of problems until you try them. Once you know the right answer, you convince yourself the problem was easy and that you didn't need to do it. This will fuck you over in the long run.
Find an approach to doing math that makes it enjoyable for you. One thing that helped me a lot was getting a large whiteboard. I find I enjoy doing math more pacing back and forth in front of a board,
and whatever else comes along with doing work on a board rather than a piece of lined paper. Chalk would have been better.
Lastly, ignore the assholes here who are going to berate you for not knowing what they think is simple, obvious knowledge. Math is rife with 'tricks' and non-intuitive methods to solving problems
that come through experience. Someone who had a good experience with math through school and went straight into college is not going to understand your position.
Good luck to you, and if you really want this, do problems and problems and more problems. Put on some music you love and shred through a book or two. Get help at local colleges. Bribe a friend to
help you study, or just hire a tutor.
Otherwise, you're going to end up doing it by taking the classes (as I did). One way or another, you have to do the work.
Re:If you can't handle calculus, science isnt for (1)
lazycam (1007621) | about 4 years ago | (#31720894)
Buy yourself a Schaum's calculus guide [amazon.com] and work through all the problems. That should get you through single variable calculus with few problems.
Re:If you can't handle calculus, science isnt for (0)
| about 4 years ago | (#31720904)
They want you to pass calculus for a reason. No matter what kind of scientist you plan to be, your knowledge of calculus will be essential. You'll never use statistics but you will need to use
calculus every day.
These days, biology is 100% statistics.
Re:If you can't handle calculus, science isnt for (1)
neophytepwner (992971) | about 4 years ago | (#31720910)
Last I checked the average scientist uses statistics.
Re:If you can't handle calculus, science isnt for (2, Interesting)
haruharaharu (443975) | about 4 years ago | (#31720916)
You'll never use statistics but you will need to use calculus every day.
Statistics is great for figuring out when you're being lied to, so go ahead and learn it or prepare for a lifetime of being easily manipulated with real-sounding BS.
wat??? (-1, Offtopic)
| about 4 years ago | (#31720468)
Define: "a few math courses to wrap up a degree" (4, Informative)
0100010001010011 (652467) | about 4 years ago | (#31720482)
Calc II, Calc III, Diff Eq I, II, or III, Linear Algebra, Statistics,
There's a huge difference.
There's always MIT's Open Courseware. [mit.edu]
Always without a calculator. (4, Funny)
elucido (870205) | about 4 years ago | (#31720504)
It's essential that he pass calculus I, II, III and Diff EQ without the use of a calculator.
Just in case we are bombed back into the stone age, he won't have to worry about losing his job as a scientist.
Re:Always without a calculator. (4, Funny)
blackraven14250 (902843) | about 4 years ago | (#31720622)
If we're bombed back into the stone age, derivatives and integrals aren't going to help him tie a sharpened stone to a stick.
Re:Always without a calculator. (4, Funny)
zach_the_lizard (1317619) | about 4 years ago | (#31720736)
That's why he should study topology. He'll learn about all kinds of knots there.
Re:Always without a calculator. (5, Insightful)
fm6 (162816) | about 4 years ago | (#31720744)
"Bombed back to the stone age" is best regarded as just an expression. The iron age is here to stay, no matter how much civilization declines. Even if we forget how to smelt iron ore, there would be
billions of tons of refined iron lying around in abandoned machinery, buildings, and such.
Re:Always without a calculator. (3, Funny)
| about 4 years ago | (#31720814)
...Even if we forget how to smelt iron ore, there would be billions of tons of refined iron lying around in abandoned machinery, buildings, and such.
Not if they've come for our metals and women.
Re:Define: "a few math courses to wrap up a degree (4, Informative)
moteyalpha (1228680) | about 4 years ago | (#31720708)
I have gone through those at MIT, just for fun. I also found that Khan Academy [khanacademy.org] was really interesting and perhaps is easier for some. Strang at MIT is awesome and also the courses
at Yale are good.
UCLA has some great courses too.
science and magic [academicearth.org] was very informative. It doesn't hurt that some of the profs are also quite entertaining.OR science and magic on youtube [youtube.com]
Re:Define: "a few math courses to wrap up a degree (0)
| about 4 years ago | (#31720900)
Khan Academy is great. Strang's linear algebra is great. MIT's Calc III is great.
Engineering Math by Stroud (5, Informative)
smith6174 (986645) | about 4 years ago | (#31720484)
This book uses programmed learning that goes step by step through everything you will need and more. It is designed for self study. There is also a sequel book that goes into some much higher stuff.
I used just this book as preparation for classes requiring calc 3 as a prerequisite.
Re:Engineering Math by Stroud (1)
kwilczynski (1684804) | about 4 years ago | (#31720596)
This book is indeed fantastic. A "must have" for anyone who is interested in mathematics.
Taxes (1)
mim (535591) | about 4 years ago | (#31720498)
you're more than welcome to do my taxes for me...
Re:Taxes (1)
rubycodez (864176) | about 4 years ago | (#31720640)
that's why God, on the eighth day, created cheap tax software. We don't have to think, just plug in the numbers.
Why? (4, Insightful)
pushf popf (741049) | about 4 years ago | (#31720500)
If you haven't needed a degree or calculus in 20 years, why bother now?
If you're job hunting, your time would be better spent making yourself relevant to current employers or starting a consulting business than trying to match your calc and trig skills with a recent
grad and get a degree.
A degree is a nice "filter" when hiring new applicants, since it proves that they were able to deal with BS for at least 4 years, however with 20 years of actual job experience, you'll do much better
off trying to differentiate yourself from the recent grads than you will if you try to "look better on paper."
That said, if you want to do this just because it's "unfinished business" lots of community colleges have entire departments dedicated to getting us old folks "up to speed". Just stop by and talk to
Re:Why? (2, Interesting)
Darkness404 (1287218) | about 4 years ago | (#31720530)
If you haven't needed a degree or calculus in 20 years, why bother now?
In case you haven't realised it, there is a recession going on, a -lot- of people are either unemployed, their spouse is unemployed or they need a way to secure their job. Rather than doing the
rational thing of looking at productivity, most businesses hire and pay based on education. If his wife lost her job and he was expecting the income, the only way he can get a raise to keep up his
standard of living might be through a degree.
Most degrees are completely useless when done for a raise, but, money is money.
Re:Why? (0)
| about 4 years ago | (#31720612)
Rather than doing the rational thing of looking at productivity, most businesses hire and pay based on education.
The "Rather than doing the rational thing" part makes the sentence sound anti-education, bordering on "I didn't acquire higher education but I am financially successful".
Re:Why? (1)
Darkness404 (1287218) | about 4 years ago | (#31720758)
Ever spend some time in a company? Generally the people who are paid the most do the least amount of work. It is generally the people with a bit of college doing the bulk of the work while the people
with the highest forms of education are sitting at their desks doing nothing.
Re:Why? (1)
Penguinisto (415985) | about 4 years ago | (#31720730)
Dunno - perhaps he wants to shift careers a bit, or enter academia?
Sometimes a degree is useful when you want to leave one area of his career and enter another. For instance, perhaps the guy has been doing field engineering all this time, but now wants to do design?
Maybe he's sick of working/running a lab, and instead wants to create and run the projects?
Even in my own corner of the working world (IT), I find myself increasingly wishing that I'd taken more business courses as I leave behind being a server monkey (and in one previous job, code
monkey). Nowadays I'm routinely running my own budgets, doing the politics dance, and overseeing both people and projects. Mind you, I have no desire to get an MBA, but having to handle vendors,
routinely run RFP's of six figures (one this year approached seven), while handling/syncing various execs' ideas of project management... ? Well, more and more these days, some of the subjects in an
MBA course would damned sure come in handy right about now.
Re:Why? (2, Interesting)
pushf popf (741049) | about 4 years ago | (#31720810)
Even in my own corner of the working world (IT), I find myself increasingly wishing that I'd taken more business courses as I leave behind being a server monkey (and in one previous job, code
monkey). Nowadays I'm routinely running my own budgets, doing the politics dance, and overseeing both people and projects. Mind you, I have no desire to get an MBA, but having to handle vendors,
routinely run RFP's of six figures (one this year approached seven), while handling/syncing various execs' ideas of project management... ? Well, more and more these days, some of the subjects in an
MBA course would damned sure come in handy right about now. After 20 years, an MBA would be really useful. After 20 years of not needing them, calculus and trig are a waste unless the OP is trying to
switch careers or just wants the satisfaction.
FWIW, it's much more profitable to go into consulting and do/manage whatever it is that you're good at and happy doing, than try to maintain a dead-end job as one of the "cogs." Businesses are much
happier to pay someone a good rate for services that they need, when they need them, as long as the consultant will happily vanish when the need vanishes.
This site helped me (5, Informative)
| about 4 years ago | (#31720506)
This tutorial site helped me through 6 years of school. Hope it helps you too! http://tutorial.math.lamar.edu/
Some sites I've come across (5, Informative)
FlyByPC (841016) | about 4 years ago | (#31720508)
Helpful handouts [germanna.edu] from Germanna Community College's tutoring Center. (I used to work there a few years ago; these resources are not only helpful, but free.)
Drexel's Math Forum [mathforum.org] (full disclosure: I'm a current Drexel employee and student -- but the Math Forum strikes me as pretty cool.)
Project Euler [projecteuler.net] (more oriented toward programming and numerical methods, but an interesting site for developing your math skills. The problems range from not-too-hard to mind-boggling.)
Purple Math [purplemath.com]
Re:Some sites I've come across (1)
zach_the_lizard (1317619) | about 4 years ago | (#31720772)
Interestingly enough, I used to take a handful of classes at Germanna. To add to the list, I would say that Wolfram Alpha can be helpful, because it can be used to break down more complicated
integrals and derivatives into steps when you don't understand them. Just don't become dependent upon them. Also, one thing that can be helpful is to go to Yahoo Answers and answer math related
problems. Break everything down into steps, explain the theorems needed, and bask in the knowledge that teaching is a good way to learn. By breaking things down for people who may not have a good
understanding of math, you will help build up your own understanding too. I actually used this while taking various Calc classes to help practice what I knew, and help break down how exactly I knew
it and thought about it.
Hard work should do the trick (3, Insightful)
nitroamos (261075) | about 4 years ago | (#31720512)
Most text books have practice questions for each chapter, and some answers in the back. Why not just work through some of those on your own? Math is the kind of subject that you can only learn by
doing problems, so I don't think there's any shortcuts. But I suppose if you work on problems, it's nice to have a teacher to help if you get stuck, but perhaps a reasonable substitute would be
CC (1)
pieisgood (841871) | about 4 years ago | (#31720514)
Just do Community college summer sessions or something similar, should be enough and they only cost like 60 bucks a class. Taking the college level calc classes would be good too at CC unless they
are upper division differential equations or something as those are not offered.
Re:CC (1)
Killer Orca (1373645) | about 4 years ago | (#31720586)
I second the community college courses, but you might need to sift through till you get a good instructor. I lucked out in that the ones I have had so far have been able to explain things quite well and have good homework policies. $60 a class is unreal though; mine cost about $350 per class.
Re:CC (1)
h2oliu (38090) | about 4 years ago | (#31720590)
I was going back to school to become a teacher. In so doing I had to take a Trig course. I did so online from my local community college. It really refreshed my math skills (that were ~20 years old).
Keep in mind I had taken through Series and Diff Eq. in college, so I had mastered the material previously. (Don't ask why I needed trig. in spite of having had the upper level courses. Just a magic
hoop to jump through).
Re:CC (1)
zach_the_lizard (1317619) | about 4 years ago | (#31720796)
Don't ask why I needed trig. in spite of having had the upper level courses. Just a magic hoop to jump through.
In high school, I had that same sort of problem. I moved from one school system where you took World History in two parts, one in 8th grade (middle school) and one in 9th grade (high school), to one
where the two parts were pushed up a year. Despite having completed both classes successfully, they made me retake history part I, because they just didn't trust that I had learned anything. They also made me take the standardized test for the second history class, which I passed with a perfect score, making their theory that being taught something a year earlier isn't good look a little silly.
Krzysztof Wilczynski (4, Informative)
kwilczynski (1684804) | about 4 years ago | (#31720522)
Keith, I would start with YouTube. Crazy as it sounds, there are many free training videos there. In particular, look up channels maintained by universities like MIT or Yale. They have recordings of lecture sessions available to watch for free, and some of them are of the finest quality. Anyway, that is just a start... Good luck, KW
cheat! and the TI-89 series makes it easy! (1, Informative)
| about 4 years ago | (#31720524)
cheat! and the TI-89 series makes it easy!
Re:cheat! and the TI-89 series makes it easy! (1)
zach_the_lizard (1317619) | about 4 years ago | (#31720802)
And many colleges ban calculators for just that reason.
If you need to review "pre college" level stuff (0)
| about 4 years ago | (#31720526)
Just get review books for the New York State Regents exams.
A Bit of Advice and a Few Suggestions (5, Insightful)
eldavojohn (898314) | about 4 years ago | (#31720532)
I don't know how bad you want this but I can tell you that nothing feels better than finishing something you started even if it comes two decades later.
What you're mostly going to find in these replies are codices. Not teaching. Not knowledge. You're going to get information sources. What you do with those sources, that will be the teaching, the
learning and the progress. No one's going to help you get your math back but you. You're going to get static nonliving information and it's going to be up to you to bring that alive. Frankly, on your
part it's going to require the will of a volcano otherwise I suggest a tutor or precalculus class.
The course I can refer you to echoes my sentiments [mit.edu]:
This material could conceivably be studied by a student on his or her own, but this seldom works out. Students tend to get stuck on something, and, having no goad to keep them going, they try to get
past it with decreasing energy, and ultimately develop mental blocks against going on. Having an organized course prevents this by forcing them to face obstacles like exams and assignments.
If you attempt this and get stuck, as is almost inevitable, you could try emailing us and we can try to unstick you.
Did you catch that last part? You're going to need help. Whether it's bribing your nerdy friends with cases of beer or Star Wars Galaxy Series Five collectible card packs (*cough* *cough*) you are
going to need guidance at certain points in time. Don't be afraid to ask those around you or -- and I recommend this only in dire cases -- dressing up like a student and rolling into your local
university asking to see the precalc professor for help.
Your codex might be Wikipedia [wikipedia.org]. Your codex might be Wolfram's MathWorld [wolfram.com]. My codex sits three feet in front of my face as I type this. My codex (and this is purely personal) is Bronshtein et al.'s Handbook of Mathematics [amazon.com]. The binding is acceptable. The paper is not the greatest. The content is priceless. This is not a teaching device. This is my starting point. If I were you, my ending point would be at my college's library poring over all the calculus textbooks. The great thing about this starting point is that I like how it lays out all the starting points leading up to that starting point, in case I need to start backwards. Another great thing about this particular resource is that it has nearly everything imaginable and is well organized. The bad thing is that it costs $71.97. I think I paid $60 for mine, but either way it's not free like Wikipedia.
I don't know where you are comfortable starting from, but if I were you I would simply research what your learning institution's prerequisites are and spend your free time now acquiring their books
and notes in order to make sure you have them covered. All of my old University of Minnesota syllabuses are online [umn.edu] although I cannot find the Math department equivalent (aside from the
registration listings).
If you could name your courses, I'd suggest books like The Annotated Turing [theannotatedturing.com] which has been a page turner for me and actually starts with basic set theory to work up to
automata. I'm guessing you're aiming for more Multivariable and Diff Eq type stuff. Let us know what the courses are and perhaps more human readable works can be suggested that aren't as laboriously
mind numbing as reading a codex would be.
Let me google that for you... (0)
| about 4 years ago | (#31720542)
Second hit seems pretty good, it's called SOS math. http://www.sosmath.com/
I've been studying for the FE exam (Fundamentals of Engineering) and bought the Lindeburg FE Review Manual. It has a lot of explanation and practice problems, but includes a lot more than just math (thermodynamics, physics, etc.). I bet there are similar review manuals for just math, though. You could also pay for a tutor; I've seen ads on craigslist before.
Tax dollars at work (1)
woboyle (1044168) | about 4 years ago | (#31720546)
A site that I have used to great effect is this: http://www.phy.ornl.gov/csep/CSEP/BMAP.html [ornl.gov]
Go to your public library (1)
aussersterne (212916) | about 4 years ago | (#31720548)
and check out all of the relevant math textbooks. Make sure there are exercises for each chapter for which answers are provided somewhere in the book.
Then, read the chapters, and do the problems. Keep doing the problems until you get every . single . one . of . them . right and you understand what you've previously done wrong in each case.
Pore over it until you really understand the relationships between the quantities.
It is very hard work, but there is no shortcut to understanding math and science, and if you don't understand them, you'll never be good at them, even if you manage to solve a few problems using
memorized patterns.
Cheap, easy classes (1)
Darkness404 (1287218) | about 4 years ago | (#31720554)
My advice is go for cheap and easy classes that count for your degree. Especially if the classes are useless for your job (as most will be), try taking them at a community college, or see if a "degree mill" offers the course for cheap in a way that will transfer. Many universities will take community college or other sub-par classes if they are for general education or basic requirements. Now, if you are, say, a biology major, taking all your biology classes through a community college might not transfer, but taking math classes should.
Khan, MIT (2, Informative)
chub_mackerel (911522) | about 4 years ago | (#31720556)
You might like:
Khan Academy http://www.khanacademy.org/ [khanacademy.org]
(Get an account for the review software if you have forgotten college algebra skills as well.)
MIT's Open Courseware http://ocw.mit.edu/ [mit.edu]
Many of these courses now have full video libraries of lectures, homework and exam solutions, etc. You can buy a text and take the course.
I am interested to see other finds out there, though.
Motivation (4, Insightful)
cosm (1072588) | about 4 years ago | (#31720558)
In my experience in school, if you are motivated to pass, you will find a way to pass (most of the time). But if you are motivated to learn, passing the class will come as a pleasant side effect. Not
knocking your stated intentions, but approach this as a learning experience, a thoroughfare in self-enlightenment, and you will reap the test-score rewards.
Get a real life tutor (1)
Evrion (1459047) | about 4 years ago | (#31720560)
To be honest, that is your best bet. Find someone who knows how to teach, and understands the material. Get that person to catch you up.
A tutor can move at exactly your pace, and answer exactly your questions. This is A LOT faster than anything self-guided. I'd do it myself if I were in your area.
Re:Get a real life tutor (0)
| about 4 years ago | (#31720680)
I wholeheartedly agree.
A student tutor is much more cost- and time-effective than a cram course.
Re:Get a real life tutor (1)
ctmurray (1475885) | about 4 years ago | (#31720718)
I concur. Through high school I missed out on Geometry. When I got to college and started Calculus the prof asked if anyone had not had Geometry and Trigonometry, so I raised my hand. He tutored me
for a few hours and I was good to go. Much of geometry and trig is taking the time to prove the various relationships. I just had to accept that they were correct, never went through the pain of the
proof process. One could argue that I missed something valuable, but it has never come up in 30 yrs of working as a scientist.
I attended a small liberal arts college and the professors were all about teaching. My prof was a very good teacher, so that may account for his skill in getting me up to speed. So try to seek out
the best teachers (small colleges, maybe community colleges) and pay for a tutor, these profs can always use the cash.
I agree with an earlier post that I have used calculus rarely (and just went to the book to look up the integration/differentiation rules). On the other hand, in the last 10 years the use of statistics has really jumped in industry (I am a chemist/manufacturing engineer, not a programmer) with Six Sigma and the like. So again, you don't need to learn all the proofs behind the statistics, but you need to know how to run software for analyzing the data and what the results mean. How to run a DOE, how to plot an M&IR, how to use ANOVA to prove that a statistically significant difference exists/doesn't exist with data sets.
Your mileage may vary, since you might be in a vastly different arena. And of course there is the internet and various web sites where you could get help if you get in too deeply.
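The ANOVA workflow the parent describes is usually run through software, but the underlying computation is small enough to sketch by hand. This toy Python helper (my own illustration with made-up data, not anything from the comment) computes the one-way F statistic those packages report:

```python
def one_way_anova_f(*groups):
    """One-way ANOVA F statistic for two or more samples of numbers."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # between-group sum of squares: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares: scatter of each observation around its own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

print(one_way_anova_f([1, 2, 3], [11, 12, 13]))  # → 150.0, a large F: the means clearly differ
```

A large F (compared against the F distribution for the given degrees of freedom) is exactly the "statistically significant difference exists" verdict the software prints.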
Read Gelfand (0)
| about 4 years ago | (#31720568)
Obtain all four books from the Gelfand Correspondence Program in Mathematics [rutgers.edu]. Read them carefully and do all the exercises.
The titles are:
Algebra [amazon.com]
Trigonometry [amazon.com]
Functions and Graphs [amazon.com]
The Method of Coordinates [amazon.com]
MIT Opencourseware? (3, Informative)
kaiidth (104315) | about 4 years ago | (#31720584)
Dunno about college placement tests, but to start thinking about maths in general there's nothing like just buying a couple of books and going at it (but make sure you have the answer booklet, or that the solutions are in the back of the book). If you're feeling a little panicky you might even want to start with something really un-threatening ('Statistics for Dummies' exists for that). You might want
to see what the standard textbooks would be for the courses that are prerequisites for the ones you're looking to study, and perhaps ask which areas you would be expected to be comfortable with.
Also, the MIT opencourseware site is probably your friend: http://ocw.mit.edu/OcwWeb/Mathematics/ [mit.edu]
As regards an online tutor, depending on whether you currently live near a college/university/miscellaneous site of higher learning, you might want to see if there are any postgrads in applicable
subjects who are willing to tutor. In my experience online tutors are seldom worth half as much as talking to a real live actual human being, and they are usually more expensive. YMMV - especially if
you are extremely busy an online tutor may actually suit you better than scheduling another real live person into your week.
Finally - good luck :)
http://yaymath.org/ is the best source (0)
| about 4 years ago | (#31720600)
I agree: do community college. Also check out http://yaymath.org/; this guy is the best, he helped me with college algebra. I am going to move on to calc soon.
Try Karl's Calculus Tutor (1)
tchuladdiass (174342) | about 4 years ago | (#31720604)
I've found karlscalculus.org to be a useful site for my brushing-up needs.
Sigh... (3, Insightful)
Digana (1018720) | about 4 years ago | (#31720618)
I find it profoundly unsatisfying that you have to ask this question.
It's not your fault; it's the structure of the educational system. You are clearly not interested in mathematics, since you just want to cram and pass some test. You don't specify exactly for what
you need mathematics, but I'm guessing it's for some other thing, possibly something computer related.
It's a big lie that you'll ever use calculus for anything except for specialised degrees (and if you were to use it for anything you personally would want to do in your future, you would already be
interested in it). It's also profoundly strange that calculus seems to be pinnacle of mathematical education if you're not going to go on to study something like mathematics itself or physics.
To put my frustration another way, why doesn't anybody ever ask similar questions for sculpture, or Schaum's Outlines on Basket Weaving or all the other myriad useless things we humans do for our
edification? Why is western society obsessed with mathematics, deluded into thinking it's useful in general, and why are people so stressed over learning this useless and dryly-presented subject? Why
aren't you required to achieve a certain level of chess expertise before you can complete a computer science degree? A lot of early computer science was concerned with chess playing, let us not forget.
It's pointless. It's pointless to cram for exams about subjects you don't care about in order to satisfy requirements you don't genuinely need.
My recommendation is, are you really interested in learning this stuff? If so, just spend hours and hours in your local university library in the math section browsing books you're interested in. If
you're not really interested, go grab some Schaum's Outlines or the Complete Idiot's guide or whatever, and use that to pass whatever bureaucratic and pointless requirement your educational institute
imposes before you're allowed to study what you really want to study.
Re:Sigh... (4, Informative)
Z8 (1602647) | about 4 years ago | (#31720720)
Why is western society obsessed with mathematics, deluded into thinking it's useful in general, and why are people so stressed over learning this useless and dryly-presented subject?
Math is useful in general. And western society doesn't just stress about learning math. An even greater number are probably stressed about passing English tests [ets.org]. Society thinks language and math are important to education; your basket weaving and sculpture, not so much. I personally don't see the problem with this.
Re:Sigh... (0)
| about 4 years ago | (#31720854)
Many mathematicians have thought about the prestige of mathematics. IIRC some big name (Kolmogorov? Arnold?) was writing about how maths-fanaticism in France allowed under-10-year-olds to engage in
conversations like "Q: What is 2 + 5? A: It is 5 + 2 of course."
The problem is that if you don't do this, then no maths will be learned by anyone, which is a worse outcome.
Re:Sigh... (4, Informative)
clifyt (11768) | about 4 years ago | (#31720766)
Because it is the basis of all fields of science.
And quite a few fields of art.
I *HATE* math, but I use it every single day, and in the areas I'm known for, I can do the math needed...mostly in my head. I've also found that as I've tried to branch out of my areas of expertise,
that I can't rely on the few areas of math that I know fluently, because I'm starting to bang my head against the ceiling.
For instance, I took a few basic undergrad courses recently (I have a masters in psychology), and I couldn't remember the damn quadratic equation... I could get the answer just fine -- if I wanted to spend 15 minutes solving it (or, as I did, write a quick plot app on my laptop to show the answers, figuring it out computationally as opposed to mathematically)... and it was only after one of my twenty-something classmates looked at me and said Dude, Why Don't You Just Use The Quadratic Equation that I realized how much I had forgotten (I had no use for math 20 years ago and slept through class).
It is funny how knowing the simple concepts can make your life simple. Anyone can brute force just about anything.
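The shortcut the classmate meant is the closed-form root formula, and it really is a one-screen affair compared to brute force. A minimal Python sketch (my own illustration, not the poster's plot app):

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c          # discriminant decides how many real roots exist
    if disc < 0:
        return ()                     # roots are complex; none returned here
    s = math.sqrt(disc)
    return ((-b + s) / (2 * a), (-b - s) / (2 * a))

print(quadratic_roots(1, -5, 6))      # x^2 - 5x + 6 = (x - 2)(x - 3) → (3.0, 2.0)
```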
If you don't want to do anything science based... and this includes almost any social science, even if people think those are not real... or any advanced art (I have a friend who does weavings, and to get what she wants, and for the patterns to work out in real life, not just on paper, she needs to know math to get these to work)... math is the basis for all of this. Oh, and the chess algs? It is all math... pretty advanced math... it isn't chess these guys were after... it was computational mathematics to attack a human problem.
This summer, I am signing up for a 100 level math course and getting the basics back again...I wish I would have done it before...it sucks that I can get results from Mathematica or SPSS, but I can't
do simple algebraic equations. You might not think it interesting or necessary, but then again, I can't tell if you are being serious or if your humor is just VERY dry...if you are serious...wow...
Re:Sigh... (0)
| about 4 years ago | (#31720856)
... I can't tell if your humor is VERY dry or if you are just boring in general.
Always been a pet peeve of mine (1)
Sycraft-fu (314770) | about 4 years ago | (#31720914)
I was told how much math I'd need since I wanted to get into technology. Math teachers always kept on about how important it was. Well, they are dead wrong. I need a good understanding of arithmetic,
and some basic algebra is also useful. Past that, I use nothing. Had I stuck with CS, linear algebra would be good (since a lot of programming relates to it) but certainly not calc. Knowing calc is
kinda neat, it allows me to understand how some things are done, but they aren't things I need to do, a program does them for me.
We really need to refactor how much of what subjects we teach people. Math is one in particular we need to get real about. I think it is a leftover of the red scare, the "Oh my god the Soviets will
crush us technologically, all our kids have to be math whiz kids!" That was dumb then and even dumber now. Trying to cram more math down the throats of every person does nothing. It doesn't turn someone into a brilliant engineer. The kids that love math, well, they'll discover that by having math taught to them. That love should then be nurtured and they should be taught all they can hold.
The rest? Teach them what they need to know and leave it at that. What that is will vary, an engineer will need more and different kinds than a sociologist, but teach what is needed, don't just teach
math for math's sake.
We should be focusing more on presenting a well rounded education, particularly at lower levels. Expose kids to a LOT of different things. Why? Because you want them to find the thing that clicks
with them, the thing that they are interested in. Maybe it's math, maybe it's computers, maybe it's drama, maybe it's biology, whatever. Expose them to a lot, let them learn about all kinds of
things, and then they are in a much better position to choose what they want to learn more about during secondary education.
Of course you need to include things that everyone needs to know. English is very important as all jobs demand communication, some math is for sure important, etc. But teach the amount needed and
useful, don't just teach more for the sake of teaching more.
In university, this should be even more the case. Universities need to evaluate their degree programs and say "How much of what non-degree material does this really need?" Math is NOT degree material
for CE or CS. It is necessary to understand some of the degree material, but it isn't actual material relevant to the degree. As such you should be teaching the level needed. You shouldn't say "Well
this is a math heavy field so make them take 6 math classes." No, it should be "These are the kinds of mathematics necessary to properly understand the things they are being taught, as a result they
will need to take math course A before class X and math course B before course Y and so on." Maybe that ends up being a lot of math, sure will be for some degrees, but make sure it is because it is
needed and useful. Don't insist CS people take calc because computers are about math.
Schaum's outlines (1)
ColorTheory (897257) | about 4 years ago | (#31720652)
It depends on your overall plan whether you need new dead-tree books. But the Schaum's outline books are good, with plenty of worked problems. Look in a college bookstore or do a web search on
Schaum's outline .
A Very Good Survey (3, Informative)
richg74 (650636) | about 4 years ago | (#31720694)
If what you are looking for is a way to get your mind back into "math mode", I'd suggest one book that I have used, both to refresh my memory and to read for pleasure since I was an undergrad ~40
years ago. It's called What is Mathematics?, by Richard Courant and Herbert Robbins, in the 2nd edition (which I have).
There is a new edition, edited by Ian Stewart, which Amazon has:
What is Mathematics? [amazon.com]
I like the book because it is geared to an intelligent adult reader; it doesn't assume much technical math knowledge, but it gives (IMHO) an excellent overview of the concepts through calculus. It
has exercises, too.
More Effort Required On Your Part (0)
| about 4 years ago | (#31720698)
Your slacktard approach to effort and accomplishment is a theme in your life. Don't waste any instructor's time, or take any seat from someone more willing to work. Go watch TV.
Newtonian physics (2, Insightful)
oldhack (1037484) | about 4 years ago | (#31720700)
I remember Newtonian mechanics as the best applied calculus class. If you didn't like calculus as math, maybe this will work out better for you. It links math with more physical (useful?) phenomena.
If not, I don't have a clue what would help you, since I found college calculus much better than AP high school stuff.
Just do it! (0)
| about 4 years ago | (#31720722)
I'm on my second college level calculus course for a computer science degree, ten years after graduating with a liberal arts degree. I took calculus in high school with no problems. My advice is not
to be too worried about it; just take the class. It'll take you a few weeks in class to catch up on the algebra, but it will come back to you. You'll have 20 years more experience than your
classmates learning things.
Also, chances are you had your butt kicked the first time 'round because you weren't spending enough time asking the professor to clarify things you don't understand, doing homework, or studying. I
will stare at my textbook and reread a section until it makes sense. Sometimes things are easy and sometimes I spend a few hours more than I planned.
I'm at a top tier university and am having no problems so far getting A's in Calculus... while working full time.
ThatQuiz.org (1)
blixel (158224) | about 4 years ago | (#31720770)
I'm not sure to what extent this site would help:
www.ThatQuiz.org [thatquiz.org]
But I like to go back there from time to time and run through various tests just for "the fun of it." I'm not only surprised by the simple things I've forgotten over the years, but I'm also surprised
at some of the things I never use but still remember.
The Teaching Company (1)
The Hatchet (1766306) | about 4 years ago | (#31720774)
I would go to www.teach12.com and try out their 'joy of math' class, or try some Math Tutor. The joy of math is a 24 lecture series, each is 30 minutes long, and it goes all the way from basic math
to basic integral calculus. That will teach you all the theory you need. Then the Math Tutor calculus classes will easily fill in the exacting skills you need.
Or, if you're not into lectures, I would highly recommend the textbook 'Calculus, 6th Edition' by James Stewart. It is written in easy-to-understand language and goes from the beginning of Calculus 1
to the beginning of differential equations in the last chapter.
Also, if you want to understand 3d space in a calculitic way, just buy matlab and play with surfaces for a few weeks.
Really, I think calculus is easy if you understand the concepts, the rest is just bookkeeping. But spend enough time playing with that bookkeeping, and beautiful patterns about the very nature of the
world in which we live arise, and you will be flabbergasted. The importance of numbers like pi and e become obvious, and all the frustration seen in math is gone.
The practical use is also great, besides the enhanced understanding of the world. You might not use Statistics and Calculus every day, but the concepts will change the way you see the world, and how
you think. When you run into any kind of issue or problem, your tools to deal with it will be far better than before. And what once kicked your ass will be kick ass to practice.
Try not to cram too much; reading a calc textbook or watching some of those classes will let you understand what you are doing, so you won't have to worry about trying to cram.
Hope I helped, just remember to give yourself the chance to learn. Without learning, what do we have after all?
Bad textbooks, bad teachers. (1)
tinylobsta (1782462) | about 4 years ago | (#31720778)
I don't know about other people, but it seems like the biggest inherent flaw is not a lack of resources for math, but rather that people generally don't know where to go.
Up until right now, I just used http://www.purplemath.com/ [purplemath.com], and had no idea that other resources existed so extensively.
I enjoy math, but I'm also an unmedicated ADHD child - lectures frequently just bounce off of me; and attempting to learn from a course assigned textbook is a joke... these are designed around a
lecture format that doesn't work perfectly for everybody. Nothing is more frustrating than hitting a wall due to not fully processing a lecture, and having the textbook be worthless ($180 worth of
worthless, too.)
I think the best suggestion is just to wander your way through some of the recommended books and sites and not force it; as others have mentioned, if you're actively enjoying the learning experience,
it'll just flow naturally. Or, it'll fail miserably... either way, progress (not necessarily forward) will be made!
Calculus (1)
CODiNE (27417) | about 4 years ago | (#31720794)
Really for me the main trick was understanding exactly what a derivative was. It sounds obvious I know but you really have to get your head wrapped around exactly what it's doing and the basic idea
of summing an infinite series of slices. Do some mental exercises like the speed of a car and how a speedometer works, imagining the rate a pool of different shapes would be filling up as the water
rises, etc...
Once you get the concept clear and what it means the rest is just memorizing the various transforms with the Sin, Cos, etc... and getting in good practice doing it. Then years later when you've
forgotten all of those (as I have) and you run into a calculus problem you'll at least recognize it and know what the basic formula is, then use a TI calculator or whatever.
Re:Calculus (1)
Dragoniz3r (992309) | about 4 years ago | (#31720872)
That would be an integral, sir. A derivative is the slope of a function at some given point.
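The slope-vs-area distinction above is easy to verify numerically. The sketch below is an illustration added here, not from any comment in the thread; the function x² and the step sizes are arbitrary choices. It estimates a derivative as a slope over a tiny interval and an integral as a sum of thin slices:

```java
public class CalcIntuition {
    // Example function: f(x) = x^2
    static double f(double x) { return x * x; }

    // Derivative as slope: (f(x+h) - f(x-h)) / (2h) for a small h
    static double derivative(double x, double h) {
        return (f(x + h) - f(x - h)) / (2.0 * h);
    }

    // Integral as a sum of thin slices (midpoint Riemann sum) over [a, b]
    static double integral(double a, double b, int slices) {
        double dx = (b - a) / slices, sum = 0.0;
        for (int i = 0; i < slices; i++) {
            sum += f(a + (i + 0.5) * dx) * dx; // height at slice midpoint times width
        }
        return sum;
    }

    public static void main(String[] args) {
        // Slope of x^2 at x = 3 is 2*3 = 6
        System.out.println(derivative(3.0, 1e-6));      // ~6.0
        // Area under x^2 from 0 to 1 is 1/3
        System.out.println(integral(0.0, 1.0, 100000)); // ~0.3333
    }
}
```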
The Princeton Review (2, Informative)
Phaldyn (1657411) | about 4 years ago | (#31720820)
When I had to do well on the GRE before entering graduate school, I used the prep book from The Princeton Review and kicked the hell out of the math section.
They have prep books for SAT Math 1 & 2, which (ironically) cover more complicated stuff, and I think that's what you really want. For getting your mind back in mathematics mode, I'd pick up both of
those (twenty bucks each or less) and work through all the exercises you need to in order to jog the memory banks. Start with the GRE math and good luck!!!
For geometry (1)
Xamusk (702162) | about 4 years ago | (#31720870)
If you're looking for geometry learning, try to make an asteroids-like game.
It's not so challenging as to put someone off, but it's lots of fun and you'll learn how to apply geometry. Especially sine and cosine, which my teachers did a terrible job of teaching (they only
taught the transformation formulae, never applied them). I only learnt what it was all for when I tried to make a subspace-like game.
Not enough information. (3, Informative)
wickerprints (1094741) | about 4 years ago | (#31720876)
You haven't specified what kind of degree, and therefore, what kind of coursework is required. Moreover, even the same level of coursework taught at different institutions can vary widely in
difficulty. "Undergraduate calculus" at, say, Caltech is nothing like "undergraduate calculus" two blocks away at Pasadena City College. The same goes for statistics.
If your intention is to obtain a degree, the best starting point is to figure out which text(s) are being used in those courses that are required for that degree. This will give you some idea of the
scope and level of difficulty to expect. Otherwise, you could end up studying a great deal of ancillary information. Such things may be good to know, but will not contribute to your stated goal.
Regarding your plan to dive right in, I appreciate and understand your enthusiasm, but I also think it is misguided and potentially counterproductive. You could very easily make it much more difficult
for yourself to obtain your credits by not reviewing the basics beforehand. Mathematics is not a subject that is easily cherry-picked, nor is it amenable to rote learning. It is more like a vast edifice, a
tower whose foundations support increasingly complex and abstract concepts. Furthermore, it is a topic which is best learned through actual understanding. For instance, if you understand what
integration actually means, rather than viewing it as a mechanical operation on a function, you will find it easier to interpret other concepts that employ integration, such as the calculation of
moment-generating functions of continuous probability distributions.
On some level, it's possible to "get by" with simply learning the mechanics of computation and symbolic manipulation. That is pretty much what calculus is (as opposed to analysis). But if you want to
make it as easy as possible on yourself, at the very least I advise you quickly review nearly everything at the high-school level, from algebra to trigonometry. Then take a more detailed look at the
AP Calculus curriculum; any gaps in knowledge should be readily apparent and immediately addressed before continuing further. From there, you should compare against the aforementioned college
coursework and texts.
Success in learning mathematics is not so much about the details of what you know as it is about how to think analytically and abstractly.
Book Suggestion (0)
| about 4 years ago | (#31720888)
While the initial parts of the book may be too easy for you, many people have found "Arithmetic and Algebra Again" by Immergut and Smith to be wonderfully helpful. It will help you get back into the
habit of doing math (especially algebraic functions) in an easy, tidy way, and is designed for adults. That should give you a good baseline jumping-off point.
What is Mathematics? by Courant and Robbins (1)
Funkeriffic Toad (518830) | about 4 years ago | (#31720890)
The book "What is Mathematics?" by Courant and Robbins, despite its cushy-sounding name, would be my recommendation. First of all, it's written by two world-class mathematicians. Second, it's not a
textbook; rather, it's what you might call a celebration of how awesome math is. If you want to succeed in college math without being miserable, why not try to see the subject as a thing of beauty,
rather than a burden? This book will definitely help you do that. If you read through the first half of the book (it shouldn't take long) you will have a chance to warm the math parts of your brain
back up, and you'll learn some extremely cool shit along the way. (A bit of geometry, a bit of topology, a bit of algebra, etc.)
When you get to the authors' lucid explanation of the main ideas behind calculus, you'll realize that (1) calculus isn't scary, (2) the computations you need to learn how to do are fun, not hard, and
(3) everything comes down to a few very intuitive ideas -- it may have taken geniuses like Newton and Leibniz to come up with them in the first place, but they are part of our common intellectual
heritage, not erudite ideas reserved for mathematicians and physicists.
And, although it's not a textbook, there are some exercises which will give you the chance to test your understanding. Again, though, they are fun, not grueling.
iTunes U (3, Informative)
shrtcircuit (936357) | about 4 years ago | (#31720902)
I know it sounds a little weird, but check out iTunes U. There are a lot of courses (many by some very well known academic establishments) including a full library of math and science. Best part is,
it's free.
Music Components in Java: Creating Oscillators
In the previous article of this series, I discussed terminology and some basic math that must be understood before moving forward. In this article we are going to apply what we learned by building
electronic music components entirely in software.
Before we do, however, we have one more important topic that needs discussion. You may recall me mentioning the MiniMoog and PSynth synthesizers in the first article. What these devices have in
common is they both utilize subtractive synthesis as their means of sound production. With subtractive synthesis, you start out with a harmonically rich signal source (typically a square or sawtooth
wave) and you apply a low-pass filter to selectively remove higher frequency harmonics with the result being a musically useful effect. The harmonic complexity of the signal source and the cut-off
frequency and resonance of the filter are controlled in order to simulate the timbre of instruments. Of course, both the MiniMoog and PSynth also provide a sine wave signal source which, while not
harmonically rich, can be used for musical purposes.
The Basic Waveforms
Joseph Fourier (1768–1830) proved that complex, harmonically rich, periodic waveforms comprise a series of sine and cosine (a cosine wave is a sine wave with a phase shift of 90 degrees) terms of
increasing frequency. This Wikipedia page has an instructive animation of how, by adding harmonic terms to a base sine wave, a sawtooth wave shape can be produced.
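That additive construction is easy to try in code. The sketch below is a hypothetical illustration of my own (the harmonic count is an arbitrary choice): it sums the first N terms of the sawtooth's Fourier series and lets you compare the partial sum with the ideal ramp 2*(t - floor(t + 0.5)) that our oscillator will generate directly.

```java
public class FourierSawtooth {
    // Partial Fourier sum for the sawtooth 2*(t - floor(t + 0.5)):
    //   saw(t) ~= (2/PI) * sum_{k=1..N} (-1)^(k+1) * sin(2*PI*k*t) / k
    static double sawPartialSum(double t, int harmonics) {
        double sum = 0.0;
        for (int k = 1; k <= harmonics; k++) {
            double sign = (k % 2 == 1) ? 1.0 : -1.0; // alternating signs
            sum += sign * Math.sin(2.0 * Math.PI * k * t) / k;
        }
        return (2.0 / Math.PI) * sum;
    }

    public static void main(String[] args) {
        double t = 0.25;                                 // a point away from the jump
        double ideal = 2.0 * (t - Math.floor(t + 0.5));  // the ideal ramp: 0.5 here
        System.out.println(ideal);
        System.out.println(sawPartialSum(t, 1));    // one harmonic: just a sine
        System.out.println(sawPartialSum(t, 1000)); // many harmonics: close to 0.5
    }
}
```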
For our use here, we will create an oscillator signal source that can produce three waveforms / wave shapes: sine wave, square wave and sawtooth wave. Each of these wave shapes has characteristics
that make it suitable for use in electronic music.
A sine wave is a very pure sound source with little in the way of harmonic complexity. It has its fundamental frequency and, depending upon harmonic distortion, few (if any) low-amplitude
harmonics. The sine wave our oscillator will produce looks exactly like the textbook examples of sine waves.
A square wave, unlike a sine wave, contains odd harmonics (odd integer multiples of the fundamental frequency), which give it more harmonic complexity and the richness required for subtractive
synthesis. Square waves are often described as hollow sounding and are useful in electronic music.
As the name implies, a sawtooth wave shape resembles a saw blade: it rises linearly to its maximum value, then drops straight down to its minimum value, only to start again. A sawtooth wave contains
both even and odd integer harmonics of the fundamental frequency and is good for simulating bowed string instruments like violins and cellos.
As we generate complex wave shapes in software, the Nyquist limit discussed in the previous article must be kept in mind, because both square and sawtooth wave shapes have higher-order harmonics that
can exceed the available bandwidth. As you may recall, if we try to reproduce a frequency at or above the Nyquist limit (one half the sampling rate), false low-frequency signal components are added
into the original signal. This results in distortions that can be detrimental to signal fidelity.
There are two approaches to dealing with this problem. The first is to generate our complex wave shapes from a Fourier series, which by design has no terms above the Nyquist limit; the second is to
ignore the problem completely. In our application it is safe to ignore the problem because:
• Any distortion caused by aliasing only adds to the complexity of our signal sources which can be considered a good thing.
• The frequencies we typically generate for musical purposes are relatively low which means that many of the harmonics making up the square and sawtooth wave shapes have a low enough amplitude to
not be disruptive.
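To see exactly where an out-of-band harmonic lands, you can compute its folded ("aliased") frequency directly. The helper below is a sketch of my own, not part of the article's code; the 44100 Hz sample rate is just an example:

```java
public class AliasDemo {
    // Where a pure tone of frequency f lands after sampling at rate fs:
    // it folds ("aliases") into the band 0..fs/2.
    static double aliasedFrequency(double f, double fs) {
        // Distance from the nearest integer multiple of fs, always in [0, fs/2]
        return Math.abs(f - fs * Math.round(f / fs));
    }

    public static void main(String[] args) {
        double fs = 44100.0;
        System.out.println(aliasedFrequency(1000.0, fs));  // below Nyquist: unchanged
        System.out.println(aliasedFrequency(25000.0, fs)); // above Nyquist: folds to 19100 Hz
    }
}
```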
A Basic Oscillator in Java
A basic oscillator is just that: an oscillator without frills, implemented by the BasicOscillator class (Listing One). A BasicOscillator has two fundamental features: the ability to control its
frequency and the ability to control the wave shape it produces. The BasicOscillator also provides samples, as it is the source of samples for other electronic music components. Being a sample
provider means that whenever its getSamples method is called, it must fill the sample buffer with samples of the required type and frequency.
Listing One: A basic oscillator written in Java.
public class BasicOscillator implements SampleProviderIntfc {

    /**
     * Waveshape enumeration
     */
    public enum WAVESHAPE {
        SIN, SQU, SAW
    }

    /**
     * Basic Oscillator Class Constructor
     * Default instance has SIN waveshape at 1000 Hz
     */
    public BasicOscillator() {
        // Set defaults (assumed initialization, matching the Javadoc above)
        setOscWaveshape(WAVESHAPE.SIN);
        setFrequency(1000.0);
    }

    /**
     * Set waveshape of oscillator
     *
     * @param waveshape Determines the waveshape of this oscillator
     */
    public void setOscWaveshape(WAVESHAPE waveshape) {
        this.waveshape = waveshape;
    }

    /**
     * Set the frequency of the oscillator in Hz.
     *
     * @param frequency Frequency in Hz for this oscillator
     */
    public void setFrequency(double frequency) {
        periodSamples = (long) (SamplePlayer.SAMPLE_RATE / frequency);
    }

    /**
     * Return the next sample of the oscillator's waveform
     *
     * @return Next oscillator sample
     */
    protected double getSample() {
        double value;
        double x = sampleNumber / (double) periodSamples;

        switch (waveshape) {
        case SIN:
            value = Math.sin(2.0 * Math.PI * x);
            break;

        case SQU:
            if (sampleNumber < (periodSamples / 2)) {
                value = 1.0;
            } else {
                value = -1.0;
            }
            break;

        case SAW:
        default:
            value = 2.0 * (x - Math.floor(x + 0.5));
            break;
        }
        sampleNumber = (sampleNumber + 1) % periodSamples;
        return value;
    }

    /**
     * Get a buffer of oscillator samples
     *
     * @param buffer Array to fill with samples
     * @return Count of bytes produced.
     */
    public int getSamples(byte[] buffer) {
        int index = 0;
        for (int i = 0; i < SamplePlayer.SAMPLES_PER_BUFFER; i++) {
            double ds = getSample() * Short.MAX_VALUE;
            short ss = (short) Math.round(ds);
            buffer[index++] = (byte) (ss >> 8);   // high byte first (big-endian)
            buffer[index++] = (byte) (ss & 0xFF); // low byte
        }
        return SamplePlayer.BUFFER_SIZE;
    }

    // Instance data
    private WAVESHAPE waveshape;
    private long periodSamples;
    private long sampleNumber;
}
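Because Listing One depends on SamplePlayer and SampleProviderIntfc (defined elsewhere in this series), it isn't runnable on its own. The sketch below reproduces just the sample math in a self-contained form: the sine branch of getSample() plus the 16-bit big-endian packing from getSamples(). The 44100 Hz sample rate and the 441 Hz test frequency are assumptions of mine, chosen so one period divides evenly into 100 samples.

```java
public class OscillatorSketch {
    // Our own assumption; Listing One reads this from SamplePlayer.SAMPLE_RATE
    static final double SAMPLE_RATE = 44100.0;

    // One full cycle of a sine wave, packed big-endian as 16-bit samples,
    // using the same per-sample math as Listing One.
    static byte[] oneSineCycle(double frequency) {
        long periodSamples = (long) (SAMPLE_RATE / frequency);
        byte[] buffer = new byte[(int) periodSamples * 2];
        int index = 0;
        for (long n = 0; n < periodSamples; n++) {
            double x = n / (double) periodSamples;                     // phase in [0, 1)
            double ds = Math.sin(2.0 * Math.PI * x) * Short.MAX_VALUE; // scale to 16-bit range
            short ss = (short) Math.round(ds);
            buffer[index++] = (byte) (ss >> 8);   // high byte first
            buffer[index++] = (byte) (ss & 0xFF); // low byte
        }
        return buffer;
    }

    public static void main(String[] args) {
        byte[] cycle = oneSineCycle(441.0); // 44100 / 441 = 100 samples per cycle
        // Sample 25 is a quarter cycle in, where sin() peaks at 1.0:
        short peak = (short) ((cycle[50] << 8) | (cycle[51] & 0xFF));
        System.out.println(cycle.length + " bytes, peak sample = " + peak); // 200 bytes, peak 32767
    }
}
```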
On the 1st Day of Christmas… A $150 Amazon Gift Card Giveaway! (winner announced)
UPDATE: The winner of the $150 Amazon Gift Card is:
#265 – Christine: “I really, really, really need new boots!”
Congratulations, Christine! Be sure to reply to the email you’ve been sent, and your Amazon gift card will be emailed to you!
Welcome to the first day of the 12 Days of Giveaways here on Brown Eyed Baker! I have some fun toys lined up for you, so be sure to check back and enter to win each weekday from now until December
Have you made your list for Santa yet? Are you itching for some new reading material, maybe a gadget or two, or additions to your wardrobe? Today’s giveaway should help you cross a few things off of
your wish list. Whether there’s a stack of books you want, a DVD collection, some new boots, kitchen toys, a new camera lens, or a gift for a loved one… this Amazon.com gift card is for whatever your
heart desires this holiday season!
One (1) winner will receive an Amazon.com gift card for $150, delivered via email.
To enter to win, simply leave a comment on this post and answer the question:
“What is the number one thing on your holiday wish list?”
You can receive up to FIVE additional entries to win by doing the following:
1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment.
2. Follow @thebrowneyedbaker on Instagram. Come back and let me know you’ve followed in an additional comment.
3. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment.
4. Become a fan of Brown Eyed Baker on Facebook. Come back and let me know you became a fan in an additional comment.
5. Follow Brown Eyed Baker on Pinterest. Come back and let me know you became a fan in an additional comment.
Deadline: Tuesday, December 4, 2012 at 11:59pm EST.
Winner: The winner will be chosen at random using Random.org and announced at the top of this post. If the winner does not respond within 48 hours, another winner will be selected.
Disclaimer: This giveaway is sponsored by Brown Eyed Baker.
Good Luck!!
6,781 Responses to “On the 1st Day of Christmas… A $150 Amazon Gift Card Giveaway! (winner announced)”
1. I follow on Facebook. I’m Kristy Hank.
5. A new comforter set is number one on my holiday wish list!
6. I also follow on Pinterest. I’m kristyhank there as well.
7. I would love a new pair of boots!
8. A waffle griddle to make yummy waffles in the morning (:
9. my number one thing is a vacation to some place warm
10. Following you on pinterest
11. Would love a Blendtec – but don’t know if I’ve been that good this year!
12. I’ve been lusting for several cookbooks from different bloggers, this would be the perfect opportunity to get them all at once!
13. I am a fan of yours on Facebook!
14. I’m subscribed to Brown Eyed Baker through this e-mail already
17. I follow you on Pinterest!
19. Dutch oven has been on my wish list for a long time.
21. Fan on FB and follow on pinterest.
25. I want a Kindle Fire!
jofo120 at yahoo dot ocm
27. I follow Brown Eyed Baker on Twitter under the username lhaggy.
31. I’m a fan on Facebook (Luli Weasley)
33. health and happiness for my family
34. email subscriber!
jofo120 at yahoo dot com
35. I would love an immersion blender for winter soup season!
37. I subscribe to the Brown Eyed Baker RSS feed.
38. I follow you on pinterest (username is lhaggy)
39. follow on Pinterest- jofo120
jofo120 at yahoo dot com
41. I would love a kitchen aid mixer!
42. I subscribe on RSS through google reader
43. On the top of my list this year is a new set of stainless steel pots and pans!
44. I follow you on pinterest
45. following on instagram!! woot woot!
47. I subscribe to your email.
48. That’s easy! My brother home from Afghanistan.
49. I follow you on twitter @rachc731
50. To have my whole family gathered around the christmas tree on christmas morning!
51. A healthy baby…my husband and I are expecting our first!
53. I follow you on google reader.
55. An iphone is at the top of my list!
56. I follow you on Instagram
57. I follow you on instagram. @Luv2Trvls
59. Just seeing my family across the country.
60. I follow you on twitter. @Luv2Trvls
61. The no 1 thing would be this beautiful pearl pendant necklace
63. I follow you on Pinterest.
64. I also follow you on Twitter.
66. And I follow you on Pinterest.
70. I already follow you on twitter!
71. I got myself a puppy for Christmas, she is the best gift I could get
72. My number one item is shoes!
73. I already like you on Facebook!
74. I am subscribed via email
77. The number one item on my list is a copy of my husband’s finished dissertation!
79. I subscribe to the RSS feed!
80. I like Brown Eyed Baker on Facebook!
81. And I follow Brown Eyed Baker on Pinterest!
83. A pair of jeans that really fits
ky2here at msn dot com
84. The food grinder attachment for my kitchen aid.
85. A professional photo of my 2 children would be nice!
86. I have a lot of books on my list.
87. I subscribed to Brown Eyed Baker by e-mail.
88. I follow you on facebook.
90. I subscribe to your emails!
92. My #1 gift item on my list is a Clarisonic.
93. I’m hoping for a new camera!
94. I subscribe to your feed through Google reader!
96. I would love to finally get an iPad, but as long as my kids have a great holiday, that’s really all that matters.
97. I follow you on Instagram.
98. Already following on Pinterest
99. I follow you on instagram!
100. And I follow you on Twitter!