Base rate fallacy
The base rate fallacy, also called base rate neglect or base rate bias, is an error that occurs when the conditional probability of some hypothesis H given some evidence E is assessed without taking
into account the "base rate" or "prior probability" of H and the total probability of evidence E.^[1]
In a city of 1 million inhabitants there are 100 known terrorists and 999,900 non-terrorists. The base rate probability of one random inhabitant of the city being a terrorist is thus 0.0001 and the
base rate probability of a random inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs a surveillance camera with automatic facial recognition software. The software has two failure rates of 1%:
1. if the camera sees a terrorist, it will ring a bell 99% of the time, and mistakenly fail to ring it 1% of the time (in other words, the false-negative rate is 1%).
2. if the camera sees a non-terrorist, it will not ring the bell 99% of the time, but it will mistakenly ring it 1% of the time (the false-positive rate is 1%).
So, the failure rate of the camera is always 1%.
Suppose somebody triggers the alarm. What is the chance they are a terrorist?
Someone committing the base rate fallacy would incorrectly claim that there is a 99% chance that they are a terrorist, because 'the' failure rate of the camera is always 1%. Although this seems to make sense, it is bad reasoning. The calculation below shows that the chance they are a terrorist is actually near 1%, not near 99%.
The fallacy arises from confusing two different failure rates. The 'number of non-terrorists per 100 bells' and the 'number of non-bells per 100 terrorists' are unrelated quantities, and there is no
reason one should equal the other. They don't even have to be roughly equal.
To show that they do not have to be equal, consider a camera that, when it sees a terrorist, rings a bell 20% of the time and fails to do so 80% of the time, while when it sees a nonterrorist, it
works perfectly and never rings the bell. If this second camera rings, the chance that it failed by ringing at a non-terrorist is 0%. However if it sees a terrorist, the chance that it fails to ring
is 80%. So, here 'non-terrorists per bell' is 0% but 'non-bells per terrorist' is 80%.
Now let's go back to our original camera, the one with 'bells per non-terrorist' of 1% and 'non-bells per terrorist' of 1%, and let's compute the 'non-terrorists per bell' rate.
Imagine that the city's entire population of one million people passes in front of the camera. About 99 of the 100 terrorists will trigger the alarm, and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, of whom only about 99 will be terrorists. So the probability that a person triggering the alarm is actually a terrorist is only about 99 in 10,098, which is less than 1% and very far below the initial guess of 99%.
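The counting argument above is easy to check directly. Below is a minimal Python sketch using the article's numbers (one million inhabitants, 100 terrorists, 1% error rates in each direction); the variable names are ours:

```python
# Natural-frequency version of the surveillance-camera example.
population = 1_000_000
terrorists = 100
non_terrorists = population - terrorists          # 999,900

true_positive_rate = 0.99    # bell rings when a terrorist passes
false_positive_rate = 0.01   # bell rings when a non-terrorist passes

bells_from_terrorists = true_positive_rate * terrorists           # about 99
bells_from_non_terrorists = false_positive_rate * non_terrorists  # about 9,999
total_bells = bells_from_terrorists + bells_from_non_terrorists   # about 10,098

# Of everyone who triggered the alarm, what fraction are terrorists?
p_terrorist_given_bell = bells_from_terrorists / total_bells
print(p_terrorist_given_bell)  # about 0.0098, i.e. just under 1%
```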
The naive reasoning goes so badly wrong in this example only because there are far more non-terrorists than terrorists. If the city had about as many terrorists as non-terrorists, and the false-positive rate and the false-negative rate were nearly equal, then the probability of misidentification would be about the same as the false-positive rate of the device. These special conditions do hold sometimes: for instance, about half the women undergoing a pregnancy test are actually pregnant, and some pregnancy tests give about the same rates of false positives and false negatives. In that case, the rate of false positives per positive test is nearly equal to the rate of false positives per non-pregnant woman. This is why the fallacy is so easy to fall into: the shortcut gives the correct answer in many common situations.
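The dependence on the base rate is easy to see by varying it while holding the error rates fixed. The helper function below is our own illustrative sketch (not from the original article); it applies Bayes' theorem to both the pregnancy-test-like case and the terrorist case:

```python
def posterior_positive(base_rate, sensitivity, false_positive_rate):
    """P(condition | positive test), by Bayes' theorem."""
    true_pos = sensitivity * base_rate
    false_pos = false_positive_rate * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Base rate near 50% with equal, small error rates: intuition works.
print(posterior_positive(0.5, 0.99, 0.01))     # 0.99
# Tiny base rate with the same error rates: intuition fails badly.
print(posterior_positive(0.0001, 0.99, 0.01))  # about 0.0098
```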
In many real-world situations, though, particularly problems like detecting criminals in a largely law-abiding population, the small proportion of targets in the large population makes the base rate
fallacy very applicable. Even a very low false-positive rate will result in so many false alarms as to make such a system useless in practice.
Mathematical formalism
In the above example, where P(A|B) means the probability of A given B, the base rate fallacy is the incorrect assumption that:
$P(\mathrm{terrorist}|\mathrm{bell}) \stackrel{?}{=} P(\mathrm{bell}|\mathrm{terrorist}) = 99\%$
However, the correct expression uses Bayes' theorem to take into account the probabilities of both A and B, and is written as:
$P(\mathrm{terrorist}|\mathrm{bell}) = \frac{P(\mathrm{bell}|\mathrm{terrorist})\,P(\mathrm{terrorist})}{P(\mathrm{bell})} = \frac{0.99 \cdot (100/1000000)}{(0.99 \cdot 100 + 0.01 \cdot 999900)/1000000} = \frac{1}{102} \approx 0.0098$
Thus, in the example the probability is overestimated by more than 100 times, owing to the failure to take into account the fact that there are about 10,000 times more non-terrorists than terrorists (in other words, the failure to take into account the 'prior probability' of being a terrorist).
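The Bayes computation above can also be reproduced in exact arithmetic with Python's standard `fractions` module, which confirms the value 1/102 without rounding (this check is ours, not part of the original article):

```python
from fractions import Fraction

p_bell_given_terrorist = Fraction(99, 100)
p_terrorist = Fraction(100, 1_000_000)
# Total probability of the bell: terrorists and non-terrorists both contribute.
p_bell = (Fraction(99, 100) * 100 + Fraction(1, 100) * 999_900) / 1_000_000

posterior = p_bell_given_terrorist * p_terrorist / p_bell
print(posterior)         # 1/102
print(float(posterior))  # about 0.0098
```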
Findings in psychology
In experiments, people have been found to prefer individuating information over general information when the former is available.^[2]^[3]^[4]
In some experiments, students were asked to estimate the grade point averages (GPAs) of hypothetical students. When given relevant statistics about GPA distribution, students tended to ignore them if
given descriptive information about the particular student, even if the new descriptive information was obviously of little or no relevance to school performance.^[3] This finding has been used to
argue that interviews are an unnecessary part of the college admissions process, because interviewers are unable to pick successful candidates better than basic statistics.

Psychologists Daniel
Kahneman and Amos Tversky attempted to explain this finding in terms of a simple rule or "heuristic" called representativeness. They argued that many judgements relating to likelihood, or to cause
and effect, are based on how representative one thing is of another, or of a category.^[3] Richard Nisbett has argued that some attributional biases like the fundamental attribution error are
instances of the base rate fallacy: people underutilize "consensus information" (the "base rate") about how others behaved in similar situations and instead prefer simpler dispositional attributions.
Kahneman considers base rate neglect to be a specific form of extension neglect.^[6]
References
• Bar-Hillel, M. (1980). The base-rate fallacy in probability judgments. Acta Psychologica, 44, 211-233.
• Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251.
• Nisbett, R.E., Borgida, E., Crandall, R., & Reed, H. (1976). Popular induction: Information is not always informative. In J.S. Carroll & J.W. Payne (Eds.), Cognition and social behavior, 2,
Westmont, NJ Statistics Tutor
Find a Westmont, NJ Statistics Tutor
...I am able to tutor at flexible times and locations. I am able to provide references, documentation, etc. upon request. Thank you for your interest and I hope to hear from you soon!
58 Subjects: including statistics, reading, geometry, biology
...Each lesson would probably have a little bit of exercise to help build the foundation, then focus on the troublesome topic. Most of the tutoring I have done in the past has required only a few
meetings because I am able to quickly assess a student's background in the subject and address any weak...
7 Subjects: including statistics, chemistry, physics, calculus
...While I preferred to give my time to friends without charge, I began taking clients during my sophomore year for a senior level class named Theory of Probability. These clients were fellow
students that requested my help to better understand a very difficult topic. I was glad to help them and arranged a weekly meeting to review the course material.
20 Subjects: including statistics, chemistry, physics, calculus
...I'll make sure you're ready for that big test before I leave. Sincerely, Charles H. I started taking piano lessons in 1964 in second grade and continued taking lessons from the same instructor
until my college years. I then started taking piano lessons from my college professors in 1978.
13 Subjects: including statistics, chemistry, physics, biology
...By first constructing a powerful personal statement, and then building a coherent individual story throughout the rest of the application to support and expand it, my students have been
accepted to schools such as Boston University and UPenn. Admissions counselors read hundreds of applications, ...
25 Subjects: including statistics, English, writing, piano
Related Westmont, NJ Tutors
Westmont, NJ Accounting Tutors
Westmont, NJ ACT Tutors
Westmont, NJ Algebra Tutors
Westmont, NJ Algebra 2 Tutors
Westmont, NJ Calculus Tutors
Westmont, NJ Geometry Tutors
Westmont, NJ Math Tutors
Westmont, NJ Prealgebra Tutors
Westmont, NJ Precalculus Tutors
Westmont, NJ SAT Tutors
Westmont, NJ SAT Math Tutors
Westmont, NJ Science Tutors
Westmont, NJ Statistics Tutors
Westmont, NJ Trigonometry Tutors
Nearby Cities With statistics Tutor
Ashland, NJ statistics Tutors
East Camden, NJ statistics Tutors
East Haddonfield, NJ statistics Tutors
Echelon, NJ statistics Tutors
Ellisburg, NJ statistics Tutors
Erlton, NJ statistics Tutors
Haddon Township, NJ statistics Tutors
Haddonfield statistics Tutors
Middle City East, PA statistics Tutors
Oaklyn statistics Tutors
South Camden, NJ statistics Tutors
West Collingswood Heights, NJ statistics Tutors
West Collingswood, NJ statistics Tutors
Westville Grove, NJ statistics Tutors
Woodcrest, NJ statistics Tutors
Firestone Algebra 2 Tutor
Find a Firestone Algebra 2 Tutor
...Math is important for understanding many topics in today's complex world. Good math skills start with strong base concepts. I take pre-algebra students with all starting levels, to establish a
connection to the language of mathematics.
30 Subjects: including algebra 2, reading, Spanish, English
...Through education and practical experience I have gained a diverse background in math as well as the biological and physical sciences that I can utilize to help students achieve their academic
and standardized testing goals. By training I am a scientist that has excelled at advanced courses in b...
8 Subjects: including algebra 2, chemistry, calculus, geometry
...Furthermore, as part of my tenure in the work force, I have had the opportunity to work on target tracking for the government (using complicated tracking algorithms coded into software) and
currently work on Digital Signal Processing (DSP) for a semiconductor manufacturer writing IDE software for...
47 Subjects: including algebra 2, chemistry, calculus, physics
...To work in this field, I've had to become very knowledgeable in many fields of science, math and engineering. I am also an excellent communicator, having written several technical papers and
presented to a wide range of audience. I've had 2 years working as a college teaching assistant for aero...
15 Subjects: including algebra 2, Spanish, physics, calculus
...As a Psychology major in college, I have extensive knowledge of most fields of Psychology- Introductory, Statistics, Biological/Neuroscience, Psychopathology, Social, Cognitive, Personality,
and History of Psychology. I scored in the 95th percentile on the ACT, and have recent standardized test ...
22 Subjects: including algebra 2, chemistry, English, algebra 1
Algebra with logarithms and roots - please help
March 3rd 2013, 01:32 PM #1
Mar 2013
United States
Algebra with logarithms and roots - please help
Hi, I'm trying to solve the following equation in terms of p, but finding it very difficult. Please help! Thanks in advance. - Charlie
y = exp(w + (1/2)p^2) * sqrt(exp(p^2) - 1)
[exp is the exponential function and y, w, p are variables]
Last edited by CharlesLark; March 3rd 2013 at 01:43 PM.
March 3rd 2013, 01:59 PM #2
Re: Algebra with logarithms and roots - please help
Hi CharlesLark!
What do you get if you square both sides?
Can you substitute $u=\exp(p^2)$ and solve for u?
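Reading the garbled equation as y = exp(w + p^2/2) * sqrt(exp(p^2) - 1) (an assumption on our part; it matches the substitution suggested in the reply), squaring both sides and setting u = exp(p^2) gives the quadratic u^2 - u - y^2 * exp(-2w) = 0, whose positive root yields p in closed form. A sketch with a numerical round-trip check:

```python
import math

def solve_p(y, w):
    # After squaring, with u = exp(p**2):
    #   u**2 - u - (y * exp(-w))**2 == 0
    c = (y * math.exp(-w)) ** 2
    u = (1 + math.sqrt(1 + 4 * c)) / 2   # positive root; always u >= 1
    return math.sqrt(math.log(u))

# Round trip: pick w and p, compute y from the assumed equation, recover p.
w, p = 0.3, 0.8
y = math.exp(w + p**2 / 2) * math.sqrt(math.exp(p**2) - 1)
print(solve_p(y, w))  # about 0.8
```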
Following the posting on arXiv of the Statistical Science paper of Carvalho et al., and the publication by the same authors in Bayesian Analysis of Particle Learning for general mixtures I noticed on
Hedibert Lopes’ website his rejoinder to the discussion of his Valencia 9 paper has been posted. Since the discussion involved several points
Any R packages to solve Vehicle Routing Problem?
Are there any R packages to solve Vehicle Routing Problem (VRP)?I looked around but could not find any... Any leads?VRP is a classic combinatorial optimization challenge and has been an active area
of research for operations research gurus fo...
R co-creator Ross Ihaka wins Lifetime Achievement Award in Open Source
The co-creator of R, University of Auckland Associate Professor of Statistics Dr. Ross Ihaka, was yesterday awarded the Catalyst Lifetime Achievement in Open Source Award at the 2010 New Zealand Open
Source Awards. From the announcement: Dr. Ihaka is one of the originators of the world-renowned ‘R’ programming language and software environment for statistical computing and graphics. In 2008...
Promote your favorite R functions
The 27 base and recommended libraries of the standard R 2.12 distribution together contain 3556 functions (you can check using the code posted after the jump). Many of the functions are commonly
used: c, data.frame, rnorm, lm. But some of those functions, while being extremely useful, may be less well known to many R users. Some examples I'd wish...
New R User Group in Houston
The latest local R user group to form is located in Houston, Texas. The first meeting of the Houston R Users Group is tonight at Rice University (in conjunction with the Houston chapter of the ASA).
R hackr (typo intended!) Hadley Wickham will be giving a presentation on writing R packages, and you can check out the slides on...
Mapping drug war related homicides in 2010
There have been some very good visualizations of the Wikileaks data so I decided to create one of the drug war in MexicoThe above map was made using data collected by Walter McKay, mainly from El
Universal and El Diario reports. The data is stored as a Google Map...
The ARORA guessing game
The game ARORA (A random or real array) is a website that gives you two time series at a time. Your job is to guess which series is real market data and which is permuted data. It’s fun — try it.
With some practice you will probably be able to guess which is which well … Continue reading...
Computational position in Texas
José Bernardo forwarded this announcement that sounds quite attractive (conditional upon living in a remote part of Texas!) Senior Faculty Position in Computational Statistics At Texas A&M University
As part of a recognition of the increasing importance in the modeling and computational sciences, the Department of Statistics at Texas A&M University is recruiting for a
Integer Story
We can find integers throughout life: in the weather, time, video games, money, sea level, the number line, and much more.
For example, in weather the temperature could be -5 degrees C, 6 degrees C, or 3 degrees C.
And also examples of integers in money are: $5, $6, $3.
We could find integers anywhere, and we use them sometimes without knowing. Integers help us in some things and it helps us find out things. We use integers in our daily lives. We use them when we’re
calling someone, because when we call people we dial numbers. Phone numbers are sequences of integers.
Positive and negative numbers are integers. We can find positive numbers in anything that has to do with counting and measuring. When we add a positive number, we move to the right on the number line; when we add a negative number, we move to the left. We can find negative numbers on a thermometer, on the number line, and in the weather.
A negative number is any real number that is less than zero, like -5, -8, and -2. Negative numbers are also used to describe values on a scale that goes below zero, such as the Celsius and Fahrenheit scales for temperature. Positive numbers are greater than zero, like 5, 8, and 2.
When we multiply a negative number by a positive number, we get a negative number, and the same is true for division. When we add a positive number, we move to the right on the number line. Ex: -5 + 6 = 1.
And when we subtract a positive number, we move to the left on the number line.
Ex: -5 - 6 = -11.
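These sign rules can be checked on a calculator or, for example, with a few lines of Python (our own illustration):

```python
# Adding a positive number moves right on the number line.
print(-5 + 6)    # 1
# Subtracting a positive number moves left.
print(-5 - 6)    # -11
# A negative times a positive is negative, and the same goes for division.
print(-5 * 6)    # -30
print(-30 // 6)  # -5
```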
Fortran 90
By Loren P. Meissner
PWS Publishing Company, 1995
ISBN 0-534-93372-6
There are only a few of these left, so they are being sold for not much more than the cost of shipping them from the author and shipping to you.
This text is a complete presentation of Fortran 90 features and applications. Copious exercises demonstrate the usefulness of Fortran 90 in the fields of science, statistics, applied mathematics, and
engineering. The text's terminology accurately reflects the new Fortran 90 standard, and a special prologue provides an overview of Fortran that gives first-time programmers a quick start in
understanding modern Fortran syntax.
There is an accompanying Reference Guide and even a few copies of an instructor's manual.
Features include:
• Complete coverage of Fortran 90
• A wide variety of engineering and science-related exercises
• Transition tips, interspersed in chapters, and comprehensive appendices at the back of the book instructing users how to convert existing Fortran 77 programs to the Fortran 90 standard
• Bound-in Quick Reference Guide giving users a handy working reference for Fortran programming
Please check out the F version of this book.
Has anybody ever seen something like this (optimization problem / variational calculus)
Hi all,
I'm trying to minimize the following integral: $\int_{0}^{\pi/2} \frac{\int_{0}^{x} \sqrt{r(\theta)^2 + r'(\theta)^2}\,d\theta}{\sin(x)}\,dx$ with boundary values $r(0)=1$ and $r(\pi/2)=0$. As you may have guessed, the numerator in the integrand represents the arc length in polar coordinates of the curve $(\theta, r(\theta))$. I have absolutely no idea where to start: I have tried looking into optimization and variational calculus books but wasn't lucky. Are there numerical methods which I could try? Of course, an extra requirement is that $\lim_{x\to 0^{+}} \frac{\int_{0}^{x} \sqrt{r(\theta)^2 + r'(\theta)^2}\,d\theta}{\sin(x)}$ exists and is bounded.
2 Use Fubini's Theorem to reduce to a single integral, i.e. interchange the order of integration, so you are integrating a function of $\theta$ from $0$ to $\pi/2$. This requires the indefinite
integral of $1/ \sin x$. Now you are in the standard realm of the Calculus of Variations; I'll leave you to fill in the integration details...! There is no guarantee that the Euler-Lagrange
equations will actually have a closed form solution. But, you could at least solve them numerically, as you asked. – Zen Harper Jun 7 '11 at 0:55
Thanks for your reply Zen Harper. I'm afraid I don't quite follow: I don't see how to apply Fubini's theorem since 1/sin(x) is obstructing the way. I would greatly appreciate it if you could
elaborate on this detail. Thanks! – user15626 Jun 7 '11 at 16:00
1 Answer
I don't understand what you mean by "getting in the way". Fubini's Theorem is $\int_A \int_B f(x,y)\,dy\,dx = \int_B \int_A f(x,y)\,dx\,dy$ provided $\int_{A\times B} |f(x,y)|\,d(x,y) < \infty$. Since you are assuming that the last requirement is fulfilled, we may proceed.
Your integral $\frac{\int_0^x \sqrt{r(\theta)^2+r'(\theta)^2}\,d\theta}{\sin(x)}$ can be written $\int_0^x \frac{\sqrt{r(\theta)^2+r'(\theta)^2}}{\sin(x)}\,d\theta$. From there, just follow Zen's advice...
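As a sanity check on the interchange of integrals, here is a rough numerical sketch (entirely ours). Since an antiderivative of 1/sin(x) is ln(tan(x/2)), swapping the order turns the double integral into a single integral with weight ln(cot(theta/2)). We use the test curve r(theta) = cos(theta), which satisfies the boundary values and makes the arc-length integrand identically 1; the midpoint rule and grid size are arbitrary choices:

```python
import math

def midpoint(f, a, b, n=200_000):
    # Composite midpoint rule; never evaluates f at the endpoints,
    # which matters here because of the singularities at 0.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

half_pi = math.pi / 2

# Original (double-integral) form: for r = cos, the inner integral is just x.
double_form = midpoint(lambda x: x / math.sin(x), 0, half_pi)
# Swapped (single-integral) form with weight ln(cot(theta/2)).
single_form = midpoint(lambda t: math.log(1 / math.tan(t / 2)), 0, half_pi)

print(double_form, single_form)  # both about 1.832
```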
4 The Schooling Process: Instructional Time and Course Enrollment

A number of different approaches have been taken to identify the process variables that affect student learning. One approach has
focused on effective teaching practices (Rosenshine, 1976), including the capacity of a teacher to plan and make decisions, use appropriate instructional strategies, and manage the classroom. There
is some evidence that careful planning, decisiveness, and consistency on the part of a teacher has positive effects on student learning (Emmer et al., 1980; Brophy, 1983), but most of this research
has dealt with elementary school, and further documentation is needed. A second approach has been to identify differences in teacher/ student interaction and establish the effects, if any, on student
learning. The teacher behaviors that are thought to make a difference include frequency of interaction with students, frequency of feedback, small-group versus large-group instruction, and providing
for independent work suited to individual student learning style and progress. Here again, research is not sufficiently far advanced to provide unequivocal conclusions (Berliner, 1980; Gage, 1978).
The one process variable that, again and again, has shown to be correlated with student learning is the time devoted to an area of the curriculum, usually expressed in minutes per day at the
elementary level and in course enrollment at the secondary level. Borg (1980) summarized the considerable research in this area. While he suggests that further studies are needed to determine how
large an effect quantity of schooling has on achievement, he concludes (Borg, 1980:47): "There can hardly be any doubt, however, that a significant effect is present." An important caveat, however,
is to distinguish--especially in elementary school--among time allocated for instruction, time actually given to instruction, and time that students are engaged in learning tasks. A mechanical lengthening of allocated time may have little effect on student learning (Levin, 1984)
or may even have negative consequences (Rosenshine, 1980).

INSTRUCTIONAL TIME AND STUDENT LEARNING

The effect of time spent on a subject is particularly evident in mathematics instruction. Even such
gross measures as years of instruction and hours of homework are correlated with student achievement: Table 12 displays the results of a general mathematics test given to some 28,000 1980 seniors
participating in High School and Beyond. Most of the 33 items in this test were on arithmetic, and all but 3 items dealt with mathematics generally taught before 10th grade. Evidence of even more
marked effects on achievement of taking advanced mathematics courses comes from the level-1 mathematics test given to 1982 seniors in the High School and Beyond follow-up. This test largely covered
arithmetic and 9th-grade algebra; all but two of the 28 items on the test were based on mathematics taught before 10th grade. The effects on achievement persisted even when adjusted for race, sex,
socioeconomic status (SES), and 10th-grade scores on the same mathematics test given to the same students when they were sophomores in 1980: Table 13 shows that the average score for students who
took the full sequence of high school mathematics courses is nearly a standard deviation higher than that for students who took no mathematics at the level of Algebra 1 or above, even after
adjustment for other factors affecting test scores. Evidence also comes from data summarized in Table 14, derived from a special 1975-1976 NAEP study on mathematics achievement of the nation's
17-year-olds. A mathematics test was constructed from exercises selected from the first NAEP mathematics assessment in 1972-1973 to assess basic skills in computation, elementary algebra and
geometry, and logic and measurement.

[Tables 12 and 13, on pages 85-86 of the original, are illegible in this scan.]

TABLE 14 Average Mathematics Score (Percent Correct) by Number of Years of Algebra 1, Geometry, and Algebra 2 for 17-Year-Olds, 1975-1976

Number of Years of Courses | Average Score | Black | White | All
0 | 47 | 29 | 18 | 20
1 | 59 | 37 | 24 | 26
2 | 70 | 21 | 26 | 25
3 | 82 | 13 | 32 | 29

(The Black, White, and All columns give the percent of students with 0, 1, 2, or 3 years of courses.)

SOURCE: Adapted from Jones (1984:1211).

The analyst comments (Jones, 1984:1211):

The average score for students who reported not having taken Algebra 1, Algebra 2, or geometry is seen to be 47%, whereas the average for students who had taken all three courses is 82% correct for the same mathematics exercises. This is a difference of nearly two standard deviations. The relation of mathematics achievement to courses taken is strong and clear. . . . The data . . . show a disproportionate representation of black students and
white students for differing numbers of years of high school algebra and geometry. About two thirds of black students but only 42% of white students report having taken 0 or 1 year of high school
algebra and geometry. This difference between black and white students in algebra and geometry enrollments might be responsible for a large part of the white-black average difference in mathematics
achievement scores. Jones's conjecture appears to be borne out by the previously cited results of the mathematics test (level 1) taken by the 1982 high school seniors in the High School and Beyond
follow-up. The mean test score was 51.6, with a standard deviation of 10 (see Table 13). When scores were adjusted for courses taken, the difference in unadjusted scores between males and females (adjusted for race) was reduced from 1.52 to 1.08; the difference between blacks and Asians (adjusted for sex) dropped from 12.58 to 5.25; and the difference between Asians and whites (adjusted for
sex) changed from 4.07 in favor of Asians to 1.26 in favor of whites. The strong relationship between enrollment in high school mathematics courses and test scores is likely, in part, to result from
the choices of high achievers to enroll in more mathematics courses and the choices of low achievers to enroll in fewer courses. In the data from High School and Beyond, however, senior mathematics scores are related to mathematics courses taken, even after adjusting not only for race, sex, and SES, but also for earlier (sophomore) scores on the mathematics test (see final column, Table 13); this strongly suggests
that course taking per se influences test performance. Although testing for science achievement is less common than for mathematics achievement, both Welch (1983) and Wolf (1977) found positive
correlations between science test scores and semesters of science or course exposure. The correlations are somewhat lower than in mathematics, possibly because of the less sequential character of the science curriculum. Given the robust findings regarding this variable and the need to limit the number of indicators, the committee selected instructional time given to a subject to stand as a proxy for schooling processes in general. Even so, measurement of this variable is not simple. The complications include: discrepancies between time schedules for a subject in school and time actually devoted to instruction; time used for homework; and the different organization of elementary and secondary education, requiring different approaches to measuring time spent on a subject.
These issues are discussed below.

Allocated Versus Actual Instructional Time

A recent research review (Karweit, 1983) on the time used for instruction concludes that, at most, instruction in
elementary school may occupy 60 percent of the 6-hour school day; this is reduced further by student absences and student inattention. This loss of time from instruction is not a new phenomenon;
some 20 years ago, P. W. Jackson (1965) published a landmark description of life in the classroom that vividly drove home this point. Classroom observation has continued to document the extent to which students are actually engaged in learning during instruction. An example of such observation done on 21 5th-grade classrooms in California is given in Table 15, which shows that students are inattentive for as much as one-third of instructional time.

TABLE 15 Allocated and Pupil Engaged Time in Mathematics for Four 5th-Grade Classes

                                    Class A   Class B   Class C   Class D
Number of days data collected       73        89        91        93
Average minutes allocated daily     23        28        61        57
Percent of time students engaged    74        80        80        66
Engaged minutes per day             17        22        49        38

SOURCE: Berliner (1978:21) as cited in Romberg and Carpenter (1985).

With respect to absences, the same study found that, of the 180 days in a school year, 30 days are usually lost to classroom instruction due to field trips, student illness, a Christmas play, and the like (Berliner
et al., 1978), although field trips and some extracurricular activities may enhance learning. At the same time, poor use of instructional time inhibits the effectiveness of teachers. Karweit (1983)
points out that, even under the most favorable assumptions of minimal school absence and loss of instructional time, students are occupied with actual learning only a little more than one-half their
scheduled time in school; for some students, it may be less than one-third of the time. While there have been fewer systematic studies of time use in secondary school, anecdotal information gives
little reason to think that the situation is much different (see, e.g., Boyer, 1983).

Homework

Homework is an inexpensive way of extending instructional time. In addition to the data from High
School and Beyond, evidence on its relationship to student performance also comes from the first IEA mathematics assessment. Husen (1967) reports a strong positive correlation between mean
mathematics scores for all countries and mean hours spent on all homework as well as on mathematics homework specifically. According to the IEA findings, a bit more than one-third of all homework
time, on average, is spent on mathematics in all countries.

TABLE 16 Hours per Week Scheduled for Mathematics: 8th Graders and Mathematics Students in Senior Year (1964 Data)

                 Mathematics Instruction          Mathematics Homework
                 8th Graders     Seniors          8th Graders     Seniors
Country          Mean    SD      Mean    SD       Mean    SD      Mean    SD
Australia        5.2     .6      6.9     1.6      2.5     1.6     6.1     3.3
Belgium          4.7     1.0     7.4     1.1      3.7     2.5     8.7     4.6
England          4.0     .8      4.4     1.3      1.8     .9      4.1     1.9
Finland          3.0     .2      4.0     0        2.9     2.2     6.6     3.5
France           4.4     .8      8.9     .5       3.4     1.9     9.6     3.5
Germany          3.9     .6      4.2     .5       3.4     1.9     5.1     2.8
Netherlands      4.6     1.5     5.1     .3       2.6     1.9     5.7     3.4
Israel           4.1     .5      5.0     .3       4.4     2.6     7.5     3.7
Japan            4.5     .5      5.4     1.1      3.0     1.8     5.2     4.3
Scotland         4.6     1.0     6.2     1.5      2.2     1.6     4.1     2.3
Sweden           3.8     .9      4.6     1.6      1.9     1.3     4.9     2.9
United States    4.6     1.3     5.0     .9       3.1     2.3     4.1     2.4

NOTE: Hours of instruction may refer to periods somewhat shorter than 60 minutes.

SOURCE: Husen (1967, Vol. I:278).

These findings, based on student self-reports from the 1964 mathematics assessment, indicate that 8th graders in the United States spent about 3.1 hours per week on mathematics homework, slightly above the average for all countries (see Table 16). For
mathematics students in the last year of secondary school, however, the required hours of homework in most countries doubled between 8th grade and 12th grade, while in the United States the increase
was only from 3 to 4 hours per week. This difference may have contributed to the poorer performance of older U.S. students on the IEA tests. Data on homework were again collected by IEA from
students and teachers in 1981-1982 during the Second International Mathematics Study; the teacher responses have been analyzed. For U.S. 8th graders, teachers estimated that the time typically spent
on assigned homework was 2.3 hours per week; 75 percent of the students were estimated to spend 3 hours or less. For 12th graders, teachers reported that they expected an average 4 hours of homework
per week from students in precalculus classes and 5 hours from students in calculus classes (Travers, 1984). Most other information available on the amount of homework done by students is not
specific as to subject matter. Studies done on high school seniors in 1972 and
1980 show that time spent on all homework dropped during this period: the number of seniors who reported that they spent at least 5 hours per week on homework decreased from 35.2 to 24.5 percent,
with decreases greatest in the south (36 to 21 percent) (National Center for Education Statistics, 1984c). The average amount of homework time reported was 3.9 hours per week, down from 4.3 hours in
1972, although the amount of homework effort reported by students in academic programs remained virtually constant at 5.1 hours (National Center for Education Statistics, 1984c). In contrast,
according to a recent study (Fetters et al., 1983), six times as many seniors in Japan spend more than 10 hours per week on homework as in America (36 compared with 6 percent) and two-thirds of the
Japanese students spend at least 5 hours on homework compared with one-fourth in America. Research evidence indicates that the way homework assignments are treated affects the contribution of
homework to student achievement (Walberg, 1985). Checks on completion, discussion in class, and correction by the teacher greatly increase the value of homework. Hence, attempts to track hours of
homework should not only record the subject in which the homework is assigned, but also the way homework is used to support classroom instruction.

MEASURING INSTRUCTIONAL TIME

The organization of
high school according to curriculum area permits tracking instructional time through course enrollment, at least as a first approximation. For elementary school, studies have been made of the time
spent on specific subjects, documented by classroom observation to determine actual versus allocated instructional time. The method to be used for tracking time for grades 7 and 8 varies depending on
their organization.

Elementary School

Recent national data on time scheduled for mathematics in grades 1-6 come from three sources, one of which--the Weiss (1978) survey--also collected information
on science instruction. Data reported by teachers, shown in Table 17, indicate that time spent in teaching mathematics and science increases somewhat in the upper elementary grades; average time
increases from 41 minutes in grades K-3 to 51 minutes in grades 4-6 for mathematics and from 17 to 28 minutes for science.

[TABLE 17: Average time per day spent teaching mathematics and science, by grade range, as reported by teachers (Weiss, 1978)]

Collecting information on time spent on science instruction in grades 1-6 is difficult because there is no common
understanding on what subjects in elementary school are actually considered part of science. With the coming of more work using computers, mathematics will also become more difficult to define. The
second source of information is the Sustaining Effects Study (Wang et al., 1978), which examined the nature and effectiveness of Title I compensatory education programs. The data from this study
show more time spent on mathematics per day than do the Weiss data, ranging from 47 to 68 minutes. One striking finding is a lowered emphasis on reading in the upper grades (see Figure 2).
Information from the previously mentioned California study (Berliner et al., 1978) shows the range of time allocated to mathematics in 5th grade to be from 23 to 61 minutes per day (see Table 15,
above); about the same amount of variation was observed in 2nd grade. According to these data, students in one classroom spent nearly three times as much time on mathematics as did students in
another class of the same grade. Similar variability has been observed in time allocation studies over the past 60 years. Because several of these studies recorded their methodology with great
care, it is possible to compare allocation of instructional time over the last 100 years (Borg, 1980); see Figure 3. It is interesting that the time allocated to mathematics instruction has stayed
relatively stable, considering the general decrease in time devoted to all academic instruction. The amount of time scheduled for mathematics instruction in 8th grade is approximately the same as
that in elementary school, according to the 1964 IEA data. Comparison with 11 other industrialized countries shows that the mean hours of mathematics instruction reported for the United States were
exceeded only in Australia and Belgium; there was greater variation around the mean in the United States than in most other countries (see Table 16, above). Similar information, including time spent
on specific topics, was collected in 1981-1982 during the Second International Mathematics Study. Preliminary results for the United States indicate that, while mathematics is generally taught 5
periods per week in 8th grade, class length can vary from 40 to 60 minutes. Thus, while the median number of clock hours of mathematics instruction per year is 145, the range is from 115 to 180
hours (Travers, 1984).
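The clock-hour arithmetic behind these figures is straightforward: weekly periods times class length, accumulated over the school year. A minimal sketch in Python; the 36-week year used here is an assumption, not a figure from the report, and the reported low of 115 hours implies slightly fewer effective weeks of instruction.

```python
def clock_hours_per_year(periods_per_week, minutes_per_period, weeks=36):
    """Convert a weekly class schedule into clock hours of instruction per year."""
    return periods_per_week * minutes_per_period * weeks / 60

# Mathematics taught 5 periods per week; class length drives the yearly total.
short_classes = clock_hours_per_year(5, 40)  # 40-minute periods
long_classes = clock_hours_per_year(5, 60)   # 60-minute periods
print(short_classes, long_classes)
```

Varying the `weeks` parameter shows how a shorter effective year pulls the total down toward the reported 115-hour minimum.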
FIGURE 4 Courses reported taken in grades 10-12 by 1972, 1980, and 1982 high school seniors. Data from High School and Beyond and the National Longitudinal Study of 1972 Seniors. SOURCE: NCES
(1984:2-6); Wisconsin Center for Education Research (1984). Preliminary data for 1982 seniors (Wisconsin Center for Education Research, 1984) are also given. Not much change is apparent over the 2 years, although the enrollment gap between males and
females in calculus and total number of years of mathematics taken seems to be narrowing somewhat. As Figure 4 shows, in 1980, students reported taking an average of 3-1/2 semesters of science in
grades 10-12, about the same as in 1972. The amount of mathematics
reported has increased by about half a semester to 4+ semesters in 1980. Since almost all students report that they take mathematics in grade 9 as well, this means that, on average, students
report that they take more than 3 years of mathematics in secondary school. Since about three-fourths of all students take a science course in grade 9, high school graduates report that on average
they will have taken nearly 2-1/2 years of science. Some of the increase in mathematics enrollment may be due to the fact that, from 1972 to 1980, high school remedial mathematics courses increased
from 4 to 30 percent (National Center for Education Statistics, 1984b); however, data on college-bound students (see Figure 7, below) indicate that enrollment increased in the higher-level courses as
well. Preliminary data on course enrollments reported by 1982 seniors (Wisconsin Center for Education Research, 1984) show little change in mathematics or science since 1980 (see Figure 4). A recent
study by the National Center for Education Statistics (1984b) examined enrollment data from a sample of over 12,000 transcripts of the 1982 HSB high school seniors (see Table 20). As noted above, the
transcripts tended to reflect somewhat less course work taken than the seniors had reported. The transcripts of the 1982 seniors showed, on average, 2.2 years of science and 2.7 years of mathematics
taken during grades 9-12, rather than the 2.5 years of science and 3+ years of mathematics reported by the students themselves. Differences by selected student characteristics are also shown in Table
20. Data from National Assessment of Educational Progress (1983), shown in Table 21, appear to confirm that increases in mathematics enrollment may have come about in part through increased
enrollment in general and remedial mathematics, but there has also been a sizable increase of enrollment in computer courses. It should be noted that differences in mathematics enrollment between
whites and blacks and males and females persist, although they have narrowed somewhat (National Assessment of Educational Progress, 1983): in 1982, the percentage of students taking at least one-half
year of trigonometry was 14.9 for whites and 8.2 for blacks, 15.0 for males and 12.7 for females; for precalculus/calculus, it was 4.4 for whites and 2.8 for blacks, 4.7 for males and 3.6 for
females; for computer courses, it was 9.6 for whites and 11.3 for blacks, 11.1 for males and 8.6 for females.
TABLE 20 Average Number of Years of Science and Mathematics in Grades 9-12 by 1982 Seniors, by Selected Characteristics of Students

Subgroup                 Science   Mathematics   Sample Size
All students             2.2       2.7           12,116
Sex
  Male                   2.4       2.7           5,914
  Female                 2.1       2.6           6,202
Race/ethnicity
  Hispanic               1.9       2.4           2,420
  Black                  2.1       2.6           1,599
  American Indian        2.0       2.3           173
  Asian American         2.7       3.2           327
  White                  2.3       2.7           7,497
High school program(a)
  Academic               2.9       3.3           5,356
  General                2.1       2.5           3,710
  Vocational             1.7       2.2           2,744
Region
  New England            2.6       3.0           623
  Middle Atlantic        2.6       2.9           2,154
  South Atlantic         2.3       2.7           1,673
  East south central     2.2       2.5           562
  West south central     2.3       2.8           1,334
  East north central     2.0       2.5           2,571
  West north central     2.3       2.7           901
  Mountain               2.1       2.4           543
  Pacific                1.8       2.6           1,755

NOTE: Transcript data from High School and Beyond.
(a) Based on student self-reports in 1980.

SOURCE: National Center for Education Statistics (1984b).

Enrollment results derived from the 1982 High School and Beyond follow-up data are quite similar
to the NAEP data.

TABLE 21 Percentages of 17-Year-Olds Who Have Completed at Least One-Half Year of Specific Courses

Course                            1978    1982
General or business mathematics   45.6    50.0
Pre-algebra                       45.8    44.3
Algebra                           72.1    70.9
Geometry                          51.3    51.8
Algebra 2                         36.9    38.4
Trigonometry                      12.9    13.8
Pre-calculus/calculus             3.9     4.2
Computer science                  5.0     9.7

SOURCE: National Assessment of Educational Progress (1983:3)

A recent study (Welch et al., 1983) of enrollment in science courses in grades 7-12, including private schools, found that 56 percent of students in grades 10-12 were enrolled
in science in 1981-1982, up 4 percent from 1976-1977; the percentage of students taking science in grades 7-9 has remained relatively stable at 86 percent of the total population. While recent trends
may be encouraging, science enrollments are still much lower than they were in the early 1960s, when science and mathematics were emphasized in the schools in response to the launching of Sputnik by
the Union of Soviet Socialist Republics. Total enrollment in eight science courses-- general science, biology, botany, zoology, physiology, earth science, chemistry, and physics--in grades 9-12
between 1949 and 1982 as a percentage of all students enrolled is shown in Figure 5; these courses make up about three-fourths of the total science enrollments in these grades. There are sizable
regional variations in the percentage of students taking science, with enrollments consistently higher in the northeast than elsewhere (see Figure 6). For all regions except the northeast, 10th
grade is the last year that the preponderance of students take a science course. The preparation of college-bound students is of interest because it is related to their future education and choice of
majors, and thus to the potential future supply of scientists and engineers.

FIGURE 5 Percentage of total enrollment in eight science courses (general science, biology, botany, zoology, physiology, earth science, chemistry, and physics)--grades 9-12, 1948-1949 to 1981-1982. SOURCE: Welch et al. (1983).

Enrollment data for students participating in the Admissions Testing Program of the College Board (1973-1984)--about one-third of all high school seniors--show
considerably more mathematics and science for these students than for all students: the mean number of years of mathematics studied in grades 9-12 is 3.62, and mean number of years of science studied
is 3.25. The number of years of mathematics and of physical science being studied by these students has increased steadily between 1973 and 1983 (see Figure 7). Males still enroll in more mathematics
courses than do females, although the gap, at least for college-bound students, has been narrowing: in 1973, 60 percent of males and 37 percent of females taking the Scholastic Aptitude Tests (SATs)
reported expecting to complete 4 or more years of mathematics in high school; in 1983, the percentages were 71 for males and 57 for females.
FIGURE 6 Percentage of grade enrolled in science courses (grades 7-12), by region (Northeast, Southeast, Central, West); special survey of 16,000 students in 600 secondary schools.
FIGURE 7 Number of years of selected subjects (including mathematics and biological sciences) studied by college-bound students taking the Scholastic Aptitude Tests, 1974-1984. SOURCE: Admissions Testing Program of the College Board (1973-1984).
FINDINGS

Instructional Time and Student Learning

· The amount of time given to the study of a subject is consistently correlated with student performance as measured by achievement tests, at the elementary school as well as at the secondary school level.
· Time spent on homework is also correlated with student achievement. The attention paid to homework by the teacher affects its contribution to student performance.

Measuring Instructional Time

Elementary School

· For elementary schools, not enough data are available to discern clear trends over the last 20 years with respect to amount of instructional time spent on mathematics and science. On average, about 45 minutes a day are spent on mathematics and 20 minutes on science. Existing information, however, points to great variability from class to class in the amount of time given to instruction in general and to each academic area specifically.

High School

· The average high school senior graduating in the
early 1980s has taken about 2-3/4 years of mathematics and 2-1/4 years of science during grades 9-12.
· Compared with 20 years ago, average enrollments of high school students in science have declined. While this trend now appears to be reversing, enrollments have not returned to the level of the early 1960s.
· High school enrollments in mathematics have increased over the last decade by about a semester.
· College-bound students are taking more mathematics and physical science courses in secondary school than they did 10 years ago, and the increases were continuous throughout that period. The gap in enrollment between
males and females in advanced mathematics courses is narrowing.
· A number of problems attend enrollment data currently available: uncertainties generated by using self-reports, differences in questions and method from survey to survey, and ambiguities created by similar course titles in mathematics that refer to different content or different levels of instruction.

CONCLUSIONS AND RECOMMENDATIONS

Elementary School

Measures of Instructional Time

· The average amount of time per week spent on mathematics instruction and on science instruction should be measured periodically for
samples of elementary schools. This measure would serve as an indicator of length of exposure to pertinent subject matter; values can be compared for different years. Care must be taken, however, to
ensure common understandings in collecting measures of time as to what constitutes science or mathematics instruction. Time given to mathematics or science, expressed as a percent of all
instructional time, would indicate the priority given to these fields.
· Efficiency of instruction should be assessed by comparing allocated time with instructional time and with time that is
actually spent on learning tasks that appear to engage students, as established by observation.
· Time spent on science and mathematics instruction in elementary school should be tracked on a sample
basis at the national, state, and local levels. Logs kept by teachers could be used for this purpose, with selective classroom observation employed to check their accuracy.

Improving Methods for Collecting Information

· Time allocated by the teacher to instruction is not equivalent to time actually spent by the student. Classroom observation is needed to differentiate between the two. Time
spent on such different components of instruction as laboratory work, lecturing, and review of text or homework may also affect student outcomes. Case studies that document use of instructional time
are expensive, but this variable has proven to be a sufficiently potent mediator of learning that the investment appears warranted.
· Experimentation and research should be carried out to develop a proxy
measure for time spent on instruction that would permit collecting the pertinent information at reasonable costs.
· Further documentation is needed to establish the variability of time spent on
instruction over classes and over calendar time. The results of such documentation should serve to establish the extent and periodicity of data collection needed for this indicator.

Secondary School

Measures of Course Enrollment

· For grades 7 to 12, enrollments in mathematics and science courses at each grade level and cumulatively for the 6 years of secondary school or for the 3 or 4 years of
senior high school should be systematically collected and recorded. (See the pertinent recommendation in the section on content in Chapter 2.) Alternatively, the mean number of years of mathematics
or science taken or percentages of students taking 1, 2, or 3 or more years of such courses can be used as a measure.
· The disparities in mathematics and science enrollment among various
population groups warrant continued monitoring, so that distributional inequities can be addressed. National data on student enrollments collected in connection with the periodic surveys recommended
above may be insufficient for this purpose. States should consider biennial or triennial collection of enrollment data by gender, by ethnicity, and by density of the school population.

Improving Measures of Course Enrollment

· Comparisons of enrollment over time are likely to be of great interest, but high-quality data are needed. Obtaining such data requires consistency in the design of
surveys, data collection, and analysis. It also requires reduction of current ambiguities, for example, using a standardized system for describing courses, relying on transcripts or school enrollment
logs rather than on student self-reports, and sampling a comparable universe from study to study.
· The periodic studies of high school students have provided useful information, but greater effort should be
directed toward reducing methodological dissimilarities. Also, the time between studies sometimes has been too long. Surveys of the type represented by High School and Beyond and NAEP should be
repeated no less than every 4 years.
· Time spent on homework in mathematics and science should be documented at all levels of education. Studies need to record how homework is used to support in-class instruction in order to prompt the use of better measures of total learning time in each grade.

Assessing the Effects of Policy Changes

· Many states are increasing requirements for high
school graduation; some state university systems are increasing requirements for admission. The effects of these policy changes on student enrollment in high school mathematics and science courses
and on the content of these courses should be monitored.
ALEX Lesson Plans
Subject: Mathematics (1)
Title: Let's Throw Paper!! Addition Game
Description: Students will throw addition problems or answers (on paper) across the classroom and find the matching problem or answer! Students will use mental math to compute the matching answer.
Students will quietly walk around the classroom to find the person with the matching paper. Let’s throw math around! This is a College- and Career-Ready Standards showcase lesson plan.
Subject: Mathematics (1), or Technology Education (K - 2)
Title: Adding with the Associative Property
Description: Students will learn ways to add using the associative property of addition. Students will view a PowerPoint presentation introducing the associative property of addition.
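The property this lesson introduces can be shown in a few lines of code. This is a minimal illustration only; the function names are invented here and are not part of the lesson plan.

```python
def grouped_left(a, b, c):
    """Add the first two numbers, then the third: (a + b) + c."""
    return (a + b) + c

def grouped_right(a, b, c):
    """Add the last two numbers first: a + (b + c)."""
    return a + (b + c)

# The associative property says the grouping does not change the sum.
for a, b, c in [(1, 2, 3), (4, 0, 5), (7, 8, 2)]:
    assert grouped_left(a, b, c) == grouped_right(a, b, c)
print(grouped_left(1, 2, 3))
```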
Subject: Mathematics (1)
Title: Cookie Jar Subtraction
Description: In this language development subtraction lesson, English Language Learners will learn and practice using vocabulary and sentence structures for subtraction. The lesson incorporates
technology, hands-on manipulatives, and multiple opportunities for interaction to engage students in the math content. The format follows the Sheltered English Instructional Protocol (SIOP).
Subject: Mathematics (1), or Technology Education (K - 2)
Title: You Can Add!
Description: Students will learn ways to add using the associative property of addition. Students will view a PowerPoint presentation to introduce associative property of addition.
Subject: Mathematics (1 - 4), or Social Studies (2)
Title: How Many Times Did You Add That?
Description: After watching the video clip of the Hershey's plant, students will use grid paper to investigate multiplication as repeated addition. This lesson plan was created by exemplary Alabama
Math Teachers through the AMSTI project.
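The idea the grid-paper activity builds on, that a product is a repeated sum, can be sketched directly. This snippet is illustrative and not part of the lesson materials.

```python
def repeated_addition(addend, times):
    """Multiply by adding `addend` to a running total `times` times."""
    total = 0
    for _ in range(times):
        total += addend
    return total

# 4 rows of 6 squares on grid paper: 6 + 6 + 6 + 6, the same result as 6 * 4.
print(repeated_addition(6, 4))
```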
Subject: Mathematics (K - 1), or Science (K - 1), or Technology Education (K - 2)
Title: To Push or Pull, That Is The Question?
Description: In groups, students will learn to identify pushes and pulls. Students will learn how a push or pull will affect various items. This lesson plan was created as a result of the Girls Engaged
in Math and Science University, GEMS-U Project.
Subject: Mathematics (K - 2), or Technology Education (3 - 5)
Title: We Are Family!
Description: This is a fun way to practice addition and subtraction fact families. The students will enjoy matching facts and making a happy home. This lesson plan was created as a
result of the Girls Engaged in Math and Science University, GEMS-U Project.
Subject: English Language Arts (2), or English Language Arts (2), or Mathematics (K - 3), or Technology Education (K - 2)
Title: "No More Money Trouble"
Description: This lesson will allow students to identify and count money. They will enjoy playing with coins that look like real money. This lesson is guaranteed to motivate students through the use
of hands-on and cooperative group activities. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.
Subject: Mathematics (1)
Title: Cracker Math (Commutative Property)
Description: During this lesson students will listen to the story FISH EYES: A BOOK YOU CAN COUNT ON by Lois Ehlert and use concrete objects to explore the commutative property of addition. During
this lesson students will be guided through exercises to strengthen knowledge that the order of the addends has no effect upon the sum. In mathematical terms, a+b=b+a. They will also use the internet
to practice what they have learned during this lesson.
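The identity this lesson targets, a+b=b+a, can be checked mechanically for any pair of addends. A minimal illustrative sketch, with names invented for this example:

```python
def sums_match(a, b):
    """Check that swapping the addends leaves the sum unchanged: a + b == b + a."""
    return a + b == b + a

# Pairs of fish-cracker counts; every ordering gives the same total.
pairs = [(3, 5), (0, 9), (7, 2)]
print(all(sums_match(a, b) for a, b in pairs))
```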
Subject: Mathematics (1 - 3), or Technology Education (3 - 5)
Title: Multiplication Facts, Facts, Facts and Some Reese's Pieces for Snack!!
Description: This lesson teaches the basic math multiplication facts in a fun way so that students will want to remember them. Knowledge of all multiplication facts will help with learning subsequent
skills in longer multiplication, division, fractions, etc.
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Calculating Patterns
Description: In this lesson, one of a multi-part unit from Illuminations, students use an Internet-based calculator that is linked with an interactive Hundred Chart to create, extend, and record
numerical patterns in different ways. By connecting the two representations, students observe the numerical patterns as they are created.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
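The calculator-and-hundred-chart patterns the lesson describes amount to repeatedly adding a constant. A small illustrative sketch; the function name and parameters are assumptions, not part of the lesson.

```python
def skip_count(start, step, chart_max=100):
    """Generate the numbers a constant-add calculator would land on,
    staying within a 1-100 hundred chart."""
    pattern = []
    value = start
    while value <= chart_max:
        pattern.append(value)
        value += step
    return pattern

print(skip_count(5, 5))   # counting by fives: 5, 10, 15, ..., 100
print(skip_count(3, 10))  # a hundred-chart column: 3, 13, 23, ..., 93
```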
Subject: Health,Mathematics
Title: Try for Five
Description: In this lesson, one of a multi-part unit from Illuminations, students explore the many ways to decompose numbers and then build on their knowledge of addition and subtraction to find
missing addends. They also visit a Web site related to the food pyramid.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Subject: Mathematics
Title: Fact Family Fun
Description: In this lesson, one of a multi-part unit from Illuminations, the relation of addition to subtraction is explored with fish-shaped crackers. The students search for related addition and
subtraction facts for a given number and investigate fact families when one addend or the difference is 0.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
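A fact family of the kind the lesson explores can be generated programmatically. This sketch is illustrative and uses invented names:

```python
def fact_family(a, b):
    """Return the related addition and subtraction facts for addends a and b."""
    total = a + b
    facts = {f"{a} + {b} = {total}", f"{b} + {a} = {total}",
             f"{total} - {a} = {b}", f"{total} - {b} = {a}"}
    return sorted(facts)

print(fact_family(3, 5))
# When the two addends are equal, the family collapses to fewer distinct facts:
print(fact_family(4, 4))
```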
Subject: Mathematics
Title: Addend Pairs to 12 Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, students explore pairs of numbers that add to 12 or less by playing a game. They then add to their personal addition chart
and prepare a new chart that allows access to the facts they do not know.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Subject: Mathematics
Title: Numbers Many Ways Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, students work with subtraction at the intuitive level as they explore number families and ways to decompose numbers to 10.
They also identify members of fact families.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Subject: Mathematics
Title: Block Pounds Add Bookmark
Description: In this lesson, from Illuminations, students explore the use of variables as unknowns as they solve for the weights of objects using information presented in pictures. They also model
situations that involve the addition and subtraction of whole numbers, using objects, pictures, and symbols.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Subject: Mathematics
Title: How Many Under the Shell? Add Bookmark
Description: This student interactive from Illuminations allows students to discover basic addition and subtraction facts. When Okta hides bubbles, students try to determine the number under the shell.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Subject: Mathematics
Title: Finding Sums to Six Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, students discover the role of the additive identity and explore sums to six. They continue to fill in their personal addition chart.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Subject: Mathematics
Title: Counting to Find Sums Add Bookmark
Description: This lesson, one of a multi-part unit from Illuminations, focuses on the counting model for addition and begins with reading a counting book. Students model the numbers with counters as
the book is read. Then they count the spots on each side of a domino and write in vertical and horizontal format the sums suggested by dominoes. Finally, the students illustrate a domino and record a
sum it represents for their portfolio. Several pieces of literature appropriate for use with this lesson are suggested.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Subject: Mathematics
Title: Finding Fact Families Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, the relationship of subtraction to addition is introduced with a book and with dominoes. Then, the children explore the
concept of missing addends. They also add cards with sums of 4 to their individual set of triangle-shaped flash cards. Several pieces of literature appropriate for use with this lesson are suggested.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Subject: Mathematics
Title: Some Special Sums Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, students practice doubles and doubles-plus-one addition facts. They record their current level of mastery of the addition
facts on their personal addition chart.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Subject: Mathematics
Title: How Many More? Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, students write subtraction problems, model them with sets of fish-shaped crackers, and communicate their findings in words
and pictures. They record differences in words and symbols. The additive identity is reviewed in the context of comparing equal sets. Several pieces of literature appropriate for use with this lesson
are suggested.
Thinkfinity Partner: Illuminations
Grade Span: K,PreK,1,2
Absolute Value Inequality Help
How do I solve this? Been stuck on it forever! $|x+2|\geq{x\over{x+2}}$
You need to be careful if you're squaring both sides of an inequality: $0.5 > -3$ $\Rightarrow 0.25 > 9$? I don't think so. If $x = -1.5$, then $|x+2| = 0.5$ and $\frac{x}{x+2} = -3 \dots$ ??? I think instead, you'll need to look at the signs of $(x+2)$ and $|x+2|$ in the cases $x>-2$ and $x<-2$, and consider each one as a separate inequality. Have you tried this approach? Grandad
Last edited by Grandad; January 11th 2009 at 11:12 PM. Reason: Simplified solution
1. The domain is $d = \mathbb{R} \setminus \{-2\}$
2. $|x+2| = \left\{ \begin{array}{l} x+2,\ x > -2 \\ -(x+2),\ x < -2\end{array}\right.$
3. $x+2 \geq \dfrac{x}{x+2}\ ,\ x > -2$: $(x+2)^2 - x \geq 0 ~\implies~ \left(x+\frac32\right)^2 + \dfrac74 \geq 0$ is true for all $x$ because both summands on the LHS are positive. Therefore $x > -2$.
4. $-(x+2) \geq \dfrac{x}{x+2}\ ,\ x < -2$: $-x^2 - 5x - 4 \bold{{\color{red}\leq}} 0 ~\implies~ (x+1)(x+4) \bold{{\color{red}\geq}} 0 ~\implies~ x \geq -1 \wedge x \geq -4 ~\vee~ x \leq -1 \wedge x \leq -4$. Therefore $x \leq -4$.
Thanks o_O! EDIT: I've removed my mistake. Corrections in red!
Last edited by earboth; January 12th 2009 at 12:47 AM. Reason: corrections
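As a quick sanity check on the case analysis above, the claimed solution set, $x > -2$ or $x \leq -4$, can be spot-checked numerically (a sketch; the test points are chosen to cover every region, including the boundary $x = -4$, while avoiding the excluded point $x = -2$):

```python
def claimed(x):
    # solution set from the case analysis: x > -2 or x <= -4
    return x > -2 or x <= -4

def inequality_holds(x):
    # the original inequality |x+2| >= x/(x+2)
    return abs(x + 2) >= x / (x + 2)

# points in and around each region (x = -2 itself is outside the domain)
test_points = [-10.0, -5.0, -4.5, -4.0, -3.0, -2.5, -1.99, -1.0, 0.0, 3.0, 10.0]
results = [(x, inequality_holds(x), claimed(x)) for x in test_points]
```

At every test point the inequality holds exactly when the case analysis says it should; note in particular that $x = -4$ gives equality ($|{-4}+2| = 2 = \frac{-4}{-2}$), so it belongs to the solution set.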
The sum of three numbers is . The second number is times the third. The third number is less than the first . What are the numbers?
I think your question is not complete.. :)
oops! sorry haha here The sum of three numbers is 96. The second number is 3 times the third. The third number is 6 less than the first. What are the numbers?
let the three numbers be x, y, z. From the first sentence, "The sum of three numbers is 96", how do you say this in x, y, z?
not sure I understand what you mean by how we say this in x, y, z
let's say that the three numbers are x, y, and z. The first sentence says "The sum of three numbers is 96", so we can write the sentence as "x+y+z = 96"; that's what I mean.. :) do you understand?
oh ok yesss sorry haha so x, y, and z would be 32 because 96/3 = 32
wait..., we have no information that x, y, and z are equal... from the second sentence, we have "The second number is 3 times the third". If y is the second number, and z is the third number, how do you write this in math? (in y and z)
3z=y ?
correct... :) now, from the third sentence, we have "The third number is 6 less than the first". Let x be the first number and z be the third number; how do you write it in terms of x and z?
okay.., x - 6 = z, so x = z + 6, do you get it so far?
now, we have three equations: x+y+z = 96, 3z = y, x = z + 6. Now substitute (change) the value of x and y in the first equation using the second and third equations. What do you get?
yes =) however i feel like that was the easy part.. and the hard part is coming =o
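Carrying the substitution through gives the answer; a short Python sketch of the remaining steps:

```python
# x + y + z = 96,  y = 3z,  x = z + 6
# substitute x and y into the first equation:
#   (z + 6) + 3z + z = 96  ->  5z + 6 = 96  ->  z = 18
z = (96 - 6) / 5
y = 3 * z
x = z + 6
# check against all three original sentences
assert x + y + z == 96 and y == 3 * z and x == z + 6
```

This gives x = 24, y = 54, z = 18.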
Need help with physics project involving force of friction of various surfaces.
Hey, I have to do a project in which I am going to roll a soccer ball across different surfaces such as turf, grass, concrete, etc., and calculate the force of friction of each of these surfaces. I believe that I need to find the initial force of the ball, then the final, and subtract to find the force of friction. The thing is, I have no idea how to conduct the actual experiment.
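One simple way to run the experiment is to measure the ball's release speed and the distance it rolls before stopping on each surface, then back out an effective friction coefficient from constant-deceleration kinematics: v0^2 = 2*a*d with a = mu*g. This is a sketch under the simplifying assumption that rolling resistance acts like a constant friction force; the measurements below are hypothetical:

```python
G = 9.81  # gravitational acceleration, m/s^2

def effective_mu(v0, d):
    """Effective friction coefficient from initial speed v0 (m/s)
    and stopping distance d (m): mu = v0^2 / (2 * g * d)."""
    return v0**2 / (2 * G * d)

# hypothetical trial: ball released at 3 m/s stops after 12 m on grass
mu_grass = effective_mu(3.0, 12.0)   # about 0.038
```

Repeating the same release speed on each surface and comparing the resulting mu values (longer roll means lower effective friction) would let you rank turf, grass, and concrete.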
st: RE: negative binomial models with large fixed effect group size
From "Corey Phelps" <cphelps@u.washington.edu>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: negative binomial models with large fixed effect group size
Date Mon, 9 Apr 2007 10:34:06 -0700
Hausman, Hall and Griliches (1984) is often cited as having developed (what
has become) the conventional negative binomial model for panel data. Allison
and Waterman (2002) recently criticized Hausman et al.'s (1984) conditional
negative binomial fixed effects model as not being a "true" fixed effects
method in that it does not control for all time invariant covariates.
According to Allison and Waterman, HHG did not formulate a true fixed
effects model in the mean of the random (dependent) variable. Their
formulation layers the fixed effect into the heterogeneity portion of the
model and not the conditional mean. This portion is then conditioned out of
the distribution to produce the HHG model that is estimated. Allison and
Waterman (2002) developed an unconditional negative binomial model that uses
dummy variables to represent fixed effects, which effectively controls for
all stable individual effects. However, estimates of b are inconsistent in
negative binomial models when using such a dummy variable approach in short
panels due to the incidental parameters problem (Cameron & Trivedi, 1998:
282). Contrary to linear regression models, the maximum likelihood estimates
for ai and b are not independent for negative binomial models since the
inconsistency of the estimates of ai are transmitted into the MLE of b.
Given that this method is a true fixed effects specification it does not
allow for time-invariant covariates. The panel Poisson FE specification is a
standard fixed effects estimator and does not suffer from the incidental
parameters problem.
Allison, P. and R. P. Waterman. 2002. "Fixed-effects negative binomial regression models." Sociological Methodology, 32: 247-265.
Hausman, J. A. 1978 "Specification tests in econometrics." Econometrica,
46(6): 1251-1271.
I hope this helps.
> -----Original Message-----
> From: owner-statalist@hsphsun2.harvard.edu
> [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of KBW
> Sent: Friday, April 06, 2007 12:44 PM
> To: statalist@hsphsun2.harvard.edu
> Cc: bunker@stanford.edu
> Subject: st: negative binomial models with large fixed effect
> group size
> Hi Statalist,
> 1) I am having trouble making sense of the best route to
> take regarding
> fixed effects and negative binomial regression. I have
> approximately 5,000
> individuals with an average of 10 observations each that I
> would like to
> obtain fixed effects estimates (within-individuals) for in a negative
> binomial regression. I've read the "xtnbreg, fe" does not perform a
> conventional individual fixed effects estimator, but that I
> can obtain what
> I am looking for by running nbreg with dummy variables for
> each individual
> and then correcting the standard errors afterwards. The
> problem is that it
> is challenging, if not realistically impossible, time-wise,
> to run these
> models with 5,000 dummy variables. Does anyone know of an
> alternative way
> to achieve this goal (in Stata, or even another package)?
> 2) In addition, if I were to run nbreg with dummy variables
> for the fixed
> effects, how does one interpret time-invariant independent
> variables in
> models? I realize that in theory time-invariant variables and fixed
> effects don't make sense, but in the few test models I have run (with
> smaller subsets of the dataset and using reg instead of
> nbreg) running
> "reg" with fixed effect dummy variables produces the same
> coefficients as
> "xtreg" for the time-variant variables, but "xtreg" drops the
> time-invariant one (expected) and "reg" does not. What, then, is the
> meaning of the "reg" output for time-invariant variables when
> individual
> dummies have been included in the model?
> Thanks very much for your assistance!
> KW
> *
> * For searches and help try:
> * http://www.stata.com/support/faqs/res/findit.html
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
How is centripetal force used for roller coasters (refer to radius, period and frequency).
In the absence of a force acting on an object, it continues to stay at rest or to move with a constant velocity. An object moving along a circular path is not moving at a constant velocity: as velocity is a vector, the change is in the direction of motion of the object. To enable an object to move along a circular path, a force has to act on the object; this is known as centripetal force.
The magnitude of the centripetal force is given by `F = (4 pi^2 m r)/(T^2)`, where m is the mass of the object, r is the radius of the path it is moving in, and T is the time period of rotation. Since the frequency f is the reciprocal of the period, f = 1/T, this can equivalently be written `F = 4 pi^2 m r f^2`.
For roller coasters, the radius and time period have to be kept in mind to ensure there is sufficient centripetal force to enable the moving bodies to continue along the required path. If this is not done, motion of the body along the desired path is not possible, which could lead to a serious accident. Centripetal force also has to be considered with respect to the force that people on the roller coaster can withstand. If the force is very high, it could cause people to lose consciousness or to feel very ill once the ride is over.
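As a small worked sketch of these relationships (the rider mass, loop radius, and period below are made-up numbers):

```python
import math

def centripetal_force(m, r, T):
    """F = 4*pi^2*m*r / T^2, with mass m (kg), radius r (m), period T (s)."""
    return 4 * math.pi**2 * m * r / T**2

def centripetal_force_from_frequency(m, r, f):
    """Equivalent form using frequency f = 1/T: F = 4*pi^2*m*r*f^2."""
    return 4 * math.pi**2 * m * r * f**2

# hypothetical loop: 70 kg rider, 10 m radius, one loop every 4 s
F = centripetal_force(70.0, 10.0, 4.0)   # about 1727 N
g_load = F / (70.0 * 9.81)               # about 2.5 g from this term alone
```

Halving the period quadruples the force, which is why designers trade off loop radius against speed when keeping g-loads tolerable.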
MathGroup Archive: February 2003 [00218]
Fwd: Linear and logarithmic fit
• To: mathgroup at smc.vnet.net
• Subject: [mg39373] Fwd: [mg39363] Linear and logarithmic fit
• From: "Jay D. Martin" <jdm111 at psu.edu>
• Date: Thu, 13 Feb 2003 04:52:19 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
>Hi everybody,
>If I have 9 points in a 2 dimensional space how do I decide if they
>fit better a linear function or a logarithmic function?
>Thanks in advance,
The first question to answer is, Do I need
to interpolate the data or just approximate it?
If the data is the result of physical experiments or
for some reason has a random error component
to it, there is no reason to interpolate the data
exactly. If the data is deterministic, ie if you
performed the experiment again you would get
exactly the same answer, you would want to interpolate
the data.
The answer to this question directs the selection of possible
function families to approximate the process that is represented
by the data you have sampled.
If your data has random error, it is non-deterministic,
you will want to select an approximating function. One way to do
this is to use the Statistics`LinearRegression` package and the Regress
function. Given a set of functions to fit, this function will return the
coefficients of the functions, the quality of fit of the model and the
importance of each function in the model.
If you want to fit nonlinear functions such as 2^x, you can use
the Statistics`NonlinearFit` package and the NonlinearRegress
function. I would recommend sticking to the LinearRegression
package and perform the nonlinear transformation on the "y"
values instead. The primary reason for this is efficiency in calculating
the optimal coefficients. NonlinearRegress requires an iterative optimization
while Regress has a closed-form solution.
If you need to interpolate the data, there are many options available.
The easiest to use is the standard function Interpolation. This function
will fit a spline between the points of any order you specify. The limitation
of this routine is your data must be on a cartesian grid. If this is not
the case,
and you have 2D data you can use a triangulation of the data to fit a plane
to each region defined by its three closest neighbors. I do not have the
latest version
of Mathematica, but I believe there is a function that supports this
directly now.
The last options for interpolation, especially for 2D+ data is to use
spatial correlation functions. These include radial basis functions and
kriging functions. In general, these functions assume nearby points have
some influence on the prediction of an unknown location. The amount of
becomes a function of the distance from all of the known points.
These functions can provide a great deal of flexibility in fitting data,
but they are more complex to use and in the case of kriging, difficult
to fit. Kriging requires an optimization process to determine the parameters
for the model that control the range of influence of nearby points and
the smoothness of the resulting surface.
I have probably confused you more than helped with my long-winded
answer, but I hope it helps you address your problem.
Jay Martin
Penn State University
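The two-model comparison from the original question can also be sketched outside Mathematica. Here, in Python, both candidates are fit by least squares and their residual sums of squares compared; since each model has two parameters, the smaller SSE indicates the better fit. The nine points are made up for illustration (with real data, replace x and y):

```python
import numpy as np

# hypothetical 9 points; x must be positive for the log model
x = np.arange(1.0, 10.0)
y = 3.0 * np.log(x) + 1.0          # here the truth is exactly logarithmic

coef_lin = np.polyfit(x, y, 1)             # model 1: y = a + b*x
sse_lin = float(np.sum((y - np.polyval(coef_lin, x)) ** 2))

coef_log = np.polyfit(np.log(x), y, 1)     # model 2: y = a + b*ln(x)
sse_log = float(np.sum((y - np.polyval(coef_log, np.log(x))) ** 2))

better = "logarithmic" if sse_log < sse_lin else "linear"
```

Note that the logarithmic model is still linear in its coefficients once x is transformed to ln(x), so ordinary least squares applies to both candidates.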
On different approximations to multilocus identity-by-descent calculations and the resulting power of variance component-based linkage analysis
BMC Genet. 2003; 4(Suppl 1): S72.
An empirical comparison between three different methods for estimation of pair-wise identity-by-descent (IBD) sharing at marker loci was conducted in order to quantify the resulting differences in
power and localization precision in variance components-based linkage analysis. On the examined simulated, error-free data set, it was found that an increase in accuracy of allele sharing calculation
resulted in an increase in power to detect linkage. Linkage analysis based on approximate multi-marker IBD matrices computed by a Markov chain Monte Carlo approach was much more powerful than linkage
analysis based on exact single-marker IBD probabilities. A "multiple two-point" approximation to true "multipoint" IBD computation was found to be roughly intermediate in power. Both multi-marker
approaches were similar to each other in accuracy of localization of the quantitative trait locus and far superior to the single-marker approach. The overall conclusions of this study with respect to
power are expected to also hold for different data structures and situations, even though the degree of superiority of one approach over another depends on the specific circumstances. It should be
kept in mind, however, that an increase in computational accuracy is expected to go hand in hand with a decrease in robustness to various sources of errors.
All methods of statistical gene mapping by means of linkage and/or linkage disequilibrium use, in one way or another, the information on polymorphic phenotypesÂ-typically, the genotypes at one or
several polymorphic marker lociÂ-to trace the inheritance of any specific chromosomal position through the available pedigree data. In variance-component (VC) linkage analysis, this transmission
pattern is captured by an identity-by-descent (IBD) matrix, which contains the estimated proportions of alleles shared at a particular genomic location for all pairs of pedigree members. Normally,
the observed marker locus genotypes provide only partial information about the meiotic transmissions of a given point on a chromosome, such that many different inheritance patterns are compatible
with the observed marker locus genotypes. For reasons of computational simplicity, it is currently standard, though likely sub-optimal, practice in VC-based linkage analysis to form a weighted
average of IBD sharing over all admissible segregation patterns, with the probability of each possible transmission pattern used for weighting. The resulting estimated IBD matrix is part of the
variance-covariance matrix used to compute the likelihood on the data under an assumed multivariate normal distribution [1] or a multivariate t distribution [2].
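As a minimal illustration of this weighted averaging for a single relative pair (the sharing states and probabilities below are hypothetical):

```python
# possible IBD sharing proportions for a relative pair, e.g. a sib pair
sharing_states = [0.0, 0.5, 1.0]
# P(state | observed marker genotypes); assumed numbers for illustration,
# since real marker data are only partially informative
state_probs = [0.1, 0.6, 0.3]

# estimated IBD matrix entry: probability-weighted average over states
pi_hat = sum(s * p for s, p in zip(sharing_states, state_probs))   # 0.6
```

Each entry of the estimated IBD matrix is such an expectation, computed from whatever genotype information (single-marker or multi-marker) is used to assign the state probabilities.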
The IBD matrix may be estimated from the genotype information at single marker loci, one locus at a time. Alternatively, the genotypes observed at several linked marker loci may be used jointly to
infer the transmission pattern in the data set. Because the genotypes at a single marker locus are almost never fully informative, and because the joint use of several marker loci generally allows
more information on the point-wise transmission pattern to be extracted, the "multi-point" approach is often preferred to the "two-point" approach. This is especially true for VC-based linkage
analysis where, in contrast to penetrance-model-based linkage analysis, single-marker and multi-marker analyses are equally robust to misspecification of the trait phenotype-trait locus genotype
relationship, for reasons explained by Göring and Terwilliger [3]. It should be kept in mind, however, that a multi-marker approach is not penalty-free even for VC-based linkage analysis, because
multi-marker analysis is generally less robust to errors in pedigree structure and marker information [e.g., [4,5]].
A key problem with multi-marker analysis is its computational burden. The Elston-Stewart algorithm [6] allows likelihood computations on large pedigrees but only for a single marker locus or a small
number of loci at most, and the Lander-Green algorithm [7] makes possible the joint analysis of many loci but only on pedigrees of moderate size. Several approximate approaches have been developed to
overcome these limitations. Markov chain Monte Carlo (MCMC) methods [e.g., [8,9]] extend the feasibility of linkage analysis with regards to the complexity of a pedigree that can be handled while
leaving it intact, and to the number of loci that can be analyzed jointly, by sampling from the permissible inheritance patterns. However, even these approaches can require long computation times.
Furthermore, it is typically not clear how closely the obtained information on chromosomal transmissions approximates the information from an exact analysis. An alternative concept to approximating
exact multi-locus analysis is sometimes referred to as multiple two-point analysis. The idea behind this approach is to combine the computational simplicity of single-marker analysis and the
increased power of multi-marker analysis. In VC-based linkage analysis, this is achieved by first computing exact single-marker IBD matrices for all linked marker loci individually and by then
combining these IBD matrices into an approximate multi-marker IBD matrix for a given chromosomal location [10,11].
Here, we describe an empirical power comparison between VC-based linkage analysis using single-marker (two-point) analysis, approximate multi-marker analysis using a multiple two-point approach, and
approximate multi-marker analysis using a multipoint MCMC approach, to quantify the relative gain in power by increasing the computational complexity of IBD matrix estimation.
The simulated data prepared for the Genetic Analysis Workshop (GAW) 13 were used for analysis. The data set comprises 4692 individuals in 330 pedigrees in total, modeled after the Framingham Heart
Study [12]. The data set was "randomly" ascertained, i.e., without regard to a specific phenotype. The phenotypic and genotypic data from Cohort 2 was used, which consists of 1634 individuals of
younger generations. Cohort 1 contains older individuals connecting the younger individuals together into larger pedigrees. No phenotypic or genotypic information from Cohort 1 was used here. Thus,
for the most part, data were available only from the youngest one or two generations.
We analyzed height measured at the first clinic visit of this cohort (phenotype hgt1). This phenotype is largely controlled by additive genetic effects, which together explain 84% of the sex-specific
variance. The most important quantitative trait locus (QTL), G[b1], is located on chromosome 5 at 80.41 cM of the sex-averaged map and explains 40% of the sex-specific variance. The QTL is flanked by
the eight marker loci c5g9-c5g16 (four on either side), which have roughly 10 cM inter-marker spacing. The observed genotypes at these eight marker loci were used for analysis. To better highlight
the difference in power between VC-based linkage analysis based on the various examined approaches to IBD matrix estimation, two of these marker loci (c5g12 and c5g14) were made diallelic by
combining all even and all odd alleles. The other six marker loci had stated heterozygosities of at least 0.68. Replicates 1–10 were analyzed. The simulation settings (i.e., the "answers") were known
prior to analysis.
Statistical analysis
Single-marker and various multi-marker VC-based linkage analyses were performed using eight linked marker loci. A sex-averaged map was used throughout, and absence of recombination interference was
assumed in the analysis. Marker allele frequencies were estimated by a simple allele-counting algorithm on all genotyped individuals, regardless of relationship. Single marker IBD matrices were
computed by computer program SOLAR version 1.7.3 [11], which used computer program FASTLINK version 3.0P [13] as the underlying computation engine for these IBD calculations. SOLAR's built-in
multiple two-point regression-based approach [11] was used to combine the single-marker IBD matrices into approximate multi-marker IBD matrices. The computer program SIMWALK2 version 2.82 [8], which
uses a MCMC approximation to exact likelihood computation, was used to compute true multi-marker IBD matrices. Standard VC-based linkage analysis was performed with SOLAR assuming phenotypic
multivariate normality and using sex as a fixed effect covariate, based on the IBD matrices obtained by any of the three alternatives in turn.
Table 1 shows the maximum LOD score in the region around the QTL for the three different methods of IBD sharing computation for Replicates 1–10. In 9 out of 10 replicates, the maximum LOD
score for the multiple two-point approach, which uses a regression procedure to combine the individual single marker IBD matrices into approximate multi-marker IBD matrices, is higher than the
maximum LOD score obtained in two-point analysis, which is based on IBD matrices computed from the genotypes at single marker loci individually. The difference in magnitude between the two LOD score
peaks is often quite substantial. The only replicate where the two-point approach is more powerful is the replicate giving the lowest LOD score peak for both methods.
Table 1. Maximum LOD scores for three different methods of IBD sharing estimation
The true multi-marker approach, using an MCMC approximation to compute multi-locus IBD probabilities, is in turn more powerful than the multiple two-point approach in 9 out of 10 replicates, in many
cases giving a substantially higher LOD score peak. On average, the regression-based multiple two-point approach gives maximum LOD scores that are roughly intermediate between those from a
single-marker and a true multi-marker approach.
Table 2 shows the genetic distance between the chromosomal position where the maximum LOD score occurred and the true chromosomal position of the QTL for the different approaches to IBD
sharing estimation for the same 10 replicates. The single-marker method fared poorly in comparison to the multi-marker approaches, giving much greater genetic distances on average. This was expected,
because the two flanking marker loci were ~6 and ~12 cM away from the QTL, respectively. The regression-based multiple two-point approach and the MCMC-based multipoint approaches were used to compute
IBD matrices every centimorgan (cM) and were comparable in accuracy of QTL localization.
Table 2. Distances between positions of maximum LOD scores and QTL^A
Differences in power
We have compared three different approximations to multi-marker IBD sharing computation with regards to power of VC-based linkage analysis. On the examined data, it is clear that the multipoint
approach is more powerful than the multiple two-point approach, which in turn in more powerful than the two-point approach. In this data set, the multiple two-point method is able to capture more
information on the chromosomal segregation pattern than a two-point approach, without a significant increase in computational burden. On the other hand, the multiple two-point approach clearly does
not use all available information on the chromosomal transmissions among pedigree members contained in the observed genotypes.
The difference in power between the two considered multi-marker approaches is expected to be especially pronounced when the marker loci individually are quite uninformative (data not shown). The
degree to which the true multipoint approach is preferred may scale differently depending on the reasons why individual marker loci provide little information, such as low heterozygosity, e.g., when
single nucleotide polymorphisms are used, or when genotyped individuals are separated by multiple generations of ungenotyped individuals. The marker locus density is also expected to play a role.
We were unable to compute exact multi-marker IBD sharing probabilities on this data set for comparison because the pedigrees were found to be too large for such calculations, at least for the time
being. We suspect that such an approach would be at least as powerful as the employed MCMC approximation, unless the sampler underlying the SIMWALK2 computer program biases the IBD sharing
probabilities in a systematic fashion relative to the analyzed phenotype, which seems unlikely in this case given the "random" ascertainment of these pedigrees.
Generality of findings
This has been an empirical investigation on a data set with specific characteristics of the pedigrees, the phenotype, and the marker loci and their genotypes. While we suspect that our overall
conclusion, that power to detect linkage increases with increased computational sophistication in computing IBD sharing probabilities, holds more generally, the following caveats should be kept in mind.
The data were simulated to be without any errors. While the simulations were based on sex-specific recombination fractions, the analysis assumed equal genetic distances for both sexes. (This choice
was made to keep the conditions as similar as possible between the different IBD calculations. While SIMWALK2 can handle sex-specific maps currently, SOLAR's multiple two-point approach cannot at
present.) However, besides this one source of error, the data and analyses represent an ideal situation that is unrealistic for real data. It is known, however, that multi-marker analysis is
generally less robust to errors in pedigree structure, genetic marker map, marker allele/haplotype frequencies, and marker genotypes [e.g., [4,5]]. We suspect that the multiple two-point
approximation is more robust to most if not all of these errors than true multipoint analysis. Thus, there is a trade-off between increasing accuracy of computation and resulting increase in power on
the one hand and robustness to errors on the other hand. The critical point of balance between both considerations likely falls on different error levels for different data sets and conditions.
Support from National Institutes of Health grants HL45522, HL 28972, GM31575, and MH59490 is gratefully acknowledged.
Articles from BMC Genetics are provided here courtesy of BioMed Central
Absolute Value (math)
You didn't say which version of Word you're using. That information is
nearly always important when posting a question about Word.
Click on Insert. (Whether it's a menu or a tab on the Ribbon depends
on which version of Word you're using.) Find Object and click it. In
the list that appears, find Microsoft Equation 3.0. Click it. If
Microsoft Equation 3.0 isn't in the list, you can add it to the list,
but you may need your Office CD to do it.
With Equation Editor open, you'll see 2 rows of buttons. On the second
row, click the first one on the left. The absolute value template is
the first one in the second row. Click it. You'll get both bars, so
type whatever you need inside, then press the Tab key to get outside
the template. Continue typing the equation, then press the Esc key to
exit the Equation Editor.
The advantage this has over using the 2 vertical bar keys from the
keyboard is that it's the "real" absolute value symbol, rather than a
work-around. It'll look better.
Bob Mathews
Director of Training
Design Science, Inc.
bobm at dessci.com
FREE fully-functional 30-day evaluation of MathType
MathType, MathFlow, MathPlayer, MathDaisy, Equation Editor
On 4-Jun-2009, Artemis Darkdreamer wrote:
> How do I insert the "absolute value" sign in an equation in Word?
Equation of a Line, Tangent Lines
On this page we hope to demonstrate the following:
Equation of a Line, standard form and point-slope
Ex: y = 3x + 4, y - 7 = 3(x - 1)
Finding x-intercept and y-intercept
Finding the equation of tangent lines
Ex: Find the equation of the tangent line of y = x^3 at x = 2
Equation of a line
Standard Form
When we try to graph polynomials, we quickly find that a polynomial of degree one, where nothing is squared or square-rooted, always graphs as a line. Let's analyze the line y = 3x + 4, whose equation is written in standard form.
The first thing we notice about the graph of this function is that it has a very steep slope. Slope is defined as the change in y-value over the change in x-value between two points on a line, but
you can easily think of it as the steepness of the line.
As stated above, the equation of this function, y = 3x + 4, is written in standard form. Lines written in standard form are always of the form
y = mx + b,
where m is the slope,
b is the y-intercept - the y value where the line intersects the y-axis.
The slope and the y-intercept are the only things that we need in order to know what the graph of the line looks like. To begin with, we start by drawing a point at the y-intercept, which in our
example is 4, on the y-axis. Then we draw a line that corresponds to our slope, which goes through that point. In our example, the slope was 3. That means that for every unit that we move to the
right on the x-axis, we must move 3 units up along the y-axis. The illustration on the right should help.
Point-Slope form
In order to write an equation for a line in standard form, we need to know a slope as well as the y-intercept. More often, however, we won't know the y-intercept until we know the equation for the
line. However, we can often write the equation for a line in point-slope form very easily. It only requires that we know the slope of the line, and a single point on the line.
Equations of lines in point slope formula look like y - y[0] = m(x - x[0]), where m is the slope, y[0] is the y-value of the point that we know, and x[0] is the x-value of the point that we know.
Given a point (x[0],y[0]) and a slope, we can write the equation of a line.
Point-slope form and standard form equations describe the same line - they just show it in a slightly different form. If we look at the graph of y = 3x + 4 above, we may note that (2,10) is a point
on the line. Then, by our point-slope form, we may also see that an equation for the same line is y - 10 = 3(x - 2). However,
y - 10 = 3(x - 2)
y - 10 = 3x - 6     (distribute the 3)
y = 3x + 4          (add 10 to both sides)
Notice that they are really the same equation?
Important: The number attached to x is not always the slope. It only is if we eliminate the coefficient of y. Suppose we had an equation like 3y = 2x + 4. We must first divide the whole equation by 3
so that the coefficient of y is 1. We then find that the slope of that line is actually 2/3.
Finding the x-intercept and y-intercept
It is often advantageous, especially for graphing, to figure out where the graph of the line crosses the x-axis and the y-axis. If we have an equation in standard form, it tells us the y-intercept.
However, it's not a problem if we don't have an equation in standard form - the intercepts are really easy to find!
Suppose we want to find the y-intercept of the line 2y + 1 = 3x - 2. The y-intercept is precisely the y-value when the graph crosses the y-axis, which occurs when x = 0. All we have to do to find the
y-intercept, then, is to plug in x = 0, and then solve for y. In our example, this yields
2y + 1 = 3(0) - 2
2y + 1 = -2
y = -3/2
so y = -3/2 is the y-intercept. Similarly, the x-intercept is the x-value when y = 0, where the graph crosses the x-axis. To find the x-intercept, we plug in y = 0 and solve for x. In our example,
this yields
2(0) + 1 = 3x - 2
1 = 3x - 2
x = 1
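If you like to check this kind of computation with a computer, the two substitutions above can be automated. Here is a small sketch in Python (the function name, the argument order, and the a·y + b = c·x + d form are my own choices, made to match the worked example 2y + 1 = 3x - 2):

```python
def intercepts(a, b, c, d):
    """Intercepts of the line a*y + b = c*x + d.

    y-intercept: plug in x = 0, so a*y + b = d  ->  y = (d - b) / a
    x-intercept: plug in y = 0, so b = c*x + d  ->  x = (b - d) / c
    """
    return (b - d) / c, (d - b) / a  # (x-intercept, y-intercept)

# The worked example above: 2y + 1 = 3x - 2
x_int, y_int = intercepts(2, 1, 3, -2)
print(x_int, y_int)  # 1.0 -1.5, matching x = 1 and y = -3/2
```

The function simply repeats, in code, the two substitutions shown above: set x = 0 to get the y-intercept and y = 0 to get the x-intercept.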
Exercises: Find the slope and the x- and y-intercepts of each of the following lines:
1. y = x - 5
2. 2y = 4x
3. y + 4 = 3x - 2
4. A line with a slope of -1 that passes through the point (3,2)
5. A line with a slope of 3/2 that passes through the point (1,0)
Tangent Line
A tangent line to a curve is a line that touches a curve, or a graph of a function, at only a single point. For an example, look at the following diagram.
Example of Tangents
The following demo demonstrates a tangent line to a function. The function is f(x) = x^2. Note how the tangent line touches the graph of the function only at one point.
The demo also shows the slope. Note how the slope changes depending on where we take the slope.
Dylan Cashman, July 1, 2008, Created with GeoGebra
It is very easy to find the equation for the tangent line to a curve at a certain point. Remember that to find an equation for a line, all we need is the slope and the coordinates of a single point on that tangent line, namely the point where the tangent line meets the curve.
Finding the slope
To find the slope of a curve at a certain point, something we'll need to know for the tangent line, we will use the derivative. The derivative of a function gives you the slope of that function,
since a derivative gives the rate of change of the function. For example, suppose we had the function f(x) = x^2. The derivative of this function is 2x. Note how it gives you a function for the slope, not a single
number. This is because the slope of a curve that is not a line is dependent on where you take it. To find the slope of our tangent line at a point, we just plug in the x-value of that point into our
function for the slope.
For example, for the function f(x) = x^2, we'll try to find the equation of the tangent line at x = 2. To begin with, we need the slope. The derivative of x^2 is 2x, so to find the slope at x = 2, we
plug x = 2 into 2x, to get the slope to be 4. We now have the slope. In order to complete the equation for the line, we also need a point the line goes to. We know that the tangent line touches the
function f at x = 2, so we can use that point. At x = 2, the function f(x) has the value f(2) = 2^2 = 4, so we know that the line passes through the point (2,4). Now we have a point on the line. We
can use the point-slope formula to find the equation of the tangent line: y - 4 = 4(x - 2).
In conclusion, to find the tangent line at a point x[0], you need to:
1. Take the derivative of your function.
2. Plug in your x-value, x[0], into the derivative to get your slope.
If the derivative ends up just being a constant, like 3, then just use that as your slope.
3. Use x[0] to find out what the y-coordinate of that point is.
Basically, just plug in x[0] to get f(x[0]) = y[0]. Then that point, (x[0],y[0]), is a point on your tangent line.
4. Use point-slope formula to write the equation out: y - y[0] = m(x - x[0])
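The four steps above can be mechanized. The sketch below is a minimal Python version; instead of taking the derivative by hand (step 1), it approximates the slope with a central difference, which is an extra assumption on my part:

```python
def tangent_line(f, x0, h=1e-6):
    """Return (m, y0) so that the tangent line at x0 is y - y0 = m*(x - x0).

    Steps 1-2: approximate the derivative at x0 with a central difference.
    Step 3:    evaluate f(x0) to get the point (x0, y0) on the curve.
    Step 4 is then just writing down y - y0 = m*(x - x0).
    """
    m = (f(x0 + h) - f(x0 - h)) / (2 * h)  # approximate slope at x0
    return m, f(x0)

# The worked example above: f(x) = x^2 at x = 2
m, y0 = tangent_line(lambda x: x**2, 2)
print(m, y0)  # slope of about 4 through the point (2, 4): y - 4 = 4(x - 2)
```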
Exercises: Find the equation of the tangent lines of the following functions at the indicated points.
1. f(x) = x^3 + 2x; x = 0
2. f(x) = 4x^2 - 4x + 1; x = 1
3. f(x) = 3x - 7; x = -14
4. f(x) = x^5; x = 5
5. f(x) = x^5; x = 1
6. Why is the slope so much larger in exercise 4 than in exercise 5?
What could you do with 54,000 watts?
I already looked at ESPN’s Sport Science episode where they calculate that Marshawn Lynch produces 54,000 watts when pulling some tires. Yes, that is way too high. However, what would happen if someone was actually that powerful? What could that person do? How fast could they run 100 meters? That is what I am going to calculate.
First, I am going to assume that Marshawn has a mass of about 100 kg. Also, let me say that he can produce 54,000 watts no matter what his speed.
Take a short time interval. During this time, Marshawn will increase his speed from, say, v[1] to v[2]. This would be a change in energy of:

ΔE = (1/2)m·v[2]^2 - (1/2)m·v[1]^2

And this would relate to the power by:

P = ΔE/Δt

So, if I know this small time interval and the velocity he started at (at the beginning of the interval), then I can find the final velocity:

v[2] = sqrt(v[1]^2 + 2·P·Δt/m)

If the time interval is short, then the velocity is essentially constant over it, so I can use the average velocity to write the new position:

x[2] = x[1] + ((v[1] + v[2])/2)·Δt
You see where I am going don’t you? This is all set up for a numerical calculation. Here it is – I made it as simple as I could:
I changed my mind. Instead of using the average velocity to find the new position, I just used the velocity. Trust me, it is ok. Here – you can check. One good way of checking your calculations is to
make the time interval (dt in this case) smaller and see if you get the same result.
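The post's program itself isn't reproduced here, so below is my own minimal reconstruction of the scheme described above (constant power, short time steps, position updated with the current velocity rather than the average); the variable names and step size are my choices, not the author's:

```python
def sprint_time(P, m=100.0, distance=100.0, dt=1e-4):
    """Time and final speed to cover `distance` (m) at constant power P (W)."""
    t = x = v = 0.0
    while x < distance:
        # Energy gained in dt: P*dt = 0.5*m*(v_new**2 - v**2)
        v = (v**2 + 2.0 * P * dt / m) ** 0.5
        x += v * dt   # as in the post: just the (new) velocity, not the average
        t += dt
    return t, v

t1, v1 = sprint_time(54000)  # roughly 2.75 s, over 50 m/s
t2, v2 = sprint_time(2000)   # roughly 8.3 s, about 18 m/s (~40 mph)
print(t1, v1, t2, v2)
```

Halving dt and checking that the answers barely move is exactly the consistency check suggested in the paragraph above.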
So, what do I get? Here is a plot of the speed as a function of time:
There you go – 100 meter dash in under 3 seconds. Take that Usain Bolt. Note that Usain not only has a cool name (Bolt) but has the world record at 9.58 seconds. Another note – I just noticed that
lists the wind speed for these records. Boom. That is another blog post.
Not only would 54,000 watts give you a 100 meter time under 3 seconds, you would be going over 50 m/s. Yes, that is like 120 mph.
How about another check. What if I put in a more reasonable power of 2000 watts? Here is what I get:
Seems better, doesn’t it? Still a world-record time, but I did not take into account air resistance and I assumed the power would be constant. Oh, also that would give a speed of 40 mph – so that
isn’t quite right.
1. #1 anonymous March 24, 2010
The caption of the second plot should be Speed of 2000 Watts dude, no?
2. #2 Rhett Allain March 24, 2010
Thanks for catching that – I fixed the title.
3. #3 anonymous March 24, 2010
“What could you do with 54,000 watts?”
Produce one research paper on AGW?
4. #4 Rob (no, the other Rob) March 24, 2010
Hmmm… On a quick glance there would seem to be a problem. 54000 watts is about 72 horsepower. Most cars have more than this and do not accelerate to 120 m.p.h. in the space of 100 yards.
Being at work and all, I haven’t had time to think through whether there’s an error in your calculations, your assumptions or my comparison.
5. #5 Rhett Allain March 24, 2010
Cars don’t have a mass of 100 kg.
6. #6 Rob (no, the other Rob) March 24, 2010
*smacks forehead* duh…
7. #7 Anonymous Coward March 25, 2010
The ability of human muscle to produce power is very velocity dependent (obviously, otherwise everyone would ride 1-speed bikes).
Which is why the ability to apply 2 kW continuously might get you a world record time DESPITE the ability of sprinters to produce well over 2 kW of power. I think in a previous comment I
mentioned that Usain Bolt’s split times demonstrate a peak mechanical power production over 4 kW in his early acceleration. But he doesn’t have gearing on his legs, so biomechanical
inefficiencies at high velocities (and wind resistance) prevent him from reaching your times.
I still agree 54 kW is baloney.
A Brownie Bake
Students determine the amount of each ingredient needed to make brownies, and then they figure out how to divide the brownies evenly among their classmates. This lesson helps students reinforce their
measurement skills in a practical situation.
Preparing the Investigation
Students should be familiar with using both nonstandard and standard units of measure and have been introduced to the concepts of area and volume.
Before introducing this baking activity, the teacher should:
• know how the cost of required ingredients will be covered;
• know the package directions for making the brownies, including additional ingredients required and their amounts, pan sizes, serving sizes, and number produced;
• have access to an oven or microwave in which to bake the brownies;
• have a list of all ingredients, utensils, and equipment and the quantities of each item needed for preparing and baking brownies. Utensils and equipment should not be shared by groups working
simultaneously. Measuring cups for the oil and water should be made of glass or clear plastic and be designed for use with liquids. Teacher directions and student questions follow.
Structuring the Investigation
Each small working group of students should have two different-sized cake pans (8" × 8" × 2", or 9" × 9" × 2", or 9" × 13" × 2"), and a bowl or a pitcher containing 2 1/2 to 3 cups of water. The
water will represent the batter. Ask students to consider pouring a given amount of water from the pitcher into each cake pan. Have them first make a guess and then actually do the pouring. In doing
so, students can answer questions such as:
• Which pan will provide the deeper measure of water?
• Which pan will provide the more shallow measure of water?
• Which pan will provide the greater top surface?
• Why? How can you tell?
Put each pan on a piece of paper, draw the outline of the pan bottom, and then cut out the shapes. Make several sets of those shapes. Ask students questions such as: Which pan requires the larger
piece of paper? Which pan requires the larger number of square colored tiles to cover the bottom? Why? Without employing any measuring instruments, fold or cut them into two equal pieces. Try
dividing a piece into three equal pieces, six equal pieces, and twelve equal pieces. In doing so, students can answer questions such as:
• Which cuts or folds are the most accurate?
• Can you get reasonably accurate pieces more easily for some numbers than for others? Why?
• Which division can be completed most accurately?
• Which divisions are only approximations of equal pieces? Explain?
Pose the following scenario to students:
Would it be easier to complete the previous activity if the original shapes had been drawn on rectangular-grid or graph paper? Why? Use grid paper and repeat the previous activity. Is it any
easier? Why? Compare your sets of pieces with corresponding ones from another group. What do you notice? How easy will it be to cut one of the rectangles into exactly seven equal pieces? Why?
Note: So far, the word equal has been used; however, students may be thinking congruent. A 1-inch-by-4-inch rectangular shape is equal in area but not congruent to a 2-inch-by-2-inch square shape.
All congruent shapes have equal areas, so those shapes are also equal. However, all shapes with equal areas are not necessarily congruent.
Ask students, "Without measuring, is it easier to fold or cut a rectangular shape into two equal pieces or three equal pieces? Why?"
Pose the following second scenario to students:
If you have a rectangular piece of paper and no measuring tool, which of the following could probably be done the fastest:
a. mark the paper into six equal pieces,
b. mark it into seven equal pieces, or
c. mark it into eight equal pieces?
If you choose to demonstrate or allow students to practice on their own, you may mark the paper with a crease if necessary. Students should give reasons to support their choices. Ask students which
of the three markings listed above (a, b, or c) would probably be the most accurate, and why.
Once students have discussed these measurement concepts, it is time for them to bake (and eat!) some brownies. This may be done during a follow-up class session if there is not enough time today.
Baking Brownies
Show students how to break an egg and to check it for freshness before adding it to the mix.
Ask each group to read the package directions and discuss among themselves what is to be done and the proper order in which to do all steps.
Each group decides which member will be responsible for individual steps in the preparation. Ask the group to complete a sign-up sheet. All members are to help with cleanup. Job assignments need to
be recorded and posted at each workstation.
Sign-Up Sheet for Baking Duties
Materials
• Commercial brownie mix
• Additional ingredients to bake brownies
• Two different-sized baking pans for each working group
• Utensils to make brownies
• Access to an oven or microwave
• Glass or clear plastic measuring cups for liquid use
• Bowl or pitcher containing 2-3 cups of water
• Sign-Up Sheet for Baking Duties (one per group)
Extensions
1. Hold a lunchtime or after-school brownie sale. Students should make their own brownies and price them to make a reasonable profit.
2. Purchase a container of ready-to-use frosting. Determine from the volume of the container the approximate depth of the frosting if the contents were used to ice the top surface of one
9-inch-by-9-inch pan of brownies. Would the same amount of frosting make a thicker or thinner layer on a 13-inch-by-9-inch pan of brownies? Why? By using two containers of frosting, one for a
15-inch-by-9-inch pan of brownies and the other to be divided equally between two 8-inch-by-8-inch pans of brownies, which pans would have the thicker layer of frosting? Why? Bake a batch of
brownies, frost them, and compare your approximation with the actual result.
3. Prepare and bake a cake mix and divide it into at least six equal servings. Unlike brownies, a cake will rise more at the center; therefore, the term equal will then relate to volume. Frosting
the cake will cause what further difficulty? How should this situation be handled? Why?
4. This activity could also be used to introduce prime and composite numbers, powers, and multiples by using square tiles, cubes, and half‑inch grid paper.
This lesson plan was adapted from "Beverage Serving and Sharing," which appeared in the February 1994 The Arithmetic Teacher, Vol. 41, No. 6, pp. 309‑10, 313‑14.
Learning Objectives
NCTM Standards and Expectations
• Understand such attributes as length, area, weight, volume, and size of angle and select the appropriate attribute.
• Understand the need for measuring with standard units and become familiar with standard units in the customary and metric systems.
Common Core State Standards – Mathematics
Grade 3, Algebraic Thinking
• CCSS.Math.Content.3.OA.A.2
Interpret whole-number quotients of whole numbers, e.g., interpret 56 ÷ 8 as the number of objects in each share when 56 objects are partitioned equally into 8 shares, or as a number of shares
when 56 objects are partitioned into equal shares of 8 objects each. For example, describe a context in which a number of shares or a number of groups can be expressed as 56 ÷ 8.
Grade 3, Algebraic Thinking
• CCSS.Math.Content.3.OA.C.7
Fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division (e.g., knowing that 8 x 5 = 40, one knows 40 ÷ 5 = 8) or properties of
operations. By the end of Grade 3, know from memory all products of two one-digit numbers.
Grade 5, Num & Ops Fractions
• CCSS.Math.Content.5.NF.A.2
Solve word problems involving addition and subtraction of fractions referring to the same whole, including cases of unlike denominators, e.g., by using visual fraction models or equations to
represent the problem. Use benchmark fractions and number sense of fractions to estimate mentally and assess the reasonableness of answers. For example, recognize an incorrect result 2/5 + 1/2 =
3/7, by observing that 3/7 < 1/2.
Grade 5, Num & Ops Fractions
• CCSS.Math.Content.5.NF.B.6
Solve real world problems involving multiplication of fractions and mixed numbers, e.g., by using visual fraction models or equations to represent the problem.
Common Core State Standards – Practice
• CCSS.Math.Practice.MP4
Model with mathematics.
• CCSS.Math.Practice.MP5
Use appropriate tools strategically.
• CCSS.Math.Practice.MP6
Attend to precision.
Does the Effectiveness of Control Measures Depend on the Influenza Pandemic Profile?
PLoS ONE. 2008; 3(1): e1478.
Alison Galvani, Academic Editor
Background
Although strategies to contain influenza pandemics are well studied, the characterization and the implications of different geographical and temporal diffusion patterns of the pandemic have been
given less attention.
Methodology/Main Findings
Using a well-documented metapopulation model incorporating air travel between 52 major world cities, we identified potential influenza pandemic diffusion profiles and examined how the impact of
interventions might be affected by this heterogeneity. Clustering methods applied to a set of pandemic simulations, characterized by seven parameters related to the conditions of emergence that were
varied following Latin hypercube sampling, were used to identify six pandemic profiles exhibiting different characteristics, notably in terms of global burden (from 415 cases to more than 160 million) and
duration (from 26 to 360 days). A multivariate sensitivity analysis showed that the transmission rate and proportion of susceptibles have a strong impact on the pandemic diffusion. The correlation
between interventions and pandemic outcomes were analyzed for two specific profiles: a fast, massive pandemic and a slow building, long-lasting one. In both cases, the date of introduction for five
control measures (masks, isolation, prophylactic or therapeutic use of antivirals, vaccination) correlated strongly with pandemic outcomes. Conversely, the coverage and efficacy of these
interventions only moderately correlated with pandemic outcomes in the case of a massive pandemic. Pre-pandemic vaccination influenced pandemic outcomes in both profiles, while travel restriction was
the only measure without any measurable effect in either.
Conclusions
Our study highlights: (i) the great heterogeneity in possible profiles of a future influenza pandemic; (ii) the value of being well prepared in every country since a pandemic may have heavy
consequences wherever and whenever it starts; (iii) the need to quickly implement control measures and even to anticipate pandemic emergence through pre-pandemic vaccination; and (iv) the value of
combining all available control measures except perhaps travel restrictions.
The continuous spread of H5N1 avian influenza raises concerns about the possible consequences of a major human influenza pandemic. The three pandemics of the last century each spread differently
across the world [1]–[2]. So, although we can learn from past experience, current response plans need to consider the possibility that the eventual pandemic diffusion profile may differ substantially
geographically and temporally from previous pandemics.
Mathematical modeling has been used to simulate the spread of a pandemic at a local [3]–[10] and a global scale [11]–[15] and to estimate the impact of different control measures [3]–[19]. Ferguson
et al. [3] simulated the spread of a pandemic in South-East Asia and showed that containment at the source was feasible using a combination of antiviral prophylaxis and social distancing measures if
the basic reproductive number of the new virus was below 1.8. Longini et al. [4] showed that in the case where interventions were used jointly (targeted antiviral prophylaxis, quarantine and
pre-vaccination), the pandemic could be stopped at the source even for basic reproductive numbers as high as 2.4. These results were later extended to the United States and highlighted the potential
impact of pre-pandemic vaccination [5]–[6]. Other recent modeling studies have focused on the international spread of an emerging influenza strain taking into account air transportation between
countries [11]–[13], [20]. These studies confirm the importance of local control measures and show that restrictions on air travel were unlikely to be of great value in delaying epidemics [11]–[13].
However, the characteristics of a future pandemic could differ substantially from the previous ones. For example, international travel has increased dramatically since the last major pandemic in
1968–1969 and is likely to affect the geographical and temporal spread of the virus.
The great uncertainty on the characteristics of the future influenza pandemic is also due to the uncertainty of key parameters such as the geographical region where the pandemic will start, its
season of emergence, the extent of susceptibility of the population to the emerging viral strain, or the epidemiological parameters of influenza like mean durations of latent and infectious periods.
Our study aims to identify typical profiles of geographical and temporal diffusion of an influenza pandemic at the global level, taking into account the variability of these parameters. Simulations
obtained after sampling the model's parameters were clustered and a multivariate sensitivity analysis was performed to explore how the correlation of different control measures with the pandemic
outcomes would vary depending on these profiles. This paper adds to previous work by identifying potential diffusion profiles of a future pandemic on a global scale and by providing new insights on
the effectiveness of policies taking into account the great variability in geographical and temporal diffusion.
The Model
The mathematical model used in this study is a refinement of that developed by Flahault et al. [11] and implements a metapopulation approach with coupling between locations through transportation
[21]–[22]. The model simulates the spread of a pandemic through a worldwide network of 52 major cities. The epidemic at the city level is simulated by a deterministic model in discrete time, which is
composed, when no interventions are modeled, of four compartments representing disease states (Susceptible, Exposed, Infectious, Removed; S, E, I, R). Each compartment is divided into five sub-groups
corresponding to age groups to which individuals were assigned based on the international population database figures (www.census.gov). The E compartment corresponds to the incubation period and
individuals become infectious and enter the I compartment when symptoms develop. An air traffic matrix connects all cities. This matrix and information on city populations were collected in 2000 by
Grais et al. [23]. Individuals in the I compartment are assumed not to travel. As the model is formulated in a continuous state space whereas the variables represent discrete quantities (numbers of individuals), we introduced a control on the number of latent and susceptible individuals similar to that of Rvachev and Longini [21]: if the sum of all individuals in a compartment at a particular stage is less than one, the compartment is considered empty. This allows the simulation of trajectories leading to extinction, as in a stochastic framework, even though the model is deterministic.
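The discrete-time step and the extinction control can be sketched as follows. This is a minimal single-city illustration in Python (the study's implementation is in Fortran 90); the parameter values, the single-city scope and the handling of emptied compartments are assumptions of the sketch, not taken from the paper.

```python
# Minimal sketch of the deterministic, discrete-time SEIR step at the city
# level, including the extinction control in the style of Rvachev and
# Longini: a compartment whose total falls below one individual is emptied.

def seir_step(S, E, I, R, beta, N, sigma=1 / 1.5, gamma=1 / 3.0):
    """One day of SEIR dynamics in a single city (no travel, no control).

    sigma = 1 / mean latent period, gamma = 1 / mean infectious period
    (illustrative values only).
    """
    new_exposed = beta * S * I / N   # new infections
    new_infectious = sigma * E       # end of the latent period
    new_removed = gamma * I          # recovery / removal

    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_removed
    R += new_removed

    # Extinction control: a compartment holding less than one individual is
    # considered empty; the residual is moved to R to conserve population
    # (one simple choice made for this sketch).
    if E < 1.0:
        R, E = R + E, 0.0
    if I < 1.0:
        R, I = R + I, 0.0
    return S, E, I, R

# With beta below gamma the outbreak goes extinct despite initial cases:
S, E, I, R = 999_990.0, 0.0, 10.0, 0.0
for _ in range(365):
    S, E, I, R = seir_step(S, E, I, R, beta=0.25, N=1_000_000)
```

Because the step empties sub-unit compartments instead of letting them decay forever, the deterministic trajectory can terminate, mimicking stochastic fade-out.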
The seasonality was accounted for through a cosine term in the monthly transmission rate formula:

β(m) = β[0] [1 + β[1] cos(2π (m + shift) / 12)]

where β[0] is the basic transmission rate, defined as the product of the number of contacts per unit of time and the probability of infection given a contact between an infectious and a susceptible individual, in the absence of any seasonality of transmission; β[1] is the amplitude of seasonal variation of the basic transmission rate; and shift represents the delay in transmission (in months)
between the Northern and Southern hemispheres.
between Northern and Southern hemispheres. As it is well documented that seasonality of influenza transmission varies with location [24], the 52 cities were classified into one of three distinct
regions of seasonal variation of transmission as a good approximation of a more graduated variation: northern and southern zones, characterized by annual cycles in transmission and by a relative
delay of 6 months, and tropical regions without any seasonality in transmission.
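Assuming the usual cosine forcing implied by these definitions (baseline β[0], amplitude β[1], a 6-month shift for the southern zone, no forcing in the tropics), the monthly transmission rate can be computed as below; the exact phase convention of the paper's formula is an assumption of this sketch.

```python
import math

def monthly_beta(month, beta0, beta1, zone):
    """Monthly transmission rate with zone-dependent seasonality.

    month: 1..12; zone: 'north', 'south' (6-month relative delay) or
    'tropical' (no seasonal forcing). Sketch of the cosine term in the text.
    """
    if zone == "tropical":
        return beta0
    shift = 0 if zone == "north" else 6   # relative delay between hemispheres
    return beta0 * (1.0 + beta1 * math.cos(2.0 * math.pi * (month + shift) / 12.0))

# Transmission in the north is mirrored six months later in the south:
north = [monthly_beta(m, 0.3, 0.25, "north") for m in range(1, 13)]
south = [monthly_beta(m, 0.3, 0.25, "south") for m in range(1, 13)]
```

With this convention the northern peak rate is β[0](1 + β[1]) and the southern curve is the same cycle shifted by six months.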
We also assumed that only a fraction of newly infectious individuals was reported to the authorities.
Six preventive and control measures were integrated into the model: travel restrictions, use of masks, isolation of infectious individuals, antiviral prophylaxis, antiviral therapy and vaccination
campaigns (pre-pandemic-with vaccine based on the pre-pandemic strain and pandemic-with vaccine updated for matching pandemic circulating strains).
Input parameters that were varied were divided into two groups: seven parameters related to the pandemic and twenty parameters related to the control measures. Parameters related to the pandemic
were: (i) the mean duration of the latent period, (ii) the mean duration of the infectious period, (iii) the city of emergence–characterized by its size, its number of flight connections and the
average daily number of travelers from this city (expressed as ordered nominal variables with values representing categories, see Table S2); (iv) the month when the pandemic starts; (v) the basic
rate of transmission within the population (β[0]); (vi) the amplitude of seasonal effect (β[1]); and (vii) the initial proportion of susceptible individuals in the population–assumed to be the same
for all cities.
Pandemic vaccination, use of masks, prophylaxis, antiviral therapy and isolation were each characterized by three input parameters: theoretical efficacy, proportion of target population to which the
measure is applied, and time lag to introduction (counted from the first case). Pandemic vaccination was also characterized by the duration of the vaccination campaign (time needed to vaccinate
target population). Reduction of air traffic was modeled by two parameters: the proportion of air-traffic reduction and the time lag to introduction. Pre-pandemic vaccination was taken into account
simply by a coefficient affecting the number of initial susceptible individuals. For antiviral prophylaxis, the theoretical efficacy had two components: one for susceptibility to infection and one
for developing the illness if infected.
The effects of vaccination were modelled in our study according to an “all or nothing” action. This means that vaccination confers absolute protection to a given proportion of individuals and no
protection to the remaining proportion. Isolation was also taken into account in an “all-or-nothing” manner, and we considered two parameters: the actual proportion of individuals being isolated and
the theoretical efficacy of isolation to prevent transmission. In this way, we could take into account possible “leaks” in isolation of ill individuals. Antiviral therapy was considered to reduce the
transmission rate of ill patients (illustrating the reduction of infectiousness of those individuals) and also the length of the infectious period by an average of one day [16] (this parameter was
not varied in our study).
The model was implemented in Fortran 90: all parameters were specific to each city and to each sub-group, allowing the simulation of a range of eventualities. Figure 1 shows the flow diagram for the
epidemic model, describing the different compartments and their interactions for each sub-group (k) in each city (i). Mathematical details of the model and descriptions of the parameters and values
are given in the supplementary information (Appendix S1, Table S1, Table S2 and Table S3).
Flow diagram describing the infection spread within a given subgroup k of a city i and the implementation of interventions.
Pandemic profiles and impact of control measures
Influenza pandemic profiles and the study of the impact of interventions according to these profiles were identified through several steps:
1. Possible values of input parameters were sampled using the Latin Hypercube Sampling (LHS) method.
2. At first, we sampled values of input parameters related to the characteristics of the pandemic. These values were used to perform 1000 simulations.
3. We applied clustering methods to this set of simulations to identify typical pandemic profiles in the absence of any control measures.
4. A multivariate sensitivity analysis was applied to these 1000 simulations to identify which input parameters had the greatest influence on the temporal and geographical diffusion of the pandemic
in the absence of any control measures.
5. In a second step, we focused on two particular pandemic profiles previously identified (at step 3). For each, we performed 1000 simulations using sampled values of input parameters related to the
control measures.
6. Next, we performed another multivariate sensitivity analysis to study the independent and relative effects of each control measure on the burden of the pandemic according to the pandemic profile.
Latin Hypercube Sampling Method
We used the LHS sampling scheme, a type of stratified Monte Carlo sampling first proposed by McKay, Conover and Beckman [25] and later applied to deterministic mathematical models, in particular by
Blower et al. [26]. This technique involves several steps: 1) the definition of probability distribution functions for each of the K input parameters; 2) the division of the range of each parameter
into N equi-probable intervals; and 3) the generation of the LHS K-sets of parameters by matching at random values sampled without replacement from each probability distribution function.
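The three steps can be sketched for uniform marginals, the case used in this study; the parameter names and ranges below are illustrative, not the study's actual inputs.

```python
import random

def latin_hypercube(ranges, n, rng=None):
    """Latin Hypercube Sampling with uniform marginals.

    ranges: dict mapping parameter name -> (low, high).
    Each range is split into n equi-probable intervals, one value is drawn
    per interval, and the values are shuffled so that parameters are
    matched at random (sampling without replacement within each stratum).
    """
    rng = rng or random.Random(42)
    samples = {}
    for name, (lo, hi) in ranges.items():
        width = (hi - lo) / n
        # one draw in each of the n equi-probable strata
        values = [lo + (i + rng.random()) * width for i in range(n)]
        rng.shuffle(values)               # random matching across parameters
        samples[name] = values
    return [{name: samples[name][i] for name in ranges} for i in range(n)]

# Hypothetical ranges loosely inspired by the supplementary tables:
sets = latin_hypercube({"beta0": (0.2, 1.4), "latent_days": (1.2, 1.9)}, n=1000)
```

Unlike plain Monte Carlo, every stratum of every parameter is visited exactly once, so 1000 runs cover the whole range of each input.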
The ranges of input parameters were taken from previous studies as specified in Table S1. In the absence of available data on the distribution functions, we chose a uniform distribution for all input
parameters and large ranges of variation. For more information on the intervals of variation of the input parameters, see Table S1 and Table S2. The proportion of individuals protected by
pre-pandemic vaccination was varied between 0 and 0.2 (a range that includes low efficacy scenarios), similar to values considered in a recent work [27]. The theoretical efficacy of the pandemic
vaccine (with vaccine strains matching pandemic strains), was considered to be much higher (between 0.3 and 0.7) in agreement with literature values (see Table S1). Similarly, even more restrictive
intervals of variation (lower bound
Pandemic Profiles
Pandemic profiles were described by five outcome variables: (1) the cumulative number of cases at the end of the pandemic for all affected cities; (2) the total duration of the pandemic defined by
the time lag between the first case in the first city affected and the last case in the last city; (3) the number of cities affected by the pandemic; (4) the mean time to peak, calculated as the mean
time between the start of the pandemic and its peak over all cities affected; and (5) the standard deviation of the time to peak. The first three outcome variables explored the global burden of the
pandemic whereas the last two focused on the dynamics of the pandemic within the network of cities. Figure 2 represents the pandemic's course within four cities of the network, the total duration,
the mean time to peak and the total number of cases (the area under the curve of the global incidence). We considered that a city was affected if the daily incidence rate reached 1/100,000. The day
of peak was defined as the day when the incidence rate is maximal in each city.
Definition of a pandemic profile and of the outcome variables considered.
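Given simulated daily case counts per city, the five outcome variables can be computed as sketched below; the data structures and helper name are hypothetical, not the authors' code.

```python
def pandemic_outcomes(daily_cases, populations):
    """Compute the five profile outcomes from per-city incidence curves.

    daily_cases: city -> list of daily new case counts;
    populations: city -> population size.
    A city is 'affected' if its daily incidence rate reaches 1/100,000;
    its day of peak is the day of maximal incidence. Times are in days
    from the global start of the pandemic (day 0 of the curves).
    """
    THRESHOLD = 1.0 / 100_000
    affected = {c: ts for c, ts in daily_cases.items()
                if max(ts) / populations[c] >= THRESHOLD}
    peaks = {c: ts.index(max(ts)) for c, ts in affected.items()}
    active = [i for ts in affected.values() for i, v in enumerate(ts) if v > 0]
    n = len(affected)
    mean_peak = sum(peaks.values()) / n
    return {
        "total_cases": sum(sum(ts) for ts in affected.values()),
        "total_duration": max(active) - min(active),   # first to last case
        "n_cities_affected": n,
        "mean_time_to_peak": mean_peak,
        "sd_time_to_peak": (sum((p - mean_peak) ** 2
                                for p in peaks.values()) / n) ** 0.5,
    }

# Toy example: city "a" crosses the threshold, city "b" does not.
outcomes = pandemic_outcomes({"a": [0, 2, 5, 2, 0], "b": [0, 1, 0, 0, 0]},
                             {"a": 100_000, "b": 1_000_000})
```

The first three entries capture the global burden; the last two capture the dynamics across the city network, as described above.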
Clustering methods
Sets of input parameters related to the pandemic sampled using LHS were used in 1000 simulations of the model representing different possible profiles in the absence of any control measure. Typical
profiles within the first set of 1000 were identified by hierarchical classification using the Ward's minimum-variance method [28], based on the five outcome variables of the model taken in their
standardized form. This is a bottom-up method, where objects are iteratively grouped in clusters of increasing size. The algorithm starts with as many clusters as objects, each one containing one
object. At each step, the grouping is performed by minimizing the within-cluster sum of squares over all the partitions obtainable by joining two clusters from the previous step. The choice of the
number of clusters was based on the values of three criteria: the pseudo t^2 statistic; the squared multiple correlation R^2, accounting for the proportion of variance explained by the clusters; and the cubic clustering criterion (CCC), which compares the observed R^2 to the R^2 expected under a uniform distribution. We considered values of the pseudo t^2 statistic markedly smaller than the consecutive ones (as the number of clusters increases), values of R^2 greater than 0.85 and values of CCC greater than 3 as indicating a good clustering.
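Ward's bottom-up procedure can be sketched directly from its definition: repeatedly merge the pair of clusters whose fusion least increases the within-cluster sum of squares. The toy implementation below is a minimal illustration, not the statistical-software routine used in the study.

```python
def ward_cluster(points, n_clusters):
    """Bottom-up Ward clustering (minimum within-cluster variance increase).

    points: list of tuples (e.g. standardized outcome variables); returns
    a list of clusters, each a list of indices into `points`.
    """
    # each cluster is (member indices, centroid)
    clusters = [([i], list(p)) for i, p in enumerate(points)]

    def merge_cost(a, b):
        na, nb = len(a[0]), len(b[0])
        d2 = sum((x - y) ** 2 for x, y in zip(a[1], b[1]))
        return na * nb / (na + nb) * d2   # increase in within-cluster SS

    while len(clusters) > n_clusters:
        # find the pair whose merge least increases the within-cluster SS
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: merge_cost(clusters[ij[0]], clusters[ij[1]]))
        a, b = clusters[i], clusters[j]
        na, nb = len(a[0]), len(b[0])
        centroid = [(na * x + nb * y) / (na + nb) for x, y in zip(a[1], b[1])]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append((a[0] + b[0], centroid))
    return [members for members, _ in clusters]

# Two well-separated groups of simulated outcomes are recovered:
pts = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
groups = ward_cluster(pts, 2)
```

In the study the same principle is applied to 1000 simulations described by five standardized outcome variables, with the stopping point chosen by the pseudo t^2, R^2 and CCC criteria rather than a fixed cluster count.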
Once the different clusters were identified, a typical profile for the simulated epidemic was determined in each cluster to allow them to be analyzed separately. Since the mean of each cluster was
not necessarily a simulated scenario, we selected the trajectory with the minimum sum of squared deviations of the five standardized outcome variables from the cluster mean. The mean of a cluster was
defined as the vector of the means of the five output variables. The reproductive rate R in the emerging city at the very beginning of the pandemic was calculated for each profile using a formula
that connects it to the rate (r) of the exponential increase of an epidemic in its initial phase. We fitted gamma distributions to the empirical discrete distributions of latent and infectious durations (Γ(k[1],θ[1]) and Γ(k[2],θ[2]) respectively) and used the corresponding exact expression to compute R [29]:

R = (1 + rθ[1])^k[1] · rT[I] / [1 − (1 + rθ[2])^(−k[2])]

where T[E] = k[1]θ[1] and T[I] = k[2]θ[2] are the mean durations of the latent and infectious phases respectively.
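Assuming the standard closed form relating R to r for gamma-distributed stage durations (the kind of exact expression cited from [29]), R can be evaluated as follows; it reduces to the familiar (1 + rT[E])(1 + rT[I]) in the exponential case k[1] = k[2] = 1.

```python
def reproductive_number(r, k1, theta1, k2, theta2):
    """Exact R from the initial exponential growth rate r, assuming
    Gamma(k1, theta1) latent and Gamma(k2, theta2) infectious periods
    (shape, scale). T_I = k2 * theta2 is the mean infectious period.
    """
    T_I = k2 * theta2
    latent_term = (1.0 + r * theta1) ** k1
    infectious_term = r * T_I / (1.0 - (1.0 + r * theta2) ** (-k2))
    return latent_term * infectious_term

# Exponential special case (k1 = k2 = 1): R = (1 + r*T_E) * (1 + r*T_I)
R = reproductive_number(r=0.1, k1=1, theta1=1.5, k2=1, theta2=3.0)
```

The latent term is the moment generating function of the latent-period distribution evaluated at r, and the infectious term accounts for transmission discounted over the infectious period, so R grows with r for fixed period distributions.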
Multivariate sensitivity analysis
Two successive multivariate sensitivity analyses were performed, one to identify the input parameters with the greatest influence on the diffusion profile of the pandemic and the other to study the
impact of each control measure on each pandemic profile. In both cases, we calculated Partial Rank Correlation Coefficients (PRCCs) between input parameters and output variables. PRCC measures the
influence of uncertainty in estimating the values of the input parameter on the imprecision in predicting the value of the output variable [26], [30]. We considered values of PRCC greater than 0.4 as
indicating an important correlation between input parameters and output variables and values between 0.2 and 0.4 a moderate correlation.
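For intuition, a PRCC can be computed by rank-transforming all variables and then taking a partial correlation. The sketch below handles the two-input case with the classical partial-correlation formula and assumes no ties; the study used standard statistical software, not this code.

```python
def _ranks(xs):
    # simple ranks, assuming no ties (enough for a sketch)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def _pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def prcc(x, y, z):
    """Partial rank correlation of input x with output y, controlling
    for the other input z (two-parameter illustration)."""
    rx, ry, rz = _ranks(x), _ranks(y), _ranks(z)
    rxy, rxz, ryz = _pearson(rx, ry), _pearson(rx, rz), _pearson(ry, rz)
    return (rxy - rxz * ryz) / ((1 - rxz**2) ** 0.5 * (1 - ryz**2) ** 0.5)
```

With many inputs, the partial correlation is instead obtained by regressing the ranks of x and y on the ranks of all other inputs and correlating the residuals; the two-input formula above is the simplest instance of that idea.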
SAS statistical software (version 9.1) and R statistical package (R Development Core Team; R Foundation for Statistical Computing, Vienna, Austria [http://www.R-project.org]) were used for all
statistical analyses.
Pandemic profiles
Clustering the first set of 1000 simulations identified six groups of pandemic profiles that could occur in the absence of any control measure.
As reproduced in Table 1, according to the values of the clustering criteria, the set of simulated dynamics was split into six subsets: this partition corresponds to a marked decrease in the pseudo t^2 statistic and is the first for which R^2 exceeds the 0.85 threshold.
Last 10 generations of the clustering history.
As is shown in Figure 3, where axes represent three of the discriminating criteria, profiles could be grouped based on (i) the total number of cases: massive pandemics (group A), moderate pandemics
(groups B, C and D) and mild pandemics (groups E and F), (ii) duration (groups A and F distinct from groups B, D and E), and (iii) the mean time to peak (groups A and C distinct from groups B and E).
Results of the clustering analysis: the six profiles (profile A in red, B in green, C in blue, D in light blue, E in pink and F in orange) are represented according to three criteria: the total duration, the total number of cases and the mean time to peak.
Table 2 contains the characteristics of the six profiles identified as representatives of their respective groups. Figure 4 shows disease incidence over time in the identified profiles in the absence
of any control measures.
Incidence curves of pandemic profiles identified over 1000 simulated dynamics without control measures.
Pandemic profiles-corresponding values of input parameters and outcome variables (all time variables are expressed in days).
Profile F corresponds to a situation where, despite initial cases in the city of emergence, the pandemic does not take off. In this case, the number of cases is around 415 (the corresponding reproductive number R[F] is given in Table 2) and the curve in Figure 4 is indistinguishable from the x-axis. This scenario, with fewer than 500 cases in a single city, is not strictly speaking a pandemic, but rather an influenza outbreak.
Profile A corresponds to a rapidly propagating pandemic with high attack rates. The spread of A is detailed in Figure 5. In this case, 86% of individuals are susceptible and the rate of transmission at emergence is 1.37 (the corresponding reproductive number R[A] is given in Table 2).
Spatial and temporal spread of Profile A.
Profile B corresponds to a progressive and long-lasting pandemic (Figure 6). In this case, 39% of the global population is susceptible, with a lower rate of transmission at emergence (1.13) and a lower reproductive number R[B] (Table 2).
Spatial and temporal spread of Profile B.
Profiles C, D and E lie between these two extremes (represented by profiles A and B) in terms of global burden and total duration (values of R[C], R[D] and R[E] are given in Table 2).
Input parameters influencing the pandemic profile
Table 3 shows the correlations between the input parameters and the outcome variables. The basic transmission rate (the rate of transmission in the absence of any seasonality) and the initial proportion of susceptibles correlated most strongly with outcomes. The greatest correlation was between the basic transmission rate and the total number of cases.
Absolute values of PRCCs between parameters related to the pandemic and outcome variables.
Correlation of control measures with pandemic outcomes
The correlation of interventions with pandemic outcomes was examined in profiles A and B (Figures 5 and 6 respectively). The PRCCs between input parameters and output variables are summarized in Tables 4 and 5, respectively.
Absolute values of PRCCs between parameters related to the control measures and outcome variables for Profile A, corresponding to a fast and massive pandemic.
Absolute values of PRCCs between parameters related to the control measures and outcome variables for Profile B, corresponding to a long-lasting pandemic.
Regardless of the profile, restricting air travel (whether expressed as the proportion of traffic reduced or as the date of introduction of the restriction) had no impact on the global burden of the pandemic. Only the date at which travel restrictions were introduced correlated slightly with the number of cities affected (profile A; see Table 4).
The other main finding is that early introduction of the other control measures is the most important factor in reducing the number of infections, regardless of the profile and for all interventions considered. In profile A, the date of introduction mainly affected the number of cases, the number of cities affected and the duration (PRCCs ranging from 0.28 to 0.76, from 0.23 to 0.73 and from 0.15 to 0.58 respectively), and other outcomes also showed important correlations. In profile B, the date of introduction of control measures (again excepting travel limitation) correlated slightly less with outcomes, and in a more homogeneous manner (PRCCs for all output variables in the range 0.14–0.44).
Apart from air traffic reductions, the effectiveness of control measures varied depending on the pandemic profile. In case of a fast and massive pandemic (profile A), efficacy and coverage play a moderate role for several interventions, whereas in a progressive and long-lasting pandemic (profile B), such correlations do not clearly appear, except for the speed of intervention (as mentioned above) and pre-pandemic vaccination. In this case, PRCCs show a moderate correlation between the efficacy of the pre-pandemic vaccine and the total number of cases, the standard deviation of the time to peak, and the number of cities affected (see Table 5).
For profile A, the PRCC of the efficacy of every intervention is higher than 0.10 for at least one of the outcome variables. In terms of theoretical efficacies, the interventions having an impact on the pandemic dynamics are masks (PRCC>0.25 for the total number of cases, the duration and the number of cities affected) and antiviral therapy (see Table 4).
The proportions of the target populations to which interventions are applied are also correlated with outcomes: the coverage of prophylaxis has the greatest impact on all criteria (PRCCs between 0.23 and 0.56), but the coverages of pandemic vaccination, antiviral therapy, mask use and isolation also influence the pandemic dynamics (PRCCs with the total duration of 0.33, 0.26, 0.29 and 0.37 respectively). Profile A is also characterized by moderate correlations between the global effect of pre-pandemic vaccination and the total numbers of cases and of cities affected (PRCCs equal to 0.30 and 0.27 respectively).
From the point of view of the output variables, the global pandemic burden and the total duration seem to concentrate most of the impact of the input parameters. However, this pattern is less obvious for profile B.
Using a mathematical model, we identified six typical profiles of geographical and temporal spread of an influenza pandemic, and the two key parameters influencing these profiles: the proportion of
susceptible individuals in the initial population and the basic rate of transmission between individuals. Supplementary analyses performed separately on each of two selected profiles suggest that the
variation in the impact of pandemic control measures and the spatial-temporal pattern subsequent to their implementation depend on the pandemic profile.
Although not unexpected, the importance of the proportion of susceptible individuals in the population may have important policy implications. The fact that not all individuals are susceptible to the
pandemic strain represents cross-immunity with previously circulating viruses. This assumption is supported, for instance, by what was observed during the 1968 A/H3N2 pandemic in the United States: a reduced mortality burden with respect to that of the previous pandemic, which occurred in 1957 and was caused by an A/H2N2 strain sharing the N2 neuraminidase. One possible explanation is that the human population was partially protected in 1968 against the H3N2 strain by antibodies to the N2 antigen acquired after the 1957 pandemic [31]. In large urban areas or mega-cities, the pandemic virus will continue to spread even if
only a small proportion of the population is susceptible, but it will not in less populated areas. Where resources are potentially limited, these results stress the importance of focusing control
efforts on densely populated areas. Targeting high transmitters such as children would be an equally important step to limit transmission, since transmission rate was also identified as being
strongly correlated with pandemic outputs.
The central role of the proportion of susceptibles also indirectly illustrates the potential benefits of pre-pandemic vaccination, which aims to reduce the susceptibility of individuals before the
emergence of the pandemic strain. It is therefore not surprising that in this model, pre-pandemic vaccination correlates with the number of cases whatever the pandemic profile. Although the efficacy
of pre-pandemic vaccine remains uncertain, pre-pandemic vaccination should still be useful even at a low level of efficacy [27]. As our simulation results suggest, it could be beneficial if, on
average, complete protection is conferred to at least a proportion of population ranging from 0 to 0.2. In addition, for a given duration of infection and a specific transmission rate, there is a
minimum threshold of susceptible individuals in the city of emergence required for virus propagation, and hence the spread of the pandemic itself. As illustrated by profile F, with a basic rate of
transmission of 0.84 and 29% of the population susceptible in the city of emergence (much lower than the required threshold) only one city and 415 individuals were affected. Any interventions which
might lower the number of susceptible individuals below this theoretical threshold might go a great way to preventing a pandemic.
It is also noteworthy that the city of emergence, the month of emergence and seasonality do not play a major role in the profile of a pandemic. According to field evidence, it seems that pandemic flu
is more likely to start in a region where there is close proximity between humans and poultry, a point that was not explicitly included in our modelling approach. However, our simulation analysis shows that a pandemic is likely to occur independently of the characteristics of the city of origin, such as its size or its number of air connections. Since the best way to mitigate its consequences is to contain it at source [3]–[4], this highlights the importance of having every country as prepared as possible to react quickly if the pandemic emerges on its soil.
The variation of the mean durations of the latent and infectious periods also did not result in significant PRCC values with any of the outcome variables. This finding, somewhat surprising at first glance, could have at least two explanations: 1) the relatively small ranges of variation (between 1.2 and 1.9 for T[E] and from 2.5 to 4 for T[I], values taken from the literature [3]–[4]); and 2) the relatively weak impact of variation in these parameters compared, for instance, to that of the initial proportion of susceptibles, within a multivariate sensitivity analysis. This last point is supported by high correlation coefficients of T[E] and T[I] with the total number of cases (0.93 and 0.88 respectively, data not shown) but weak relative variation of the global burden (factors of 1.04 and 1.38 when the bounds of the variation intervals are considered for T[E] and T[I] respectively) computed in a univariate analysis.
Our results also suggest that travel restrictions would have a limited impact on the spatial and temporal diffusion of an influenza pandemic. Indeed, regardless of the pandemic profile, restricting
air travel in our model has little effect on the global burden of the pandemic. Such restrictions have significant logistical, ethical and economic implications and their impact on an influenza
pandemic is currently debated [6], [9], [12], [13], [20], [23], [32].
Our research also highlights the importance of a timely response. Regardless of the spatial-temporal profile, the timing of interventions is crucial, underlining the need for vigilant and sensitive
surveillance to ensure early detection and a timely response. It also stresses the added value of pre-pandemic vaccination, which can be used immediately, even if it is less efficacious than the matched pandemic vaccine, which may take several months to be produced and distributed. At this stage, however, it is impossible to predict what proportion of susceptibles would actually be immunized by a vaccine based on a pre-pandemic strain, and the effectiveness of this control measure is strongly correlated with this missing information. The decision to use such a pre-pandemic vaccine should probably rely on preliminary immunogenicity studies.
The date of introduction of most of the control measures considered correlated with pandemic outcomes whatever the pandemic profile, although coverage and theoretical efficacy were more strongly correlated with the outcomes of a fast, massive pandemic than with those of a long-lasting pandemic. This can be interpreted as the need for a control measure to be applied at a very large scale to have a real impact in the case of a massive pandemic. It supports the idea that a very aggressive pandemic will be very difficult to mitigate given the constraints on resource availability [5]–[6]. Conversely, this result stresses the value of measures not relying on stockpiled resources, such as isolation [18], a measure whose coverage correlated moderately with the total duration in the case of a massive pandemic. Regardless of the profile, the date of introduction of isolation also correlated with outcomes. When evaluating the potential impact of isolation measures, one should keep in mind that
their outcome could be influenced by pre-symptomatic or asymptomatic individuals, as discussed in Fraser et al. [33]. Here, we assumed that only symptomatic individuals, who become infectious at the end of the incubation period, transmit. According to experimental and observational studies, viral shedding begins at low levels shortly before the onset of symptoms [34]. However, the public health impact of pre-symptomatic transmitters remains unclear and cannot be quantified precisely, since there are few field studies reporting infections from such infected individuals [35]. Nevertheless, considering the potential importance of such transmitters for the outcome of isolation-like interventions, we accounted for them indirectly by assuming that the efficacy of isolation could not be greater than 70%.
When interpreting the results of this analysis, it must be remembered that most are expressed in terms of correlation with outcomes and not in terms of level of impact. Our correlation results express the ability to improve outcomes as a control measure is more (or less) extensively used (rank correlation). A low correlation coefficient does not necessarily mean an absence of impact; it means that increasing the use of the control measure is not systematically beneficial.
Beyond the results for any one specific measure, our analysis highlights the value, for every country looking to limit the potentially devastating consequences of a pandemic, of 1) not relying on a single control measure but using them all to complement each other, 2) being prepared with response planning and stockpiling of antivirals and vaccines, and 3) monitoring the progression of the pandemic and adapting the response to its profile.
The general applicability of our conclusions may be limited by the following considerations. Firstly, we used air travel data from 2000 and for 52 global cities. Although updating the air travel data and including more cities in the model might improve its accuracy, these values were chosen to be representative of global air travel volume and world geography. Secondly, we used a deterministic, discrete-time formulation that has been shown to be suitable for large populations. Since the focus of this research was not the internal epidemic dynamics within cities but rather the global spread, this type of approach seems appropriate. Nevertheless, since we extensively explored the model's behaviour through multivariate sensitivity analysis, we can be confident that our modelling approach reproduced a range of realistic potential scenarios and provides, in this sense, a panel of pandemic dynamics analogous to that of a fully stochastic model. The fact that our analyses led to conclusions similar to those of previous studies using slightly different methodologies does not guarantee that they are realistic, but points to their probable robustness.
In conclusion, our key finding concerning the dependence of the efficiency of interventions on the pandemic profile demonstrates the critical importance of developing tools for early-stage
identification of the pandemic profile in order to adapt the public health response in as timely a manner as possible.
The authors would like to thank Fabian Alvarez, Judith Legrand, Raphaël Porcher and Matthieu Resche-Rigon for their insightful comments and help on the figures. The authors also thank Dr. Christopher
Fraser and an anonymous referee for their helpful suggestions and comments.
Competing Interests: Elisabeta Vergu, Rebecca F. Grais, Pierre-Yves Boëlle and Antoine Flahault have received fees for consultancies from Sanofi Pasteur in the past five years. Antoine Flahault has
received a research grant from Novartis vaccines.
Funding: Solen Kernéis received a research grant from the Fondation Recherche Médicale. The views expressed in this article are those of the authors.
Kilbourne ED. Influenza pandemics of the 20th century. Emerg Infect Dis. 2006;12:9–14. [PMC free article] [PubMed]
probability inequality involving function
January 9th 2011, 09:36 PM
probability inequality involving function
Dear All,
I have read somewhere that if f is a non-decreasing function then
$p(X > \lambda) = p(f(X)>f( \lambda))$, and elsewhere it is written that
$p(X > \lambda) \le p(f(X)>f( \lambda))$.
My question is: why would this hold, and which statement is correct?
Can someone explain in detail?
January 9th 2011, 09:44 PM
Neither is true. Consider f(x) = 0 for all values of x.
Do you mean f is strictly increasing?
In which case both are true, since f(x) > f(y) iff x > y.
January 9th 2011, 09:45 PM
It has more to do with set theory
If a<b and f is strictly increasing then f(a)<f(b)
If f is strictly increasing then the two sets are equal... $\{x: x>a\} =\{x: f(x)>f(a)\}$
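A quick numerical illustration of the replies above (not from the original thread; the normal distribution and the functions below are arbitrary choices):

```python
import random

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(100_000)]
lam = 0.5

def f(t):
    return t**3 + t          # strictly increasing on all of R

# Strictly increasing f: {x : x > lam} = {x : f(x) > f(lam)},
# so the two empirical probabilities agree draw by draw.
p_x = sum(x > lam for x in xs) / len(xs)
p_fx = sum(f(x) > f(lam) for x in xs) / len(xs)
print(p_x == p_fx)           # True

def g(t):
    return 0.0               # non-decreasing but not strictly increasing

# With a constant g the event {g(X) > g(lam)} is empty, so neither
# the claimed equality nor the claimed inequality holds.
p_gx = sum(g(x) > g(lam) for x in xs) / len(xs)
print(p_gx)                  # 0.0
```

The first comparison is exact rather than approximate because the two events coincide sample by sample when f is strictly increasing.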
January 9th 2011, 09:56 PM
Thanks for the reply, but could you please check the following link; the inequality is stated there.
January 27th 2011, 01:46 AM | {"url":"http://mathhelpforum.com/advanced-statistics/167935-probability-inequality-invovling-function-print.html","timestamp":"2014-04-16T05:33:16Z","content_type":null,"content_length":"7901","record_id":"<urn:uuid:0068aa54-eb92-4309-934f-1cf75b03481f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tue, 21/04/2009 - 23:55
Quick description
Suppose that you are trying to bound the size of a sum of a product of two functions, one reasonably smooth and the other reasonably oscillating, and suppose that you believe that the oscillations
give rise to cancellation that causes the sum to be small. Then a technique similar to integration by parts may well give rise to an efficient way of proving this.
Prerequisites
Basic real analysis. Complex numbers.
Example 1: Abel summation
Let $z$ be a complex number of modulus $1$ but not equal to $1$, and let $a_1, a_2, a_3, \dots$ be a decreasing sequence of positive real numbers tending to zero. Then the sum $\sum_{n=1}^{\infty} a_n z^n$ converges.
One can give a geometrical answer to the second question. If you plot the points $a_1z$, $a_1z+a_2z^2$, $a_1z+a_2z^2+a_3z^3$, and so on, drawing straight line segments between them, then you obtain a piecewise linear curve that seems to spiral inwards to a point. (The closer $z$ is to $1$, the bigger this "spiral" tends to be.)
How about a rigorous proof? Well, the observation on which the proof is based is that the set of numbers of the form $z + z^2 + \cdots + z^n$ is bounded. Indeed, the formula for summing a geometric progression tells us that
$z + z^2 + \cdots + z^n = \frac{z(1-z^n)}{1-z},$
which has modulus at most $\frac{2}{|1-z|}$.
Since we know how to work out sums of the above form, it makes sense to try to use this information to investigate the sum $\sum_n a_n z^n$, which we do by breaking it up into sums of the form we like. We can set aside the convergence issues for now and just look at the finite sum $\sum_{n=1}^{N} a_n z^n$. In order to split this up into polynomials with constant coefficients, we begin by noting that the sequence $(a_1, a_2, \dots, a_N)$ can be split up as
$(a_1-a_2)(1,0,\dots,0) + (a_2-a_3)(1,1,0,\dots,0) + \cdots + (a_{N-1}-a_N)(1,1,\dots,1,0) + a_N(1,1,\dots,1).$
The best motivation for this splitting comes from drawing a picture of the "graph" of the sequence $(a_n)$, which we are chopping up horizontally.
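(A numerical aside, not part of the article.) Taking $a_n = 1/n$ and $z = e^{0.3i}$ for illustration, summation by parts bounds the tail $\left|\sum_{n=N+1}^{M} a_n z^n\right|$ by $2a_{N+1}/|1-z|$, and for this choice of $a_n$ the series converges to $-\log(1-z)$:

```python
import cmath

z = cmath.exp(0.3j)              # modulus 1, not equal to 1

def partial(N):
    """Partial sum of sum_{n=1}^{N} z**n / n, i.e. a_n = 1/n."""
    return sum(z**n / n for n in range(1, N + 1))

# Summation by parts gives |S_M - S_N| <= 2 * a_{N+1} / |1 - z|.
S1, S2 = partial(10_000), partial(20_000)
bound = 2 * (1 / 10_001) / abs(1 - z)
print(abs(S2 - S1) <= bound)     # True

# For a_n = 1/n the limit is -log(1 - z), so S2 is already close to it.
print(abs(S2 - (-cmath.log(1 - z))) < 1e-3)  # True
```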
Attention This article is in need of attention. If anyone knows how to produce a nice picture for this it will really help.
To be continued soon.
{"url":"http://www.tricki.org/node/235/revisions/2505/view","timestamp":"2014-04-21T05:14:03Z","content_type":null,"content_length":"22222","record_id":"<urn:uuid:c8a8be49-54fb-479b-90b5-36ee640556e5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Confidence interval
In statistics, a confidence interval (CI) is an interval estimate of a population parameter. Instead of estimating the parameter by a single value, an interval likely to include the parameter is
given. Thus, confidence intervals are used to indicate the reliability of an estimate. How likely the interval is to contain the parameter is determined by the confidence level or confidence
coefficient. Increasing the desired confidence level will widen the confidence interval.
For example, a CI can be used to describe how reliable survey results are. In a poll of election voting-intentions, the result might be that 40% of respondents intend to vote for a certain party. A
95% confidence interval for the proportion in the whole population having the same intention on the survey date might be 36% to 44%. All other things being equal, a survey result with a small CI is
more reliable than a result with a large CI and one of the main things controlling this width in the case of population surveys is the size of the sample questioned. Confidence intervals and interval
estimates more generally have applications across the whole range of quantitative studies.
In the above, the 95% associated with the confidence interval is called the confidence level of the interval: this is defined formally below.
Brief explanation
For a given proportion p (where p is the confidence level), a confidence interval for a population parameter is an interval that is calculated from a random sample of an underlying population such
that, if the sampling was repeated numerous times and the confidence interval recalculated from each sample according to the same method, a proportion p of the confidence intervals would contain the
population parameter in question. In unusual cases, a confidence set may consist of a collection of several separate intervals, which may include semi-infinite intervals, and it is possible that an
outcome of a confidence-interval calculation could be the set of all values from minus infinity to plus infinity.
Confidence intervals are the most prevalent form of interval estimation. Interval estimates may be contrasted with point estimates and have the advantage over these as summaries of a dataset in that
they convey more information – not just a "best estimate" of a parameter but an indication of the precision with which the parameter is known.
Confidence intervals play a similar role in frequentist statistics to the credibility interval in Bayesian statistics. However, confidence intervals and credibility intervals are not only
mathematically different; they have radically different interpretations.
Confidence regions generalise the confidence interval concept to deal with multiple quantities. Such regions can indicate not only the extent of likely estimation errors but can also reveal whether
(for example) the estimate for one quantity is too large then the other is also likely to be too large. See also confidence bands.
In applied practice, confidence intervals are typically stated at the 95% confidence level. However, when presented graphically, confidence intervals can show several confidence levels, for example
50%, 95% and 99%.
Theoretical basis
Confidence intervals as random intervals
Confidence intervals are constructed on the basis of a given dataset: x denotes the set of observations in the dataset, and X is used when considering the outcomes that might have been observed from
the same population, where X is treated as a random variable whose observed outcome is X = x. A confidence interval is specified by a pair of functions u(.) and v(.) and the confidence interval for
the given data set is defined as the interval (u(x), v(x)). To complete the definition of a confidence interval, there needs to be a clear understanding of the quantity for which the CI provides an
interval estimate. Suppose this quantity is w. The property of the rules u(.) and v(.) that makes the interval (u(x),v(x)) closest to what a confidence interval for w would be, relates to the
properties of the set of random intervals given by (u(X), v(X)): that is, treating the end-points as random variables. This property is the coverage probability, i.e. the probability c that the random interval includes w:

$c = \Pr(u(X) < w < v(X)).$

Here the endpoints U = u(X) and V = v(X) are statistics (i.e., observable random variables) which are derived from values in the dataset. The random interval is (U, V).
Confidence intervals for inference
For the above to provide a viable means to statistical inference, something further is required: a tie between the quantity being estimated and the probability distribution of the outcome X. Suppose
that this probability distribution is characterised by the unobservable parameter θ, which is a quantity to be estimated, and by other unobservable parameters φ which are not of immediate interest.
These other quantities φ in which there is no immediate interest are called nuisance parameters, as statistical theory still needs to find some way to deal with them.
The definition of a confidence interval for θ is, for a given α,

$\Pr_{X;\theta,\phi}(u(X) < \theta < v(X)) = 1-\alpha$ for all $(\theta,\phi)$.

The number $1-\alpha$ (sometimes reported as a percentage, $100\%\cdot(1-\alpha)$) is called the confidence level or confidence coefficient. Most standard books adopt this convention, where α will be a small number. Here $\Pr_{X;\theta,\phi}$ is used to indicate the probability when the random variable X has the distribution characterised by $(\theta,\phi)$. An important part of this specification is that the random interval (U, V) covers the unknown value θ with a high probability no matter what the true value of θ actually is.

Note that here $\Pr_{X;\theta,\phi}$ need not refer to an explicitly given parameterised family of distributions, although it often does. Just as the random variable X notionally corresponds to other possible realisations of x from the same population or from the same version of reality, the parameters $(\theta,\phi)$ indicate that we need to consider other versions of reality in which the distribution of X might have different characteristics.
Intervals for random outcomes
Confidence intervals can be defined for random quantities as well as for fixed quantities as in the above. See prediction interval. For this, consider an additional single-valued random variable $Y$, which may or may not be statistically dependent on $X$. Then the rule for constructing the interval $(u(x), v(x))$ provides a confidence interval for the as-yet-to-be-observed value $y$ of $Y$ if

$\Pr_{X,Y;\theta,\phi}(u(X) < Y < v(X)) = 1-\alpha$ for all $(\theta,\phi)$.

Here $\Pr_{X,Y;\theta,\phi}$ is used to indicate the probability over the joint distribution of the random variables (X, Y) when this is characterised by parameters $(\theta,\phi)$.
Approximate confidence intervals
For non-standard applications it is sometimes not possible to find rules for constructing confidence intervals that have exactly the required properties. But practically useful intervals can still be found. The coverage probability $c(\theta,\phi)$ for a random interval is defined by

$c(\theta,\phi) = \Pr_{X;\theta,\phi}(u(X) < \theta < v(X)),$

and the rule for constructing the interval may be accepted as providing a confidence interval if

$c(\theta,\phi) \approxeq 1-\alpha$ for all $(\theta,\phi)$

to an acceptable level of approximation.
Comparison to Bayesian interval estimates
A Bayesian interval estimate is called a credible interval. Using much of the same notation as above, the definition of a credible interval for the unknown true value of θ is, for a given α,

$\Pr(u(x) < \Theta < v(x) \mid X = x) = 1-\alpha.$

Here Θ is used to emphasize that the unknown value of $\theta$ is being treated as a random variable. The definitions of the two types of intervals may be compared as follows.
• The definition of a confidence interval involves probabilities calculated from the distribution of X for given $(\theta,\phi)$ (or conditional on these values) and the condition needs to hold for all values of $(\theta,\phi)$.
• The definition of a credible interval involves probabilities calculated from the distribution of Θ conditional on the observed values of X = x and marginalised (or averaged) over the values of $\Phi$, where this last quantity is the random variable corresponding to the uncertainty about the nuisance parameters in $\phi$.
Note that the treatment of the nuisance parameters above is often omitted from discussions comparing confidence and credible intervals but it is markedly different between the two cases.
In some simple standard cases, the intervals produced as confidence and credible intervals from the same data set can be identical. They are always very different if moderate or strong prior
information is included in the Bayesian analysis.
Desirable properties
When applying fairly standard statistical procedures, there will often be fairly standard ways of constructing confidence intervals. These will have been devised so as to meet certain desirable
properties, which will hold given that the assumptions on which the procedure rely are true. In non-standard applications, the same desirable properties would be sought. These desirable properties
may be described as: validity, optimality and invariance. Of these "validity" is most important, followed closely by "optimality". "Invariance" may be considered as a property of the method of
derivation of a confidence interval rather than of the rule for constructing the interval.
• Validity. This means that the nominal coverage probability (confidence level) of the confidence interval should hold, either exactly or to a good approximation.
• Optimality. This means that the rule for constructing the confidence interval should make as much use of the information in the data-set as possible. Recall that one could throw away half of a
dataset and still be able to derive a valid confidence interval. One way of assessing optimality is by the length of the interval, so that a rule for constructing a confidence interval is judged
better than another if it leads to intervals whose widths are typically shorter.
• Invariance. In many applications the quantity being estimated might not be tightly defined as such. For example, a survey might result in an estimate of the median income in a population, but it
might equally be considered as providing an estimate of the logarithm of the median income, given that this is a common scale for presenting graphical results. It would be desirable that the
method used for constructing a confidence interval for the median income would give equivalent results when applied to constructing a confidence interval for the logarithm of the median income:
specifically the values at the ends of the latter interval would be the logarithms of the values at the ends of former interval.
Methods of derivation
For non-standard applications, there are several routes that might be taken to derive a rule for the construction of confidence intervals. Established rules for standard procedures might be justified or explained via several of these routes. Typically a rule for constructing confidence intervals is closely tied to a particular way of finding a point estimate of the quantity being considered.
• Sample statistics: This is closely related to the method of moments for estimation. A simple example arises where the quantity to be estimated is the mean, in which case a natural estimate is the sample mean. The usual arguments indicate that the sample variance can be used to estimate the variance of the sample mean. A naive confidence interval for the true mean can be constructed centered on the sample mean with a width which is a multiple of the square root of the sample variance.
• Likelihood theory: Where estimates are constructed using the maximum likelihood principle, the theory for this provides two ways of constructing confidence intervals or confidence regions for the estimates.
• Estimating equations: The estimation approach here can be considered as both a generalization of the method of moments and a generalization of the maximum likelihood approach. There are corresponding generalizations of the results of maximum likelihood theory that allow confidence intervals to be constructed based on estimates derived from estimating equations.
• Via significance testing: If significance tests are available for general values of a parameter, then confidence intervals/regions can be constructed by including in the 100p% confidence region all those points for which the significance test of the null hypothesis that the true value is the given value is not rejected at a significance level of (1 − p).
Practical example
A machine fills cups with margarine, and is supposed to be adjusted so that the mean content of the cups is close to 250 grams of margarine. Of course it is not possible to fill every cup with exactly 250 grams of margarine. Hence the weight of the filling can be considered to be a random variable $X$. The distribution of $X$ is assumed here to be a normal distribution with unknown expectation μ and (for the sake of simplicity) known standard deviation σ = 2.5 grams. To check if the machine is adequately adjusted, a sample of $n = 25$ cups of margarine is chosen at random and the cups are weighed. The weights of the margarine are $X_1,\dots,X_{25}$, a random sample from $X$.
To get an impression of the expectation μ, it is sufficient to give an estimate. The appropriate estimator is the sample mean:

$\hat\mu = \bar X = \frac{1}{n}\sum_{i=1}^n X_i.$

The sample shows actual weights $x_1,\dots,x_{25}$, with mean:

$\bar x = \frac{1}{25}\sum_{i=1}^{25} x_i = 250.2\,\mathrm{grams}.$
If we take another sample of 25 cups, we could easily expect to find mean values like 250.4 or 251.1 grams. A sample mean value of 280 grams however would be extremely rare if the mean content of the cups is in fact close to 250 grams. There is a whole interval around the observed value 250.2 of the sample mean within which, if the whole population mean actually takes a value in this range, the observed data would not be considered particularly unusual. Such an interval is called a confidence interval for the parameter μ. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample $X_1,\dots,X_{25}$ and hence random variables themselves.

In our case we may determine the endpoints by considering that the sample mean $\bar X$ from a normally distributed sample is also normally distributed, with the same expectation μ, but with standard error $\sigma/\sqrt{n} = 0.5$ (grams). By standardizing we get a random variable

$Z = \frac{\bar X-\mu}{\sigma/\sqrt{n}} = \frac{\bar X-\mu}{0.5}$

dependent on μ, but with a standard normal distribution independent of the parameter μ to be estimated. Hence it is possible to find numbers −z and z, independent of μ, such that Z lies in between with probability 1 − α, a measure of how confident we want to be. We take 1 − α = 0.95. So we have:

$P(-z \le Z \le z) = 1-\alpha = 0.95.$
The number $z$ follows from the cumulative distribution function:

$\Phi(z) = P(Z \le z) = 1 - \frac{\alpha}{2} = 0.975,$

$z = \Phi^{-1}(\Phi(z)) = \Phi^{-1}(0.975) = 1.96$

(see probit and cumulative distribution function), and we get:

$0.95 = 1-\alpha = P(-z \le Z \le z) = P\left(-1.96 \le \frac{\bar X-\mu}{\sigma/\sqrt{n}} \le 1.96\right)$
$= P\left(\bar X - 1.96\,\frac{\sigma}{\sqrt{n}} \le \mu \le \bar X + 1.96\,\frac{\sigma}{\sqrt{n}}\right)$
$= P\left(\bar X - 1.96 \times 0.5 \le \mu \le \bar X + 1.96 \times 0.5\right)$
$= P\left(\bar X - 0.98 \le \mu \le \bar X + 0.98\right).$
This might be interpreted as: with probability 0.95 we will choose a confidence interval in which the parameter μ lies between the stochastic endpoints

$\bar X - 0.98$

and

$\bar X + 0.98,$

but that does not mean that the probability that μ lies in one particular computed interval is 95%.
Every time the measurements are repeated, there will be another value for the mean $\bar X$ of the sample. In 95% of the cases μ will be between the endpoints calculated from this mean, but in 5% of the cases it will not be. The actual confidence interval is calculated by entering the measured weights in the formula. Our 0.95 confidence interval becomes:

$(\bar x - 0.98;\ \bar x + 0.98) = (250.2 - 0.98;\ 250.2 + 0.98) = (249.22;\ 251.18).$
This interval has fixed endpoints, where μ might be in between (or not). There is no probability of such an event. We cannot say: "with probability (1 − α) the parameter μ lies in the confidence
interval." We only know that by repetition in 100(1 − α) % of the cases μ will be in the calculated interval. In 100α % of the cases however it doesn't. And unfortunately we don't know in which of
the cases this happens. That's why we say: "with confidence level 100(1 − α) % μ lies in the confidence interval."
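The arithmetic of this example is easy to check; a minimal Python sketch using only the standard library (NormalDist supplies the 0.975 quantile that plays the role of z):

```python
from statistics import NormalDist

sigma, n, x_bar = 2.5, 25, 250.2
z = NormalDist().inv_cdf(0.975)          # about 1.96

half_width = z * sigma / n**0.5          # about 1.96 * 0.5 = 0.98
lo, hi = x_bar - half_width, x_bar + half_width
print(round(lo, 2), round(hi, 2))        # 249.22 251.18
```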
The figure on the right shows 50 realisations of a confidence interval for a given population mean μ. If we randomly choose one realisation, the probability is 95% we end up having chosen an interval
that contains the parameter; however we may be unlucky and have picked the wrong one. We'll never know; we're stuck with our interval.
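The repeated-sampling interpretation can also be checked by simulation. The sketch below (illustrative only, with the true mean fixed at 250 g) draws many samples of 25 cups and counts how often the computed interval covers μ:

```python
import random
from statistics import NormalDist, fmean

random.seed(42)
mu, sigma, n = 250.0, 2.5, 25
half = NormalDist().inv_cdf(0.975) * sigma / n**0.5   # half-width, about 0.98

trials = 10_000
covered = sum(
    1
    for _ in range(trials)
    if abs(fmean(random.gauss(mu, sigma) for _ in range(n)) - mu) <= half
)
print(covered / trials)          # close to 0.95, as the confidence level promises
```

Each individual interval either covers 250 g or it does not; the 95% shows up only in the long-run fraction of intervals that do.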
Theoretical example
Suppose $X_1,\dots,X_n$ are an independent sample from a normally distributed population with mean μ and variance $\sigma^2$. Let

$\bar X = \frac{1}{n}\sum_{i=1}^{n} X_i$ (the sample mean) and $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar X)^2$ (the sample variance). Then

$T = \frac{\bar X - \mu}{S/\sqrt{n}}$

has a Student's t-distribution with n − 1 degrees of freedom. Note that the distribution of T does not depend on the values of the unobservable parameters μ and σ^2; i.e., it is a pivotal quantity. If c is the 95th percentile of this distribution, then

$P(-c < T < c) = 0.9.$

(Note: "95th" and "0.9" are correct in the preceding expressions. There is a 5% chance that T will be less than −c and a 5% chance that it will be larger than +c. Thus, the probability that T will be between −c and +c is 90%.)

Consequently,

$P\left(\bar X - \frac{cS}{\sqrt{n}} < \mu < \bar X + \frac{cS}{\sqrt{n}}\right) = 0.9,$

and we have a theoretical (stochastic) 90% confidence interval for μ.
After observing the sample we find values $\bar x$ for $\bar X$ and s for S, from which we compute the confidence interval

$\left(\bar x - \frac{cs}{\sqrt{n}},\ \bar x + \frac{cs}{\sqrt{n}}\right),$

an interval with fixed numbers as endpoints, of which we can no longer say that there is a certain probability that it contains the parameter μ. Either μ is in this interval or it is not.
Meaning and interpretation
For users of frequentist methods, various interpretations of a confidence interval can be given.
• The confidence interval can be expressed in terms of samples (or repeated samples): "Were this procedure to be repeated on multiple samples, the calculated confidence interval (which would differ for each sample) would encompass the true population parameter 90% of the time." Note that this need not be repeated sampling from the same population, just repeated sampling.
• The explanation of a confidence interval can amount to something like: "The confidence interval represents values for the population parameter for which the difference between the parameter and the observed estimate is not statistically significant at the 10% level." In fact, this relates to one particular way in which a confidence interval may be constructed.
• The probability associated with a confidence interval may also be considered from a pre-experiment point of view, in the same context in which arguments for the random allocation of treatments to
study items are made. Here the experimenter sets out the way in which they intend to calculate a confidence interval and know, before they do the actual experiment, that the interval they will
end up calculating has a certain chance of covering the true but unknown value. This is very similar to the "repeated sample" interpretation above, except that it avoids relying on considering
hypothetical repeats of a sampling procedure that may not be repeatable in any meaningful sense.
In each of the above, the following applies. If the true value of the parameter lies outside the 90% confidence interval once it has been calculated, then an event has occurred which had a
probability of 10% (or less) of happening by chance.
Users of Bayesian methods, if they produced an interval estimate, would by contrast want to say "My degree of belief that the parameter is in fact in this interval is 90%". See credible interval.
Disagreements about these issues are not disagreements about solutions to mathematical problems. Rather they are disagreements about the ways in which mathematics is to be applied.
Meaning of the term confidence
There is a difference in meaning between the common usage of the word 'confidence' and its statistical usage, which is often confusing to the layman. In common usage, a claim to 95% confidence in
something is normally taken as indicating virtual certainty. In statistics, a claim to 95% confidence simply means that the researcher has seen something occur that only happens one time in twenty or
less. If one were to roll two dice and get double six, few would claim this as proof that the dice were fixed, although statistically speaking one could have 97% confidence that they were. Similarly,
the finding of a statistical link at 95% confidence is not proof, nor even very good evidence, that there is any real connection between the things linked.
When a study involves multiple statistical tests, some laymen assume that the confidence associated with individual tests is the confidence one should have in the results of the study itself. In fact, the results of all the statistical tests conducted during a study must be judged as a whole in determining what confidence one may place in the positive links it produces. If a researcher conducting a study performs 40 statistical tests at 95% confidence, she can expect about two of the tests to return false positives even when no real connections exist. If she in fact finds 3 links, the chance of at least that many arising purely by chance is about 32%, so the confidence she should place in those links 'as the result of the survey' is only about 68%, not 95%.
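The figures in that last sentence follow from a binomial calculation, assuming the 40 tests are independent (an assumption the text leaves implicit):

```python
from math import comb

n_tests, alpha = 40, 0.05

def p_at_least(k):
    """P(at least k false positives) when all 40 null hypotheses are true."""
    return 1 - sum(comb(n_tests, i) * alpha**i * (1 - alpha)**(n_tests - i)
                   for i in range(k))

print(round(p_at_least(1), 2))   # 0.87: some spurious "link" is almost guaranteed
print(round(p_at_least(3), 2))   # 0.32: three hits arise by chance about a third of the time
```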
Confidence intervals in measurement
The results of measurements are often accompanied by confidence intervals. For instance, suppose a scale is known to yield the actual mass of an object plus a normally distributed random error with
mean 0 and known standard deviation σ. If we weigh 100 objects of known mass on this scale and report the values ±σ, then we can expect to find that around 68% of the reported ranges include the
actual mass.
If we wish to report values with a smaller standard error value, then we repeat the measurement n times and average the results. Then the 68.2% confidence interval is $\pm \sigma/\sqrt{n}$. For example,
repeating the measurement 100 times reduces the confidence interval to 1/10 of the original width.
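Both claims — roughly 68% coverage for the ±σ interval, and the interval shrinking like 1/√n — are easy to check by simulation. In the sketch below the true mass, σ, and trial counts are invented illustration values, not taken from the text:

```python
import random

random.seed(0)
true_mass, sigma = 10.0, 0.5   # illustration values only
trials = 10_000

def measure(n):
    """Average of n weighings, each with N(true_mass, sigma) error."""
    return sum(random.gauss(true_mass, sigma) for _ in range(n)) / n

# Coverage of the +/- sigma interval for single measurements
hits = sum(abs(measure(1) - true_mass) <= sigma for _ in range(trials))
print(hits / trials)           # close to 0.68

# Averaging n = 100 measurements shrinks the standard error to sigma/10
se_100 = sigma / 100 ** 0.5
print(se_100)                  # 0.05, one tenth of sigma
```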
Note that when we report a 68.2% confidence interval (usually termed standard error) as v ± σ, this does not mean that the true mass has a 68.2% chance of being in the reported range. In fact, the
true mass is either in the range or not. How can a value outside the range be said to have any chance of being in the range? Rather, our statement means that 68.2% of the ranges we report using ± σ
are likely to include the true mass.
This is not just a quibble. Under the incorrect interpretation, each of the 100 measurements described above would be specifying a different range, and the true mass supposedly has a 68% chance of
being in each and every range. Also, it supposedly has a 32% chance of being outside each and every range. If two of the ranges happen to be disjoint, the statements are obviously inconsistent. Say
one range is 1 to 2, and the other is 2 to 3. Supposedly, the true mass has a 68% chance of being between 1 and 2, but only a 32% chance of being less than 2 or more than 3. The incorrect
interpretation reads more into the statement than is meant.
On the other hand, under the correct interpretation, each and every statement we make is really true, because the statements are not about any specific range. We could report that one mass is 10.2 ±
0.1 grams, while really it is 10.6 grams, and not be lying. But if we report fewer than 1000 values and more than two of them are that far off, we will have some explaining to do.
It is also possible to estimate a confidence interval without knowing the standard deviation of the random error. This is done using the t distribution, or by using non-parametric resampling methods
such as the bootstrap, which do not require that the error have a normal distribution.
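A percentile bootstrap interval of the kind mentioned above can be sketched in a few lines; the sample values here are invented for illustration:

```python
import random

random.seed(1)
data = [10.1, 10.4, 9.8, 10.6, 10.2, 9.9, 10.3, 10.0]  # invented sample

def mean(xs):
    return sum(xs) / len(xs)

# Resample the data with replacement many times and collect the means
boot_means = sorted(
    mean(random.choices(data, k=len(data))) for _ in range(10_000)
)

# Percentile method: the central 90% of the bootstrap distribution
lo, hi = boot_means[500], boot_means[9499]
print(f"90% bootstrap interval for the mean: ({lo:.2f}, {hi:.2f})")
```

No normality assumption about the measurement error is used anywhere, which is exactly the appeal of the method.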
Confidence intervals for proportions and related quantities
An approximate confidence interval for a population mean can be constructed for random variables that are not normally distributed in the population, relying on the central limit theorem, if the
sample sizes and counts are big enough. The formulae are identical to the case above (where the sample mean is actually normally distributed about the population mean). The approximation will be quite
good with only a few dozen observations in the sample if the probability distribution of the random variable is not too different from the normal distribution (e.g. its cumulative distribution
function does not have any discontinuities and its skewness is moderate).
One type of sample mean is the mean of an indicator variable, which takes on the value 1 for true and the value 0 for false. The mean of such a variable is equal to the proportion that have the
variable equal to one (both in the population and in any sample). This is a useful property of indicator variables, especially for hypothesis testing. To apply the central limit theorem, one must use
a large enough sample. A rough rule of thumb is that one should see at least 5 cases in which the indicator is 1 and at least 5 in which it is 0. Confidence intervals constructed using the above
formulae may include negative numbers or numbers greater than 1, but proportions obviously cannot be negative or exceed 1. Additionally, sample proportions can only take on a finite number of values,
so the central limit theorem and the normal distribution are not the best tools for building a confidence interval. See "Binomial proportion confidence interval" for better methods which are specific
to this case.
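The point about impossible values is easy to see with the usual normal-approximation (Wald) formula p̂ ± z·√(p̂(1−p̂)/n): with a small sample and a rare outcome, the lower limit goes negative.

```python
from math import sqrt

def wald_interval(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% interval for a proportion."""
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

# 1 success in 10 trials violates the "at least 5 of each" rule of thumb
lo, hi = wald_interval(1, 10)
print(lo, hi)   # the lower limit is negative -- an impossible proportion
```

This is one concrete reason to prefer the binomial-specific methods cited above when counts are small.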
External links | {"url":"http://www.reference.com/browse/confidence-interval","timestamp":"2014-04-20T09:40:16Z","content_type":null,"content_length":"115380","record_id":"<urn:uuid:16fc073f-eec8-4cd5-b565-436a36603fe2>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
Department of Mathematics
Research interests
In 1972, after completing my PhD at Cambridge, I went to the University of Durham as Addison Wheeler Fellow; thereafter, except for a number of years absence at CERN, the Ecole Normale Superieure in
Paris, the California Institute of Technology, and some months in the ITP Santa Barbara, YITP Kyoto, ENS-Lyon and Cambridge UK, I worked in the Department of Mathematical Sciences, Durham until 1999,
moving to York for seven years. From January 2008 I was Principal of Collingwood College, Durham, until my return to York in October 2011. For many years (1992-2006) I was the coordinator of three
European Networks* and one INTAS Network. I have been fortunate to have had a strong collaborative relationship with YITP (Kyoto University), entailing many enjoyable visits to Japan.
My interests have varied over the years, ranging from string theory (in its early days) to integrable quantum field theory (especially Toda field theory), via gauge theories (monopoles and
My current research concerns various aspects of Mathematical Physics, especially
• Classical and quantum field theory;
• Two-dimensional integrable quantum field theories with boundaries and defects.
*York was the coordinating partner of the EC FP5 Network EUCLID from October 2002 - September 2006.
An almost complete list of my publications and other articles can be found in the INSPIRE database
Recent Publications
Full publication list
See all years | {"url":"http://maths.york.ac.uk/www/ec9","timestamp":"2014-04-19T19:33:28Z","content_type":null,"content_length":"24355","record_id":"<urn:uuid:d843c291-22c5-4531-bf1b-5fdadd358893>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00155-ip-10-147-4-33.ec2.internal.warc.gz"} |
Energy required for temperature raise - temperature-dependent specific heat capacity
Any help on this question would be greatly appreciated. I have no idea how to answer it, and can't find anything in my notes or books anywhere.
The temperature-dependent molar specific heat capacity at constant pressure of many substances is given by: c_p = a + 2bT − cT^−2
For magnesium, the numerical values of the constants are: a = 25.7, b = 3.13x10^-3, c = 3.27x10^5
where c_p has units J/(K·mol)
Calculate the energy required to raise the temperature of 15 g of magnesium from 30 °C to 300 °C.
I have tried using the formula to generate a specific heat capacity for each temperature, but just seem to get crazy numbers that don't make any sense!
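For later readers: with a temperature-dependent heat capacity you integrate rather than multiply, q = n ∫(a + 2bT − cT⁻²) dT = n[aT + bT² + c/T] between the two temperatures. A sketch of that calculation — note the assumptions that T must be in kelvin, and that the molar mass of Mg (about 24.305 g/mol) is supplied here and is not part of the problem statement:

```python
# Constants for magnesium from the problem, T in kelvin
a, b, c = 25.7, 3.13e-3, 3.27e5

def molar_heat(T1, T2):
    """Integral of (a + 2*b*T - c*T**-2) dT from T1 to T2, in J/mol."""
    antideriv = lambda T: a * T + b * T**2 + c / T
    return antideriv(T2) - antideriv(T1)

n = 15.0 / 24.305                   # moles in 15 g of Mg (molar mass assumed)
T1, T2 = 30 + 273.15, 300 + 273.15  # convert Celsius to kelvin
q = n * molar_heat(T1, T2)
print(q)                            # roughly 4.4e3 J, i.e. about 4.4 kJ
```

Plugging the temperatures in as Celsius instead of kelvin is the usual source of "crazy numbers" with this kind of formula, because of the cT⁻² term.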
Thanks for the help! | {"url":"http://www.physicsforums.com/showthread.php?t=541253","timestamp":"2014-04-20T21:30:23Z","content_type":null,"content_length":"29573","record_id":"<urn:uuid:b1eeffb9-4dbf-44de-b9cc-65c7980db67f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
A professional cliff diver dives from a ledge 65 feet above the surface of the water. The diver reaches an underwater depth of 15 feet before returning to the surface. What was the diver's change in
elevation from the highest point of the dive to the lowest? There are 3 questions: Question A: What operation do you use to find a change in elevation? Question B: Which integers represent the highest
and lowest elevations of the dive? Question C: Write and evaluate an expression to find the change in elevation.
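For reference, the change in elevation comes from subtraction: taking the water surface as 0, the highest point is +65 and the lowest is −15, and subtracting a negative adds. A quick check:

```python
highest, lowest = 65, -15      # feet, relative to the water surface
change = highest - lowest      # 65 - (-15) = 65 + 15
print(change)                  # 80 feet
```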
| {"url":"http://openstudy.com/updates/4fb2d304e4b055653428bc65","timestamp":"2014-04-21T02:19:26Z","content_type":null,"content_length":"121155","record_id":"<urn:uuid:411b54ef-6f2b-4678-af86-20a855cae36a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Fixing an -ml model- syntax problem
From Richard Williams <Richard.A.Williams.5@ND.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Fixing an -ml model- syntax problem
Date Tue, 20 Feb 2007 17:29:13 -0500
At 04:42 PM 2/20/2007, Clive Nicholas wrote:
The model parameters and standard errors are exactly the same, but the model fit is better under -oglm-. The
Not really. The hypotheses being tested are different. There are 2 equations in the model (typically called choice and variance, or else location and scale). oglm is doing a likelihood ratio
chi-square test of whether the coefficients in both equations all equal zero. complogit is only testing the coefficients in the first equation and is using a Wald test. (Note that the d.f. reported
by the two programs are different.) With oglm, it is easy enough to do other Wald or LR tests if you don't happen to like the one that is reported.
key issue, however, is the value of the $\delta$ parameter. Under -gplogit-, $\delta < 0$, implying that residual variation is larger amongst blacks than amongst non-blacks. Under -oglm-,
$\delta > 0$, which implies exactly the opposite. This throws up two follow-up questions:
Not so. As noted in one of my other followup messages, a little bit of algebra can switch you back and forth between Allison's delta and oglm's lnsigma, but they are not exactly the same thing.
Further, whatever tests you look at (z values, Wald or LR chi-square tests) you conclude that the residual variances do not significantly differ by race.
(1) If both of these achieved significance here, how on Earth do you decide whether or not to include an interactive term in the model?
None of the analyses presented so far have said anything about interaction terms; they've only addressed whether residual variation differs across groups, and the answer seems to be no. If you now
want to test interaction terms involving race, go ahead; and plain old -logit- will probably be adequate for your needs.
I'm not sure if this clearly answers your questions or not! If not, all sorts of reading materials are available at
Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
FAX: (574)288-4373
HOME: (574)289-5227
EMAIL: Richard.A.Williams.5@ND.Edu
WWW (personal): http://www.nd.edu/~rwilliam
WWW (department): http://www.nd.edu/~soc
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2007-02/msg00663.html","timestamp":"2014-04-16T07:29:42Z","content_type":null,"content_length":"8583","record_id":"<urn:uuid:930da53c-d7ac-4d82-83ed-c238161f89c2>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Mathematical Contest in Modeling Turns 20
October 26, 2004
James Case
Early in February, 613 three-student teams assembled on their home campuses for the 20th annual Mathematical Contest in Modeling. As usual, the students had the choice of two problems, one discrete
and the other continuous. The problems were posted on the Web on the evening of Thursday, February 5, and solutions were due by Monday evening, February 9.
Slightly more than a third of the teams elected to work on problem A, the continuous problem. The judges rated three of the problem A solutions outstanding and 24 meritorious; 50 received honorable
mention. The other 401 teams chose to work on problem B, the discrete alternative. Of the problem B papers, four were deemed outstanding, 38 were designated meritorious, and 109 received honorable mention.
The A problem concerned the commonly held belief that the thumbprints of all humans who have ever lived are distinguishable from one another. The students were asked to develop and analyze a model
that would enable them to assess the probability that the foregoing belief is correct, and then to compare the odds of misidentification by fingerprinting against the odds of misidentification by DNA evidence.
Problem B grew from the observation that customers at amusement parks are often obliged to wait in line for two hours or more before embarking on the most popular rides, and that many such parks have
begun to implement express pass systems to shorten their waiting lines. Such passes are typically dispensed by machines, which issue tickets entitling the bearer to embark on a designated ride at or
before time T, provided only that he or she arrive at the point of departure no later than time t < T. Since t typically precedes T by no more than an hour, such tickets can save park goers
substantial amounts of time and aggravation.
The sheer variety of express pass systems in use at different parks, and the number of parks that still don't offer them, suggest a lack of consensus as to the magnitude of the associated costs and
potential benefits, not to mention confusion as to the most effective mode(s) of operation. The teams were asked to design a model that could be used to address such questions. In their solutions,
the students raised a bewildering variety of related questions, of which the following are only a few: How many rides should be included in the express pass system? Should the tickets be sold or
given away? How long should the [t,T] intervals be? How many such intervals should be available to a given customer? How many such tickets should a customer be allowed to hold simultaneously? Should
the park be allowed to overbook popular time slots? Under what circumstances should customers be compensated for non-performance on the part of the park?
The outstanding papers on the A problem were from teams at the University of Colorado at Boulder, which also received the MAA award; Harvey Mudd College, which also received both the INFORMS and the
SIAM awards; and University College, Cork. For the B problem, the outstanding papers were from Harvard (the MAA winner), the University of Washington at Seattle, the University of Colorado at Boulder
(the SIAM winner), and Merton College, Oxford (the INFORMS winner).
Both of the SIAM winners presented their prize-winning papers in a special session at the 2004 SIAM Annual Meeting in Portland.
This year, for the first time, an additional prize was awarded. The Ben Fusaro award-named for the contest founder-recognizes the papers, one for each problem, that best exemplify the following
• The paper presents a high-quality application of the complete modeling process.
• The team has demonstrated noteworthy originality and creativity in their modeling effort to solve the problem as given.
• The paper is well written, in a clear expository style, and is a pleasure to read.
The first recipients of the Fusaro award were a team from Central Washington University for the A problem and a team from MIT for the B problem.
James Case writes from Baltimore, Maryland. | {"url":"http://www.siam.org/news/news.php?id=262","timestamp":"2014-04-18T20:53:15Z","content_type":null,"content_length":"10991","record_id":"<urn:uuid:d98864bc-1eee-4c1c-af22-fcf770ec89c9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Proximal point methods for monotone operators in Banach spaces.
(English) Zbl 1241.47044
The authors provide some fundamental properties of resolvents of maximal monotone operators in Banach spaces. The results are used for the study of the asymptotic behavior of the sequences generated
by two modifications of the proximal point algorithm. By this, known convergence theorems of R. T. Rockafellar [SIAM J. Control Optimization 14, 877–898 (1976; Zbl 0358.90053)] and the authors [e.g.,
S. Kamimura, F. Kohsaka and W. Takahashi, Set-Valued Anal. 12, No. 4, 417–429 (2004; Zbl 1078.47050)] can be generalized.
Using the subdifferential mapping of convex functions, the approach can be applied to find minimizers of convex optimization problems. Another application concerns the problem of finding fixed points
of nonexpansive mappings in Hilbert spaces.
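For intuition about the algorithm being analyzed — in the classical real-line/Hilbert setting, not the Banach-space generality of the paper — the proximal point iteration x_{k+1} = argmin_x f(x) + (1/2λ)‖x − x_k‖² applied to the toy objective f(x) = x² has the closed-form update x_{k+1} = x_k/(1 + 2λ) and converges to the minimizer 0:

```python
# Proximal point iteration for f(x) = x**2 on the real line.
# Illustrative toy only; lam and the starting point are arbitrary choices.
lam = 0.5          # step parameter lambda > 0
x = 10.0           # starting point

for _ in range(50):
    # argmin_y  y**2 + (1/(2*lam)) * (y - x)**2  solves to y = x/(1 + 2*lam)
    x = x / (1 + 2 * lam)

print(x)           # essentially 0, the minimizer of f
```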
47H05 Monotone operators (with respect to duality) and generalizations
47J25 Iterative procedures (nonlinear operator equations)
49J53 Set-valued and variational analysis | {"url":"http://zbmath.org/?q=an:1241.47044","timestamp":"2014-04-17T21:47:46Z","content_type":null,"content_length":"21580","record_id":"<urn:uuid:99cca835-62c1-4fde-b93a-9c24708fd563>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Discussion >> Calculating class probabilities in a mixed model
Calculating class probabilities in a ...
Jan Ivanouw posted on Wednesday, March 21, 2012 - 5:15 am
I am wondering how the class probabilities are calculated in a mixed model (with no covariates, in order to keep things simple). In part 2 in the paper Lübke and Muthén "Investigating population
heterogeneity with factor mixture models" (2005) it is mentioned that class probabilities are calculated using multinomial regression. Is this a different approach for calculating class probability than
in a LCA (without any latent continuous factor), and which terms are used in this multinomial regression?
The paper also mentions A as a parameter describing how class membership influences eta. Is this A the same as the parameter Alpha (C) given in the Mplus output from a mixed model?
Linda K. Muthen posted on Wednesday, March 21, 2012 - 9:18 am
In a model with no covariates, there is no multinomial regression. See the class proportions in the results.
Class membership influencing a factor is seen in the factor means varying across classes.
Jan Ivanouw posted on Wednesday, April 04, 2012 - 6:22 am
Thank you.
What I wonder is this:
Class probabilities for a LCA-model are calculated as described in appendix 8 of the Technical appendices.
It seems, though, that this method does not work quite the same way with an FMM model (of type FMM-2 in Clark, Muthén et al. - branch 1 of the paper Muthén, 2008, Latent variable hybrids). I would like
to ask how class probabilities are calculated for the FMM model?
Linda K. Muthen posted on Wednesday, April 04, 2012 - 1:46 pm
There are no explicit formulas for this as numerical integration is required. The following paper which is available on the website might help:
Muthén, B. & Asparouhov, T. (2009). Growth mixture modeling: Analysis with non-Gaussian random effects. In Fitzmaurice, G., Davidian, M., Verbeke, G. & Molenberghs, G. (eds.), Longitudinal Data
Analysis, pp. 143-165. Boca Raton: Chapman & Hall/CRC Press.
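For the covariate-free LCA case the poster started from, the posterior class probabilities follow directly from Bayes' rule: p(c | y) = π_c f(y | c) / Σ_k π_k f(y | k). A toy numeric illustration — the class proportions and likelihood values below are invented, not Mplus output:

```python
# Invented two-class example: class proportions and the likelihood of one
# observed response pattern y under each class.
proportions = [0.7, 0.3]          # pi_c, the estimated class proportions
likelihoods = [0.02, 0.10]        # f(y | c) for this response pattern

joint = [p * f for p, f in zip(proportions, likelihoods)]
posterior = [j / sum(joint) for j in joint]

print(posterior)   # roughly [0.318, 0.682]; always sums to 1
```

The FMM case differs only in that f(y | c) involves integrating over the continuous factor, which is why no closed-form expression is available.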
Back to top | {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=13&page=9236","timestamp":"2014-04-19T17:05:20Z","content_type":null,"content_length":"20807","record_id":"<urn:uuid:95d8f690-805f-4da0-a116-a9b4d9fa5009>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00321-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Output dtype
Bruce Southey bsouthey@gmail....
Mon Dec 13 14:20:01 CST 2010
On 12/13/2010 11:59 AM, Keith Goodman wrote:
> > From the np.median doc string: "If the input contains integers, or
> floats of smaller precision than 64, then the output data-type is
> float64."
>>> arr = np.array([[0,1,2,3,4,5]], dtype='float32')
>>> np.median(arr, axis=0).dtype
> dtype('float32')
>>> np.median(arr, axis=1).dtype
> dtype('float32')
>>> np.median(arr, axis=None).dtype
> dtype('float64')
> So the output doesn't agree with the doc string.
> What is the desired dtype of the accumulator and the output for when
> the input dtype is less than float64? Should it depend on axis?
> I'm trying to duplicate the behavior of np.median (and other
> numpy/scipy functions) in the Bottleneck package and am running into a
> few corner cases while unit testing.
> Here's another one:
>>> np.sum([np.nan]).dtype
> dtype('float64')
>>> np.nansum([1,np.nan]).dtype
> dtype('float64')
>>> np.nansum([np.nan]).dtype
> <snip>
> AttributeError: 'float' object has no attribute 'dtype'
> I just duplicated the numpy behavior for that one since it was easy to do.
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
Unless something has changed since the docstring was written, this is
probably an inherited 'bug' from np.mean() as the author expected that
the docstring of mean was correct. For my 'old' 2.0 dev version:
>>> np.mean( np.array([[0,1,2,3,4,5]], dtype='float32'), axis=1).dtype
>>> np.mean( np.array([[0,1,2,3,4,5]], dtype='float32')).dtype
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-December/054255.html","timestamp":"2014-04-18T18:48:57Z","content_type":null,"content_length":"4685","record_id":"<urn:uuid:0331456c-3255-4c9f-a672-c5252612e69e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00036-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fernwood, PA Precalculus Tutor
Find a Fernwood, PA Precalculus Tutor
...I believe that the best approach for teaching is to help students conceptualize some seemingly abstract topics in these subjects. Most of the time people get hung up on the language or complex
symbols used in math and science when really the key to understanding is to be able to look beyond thos...
16 Subjects: including precalculus, Spanish, physics, calculus
...Routinely score 800/800 on practice tests. Taught high school math and have extensive experience tutoring in SAT Math. Able to help students improve their math skills and also learn many
valuable test-related shortcuts and strategies.
19 Subjects: including precalculus, calculus, statistics, geometry
...After all, math IS fun!In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA. I have been in upper management
since 2004 and have had the opportunity to teach classes in international business, strategic management, and operations management at a local university.
13 Subjects: including precalculus, calculus, algebra 1, geometry
I'm good with all ages and am fun and accessible. I graduated in the top 5% in High School, and am a graduate of the University of Pennsylvania, majoring in Electrical Engineering. Currently, I'm
a PhD candidate at the University of Delaware.
35 Subjects: including precalculus, calculus, physics, geometry
...I am currently a junior in the University of Pennsylvania's undergraduate math program. Previously, I completed undergraduate work at North Carolina State University for a degree in
Philosophy. Math is a subject that can be a bit difficult for some folks, so I really love the chance to break down barriers and make math accessible for students that are struggling with aspects
of math.
22 Subjects: including precalculus, calculus, geometry, statistics
Related Fernwood, PA Tutors
Fernwood, PA Accounting Tutors
Fernwood, PA ACT Tutors
Fernwood, PA Algebra Tutors
Fernwood, PA Algebra 2 Tutors
Fernwood, PA Calculus Tutors
Fernwood, PA Geometry Tutors
Fernwood, PA Math Tutors
Fernwood, PA Prealgebra Tutors
Fernwood, PA Precalculus Tutors
Fernwood, PA SAT Tutors
Fernwood, PA SAT Math Tutors
Fernwood, PA Science Tutors
Fernwood, PA Statistics Tutors
Fernwood, PA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Briarcliff, PA precalculus Tutors
Bywood, PA precalculus Tutors
Carroll Park, PA precalculus Tutors
Darby, PA precalculus Tutors
East Lansdowne, PA precalculus Tutors
Eastwick, PA precalculus Tutors
Kirklyn, PA precalculus Tutors
Lansdowne precalculus Tutors
Llanerch, PA precalculus Tutors
Overbrook Hills, PA precalculus Tutors
Primos Secane, PA precalculus Tutors
Primos, PA precalculus Tutors
Secane, PA precalculus Tutors
Westbrook Park, PA precalculus Tutors
Yeadon, PA precalculus Tutors | {"url":"http://www.purplemath.com/Fernwood_PA_Precalculus_tutors.php","timestamp":"2014-04-16T16:43:44Z","content_type":null,"content_length":"24409","record_id":"<urn:uuid:c7bd5da3-cf3b-4e12-b73e-cbeb2215f8bc>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |
How do I do the inverse of tan in Excel?
Hi Bernard,
I was afraid of that, but took the OP's text literally:
<written as 1/tan or tan^-1>
Hopefully we get some sort of feedback to see what the intention was
Kind regards,
Niek Otten
"Bernard Liengme" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> Niek,
> I think we have a language problem here.
> Inverse of a trig function is TAN^-1(x), not (TAN(x))^-1.
> We talk about Arcsine, Arctan so we use ASIN, ATAN
> best wishes
> --
> Bernard V Liengme
> remove caps from email
> "Niek Otten" <(E-Mail Removed)> wrote in message
> news:%(E-Mail Removed)...
>> =1/TAN(A1)
>> =TAN(A1)^(-1)
>> --
>> Kind regards,
>> Niek Otten
>> "MiddleEarthNet" <(E-Mail Removed)> wrote in
>> message news:(E-Mail Removed)...
>>>I know how to use the tan function in Excel 2003, but I need the inverse
>>> tan written as 1/tan or tan^-1) for a set of equations I'm doing. I've
>>> tried
>>> various combinations of writing it with brackets to separate terms but
>>> it
>>> still says there is an error. | {"url":"http://www.pcreview.co.uk/forums/do-do-inverse-tan-excel-t2423857.html","timestamp":"2014-04-18T03:17:23Z","content_type":null,"content_length":"63666","record_id":"<urn:uuid:eccfbaa8-cfa7-4e4e-9f4a-663001be9bf4>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
What are Exponents?
Exponents are sometimes referred to as powers and mean the number of times the 'base' is used as a factor. In the study of algebra, exponents are used frequently. In the example to the right (4^2), one
would say: four to the power of 2, four raised to the second power, or four to the second. This would mean 4 x 4 or (4)(4) or 4 · 4. Simplified, the example equals 16.
If the power/exponent of a number is 1, the number will always equal itself. In other words, if the exponent 2 in our example were a 1, the example would simplify to 4.
Exponent Rules
When working with exponents there are certain rules you'll need to remember.
When you are multiplying terms with the same base you can add the exponents.
This means: 4 x 4 x 4 x 4 x 4 x 4 x 4 or 4 · 4 · 4 · 4 · 4 · 4 · 4
When you are dividing terms with the same base you can subtract the exponents.
This means: 4 x 4 x 4 or 4 · 4 · 4
When parentheses are involved, you multiply the exponents: (8^3)^2 = 8^6
y^ay^b = y ^(a+b)
y^ax^a = (yx)^a
Squared and Cubed and 0's
When you multiply a number by itself it is referred to as being 'squared'. 4^2 is the same as saying "4 squared" which is equal to 16. If you multiply 4 x 4 x 4 which is 4^3 it is called 4 cubed.
Squaring is raising to the second power, cubing is raising to the third power. Raising something to a 1 means nothing at all, the base term remains the same. Now for the part that doesn't seem
logical. When you raise a nonzero base to the power of 0, it equals 1. Any nonzero number raised to the power 0 equals 1, while 0 raised to any positive exponent is 0 (0^0 itself is usually left undefined)! | {"url":"http://math.about.com/library/weekly/aa072002a.htm","timestamp":"2014-04-17T21:23:43Z","content_type":null,"content_length":"37221","record_id":"<urn:uuid:1f0b4a1d-2d1a-458b-8206-b96d69f9671d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: HostMath - Online LaTeX formula editor and math equation editor
Replies: 0
Jack
Posted: Apr 10, 2012 8:25 AM
Posts: 9
Registered: 4/10/12
HostMath is a powerful interactive mathematical expressions editor. It
uses WYSIWYG-style editing and allows creating mathematical equations
through simple point-and-click techniques.
1. Many pre-defined Templates and Symbols in well-organized palettes
that cover Mathematics, Physics, Electronics, and many other higher
2. Fine adjustment for Template shapes, gaps, and thicknesses with
visual interface
3. Multiple Undo and Redo
4. Can generate equations as MathML. MathML will allow you to copy and
paste math into many applications that understand MathML.
URL: http://www.hostmath.com/ | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2365838","timestamp":"2014-04-19T17:03:19Z","content_type":null,"content_length":"14192","record_id":"<urn:uuid:338e8db6-8a97-4e3d-b741-deadb7272f3b>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pittsburg, CA Math Tutor
Find a Pittsburg, CA Math Tutor
...Focus on subject matter and strategies to learn the material and do well on the test. I have extensive experience in preparing students to take the PSAT, SAT, ACT and TOEFL college prep tests.
In order to do well on these tests, students have to develop a critical understanding of the material and the accompanying strategies that will allow them to be successful.
73 Subjects: including algebra 1, SAT math, English, Spanish
...I can also write Chinese fluently and have a Chinese software to communicate in Chinese electronically. With a certificate for the TESOL program, I should be qualified to tutor students who
need support in the English language, or tutor students in their development of the English language. I h...
17 Subjects: including discrete math, probability, algebra 1, algebra 2
...I have worked on thousands of these types of problems and can show your student how to do every single one, which will dramatically increase their test scores! I can help your student ace the
following standardized math tests: SAT, ACT, GED, SSAT, PSAT, ASVAB, TEAS,a nd more. I am an expert on math standardized testing, as stated in my reviews from previous students.
59 Subjects: including prealgebra, statistics, study skills, ESL/ESOL
...I've also taught summer courses in prealgebra and algebraic concepts to students preparing for high school algebra and community college courses in algebra. I really enjoyed my algebra
instructor, and I strive to have the same level of patience and understanding! My undergraduate major is in the biological sciences.
34 Subjects: including SAT math, chemistry, algebra 1, algebra 2
...My goal in working with students is to help them understand what they are being asked to do, to plan how to complete a task, and to find ways to achieve desired results. Through my
college training, I have had experience writing various kinds of papers and reports, from informal observations to documented and referenced works. I am currently working on my graduate thesis.
12 Subjects: including prealgebra, reading, English, Microsoft Word
Related Pittsburg, CA Tutors
Pittsburg, CA Accounting Tutors
Pittsburg, CA ACT Tutors
Pittsburg, CA Algebra Tutors
Pittsburg, CA Algebra 2 Tutors
Pittsburg, CA Calculus Tutors
Pittsburg, CA Geometry Tutors
Pittsburg, CA Math Tutors
Pittsburg, CA Prealgebra Tutors
Pittsburg, CA Precalculus Tutors
Pittsburg, CA SAT Tutors
Pittsburg, CA SAT Math Tutors
Pittsburg, CA Science Tutors
Pittsburg, CA Statistics Tutors
Pittsburg, CA Trigonometry Tutors
Nearby Cities With Math Tutor
Alamo, CA Math Tutors
Albany, CA Math Tutors
Antioch, CA Math Tutors
Brentwood, CA Math Tutors
Burlingame, CA Math Tutors
Castro Valley Math Tutors
Concord, CA Math Tutors
Danville, CA Math Tutors
Diamond, CA Math Tutors
Lafayette, CA Math Tutors
Oakley, CA Math Tutors
Pacifica Math Tutors
Pleasant Hill, CA Math Tutors
San Bruno Math Tutors
Walnut Creek, CA Math Tutors | {"url":"http://www.purplemath.com/Pittsburg_CA_Math_tutors.php","timestamp":"2014-04-17T04:36:00Z","content_type":null,"content_length":"24160","record_id":"<urn:uuid:f583bce7-474e-4229-ba7f-7d0d9deb1b65>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is a deterministic quick-sort? What are some good examples of quick-sort and merge-sort. I have like 3 algorithm books, but the pseudo-code is really bad for those sorts.
You know, I did think of that, but only about 1% of my searches have ever come up with anything that could help me out, and I thought that I should just ask, since that is what I thought these boards
were for... maybe we just need a better search, and to eliminate old postings, since by now people must have posted so much stuff that any question could be answered somewhere, but with all that info
it can be EXTREMELY hard to find what you are looking for, when there is little AI in the search.
Since you're asking, here's a quick sort I coded after reading a slide show from FSU (Florida State). It's recursive, so probably not the fastest. Maybe someone else has an example of a merge sort.
Code:
#include <cstdlib>
#include <ctime>     // added: needed for time()
#include <iostream>
using namespace std;

const int size = 30;

int Partition( int table[], int first, int last );
void quick_sort( int table[], int first, int last );
void print( int a[], int size );

int main(void)
{
    int a[size]; //{1,2,3,4,5,6,7,8,9,10}; //{10,9,8,7,6,5,4,3,2,1};
    srand(unsigned(time(NULL)));
    for (int i=0; i<size; i++)
        a[i] = rand() % 100;   // was rand() * 100 / RAND_MAX, which can overflow int
    print( a,size );
    quick_sort( a,0,size-1 );
    print( a,size );
    return 0;
}

void quick_sort( int table[], int first, int last )
{
    if (first < last)
    {
        int pivot_index = Partition( table,first,last );
        quick_sort( table,first,pivot_index-1 );
        quick_sort( table,pivot_index+1,last );
    }
}

int Partition( int table[], int first, int last )
{
    int temp;
    int pivot = table[first];
    int up = first;
    int down = last;
    do
    {
        // bound check added: the original could walk past the array
        // when every remaining value is <= the pivot
        while (up < last && table[++up] <= pivot);
        while (table[down] > pivot)
            down--;
        if (up < down)
        {
            temp = table[up];
            table[up] = table[down];
            table[down] = temp;
        }
    } while (up < down);
    //Exchange the pivot value and the value in down
    temp = table[first];
    table[first] = table[down];
    table[down] = temp;
    int pivot_index = down;
    return pivot_index;
}

void print( int a[], int size )
{
    for (int i=0; i<size; i++)
    {
        cout << a[i] << " ";
        if ((i+1)%10 == 0)
            cout << endl;
    }
    cout << endl;
}
By the way I think Prelude has written a Sort tutorial somewhere. I'm not sure where it is.
I guess that the quick sort splits the list into lists of five or less, then finds the median of each, then the median of the medians, and then chooses that as a pivot... every time it runs, which
at least to me seems to be a complete waste of time
>> It's recursive, so probably not the fastest.
Quicksort IS recursive - it's a good example of how a recursive solution to sorting can in general be faster than any other sorting method. Mergesort has the
same complexity as quicksort, O(n*log(n)), but you can't implement mergesort in a single array, so you have to allocate memory all the time. Quicksort can be implemented to work on a single array.
>> I guess that the quick sort splits the list into lists of five or less, and then finds the median, then the medians of the medians, and then chooses a pivot....Every time it runs, which at least
to me seems to be a complete waste of time
No, it's not a waste of time: quicksort takes a pivot element and shifts all values less than that pivot to one side of the pivot, and all values greater than
that pivot to the other side of the pivot. Thus the pivot reaches its final position in the sorted array! Now we repeat that step with the sub-arrays on the left and right side of the pivot until
there is only one element in those sub-arrays. Then we are done. So the recursion implicitly creates a tree structure for sorting. The complexity is also on average O(n*log(n)), BUT if you take bad
pivots (always the smallest or largest element) the complexity becomes worst-case O(n^2)
>What is a deterministic quick-sort? Quicksort chooses a pivot for partitioning of the list. A deterministic quicksort will use a heuristic to find that pivot (Median of three partitioning is an
example of such a heuristic) while a probabilistic quicksort will randomly select a pivot and rely on probability to minimize the worst cases. You can find code for various types of quicksort and
mergesort all over the web. Google for sorting demos, they offer the source code (usually in Java, but that shouldn't be a problem) for you to read. | {"url":"http://cboard.cprogramming.com/cplusplus-programming/49921-sorting-printable-thread.html","timestamp":"2014-04-16T14:13:36Z","content_type":null,"content_length":"14193","record_id":"<urn:uuid:d15cade9-ef0a-46db-be3b-bf0547e26924>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - distinguishability - in context, according to definitions
Date: Feb 10, 2013 2:54 AM
Author: fom
Subject: distinguishability - in context, according to definitions
This post is motivated by recent discussions
involving a term with which I had been
personally unfamiliar. It is an exercise
in "basic" logic.
Consider the "math trick,"
in relation to
By pattern matching, one infers
Since x=1 is surmised from the algebraic
Now, in what follows, the particular
problem will be considering the nature
of eventually constant sequences taken
to be ontologically "the same." So, in
addition to the above, the identity,
0.999... = 1.000...
is considered to be a presupposition
available to the analysis. So, what
is actually being considered is the
and, at the appropriate time, the
analysis will turn to binary sequences
for ease of presentation. Thus, this analysis
is not necessarily applicable to the
example with which it has been introduced.
It will be, however, applicable to the
context from which the term "distinguishability"
has been found to have a sourced definition.
In the tradition of logic that has become
popular, the logical problem being investigated
may be stated as being an assertion of ontological
identity using an informative identity statement. That
problem is particularly difficult, in general, as
the classic Fregean identity puzzle
"Hesperus is Phosphorus"
has shown. But, by analogy with polynomials
of degree greater than 4, there is no reason
to think that some classes of problems of
this nature are not amenable to analysis.
To be formally explicit, the analysis presented
here is based on Saussurean semiotics wherein a
syntactic signal antecedent to its consequent
use as a sign never regresses beyond an
observable sign-vehicle (i.e., an inscription).
In contrast, Peircean semiotics is subject
to infinitary regress into non-grammatical
So, that we clarify the nature of the received
paradigm in this matter, we address the issue
of uniform semantic interpretation of inscriptions
by invoking Carnap's notion of syntactic equality.
Hence, what is expressed by
0.999... = 1.000...
is the identity of two equivalence classes
relative to which some quotient model must
be formed to accommodate the fact that an
ontological assertion is being made using
an informative identity.
Because the received paradigm does not
address informative identity directly, it
is possible that there are interpretations
in which
is false.
With these preliminaries out of the way,
it is time to consider how the presupposition
1.000...
is indicated as the canonical representation
given that, alternatively,
0.999...
is also a candidate for the canonical representation.
Consider the construction of the real
numbers in relation to Dedekind cuts. To
accept this construction is to accept the
fact that in a hierarchy of definition, there
is information about the defined system that
is epistemically prior by virtue of the
What is taken to be known essentially is that
for any given Dedekind cut, it is in the class
of rational cuts or it is in the class of irrational cuts.
However, there is still more information associated
with the construction that informs with respect to
the analysis at hand. Because one has a choice as
to which direction of the linear order in the underlying
rationals will be used for the cuts, there is a
decision that may be applied to the representations
of rationals in relation to decimal expansions.
Since, for any finite initial segment of the
expression 0.999... it is held that
0.999 < 1.000...
the system of Dedekind cuts will be given its
fundamental partition according to whether or
not its greatest lower bound is an element. This
choice is compatible with the usual notion that
an eventually constant sequence of trailing zeros
is the same as a long division that terminates.
It is at this point that the context of the
analysis will be changed to binary strings. The
principal concern will be the relationship of
eventually constant strings to one another in
contrast to strings that never become eventually constant.
The next step is to represent the orientation of
cuts with respect to a preferred canonical representation.
Now, what motivates the choice of theoretical
context for this analysis is the information
content in the assertion,
The model that best seems to reflect a selection
of preference here is that of automata -- in
particular, the selection of preference is
characterized in terms of lossless and lossy
Since it is almost trivial, we begin by recalling
that there is prior knowledge that distinguishes
the cuts into two classes and look to those cuts
that are defined as irrational real numbers. Let
the automaton that is applied to identifying
concatenations of alphabet letters for the irrational
cuts simply copy its input to its output,
| 0 | 1 |
A | B0 | B1 |
B | C0 | C1 |
C | D0 | D1 |
D | E0 | E1 |
and so on
Thus, each state S_k copies its input symbol to
its output symbol and transitions to its grammatical
successor S_(k+1)
The simplicity of the irrational automaton arises
from the fact that it is applied to the identifying
sequence for a given Dedekind cut when that cut is
not a rational cut. It is defined negatively, and,
the simplicity of its content reflects that fact.
For the rational numbers, the situation is different
because the hierarchy of logical definition affords
prior knowledge. The automaton for each rational
cut simply outputs the identifying concatenation of
alphabet letters as a known sequence, regardless
of input symbols. Those rational numbers that do not enjoy
two distinct representations by virtue of eventually
constant sequences are lossless. That is,
| 0 | 1 |
A | B a_1 | B a_1 |
B | C a_2 | C a_2 |
C | D a_3 | D a_3 |
D | E a_4 | E a_4 |
and so on
They do repeat, however. Take for example,
the string obtained for the fraction 1/5,
| 0 | 1 |
A | B0 | B0 |
B | C0 | C0 |
C | D1 | D1 |
D | E1 | E1 |
E | F0 | F0 |
F | G0 | G0 |
G | H1 | H1 |
H | I1 | I1 |
and so on.
The states may be seen to correspond
with the labelling,
Although each state transitions to its
grammatical successor as with the machine
applied to irrational numbers, the output
symbols reflect the state of prior knowledge
and are not simply copies of the input symbols.
In contrast, those rational cuts with a
plural multiplicity of identifying concatenations
of alphabet letters have a different pattern
of state transitions from the prior cases.
| 0 | 1 |
A | B ab_1 | B ab_1 |
B | C ab_2 | C ab_2 |
C | D ab_3 | D ab_3 |
D | E ab_4 | E ab_4 |
E | F a_5 | G b_5 |
F | H a_6 | H a_6 |
G | I b_6 | I b_6 |
H | J a_7 | J a_7 |
I | K b_7 | K b_7 |
J | L a_8 | L a_8 |
K | M b_8 | M b_8 |
L | N a_9 | N a_9 |
M | O b_9 | O b_9 |
and so on
As the state table makes clear, it can
no longer be said that states transition
to their grammatical successor as a general rule.
Consider a particular example. Suppose one
has the sequences,
Prior to any preference, the state table for
the automaton is given as
| 0 | 1 |
A | B1 | B1 |
B | C1 | C1 |
C | D0 | E0 |
D | F1 | F1 |
E | G0 | G0 |
F | H0 | H0 |
G | I1 | I1 |
H | J0 | J0 |
I | K1 | K1 |
J | L0 | L0 |
K | M1 | M1 |
L | N0 | N0 |
and so on
The states may be seen to correspond
with the labelling,
But, the choice to establish a canonical
representation -- the choice which informed
the directionality to be used for the
cuts -- has a different automaton,
| 0 | 1 |
A | B1 | B1 |
B | C1 | C1 |
C | D0 | E0 |
D | F1 | F1 |
E | G1 | G1 |
F | H0 | H0 |
G | I0 | I0 |
H | J0 | J0 |
I | K0 | K0 |
J | L0 | L0 |
K | M0 | M0 |
L | N0 | N0 |
and so on
It is this representation that is lossy.
Now, the following quoted material is the
definition for distinguishability that shall
be considered here. It is taken from "Finite-State
Models for Logical Machines" by Frederick C. Hennie.
Although it is not the best, it is all that is on
my bookshelves. To see what I mean in this regard,
consider the subsequent description for the associated
partitions carefully. By my reading, the unqualified
description for the construction of partitions would
not partition the states in the manner suggested by
the subsequent qualifying remarks describing a
sequence of refinements. Perhaps I am wrong.
Sometimes mathematics is so easy that it is hard.
I included the qualifying remarks concerning the
description of partitions specifically because I
am having a hard time seeing the claim directly
from the unqualified statement.
With the qualifying remarks, the description of
the partitions appears to correspond with the
Distinguishability is defined as follows
"The most convenient way of finding equivalences
that exist among the states of a machine is to
concentrate on the states that are not equivalent.
Thus, we say that two states are *distinguishable*
iff there exists a finite input sequence that
yields one output sequence when the machine is
started in one state and a different output sequence
when the machine is started in the other state.
If this input sequence contains k symbols, the
states are said to be distinguishable by an experiment
of length k, or simply k-distinguishable. States
that are not distinguishable by any experiment of
length k or less are called k-equivalent. It
follows that two states are equivalent iff they
are k-equivalent for every finite value of k.
"The definition of k-distinguishability becomes
more useful when recast in the form of two rules:
(1) Two states are distinguishable by an experiment
of length one iff there is some input symbol that
produces different output symbols according to
which of the two states the machine is in.
(2) Two states are distinguishable by an experiment
of length k (with k>1) iff there is some input symbol,
say z, such that the z-successors of the two states
are distinguishable by an experiment of length (k-1).
These rules provide the basis for a step-by-step
procedure for determining which states of a machine
are one-equivalent, which are two-equivalent, and
so on.
"The first step is the formation of a partition P_1
in which two states are placed in the same block iff
they produce identical output symbols for each possible
input symbol. This clearly puts two states in the
same block of P_1 if they are one-equivalent and in
different blocks if they are one-distinguishable."
For the automaton applied when the cut corresponds
to an irrational,
For that applied to the singly-represented rational
the example state table yields,
For the doubly-represented rational example, this
is the partition before a canonical representative
is indicated through the Dedekind cut,
and this is the partition after,
Continuing the quoted exposition concerning the
definition of distinguishability,
"The next step is the formation of a partition
P_2 in which two states are placed in the same
block iff, for each input symbol z, their z-successors
lie in a common block of P_1. Note that P_2
must be a refinement of P_1, for if two states
are two-equivalent, they must certainly be
one-equivalent. Partition P_2 is most
easily formed by splitting the various blocks
of P_1 in such a way that the defining
conditions for P_2 are met."
For the automaton applied when the cut corresponds
to an irrational, the second partition is given as
Observe that there has been no change.
For that applied for the singly-represented rational
the second partition for the example state table
For the doubly-represented rational example, this
is the second partition before a canonical representative
is indicated through the Dedekind cut,
and this is the second partition after,
Continuing the quoted exposition concerning the
definition of distinguishability,
"Partitions P_3, P_4, ..., P_k,... can be formed
in a similar manner.[...]
"[...], we see that the partitioning process
may be terminated as soon as some partition
P_(m+1) is found to be identical to its
predecessor P_m."
Thus, the automaton applied when the cut corresponds
to an irrational has already terminated its
sequence of partitions given
For that applied for the singly-represented rational
the partition sequence for the example state table
For the doubly-represented rational example before
orientation by the Dedekind cut,
and after,
So, up to this point, it is clear that the various
relationships in the definition of real numbers
via Dedekind cuts and the presuppositions of
a trivial algebraic substitution do have discernible
representation relative to modeling with automatons.
With regard to the relationship of finitism and
the apparent simplicity of a symbol such as
it is instructive to look at the reduced machines for
these examples. For the automaton applied when the
cut is an irrational, one has
| 0 | 1 |
A | A0 | A1 |
For that applied when the cut corresponds with a
singly-represented rational, one has
| 0 | 1 |
A | B0 | B0 |
B | C0 | C0 |
C | D1 | D1 |
D | A1 | A1 |
For the doubly-represented rational example before
orientation by the Dedekind cut,
| 0 | 1 |
A | B1 | B1 |
B | C1 | C1 |
C | D0 | E0 |
D | F1 | F1 |
E | G0 | G0 |
F | F0 | F0 |
G | G1 | G1 |
and after,
| 0 | 1 |
A | B1 | B1 |
B | C1 | C1 |
C | D0 | E0 |
D | F1 | F1 |
F | F0 | F0 |
In all cases, the finiteness of the reduced representation
relies on circularity. In the cases where the cuts are
either irrational or doubly-represented, one or more states
are found to be transitioning into themselves.
Relative to the model investigated here, the notion
of reduced machine relative to k-equivalent states
permits all of the possible machine configurations
derived from consideration of Dedekind cuts to have
finite representation -- provided one does not object
to the use of circularity in the state tables.
The problem with simply writing out sequences
and naively "knowing" the intended meaning of
"distinguishability" is that the automaton for
the irrational cuts is applied to the identifying
concatenation of alphabet letters for all sequences
without regard for the hierarchy of logical definition.
This analysis, as far as it goes, is not yet complete.
The definition of distinguishability is characterized
in terms of "experiments." It seems prudent to
consider this aspect of the definition further and
with respect to the infinitary nature of the original
For each state, there are two possible outcomes in
the sense of traversing a decision tree. Suppose
states are redefined as choices and the components
of each state are taken to be selections. Then
an experiment is a sequence of selections taken for
the purpose of destroying an assertion of equivalence.
In other words, just a subtle change of language
characterizes this model in relation to the historical
epistemic condition associated with the assertion
of "sameness" in relation to definitions.
However, the primary purpose for this further analysis
is to construct a topology.
One thing that will be required for this construction
is an enumeration of fractions which is the familiar
diagonalized listing on Z^+xZ^+.
Each fraction, representing a rational number,
corresponds to one of the automata discussed in
the preceding remarks. As before, there is no
immediate concern for uniqueness constraints since
that destroys information.
Given any fraction, take the reduced state table, say M,
for the lossless representation and organize the selections
for the unreduced state table relative to their grammatical
order according to
Let this ordering be given the interval topology.
Observe that M is the endpoint of no interval.
Now, let the enumeration of fractions be given the
discrete topology, and, as a first specification
for the topology being constructed, let an enumeration
of sequences like that above be given the product
For the next step, it is easiest to use numerical
coordinates. So, let the machine symbol M be
taken as set-theoretic omega and let the selections
be indexed by the non-zero integers according to
Extend the construction with the letters of the
input alphabet, namely '0' and '1'. Remembering
that the interval topology on the representation
above has, for example, (2)<omega and omega<(-2),
and, remembering that the omega in the representation
above has a natural number index according to the
enumeration of fractions, augment the topology
described so far with basis elements,
B_n_0={0}u{(i,j)|i<omega /\ j>=n}
B_n_1={1}u{(i,j)|i>omega /\ j>=n}
This is the minimal Hausdorff topology.
Before continuing with the present example, take a moment
to consider a countable language with a unary negation
symbol. The well-formed formulae may be partitioned
according to whether or not the first symbol from
among the logical constants is the symbol for negation.
They can be partitioned into an enumerable sequence
of lines according to the formulae that can be formed
relative to each step of a stepwise introduction of variable
terms. The negation symbol, itself can be taken as
"omega." Then, the formulae can be arranged in this manner
relative to extension with the Fregean constants,
"the True" and "the False." Language, at least that
fragment that can be formalized, is topological.
Returning to the present analysis, consider the
quotient topology induced by a map of this minimal
Hausdorff topology into the set of reduced lossless
machines applied to the identifying sequences of
alphabet letters for rational Dedekind cuts. Such
a topology exists because the map from the minimal
Hausdorff topology is surjective onto the set of
lossless machines provided that the map takes each
"line" of selections to the reduced machine for which
its choices form the states of the unreduced machine.
Because the topology on the enumeration of fractions
is discrete, the inverse image of each machine
corresponds to a representation of the equivalence
class of fractions for some particular rational
Dedekind cut.
Based on the inverse images, there is a partition
of the minimal Hausdorff topology by which a quotient
space on the topology may be formed.
Something for another time.
In any case, the definition of "distinguishability" is
obtained by a stepwise condition involving "experiments
of length K". But -- and here is the important aspect
of this analysis -- equivalence is an infinitary concept.
You do not get to write
0.999... = 1.000...
with neither circularity nor a completed infinity.
It is one or the other. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8292768","timestamp":"2014-04-19T02:17:44Z","content_type":null,"content_length":"24324","record_id":"<urn:uuid:0b81ff16-2323-49ec-8160-31c358220235>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00268-ip-10-147-4-33.ec2.internal.warc.gz"} |
Use is subject to License Terms. Your use of this web site or any of its content or software indicates your agreement to be bound by these License Terms.
Copyright © 2006 Sun Microsystems, Inc. All rights reserved.
JSR-209 (Final Approval Ballot)
Class Polygon
All Implemented Interfaces:
java.io.Serializable, Shape
public class Polygon
extends java.lang.Object
implements Shape, java.io.Serializable
The Polygon class encapsulates a description of a closed, two-dimensional region within a coordinate space. This region is bounded by an arbitrary number of line segments, each of which is one side
of the polygon. Internally, a polygon comprises a list of (x, y) coordinate pairs, where each pair defines a vertex of the polygon, and two successive pairs are the endpoints of a line that is a
side of the polygon. The first and final pairs of (x, y) points are joined by a line segment that closes the polygon. This Polygon is defined with an even-odd winding rule. (The even-odd rule
specifies that a point lies inside the path if a ray drawn in any direction from that point to infinity is crossed by path segments an odd number of times.) This class's hit-testing methods, which
include the contains, intersects and inside methods, use the insideness definition described in the Shape class comments.
See Also:
│ Field Summary │
│ protected Rectangle │ bounds │
│ │ Bounds of the polygon. │
│ int │ npoints │
│ │ The total number of points. │
│ int[] │ xpoints │
│ │ The array of x coordinates. │
│ int[] │ ypoints │
│ │ The array of y coordinates. │
│ Constructor Summary │
│ Polygon() │ │
│ Creates an empty polygon. │ │
│ Polygon(int[] xpoints, int[] ypoints, int npoints) │ │
│ Constructs and initializes a Polygon from the specified parameters. │ │
│ Method Summary │
│ void │ addPoint(int x, int y) │
│ │ Appends the specified coordinates to this Polygon. │
│ boolean │ contains(double x, double y) │
│ │ Determines if the specified coordinates are inside this Polygon. │
│ boolean │ contains(double x, double y, double w, double h) │
│ │ Tests if the interior of this Polygon entirely contains the specified set of rectangular coordinates. │
│ boolean │ contains(int x, int y) │
│ │ Determines whether the specified coordinates are inside this Polygon. │
│ boolean │ contains(Point p) │
│ │ Determines whether the specified Point is inside this Polygon. │
│ boolean │ contains(Point2D p) │
│ │ Tests if a specified Point2D is inside the boundary of this Polygon. │
│ boolean │ contains(Rectangle2D r) │
│ │ Tests if the interior of this Polygon entirely contains the specified Rectangle2D. │
│ Rectangle │ getBounds() │
│ │ Gets the bounding box of this Polygon. │
│ Rectangle2D │ getBounds2D() │
│ │ Returns the high precision bounding box of the Shape. │
│ PathIterator │ getPathIterator(AffineTransform at) │
│ │ Returns an iterator object that iterates along the boundary of this Polygon and provides access to the geometry of the outline of this Polygon. │
│ PathIterator │ getPathIterator(AffineTransform at, double flatness) │
│ │ Returns an iterator object that iterates along the boundary of the Shape and provides access to the geometry of the outline of the Shape. │
│ boolean │ intersects(double x, double y, double w, double h) │
│ │ Tests if the interior of this Polygon intersects the interior of a specified set of rectangular coordinates. │
│ boolean │ intersects(Rectangle2D r) │
│ │ Tests if the interior of this Polygon intersects the interior of a specified Rectangle2D. │
│ void │ invalidate() │
│ │ Invalidates or flushes any internally-cached data that depends on the vertex coordinates of this Polygon. │
│ void │ reset() │
│ │ Resets this Polygon object to an empty polygon. │
│ void │ translate(int deltaX, int deltaY) │
│ │ Translates the vertices of the Polygon by deltaX along the x axis and by deltaY along the y axis. │
│ Methods inherited from class java.lang.Object │
│ clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait │
public int npoints
The total number of points. The value of npoints represents the number of valid points in this Polygon and might be less than the number of elements in xpoints or ypoints. This value can be zero.
See Also:
public int[] xpoints
The array of x coordinates. The number of elements in this array might be more than the number of x coordinates in this Polygon. The extra elements allow new points to be added to this Polygon
without re-creating this array. The value of npoints is equal to the number of valid points in this Polygon.
See Also:
public int[] ypoints
The array of y coordinates. The number of elements in this array might be more than the number of y coordinates in this Polygon. The extra elements allow new points to be added to this Polygon
without re-creating this array. The value of npoints is equal to the number of valid points in this Polygon.
See Also:
protected Rectangle bounds
Bounds of the polygon. This value can be NULL. Please see the javadoc comments for getBounds().
See Also:
public Polygon()
Creates an empty polygon.
public Polygon(int[] xpoints,
int[] ypoints,
int npoints)
Constructs and initializes a Polygon from the specified parameters.
xpoints - an array of x coordinates
ypoints - an array of y coordinates
npoints - the total number of points in the Polygon
java.lang.NegativeArraySizeException - if the value of npoints is negative.
java.lang.IndexOutOfBoundsException - if npoints is greater than the length of xpoints or the length of ypoints.
java.lang.NullPointerException - if xpoints or ypoints is null.
public void reset()
Resets this Polygon object to an empty polygon. The coordinate arrays and the data in them are left untouched but the number of points is reset to zero to mark the old vertex data as invalid and
to start accumulating new vertex data at the beginning. All internally-cached data relating to the old vertices are discarded. Note that since the coordinate arrays from before the reset are
reused, creating a new empty Polygon might be more memory efficient than resetting the current one if the number of vertices in the new polygon data is significantly smaller than the number of
vertices in the data from before the reset.
public void invalidate()
Invalidates or flushes any internally-cached data that depends on the vertex coordinates of this Polygon. This method should be called after any direct manipulation of the coordinates in the
xpoints or ypoints arrays to avoid inconsistent results from methods such as getBounds or contains that might cache data from earlier computations relating to the vertex coordinates.
public void translate(int deltaX,
int deltaY)
Translates the vertices of the Polygon by deltaX along the x axis and by deltaY along the y axis.
deltaX - the amount to translate along the x axis
deltaY - the amount to translate along the y axis
public void addPoint(int x,
int y)
Appends the specified coordinates to this Polygon.
If an operation that calculates the bounding box of this Polygon has already been performed, such as getBounds or contains, then this method updates the bounding box.
x - the specified x coordinate
y - the specified y coordinate
See Also:
getBounds(), contains(java.awt.Point)
public Rectangle getBounds()
Gets the bounding box of this Polygon. The bounding box is the smallest Rectangle whose sides are parallel to the x and y axes of the coordinate space, and can completely contain the Polygon.
Specified by:
getBounds in interface Shape
a Rectangle that defines the bounds of this Polygon.
public boolean contains(Point p)
Determines whether the specified Point is inside this Polygon.
p - the specified Point to be tested
true if the Polygon contains the Point; false otherwise.
public boolean contains(int x,
int y)
Determines whether the specified coordinates are inside this Polygon.
x - the specified x coordinate to be tested
y - the specified y coordinate to be tested
true if this Polygon contains the specified coordinates, (x, y); false otherwise.
public Rectangle2D getBounds2D()
Returns the high precision bounding box of the Shape.
Specified by:
getBounds2D in interface Shape
a Rectangle2D that precisely bounds the Shape.
public boolean contains(double x,
double y)
Determines if the specified coordinates are inside this Polygon. For the definition of insideness, see the class comments of Shape.
Specified by:
contains in interface Shape
x - the specified x coordinate
y - the specified y coordinate
true if the Polygon contains the specified coordinates; false otherwise.
public boolean contains(Point2D p)
Tests if a specified Point2D is inside the boundary of this Polygon.
Specified by:
contains in interface Shape
p - a specified Point2D
true if this Polygon contains the specified Point2D; false otherwise.
public boolean intersects(double x,
double y,
double w,
double h)
Tests if the interior of this Polygon intersects the interior of a specified set of rectangular coordinates.
Specified by:
intersects in interface Shape
x - the x coordinate of the specified rectangular shape's top-left corner
y - the y coordinate of the specified rectangular shape's top-left corner
w - the width of the specified rectangular shape
h - the height of the specified rectangular shape
true if the interior of this Polygon and the interior of the specified set of rectangular coordinates intersect each other; false otherwise.
public boolean intersects(Rectangle2D r)
Tests if the interior of this Polygon intersects the interior of a specified Rectangle2D.
Specified by:
intersects in interface Shape
r - a specified Rectangle2D
true if this Polygon and the interior of the specified Rectangle2D intersect each other; false otherwise.
public boolean contains(double x,
double y,
double w,
double h)
Tests if the interior of this Polygon entirely contains the specified set of rectangular coordinates.
Specified by:
contains in interface Shape
x - the x coordinate of the top-left corner of the specified set of rectangular coordinates
y - the y coordinate of the top-left corner of the specified set of rectangular coordinates
w - the width of the set of rectangular coordinates
h - the height of the set of rectangular coordinates
true if this Polygon entirely contains the specified set of rectangular coordinates; false otherwise.
public boolean contains(Rectangle2D r)
Tests if the interior of this Polygon entirely contains the specified Rectangle2D.
Specified by:
contains in interface Shape
r - the specified Rectangle2D
true if this Polygon entirely contains the specified Rectangle2D; false otherwise.
public PathIterator getPathIterator(AffineTransform at)
Returns an iterator object that iterates along the boundary of this Polygon and provides access to the geometry of the outline of this Polygon. An optional AffineTransform can be specified so
that the coordinates returned in the iteration are transformed accordingly.
Specified by:
getPathIterator in interface Shape
at - an optional AffineTransform to be applied to the coordinates as they are returned in the iteration, or null if untransformed coordinates are desired
a PathIterator object that provides access to the geometry of this Polygon.
public PathIterator getPathIterator(AffineTransform at,
double flatness)
Returns an iterator object that iterates along the boundary of the Shape and provides access to the geometry of the outline of the Shape. Only SEG_MOVETO, SEG_LINETO, and SEG_CLOSE point types
are returned by the iterator. Since polygons are already flat, the flatness parameter is ignored. An optional AffineTransform can be specified in which case the coordinates returned in the
iteration are transformed accordingly.
Specified by:
getPathIterator in interface Shape
at - an optional AffineTransform to be applied to the coordinates as they are returned in the iteration, or null if untransformed coordinates are desired
flatness - the maximum amount that the control points for a given curve can vary from colinear before a subdivided curve is replaced by a straight line connecting the endpoints. Since
polygons are already flat the flatness parameter is ignored.
a PathIterator object that provides access to the Shape object's geometry.
JSR-209 (Final Approval Ballot)
Copyright © 2006 Sun Microsystems, Inc. All rights reserved. Use is subject to License Terms. Your use of this web site or any of its content or software indicates your agreement to be bound by these
License Terms.
For more information, please consult the JSR 209 specification.
Periodic Functions
Tutorial to explore and understand what is a periodic function. Before you start the tutorial, let us review the definition of a periodic function.
A function f is periodic with period P if
f(x) = f(x + P) for all x in the domain of f, where P is a nonzero real number. The smallest such positive P is called the period of f.
The graph of a periodic function repeats itself indefinitely. If f is known over one period, it is known everywhere since the graph repeats itself.
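Before starting the interactive tutorial, the definition above can also be checked numerically. The sketch below (function name `is_periodic` is ours, not part of the tutorial) tests f(x) = f(x + P) at a set of sample points:

```python
import math

def is_periodic(f, P, xs, tol=1e-9):
    """Check f(x) == f(x + P) at every sample point in xs."""
    return all(abs(f(x) - f(x + P)) < tol for x in xs)

# sin has period 2*pi: f(x) = f(x + 2*pi) for every x
xs = [k * 0.37 for k in range(-50, 50)]
print(is_periodic(math.sin, 2 * math.pi, xs))   # True
print(is_periodic(math.sin, math.pi, xs))       # False: pi is not a period of sin
```

Passing such a test at finitely many points does not prove periodicity, but a single failing point disproves it, which mirrors what the tutorial's shift slider shows graphically.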
Interactive Tutorial
1 - click on the button above "click here to start" and MAXIMIZE the window obtained.
2 - Use the top slider (Change shape) to change the shape of the graph of function f.
3 - Use the middle slider (Change period) to change the period of function f.
4 - Use the bottom slider (shift): start from the left, where P = 0, and increase P slowly, shifting the graph of f. When the graph of f in blue (f(x)) and the graph in red (f(x + P)) are identical (superimposed), the value of P displayed (as a multiple of Pi) is an approximation to the period of the graph of f.
5 - Change the shape and period and repeat the above exploration.
Updated: 2 April 2013
Copyright © 2003 - 2014 - All rights reserved
ASK AN EDUCATOR! – How can I measure the height of my model rocket?
Seth asks:
What would be a good way to gauge the height of my model rocket?
Well, there are really quite a few ways to make the measurement. In a perfect world, your rocket launches perpendicular to the earth's surface, and you can use trigonometry to calculate the altitude based on your distance from the launch pad and the apparent angle from you to the rocket. You can use a tool, such as an inclinometer or "altitude tracker," which uses your line of sight to the rocket and references it to a string held vertically due to gravity. You then record your angle when the rocket hits apogee and, through trig or the supplied calculator, you can determine the height. The problem with this method is that it doesn't take into account any drift, which changes your distance to the rocket and skews the measurement.
Alternatively you can use the same method, but with two observers. NASA has a nice description and calculator that does the trig for you and gives you a much more accurate measurement.
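The trig behind both approaches is short enough to sketch in a few lines of Python. The function names below are ours, and the two-station formula assumes the rocket stays roughly in the vertical plane through the two observers (NASA's calculator handles the more general case):

```python
import math

def altitude_single(baseline_m, elevation_deg):
    """Single observer, assuming a perfectly vertical flight:
    h = d * tan(theta), where d is the distance from observer to pad."""
    return baseline_m * math.tan(math.radians(elevation_deg))

def altitude_two_station(baseline_m, angle_a_deg, angle_b_deg):
    """Two observers at opposite ends of a baseline, rocket between them
    in (roughly) the same vertical plane:
    h = b * tan(A) * tan(B) / (tan(A) + tan(B))."""
    ta = math.tan(math.radians(angle_a_deg))
    tb = math.tan(math.radians(angle_b_deg))
    return baseline_m * ta * tb / (ta + tb)

print(round(altitude_single(100, 45)))            # 100 m for a vertical flight
print(round(altitude_two_station(200, 45, 45)))   # 100 m
```

Note how the two-station estimate no longer depends on the assumption of a vertical flight path, which is exactly why it copes better with drift.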
Another method is to buy an altimeter like a PerfectFlite APRA Altimeter. This device uses a barometer and returns a beep sequence corresponding to the altitude and achieved velocity. You can also find more sophisticated devices that have computer interfaces, GPS, temperature sensors, etc.
I hope this answers your question, and have fun with your calculations!
Don’t forget, everyone is invited to ask a question!
Click here!
“Ask an Educator” questions are answered by Adam Kemp, a high school teacher who has been teaching courses in Energy Systems, Systems Engineering, Robotics and Prototyping since 2005.
1 Comment
1. Actually, the barometric pressure should give you better height information than GPS. The drift on the vertical is +/- 75 ft for GPS, while for barometric, it would be +/- 25 ft.
http://www.gpsinformation.net/main/altitude.htm <- GPS
http://www.hills-database.co.uk/altim.html <- Barometric
Spanaway Algebra 2 Tutor
Find a Spanaway Algebra 2 Tutor
...I am available during the week from 10am - 3pm, and Saturdays. I like to use visualizations (I have created a Pythagoras puzzle from acrylic). I also bring enthusiasm to the table, because I am
a hobby mathematician. I am enrolled at Western Governor's University.
34 Subjects: including algebra 2, chemistry, physics, Spanish
...From 2003 to 2005, I tutored all GED subjects and administered the practice tests to dozens of students at the Muckleshoot Tribal College in Auburn, WA. The SAT tests on a specific range of
vocabulary, fairly abstract, but not technical terms, and on secondary meanings of words. Certain of these abstract words turn up over and over again on the SAT.
38 Subjects: including algebra 2, English, writing, geometry
With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I can not promise a
quick fix, but I will not stop working if you make the effort. -Bill
16 Subjects: including algebra 2, calculus, geometry, statistics
...For seven years I have been doing this in the Buckley/Bonney Lake area where I live and in the White River School District where I am fairly well known. Although I do not hold a teaching
certificate (working on that), I am a certified Substitute teacher, I taught Algebra 2 at Choice HS, and I wa...
11 Subjects: including algebra 2, calculus, statistics, geometry
...I am highly qualified to tutor proofreading. I have been proofreading business documents and technical documents on science and engineering topics for myself and my colleagues for over 40
years. These documents include business letters (of course), proposals for engineering projects, project re...
21 Subjects: including algebra 2, English, chemistry, physics
Including probe-level uncertainty in model-based gene expression clustering
BMC Bioinformatics. 2007; 8: 98.
Including probe-level uncertainty in model-based gene expression clustering
Clustering is an important analysis performed on microarray gene expression data since it groups genes which have similar expression patterns and enables the exploration of unknown gene functions.
Microarray experiments are associated with many sources of experimental and biological variation and the resulting gene expression data are therefore very noisy. Many heuristic and model-based
clustering approaches have been developed to cluster this noisy data. However, few of them include consideration of probe-level measurement error which provides rich information about technical
We augment a standard model-based clustering method to incorporate probe-level measurement error. Using probe-level measurements from a recently developed Affymetrix probe-level model, multi-mgMOS,
we include the probe-level measurement error directly into the standard Gaussian mixture model. Our augmented model is shown to provide improved clustering performance on simulated datasets and a
real mouse time-course dataset.
The performance of model-based clustering of gene expression data is improved by including probe-level measurement error and more biologically meaningful clustering results are obtained.
Microarrays [1,2] are routinely used for the quantitative measurement of gene expression levels on a genome-wide scale. Microarray experiments are complicated multiple step procedures and variability
can be introduced in every step, so that the resulting data are often very noisy, especially for weakly expressed genes. Appropriate statistical analysis of this noisy data is very important in order
to obtain meaningful biological information [3,4]. The analysis of microarray data is usually performed in multiple stages, including probe-level analysis, normalisation and higher level analyses.
The aim of the probe-level analysis is to obtain reliable gene expression measurements from the image data. Various higher level analyses, such as detecting differential gene expression or
clustering, can then be carried out depending on the biological aims of the experiment.
Unsupervised clustering is the most frequently used approach for exploring gene function. By clustering, a huge number of genes can be organised into a much smaller number of categories according to
their shared expression patterns. It is hoped that these shared patterns reflect similar function or common transcriptional regulation. Exploring and studying the obtained gene clusters is an
important way to infer the function of uncharacterised genes from other known genes in the same cluster. There are many unsupervised algorithms which have been used to cluster gene expression data,
including the most popular hierarchical clustering [5] and k-means [6], which are based on similarity measures, and self-organising maps [7]. Most of these conventional algorithms are largely
heuristically motivated. They are easily implemented and their application is usually computationally efficient. However, these methods lack the capability to deal in a principled way with the
experimental variability in the gene expression data. Furthermore, there is no formal way to determine the number of clusters with these algorithms. It is hard to say which one is generally better
than the others [8]. Probabilistic models provide a principled alternative to these conventional methods. In particular, model-based approaches have been proposed as useful methods for clustering
gene expression data in a probabilistic way [9-12]. By using a probabilistic model, the experimental noise can be included explicitly in the model and estimated from the data, making this approach
more robust to noise. There are also useful and principled model selection methods that can be used to determine the optimal number of clusters. The advantages of model-based probabilistic approaches
over heuristic methods are already well established [10].
Affymetrix arrays contain multiple probes for each target gene and this internal replication can be used to obtain an estimate of the technical measurement error associated with each gene expression
measurement [13-17]. This source of error is especially significant for weakly expressed genes. The recently developed model, multi-mgMOS [18], provides accurate gene expression measurements along
with the associated uncertainty in this measurement. It has been shown that the probe-level measurement error obtained from multi-mgMOS can be propagated through a downstream probabilistic analysis,
thereby improving the performance of the analysis [16,17]. Existing model-based clustering methods do not consider this probe-level measurement error and they therefore discard this rich source of
information about variability. Although standard model-based clustering methods are relatively robust to noise, very noisy measurements can still have a detrimental effect on these clustering
methods, resulting in poor performance and many biologically irrelevant clusters. In this paper, we aim to include information about probe-level measurement error into the standard Gaussian mixture
model in order to improve performance compared to standard model-based clustering. Our augmented Gaussian mixture clustering model is called PUMA-CLUST (Propagating Uncertainty in Microarray Analysis
– CLUSTering) and has been implemented in the R-package pumaclust which is available from [19].
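The central idea of augmenting a Gaussian mixture with per-measurement error can be sketched in one dimension: each component's variance is inflated by the measurement-error variance of the point being assigned, so very noisy measurements receive softer cluster assignments. This is only an illustrative sketch in our own notation; the actual PUMA-CLUST model implemented in the pumaclust package differs in detail (multivariate profiles, full EM updates).

```python
import math

def norm_pdf(x, mean, var):
    """Density of a univariate Gaussian N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def responsibilities(x, s2, weights, means, variances):
    """E-step responsibilities for one 1-D measurement x whose
    measurement-error variance is s2: each component's variance is
    inflated by s2 before evaluating the likelihood."""
    lik = [w * norm_pdf(x, m, v + s2)
           for w, m, v in zip(weights, means, variances)]
    total = sum(lik)
    return [l / total for l in lik]

weights, means, variances = [0.5, 0.5], [0.0, 4.0], [1.0, 1.0]
print(responsibilities(3.0, 0.0, weights, means, variances))   # precise point: confident assignment
print(responsibilities(3.0, 25.0, weights, means, variances))  # noisy point: close to 50/50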
Results and discussion
We examine the performance of the extended Gaussian mixture model on two simulated datasets and a real-world mouse time-course dataset [12]. The simulated datasets are generated to reflect the noise
commonly seen in real microarray experiments. The extended mixture model is compared with the standard Gaussian mixture models implemented in MCLUST [20], which includes all variants of standard
Gaussian mixture models in terms of the representation of the covariance matrix. However, these models do not take the probe-level measurement error into consideration.
The performance of different clustering methods on datasets with known structures can be evaluated by using the adjusted Rand index [21,22]. The adjusted Rand index measures the similarity of two
clusterings on a dataset and it is widely used by the clustering research community [10,23-25]. The adjusted Rand index lies between 0 and 1, and is calculated based on whether pairs are placed in
the same or different clusters in two partitions with a higher value meaning better agreement between two clusterings. For the simulated datasets, since the true structure of the data is known, we
use the adjusted Rand index to evaluate the different partitioning ability of the extended mixture model which incorporates the probe-level measurement error and the standard mixture model. For the
real mouse time-course dataset, gene ontology (GO) enrichment analysis is used to compare the performance of the two clustering methods.
Clustering on simulated data sets
Simulated periodic data
Periodic patterns are often observed in real-world time-course microarray data [12,26]. However, the true structure of the real datasets is unavailable. We generate simulated periodic data and
include noise with magnitude estimated from real microarray data. Similar to the methods used by [23] and [25], the simulated data is generated by the following four steps.
At the first step, the logged gene expression within each known group is generated. There are six groups and 600 genes in the dataset. Each group has 100 genes. The first four groups have a periodic
sine pattern. The expression of gene i in group q, q = 1, 2, 3, 4, is generated by
x[qij ]= A[i ]sin(2πj/10 - πq/2) + S, (1)
where j = 1, 2,..., J and J is the number of conditions or time points. A[i ]is a random scaling factor which is sampled from U(0, 7), where U represents the uniform distribution. S is a shifting
factor which is set as 7. This assignment of A[i ]and S is to make the gene expression level lie between 0 and 14 which is the normal range of the logged gene expression level from real Affymetrix
datasets. The gene expression levels of group 5 and group 6 are generated by linear functions
x[qij ]= jA[qi]/J and x[qij ]= -jA[qi]/J + S, (2)
respectively, where A[qi ]is sampled from U (0,14) and S = 14 when q = 6 so as to ensure that the simulated expression level lies within the accepted logged expression range.
The simulated data from the first step follows perfectly the same sine wave within the same group except for a different magnitude. However, in practice there is biological and technical noise in the
experiment distorting the true sine wave. At the second step, the real mouse dataset (described in the next section) is used to obtain an estimate of the combined noise from biological and technical
sources which is related to the variance of observed gene expression level from replicated experiments. The mouse dataset has three or four replicates for each condition. Using the gene expression
summaries from MAS 5.0 [27] which is the standard software provided by Affymetrix, an estimate of the combined technical and biological noise can be obtained from Cyber-T [28]. Cyber-T is a Bayesian
hierarchical model which calculates the variance between replicates using point estimates of gene expression level from each replicate. Since the variance has a dependence on gene expression level,
the combined noise, $σqij2 MathType@MTEF@5@5@+=feaafiart1ev1aaatCvAUfKttLearuWrP9MDH5MBPbIqV92AaeXatLxBI9gBaebbnrfifHhDYfgasaacH8akY=wiFfYdH8Gipec8Eeeu0xXdbba9frFj0=OqFfea0dXdd9vqai=
hGuQ8kuc9pgc9s8qqaq=dirpe0xb9q8qiLsFr0=vr0=vr0dc8meaabaqaciaacaGaaeqabaqabeGadaaakeaaiiGacqWFdpWCdaqhaaWcbaGaemyCaeNaemyAaKMaemOAaOgabaGaeGOmaidaaaaa@33B8@$, is sampled from a subset of variances
calculated from Cyber-T whose corresponding expression levels are close to x[qij]. Thus, the final simulated expression level, $x^ MathType@MTEF@5@5@+=
vr0dc8meaabaqaciaacaGaaeqabaqabeGadaaakeaacuWG4baEgaqcaaaa@2E35@$[qij], is
$x^ MathType@MTEF@5@5@+=feaafiart1ev1aaatCvAUfKttLearuWrP9MDH5MBPbIqV92AaeXatLxBI9gBaebbnrfifHhDYfgasaacH8akY=wiFfYdH8Gipec8Eeeu0xXdbba9frFj0=OqFfea0dXdd9vqai=hGuQ8kuc9pgc9s8qqaq=dirpe0xb9q8qiLsFr0=
vr0=vr0dc8meaabaqaciaacaGaaeqabaqabeGadaaakeaacuWG4baEgaqcaaaa@2E35@$[qij ]= x[qij ]+ ε[qij], (3)
where ε[qij ]is drawn from $N MathType@MTEF@5@5@+=feaafiart1ev1aaatCvAUfKttLearuWrP9MDH5MBPbIqV92AaeXatLxBI9gBamrtHrhAL1wy0L2yHvtyaeHbnfgDOvwBHrxAJfwnaebbnrfifHhDYfgasaacH8akY=
wiFfYdH8Gipec8Eeeu0xXdbba9frFj0=OqFfea0dXdd9vqai=hGuQ8kuc9pgc9s8qqaq=dirpe0xb9q8qiLsFr0=vr0=vr0dc8meaabaqaciaacaGaaeqabaWaaeGaeaaakeaaimaacqWFneVtaaa@383B@$(0, $σqij2 MathType@MTEF@5@5@+=
vr0dc8meaabaqaciaacaGaaeqabaqabeGadaaakeaaiiGacqWFdpWCdaqhaaWcbaGaemyCaeNaemyAaKMaemOAaOgabaGaeGOmaidaaaaa@33B8@$). When J = 10, the simulated expression level for group three is shown in Figure 1(a)
. It can be seen that there is more noise for the lower expressed genes than the highly expressed ones, which is a common feature of real datasets.
Simulated expression profiles. Simulated expression profiles for one group under 10 conditions. (a) are the raw data on a log scale and (b) are the normalised profiles with zero mean and standard
deviation one.
At the third step, in order to show the clustering improvement by including probe-level measurement error, we sample the corresponding probe-level variance of the simulated expression level from the
real mouse dataset processed by multi-mgMOS. Similar to the second step, since the measurement error has a dependence on the gene expression level, the standard deviation for each simulated
expression value, $σ^ MathType@MTEF@5@5@+=feaafiart1ev1aaatCvAUfKttLearuWrP9MDH5MBPbIqV92AaeXatLxBI9gBaebbnrfifHhDYfgasaacH8akY=wiFfYdH8Gipec8Eeeu0xXdbba9frFj0=OqFfea0dXdd9vqai=hGuQ8kuc9pgc9s8qqaq=
dirpe0xb9q8qiLsFr0=vr0=vr0dc8meaabaqaciaacaGaaeqabaqabeGadaaakeaaiiGacuWFdpWCgaqcaaaa@2E86@$[qij ]is sampled from a subset of standard deviation calculated from multi-mgMOS whose corresponding
expression levels are close to $x^ MathType@MTEF@5@5@+=feaafiart1ev1aaatCvAUfKttLearuWrP9MDH5MBPbIqV92AaeXatLxBI9gBaebbnrfifHhDYfgasaacH8akY=wiFfYdH8Gipec8Eeeu0xXdbba9frFj0=OqFfea0dXdd9vqai=
hGuQ8kuc9pgc9s8qqaq=dirpe0xb9q8qiLsFr0=vr0=vr0dc8meaabaqaciaacaGaaeqabaqabeGadaaakeaacuWG4baEgaqcaaaa@2E35@$[qij]. Figure 2(a) shows the scatter plot of the sampled standard deviation against the
simulated expression level for one randomly selected condition. It can be seen that the variance of the measured gene expression for the weakly expressed genes is generally larger than that for the
highly expressed genes as is commonly observed in real datasets. This is consistent with the plot in Figure 1(a). At the final step, we normalise the simulated expression level for each gene over all
conditions by subtracting the mean expression level and dividing by the standard deviation such that the profile of each gene has zero mean and standard deviation one. The simulated standard
deviation is also divided by the standard deviation of the expression level to determine the corresponding measurement error of the normalised data. The normalised profile is shown in Figure 1(b)
when J = 10.
Standard deviation against the simulated gene expression level. Scatter plots of standard deviation against the simulated gene expression level. The standard deviation in (a) is sampled from the
multi-mgMOS results obtained from the mouse dataset. The ...
Since the true partition of the simulated dataset is known, the agreement of the clustering results from different methods with the true partition can be assessed by the adjusted Rand index. The true
number of groups, six, is selected for both MCLUST and PUMA-CLUST. Three sets of datasets are generated to evaluate the different performance of PUMA-CLUST and MCLUST with number of conditions 10, 20
and 30. For each set, 10 random simulated datasets are generated. The average adjusted Rand index from PUMA-CLUST and MCLUST are shown in the first column of Figure Figure3.3. For the three sets of
simulated datasets, PUMA-CLUST results in markedly better performance compared with MCLUST and the p-values of a paired t-test, shown in Table Table1,1, indicate that the difference in performance
is highly significant.
Average adjusted Rand index. The average adjusted Rand index of the clustering results from PUMA-CLUST and MCLUST on the simulated data. The first column is for the six group dataset and the second
column is for the seven group dataset with one noise ...
P-values obtained from a paired t-test of adjusted Rand index from MCLUST and PUMA-CLUST. A paired t-test is performed for MCLUST and each of PUMA-CLUST results. The 10 simulated datasets in Figure 3
are used for each test. PC represents PUMA-CLUST results ...
Including a noise group
In a real-world microarray dataset, there are usually a certain fraction of genes whose expression levels are indistinguishable from random noise. These genes do not belong to any pattern group in
the dataset [25].
To assess the performance of PUMA-CLUST on this kind of dataset, we add a group of random noise genes into the previously simulated datasets. The first generating step of the gene expression level
for group seven is
x[qij ]= A[qi], (4)
where A[qi ]is sampled from U(0,14). The following steps of the simulation are the same as those for the former six groups. Three sets of simulated datasets with 10 randomly generated datasets for
each set are also sampled and the average adjusted Rand index for three cases with 10, 20, and 30 conditions are shown in the second column of Figure Figure3.3. The number of groups for both MCLUST
and PUMA-CLUST is assigned to seven. From the three plots it can be seen that the performance of the clustering from both PUMA-CLUST and MCLUST decreases with the inclusion of the noise group, but
PUMA-CLUST still outperforms MCLUST over all three noise levels with the three different data dimensions. The p-values in Table Table11 indicate that the improvement is statistically significant.
Testing the robustness to misspecified technical variance
The probe-level variance in the simulated datasets generated above is sampled from multi-mgMOS results on the real mouse dataset. When applying PUMA-CLUST it was assumed that the level of noise is known, but in practice it would be estimated using multi-mgMOS. We would like to test robustness to errors in estimating the measurement-error variance. We therefore add noise to the sampled standard deviation, $\hat\sigma_{ij}$, to simulate the error made in estimating this quantity. For the six-group and seven-group datasets, three kinds of random noise are added by sampling from $N(0, 0.01)$, $N(0, 0.1)$ and $N(0, 0.2)$. Scatter plots of the error-added standard deviation against the simulated gene expression are shown in Figure 2(b)-(d). Figure 3 gives the average adjusted Rand index of the clustering results from PUMA-CLUST on the error-added standard deviations for the various cases. In the case of PC.01 the added noise is small, so the clustering results are very close to those on the original simulated data. As the added noise variance increases, the performance of PUMA-CLUST decreases. The p-values in Table 1 mostly increase when larger noise is added to the variances, but all remain small and demonstrate a significant improvement of PUMA-CLUST over MCLUST. These results demonstrate that clustering is most accurate when the measurement-error variance is known, but that the method is robust to errors in its estimate.
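The perturbation scheme above can be sketched as follows. This is an illustrative reconstruction, not the authors' simulation code: the matrix shape, the clipping floor that keeps the perturbed standard deviations positive, and the function name are all assumptions.

```python
import numpy as np

def perturb_sd(sigma_hat, noise_var, rng=None):
    """Add zero-mean Gaussian noise of variance `noise_var` to estimated
    standard deviations, clipping at a small floor so they stay positive."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, np.sqrt(noise_var), size=sigma_hat.shape)
    return np.clip(sigma_hat + noise, 1e-6, None)

# Perturb a hypothetical 5-gene x 3-condition matrix of estimated sds
sigma_hat = np.full((5, 3), 0.5)
for noise_var in (0.01, 0.1, 0.2):   # the three noise levels tested above
    perturbed = perturb_sd(sigma_hat, noise_var, rng=np.random.default_rng(0))
```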
Clustering on a real mouse time-course dataset
The improved performance of the new model, PUMA-CLUST, over the standard Gaussian mixture model on simulated datasets was shown in the previous section. Here, we evaluate the performance of
PUMA-CLUST on a real mouse dataset showing periodic behavior [12] by comparing with the results of the standard mixture model implemented in MCLUST.
This time-course dataset profiles the gene expression changes during the hair growth cycle, which is synchronised for the first two cycles following birth. After two cycles the hair growth cycle
becomes progressively unsynchronised. Lin et al. use Affymetrix MG-U74Av2 microarray chips to profile mRNA expression in mouse back skin from eight representative time points in order to discover
regulators in hair-follicle morphogenesis and cycling [12]. The microarray dataset utilised a total of 25 chips with each time point consisting of three or four replicates. The first five time points
(day 1, 6, 14, 17 and 23) cover the first synchronised cycle and the last three time points (week 9, month 5 and year 1) belong to the asynchronous cycles. They identified 2,461 potential hair cycle-associated genes using an F-test comparing synchronous and asynchronous time points. This dataset is available at [29].
We apply both PUMA-CLUST and MCLUST over the first five time points, which belong to the synchronised cycle and comprise 15 chips. For MCLUST the raw mouse dataset is processed using the popular probe-level method GCRMA [30]. For PUMA-CLUST the raw data is processed by multi-mgMOS. We also applied MCLUST to MAS 5.0 and multi-mgMOS gene expression measurements and found the performance to be similar to the results presented here using GCRMA.
The clustering is performed on the 2,461 potential hair cycle-associated genes. The expression levels obtained for each probe-set from both probe-level methods are normalised to have zero mean and unit standard deviation. The Bayesian Information Criterion (BIC [31]) is used to determine the number of clusters. The calculated BIC for various numbers of clusters is shown in Figure 4. The optimal BIC for PUMA-CLUST is obtained at K = 22 and the optimal BIC for MCLUST at K = 30. In both cases MCLUST converges to the model having the same full-rank covariance matrix for each component (the 'EEE' model [32]). To make the different clustering methods comparable, the number of clusters for each method should be the same; therefore the 22-cluster and the 30-cluster cases are compared separately. The 22 clusters obtained from PUMA-CLUST and MCLUST are shown in Figures 5 and 6 respectively, and the 30 clusters obtained are shown in Figures 7 and 8, respectively. For visualisation, the average expression level at each time point over replicates is shown for both the gene profile and the cluster center.
BIC for PUMA-CLUST and MCLUST. BIC for (a) PUMA-CLUST and (b) MCLUST against the number of mixture components on the 2,461 potential hair growth-associated genes from the mouse time-course dataset.
PUMA-CLUST obtains the minimum BIC at K = 22 and MCLUST ...
Expression pattern clusters from PUMA-CLUST when K = 22. The clusters are for the 2,461 potential hair cycle-associated genes of the mouse time-course dataset when K = 22. The expression pattern for
each probe-set is shown as dark lines for five time ...
Expression pattern clusters from MCLUST when K = 22. The clusters are for the 2,461 potential hair cycle-associated genes of the mouse time-course dataset when K = 22. The expression pattern for each
probe-set is shown as dark lines for five time points. ...
Expression pattern clusters from PUMA-CLUST when K = 30. The clusters are for the 2,461 potential hair-growth-associated genes of the mouse time-course dataset when K = 30. The expression pattern for
each probe-set is shown as dark lines for five time ...
Expression pattern clusters from MCLUST when K = 30. The clusters are for the 2,461 potential hair-growth-associated genes of the mouse time-course dataset when K = 30. The expression pattern for
each probe-set is shown as dark lines for five time points. ...
To assess whether biologically relevant clusters are created using the two methods, we systematically performed GO annotation enrichment analysis for the individual clusters using DAVID 2006 (The
Database for Annotation, Visualization and Integrated Discovery, [33]). The GO enrichment analysis allows the direct assessment of the biological significance for gene clusters found based on the
enrichment of genes belonging to a specific GO functional category. The enrichment calculation performed in DAVID is a modified Fisher Exact test. The resulting p-value shows the biological
significance for gene clusters. Based on our experience, GO Biological Process term level 5 gives more precise category definitions which are useful in further biological interpretations. Therefore,
a meaningful GO enrichment analysis is to examine enriched categories of GO Biological Process at term level 5 and to select an enrichment cutoff at a conventional p-value of 0.05.
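The idea behind the enrichment test can be illustrated with a plain Fisher exact test on a 2×2 contingency table (DAVID actually uses a modified version, the EASE score, so this is only an approximation of its calculation); the counts below are invented for illustration.

```python
from scipy.stats import fisher_exact

# Hypothetical counts for one cluster and one GO category:
# 8 of the 40 genes in the cluster carry the term, while 60 of the
# remaining 2421 background genes do.
table = [[8, 32],      # in cluster: with term, without term
         [60, 2361]]   # background: with term, without term
_, p_value = fisher_exact(table, alternative="greater")
enriched = p_value < 0.05   # the enrichment cutoff used in the text
```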
We found that for the 22-cluster results PUMA-CLUST produced more clusters with at least one enriched GO category (21 of 22) than MCLUST (17 of 22), as shown in Figure 9(a). A visual inspection of the MCLUST clusters without an enriched GO category indicates that four of the five (clusters #1, 6, 8 and 15 in Figure 6) contain heterogeneous temporal expression profiles (i.e. they are not tightly clustered). Since the number of enriched GO categories varies greatly among clusters (Figure 10(a)), the average number of enriched categories among the 22 PUMA-CLUST clusters (13.1) is only slightly greater than the average among the MCLUST clusters (11.5). A more meaningful indicator of the distributional difference is the median number of enriched categories: 14 for PUMA-CLUST clusters versus 7 for MCLUST clusters. The same enrichment analysis was repeated using the 30 clusters from both methods, and the results still clearly indicate that PUMA-CLUST produces more biologically meaningful clusters than MCLUST. Using 30 clusters, all clusters generated by PUMA-CLUST have at least one enriched GO category, compared with only 21 of the 30 clusters created by MCLUST (Figure 9(b)). The median numbers of enriched categories for PUMA-CLUST and MCLUST are 7 and 3, respectively (Figure 10(b)). Based on these GO enrichment analyses, it is evident that PUMA-CLUST generated more biologically relevant clusters than MCLUST.
Comparison of the number of clusters found with the indicated ranges of enriched GO categories for MCLUST and PUMA-CLUST clusters. Comparison of the number of clusters found with the indicated ranges
of enriched categories for MCLUST and PUMA-CLUST clusters ...
Boxplot of the number of enriched categories for MCLUST and PUMA-CLUST clusters. Boxplot of the number of enriched categories for MCLUST and PUMA-CLUST clusters using (a) 22 clusters and (b) 30
clusters. The boxes show the lower quartile, median, and ...
For further validation, we also applied MCLUST to multi-mgMOS measurements so that PUMA-CLUST and MCLUST can be compared using exactly the same probe-level summary method. MAS 5.0 is another popular probe-level method, so we also applied MCLUST to MAS 5.0-processed data for comparison. Enrichment analyses on the 22-cluster results for all four approaches (Figures 11 and 12) show that MCLUST on multi-mgMOS-processed data performed similarly to MCLUST on GCRMA-processed data. Both have five clusters without any enriched category, but MCLUST with GCRMA had a slightly higher median number of enriched categories (7 vs. 5). Although MCLUST with MAS 5.0 had only two clusters without any enriched category, its median number of enriched categories is notably less than that of PUMA-CLUST with multi-mgMOS (5.5 vs. 14). Thus, PUMA-CLUST with multi-mgMOS still performs best in comparison to MCLUST using the three different expression summary methods. We found similar results for the 30-cluster case and for other numbers of clusters. In particular, when the same probe-level method, multi-mgMOS, is used, PUMA-CLUST always outperforms MCLUST. The improved performance is due to the inclusion of probe-level measurement error, which down-weights the effect of noisy, lowly expressed genes.
Comparison of the number of clusters found with the indicated ranges of enriched GO categories for MCLUST and PUMA-CLUST clusters using various probe-level methods. Comparison of the number of
clusters found with the indicated ranges of enriched categories ...
Boxplot of the number of enriched categories for MCLUST and PUMA-CLUST clusters using various probe-level methods. Boxplot of the number of enriched categories for MCLUST and PUMA-CLUST clusters
using various probe-level methods when K = 22. The boxes ...
In this paper we demonstrate the usefulness of probe-level measurement error in model-based clustering of gene expression data. A standard Gaussian mixture model with an unequal-volume spherical covariance matrix is augmented to incorporate the probe-level measurement error obtained from Affymetrix microarrays. Results on simulated datasets and a real mouse time-course dataset show that including probe-level measurement error yields improved and more biologically meaningful clusterings of gene expression data. The augmented clustering model has been implemented in an R package, pumaclust, for public use.
The improved performance of the augmented model has been shown in this paper. Further improvement may be possible by also considering replicate information where repeated measurements are available at each time point. Clustering on repeated measurements has been considered by [12,23,25], but none of these approaches includes probe-level measurement error. Including both probe-level noise and replicate information in the clustering would be a useful extension of our work.
multi-mgMOS and probe-level measurement error
Affymetrix microarrays use multiple probe-pairs, called a probe-set, to measure the expression level of each gene. Each probe-pair consists of a perfect match (PM) probe and a mismatch (MM) probe. By design, the intensity of the PM probe measures the specific hybridisation of the target, while the MM probe measures the non-specific hybridisation associated with its corresponding PM probe. Microarray experimental data show that the intensities of both PM and MM probes vary in a probe-specific way and that MM probes also detect some specific hybridisation. Based on these observations, multi-mgMOS [18] assumes that the intensities of PM and MM probes for a probe-set both follow gamma distributions, with parameters accounting for specific and non-specific hybridisation and for probe-specific effects. Let $y_{ijc}$ and $m_{ijc}$ represent the $j$th PM and MM intensities respectively for the $i$th probe-set under the $c$th condition. The model is defined by

$y_{ijc} \sim Ga(a_{ic} + \alpha_{ic}, b_{ij})$,
$m_{ijc} \sim Ga(a_{ic} + \phi\alpha_{ic}, b_{ij})$,   (5)
$b_{ij} \sim Ga(c_i, d_i)$,

where $Ga$ represents the gamma distribution and $\phi$ is the fraction of the specific signal detected by the MM probe. The parameter $a_{ic}$ accounts for the background and non-specific hybridisation associated with the probe-set, and $\alpha_{ic}$ accounts for the specific hybridisation measured by the probe-set. The parameter $b_{ij}$ is a latent variable which models probe-specific effects. The maximum a posteriori (MAP) solution of this model can be found by efficient numerical optimisation. The posterior distribution of the logged gene expression level can then be estimated from the model and approximated by a Gaussian distribution with mean $\hat{x}_{ic}$ and variance $\nu_{ic}$. The mean of this distribution is taken as the estimated gene expression for gene $i$ under condition $c$, and the variance can be considered the measurement error associated with this estimate. The Gaussian approximation to the posterior distribution is useful for propagating the probe-level measurement error into subsequent downstream analyses.
Mixture model
The mixture model is a useful tool for revealing the inherent structure of data. In a mixture model with $K$ components, the data are generated by

$p(x_i) = \sum_{k=1}^{K} P(k)\, p(x_i \mid k; \theta_k),$   (6)

where $P(k)$ denotes the probability of selecting the $k$th component with parameters $\theta_k$, and $\theta = \{\theta_1, \theta_2, \ldots, \theta_K, P(k)\}$ is the complete parameter set of the mixture model. The component labels $k$ are latent variables determining which cluster each data point belongs to.
Mixture models are usually fitted by maximum likelihood using an Expectation-Maximisation (EM) algorithm [34]. Starting from initialised parameters at $t = 0$, the parameter values are determined iteratively through an E-step and an M-step:

• E-step: Compute

$\hat{P}^t(k \mid x_i) = P(k \mid x_i; \hat\theta^t)$   (7)

for each data point $x_i$ and each component $k$.

• M-step:

$\theta^{t+1} = \arg\max_\theta \sum_i \sum_k \hat{P}^t(k \mid x_i) \log\left( p(x_i \mid k; \theta_k)\, P(k) \right)$   (8)

with the constraint $\sum_k P(k) = 1$.
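As a concrete illustration of these two steps, here is a minimal EM loop for a one-dimensional two-component Gaussian mixture. This is a generic textbook sketch, not the pumaclust implementation; the initialisation strategy and the toy data are assumptions.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Minimal EM for a one-dimensional Gaussian mixture with K = 2."""
    mu = np.array([x.min(), x.max()])   # spread-out deterministic init
    var = np.array([x.var(), x.var()])
    pk = np.array([0.5, 0.5])           # component priors P(k)
    for _ in range(n_iter):
        # E-step: responsibilities P^t(k | x_i), Eq. (7)
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pk * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters, maximising Eq. (8)
        nk = resp.sum(axis=0)
        pk = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return mu, var, pk

# Two well-separated groups should be recovered
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(10.0, 1.0, 200)])
mu, var, pk = em_gmm_1d(x)
```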
Standard Gaussian mixture model
For mixture component distributions from the exponential family, such as the Gaussian, both steps are exactly tractable. In a Gaussian mixture model where $\theta_k = \{\mu_k, \Sigma_k\}$, each component $k$ is modelled by a Gaussian distribution with mean $\mu_k$ and covariance matrix $\Sigma_k$,

$p(x_i \mid k; \theta_k) = N(x_i \mid \mu_k, \Sigma_k) = \frac{1}{\sqrt{(2\pi)^p |\Sigma_k|}} \exp\left( -\frac{1}{2} (x_i - \mu_k)^T \Sigma_k^{-1} (x_i - \mu_k) \right),$   (9)

where $|\cdot|$ denotes the determinant and $p$ is the dimension of the data. As well as changing the number of components in the mixture, the covariance matrix $\Sigma_k$ can be constrained to control the flexibility of the model. The most constrained model is parameterised by $\Sigma_k = \sigma^2 I$, with only one free parameter in the covariance matrix shared by all components. The unconstrained model has full-rank $\Sigma_k$ with $p(p + 1)/2$ free parameters in the covariance matrix for each component. All representations of the covariance matrix are explored in [35]. Allowing the number of free parameters in the covariance matrix to vary leads to a family of models accommodating different characteristics of the data. All of these models have been implemented in MCLUST [20], and the BIC model selection criterion (described later) is used to select the most appropriate one.
Including measurement uncertainty in a Gaussian mixture model
From a probabilistic probe-level model such as multi-mgMOS, one can obtain for each data point the measurement error $\nu_i$, a vector giving the variance of the measured expression level on each chip. Suppose $x_i$ is the true expression level for data point $i$. The $k$th component of the Gaussian mixture model is $p(x_i \mid k; \theta_k) = N(x_i \mid \mu_k, \Sigma_k)$. The measured expression level $\hat{x}_i$ can be expressed as $\hat{x}_i = x_i + \epsilon_i$. A zero-mean Gaussian measurement noise is assumed, $\epsilon_i \sim N(0, \mathrm{diag}(\nu_i))$, where $\mathrm{diag}(\nu_i)$ is the diagonal matrix whose diagonal entries are the elements of $\nu_i$. Since $\hat{x}_i$ is a linear sum of $x_i$ and $\epsilon_i$, the $k$th Gaussian component can be augmented as

$p(\hat{x}_i \mid k; \theta_k) = N(\hat{x}_i \mid \mu_k, \Sigma_k + \mathrm{diag}(\nu_i)).$   (10)

We therefore augment the mixture model to account for the measurement error of each data point,

$p(\hat{x}_i) = \sum_{k=1}^{K} P(k)\, N(\hat{x}_i \mid \mu_k, \Sigma_k + \mathrm{diag}(\nu_i)).$   (11)
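Equation (10) simply inflates the component covariance by the gene-specific measurement variance. For the spherical case adopted below ($\Sigma_k = \sigma_k^2 I$) the covariance stays diagonal, so the log-density can be evaluated cheaply. The sketch and its example numbers are illustrative, not part of the pumaclust code.

```python
import numpy as np

def augmented_log_density(x_hat, mu, sigma2_k, nu):
    """Log N(x_hat | mu, sigma2_k * I + diag(nu)) for the spherical model.

    x_hat    : measured expression profile (length-p vector)
    mu       : component mean
    sigma2_k : component (function-specific) variance, a scalar
    nu       : gene-specific measurement variances, one per condition
    """
    var = sigma2_k + nu              # diagonal of the inflated covariance
    resid = x_hat - mu
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# A noisy gene is down-weighted: with large nu the density is flatter,
# so a point far from the mean is penalised less.
mu = np.zeros(5)
x_far = np.full(5, 2.0)
low_noise = augmented_log_density(x_far, mu, 1.0, np.full(5, 0.01))
high_noise = augmented_log_density(x_far, mu, 1.0, np.full(5, 4.0))
```

The comparison at the end illustrates the down-weighting mechanism mentioned in the Results: a large measurement variance flattens the component density, reducing the influence of that point on the fit.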
Ideally, the covariance matrix would be of full rank to obtain the greatest model flexibility, but this increases model complexity. Since in (11) the additive measurement error $\mathrm{diag}(\nu_i)$ already accounts for inherent variability in the data, especially for extremely noisy gene expression data, the unequal-volume spherical model (VI) described in [10], with covariance $\Sigma_k = \sigma_k^2 I$, is adopted. This model allows the spherical components to have different variances, accounting for the variability within different gene function groups. In this model the gene-specific variance $\nu_i$ is known, obtained from a probabilistic probe-level analysis model, while the function-specific variance $\sigma_k^2$ is estimated from the mixture model via the EM algorithm. The parameters are denoted $\theta_k = \{\mu_k, \sigma_k^2\}$ for Gaussian component $k$ and $\theta = \{\theta_1, \theta_2, \ldots, \theta_K\}$ for all components, where $K$ is the number of components. Initial parameters $\hat\theta^0$ for all components are obtained using the K-means algorithm, and equal component priors are assumed for the initial values $\hat{P}^0(k)$. At the E-step, the posterior probability of each data point $x_i$ belonging to component $k$ is calculated as

$P^t(k \mid x_i) = P(k \mid x_i; \theta^{t-1}) = \frac{p(x_i \mid \theta_k^{t-1})\, P^{t-1}(k)}{\sum_k p(x_i \mid \theta_k^{t-1})\, P^{t-1}(k)}.$   (12)
At the M-step, the component priors and the component parameters are optimised:

$P^t(k) = \frac{1}{N} \sum_{i=1}^{N} P^t(k \mid x_i),$   (13)

$\theta^t = \arg\max_\theta \sum_i \sum_k P^t(k \mid x_i) \log\left( p(x_i \mid \theta_k)\, P^t(k) \right).$   (14)
Equation (14) cannot be solved analytically due to the incorporation of ν[i ]in the variance terms. However, with fast optimisation methods available such as SNOPT [36] and donlp2 [37], it is easy to
calculate the optimal parameters numerically at the M-step. In our R implementation, pumaclust, we use donlp2.
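As an illustration of this numerical M-step, the sketch below maximises the weighted log-likelihood of a single component over $(\mu_k, \sigma_k^2)$ using SciPy's L-BFGS-B optimiser rather than SNOPT or donlp2. The parameterisation through $\log\sigma_k^2$ (to keep the variance positive) and the toy data are assumptions, not details of the pumaclust implementation.

```python
import numpy as np
from scipy.optimize import minimize

def m_step_component(x_hat, nu, resp_k):
    """Numerically optimise (mu_k, sigma2_k) for one component of Eq. (14).

    x_hat  : (n, p) measured expression profiles
    nu     : (n, p) measurement variances
    resp_k : (n,) responsibilities P^t(k | x_i) for this component
    """
    n, p = x_hat.shape

    def neg_objective(params):
        mu, log_s2 = params[:p], params[p]
        var = np.exp(log_s2) + nu            # sigma_k^2 + nu_i, entrywise
        resid = x_hat - mu
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var, axis=1)
        return -np.sum(resp_k * ll)

    x0 = np.concatenate([x_hat.mean(axis=0), [0.0]])   # start at log sigma2 = 0
    res = minimize(neg_objective, x0, method="L-BFGS-B")
    return res.x[:p], float(np.exp(res.x[p]))

# Toy check: with tiny measurement error the sample mean is recovered
rng = np.random.default_rng(0)
x = rng.normal(3.0, 1.0, size=(200, 2))
nu = np.full((200, 2), 1e-3)
mu_hat, s2_hat = m_step_component(x, nu, np.ones(200))
```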
Model selection
In the previous section the covariance matrix of the Gaussian mixture model was specified and the parameters were obtained via an EM algorithm for a given $K$. In practice, the most appropriate number of clusters must also be determined. In mixture models, the Bayesian Information Criterion (BIC [31]) is usually used for this purpose. For model $m$ with $K$ clusters, the BIC is

$\mathrm{BIC}_m = -2 \log p(D \mid \hat\theta_m) + d_m \log(n),$   (15)

where $D$ is the dataset, $d_m$ is the number of free parameters to be estimated in model $m$, $n$ is the number of genes and $\hat\theta_m$ are the maximum likelihood parameter estimates obtained by the EM algorithm. For the unequal-volume spherical model (VI), the number of free parameters is $d_m = K(p + 2) - 1$. MCLUST also uses BIC to select the most appropriate class of covariance model.
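The free-parameter count for the VI model decomposes as $Kp$ means, $K$ component variances and $K - 1$ independent mixing proportions, giving $d_m = Kp + K + (K - 1) = K(p + 2) - 1$. A minimal sketch of Eq. (15), with invented numbers:

```python
import math

def bic(log_likelihood, K, p, n):
    """BIC for the unequal-volume spherical (VI) mixture model.

    d_m = K*p (means) + K (variances) + (K - 1) (free priors) = K(p + 2) - 1
    """
    d_m = K * (p + 2) - 1
    return -2.0 * log_likelihood + d_m * math.log(n)

# Illustrative numbers only: 22 clusters, 5 time points, 2461 genes
value = bic(log_likelihood=-12000.0, K=22, p=5, n=2461)
```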
Adjusted Rand index
The adjusted Rand index measures the agreement between two clustering results. Given a set of $n$ data points $D = \{x_1, \ldots, x_n\}$, suppose $C^1 = \{c_1^1, \ldots, c_M^1\}$ and $C^2 = \{c_1^2, \ldots, c_N^2\}$ represent two different partitions of the data points in $D$. Let $n_{ij}$ be the number of data points belonging to both cluster $c_i^1$ and cluster $c_j^2$, and let $n_{i.}$ and $n_{.j}$ be the numbers of data points in clusters $c_i^1$ and $c_j^2$ respectively. The adjusted Rand index is

$\frac{\sum_{i,j} \binom{n_{ij}}{2} - \left[ \sum_i \binom{n_{i.}}{2} \sum_j \binom{n_{.j}}{2} \right] / \binom{n}{2}}{\frac{1}{2}\left[ \sum_i \binom{n_{i.}}{2} + \sum_j \binom{n_{.j}}{2} \right] - \left[ \sum_i \binom{n_{i.}}{2} \sum_j \binom{n_{.j}}{2} \right] / \binom{n}{2}}.$   (16)
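Eq. (16) can be computed directly from the contingency table of the two partitions. The sketch below is a straightforward standard-library implementation, not the code used for the results above:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels1, labels2):
    """Adjusted Rand index between two partitions given as label lists."""
    n = len(labels1)
    pairs = Counter(zip(labels1, labels2))   # n_ij
    row = Counter(labels1)                   # n_i.
    col = Counter(labels2)                   # n_.j
    sum_ij = sum(comb(v, 2) for v in pairs.values())
    sum_i = sum(comb(v, 2) for v in row.values())
    sum_j = sum(comb(v, 2) for v in col.values())
    expected = sum_i * sum_j / comb(n, 2)
    max_index = 0.5 * (sum_i + sum_j)
    return (sum_ij - expected) / (max_index - expected)

# Identical partitions (up to relabelling) give an index of exactly 1.0
ari = adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0])
```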
Authors' contributions
XL developed and implemented the new model, applied the new and standard models to the simulated and real data, and drafted the manuscript. KL and BA provided the mouse dataset, helped with the evaluation of the clustering results on this dataset and revised the manuscript. MR supervised the study and helped with the manuscript preparation. All authors read and approved the final manuscript.
XL was funded by the Overseas Scholarship Scheme from the University of Manchester and a studentship from the School of Computer Science. BA was supported by NIH awards AR44882 and AR52863. MR was supported by a BBSRC award 'Improved processing of microarray data with probabilistic models'.
• Schena M, Shalon D, Davis RW, Brown PO. Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science. 1995;270:467–470. doi: 10.1126/science.270.5235.467. [
PubMed] [Cross Ref]
• Lockhart DJ, Dong H, Byrne MC, Follettie MT, Gallo MV, Chee MS, Mittmann M, Wang C, Kobayashi M, Horton H, Brown EL. Expression monitoring by hybridization to high-density oligonucleotide arrays.
Nat Biotechnol. 1996;14:1675–1680. doi: 10.1038/nbt1296-1675. [PubMed] [Cross Ref]
• Slonim DK. From pattern to pathways: gene expression data analysis comes of age. Nature Genetics. 2002;32:502–508. doi: 10.1038/ng1033. [PubMed] [Cross Ref]
• Quackenbush J. Computational Analysis of Microarray Data. Nature Reviews Genetics. 2001;2:418–427. doi: 10.1038/35076576. [PubMed] [Cross Ref]
• Eisen MB, Spellman PT, Brown PO, Botstein D. Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci USA. 1998;95:14863–14868. doi: 10.1073/pnas.95.25.14863. [PMC free
article] [PubMed] [Cross Ref]
• Tavazoie S, Hughes JD, Campbell MJ, Cho RJ, Church GM. Systematic determination of genetic network architecture. Nat Genet. 1999;22:281–285. doi: 10.1038/10343. [PubMed] [Cross Ref]
• Tamayo P, Slonim D, Mesirov J, Zhu Q, Kitareewan S, Dmitrovsky E, Lander ES, Golub TR. Interpreting patterns of gene expression with self-organizing maps: methods and application to hematopoietic differentiation. Proc Natl Acad Sci USA. 1999;22:2907–2912. doi: 10.1073/pnas.96.6.2907. [PMC free article] [PubMed] [Cross Ref]
• D'haeseleer P. How does gene expression clustering work? Nature Biotechnology. 2005;23:1499–1501. doi: 10.1038/nbt1205-1499. [PubMed] [Cross Ref]
• Fraley C, Raftery AE. Model-based clustering, discriminant analysis and density estimation. J Am Stat Assoc. 2002;97:911–931. doi: 10.1198/016214502760047131. [Cross Ref]
• Yeung KY, Fraley C, Murua A, Raftery AE, Ruzzo WL. Model-based clustering and data transformations for gene expression data. Bioinformatics. 2001;17:977–987. doi: 10.1093/bioinformatics/
17.10.977. [PubMed] [Cross Ref]
• Siegmund KD, Laird PW, Laird-Offringa IA. A comparison of cluster analysis methods using DNA methylation data. Bioinformatics. 2004;20:1896–1904. doi: 10.1093/bioinformatics/bth176. [PubMed] [
Cross Ref]
• Lin KK, Chudova D, Hatfield GW, Smyth P, Andersen B. Identification of hair cycle-associated genes from time-course gene expression profile data by using replicate variance. Proceedings of the
National Academy of Science USA. 2004;101:15955–15960. doi: 10.1073/pnas.0407114101. [PMC free article] [PubMed] [Cross Ref]
• Hein AMK, Richardson S, Causton HC, Ambler GK, Green PJ. BGX: a fully Bayesian integrated approach to the analysis of Affymetrix GeneChip data. Biostatistics. 2005;4:249–264. [PubMed]
• Li C, Wong WH. Model-based analysis of oligonucleotide arrays: model validation, design issues and standard error application. Genome Biology. 2001;2:research0032. [PMC free article] [PubMed]
• Rattray M, Liu X, Sanguinetti G, Milo M, Lawrence N. Propagating Uncertainty in Microarray Data Analysis. Briefings in Bioinformatics. 2006;7:37–47. doi: 10.1093/bib/bbk003. [PubMed] [Cross Ref]
• Sanguinetti G, Milo M, Rattray M, Lawrence ND. Accounting for probe-level noise in principal component analysis of microarray data. Bioinformatics. 2005;21:3748–3754. doi: 10.1093/bioinformatics/
bti617. [PubMed] [Cross Ref]
• Liu X, Milo M, Lawrence ND, Rattray M. Probe-level measurement error improves accuracy in detecting differential gene expression. Bioinformatics. 2006;22:2107–2113. doi: 10.1093/bioinformatics/
btl361. [PubMed] [Cross Ref]
• Liu X, Milo M, Lawrence ND, Rattray M. A tractable probabilistic model for Affymetrix probe-level analysis across multiple chips. Bioinformatics. 2005;21:3637–3644. doi: 10.1093/bioinformatics/
bti583. [PubMed] [Cross Ref]
• PUMA – Propagating Uncertainty in Microarray Analysis http://www.bioinf.manchester.ac.uk/resources/puma/
• Fraley C, Raftery AE. Mclust: software for model-based cluster analysis. J Classification. 2002;16:297–306. doi: 10.1007/s003579900058. [Cross Ref]
• Milligan GW, Cooper MC. A study of the comparability of external criteria for hierarchical cluster analysis. Multivariate Behavioral Research. 1986;21:441–458. doi: 10.1207/s15327906mbr2104_5. [
Cross Ref]
• Hubert L, Arabie P. Comparing partitions. Journal of Classification. 1985;2:193–218. doi: 10.1007/BF01908075. [Cross Ref]
• Yeung KY, Medvedovic M, Bumgarner RE. Clustering gene-expression data with repeated measurements. Genome Biology. 2003;4:R34. doi: 10.1186/gb-2003-4-5-r34. [PMC free article] [PubMed] [Cross Ref]
• Bolshakova N, Azuaje F. Cluster validation techniques for genome expression data. Signal Process. 2003;83:825–833. doi: 10.1016/S0165-1684(02)00475-9. [Cross Ref]
• Medvedovic M, Yeung KY, Bumgarner RE. Bayesian mixture model based clustering of replicated microarray data. Bioinformatics. 2004;20:1222–1232. doi: 10.1093/bioinformatics/bth068. [PubMed] [Cross Ref]
• Tu BP, Kudlicki A, Rowicka M, McKnight SL. Logic of the yeast metabolic cycle: temporal compartmentalization of cellular processes. Science. 2005;310:1152–1158. doi: 10.1126/science.1120499. [
PubMed] [Cross Ref]
• Affymetrix . Statistical algorithms reference guide. Affymetrix Inc, Santa Clara CA; 2002.
• Baldi P, Long AD. A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inference of gene changes. Bioinformatics. 2001;17:509–519. doi: 10.1093/bioinformatics/17.6.509. [PubMed] [Cross Ref]
• Gene Expression Omnibus, accession number GDS912 http://www.ncbi.nlm.nih.gov/projects/geo/
• Wu Z, Irizarry RA, Gentleman R, Martinez-Murillo F, Spencer F. A model-based background adjustment for oligonucleotide expression arrays. Journal of the American Statistical Association. 2004;99
:909–917. doi: 10.1198/016214504000000683. [Cross Ref]
• Schwarz G. Estimating the dimension of a model. Ann Stat. 1978;6:461–464.
• Fraley C, Raftery AE. Tech Rep 415R. Department of Statistics, University of Washington; 2002. MCLUST: Software for Model-Based Clustering, Discriminant Analysis and Density Estimation.
• Dennis G, Jr, Sherman BT, Hosack DA, Yang J, Gao W, Lane HC, Lempicki RA. DAVID: database for annotation, visualization, and integrated discovery. Genome Biology. 2003;4:P3. doi: 10.1186/
gb-2003-4-5-p3. [PMC free article] [PubMed] [Cross Ref]
• Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B. 1977;39:1–38.
• Banfield JD, Raftery AE. Model-based Gaussian and non-Gaussian clustering. Biometrics. 1993;49:803–821. doi: 10.2307/2532201. [Cross Ref]
• Gill PE, Murray W, Saunders MA. SNOPT: an SQP algorithm for large-scale constrained optimization. SIAM Journal on Optimization. 2002;12:979–1006. doi: 10.1137/S1052623499350013. [Cross Ref]
• Spellucci PA. A SQP method for general nonlinear programs using only equality constrained subproblems. Mathematical Programming. 1998;82:413–448.
Articles from BMC Bioinformatics are provided here courtesy of BioMed Central
See more... | {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1847531/?tool=pubmed","timestamp":"2014-04-20T23:54:59Z","content_type":null,"content_length":"163959","record_id":"<urn:uuid:751bf98b-147f-4b9c-b654-ef4da3451732>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00003-ip-10-147-4-33.ec2.internal.warc.gz"} |
Below is a small sample set of documents:
Georgia Perimeter - ACCT - 2102
CHAPTER 8PricingASSIGNMENT CLASSIFICATION TABLEStudy Objectives *1. Compute a target cost when the market determines a product price. Compute a target selling price using cost-plus pricing. Use
time-and-material pricing to determine the cost of service
Georgia Perimeter - ACCT - 2102
CHAPTER 9Budgetary PlanningASSIGNMENT CLASSIFICATION TABLEStudy Objectives 1. 2. 3. Indicate the benefits of budgeting. State the essentials of effective budgeting. Identify the budgets that comprise
the master budget. Describe the sources for preparin
Georgia Perimeter - ACCT - 2102
CHAPTER 10Budgetary Control and Responsibility AccountingASSIGNMENT CLASSIFICATION TABLEBrief Exercises A Problems B ProblemsStudy Objectives 1. Describe the concept of budgetary control. 2. Evaluate
the usefulness of static budget reports. 3. Explain
Georgia Perimeter - ACCT - 2102
CHAPTER 11Standard Costs and Balanced ScorecardASSIGNMENT CLASSIFICATION TABLEBrief Study Objectives 1. Distinguish between a standard and a budget. 2. Identify the advantages of standard costs. 3.
Describe how companies set standards. 4. State the for
Georgia Perimeter - ACCT - 2102
CHAPTER 12Planning for Capital InvestmentsASSIGNMENT CLASSIFICATION TABLEStudy Objectives 1. Discuss capital budgeting evaluation, and explain inputs used in capital budgeting. Describe the cash
payback technique. Explain the net present value method.
Georgia Perimeter - ACCT - 2102
CHAPTER 13Statement of Cash FlowsASSIGNMENT CLASSIFICATION TABLEBrief Exercises A Problems B ProblemsStudy Objectives *1. Indicate the usefulness of the statement of cash flows. Distinguish among
operating, investing, and financing activities. Prepare
Georgia Perimeter - ACCT - 2102
CHAPTER 14Financial Statement Analysis: The Big PictureASSIGNMENT CLASSIFICATION TABLEStudy Objectives 1. 2. 3. 4. 5. Discuss the need for comparative analysis. Identify the tools of financial
statement analysis. Explain and apply horizontal analysis.
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 1-1B (a) Factory utilities. Depreciation on factory equipment. Indirect factory labor. Indirect materials. Factory managers salary. Property taxes on factory
building. Factory repairs. Manufacturing overhead. (b) Direc
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 2-1B (a) Factory Labor. 68,000 Factory Wages Payable. Employer Payroll Taxes Payable. Employer Fringe Benefits Payable. (b) Work in Process Inventory ($68,000 X
85%). 57,800 Manufacturing Overhead. 10,200 Factory Labor
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 3-1B (a) April 30 Work in ProcessCooking. Work in ProcessCanning. Raw Materials Inventory. Work in ProcessCooking. Work in ProcessCanning. Factory Labor. Work in
ProcessCooking. Work in ProcessCanning. Manufacturing Ov
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 4-1B (a) Estimated overhead = Predetermined overhead rate Direct labor costs $270,000 = 180% of direct labor cost $50,000 + $100,000 (b) Activity cost pools
Machining Machine setup Cost drivers Machine hours Set up hou
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 5-1 B (a)(b)The relevant range is 4,000 9,000 units of output since a straight-line relationship exists for both direct materials and rent within this range.
Variable cost per unit Within the relevant range (4,000 9,
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 6-1B (a) (1) Contribution margin per room = Contribution margin per room = Contribution margin ratio = $50 ($5 + $30) $15 $15 $50 = 30%Fixed costs = $10,000 +
$2,000 + $1,000 + $500 = $13,500 Break-even point in rooms
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 7-1B (a) Reject Order Revenues Materials ($0.50) Labor ($1.20) Variable overhead ($1.00) Fixed overhead Sales commissions Net income $ -0-0-0-0-0 -0$ -0Accept
Order $19,000 (2,000) (4,800) (4,000) (5,000) -0$3,200Net
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 8-1B (a) The target cost formula is: Target cost = Market price Desired profit. In this case, the market price is $15 and the desired profit is $3 (20% X $15).
Therefore the target cost is $12 ($15 $3). (b) Target cost
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 10-1B (a)SANTO COMPANY Selling Expense Report For March By Month Actual $31,000 $35,500 $53,000 Year-to-Date Budget Actual Difference $ 30,000 $ 31,000 $1,000 U $
67,000 $ 66,500 $ 500 F $112,000 $119,500 $7,500 UMon
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 11-1B (a) Direct materials: (2,000 X 3) X $5 = $30,000 Direct labor: (2,000 X 1/2) X $16 = $16,000 Overhead: $16,000 X 70% = $11,200 (b) Direct materials: 3 X $5 =
$15.00 Direct labor: 1/2 X $16 = 8.00 Overhead: $8 X 7
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 12-1B (a) The cash payback period is: $48,000 $8,000 = 6 years The net present value is: 8% Cash Discount Present Flows X Factor = Value Present value of net
annual cash flows Present value of salvage value Capital inv
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 13-1B 1. (a) Cash. Land. Gain on Disposal. 18,000 12,000 6,000(b) The cash receipt ($18,000) is reported in the investing section. The gain ($6,000) is deducted
from net income in the operating section. 2. (a) Cash. C
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO EXERCISESSET BEXERCISE 14-1B LONGBINE INC. Condensed Balance Sheets December 31 Increase or (Decrease) Amount Percentage ($25,000 (33,000 58,000 (25.0%) (10.0%) (13.5%)2009 Assets
Current assets Plant assets (net) Total assets Liabilities C
Georgia Perimeter - ACCT - 2102
SOLUTIONS TO CASES FOR MANAGEMENT DECISION MAKINGCASE 1 1. A predetermined manufacturing overhead rate means that all manufacturing overhead costs, as defined by generally accepted accounting
principles (GAAP), are allocated to each job based on a cost d
Georgia Perimeter - ACCT - 2102
CASE 21.ABC is beneficial when traditional overhead allocation results in inaccurate product costing. Wall Dcor should investigate the product costing system because in order to sell the unframed
prints the stores must mark them up only slightly above t
Georgia Perimeter - ACCT - 2102
CASE 31.Several different views are likely to surface. Below are representative responses: Now you have a much better understanding of Mr. Burns situation and realize that finding a good solution
rests on setting the transfer price so that: (1) Wall Dco
Georgia Perimeter - ACCT - 2102
CASE 41.Present value of future cash flows: Cost of equipment (zero residual value) Cost of ink and paper supplies (purchased immediately) Annual cash flow savings for Wall Dcor ($175,000 X 3.60478)*
Annual additional store cash flow from increased sale
Georgia Perimeter - ACCT - 2102
CASE 51.Yes. All organizations should set goals. A large-scale project such as a professional rodeo may not make a profit in the first year. Based on ticket sales, the Auburn Circular Club Pro Rodeo
Roundup appears to be a popular attraction. Therefore,
Georgia Perimeter - ACCT - 2102
CASE 6 SWEATS GALORE 1. Yes, it is important for Michael to stipulate certain criteria during planning for his new business. Michael is wise to set criteria other than simply making a profit. First,
Michael wants to do something he enjoys. Because he has
Georgia Perimeter - ACCT - 2102
CASE 7 ARMSTRONG HELMET COMPANY 1.Direct Materials Product Costs Direct Manufacturing Labor Overhead Period Costs $15,500 11,000Item Administrative salaries Advertising for helmets Depreciation on
factory building Depreciation on office equipment Insura
AIB College of Business - MATHEWS - SOC 2000
Solutions to homework # 2 I.15. The area of a triangle is equal to half the area of the parallelogram dened by two sides of the triangle and the angle between them. The latter area is the absolute
value of the cross product of the two vectors correspondin
AIB College of Business - MATHEWS - SOC 2000
Solutions to homework # 1 35. First note u1 + u2 = u3 , i.e., the vectors u1 , u2 , u3 are linearly dependent, i.e., they lie in the same plane. The vectors u1 and u2 , on the other hand, are not not
proportional, hence linearly independent. So, it is eno
AIB College of Business - MATHEWS - SOC 2000
Solutions to homework # 3 II.16. (a) f (z ) = z 2 sin z . Since sin z = ei(x+iy) ei(x+iy) ey (cos x + i sin x) ey (cos x i sin x) eiz eiz = = 2i 2i 2i cos x(ey ey ) (ey + ey ) sin x +i = 2 2and z 2 =
(x2 y 2 ) + 2ixy , we get f (x, y ) = u(x, y ) + iv (x
AIB College of Business - MATHEWS - SOC 2000
Solutions to homework # 4 II.27. (a) Res f (i) = lim(z i)f (z ) = lim(z i)z i z iz2 i2 z2 = lim = . (z i)(z + i) z i z + i 2i(b) Since e1/z 1 = e1 e1/z , we get e1/z 1 = e1 (1 + 1 1 + + ). 1!z 2!z
2The coecient c1 of 1/z in this Laurent series, i.e.,
AIB College of Business - MATHEWS - SOC 2000
Solutions to homework # 5 II.31. (a) Consider the rectangular contour CR consisting of the intervals IR :=[R, R], + SR :=[R, R + 2 i/b], JR :=[R + 2 i/b, R + 2 i/b], SR :=[R + 2 i/b, R]. First note
thatJReaz dz = 1 + ebzIRea(x+2i/b) dx = e2ia/b 1 + eb
AIB College of Business - MATHEWS - SOC 2000
Answers and comments on homework # 7 IV.2. (a) f (x) = 11 2 sin(2nx) . n n=1(b)2 n=1(1)nsin(nx) . n(c) The series for [0, 1] converges faster to the function f on [0, 1] than the series for [1, 1].
On the other hand, the second series approximates
AIB College of Business - MATHEWS - SOC 2000
Answers to homework # 10 VI.2. (a) 2 cos(nx/L), L n=1, n odd (b) 1 1 + cos(nx/L). 2L n=0 LVI.6. (a) VI.7. y (x, t) = VI.11. ex (x) = (x) + (x). 2I 1 (sin(n/3) + sin(2n/3) sin(nx/L) sin(nvt/L). v n=1
n 1 3 e + e1 , 4 (b) 1 9 e + e4 . 5
Aarhus Universitet - MATH - 0984321
Fsica Bsica: Mecnica - H. Moyss Nussenzveig, 4.ed, 2003Problemas do Captulo 2por Abraham Moyss Cohen Departamento de Fsica - UFAM Manaus, AM, Brasil - 2004Problema 1Na clebre corrida entre a lebre e
a tartaruga, a velocidade da lebre de 30 km/h e a da
Aarhus Universitet - MATH - 0984321
Problemas Resolvidos do Captulo 3MOVIMENTO BIDIMENSIONALAteno Leia o assunto no livro-texto e nas notas de aula e reproduza os problemas resolvidos aqui. Outros so deixados para v. treinarPROBLEMA
1Um projtil disparado com velocidade de 600 m/s, num n
Aarhus Universitet - MATH - 0984321
Problemas Resolvidos do Captulo 5APLICAES DAS LEIS DE NEWTONAteno Leia o assunto no livro-texto e nas notas de aula e reproduza os problemas resolvidos aqui. Outros so deixados para v. treinar
PROBLEMA 1Um astronauta, vestindo seu traje espacial, cons
Aarhus Universitet - MATH - 0984321
Problemas Resolvidos do Captulo 6TRABALHO E ENERGIA MECNICAAteno Leia o assunto no livro-texto e nas notas de aula e reproduza os problemas resolvidos aqui. Outros so deixados para v. treinar
PROBLEMA 1Resolva o problema 8 do Captulo 4 a partir da con
Aarhus Universitet - MATH - 0984321
Problemas Resolvidos do Captulo 7CONSERVAO DA ENERGIA NO MOVIMENTO GERALAteno Leia o assunto no livro-texto e nas notas de aula e reproduza os problemas resolvidos aqui. Outros so deixados para v.
treinar PROBLEMA 1No Exemplo 1 da Se. 5.3, considere a
Aarhus Universitet - MATH - 0984321
Problemas Resolvidos do Captulo 8CONSERVAO DO MOMENTOAteno Leia o assunto no livro-texto e nas notas de aula e reproduza os problemas resolvidos aqui. Outros so deixados para v. treinar PROBLEMA
1Dois veculos espaciais em rbita esto acoplados. A massa
Aarhus Universitet - MATH - 0984321
Problemas Resolvidos do Captulo 9COLISESAteno Leia o assunto no livro-texto e nas notas de aula e reproduza os problemas resolvidos aqui. Outros so deixados para v. treinar PROBLEMA 1Calcule a
magnitude (em kgf) da fora impulsiva que atua em cada um d
Vanderbilt - CS - CS 103
CS 103 Test 1 Fall 2009Enter your answers into the M-files provided. Each problem has its own M-file. Problem 1. (15 points) A. Which of the following is not an algorithm. 1. Start with 11, add each
successive prime number until the sum is greater than 1
Vanderbilt - CS - CS 103
CS 103 Test 2 Fall 2009Enter your answers into the M-files provided. Suppress all printing to the Command Window. Problem 1. (15 points) A. Polymorphism is 1. Dependence of the behavior of a function
on the type of its input arguments 2. Dependence of th
Vanderbilt - CS - CS 103
CS 103 Test 3 Fall 2009Problem 1. (15 points) Enter answers to this problem into the function p1 (in the provided M-file p1.m). A. In computer programming, the word robust describes 1. a program that
does something reasonable with all inputs 2. a compute
Vanderbilt - CS - CS 103
CS 103 Test 4 Fall 2009Problem 1. (15 points) Enter answers to this problem into the function p1 (in the provided M-file p1.m). A. The break statement is used to 1. terminate a loop 2. terminate a
for loop but not a while loop 3. terminate a while loop b
Vanderbilt - CS - CS 103
CS 103Test 5Fall 2009Problem 1. (15 points) Enter answers to this problem into the function t5p1 (in the provided M-file t5p1.m). A. In computer science, the term stack denotes a 1.
first-in-first-out data structure 2. last-in-first-out data structure
Vanderbilt - CS - CS 103
CS 103 Test 41 (the rest of Test 4) Fall 2009Problem 1. (15 points) Enter answers to this problem into the function t4p1 (in the provided M-file t4p1.m). A. The break statement is used to 1.
terminate a loop 2. terminate a for loop but not a while loop 3
Vanderbilt - CS - CS 103
CS 103Test 6Fall 2009Problem 1. (15 points) Enter answers to this problem into the function t6p1 (in the provided M-file t6p1.m). A. An activation record is 1. the set of values assigned to a
variable throughout a Matlab session 2. the set of locations
Texas San Antonio - CS - 2213
CS 2213 Final 1. [10] Consider the following function:void funct(int u, int *v, int w) cfw_ printf(0 0 0 0\n, u, v, w, *v); *v = u; v = &u; *v = w; printf(0 0 0 0\n, *v, u); What is printed from
the following code (including what funct() prints) assumin
Salem State - GLS - 1133
Rock Identifiable FeaturesBy Francesco De Sisto1. Sedimentary Rocks a. Have strata layers b. Have certain grain size c. Are made up of other rock fragments d. Some sedimentary rocks react to acid
such as coquina, micrite limestone, chalk limestone, etc
UT Dallas - SE - 3306
Exam 03 04/19/2007SE3306 Mathematical Foundations of SEName: WebCT ID:Instructions: Read through the whole exam rst and plan your time. The exam is worth 25 points, with each question valued as
indicated. Closed book: Reference to notes, the text or ot
UT Dallas - SE - 3306
Exam 02 03/22/2007SE3306 Mathematical Foundations of SEName: WebCT ID:Instructions: Read through the whole exam rst and plan your time. The exam is worth 25 points, with each question valued as
indicated. Closed book: Reference to notes, the text or ot
UT Dallas - SE - 3306
Exam 01 02/14/2006SE3306 Mathematical Foundations of SEName: WebCT ID:Instructions: Read through the whole exam rst and plan your time. The exam is worth 25 points, with each question valued as
indicated. Closed book: Reference to notes, the text or ot
UT Dallas - SE - 3306
SE 3306 - Mathematical Foundations of SEHomework # 6Due on 04/17/2007 10am(1)Can a simple graph exist with 15 vertices of degree ve? Explain! (2) The complementary graph G of a simple graph G has the
same vertices as G. Two ver tices are adjacent in G
UT Dallas - SE - 3306
SE 3306 - Mathematical Foundations of SEHomework # 5Due on 04/22/2009 10:30am(1) Consider a small north-south street that terminates as it meets a large east-west avenue, as shown in the Figure
below. Due to the heavy trac along the avenue, the westbou
UT Dallas - SE - 3306
SE 3306 - Mathematical Foundations of SEHomework # 4Due on 03/25/2009 10:30am(1) Let = cfw_a, b, c. Give a FSA that will describe 1. L1 = cfw_x |(x begins with c) (x contains two consecutive bs. 2.
L2 = cfw_x |x|a is odd |x|b is even. (2) Let = cfw_0,
UT Dallas - SE - 3306
UT Dallas - SE - 3306
SE 3306 Problem Set on Language Theory Required Reading: class notes, Rosen 5th edition 11.1-11.41. Class notes on regular expressions, automata, machines, problems at back 2. Section 11.1 Examples
2,3,4 , problem 18 pg. 750 3. Section 11.2 Examples 2,3,
UT Dallas - SE - 3306
Introduction to Axiomatic Set TheoryPrepared by: Prof. Kendra Cooper Contents Axiomatic Set Theory Frege Russell Zermelo-Fraenkel ZF Set Theory Axioms - Summary Continuum Hypothesis Problems Nave set
theory is elegant, simple, intuitive, and generally lo
UT Dallas - SE - 3306
UT Dallas - SE - 3306
Propositional Calculus Solution Outline Problem 1: (a) E R S T = = = = Relation Relation Relation Relation is is is is an Equivalence Relation Reflexive Symmetric TransitiveE <=> (R and S and
T) (b) C = person has courage S = person has skill M = person | {"url":"http://www.coursehero.com/file/5680963/impdfch07/","timestamp":"2014-04-17T19:12:29Z","content_type":null,"content_length":"49858","record_id":"<urn:uuid:a4641ebe-251b-41b9-a71a-79124fc470d8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Efficient Distributed Algorithm for Constructing Small Dominating Sets
The dominating set problem asks for a small subset $D$ of nodes in a graph such that every node is either in $D$ or adjacent to a node in $D$. This problem arises in a number of distributed network
applications, where it is important to locate a small number of centers in the network such that every node is nearby at least one center. Finding a dominating set of minimum size is NP-complete, and
the best approximation is provided by the same simple greedy approach that gives the well-known logarithmic approximation result for the closely related set cover problem.
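The greedy approach referred to above is worth seeing concretely. Below is a minimal sequential sketch of the centralized greedy heuristic (not the paper's distributed algorithm); the function name and adjacency-set representation are my own:

```python
def greedy_dominating_set(adj):
    """Greedy heuristic for dominating set: while some vertex is
    undominated, add the vertex that covers (itself plus its neighbors)
    the most undominated vertices."""
    undominated = set(adj)
    dom = []
    while undominated:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        dom.append(best)
        undominated -= {best} | adj[best]
    return dom

# Example: a star graph -- the center alone dominates everything.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(greedy_dominating_set(star))  # prints [0]
```

This sequential greedy is what gives the logarithmic approximation guarantee mentioned above; the paper's contribution is achieving comparable quality in a distributed, polylogarithmic-time setting.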
We describe and analyze new randomized distributed algorithms for the dominating set problem that run in polylogarithmic time, independent of the diameter of the network, and that return a dominating
set of size within a logarithmic factor from optimal, with high probability. In particular, our best algorithm runs in $O(\log n \log \Delta)$ rounds with high probability, where $n$ is the number of
nodes, $\Delta$ is the maximum degree of any node, and each round involves a constant number of message exchanges among any two neighbors; the size of the dominating set obtained is within $O(\log \
Delta)$ of the optimal in expectation and within $O(\log n)$ of the optimal with high probability. We also describe generalizations to the weighted case and the case of multiple covering | {"url":"http://www.ccs.neu.edu/home/rraj/Pubs/domSet.html","timestamp":"2014-04-17T22:00:22Z","content_type":null,"content_length":"1949","record_id":"<urn:uuid:89fccc6c-c614-4ead-b5f9-2c97bc506fb9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
Retract of a topological space
From Encyclopedia of Mathematics
A subspace A of a topological space X for which there exists a retraction of X onto A, i.e. a continuous mapping r : X → A whose restriction to A is the identity (r(a) = a for all a in A).
Thus, all convex subspaces of locally convex linear spaces are absolute retracts; such is the case, in particular, with a point, an interval, a ball, a straight line, etc. This characterization means
that absolute retracts have the following properties. Every retract of an absolute retract is again an absolute retract. Each absolute retract is contractible in itself and is locally contractible.
All homology, cohomology, homotopy, and cohomotopy groups of an absolute retract are trivial. A metric space
If A is a deformation retract of a space X, then A and X have the same homotopy type. Conversely, two homotopy-equivalent spaces can always be imbedded in a third space in such a way that they are both deformation retracts of this space.
[1] K. Borsuk, "Theory of retracts" , PWN (1967)
"Absolute retract" and "absolute neighbourhood retract" are often abbreviated to AR and ANR.
Retracts and absolute retracts have been studied in other classes of spaces than the metrizable ones, most successfully in compact Hausdorff spaces [a1] (cf. Continuous lattice).
[a1] E.S. Shchepin, "A finite-dimensional compact absolute neighborhood retract is metrizable" Soviet Math. Doklady , 18 (1977) pp. 402–406 Dokl. Akad. Nauk SSSR , 233 : 3 (1977) pp. 304–307
[a2] J. van Mill, "Infinite-dimensional topology, prerequisites and introduction" , North-Holland (1988)
How to Cite This Entry:
Retract of a topological space. A.V. Arkhangel'skii (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Retract_of_a_topological_space&oldid=17827
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | {"url":"http://www.encyclopediaofmath.org/index.php/Retract_of_a_topological_space","timestamp":"2014-04-20T03:11:52Z","content_type":null,"content_length":"24753","record_id":"<urn:uuid:2041d52c-9f21-4a6f-9f38-92c2f7943dbe>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00387-ip-10-147-4-33.ec2.internal.warc.gz"} |
Updates added 22 January 2004
John Boozer and Guenter Stertenbrink both found gorgeous embeddings of the F64 graph. I finally tracked down the persons responsible for notating the second graph as [23,-11,-29,25,-25,29,11,-23]^8.
In addition to HSM Coxeter and Roberto Frucht, Joshua Lederberg -- the winner of the 1958 Nobel Prize in Physiology or Medicine -- did a large study (pdf) of graphs notated in a similar way. Doctor Lederberg's complete works are available online. Another graph that deserves a nice picture is the Hoffman-Singleton graph.
Jens Kruse Andersen: Hans Rosenthal and I have found a prime gap of 1001548 between two probable primes (PRPs) with 43429 digits. The natural logarithm of the primes is 99997, so the gap is 10.02 times the "typical" gap predicted by the prime number theorem. I think that regardless of relative size, this is the first known prime "megagap" with identified probable primes as gap ends.
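The arithmetic in this claim is easy to sanity-check: by the prime number theorem, the typical gap near a number N is about ln N, and a 43429-digit number does have natural logarithm near 99997. A quick check (pure arithmetic; nothing here depends on the actual primes):

```python
import math

gap = 1001548
log_of_primes = 99997   # natural log reported for the 43429-digit PRPs
digits = 43429

# A 43429-digit number has natural log between ln(10^43428) and ln(10^43429).
assert math.log(10) * (digits - 1) < log_of_primes < math.log(10) * digits

print(round(gap / log_of_primes, 2))  # prints 10.02
```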
Xah Lee has put together a page on Algorithmic Mathematical Art.
Dick Hess modified a puzzle first posed by E.K. Chapin in 1927. You have 2 mugs, a water supply, and a packet of instant coffee which when dissolved in one cup produces coffee of strength 100%. Your
task is to fix coffee as requested. You may fill or transfer liquid and may, at any time, empty the entire coffee packet into a mug. Send Answers.
1. With mugs of capacity 5 and 3 cups, fix 5 cups of coffee equivalent to 2 packets of instant coffee in 15 cups.
2. With mugs of capacity 5 and 3 cups, fix 1 cup of coffee at a) 20% strength b) 10% strength c) 5% strength d) 1% strength
3. With mugs of capacity 5 and 4 cups, fix 4 cups of coffee at 16% strength
4. With mugs of capacity 7 and 5 cups, fix 4 cups of coffee at 16% strength
5. With mugs of capacity 10 and 3 cups, fix 1 cup of coffee at 8% strength
6. With mugs of capacity 10 and 7 cups, fix 1 cup of coffee at 8% strength
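Ignoring coffee strength -- which is the real difficulty of these puzzles -- the volume-measuring moves are the classic two-jug problem, and the reachable fill states can be enumerated with a breadth-first search. A minimal sketch, with capacities as parameters (the function name is mine):

```python
from collections import deque

def reachable(a, b):
    """All (x, y) fill states reachable with jugs of capacity a and b,
    using fill, empty, and pour-until-full-or-empty moves."""
    start = (0, 0)
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        t = min(x, b - y)   # amount pourable from jug a into jug b
        s = min(y, a - x)   # amount pourable from jug b into jug a
        for nxt in [(a, y), (x, b), (0, y), (x, 0),
                    (x - t, y + t), (x + s, y - s)]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

states = reachable(5, 3)
print((4, 0) in states, (5, 1) in states)  # prints True True
```

With capacities 5 and 3, for instance, the search confirms that 4 cups and 1 cup are both measurable. Extending the state to track coffee concentration (exact fractions, plus a "packet used" flag) turns this into a solver for the strength puzzles above.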
Material added 13 January 2004
The Wolfram Functions Site has been vastly expanded. More than a hundred volumes worth of function information. A highlight is the 10,000+ visualizations of every function, in many different ways.
Code for all visualizations is provided. My latest MAA column talks about it more.
Harvey Heinz has updated his page about magic cubes. I was completely unaware of Frankenstein's Cube.
My last MAA column of December 2003 talks about Cubic Symmetric Graphs. I'm quite proud of my version of the Coxeter Graph there -- I made it the logo for the Mathpuzzle Yahoo Group.
Erich Friedman has updated his page of Sequential Domino Packings. His solution for 18 dominoes might not be minimal -- can you fit these into a 65x65 square? Send Answer (if one exists) Smallest
packings for both 23 and 24 sequential dominoes might also be very tricky to find. Robert Reid, Erich Friedman, Minami Kawasaki, and myself have all been busy finding solutions -- see Sequence
A005842. Solutions for 16, 21, 34, 46, 54, 56, 60, 62, 63, 65, and 76 can be found in this zip. Minami Kawasaki has put together a page about sequential squares.
Francis Heaney: I review the new Jell-O Checkers snack packet on my blog today. It has many flaws, and I was hoping someone would step forward to help analyze how they affect the game. [Ed - I expect
a Zillions expert will crack this in hours. Send Answer.]
Cihan Altay: The PQRST 08 Puzzle Competition starts on Saturday, January 17th, at 20:00 (GMT+02). You'll have one week to solve and rate 10 puzzles.
Yasutoshi Kohmoto: 2004^6 = 3959307^3 + 1393389^3 + 1494^3 = 3848682^3 + 1980119^3 + 27889^3
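Both representations check out; exact integer arithmetic verifies them in a few lines (variable names mine):

```python
# Verify Kohmoto's two representations of 2004^6 as a sum of three cubes.
target = 2004 ** 6
first = 3959307 ** 3 + 1393389 ** 3 + 1494 ** 3
second = 3848682 ** 3 + 1980119 ** 3 + 27889 ** 3
print(first == target, second == target)  # prints True True
```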
Karl Scherer has greatly expanded his WireWorld results, with help from Nyles Heise. Pentomino Odd Pairs (by Livio Zucca) is well worth a look -- can you find a shape made with an odd number of
either I or V pentominoes?
A noted game and puzzle creator has recently been harassed for having an almanac. He'll be attending the MIT Mystery Hunt this week. Warm-up puzzle.
Material added 1 January 2004
Erich Friedman: Start with 2003. Chop it up any way you like: 20,0,3. Square the pieces and add them together to get a new number: 400+0+9 = 409. Repeat until you get 2004. It's easy to find a path that takes 7 steps: 20:0:3 --> 40:9 --> 1:6:81 --> 6:59:8 --> 35:8:1 --> 12:90 --> 8:2:44 --> 2004. This might make a good New Year's puzzle: find a 6-step path from 2003 to 2004. Answer and Solvers.
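The chop-square-sum step is easy to mechanize, which also confirms the 7-step path quoted above. A sketch (names mine; the no-cut case, squaring the whole number, is included for completeness and never needed here):

```python
def chop_square_sum(n):
    """All values reachable from n in one step: cut the decimal string
    of n at any subset of gaps between digits, square each piece, add."""
    s = str(n)
    reachable = set()
    for mask in range(1 << (len(s) - 1)):   # each bit = cut after digit i
        total, start = 0, 0
        for i in range(len(s) - 1):
            if mask & (1 << i):
                total += int(s[start:i + 1]) ** 2
                start = i + 1
        total += int(s[start:]) ** 2
        reachable.add(total)
    return reachable

# The 7-step path quoted above, written as the values it passes through:
path = [2003, 409, 1681, 6598, 3581, 1290, 8244, 2004]
print(all(b in chop_square_sum(a) for a, b in zip(path, path[1:])))  # prints True
```

A breadth-first search over this successor function (with a cap on how large intermediate values may grow) is one way to hunt for the 6-step path.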
Emrehan Halici: "I’m sending 3 problems which I’d prepared for the new year. I hope you like them. 1. Move 2 matches to get 2004. 2. Move 3 matches to get 2004. 3. Move 4 matches to get 2004." He
also sent a table on how to make the numbers 1-50 using the digits in 2004. Answer.
Livio Zucca has created a page with various pentomino and tetromino challenges. For example, find a shape that can be split into either I or T tetrominoes, but none of the others.
George Jelliss has release Issue 30 of the Games and Puzzles Journal. The bound volumes he offers are a nice prize, and many of the puzzles there are quite nice. I especially like problems 64 and 65.
Packing squares in squares. Robert Reid sent me a nice Christmas present -- squares 1-27 packed into the smallest possible square. I played around with the problem of 1-25 in the smallest square. One
of my best efforts is shown here. I don't have a proper 8 packed in here. However, I could fit in 3 4x8 rectangles and everything else into the 75x75 square. Paul Cleary sent me a solution for 1-25
packed into a 76x76 square, but he believes (as I do) that 75x75 must have a solution. Erich Friedman is presenting a slightly different take on Squares in Squares this month at Math Magic.
A puzzle by Marek Penszko of Poland. It's a division problem. Answer.
Dave Millar: I was inspired by the room puzzles by Erich Friedman, and made one of my own. Three rooms, each of 8 squares. From each number, you can see that number of spaces in any direction: up, down, left, or right. Answer.
Material added 23 December 2003
Cletus Emmanuel has found that (2^148330+1)^4 - 2 is a probable prime. That makes it the largest known probable prime on the Henri Lifchitz Probable Prime List. Seventeenorbust.com reports that 5359•2^5054502+1 is now the 4th largest proven prime.
Dave Millar has sent me several nice puzzles. In the Pento puzzle, rearrange the given double size pentominoes so that the squares in the grid represent the number of smaller pentominoes in each one.
(Answer) I also like this logo he did for me. In the Bird puzzle (solution), the bird must be divided into 5 identical shapes, not necessarily of the same size.
Colonel Sicherman, on the Logical Hats problem: "After one logician identifies his number, can the other two always identify theirs immediately? I have not found a counterexample." Answer.
NetLogo 2.0 is available for download. This is the programming language of Turning Turtles fame. There are many excellent and instructive programs built in.
I'm trying to learn a little TeX. I didn't make much headway until I tried out a combination of the WinEdt shell and MiKTeX.
My latest two Math Games columns are Domino Graphs and Superflu Modeling. Here's a rainbow PDF version of the Petersen Graph I did as an experiment, after getting some kind words for the first
Erich Friedman: Find positive integers A, B, C, D, and E all less than 100 so that A^2 + B^2 + C^2 = D^2 + E^2 and A^3 + B^3 + C^3 = D^3 + E^3. Answer and Solvers.
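A brute-force search for this puzzle fits in a few lines: index all pairs (D, E) by their (sum of squares, sum of cubes) signature, then scan triples for a matching signature. A sketch (no spoilers printed; names mine):

```python
def equal_power_sums(limit=100):
    """Find (A, B, C, D, E), all positive and below `limit`, with
    A^2+B^2+C^2 = D^2+E^2 and A^3+B^3+C^3 = D^3+E^3."""
    pairs = {}
    for d in range(1, limit):
        for e in range(d, limit):
            pairs.setdefault((d * d + e * e, d ** 3 + e ** 3), []).append((d, e))
    solutions = []
    for a in range(1, limit):
        for b in range(a, limit):
            for c in range(b, limit):
                key = (a * a + b * b + c * c, a ** 3 + b ** 3 + c ** 3)
                for d, e in pairs.get(key, []):
                    solutions.append((a, b, c, d, e))
    return solutions
```

Every tuple returned satisfies both equations by construction (about 160,000 triples are scanned, which takes well under a second); whether the list is non-empty below 100 is exactly the puzzle.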
Johan de Ruiter: "Last night I was wondering whether any integer can be written as a linear combination of a finite number of noninteger squareroots of integers where all coefficients are integers.
Maybe it's trivial, but I wasn't able to find a solution yet." I wasn't able to find a trivial proof either, beyond proofs for 2 or 3 square roots. Is there a clever impossibility proof? Answer.
The square of 40081787109376 starts 16065496578813... how does it end?
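One way to explore the question is to check whether the number is automorphic, i.e. whether its square ends in the number itself -- a property its trailing digits ...1787109376 suggest:

```python
n = 40081787109376
square = n * n
# The page's claim: the square starts 16065496578813...
assert str(square).startswith("16065496578813")
# ...and it ends with n itself: 40081787109376 is automorphic.
assert str(square).endswith(str(n))
print(square)
```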
Material added 13 December 2003
My latest Math Games column deals with Integer Sequences.
Des MacHale of University College Cork found a way to fit squares of size 1-24 in a 43×115 rectangle. All rectangles smaller than 54×91 have been shown to be unsolvable by Patrick Hamlyn. Thus, the
necessary excess required for a rectangular packing of squares 1-24 ranges from 14 to 45 (below). I thought this might make a nice puzzle -- treating the blue areas below as holes, divide the 43×115
rectangle below into 24 squares. Slide your mouse over the image to see the solution. For 1-22, rectangles smaller than 53×72 are unsolvable. (Patrick Hamlyn: The squares 1-22 into a rectangle: I
just finished searching 53*72, no solutions after 589.5 hours.) For 1-23, 61×71 might be possible, or maybe 62×70, but probably something larger is needed. As a warm-up puzzle, here's a beauty:
Divide an 11×11 square into 11 squares of size 2-5. See my Square Packing column for more.
Des MacHale's packing of squares 1-24 in a 43×115 rectangle.
Nyles Heise (nylesheise at yahoo.com) fit a 32-bit WireWorld multiplier into a 22×93 rectangle. See his notes, or his MCL representation. (Two updates: Input 1's, and Output 1's.) You can download MCell from Mirek's site. In hexadecimal, the pattern below calculates EF4E75E7 × EFA03229 = DFFFFFFFFFF7FFFF in 8116 cycles. See the Output 1's file for an expanded MCell version.
#SG ..............................................................................................
#SG .O.......................OOO.*OOOO....O..........O..........................................O.
#SG ..OOOOOOOOOOOOOOOOOOOOO...O.o...O.O..OOO..O...OOO..OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO..
#SG .......................O.O.....O...O..O..*.O.O....O...........................................
#SG ..OOOOOOOOOOOOOOOOOOOOO..O....O..OO..OOOO*o...OOO..OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOo*OOo*OOo..
#SG .O.......................O...O..O...O.O....*.....O..........................................*.
#SG ..OOOOOOOOOOOOOOOOO...O..O..O...O...O...OO..OO.*..*oOO*oOO*oOOOOOOOOOO*oOO*oOO*oOO*oOOOOOO..O.
#SG ...................O.O.O.OOO....O....O.O..O...O*o.........................................*.O.
#SG ..OOOOOOOOOOOOOOOOO..O.O..O...O..O....OOO.O.O..*.O*o.O*oOO*oOOOOOO*oOO*oOO*oOO*oOO*oOOOO..o.O.
#SG .O...................O.O..OOOOOOO....O.*...OOOO.....O...................................O.O.O.
#SG ..OOOOOOOOOOOOOOOOO..O.O.O....O...*...o.....O......o..*OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOo*O..O.O.
#SG ...................O.O.O.O.O.OO.Oo.O.OOO..OO.......*.o....................................O.O.
#SG ..OOOOOOOOOOOOOOOOO..O..OOOOO..OO.O...O..o.O.oO..o.O..OO*oOOOOOOOOOO*oOOOOOO....Oo*OOo*OOO..o.
#SG .O...................O...*.O.O..OOO....O*...*..*ooO.........................O..O............*.
#SG ..OOOOOOOOOOOOOOOO...O...o....O..O.............o.o.OOOOOOOOo*OOOOOOo*OOOOOOO....*oOOOOOOOO..O.
#SG ..................O..O..O.*....O.O.....*OOo*OOO...........................................O.O.
#SG ..OOOOOOOOOOOOOOOO..OO...O..OOO..O....o.........o*OOo*OOOo*OOo*OOo*OOo*OOo*O...O...o*OOo*O..o.
#SG .O.................O.OO....O.....O...O....OOo*OO............................O.O.*.O.........*.
#SG ..OOOOOOOOOOOOOOO..O...O..OOO.O.OOO.O..O**.......O*oOO*oOO*oOO*oOO*oOO*oOO..oooo...O*oOOOO..O.
#SG .................O.O..OOO..O.OOO.O..*.O..o......O.........................*..O..*.........O.O.
#SG ..OOOOOOOOOOOOOOO..O.O.O.OO.O.O..o.O*OO.O.O......o*OOo*OOo*OOo*OOo*OOo*OO..oOOO.O..o*OOOOO..o.
#SG .O.................O.O..........*.O.*.OO...O.............................o...O..O.O.........*.
#SG ..OOOOOOOOOOOOOOOOO...OOOOOOOOOO.O......*oO.*oOO*oOO*oOO*oOO*oOO*oOO*oOO*.....*o...OOOOO*oOO..
#SG ..............................................................................................
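The claimed product checks out; here is a quick verification using Python's arbitrary-precision integers (my addition, not part of the original page):

```python
# Quick check of the multiplication the circuit is said to perform
# (my verification, not part of the original page).
a = 0xEF4E75E7
b = 0xEFA03229
print(hex(a * b))  # 0xdffffffffff7ffff
assert a * b == 0xDFFFFFFFFFF7FFFF
```

Note that both operands fit in 32 bits, so the 64-bit result is exactly what a 32-bit multiplier should produce.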
CHESS MAZE by Mark J. P. Wolf (mark.wolf at cuw.edu): "I’ve always admired the rich possibilities in the simplicity of the chessboard, and tried to capture this in the Chess maze. How the maze works:
Starting with the White Queen at a8, capture your way to the White King at h1. Pieces capture as they do in chess (except pawns, which can capture one square diagonally in any direction), and pieces
can only capture pieces of the opposite color. However, once a capture is made your piece becomes the type of piece that was just captured (and moves accordingly on the next move) and all moves must
end in a capture. Pieces that are captured are removed from the board, so the number of pieces on the board gradually decreases. To keep track of which pieces are removed, I recommend crossing
out the captured pieces, so as to indicate which squares are empty and can be passed over later in the maze. (For an easy warm-up, try capturing the black queen in five moves, or the black pawn at a1
in nine moves). A unique feature of the maze is that as pieces are captured and removed, new pathways open up that were previously unavailable, making the maze fairly difficult to work backwards."
Answer and Solvers. Picture of Solution. Doug Orleans created an applet for this puzzle.
Brian Silverman did a Google search on 5-digit numbers, and discovered that 17839 is the most unpopular number on the internet. I'm trying to help the number, since 17839 × 19813 × 237877 × 11893969
= 10^21 - 9. It's also a factor of 35! + 1. Does this number have anything else going for it? Send 17839 factoids. Livio Zucca is searching for pento-tetra-tri solutions. Jorge Mireles has expanded
his page on Poly^2ominoes. Peter Esser has found a way to pack the Sliced Heptiamonds into a rectangle.
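The 10^21 - 9 factorization is easy to confirm with exact integer arithmetic (my check, not part of the original page):

```python
# Exact integer check of Brian Silverman's 17839 identity
# (my verification, not from the page).
product = 17839 * 19813 * 237877 * 11893969
print(product == 10**21 - 9)  # True
```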
Jim Propp recently gave a talk to the theory group at Microsoft Research entitled "Random walk and random aggregation, derandomized", which Microsoft has made available to the outside world. Watching
the video/demo is probably the quickest and most pleasant way to find out the current state of knowledge about these models.
Slouching Towards Bedlam won the Interactive Fiction Competition. The solutions I received for Borromean Rings are great.
Material added 4 December 2003
RSA-576 has been factored by the programming team of J. Franke, T. Kleinjung, P. Montgomery, H. te Riele, F. Bahr, D. Leclair, Paul Leyland and R. Wackerbarth. Institutions involved include Bonn
University, the Max Planck Institute, the Experimental Mathematics Institute in Essen, CWI, NFSNET, and Microsoft Research. For this development and application of the GNFS algorithm, they will split
$10,000.00. NFSNET (Number Field Sieve Net) just happens to be recruiting, if you'd like to join the effort to factor 2^811 - 1. They recently factored 2^757-1.
060606504743045317388011303396716199692321205734031879550656996221305168759307650257059 =
398075086424064937397125500550386491199064362342526708406385189575946388957261768583317 ×
Material added 2 December 2003
A few years ago, Michael Shafer was visiting here. Michael: "Yes, yes, it's all true! I came across mathpuzzle.com some time in 2000 and bookmarked it to check out the puzzles you came out with every
week as well as the interesting links. www.mersenne.org was one of them and you can see what happened. Thank you for leading me to Al Zimmerman's contests, the WPC qualifiers, and Theodore Gray's
periodic table and fun with sodium as well. The occasional challenges are also fun to spend a few minutes (or sometimes more) pondering. Keep up the great work!" If you check out the mersenne.org
link, you'll see that Michael Shafer helped to discover the world's largest prime, 2^20996011-1. Many congratulations, Michael! You can see more at mathworld.wolfram.com.
More record setting news from Lance Gay: " I just saw your new Square Packing page at maa.org. I have improved solutions to 198, 205, 206, 253, 258, and 259." My next goal is to correct my graphic of
primitive quilts.
198-21 {{106,92},{12,13,19,48},{2,9,1},{14},{92,16},{19},{9},{3,11},{28},{20,10},{58},{48}}
205-21 {{112,93},{22,23,48},{93,16,3},{13,12},{11,12},{11,11,1},{29},{13},{19,3},{64},{48}}
206-21 {{113,93},{23,25,45},{93,17,3},{19,7},{5,20},{12},{12,5},{7,26,3},{68},{19},{45}}
253-22 {{141,112},{27,28,57},{1,9,16,1},{1},{29},{112,23,7},{16},{16},{32,7},{2,84},{25},{57}}
258-22 {{142,116},{25,33,58},{1,12,12},{116,27},{4,4,25},{12,20},{4,8},{31},{27,1},{84},{58}}
259-22 {{142,117},{25,34,58},{117,31,19},{10,24},{15,14},{28,3},{12,84},{9,9},{16,2},{14},{58}}
Material added 1 December 2003
Two more new columns at my MAA column, Math Games. First, a treatise on Multistate Mazes. Next, a synopsis of the current knowledge of Square Packings, in particular the case of Mrs. Perkins's Quilt.
For the latter, Richard Guy and I tried to create a list of Primitive Optimal Quilts, and already a number of mistakes are apparent. Can you find the three quilts that aren't primitive, and the
primitive quilt that can be derived from one of them? Yesterday, Richard K Guy learned, demonstrated, then stressed to me how important the primitive quilts are for solving the general problem. A
correct list of primitive quilts up to size 100 is needed. Write me if you discover a patch in the solution for Mrs. Perkins's Quilt. I can well imagine a high-schooler picking one of the primitive
squares, tracing through the build-up process, and becoming permanently listed in the problem's history. A great resource for this is squaring.net.
Slightly too late for the multistate column is Perl Code for Logic Mazes. As a possible gift, Robert Abbott has about 20 copies of each of his wonderful maze books left -- see his site for
information. Even rarer, William Waite has ten copies of his Camera Conundrum puzzle. It's an incredibly clever secret-opening box. It won at the last International Puzzle Design competition.
Speaking of that ....
The 4th IPP Puzzle Design Competition has started. Established three years ago to promote and recognize innovative new designs of mechanical puzzles, the annual IPP Puzzle Design Competition will be
held in conjunction with IPP24 in Tokyo. The competition is open to designs made public between July, 2002 and July, 2004. Entry Deadline: June 30, 2004. Judging at IPP: July 30 - August 1, 2004.
Awards at IPP: August 1, 2004. Complete rules and information are at the IPP Puzzle Design Competition web site, part of John Rausch's PuzzleWorld. The third competition completed in August with
judging and awards at IPP23 in Chicago, USA. It was a great success, with 52 designs participating. Lee Krasnow's Clutch Box won the Puzzlers' Award; and Mineyuki Uyematsu's Cube in Cage 333 won the
judging committee's Grand Prize.
Another gift possibility -- I have a domino cards game out, Auf & Ab. The link will take you to Funagain Games. It's basically a nicely improved set of double-9 domino cards, perfect for playing my
game Ups and Downs. If you want a book, I'm still fascinated by Mathematical Constants by Steven Finch. Easier reading is Dudeney's Amusements in Mathematics, which was my first puzzle book.
Routewords is an interesting combination of graph theory and wordplay. Ross Eckler tackled this problem back in 1980 -- find a non-planar word. Also at Word Ways, some of the challenges are worth a look.
Inspired by Bill Cutler's solution, Fan Chung and Ron Graham have done a detailed combinatorial analysis of the Archimedean Stomachion. It turns out the solutions have a fantastic amount of
symmetry. Archimedes could have found all of this easily ... and he may have!
Dick Hess: The Logical Hats Puzzle. Logicians A, B and C each wear a hat with a positive integer on it such that the number on one hat is the sum of the numbers on the other two. They can see the
numbers on the other two hats but not their own. They are given this information and asked in turn if they can identify their number. In the first round A, B and C each in turn say they don't know.
In the second round A is first to go and states his number is 50. What numbers are on B and C? Answer and Solvers. Jonathan Welton: I was delighted to see an old puzzle of mine doing the rounds
(Logical Hats Puzzle). This was originally published in the Sunday Times magazine as puzzle number 1814, and was reprinted in a collection of these puzzles, Brainteasers by Victor Bryant in 2002 -
highly recommended if you like tough puzzles.
A 2000 year old icosahedron is available for auction at Christie's. More glimpses of ancient math are stored at the Vatican. I recently refound a long-lost link -- Monty Hall's take on the Monty Hall problem.
Older material
Anyone can subscribe to my moderated mailing list at http://groups.yahoo.com/group/mathpuzzle/.
I announce updates to this site via that list.
07Oct03-18Nov03 25Aug03-30Sep03 06Jul03-17Aug03 01Apr03-29Jun03 25Dec02-26Mar03 29Oct02-13Dec02 03Sep02-21Oct02 16Jul02-26Aug02 13May02-09Jul02 04Mar02-06May02 05Dec01-25Feb02 19Aug01-04Dec01
17Feb01-05Aug01 02Jun00-11Feb01
More Math Puzzle Pages
Awards, Book Recommendations, Chaos Tiles, Characteristic Polynomial, ChebyshevU, Chromatic Number of the Plane, Coin Weighing, Complex Numbers, Connecting Dots, Crosswords, Digital Sums, Dissecting
Convex Figures, Domino Games, Encryption, Eternity Puzzle, Fair Dice, Gambling Odds, Graphs, Gravity, Guestbook, Golay Code, Group Theory, Iamonds, Knots, Leapers, Locales, Math Software, Mini-Go,
Multistate Mazes, Number Fun, Parity, Partridge Puzzle, Path Problems, Pentagon Tiling, Philosopher Logic, Phonetic Alphabet, Polyominoes, Prize Page, Programs, Repeating Decimals, Retrograde
Analysis, Silly Puzzle, Similar Dissections, Superflu, Stability of the Atom, Riemann Hypothesis, Tic Tac Toe, Triangles, WGE123, Word Play, World Puzzle Championship 2000, Zome.
Previous Puzzles of the Week Other Stuff
EXCELLENT SITES AND PRODUCTS
Martin Gardner celebrates math puzzles and Mathematical Recreations. This site aims to do the same. If you've made a good, new math puzzle, send it to ed@mathpuzzle.com. My mail address is Ed Pegg
Jr, 1607 Park Haven, Champaign, IL 61820. You can join my moderated recreational mathematics email list at http://groups.yahoo.com/group/mathpuzzle/. Other math mailing lists can be found here.
All material on this site is copyright 1998-2004 by Ed Pegg Jr. Copyright of submitted materials stays with the contributors and is used with permission.
Phase Plane, real unequal eigenvalues of the same sign
August 27th 2012, 08:58 AM
Phase Plane, real unequal eigenvalues of the same sign
I have the following eigenvalues: r_1 = 4, r_2 = 2, which gives me the eigenvectors v_1 = [1, 1] and v_2 = [1,3] for r_1 and r_2 respectively. How should I draw the phase plane for this? This
gives me two vectors that both approach infinity when t -> infinity? I've tried to find an example problem that draws the phase plane for this with no success. If anyone could either explain and/
or point me to an example with this type of eigenvalues/vectors then that would be great. Thanks. :-)
August 27th 2012, 09:19 AM
Re: Phase Plane, real unequal eigenvalues of the same sign
First, draw the lines corresponding to the eigenvectors. If both eigenvalues are positive, this is a "source"; if both are negative, a "sink". The lines through the equilibrium point along the
eigenvectors are straight-line solutions. In order to show the difference in eigenvalues, one thing you can do is draw the trajectories denser (closer together) near the eigenvector with the
larger eigenvalue, and farther apart near the eigenvector with the smaller eigenvalue.
August 27th 2012, 09:32 AM
Re: Phase Plane, real unequal eigenvalues of the same sign
First, draw the lines corresponding to the eigenvectors. If both eigenvalues are positive, this is a "source"; if both are negative, a "sink". The lines through the equilibrium point along the
eigenvectors are straight-line solutions. In order to show the difference in eigenvalues, one thing you can do is draw the trajectories denser (closer together) near the eigenvector with the
larger eigenvalue, and farther apart near the eigenvector with the smaller eigenvalue.
Thank you for the input. However, what I don't know how to do is draw the actual solutions, i.e. the curves that run between the eigenvectors. What do they look like? Do you know what
I could search for to find an example of how to do this?
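A concrete way to see what the trajectories look like is to plot the general solution x(t) = c1*e^(4t)*v1 + c2*e^(2t)*v2 for a grid of constants (c1, c2). The sketch below is my addition, not from the thread; matplotlib with an off-screen backend is an assumption:

```python
# My illustration (not from the thread): plot the general solution
#   x(t) = c1*exp(4t)*v1 + c2*exp(2t)*v2
# for a grid of constants (c1, c2).  The "Agg" backend just draws
# off-screen to a PNG file.
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

v1 = np.array([1.0, 1.0])   # eigenvector for r1 = 4 (fast direction)
v2 = np.array([1.0, 3.0])   # eigenvector for r2 = 2 (slow direction)

def solution(c1, c2, t):
    """General solution of the system; t is a 1-D array, returns (len(t), 2)."""
    return c1 * np.exp(4 * t)[:, None] * v1 + c2 * np.exp(2 * t)[:, None] * v2

t = np.linspace(-1.5, 0.3, 400)
fig, ax = plt.subplots()
for c1 in (-1.0, -0.3, 0.3, 1.0):
    for c2 in (-1.0, -0.3, 0.3, 1.0):
        xy = solution(c1, c2, t)
        ax.plot(xy[:, 0], xy[:, 1], lw=0.8)
# The eigenvector lines themselves are the straight-line solutions.
for v in (v1, v2):
    ax.plot([-3 * v[0], 3 * v[0]], [-3 * v[1], 3 * v[1]], "k--", lw=1)
ax.set_xlim(-3, 3)
ax.set_ylim(-3, 3)
fig.savefig("phase_plane.png")
```

Every non-eigenvector trajectory leaves the origin tangent to v2 (the smaller eigenvalue) and bends toward v1 as t grows; a good search term for pictures of this is "unstable node phase portrait".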
How do I find my height above sea level?
It is specialist equip!! :)
Top of the line GPS systems I work with will give you 1 to 1.5 cm (10-15 mm) accuracy at a cost of $20,000 or so, because multiple units are needed for that sort of accuracy.
Standard handheld GPS units like the Garmin eTrex etc. will give down to maybe 3-5 metres of accuracy.
Ohh I should actually qualify that .... We find that the vertical accuracy averages ~ half the quality of horizontal accuracy. So for a horizontal accuracy of 1cm, which under good conditions is
quite achievable,
the vertical accuracy will be ~2 cm
Department of Mathematics
The following is a list of all lower division courses regularly taught by the mathematics department. If you have any questions contact a faculty member, they will be happy to answer any questions
you have.
• MATH 113 - This class will prepare students to have a successful learning experience in Math 121 and subsequent math courses. Topics will preview those of Math 121 and the course will stress
algebraic skills and understanding. Students who do not plan to take further math courses should not take this course. 4 Credit Hours. Prerequisites:
□ TRS 92 Minimum Grade of C- or
□ Computed Math Placement Score MA110.
• MATH 121 - Pre-Calculus This course covers the topics in algebra and trigonometry necessary for students who plan to enroll in Math 221 Calculus I. The course will include an in-depth analysis of
the topics covered in Math 113, with additional emphasis on symbolic methods. The concept of function, with its multiple representations, will be emphasized. 4 Credit Hours. Prerequisites:
□ MATH113 Minimum Grade of C- or
□ MATH 110 Minimum Grade of B- or
□ Computed Math Placement Score MA121.
• MATH 201 - Elementary Statistics An introduction to basic ideas in statistics including descriptive statistics, measure of central tendency and dispersion, probability, sampling distributions,
estimation, hypothesis testing, regression and correlation, and statistical software application. This course is equivalent to BA 253 and Psyc 241. Credit will be given for only one of these
courses. 4 Credit Hours. Prerequisites:
□ MATH 110 Minimum Grade of C- or
□ Computed Math Placement Score MA121 or
□ MATH 121 Minimum Grade of C- or
□ MATH 221 Minimum Grade of C- or
□ MATH 222 Minimum Grade of C- or
□ MATH 210 Minimum Grade of C-.
• MATH 210 - Survey of Calculus This course is intended as a survey of Calculus for students who do not intend to continue their study of Calculus. Topics include limits, differential and integral
calculus of one variable and an introduction to calculus of two variables. Math 210 does not satisfy the prerequisite for Math 221 and Math 222. Students who require a more rigorous treatment
of Calculus should take Math 221 and Math 222. 4 Credit Hours. Prerequisites:
□ MATH 110 Minimum Grade of C- or
□ MATH 121 Minimum Grade of C- or
□ Computed Math Placement Score MA121.
• MATH 215 - Math For Elem School Teacher I This course is designed primarily for the elementary school teacher. It includes a study of sets, set operations, construction of numeration systems,
whole, integer, and rational number arithmetic, ratio and proportion, decimals, percent, selected topics in geometry, the metric system, and an introduction to the real number system. 0 or 3
Credit Hours (0 or 2 lecture hours, 0 or 2 lab hours). Prerequisites:
□ TRS 92 Minimum Grade of C- or
□ Computed Math Placement Score MA110 and
□ ED 200 Minimum Grade of C-.
• MATH 221 - Calculus I Limits, continuity, derivatives and integrals of functions of one variable including polynomial, root, rational, exponential, logarithmic, trigonometric, and inverse
trigonometric functions. Applications of Calculus are included. 4 Credit Hours. Prerequisites:
□ MATH 121 Minimum Grade of C- or
□ Computed Math Placement Score MA221.
• MATH 222 - Calculus II A continuation of Math 221. Techniques and applications of integration, introduction to differential equations and applications, sequences and series, applications using
polar and parametric coordinate systems. 4 Credit Hours. Prerequisites:
□ MATH 221 Minimum Grade of C- or
□ Computed Math Placement Score MA222.
• MATH 223 - Calculus III Vectors and multivariable calculus with applications. 4 Credit Hours. Prerequisites:
□ MATH 222 Minimum Grade of C-.
Scripting 2D cuts of a 3D geometry
Posted by
Andreas Kopf
at April 23. 2013
Hello everybody,
I've just started to use Salome and want to build a script which cuts an imported 3D geometry into 2D planes.
At the moment I define 3 points, build a plane out of them, and cut the whole geometry with this plane.
Is there a better way, since I need several hundred cuts? I don't need the cuts in the program - I need the cuts as separate files (as 2D geometry).
Thanks in advance,
Re: Scripting 2D cuts of a 3D geometry
Posted by
Saint Michael
at April 23. 2013
Hi Andreas
Look at the attached script, does it do what you need?
Hi St. Michael,
it works amazingly fast - BUT I only get empty cut planes.
I've adapted the script so that the size of the box is as big as the geometry (*.stp) and the planes really cut the geometry, but when I import the planes they are empty.
Btw., is there a command giving the max. size of an imported geometry - like a covering box?
Thanks !
Btw. is there a command giving the max. size of an imported geometry - like a covering box?
geompy.BoundingBox() returns the extents of the bounding box.
thanks for the hint!
Do you have any idea, why the cuts are empty? it doesn't depend on the geometry import...
Do you have any idea, why the cuts are empty?
Maybe your 3D model is not a solid one? Use "Measures / What is" command to see what your model is.
Originally I have 30 solids, but after loading it in Salome only one compound is displayed:
Number of sub-shapes :
VERTEX : 680
EDGE : 992
WIRE : 418
FACE : 388
SHELL : 44
SOLID : 30
COMPSOLID : 0
COMPOUND : 1
SHAPE : 2553
Do you have any idea, why the cuts are empty?
What is actually a problem? Result of the boolean operation is empty or the exported file is empty?
The exported files are empty - I tried several different geometries, but it is the same every time.
Maybe export goes to a file different than the file you check?
this was not the problem ... ^ ^
> Box_1 = geompy.MakeBoxDXDYDZ(200, 200, 200)
I thought this was a kind of bounding box - now I changed it to my imported geometry and I get files WITH geometry inside.
But - by cutting 30 objects I get 30 files for one plane - any idea how I get only one file for each cut?
I think the simplest solution would be not to use all cut-planes at once but rather to iterate over the cut-planes and cut all solids by one plane at a time.
I use
"geomObj = geompy.ImportSTEP(xxx)"
for importing one geometry with several solids - but how can i cut all solids out of this geometry by one plane?
how can i cut all solids out of this geometry by one plane?
The same way:
cuts = geompy.MakeCommon( geomObj, cutPlane )
sure... but resulting in 30 separate faces and consequently in 30 separate .stp-files
sure... but resulting in 30 separate faces and consequently in 30 separate .stp-files
Not a problem. Just remove a loop on sub-faces of the result of geompy.MakeCommon(), so all faces relating to one cut-plane will be in one file.
ok - thanks a lot ... it's working (when I only do one cut).
Now I'll try to understand how to make several of these cuts in a loop ...
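Putting the thread's pieces together, a slicing loop might look like the sketch below. The geompy calls in the comments follow the names used in this thread and Salome's GEOM Python interface, but exact signatures vary between Salome versions, so treat them as assumptions; the runnable part is just the helper that spaces the cut planes inside the bounding box:

```python
# My reconstruction of the slicing loop discussed above.  The geompy calls in
# the comments use the names from this thread and Salome's GEOM Python
# interface, but signatures differ between Salome versions - treat them as
# assumptions.  The runnable part is the plane-spacing helper.

def cut_heights(zmin, zmax, n_cuts):
    """Evenly spaced z-positions strictly inside [zmin, zmax]."""
    step = (zmax - zmin) / (n_cuts + 1)
    return [zmin + step * (i + 1) for i in range(n_cuts)]

# Inside Salome, roughly:
#
#   shape = geompy.ImportSTEP("model.stp")
#   xmin, xmax, ymin, ymax, zmin, zmax = geompy.BoundingBox(shape)
#   size = 2 * max(xmax - xmin, ymax - ymin)        # trim size for the plane
#   for i, z in enumerate(cut_heights(zmin, zmax, 200)):
#       origin = geompy.MakeVertex(0, 0, z)
#       normal = geompy.MakeVectorDXDYDZ(0, 0, 1)
#       plane = geompy.MakePlane(origin, normal, size)
#       cut = geompy.MakeCommon(shape, plane)       # all solids cut at once,
#       geompy.ExportSTEP(cut, "cut_%03d.stp" % i)  # so one file per plane

print(cut_heights(0.0, 10.0, 4))  # [2.0, 4.0, 6.0, 8.0]
```

By exporting the whole MakeCommon result instead of iterating over its sub-faces, you get one file per cut plane rather than one per solid, as suggested earlier in the thread.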
Reduces the dimensionality of the data by projecting it onto a lower dimensional subspace using a random matrix with columns of unit length (i.e. it will reduce the number of attributes in the data
while preserving much of its variation, like PCA, but at a much lower computational cost).
It first applies the NominalToBinary filter to convert all attributes to numeric before reducing the dimension. It preserves the class attribute.
For more information, see:
Dmitriy Fradkin, David Madigan: Experiments with random projections for machine learning. In: KDD '03: Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data
mining, New York, NY, USA, 517-522, 2003.
The table below describes the options available for RandomProjection.
Option Description
distribution The distribution to use for calculating the random matrix.
Sparse1 is:
sqrt(3) * { -1 with prob(1/6),
0 with prob(2/3),
+1 with prob(1/6) }
Sparse2 is:
{ -1 with prob(1/2),
+1 with prob(1/2) }
numberOfAttributes The number of dimensions (attributes) the data should be reduced to.
percent The percentage of dimensions (attributes) the data should be reduced to (inclusive of the class attribute). The numberOfAttributes option is ignored if this option is present or is greater than zero.
randomSeed The random seed used by the random number generator used for generating the random matrix
replaceMissingValues If set the filter uses weka.filters.unsupervised.attribute.ReplaceMissingValues to replace the missing values
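To make the Sparse1 recipe concrete, here is a small NumPy sketch of the same Achlioptas-style projection (my illustration, not Weka's implementation; the 1/sqrt(k) scaling is the usual Johnson-Lindenstrauss convention for roughly preserving distances):

```python
# My illustration of the Sparse1 distribution (not Weka's code).  Entries are
# sqrt(3) * {-1 w.p. 1/6, 0 w.p. 2/3, +1 w.p. 1/6}, which have unit variance,
# so scaling the projection by 1/sqrt(k) roughly preserves pairwise distances.
import numpy as np

rng = np.random.default_rng(42)

def sparse1_matrix(n_features, k, rng):
    """Random projection matrix drawn from the Sparse1 distribution."""
    entries = rng.choice([-1.0, 0.0, 1.0], size=(n_features, k), p=[1/6, 2/3, 1/6])
    return np.sqrt(3.0) * entries

X = rng.standard_normal((200, 50))   # 200 instances, 50 numeric attributes
R = sparse1_matrix(50, 10, rng)      # project 50 attributes down to 10
X_reduced = X @ R / np.sqrt(10)

d_orig = np.linalg.norm(X[0] - X[1])
d_new = np.linalg.norm(X_reduced[0] - X_reduced[1])
print(X_reduced.shape, round(d_new / d_orig, 2))  # distance ratio should be near 1
```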
The table below describes the capabilites of RandomProjection.
Capability Supported
Class Missing class values, Empty nominal class, Nominal class, String class, Date class, No class, Unary class, Relational class, Numeric class, Binary class
Attributes Unary attributes, Date attributes, Nominal attributes, Relational attributes, String attributes, Empty nominal attributes, Missing values, Numeric attributes, Binary attributes
Min # of instances 0
This documentation is maintained by the Pentaho community, and members are encouraged to create new pages in the appropriate spaces, or edit existing pages that need to be corrected or updated. | {"url":"http://wiki.pentaho.com/display/DATAMINING/RandomProjection","timestamp":"2014-04-21T04:42:17Z","content_type":null,"content_length":"48270","record_id":"<urn:uuid:845d03ee-a852-43b8-878d-89d15d67df3b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Problem with Wins Produced
Since Buddahfan keeps trying to convince people of the issues with WP, and no one (neither he nor the people he talks to) is really discussing those issues, I thought I'd put up a summary of the
main criticism aimed at the statistic.
Before I start, you can take a look at this link for the full calculation of WP:
... or not, you can just trust me ;) But I would suggest you keep it open in another tab or something to reference for things like the marginal values assigned to rebounding, etc.
The main strength of WP is that it is empirically grounded. Almost every value assigned in the WP formula comes from a regression of some sort: all the weights given to rebounds, field goals made,
field goals missed, assists, etc. All of this is great - WP is not meant to be a simple model that just does what we expect with the numbers; it correlates them mathematically and shows us what the
numbers say about wins, rather than what we say about wins.
The way it works is that it finds how many points a player scores per possession (offensive efficiency) and how many points he gives up per defensive possession (defensive efficiency), then uses a
correlation between wins and the efficiencies to find his WP. Fairly straightforward. The value of each action is given a marginal value - ie the value of that action compared to the expected result
of a possession (which is roughly scoring 1 point). The table shown in the link above gives the marginal win-value of each action. These values are based on two things: the possession used in the
action taken, and the points value of taking that action, and they result in the basic production value for a player. There are a bunch of other factors added in to get to WP, but they aren't the focus here.
The points values of each action are obvious. A 3 pointer made earns you 2 points above the expected 1 point (the exact values are slightly different, but I am simplifying this for discussion). A 2
pointer made earns you 1 extra point. You can see this in the marginal win production values in the table at the link (3FG = +.064; 2FG = +.032).
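In other words, the made-shot entries in the table are just marginal points times a single points-to-wins rate of about .032 wins per point, as this little check shows (my arithmetic, inferred from the two table values quoted above):

```python
# Reconstructing the two made-shot values quoted above (my arithmetic):
# each point above the expected one-point possession is worth about .032 wins.
WINS_PER_MARGINAL_POINT = 0.032

marginal_points = {"3FG made": 2, "2FG made": 1}  # points above the expected 1
for action, pts in marginal_points.items():
    print(action, round(pts * WINS_PER_MARGINAL_POINT, 3))
# 3FG made 0.064
# 2FG made 0.032
```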
The possession values are where the major criticism of the model comes. There are 3 possible outcomes for a possession (we'll ignore team turnovers and team rebounds, and ignore the effects of
steals, assists, etc to simplify):
1) A turnover. This means your team loses the ball and the other team gains it - entirely because of you. A full possession is used in a TO.
2) A field goal made. This uses the possession. Simple.
3) A missed field goal and a defensive rebound for the other team. Here there is a sharing of the credit for the possession. You've lost "x" possessions by missing the shot, and the defender has
gained 1-x possessions by grabbing the board. The value of x could be anything, depending on how you value defensive rebounds compared to offensive rebounds.
The other possibility from a missed shot is that your team gets an offensive rebound. This means that the "x" portion of a possession you lost when you missed has been regained - you don't lose the
possession. So the value of an offensive rebound is -x possessions.
This is where the relative value of an offensive rebound versus a defensive rebound is determined. An offensive rebound nets your team x possessions, while a defensive rebound nets your team 1-x
possessions. Again, x can be anything, and you could do anything from just using the ratio between the number of offensive rebounds and defensive rebounds in the league to just picking a value
between 0 and 1 to determine it.
This is the one portion of WP that is NOT from a regression. The regression done for win production values of each action is based on the possession value of those actions. And the possession value
is fairly straightforward when looking at made field goals and turnovers. But that x value is complicated. A case could be made for a variety of values for x, and the exact value used is not a
stat-breaking issue. But the weight used for x changes.
UPDATE: Not So Friendly Stranger pointed out some areas this was confusing. And it seems that confusion was partially due to me writing it down wrong. So yeah. Reread this section please.
The possession formulas are as follows:
Possessions used = FGM + x*FGMS + 0.47*FTA + TO – x*REBO + (1-x)*DREBD + x*DREBTM
Possessions gained = DFGM + x*DFGMS + 0.47*DFTM + (1-x)*REBD - x*DREBO + DTO + x*REBTM
FGM = Field Goals Made
FGMS = Field Goals Missed
FTA = Free Throw Attempts - we'll ignore these
TO = Turnovers
REBO = Offensive Rebounds
REBD = Defensive Rebounds
REBTM = Team Rebound- we'll ignore this for now
DFGM = Opponents' Field Goals Made
DFGMS = Opponents' Field Goals Missed
DFTM = Opponents' Free Throws Made - we'll ignore these
DTO = Opponents' Turnovers
DREBO = Opponents' Offensive Rebounds
DREBD = Opponents' Defensive Rebounds
DREBTM = Opponents' Team Rebound- we'll ignore this for now
The formula is simplified as follows on Berri's site:
Possessions used = FGA + 0.47*FTA + TO – REBO
Possessions gained = DFGM + 0.47*DFTM + REBD + DTO + REBTM
Note that the FGA in the used formula is FGM (Made) and FGMS (Missed) combined, with the x coefficient on FGMS assumed to be 1.
The main issue here is the value of x. In the first equation, rewritten without FTA to simplify:
Possessions used = FGM + FGMS + TO - REBO
The value of each part is the same. So a FGM is worth as much as a FGMS is worth as much as a REBO. This tells us that x must be 1 in this equation.
In the second equation, a defensive rebound is worth as much as a FGM:
Possessions gained = DFGM + REBD + DTO
Here you can see that the opponents' field goals missed has disappeared - its value is 0. This makes sense, since the defensive rebound value (1-x) is 1, so x must be 0.
However, this means that the value of x changes depending on which side of the ball you are looking at. This is illogical. It means that for individual players, a missed field goal is as bad as a
turnover - it uses a full possession. But at the same time, a player can gain a full possession by grabbing a defensive rebound.
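The inconsistency is easy to see by writing the two possession formulas as functions of x (a sketch of the formulas above, with free throws and team rebounds dropped and made-up box-score numbers):

```python
# A sketch of the article's possession formulas as functions of x, with free
# throws and team rebounds dropped and made-up box-score numbers.

def possessions_used(x, FGM, FGMS, TO, REBO, DREBD):
    # Possessions used = FGM + x*FGMS + TO - x*REBO + (1-x)*DREBD
    return FGM + x * FGMS + TO - x * REBO + (1 - x) * DREBD

def possessions_gained(x, DFGM, DFGMS, DTO, REBD, DREBO):
    # Possessions gained = DFGM + x*DFGMS + DTO + (1-x)*REBD - x*DREBO
    return DFGM + x * DFGMS + DTO + (1 - x) * REBD - x * DREBO

stats = dict(FGM=8, FGMS=10, TO=3, REBO=2, DREBD=6)
dstats = dict(DFGM=7, DFGMS=9, DTO=4, REBD=6, DREBO=2)

# Berri's simplified "used" formula (FGA + TO - REBO) is the x = 1 case ...
assert possessions_used(1, **stats) == 8 + 10 + 3 - 2
# ... while his simplified "gained" formula (DFGM + REBD + DTO) is the x = 0
# case - two different values of x for the two sides of the same play.
assert possessions_gained(0, **dstats) == 7 + 6 + 4
print(possessions_used(1, **stats), possessions_gained(0, **dstats))  # 19 17
```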
So, consider this in terms of each team:
On any one possession use, the one team loses possession, and the other gains possession, so the total possession change is 2 (-1 for one team and +1 for the other) in one direction.
However, look at the statistics that can be attributed to individuals. A FGM or FGMS can be attributed to a single player. So can a TO, a REBO and a REBD. But the values of DFGM and DTO cannot. So
from a single player perspective, the formulas are as follow (simplified):
Possessions used = FGM + FGMS + TO - REBO
Possessions gained = REBD
These show the various things a player can do to impact possessions. However, you will see that credit for a possession change is given twice in the case of a missed FG followed by a defensive
rebound. The player missing the shot is charged with a possession used, while the defensive rebounder gets a possession gained. Contrast this to a made shot, where the shooter uses a possession, and
no one on the other team gains a possession. This is not logically consistent.
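The double counting is easy to verify with a quick tally. This is a minimal sketch under the per-player conventions just described (x = 1 on the "used" side, x = 0 on the "gained" side); the function and the play dictionaries are illustrative only, not part of the actual WP code:

```python
# Per-player possession credit under the convention described above:
# a missed shot charges the shooter a full possession used (x = 1 on offense),
# while a defensive rebound credits the rebounder a full possession gained
# (x = 0 on defense).

def possession_credit(play):
    """Return (possessions used, possessions gained) charged to individual players."""
    used = play["FGM"] + play["FGMS"] + play["TO"] - play["REBO"]
    gained = play["REBD"]
    return used, gained

# Missed shot followed by a defensive rebound: credit is handed out twice.
miss_and_dreb = {"FGM": 0, "FGMS": 1, "TO": 0, "REBO": 0, "REBD": 1}
print(possession_credit(miss_and_dreb))   # (1, 1) -> two credits for one possession change

# Made shot: only the shooter is charged; no defender "gains" anything.
made_shot = {"FGM": 1, "FGMS": 0, "TO": 0, "REBO": 0, "REBD": 0}
print(possession_credit(made_shot))       # (1, 0) -> one credit
```

One possession change yields two units of credit in the first case and one in the second, which is the inconsistency being described.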
From this follows the criticisms leveled at the value of rebounding in WP. For the statistic to be logically consistent, the value of x should be constant on a given play. Although x could
conceivably change with every variation of 10 players on the court, on a single play it should be consistent. If a value of x was assigned somewhere between 0 and 1, but not equal to 0 or 1, then the
possession values of a defensive rebound, an offensive rebound, and a missed shot would all decrease relative to the possession values of made shots and turnovers.
This section has also been updated to correct a previous error.
For example, if we use the league average ratio between offensive rebounds and defensive rebounds and then assign x based on the 'scarcity' of a rebound:
895 ORB / 3394 TRB = 26.4% of rebounds are offensive. So x would be 0.736 - giving an offensive rebound 3 times the value of a defensive rebound, since they are 3 times as rare. Also, this implies
that in a missed-shot-defensive-rebound scenario, the defensive rebounder only gets about 1/4 of the credit for the possession change, while the shooter gets about 3/4.
Then the formulas would look like:
Possessions used = FGM + 0.736*FGMS + TO – 0.736*REBO + 0.264*DREBD
Possessions gained = DFGM + 0.736*DFGMS + 0.264*REBD - 0.264*DREBO + DTO
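A short sketch of how a single scarcity-based x splits the credit, using the league-average numbers above; the helper functions are my own names, and FTA and team rebounds are ignored as in the simplified formulas:

```python
# One x for both sides of the ball, set by rebound scarcity as suggested above.
share_off = 895 / 3394      # fraction of rebounds that are offensive, ~0.264
x = 1 - share_off           # ~0.736

def possessions_used(FGM=0, FGMS=0, TO=0, REBO=0, DREBD=0):
    return FGM + x * FGMS + TO - x * REBO + (1 - x) * DREBD

def possessions_gained(DFGM=0, DFGMS=0, REBD=0, DREBO=0, DTO=0):
    return DFGM + x * DFGMS + (1 - x) * REBD - x * DREBO + DTO

# A missed shot plus a defensive rebound now splits ONE possession change
# roughly 3/4 (shooter) to 1/4 (rebounder), instead of crediting it twice.
print(round(possessions_used(FGMS=1), 3))                                # 0.736
print(round(possessions_gained(REBD=1), 3))                              # 0.264
print(round(possessions_used(FGMS=1) + possessions_gained(REBD=1), 3))   # 1.0
```

With a constant x the two sides of the ball account for exactly one possession change per play, which is the consistency requirement argued for above.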
So this is the reason WP punishes low efficiency shooters and over-rewards rebounders (especially defensive rebounders). Other discussions such as using NBA journeymen or failure stories with high WP
to try to disprove it are not mathematically relevant.
As for the reason most people still like WP? Well, even with the wacky distribution of possessions, the regression takes place afterward. So, using the points and possession values of the various
actions, the correlation between wins and efficiency is applied. As such, the regression does some self-correcting for the incorrect possession definition. It clearly can't compensate entirely for
the possession definitions, but it does have an impact. And of course the results are a forced correlation to wins - this is both a pro and a con. It means the results aren't purely mathematical, but
are scaled up to include "intangible" effects. However, it does improve the relationship between team wins and individual players' WP.
Also as a note, the WP calculation is MUCH more complicated than what I have outlined here. I've used a very simplified version to show the basic flaws in the stat pointed out in depth at the APBR
metrics board, located here:
To get a full understanding of the WP calculation, read the link I attached at the top, and get Berri's book. Even the link skips over some complicated calculations.
Sorry for the confusion in the initial post version. Let me know if you spot any more inconsistencies. | {"url":"http://www.raptorshq.com/2011/9/28/2455315/the-problem-with-wins-produced","timestamp":"2014-04-20T03:32:47Z","content_type":null,"content_length":"89198","record_id":"<urn:uuid:2d1ea8ac-bcca-4053-901c-7724c9277198>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
30 search hits
Homogeneous nucleation of quark gluon plasma, finite size effects and long-lived metastable objects (1998)
Eugene E. Zabrodin Larissa V. Bravina Horst Stöcker Walter Greiner
The general formalism of homogeneous nucleation theory is applied to study the hadronization pattern of the ultra-relativistic quark-gluon plasma (QGP) undergoing a first order phase transition.
A coalescence model is proposed to describe the evolution dynamics of hadronic clusters produced in the nucleation process. The size distribution of the nucleated clusters is important for the
description of the plasma conversion. The model is most sensitive to the initial conditions of the QGP thermalization, time evolution of the energy density, and the interfacial energy of the
plasma-hadronic matter interface. The rapidly expanding QGP is first supercooled by about ΔT = Tc − T ≈ 4–6%. Then it reheats again up to the critical temperature Tc. Finally it breaks up into
hadronic clusters and small droplets of plasma. This fast dynamics occurs within the first 5–10 fm/c. The finite size effects and fluctuations near the critical temperature are studied. It is
shown that a drop of longitudinally expanding QGP of transverse radius below 4.5 fm can display a long-lived metastability. However, both in the rapid and in the delayed hadronization
scenario, the bulk pion yield is emitted by sources as large as 3–4.5 fm. This may be detected experimentally both by a HBT interferometry signal and by the analysis of the rapidity distributions
of particles in narrow pT-intervals at small |pT| on an event-by-event basis. PACS numbers: 12.38.Mh, 24.10.Pa, 25.75.-q, 64.60.Qb
Supercooling of rapidly expanding quark-gluon plasma (1998)
Eugene E. Zabrodin Larissa V. Bravina László Pal Csernai Horst Stöcker Walter Greiner
We reexamine the scenario of homogeneous nucleation of the quark-gluon plasma produced in ultra-relativistic heavy ion collisions. A generalization of the standard nucleation theory to rapidly
expanding systems is proposed. The nucleation rate is derived via the new scaling parameter Z. It is shown that the size distribution of hadronic clusters plays an important role in the dynamics
of the phase transition. The longitudinally expanding system is supercooled to about 3–6%, then it is reheated, and the hadronization is completed within 6–10 fm/c, i.e. 5–10 times faster than
was estimated earlier, in a strongly nonequilibrium way. PACS: 12.38.Mh; 12.39.Ba; 25.75.-q; 64.60.Qb
Excitation function of energy density and partonic degrees of freedom in relativistic heavy ion collisions (1998)
H. Weber Christoph Ernst Marcus Bleicher Larissa V. Bravina Horst Stöcker Walter Greiner Christian Spieles Steffen A. Bass
We estimate the energy density ε piled up at mid-rapidity in central Pb+Pb collisions from 2–200 GeV/nucleon. ε is decomposed into hadronic and partonic contributions. A detailed
analysis of the collision dynamics in the framework of a microscopic transport model shows the importance of partonic degrees of freedom and rescattering of leading (di)quarks in the early phase
of the reaction for Elab ≳ 30 GeV/nucleon. In Pb+Pb collisions at 160 GeV/nucleon the energy density reaches up to 4 GeV/fm³, 95% of which is contained in partonic degrees of freedom.
Dissociation rates of J / psi's with comoving mesons : thermal versus nonequilibrium scenario. (1998)
Christian Spieles Ramona Vogt Lars Gerland Steffen A. Bass Marcus Bleicher Horst Stöcker Walter Greiner
We study J/psi dissociation processes in hadronic environments. The validity of a thermal meson gas ansatz is tested by confronting it with an alternative, nonequilibrium scenario. Heavy ion
collisions are simulated in the framework of the microscopic transport model UrQMD, taking into account the production of charmonium states through hard parton-parton interactions and
subsequent rescattering with hadrons. The thermal gas and microscopic transport scenarios are shown to be very dissimilar. Estimates of J/psi survival probabilities based on thermal models of
comover interactions in heavy ion collisions are therefore not reliable.
J/psi suppression in heavy ion collisions - interplay of hard and soft QCD processes (1998)
Christian Spieles Ramona Vogt Lars Gerland Steffen A. Bass Marcus Bleicher Leonid Frankfurt Mark Strikman Horst Stöcker Walter Greiner
We study J/psi suppression in AB collisions assuming that the charmonium states evolve from small, color transparent configurations. Their interaction with nucleons and nonequilibrated, secondary
hadrons is simulated using the microscopic model UrQMD. The Drell-Yan lepton pair yield and the J/psi/Drell-Yan ratio are calculated as a function of the neutral transverse energy in Pb+Pb
collisions at 160 GeV and found to be in reasonable agreement with existing data.
Intermediate mass dileptons from secondary Drell-Yan processes (1998)
Christian Spieles Lars Gerland Nils Hammon Marcus Bleicher Steffen A. Bass Horst Stöcker Walter Greiner C. Lourenco Ramona Vogt
Recent reports on enhancements of intermediate and high mass muon pairs produced in heavy ion collisions have attracted much attention.
Excitation function of entropy and pion production from AGS to SPS energies (1998)
Manuel Reiter Adrian Dumitru Jörg Brachmann Joachim A. Maruhn Horst Stöcker Walter Greiner
Entropy production in the initial compression stage of relativistic heavy-ion collisions from AGS to SPS energies is calculated within a three-fluid hydrodynamical model. The entropy per
participating net baryon is found to increase smoothly and does not exhibit a jump or a plateau as in the 1-dimensional one-fluid shock model. Therefore, the excess of pions per participating net
baryon in nucleus-nucleus collisions as compared to proton-proton reactions also increases smoothly with beam energy.
Entropy production in collisions of relativistic heavy ions : a signal for quark-gluon plasma phase transition? (1998)
Manuel Reiter Adrian Dumitru Jörg Brachmann Joachim A. Maruhn Horst Stöcker Walter Greiner
Entropy production in the compression stage of heavy ion collisions is discussed within three distinct macroscopic models (i.e. generalized RHTA, geometrical overlap model and three-fluid
hydrodynamics). We find that within these models ~80% or more of the experimentally observed final-state entropy is created in the early stage. It is thus likely followed by a nearly
isentropic expansion. We employ an equation of state with a first-order phase transition. For low net baryon density, the entropy density exhibits a jump at the phase boundary. However, the
excitation function of the specific entropy per net baryon, S/A, does not reflect this jump. This is due to the fact that for final states (of the compression) in the mixed phase, the baryon
density ρ_B increases with √s, but not the temperature T. Calculations within the three-fluid model show that a large fraction of the entropy is produced by nuclear shockwaves in the
projectile and target. With increasing beam energy, this fraction of S/A decreases. At √s = 20 AGeV it is on the order of the entropy of the newly produced particles around midrapidity.
Hadron ratios are calculated for the entropy values produced initially at beam energies from 2 to 200 AGeV.
Nuclei in a chiral SU(3) model (1998)
Panajotis Papazoglou Detlef Zschiesche Stefan Schramm Jürgen Schaffner-Bielich Horst Stöcker Walter Greiner
Nuclei can be described satisfactorily in a nonlinear chiral SU(3) framework, even with standard potentials of the linear σ model. The condensate value of the strange scalar meson is found to be
important for the properties of nuclei even without adding hyperons. By neglecting terms which couple the strange to the nonstrange condensate, one can reduce the model to a Walecka-model
structure embedded in SU(3). We discuss inherent problems with chiral SU(3) models regarding hyperon optical potentials.
Metastable quark-antiquark droplets within the Nambu-Jona-Lasinio model (1998)
Igor N. Mishustin Leonid M. Satarov Horst Stöcker Walter Greiner
Chemically non-equilibrated quark–antiquark matter is studied within the Nambu–Jona-Lasinio model. The equations of state of non-strange (q = u, d) and strange (q = s) qq̄ systems are calculated
in the mean field approximation. The existence of metastable bound states with zero pressure is predicted at finite densities and temperatures T ≲ 50 MeV. It is shown that the minimum energy per
particle occurs for symmetric systems, with equal densities of quarks and antiquarks. At T = 0 these metastable states have quark number densities of about 0.5 fm⁻³ for q = u, d and of 1 fm⁻³ for
q = s. A first order chiral phase transition is found at finite densities and temperatures. The critical temperature for this phase transition is approximately 75 MeV (90 MeV) for the non-strange
(strange) baryon-free quark–antiquark matter. For realistic choices of parameters, the model does not predict a phase transition in chemically equilibrated systems. Possible decay channels of the
metastable qq droplets and their signatures in relativistic heavy ion collisions are discussed. | {"url":"http://publikationen.stub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Horst+St%C3%B6cker%22/start/0/rows/10/yearfq/1998/sortfield/author/sortorder/desc","timestamp":"2014-04-20T17:44:30Z","content_type":null,"content_length":"51037","record_id":"<urn:uuid:75b75e5c-eb80-4d19-aaef-4ac65c29a835>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trig question help (Determining length)
November 13th 2009, 10:04 AM
Trig question help (Determining length)
can someone please help me out.... I have a question and yes it's for homework but I have no idea on how to even begin it.... To me it looks like they already gave the answer or something but I'm
not sure how to start or how to do it... Can someone please help me out.
Determine the length of each unknown variable:
How do I solve this? I have no idea on where to even begin... If someone can help me it will be greatly appreciated... I can't show the work I've already done because I don't know how to do it.
Thanks for all the help though if you can answer this
Alright I tried to re-edit it... does it work now?
November 13th 2009, 10:41 AM
In a circle of radius r, an angle of radian measure $\theta$ subtends an arc of length $s = r\theta$.
This is basically all you need to know to solve this.
November 13th 2009, 12:00 PM
I still don't understand how to solve this... can you go a little more in depth than that?
Are the answers:
A) = 3.142
B) = 5.783 rad
because maybe this is why I'm confused... I was given this question to solve but below the 2 pictures it had those which I think are already the answers....
November 13th 2009, 12:27 PM
ok the first is
$a = \frac{\pi}{3} \cdot 3 = \pi \approx 3.142 \text{ cm}$
the second one is
just subtracting $\theta$ from $2\pi$
$\theta = \frac{2}{4} = \frac{1}{2}$
$2\pi - \frac{1}{2} \approx 5.783 \text{ rad}$
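Both parts follow directly from $s = r\theta$. A quick numerical check (assuming, as the replies imply, a $\frac{\pi}{3}$ angle on a radius-3 cm circle for (a), and a 2 cm arc on a radius-4 cm circle for (b)):

```python
import math

# Part (a): arc length subtended by a pi/3 rad angle on a circle of radius 3 cm
a = (math.pi / 3) * 3          # s = r * theta
print(round(a, 3))             # 3.142 (cm)

# Part (b): the arc's angle is theta = s / r = 2 / 4 = 0.5 rad;
# the remaining angle of the full turn is 2*pi - theta
b = 2 * math.pi - 2 / 4
print(round(b, 3))             # 5.783 (rad)
```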
November 13th 2009, 01:09 PM
thank you so much! I understand it now. | {"url":"http://mathhelpforum.com/trigonometry/114334-trig-question-help-determining-length-print.html","timestamp":"2014-04-17T11:19:29Z","content_type":null,"content_length":"6944","record_id":"<urn:uuid:ab032f67-f367-4600-8761-9a7396644ab2>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
Schaum's Outline of Geometry, 4th Edition
Date: 03 Apr 2010
Schaum's Outline of Geometry, 4th Edition by Barnett Rich, Christopher Thomas
Publisher: McGraw-Hill (August 13, 2008) | Number Of Pages: 336 | ISBN-10: 0071544127 | File type: PDF | 2 mb
This review of standard college courses in geometry has been updated to reflect the latest course scope and sequences. The new edition includes an added chapter on Solid Geometry and a chapter on
Transformation, plus expanded explanations of particularly difficult topics, as well as many new worked-out and supplementary problems.
| {"url":"http://www.ebook3000.com/Schaum-s-Outline-of-Geometry--4th-Edition_51371.html","timestamp":"2014-04-21T02:32:34Z","content_type":null,"content_length":"15339","record_id":"<urn:uuid:4ab66f5d-51dc-417d-845d-d42e3d1bdb5d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical problems recast as physics questions, provide new tools for old quandaries
A Princeton scientist with an interdisciplinary bent has taken two well-known problems in mathematics and reformulated them as a physics question, offering new tools to solve challenges relevant to a
host of subjects ranging from improving data compression to detecting gravitational waves.
Salvatore Torquato, a professor of chemistry, has shown that two abstract puzzles in geometry -- known as the "covering" and "quantizer" problems -- can be recast as "ground state" problems in
physics. Ground state problems relate to the study of molecule systems at their lowest levels of energy and have numerous applications across scientific disciplines. Torquato's conclusions are
reported in a paper that was published online Nov. 10 by Physical Review E.
"This paper describes what I believe to be a wonderful example of the deep interplay between geometry and physics," Torquato said. The problem of determining the ground states of special interacting
particles, which is relevant to studies of conditions of materials approaching absolute zero, is directly applicable to these math questions, he explained.
"In other words," Torquato said, "what appear to be abstract mathematical problems can be related to the cooling of special liquids that undergo phase transitions to crystal states at a temperature
of absolute zero. These reformulations are completely new to my knowledge."
The interdisciplinary nature of the study reflects Torquato's wide-ranging research expertise. He also is a faculty member at the Princeton Institute for the Science and Technology of Materials, a
senior faculty fellow at the Princeton Center for Theoretical Science and an associated faculty member in Princeton's Department of Physics, Program in Applied and Computational Mathematics and
Department of Mechanical and Aerospace Engineering.
The covering problem refers to a quandary in mathematics that employs geometrical principles to understand the most efficient way to arrange overlapping spheres so that they cover the maximum amount
of space. The covering problem has applications in wireless communication network layouts, search templates for gravitational waves and some forms of radiation therapy.
The quantizer problem is concerned with finding an optimal point configuration. The problem comes into play when mathematicians attempt to reduce what is known as a "distance error." This occurs, for
example, when researchers are devising data compression techniques. They want to find the most efficient way to convey large amounts of data. They also must represent the information contained within
it as accurately as possible. In these problems, fractional numbers representing data points are rounded off to the nearest point of an underlying array of points representing the optimal
configuration, resulting in the distance errors.
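As a toy illustration of that distance error (not from the article — the grid and the sample points below are made up), one can round 2-D points to the nearest point of a square integer lattice and average the rounding distances:

```python
import math

def quantize_error(points, step=1.0):
    """Mean distance error when snapping 2-D points to a square grid of the given step."""
    total = 0.0
    for px, py in points:
        qx = step * round(px / step)   # nearest grid point in x
        qy = step * round(py / step)   # nearest grid point in y
        total += math.hypot(px - qx, py - qy)
    return total / len(points)

data = [(0.3, 0.9), (1.6, 2.2), (4.1, 0.6)]
print(round(quantize_error(data), 3))   # 0.392
```

A better-shaped lattice (e.g. hexagonal in two dimensions) lowers this average error for the same point density, which is exactly what the quantizer problem asks to optimize.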
In addition to providing solutions for data compression, the approach offers answers to other aspects of computer science bearing upon digital communications, particularly coding and cryptography,
and numerical methods involving partial differential equations.
Torquato started on his new approach after learning from a colleague about the quantizer problem and seeing parallels with other studies on the ground state in physics. He already was familiar with
the covering problem because of his interest in sphere-packing puzzles.
The covering and quantizer problems are major problems in discrete mathematics and information theory, according to Henry Cohn, a mathematician with Microsoft Research New England in Cambridge, Mass.
These quandaries have, however, received much less attention than related problems such as packing, Cohn said, because "they seem to be far more subtle and complicated and thus much more difficult to solve."
Torquato's study, he added, develops the theory from a physics perspective and builds new connections with the theory of energy minimization and ground states.
"One consequence is new insights into the role of randomness and disorder," Cohn said. "I'm also intrigued by the potential use for detecting gravitational waves, which would be a remarkable
application of mathematics."
Tackling the problem
As a starting point, Torquato knew that covering and quantizer problems could be considered optimization problems, in which researchers attempt to either minimize or maximize a particular function.
For example, in the covering problem, the idea is to find the best possible array of points in overlapping spheres that cover the maximum amount of space. In the quantizer problem, the goal is to
minimize the errors inherent in rounding off "nasty numbers," according to Torquato.
The breakthrough came when he recognized that the covering and quantizer problems could be cast as energy minimization or ground state problems in any space dimension. In such problems, a researcher
attempts to find an optimal arrangement of molecules (or points) that minimizes the total energy of all the particles in the system. This is a difficult task in general because the interactions or
forces a molecule experiences due to all of the other molecules are very complex and the resulting patterns are challenging to predict.
In thinking about particles close to absolute zero, Torquato knew that these particles were in their ground state and represented the perfect array for energy minimization. He realized that the point
arrays being studied in covering and quantizer problems could also be described as interacting systems of particles. Finding the ground states for those interacting particle systems, he further
realized, would represent the best solution to the covering and quantizer problems.
In the paper, Torquato also has drawn connections to two other seemingly different but important mathematical conundrums, including the sphere-packing problem and the density-fluctuation (number
variance) problem, the latter of which is related to a classical problem in number theory. The sphere-packing problem asks for the densest arrangements of spheres, which is a notoriously difficult
problem. The renowned mathematician and astronomer Johannes Kepler proposed what he viewed as the best solution in 1611. It has only been in the last few years that researchers have devised a proof
of Kepler's conjecture.
"These results may have important implications across many fields, which speaks to the fundamental nature of the problems," Torquato added.
One of the more intriguing possible applications to his work would be in helping to develop search templates for gravitational waves.
Gravitational waves are ripples in the structure of space-time, which may occur individually or as continuous radiation. According to Einstein's Theory of General Relativity, they are emitted when
extremely massive objects, such as black holes, experience sudden accelerations or changes of shape. While in theory they travel through space at the speed of light, gravitational waves remain undetected.
"The problem of the detection of gravitational waves is a huge problem in astrophysics," Torquato said. Scientists attempting to analyze data from currently existing gravitational wave detectors may
be able to use the insights from his paper to design software tools for more accurate searches through high-dimensional data sets, Torquato said.
The work builds on his approach to the sphere-packing problem. In August 2009, Torquato and Yang Jiao, now a postdoctoral fellow at Princeton, made a major advance in addressing a twist on a
longstanding packing problem, jamming more tetrahedra -- solid figures with four triangular faces -- and other polyhedral solid objects than ever before into a space.
The current study was supported by the Office of Basic Energy Sciences of the U.S. Department of Energy.
Story Source:
The above story is based on materials provided by Princeton University. The original article was written by Kitta MacPherson. Note: Materials may be edited for content and length.
Journal Reference:
1. Salvatore Torquato. Reformulation of the covering and quantizer problems as ground states of interacting particles. Phys. Rev. E, 82, 056109 (2010) [22 pages] DOI: 10.1103/PhysRevE.82.056109
Cite This Page:
Princeton University. "Mathematical problems recast as physics questions, provide new tools for old quandaries." ScienceDaily, 20 November 2010. <www.sciencedaily.com/releases/2010/11/101116161255.htm> | {"url":"http://www.sciencedaily.com/releases/2010/11/101116161255.htm","timestamp":"2014-04-16T04:41:14Z","content_type":null,"content_length":"89455","record_id":"<urn:uuid:df3198bb-942b-4688-83a8-a8f280d7dfa0>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Transformations on higher-order functions
- In Proceedings of the ACM Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM'91 , 1991
Cited by 33 (5 self)
Given a description of the parameters in a program that will be known at partial evaluation time, a binding time analysis must determine which parts of the program are dependent solely on these known
parts (and therefore also known at partial evaluation time). In this paper a binding time analysis for the simply typed lambda calculus is presented. The analysis takes the form of an abstract
interpretation and uses a novel formalisation of the problem of binding time analysis, based on the use of partial equivalence relations. A simple proof of correctness is achieved by the use of
logical relations.
1 Introduction
Given a description of the parameters in a program that will be known at partial evaluation time, a binding time analysis must determine which parts of the program
are dependent solely on these known parts (and therefore also known at partial evaluation time). A binding time analysis performed prior to the partial evaluation process can have several practical
benefits (see [... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2760532","timestamp":"2014-04-18T16:57:07Z","content_type":null,"content_length":"13056","record_id":"<urn:uuid:d4b4d827-2262-44a8-89b1-61a22e9ec6bd>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Woodstock, IL Algebra Tutor
Find a Woodstock, IL Algebra Tutor
...Their confidence soars! A little bit about me: -Bachelor of Arts, Human Development from Prescott College, High Honors, 4.0 GPA -Associate in Arts, Associate in General Studies, College of
DuPage, High Honors, 4.0 GPA -Qualities: patient, understanding, flexible, kind, easygoing, calm, helpful What do you need help with? Please send me a message.
26 Subjects: including algebra 1, algebra 2, chemistry, Spanish
...So, whether you're in a class and looking to improve your grade, or you're a homeschooler looking to put together a class, or a busy professional getting ready to go back to school or develop a
new skill, (or anything else!) let me know and I can help you get there!I've tutored and taught Algebra...
25 Subjects: including algebra 1, algebra 2, calculus, statistics
...I have had the opportunity to tutor many students on all different levels, and continue to be asked to tutor teachers' own children. They have witnessed my patience and grace, and want that for
their children. I teach from the heart.As part of my career as an elementary teacher, I taught third grade for four years and sixth grade for six years.
36 Subjects: including algebra 2, algebra 1, reading, GED
...Rhetorical questions can be tricky, and even subjective. I've developed an approach to these types of questions that make them considerably easier for students to tackle. I truly enjoy helping
students to master the Reading portion of the ACT test.
20 Subjects: including algebra 1, algebra 2, English, reading
...I first tutored math in graduate school as I helped education majors prepare for state exams. A few years later, I was a math and algebra (and language arts) tutor at Sylvan Learning Center.
Since then, I've spent the last six years teaching math (and English) prerequisite courses at a small private nursing college.
17 Subjects: including algebra 1, algebra 2, reading, English | {"url":"http://www.purplemath.com/Woodstock_IL_Algebra_tutors.php","timestamp":"2014-04-21T02:17:25Z","content_type":null,"content_length":"24143","record_id":"<urn:uuid:b0634ced-46e3-426e-92c9-a6678ff592f0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rules for Forming MNA Matrices without Op-Amps
MNA applied to a circuit with only passive elements (resistors) and independent current and voltage sources results in a matrix equation of the form A·x = z, where the vector x holds the unknown node voltages and voltage-source currents, and z holds the known sources.
We will take n to be the number of nodes (not including ground) and m to be the number of independent voltage sources.
□ Ground is labeled as node 0.
□ The other nodes are labeled consecutively from 1 to n.
□ We will refer to the voltage at node 1 as v_1, at node 2 as v_2 and so on.
□ The naming of the independent voltage sources is quite loose, but the names must start with the letter "V" and must be unique from any node names. For our purposes we will require that
independent voltage sources have no underscore ("_") in their names. So the names Va, Vsource, V1, Vxyz123 are all legitimate names, but V_3, V_A, Vsource_1 are not.
□ The current through a voltage source will be labeled with "I_" followed by the name of the voltage source. Therefore the current through Va is I_Va, the current through VSource is I_VSource, and so on.
□ The naming of the independent current sources is similar; the names must start with the letter "I" and must have no underscore ("_") in their names. So the names Ia, Isource, I1, Ixyz123 are all
legitimate names, but I_3, I_A, Isource_1 are not.
The A matrix is (m+n)x(m+n) and will be developed as the combination of 4 smaller matrices, G, B, C, and D.
☆ the G matrix is nxn and is determined by the interconnections between the passive circuit elements (resistors)
☆ the B matrix is nxm and is determined by the connection of the voltage sources.
☆ the C matrix is mxn and is determined by the connection of the voltage sources. (B and C are closely related, particularly when only independent sources are considered).
☆ the D matrix is mxm and is zero if only independent sources are considered.
Rules for making the G matrix
The G matrix is an nxn matrix formed in two steps
1. Each element along the diagonal is equal to the sum of the conductances (one over the resistance) of every element connected to the corresponding node. So the first diagonal element
is the sum of conductances connected to node 1, the second diagonal element is the sum of conductances connected to node 2, and so on.
2. The off-diagonal elements are the negative of the conductance of the element connected between the corresponding pair of nodes. Therefore a resistor between nodes 1 and 2 goes into the G matrix at
locations (1,2) and (2,1).
Rules for making the B matrix
The B matrix is an nxm matrix with only 0, 1 and -1 elements. Each location in the matrix corresponds to a particular node (first dimension) or voltage source (second dimension). If the
positive terminal of the ith voltage source is connected to node k, then the element (k,i) in the B matrix is a 1. If the negative terminal of the ith voltage source is connected to node
k, then the element (k,i) in the B matrix is a -1. Otherwise, elements of the B matrix are zero.
Rules for making the C matrix
The C matrix is an mxn matrix with only 0, 1 and -1 elements. Each location in the matrix corresponds to a particular voltage source (first dimension) or node (second dimension). If the
positive terminal of the ith voltage source is connected to node k, then the element (i,k) in the C matrix is a 1. If the negative terminal of the ith voltage source is connected to node
k, then the element (i,k) in the C matrix is a -1. Otherwise, elements of the C matrix are zero.
In other words, the C matrix is the transpose of the B matrix. (This is not the case when dependent sources are present.)
Rules for making the D matrix
The D matrix is an mxm matrix that is composed entirely of zeros. (It can be non-zero if dependent sources are considered.)
The x matrix is (m+n)x1 and holds our unknown quantities. It will be developed as the combination of 2 smaller matrices v and j.
☆ the v matrix is nx1 and holds the unknown node voltages
☆ the j matrix is mx1 and holds the unknown currents through the voltage sources
Rules for making the v matrix
The v matrix is an nx1 matrix formed of the node voltages. Each element in v corresponds to the voltage at the equivalent node in the circuit (there is no entry for ground -- node 0).
Rules for making the j matrix
The j matrix is an mx1 matrix, with one entry for the current through each voltage source. So if there are two voltage sources V1 and V2, the j matrix will hold the two unknowns I_V1 and I_V2.
The z matrix is (m+n)x1 and holds our independent voltage and current sources. It will be developed as the combination of 2 smaller matrices i and e.
☆ the i matrix is nx1 and contains the sum of the currents through the passive elements into the corresponding node (either zero, or the sum of independent current sources).
☆ the e matrix is mx1 and holds the values of the independent voltage sources.
Rules for making the i matrix
The i matrix is an nx1 matrix with each element of the matrix corresponding to a particular node. The value of each element of i is determined by the sum of current sources into the
corresponding node. If there are no current sources connected to the node, the value is zero.
Rules for making the e matrix
The e matrix is an mx1 matrix with each element of the matrix equal in value to the corresponding independent voltage source.
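As a quick check of the rules above, they can be stamped directly into a matrix and solved numerically. The sketch below (Python with NumPy; the example circuit and its component values are my own, not from the page) uses a 10 V source Va from node 1 to ground, a 2 ohm resistor between nodes 1 and 2, and a 4 ohm resistor from node 2 to ground:

```python
import numpy as np

n, m = 2, 1  # two non-ground nodes, one independent voltage source

# G: diagonal = sum of conductances at each node,
#    off-diagonal = -conductance between the node pair.
G = np.array([[1/2,       -1/2],
              [-1/2, 1/2 + 1/4]])

# B: n x m, +1 in row k where the positive terminal of source i meets node k.
B = np.array([[1.0],
              [0.0]])

C = B.T                # transpose of B (independent sources only)
D = np.zeros((m, m))   # all zeros (independent sources only)

A = np.block([[G, B],
              [C, D]])

# z = [i; e]: no current sources, so i = 0; e holds the 10 V source value.
z = np.array([0.0, 0.0, 10.0])

x = np.linalg.solve(A, z)  # x = [v1, v2, I_Va]
print(x)
```

Solving gives v1 = 10 V, v2 ≈ 6.67 V and I_Va ≈ -1.67 A; the negative sign is just the MNA convention for the unknown current through the source.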
Back Erik Cheever's Home Page
Please email me with any comments or suggestions | {"url":"http://www.swarthmore.edu/NatSci/echeeve1/Ref/mna/MNAMatrixRules.html","timestamp":"2014-04-18T03:04:56Z","content_type":null,"content_length":"10873","record_id":"<urn:uuid:ec9bf49e-be0e-41fd-99ff-f1e2bbdc9098>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reading List
MATH 6962-1, Spring 2004
Reading List has been updated to include material assigned on
April 15, 2004
All readings will be listed here, organized by topic. Dates indicate the class by which you should have read the indicated material. Some readings may be announced here before they are announced in
class, in case you want to read ahead. Sometimes, particularly if I am traveling, you may find some references posted at the library class reserves before they are linked here.
If a reading does not have a date, you don't have to worry about it yet.
Fundamentals of Probability Theory and Stochastic Processes
Assigned Readings
☆ missing pages have been included
☆ This summarizes cumulants and generating functions.
Optional Readings
□ Kloeden & Platen, Numerical Solution of Stochastic Differential Equations, Sec. 2.1-2.2 (PDF)
☆ Brief overview of the connections between measure theory and probability. The copy is rather poor; I will try to replace it.
Finite-State, Discrete-Time Markov Chains
Assigned Readings
Lawler, Introduction to Stochastic Processes, Ch. 1 (PDF) (03/08/04)
Optional Readings
☆ Complete proof of existence and uniqueness of stationary distribution, and law of large numbers for Markov chains. Also some further discussion of recurrence/transience classification
techniques and computational techniques for stationary distribution.
☆ Perron-Frobenius theory for positive matrices
☆ Stationary distributions: Detailed balance, microreversibility, and Kirchhoff's method of solution
☆ Maximum likelihood method for choosing parameters in Markov chain
☆ Method for calculating variance of first hitting times, and application to delinquent credit management
☆ Dissection principle for Markov chains and some theory on recurrence and transience when the n-step probability transition densities can be computed explicitly.
Countable State, Discrete-Time Markov Chains
Assigned Readings
Lawler, Introduction to Stochastic Processes, Ch. 2 (PDF) (03/25/04)
Karlin & Taylor, A First Course in Stochastic Processes, Ch. 2 and 3 (PDF) (03/22/04)
Countable State, Continuous-Time Markov Chains
Assigned Readings
Lawler, Introduction to Stochastic Processes, Ch. 3 (PDF) (04/01/04)
Andersson and Britton, Stochastic Epidemic Models and Their Statistical Analysis, Ch. 2 (PDF) (04/12/04)
Optional Readings
□ Reichl, A Modern Course in Statistical Physics, Ch. 5 (PDF) (PDF file may still be corrupted)
☆ A concise discussion of Markov chains and stochastic differential equations from a physicist's perspective. A nice collection of concepts, but there are some misleading statements!
Assigned Readings
Karlin & Taylor, A First Course in Stochastic Processes, Sec. 6.1-6.4 (PDF) (04/12/04)
Continuous Space, Continuous-Time Markov Processes
Assigned Readings
Kloeden and Platen, Numerical Solution of Stochastic Differential Equations, Sec. 1.7,1.8 (PDF) (04/19/04)
Kloeden and Platen, Numerical Solution of Stochastic Differential Equations, Sec. 2.3,2.4 (PDF) (04/22/04)
Optional Readings
Friedman, Stochastic Differential Equations and Applications, Ch. 2
Stochastic Calculus
Assigned Readings
Kloeden and Platen, Numerical Solution of Stochastic Differential Equations, Ch. 3 (PDF) (04/30/04)
Stochastic Differential Equations
Optional Readings
Kloeden and Platen, Numerical Solution of Stochastic Differential Equations, Ch. 4.1, 4.5, 4.6, 4.8, 4.9 (PDF)
Kloeden and Platen, Numerical Solution of Stochastic Differential Equations, Ch. 6.1, 6.3 (PDF)
Numerical Solution of Stochastic Differential Equations
Assigned Readings
Higham, "An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations," SIAM Review 43 (3), 525-546 (2001)
Financial Derivatives and Partial Differential Equations
Optional Readings
Almgren, "Financial Derivatives and Partial Differential Equations," American Mathematical Monthly 109 (1), 1-12 (2002) | {"url":"http://homepages.rpi.edu/~kramep/Stoch/readings.html","timestamp":"2014-04-19T19:36:39Z","content_type":null,"content_length":"9620","record_id":"<urn:uuid:65476230-38d7-42a7-8c56-1963b33e0476>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00589-ip-10-147-4-33.ec2.internal.warc.gz"} |
general conformal transf. extension to hyperbolic manifold
I have a hyperbolic manifold whose boundary is a conformal sphere. Can I extend any conformal transformation of the boundary to the interior of the ball? I know how to do this with Moebius
transformations, but I wonder if there is a general prescription for generic holomorphic transformations on the boundary.
2 Like Igor, I'm not sure I understand the question. If the boundary of a hyperbolic manifold is a "conformal sphere", then isn't the hyperbolic manifold just the hyperbolic ball itself? If so, then
any conformal transformation of the boundary (sphere) is a Mobius transformation. And, as you apparently already know, this can be extended uniquely to an isometry of the interior. Did you mean to
ask something else? – Deane Yang Jul 13 '11 at 22:35
If your hyperbolic space has dimension $>3$ (so the sphere has dim $>2$), then it follows from Liouville's theorem that every conformal transformation is the restriction of a Mobius
transformation, and therefore of a hyperbolic isometry. However, since you use the term "holomorphic", I suspect you are referring to hyperbolic 3-space, with boundary $S^2$. – Ian Agol Aug 11 '11
at 0:29
2 Answers
I assume you mean that you want to extend a holomorphic map $\phi:S^2\to S^2$ to a map $\Phi: \overline{\mathbb{H}^3}\to \overline{\mathbb{H}^3}$. Since a holomorphic map $\phi:S^2\to S^2$ is
a rational map, and in particular a branched cover, you may extend it over the 3-ball (e.g. just by coning off). I assume, however, that you want to do it in some canonical way which is
natural under conjugation by the Mobius group. For example, the map $z\mapsto z^n$ gives a map of $S^2=\hat{\mathbb{C}}$ which extends in a natural way to $\mathbb{H}^3$ as a branched cover over a
geodesic connecting $0$ and $\infty$ in $\mathbb{H}^3$. I think one may also be able to use the Douady-Earle extension as Igor suggests. The higher-dimensional version of this is given by
Besson-Courtois-Gallot, and is called the "natural map". The rough idea is to associate to each point in $\mathbb{H}^3$ the visual measure on $\partial \mathbb{H}^3=S^2$, then push forward
this measure by the map at infinity, and then take the barycenter of this measure to determine where the point goes. I don't see why this couldn't work for a rational map, although it may be
tricky to compute. You might also have a look at a paper of Lyubich and Minsky who show how to associate a hyperbolic lamination to a rational map.
I don't really understand the question, but I suspect the keywords are "Douady-Earle extension"; see

Quasiconformal harmonic extension of a quasi-symmetric map on $S^1$

or the original paper:
| {"url":"http://mathoverflow.net/questions/70266/general-conformal-transf-extension-to-hyperbolic-manifold","timestamp":"2014-04-18T23:26:19Z","content_type":null,"content_length":"57062","record_id":"<urn:uuid:9447e9a1-8077-45c6-a0b6-42a5b660a4ca>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00181-ip-10-147-4-33.ec2.internal.warc.gz"} |
eight way symmetry of circle
January 29th 2011, 12:32 AM
Hi guys
Please explain to me the eight-way symmetry of a circle. I am learning computer graphics and it is an important concept for drawing circles. I have tried searching on Google but I could not grasp the concept.
thank you
January 29th 2011, 03:25 AM
I'm aware of the following symmetries of the circle:
1. Rotation about its center through any angle whatsoever. Doing this just gives your circle back again.
2. Reflection about any diameter of the circle. Again, you'll just get the circle back again.
The phrase "eight way symmetry" is therefore puzzling to me.
January 29th 2011, 08:14 AM
What's special about reflections in four particular diameters out of all the possible ones, and then four more from the resulting images, is that they capitalise on the coordinate system to save work.
Reflection in the x-axis is achieved by trading the y coordinate of each point for its negative, and reflection in the line y=x by swapping x and y values for each other. And similarly, you can work
out what to do for reflecting in the lines x=0 and y = -x.
So, if you draw a circle by specifying, say, 80 points/pixels as pairs of (x,y) coordinates so that the circle is centred at the origin (0,0), then happily you only have to do any real work to
specify the 10 of them that define one eighth of the circle - say, the arc between the x axis and the line y=x. Then for each of these ten you'll get 7 more in corresponding positions on the
grid, by trading and swapping. Good illustration here, Computer Graphics : Circle drawing : 5 / 8 : 8-way symmetry algorithm.
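The trading and swapping described above amounts to a tiny helper function. A sketch in Python (the function name and the sample point are mine): given one computed point (x, y) in the octant between the x-axis and the line y=x, the other seven points of the circle follow for free:

```python
def eight_way_points(x, y):
    """All eight circle points generated from one point (x, y) computed in the
    octant between the x-axis and y = x, for a circle centred at the origin."""
    return [(x, y), (y, x),      # swap coordinates: reflection in y = x
            (-y, x), (-x, y),    # negate x: reflection in the y-axis
            (-x, -y), (-y, -x),  # negate both: reflection through the origin
            (y, -x), (x, -y)]    # remaining reflections in y = 0 and y = -x

print(eight_way_points(2, 1))
```

Every returned pair has the same distance from the origin as (x, y), so each computed pixel yields eight pixels on the circle, which is exactly the saving the 8-way symmetry algorithm exploits.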
January 29th 2011, 04:36 PM
If between the x-axis and y=x the point is (x,y), then between the line y=x and the y-axis the coordinates of the point are (y,x). Continuing in the anticlockwise direction, in the next octant the
point is (-y,x), and then (-x,y), (-x,-y), (-y,-x), (y,-x), (x,-y). I want to know how these points have been found, because in basic coordinate geometry any point in the first quadrant has
coordinates (x,y), the 2nd quadrant (-x,y), the 3rd quadrant (-x,-y) and the fourth quadrant (x,-y). Is there some different rule for octants? I hope my question is clearer now. Please help me.
THANK YOU
January 29th 2011, 06:49 PM
(x,y) changes into (y,x) by reflection in y=x, so now the x coordinate of the point is y and the y coordinate is x. For example (2,1) is transformed into (1,2) by reflection in the line y=x.
Now we take the starting point as being (y, x) which by reflection in the y axis becomes (-y, x). That is, the point (1,2) is transformed into (-1,2) by reflection in the y-axis. | {"url":"http://mathhelpforum.com/geometry/169626-eight-way-symmetry-circle-print.html","timestamp":"2014-04-21T05:32:58Z","content_type":null,"content_length":"8330","record_id":"<urn:uuid:3c4db262-ef57-40cd-b0f0-a422130f5a59>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00075-ip-10-147-4-33.ec2.internal.warc.gz"} |
SAS-L archives -- December 2005, week 5 (#20) -- LISTSERV at the University of Georgia
Date: Thu, 29 Dec 2005 06:35:41 -0500
Reply-To: Peter Flom <flom@NDRI.ORG>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Peter Flom <flom@NDRI.ORG>
Subject: Re: parametric and non-parametric test.
Comments: To: dokiya777@GMAIL.COM
Content-Type: text/plain; charset=US-ASCII
>>> Raj Datta <dokiya777@GMAIL.COM> >>> wrote
I have 2 groups of data, each with a sample size of 75. Each group
contains a number of stores, and the tracking variable is the number of sales.
When I do the normality test, it fails. But I went ahead anyway and
did a t test on the 2 groups, and it turns out to be significant. But
somebody told me that if the normality test has failed (though the sample size
is >30) I should use the Mann-Whitney U test.
Please help me decide which test to use. This is a problem pertaining to
parametric and non-parametric tests.
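As an aside to the thread, the Mann-Whitney U test it mentions is short to write down using the large-sample normal approximation, which is adequate for samples of 75. A sketch (Python; the two sales samples below are invented for illustration, not the poster's data, and the tie correction is omitted for brevity):

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation."""
    n1, n2 = len(a), len(b)
    # U = number of pairs (x, y) with x > y, counting each tie as 1/2.
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    mean = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mean) / sd
    # Two-sided p-value from the standard normal CDF, Phi(t) = (1 + erf(t/sqrt(2)))/2.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p

group1 = [i for i in range(1, 76)]        # hypothetical sales counts
group2 = [i + 30 for i in range(1, 76)]   # a clearly shifted second group
u, p = mann_whitney_u(group1, group2)
print(u, p)
```

With the shifted data the p-value is far below 0.05, while running the test on two identical samples returns p = 1, as expected for a test statistic sitting exactly at its null mean.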
Why is the tracking variable 'number of sales' as opposed to 'dollar value of sales' or something?
What question are you interested in (not what STATISTICS question, but what substantive question)?
It's very unclear what you are trying to model, and without knowing that, neither I nor anyone else
can give good advice. If you write back with more details (to SAS-L, not just to me, please) then
someone here will probably be able to help you. | {"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0512e&L=sas-l&O=A&F=&S=&P=2427","timestamp":"2014-04-20T00:40:03Z","content_type":null,"content_length":"10226","record_id":"<urn:uuid:73105172-139e-4d42-8982-8fcb029fee6d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00197-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inductor : Inductance of a Coil
Inductance is the name given to the property of a component that opposes the change of current flowing through it and even a straight piece of wire will have some inductance. Inductors do this by
generating a self-induced emf within itself as a result of their changing magnetic field. When the emf is induced in the same circuit in which the current is changing this effect is called
Self-induction, ( L ) but it is sometimes commonly called back-emf as its polarity is in the opposite direction to the applied voltage.
When the emf is induced into an adjacent component situated within the same magnetic field, the emf is said to be induced by Mutual-induction, ( M ) and mutual induction is the basic operating
principle of transformers, motors, relays etc. Self-inductance is a special case of mutual inductance, and because it is produced within a single isolated circuit we generally call self-inductance
simply Inductance. The basic unit of inductance is called the Henry, ( H ) after Joseph Henry, but it also has the units of Webers per Ampere (1 H = 1 Wb/A).
Lenz's Law tells us that an induced emf generates a current in a direction which opposes the change in flux which caused the emf in the first place, the principle of action and reaction. Then we can
accurately define Inductance as being "a circuit will have an inductance value of one Henry when an emf of one volt is induced in the circuit where the current flowing through the circuit changes at a
rate of one ampere/second". In other words, a coil has an inductance of one Henry when the current flowing through it changes at a rate of one ampere/second, inducing a voltage of one volt in it, and
this definition can be presented as:

e = -L (di/dt)
Inductance, L, is actually a measure of an inductor's "resistance" to the change of the current flowing through the circuit: the larger its value in Henries, the lower the rate of current change.
We know from the previous tutorial about the Inductor, that inductors are devices that can store their energy in the form of a magnetic field. Inductors are made from individual loops of wire
combined to produce a coil and if the number of loops within the coil are increased, then for the same amount of current flowing through the coil, the magnetic flux will also increase. So by
increasing the number of loops or turns within a coil, increases the coils inductance. Then the relationship between self-inductance, ( L ) and the number of turns, ( N ) and for a simple single
layered coil can be given as:
Self Inductance of a Coil

L = N Φ / I
L is in Henries
N is the Number of Turns
Φ is the Magnetic Flux Linkage
Ι is in Amperes
This expression can also be defined as the flux linkage divided by the current flowing through each turn. This equation only applies to linear magnetic materials.
Example No1

A hollow air-cored inductor coil consists of 500 turns of copper wire which produces a magnetic flux of 10mWb when passing a DC current of 10 amps. Calculate the self-inductance of the coil in henries.

L = N Φ / I = (500 × 10×10^-3) / 10 = 0.5 H
Example No2

Calculate the value of the self-induced emf produced in the same coil after a time of 10mS.

Taking the current, and hence the flux, to fall uniformly to zero over the 10mS, e = N (dΦ/dt) = 500 × (10×10^-3 / 10×10^-3) = 500 V
The self-inductance of a coil, or to be more precise the coefficient of self-inductance, also depends upon the characteristics of its construction. For example, size, length, number of turns etc. It is
therefore possible to have inductors with very high coefficients of self-induction by using cores of a high permeability and a large number of coil turns. Then for a coil, the magnetic flux that is
produced in its inner core is equal to:

Φ = B A

where Φ is the magnetic flux linkage, B is the flux density, and A is the area.
If the inner core of a long solenoid with N turns of wire is hollow, "air cored", the magnetic induction in its core is given as:

B = μ[ο] N I / l

Then by substituting these expressions into the first equation above for inductance, this gives us:
By cancelling out and grouping together like terms, the final equation for the coefficient of self-inductance of an air-cored coil (solenoid) is given as:

L = μ[ο] N^2 A / l
L is in Henries
μ[ο] is the Permeability of Free Space (4.π.10^-7)
N is the Number of turns
A is the Inner Core Area (π.r^2) in m^2
l is the length of the Coil in metres
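The solenoid formula above is straightforward to evaluate numerically. A short sketch (Python, with illustrative coil dimensions chosen here rather than taken from the text):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, in H/m

def solenoid_inductance(turns, radius, length, mu_r=1.0):
    """L = mu0 * mu_r * N^2 * A / l for a single-layer solenoid.
    mu_r = 1 for an air core; much larger for a ferromagnetic core."""
    area = math.pi * radius ** 2          # inner core area, m^2
    return MU0 * mu_r * turns ** 2 * area / length

# Hypothetical air-cored coil: 100 turns, 1 cm radius, 5 cm long.
L = solenoid_inductance(turns=100, radius=0.01, length=0.05)
print(L)  # roughly 79 microhenries
```

Passing mu_r = 1000 for a soft-iron core scales the result by exactly 1000, matching the statement in the text that inductance grows in proportion to the core's relative permeability.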
As the inductance of a coil is due to the magnetic flux around it, the stronger the magnetic flux for a given value of current the greater will be the inductance. So a coil of many turns will have a
higher inductance value than one of only a few turns and therefore, the equation above will give inductance L as being proportional to the number of turns squared N^2. As well as increasing the
number of coil turns, we can also increase inductance by increasing the coils diameter or making the core longer. In both cases more wire is required to construct the coil and therefore, more lines
of force exists to produce the required back emf. The inductance of a coil can be increased further still if the coil is wound onto a ferromagnetic core, that is one made of a soft iron material,
than one wound onto a non-ferromagnetic or hollow air core.
If the inner core is made of some ferromagnetic material such as soft iron, cobalt or nickel, the inductance of the coil would greatly increase because for the same amount of current flow the
magnetic flux would be much stronger. This is because the lines of force would be more concentrated through the ferromagnetic core material as we saw in the Electromagnets tutorial. For example, if
the core material has a relative permeability 1000 times greater than free space, 1000μ[ο] such as soft iron or steel, than the inductance of the coil would be 1000 times greater so we can say that
the inductance of a coil increases proportionally as the permeability of the core increases. Then for a coil wound around a former or core the inductance equation above would need to be modified to
include the relative permeability μ[r] of the new former material.
If the coil is wound onto a ferromagnetic core a greater inductance will result as the cores permeability will change with the flux density. However, depending upon the ferromagnetic material the
inner cores magnetic flux may quickly reach saturation producing a non-linear inductance value and since the flux density around the coil depends upon the current flowing through it, inductance, L
also becomes a function of current, i.
In the next tutorial about Inductors, we will see that the magnetic field generated by a coil can cause a current to flow in a second coil that is placed next to it. This effect is called Mutual
Inductance, and is the basic operating principal of transformers, motors and generators.
Reproduced with permission from Wayne Storr
( http://www.electronics-tutorials.ws/inductor/inductance.html ) | {"url":"http://www.dnatechindia.com/Tutorial/Inductor/Inductor-Inductance-of-a-Coil.html","timestamp":"2014-04-18T06:26:15Z","content_type":null,"content_length":"68518","record_id":"<urn:uuid:8e33bfc3-ea4f-4aa1-8b86-333c09c7963f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00362-ip-10-147-4-33.ec2.internal.warc.gz"} |
Utilities shared by various UIs
strokePicture :: [Span (Endo a)] -> [(Point, a -> a)]Source
Turn a sequence of (from,style,to) strokes into a sequence of picture points (from,style), taking special care to ensure that the points are strictly increasing and introducing padding segments where
necessary. Precondition: strokes are ordered and not overlapping.
paintStrokes :: (a -> a) -> a -> [(Point, a -> a)] -> [(Point, a)] -> [(Point, a)]Source
Paint the given stroke-picture on top of an existing picture | {"url":"http://hackage.haskell.org/package/yi-0.6.5.0/docs/Yi-UI-Utils.html","timestamp":"2014-04-18T03:38:37Z","content_type":null,"content_length":"10607","record_id":"<urn:uuid:ecd7eaef-d880-40f8-a88a-4630c8647ed0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00036-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cheverly, MD Statistics Tutor
Find a Cheverly, MD Statistics Tutor
I have 11 years' experience teaching and mentoring college undergraduate and graduate students on quantitative research projects, including teaching applied statistics and research methods, and
using SPSS, STATA, and Excel. I also have over 20 years of research experience in the social sciences, mo...
6 Subjects: including statistics, SPSS, Microsoft Excel, Microsoft Word
...I took the MCAT in March of 2013, scoring in the 95th percentile, and I took both the SAT and ACT with scores at or above the 95th percentile. I can help with subject mastery and test taking
tips and strategies. I have particular strengths in verbal reasoning and writing (13 on MCAT Verbal Reas...
39 Subjects: including statistics, reading, chemistry, Spanish
...The curriculum included a strong marketing component with an international focus. My 25 year career as an engineer and technical consultant has required me to identify and pursue new market
opportunities to grow my employers' companies and, currently, my sole proprietorship. I have taken several Praxis Tests and have done very well on all of them.
31 Subjects: including statistics, chemistry, calculus, physics
...I have taught and given tutorials in mathematics at the undergraduate level for 6 years as an Assistant Lecturer. I have also taken tutorials in mathematics at the undergraduate level as a
Teaching Assistant for 4 years. Some of the areas in mathematics I taught included Calculus, Algebra, Statistics and Probability, Ordinary Differential Equations and Difference equations.
10 Subjects: including statistics, calculus, algebra 1, algebra 2
...While in college, I was on the Putnam Intercollegiate Math Competition team for 3 consecutive years, and won several math competitions. I had a 4.0 GPA in math as an undergraduate (graduating
with more than twice the number of required credit hours in math). I also obtained a perfect score in Co...
36 Subjects: including statistics, physics, calculus, geometry
Related Cheverly, MD Tutors
Cheverly, MD Accounting Tutors
Cheverly, MD ACT Tutors
Cheverly, MD Algebra Tutors
Cheverly, MD Algebra 2 Tutors
Cheverly, MD Calculus Tutors
Cheverly, MD Geometry Tutors
Cheverly, MD Math Tutors
Cheverly, MD Prealgebra Tutors
Cheverly, MD Precalculus Tutors
Cheverly, MD SAT Tutors
Cheverly, MD SAT Math Tutors
Cheverly, MD Science Tutors
Cheverly, MD Statistics Tutors
Cheverly, MD Trigonometry Tutors
Nearby Cities With statistics Tutor
Bladensburg, MD statistics Tutors
Capitol Heights statistics Tutors
Edmonston, MD statistics Tutors
Fairmount Heights, MD statistics Tutors
Glenarden, MD statistics Tutors
Hyattsville statistics Tutors
Landover Hills, MD statistics Tutors
Landover, MD statistics Tutors
Lanham Seabrook, MD statistics Tutors
New Carrollton, MD statistics Tutors
North Brentwood, MD statistics Tutors
Riverdale Park, MD statistics Tutors
Riverdale Pk, MD statistics Tutors
Riverdale, MD statistics Tutors
Tuxedo, MD statistics Tutors | {"url":"http://www.purplemath.com/Cheverly_MD_statistics_tutors.php","timestamp":"2014-04-17T15:40:09Z","content_type":null,"content_length":"24521","record_id":"<urn:uuid:b901749d-c20e-4617-8b93-b3649ef93c90>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
Real functions with finitely many zeroes
I am looking for as general a class as possible of real functions defined on $\mathbb{R}^+$ that are guaranteed to have a finite number of zeroes - no, polynomials are not enough :).
Specifically, consider a class of functions defined like elementary functions, but without allowing for complex constants. It seems to me that such functions must have a finite number of zeros. Any
ideas on how to prove this (or counterexamples)?
Background: What I'm actually looking to do is prove that a real positive function $f:\mathbb{R}^+\to\mathbb{R}^+$ is eventually concave, i.e. there exists $x_0\geq 0$ such that $f(x)$ is concave for
every $x\geq x_0$. I know that $f$ is real analytic, increasing and upper bounded, but its exact formula is intractable. It thus suffices to show that $f''$ has a finite number of zeroes and hence my
question. The structure of $f''$ is more complicated than a "limited elementary function" described above, but a result on such functions will definitely be a step in the right direction.
ca.analysis-and-odes fa.functional-analysis
Isn't $\log(x+1)$ a counterexample? – Igor Khavkine Oct 15 '12 at 10:53
Why so? It seems to me $log(1+x)$ equals 0 only once... – Yair Carmon Oct 15 '12 at 11:04
You need something better than the vague definition of "elementary function" in Wikipedia. For example (on $\mathbb R$) is $\sqrt{x^2}-x = |x|-x$ supposed to be called "elementary"? I don't think
so, but you can't see that from the Wikipedia definition. – Gerald Edgar Oct 15 '12 at 13:01
Can you show us the explicit formula you have for $f$ ? Do you have a differential equation ? – Lierre Oct 15 '12 at 13:50
I meant that $\log(x+1)$ is not eventually concave. Though perhaps I'm misunderstanding something about the motivation part of your question... ah, it seems that for you $f$ is some specific
function that you didn't define, rather any function. – Igor Khavkine Oct 15 '12 at 14:39
6 Answers
Let $\mathcal R$ be any o-minimal expansion of the real ordered field $(\mathbb R,<,0,1,+,-,\cdot)$, and $\mathcal F$ be the class of functions (first-order) definable (with real
parameters) in $\mathcal R$. On the one hand, $\mathcal F$ has various nice closure properties (in particular, it is closed under composition, taking inverse functions, and derivatives).
On the other hand, o-minimality guarantees that for any $f\in\mathcal F$, $f\colon\mathbb R\to\mathbb R$, its positive set $\{x\in\mathbb R:f(x)>0\}$ is a finite union of points and
intervals; in particular, $f$ is eventually positive, eventually negative, or eventually constant $0$.
Note that in practice, theories of structures known to be o-minimal are often also model complete, hence a function is definable iff its graph is a projection of a Boolean combination of
positive sets of the basic functions included in its signature.
Wilkie proved that the exponential field $\mathbb R_{\exp}=(\mathbb R,\exp)$ is o-minimal. The class of functions definable in $\mathbb R_{\exp}$ includes the functions mentioned in your
up vote 5 question, so the answer to your specific question is positive.
down vote
accepted Even larger expansions of $\mathbb R$ are known to be o-minimal. First, by a result of van den Dries, $\mathbb R_\mathrm{an}$ is o-minimal, which is the expansion of $\mathbb R$ by all
real-analytic functions $f\colon[0,1]^n\to\mathbb R$ (extended by the constant $0$ function outside $[0,1]^n$ to be defined on the whole of $\mathbb R^n$). Second, the pfaffian closure $
\mathcal R_\mathrm{pfaff}$ of any o-minimal expansion $\mathcal R$ of $\mathbb R$ is again o-minimal, due to Speisseger. In particular, $\mathbb R_\mathrm{an,pfaff}$ is o-minimal. (The
full definition of the pfaffian closure can be found e.g. in [1]. In particular, it includes all pfaffian functions such as $\exp$.)
[1] Patrick Speissegger, Pfaffian Sets and O-minimality, in: Lecture Notes on O-Minimal Structures and Real Analytic Geometry (C. Miller, J.-P. Rolin and P. Speissegger, eds.), Fields
Institute Communications vol. 62, 2012, pp. 179–218, http://dx.doi.org/10.1007/978-1-4614-4042-0_5
Thanks! It's also worth mentioning Hardy's class $L$ of "orders of infinity", which also answers my specific question in the affirmative; cf. the first theorem in "An extension of Hardy's class $L$ of 'orders of infinity'" by Michael Boshernitzan. Credit for this reference goes to Prof. Mikhail Sodin from TAU. – Yair Carmon Oct 16 '12 at 15:10
I think that the theory of o-minimal structures could provide a good answer to your question. See the Pisa lecture notes of Michel Coste or the book of van den Dries (Tame Topology and o-minimal Structures, 1998).
Emil's answer is the definitive one, but I thought I would add some details. Wilkie's result about $\mathbb{R}_{\exp}$ that he mentions relies in part on Khovanskii's theory of fewnomials.
In a way, Wilkie's theorem is overkill for your purpose, especially if you're interested in elementary functions, since Wilkie's result deals with the multitude of functions in the expansion that are definable but hard to describe succinctly.
On the other hand, Khovanskii's original result is much more hands-on (though in no way constructive), relying on three purely elementary ingredients: perturbation, Rolle's theorem, and
the Bezout inequality. So if you need at all to "look under the hood" and see why such a result may be true, you may want to take a look at Khovanskii's book. The beginning is rather
accessible and contains a detailed proof of what you need.
Thanks! I couldn't find a copy of the book online, but I did find a paper by Khovanskii titled "Fewnomials and Pfaff Manifolds" - does this also contain what I need? – Yair Carmon Oct 25 '12 at 12:05
The paper should cover the same things. I would still advise you to try for the book though because I would assume it has more context and is more reader-friendly. From where I am, most
of the book is available on Google books and Chapter I is probably the most useful for you. – Thierry Zell Oct 25 '12 at 14:11
Regarding the statement of your background: The claim that a bounded, increasing function $f: \mathbb{R}^+\to \mathbb{R}^+$ will be concave for every $x \geq x_0$ is in general wrong.
Counterexample: Let us consider, for a parameter $a>0$, the function $$ f_a(x) = 1 - e^{-x}(1+1/\sqrt a \sin x) .$$ Obviously, $f_a$ is bounded. Moreover, it is easy to check that $$ f_a'(x) = 1/\sqrt a \ e^{-x}(\sqrt{a}-\cos x + \sin x) > 0 ,\quad\text{whenever}\quad a > 2 . $$ The second derivative is given by $$ f_a''(x) = 1/\sqrt a\ e^{-x}(2\cos x - \sqrt{a}) $$ and has no fixed sign whenever $a < 4$. Hence for $a\in (2,4)$ the statement is wrong.
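The derivatives in this counterexample can be checked symbolically. The following SymPy sketch (added here; not part of the original answer) verifies both claimed closed forms and evaluates $f_a''$ at sample points for $a=3$:

```python
import sympy as sp

# Symbols: assume a > 0 as in the answer.
x, a = sp.symbols('x a', positive=True)
f = 1 - sp.exp(-x) * (1 + sp.sin(x) / sp.sqrt(a))

# First derivative vs. the claimed closed form.
f1 = sp.diff(f, x)
claimed_f1 = sp.exp(-x) * (sp.sqrt(a) - sp.cos(x) + sp.sin(x)) / sp.sqrt(a)
print(sp.simplify(f1 - claimed_f1))  # 0

# Second derivative vs. the claimed closed form.
f2 = sp.diff(f, x, 2)
claimed_f2 = sp.exp(-x) * (2 * sp.cos(x) - sp.sqrt(a)) / sp.sqrt(a)
print(sp.simplify(f2 - claimed_f2))  # 0

# For a = 3, inside (2, 4): f'' takes both signs.
f2_3 = f2.subs(a, 3)
print(float(f2_3.subs(x, 0)), float(f2_3.subs(x, 2)))  # positive, negative
```

Since $2\cos x$ keeps oscillating around $\sqrt a$ when $a\in(2,4)$, the sign changes of $f_a''$ recur on every period, so $f_a$ is increasing and bounded but never eventually concave.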
I don't think that the OP meant the question for ANY function, but for some specific function which is too complicated to describe explicitly. – Igor Rivin Oct 15 '12 at 14:15
I never made such a claim. Instead, what I'm saying is that if $f$ is increasing and bounded AND $f''$ has finitely many zeros (unlike your proposed counterexample) – Yair Carmon Oct 15 '12
at 14:58
... then $f$ is eventually concave. The "finitely many zeros" issue is the motivation for my actual question. – Yair Carmon Oct 15 '12 at 15:00
I'm still not totally clear on the precise question.
Consider classes of functions from (all or part of) $\mathbb{R}^+$ to $\mathbb{R}.$ We might demand that the domain be all of $\mathbb{R}^+$, all but finitely many points of $\mathbb{R}^+$,
some union of finitely many sub-intervals or merely a "reasonable" subset. We might also require only finitely many zeros (on the domain) or no zeros. We might even require that the function
be positive. If we desire closure under composition, addition, multiplication, subtraction and/or division then some of the domain/range options are compatible and others are not. This
applies too if we want to include $\ln(x)$ or other specified functions. In any case, the constant zero function may be an exceptional member with certain restrictions which go unmentioned.
Here is a question; is something like it (or related to it) what you are asking?
Consider the class $G$ of all functions with domain a subset of $\mathbb{R}^+.$ Let $H$ be the smallest subclass which is closed under composition, addition, multiplication, subtraction and
division and contains the constant functions along with $x^r$ and $a^x,\log_a(x)$ for $a \gt 0. $ QUESTION: The initially specified functions all have only finitely many zeros. Is this
true for all of $H?$ What if we also allow exponentiation $f(x)^{g(x)}?$
Qualifications: Here composition and division may contract the domain. Also, "finitely many zeros" should be understood to mean that the set of zeros is a finite union of intervals some or
all of which may be singleton points.
Actually the initially specified functions have no zeros. One could say "polynomials" but that seems wasteful if one is about to allow addition and multiplication. – Aaron Meyerowitz Oct
15 '12 at 19:55
Here is a general algebraic approach which may give you some help.
It may even solve your problem, although I am not promising that.
Clones in universal algebra are sets of functions which contain the projections of all arities and are closed under functional composition. Although I was not trained to do this, they can be
viewed as a graded collection by arity, and one can look at the binary or ternary or (as in your case) unary members of the clone.
A simple result is that if one has a clone generated by a beginning set of functions B, then all the functions of each grade can be determined by B acting on a sufficiently large subset
of members of just that grade, without using members of the other grades.
Something that you would like to have happen is to find a clone whose unary grade a) contains only functions with finitely many zeros b) is generated by operations in a small set B which
are precisely those used in your target function (which I will call g instead of f''), and c) contains g.
Much as you might like it, that may not happen because B is "too rich" to be able to satisfy condition a). One approach to try is to "thin out" B: create some terms out of functions of B,
make a new set B', and hope to make a subclone which will satisfy a). Hopefully you will be able to satisfy c), but thinning out the generating set may also toss out g.
Alternatively, you could look at the (unary grade of the) clone generated by g, or by g and a skilled choice of operations from B. If g's clone already contains functions with infinitely
many zeros, then so will any clone that contains g, in which case you will know that a purely clone-theoretic approach, even with a judicious choice from B, will not give you what you want.
Even so, don't give up yet. The difference of clones is not a clone but may be useful. You might show that a member of the unary grade either has finitely many zeros or has some property
Q, where Q is preserved by the clone generating scheme. Now the hope is to find a helpful property Q which is something that you can demonstrate g does not have.
I realize the above is just an abstract nonsense version of what you already know, but it might be a useful shift in perspective for you.
Gerhard "Ask Me About System Design" Paseman, 2012.10.16
Not the answer you're looking for? Browse other questions tagged ca.analysis-and-odes fa.functional-analysis or ask your own question. | {"url":"http://mathoverflow.net/questions/109705/real-functions-with-finitely-many-zeroes/109798","timestamp":"2014-04-19T12:45:45Z","content_type":null,"content_length":"93098","record_id":"<urn:uuid:e189ace1-2390-4a85-8798-adb6ca6fe0e7>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00308-ip-10-147-4-33.ec2.internal.warc.gz"} |
The notion of cofibration is dual to that of fibration. See there for more details.
A cofibration is a member of a distinguished class of cofibrations in one of the several setups in homotopy theory:
In traditional topology, one usually means a Hurewicz cofibration.
In category theory and descent theory, Grothendieck however introduced a notion of cofibered category or cofibration, whose definition is categorically dual to that of a fibered category and based on
the generalization of the universal property of coCartesian squares; however it does not correspond to the extension property in topology, but to a lifting property, like for fibrations. For that
reason, Gray suggested (and this is to some extent adopted in $n$lab) to call such categories opfibrations. In the quasicategorical setup, the generalizations of fibered categories are called
Cartesian fibrations, and the generalizations of op/co-fibered categories are called coCartesian fibrations. In the $1$-categorical setup, the coCartesian arrows in op/co-fibrations are indeed the
ones which complete coCartesian squares in the special and most important case of domain opfibrations.
Revised on January 7, 2012 13:42:06 by
Urs Schreiber | {"url":"http://www.ncatlab.org/nlab/show/cofibration","timestamp":"2014-04-18T21:06:27Z","content_type":null,"content_length":"24747","record_id":"<urn:uuid:2de17059-6544-49d3-a215-183a3b0187a7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] Substitution help...
February 6th 2010, 06:48 PM #1
Hi, I just need help with taking the derivative of the substitution factor. From there I can (hopefully) get the rest.
$x^3 \dfrac{dy}{dx} + x^2y+x=3e^{xy}$
Change of variable given in problem: $z = e^{xy}$
Is this part right? Usually the right side of this only has a $y$, so I'm not quite sure what to do. Thanks.
if $z = e^{xy}$, then $\frac{dz}{dx} = e^{xy}\left(y + x\frac{dy}{dx}\right)$
ahh with respect to both x and y.
No, not with respect to both $x$ and $y$. Only with respect to $x$. But since $y$ is a function of $x$, you have to use the chain rule.
I.e. $z = e^{xy}$
Let $u = xy$ so that $z = e^u$.
$\frac{du}{dx} = x\frac{d}{dx}(y) + y\frac{d}{dx}(x)$
$= x\frac{dy}{dx} + y$.
$\frac{dz}{du} = e^u = e^{xy}$.
So $\frac{dz}{dx} = \left(x\frac{dy}{dx} + y\right)e^{xy}$.
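The derivative above can be confirmed mechanically; here is a quick SymPy check (my addition, not part of the original thread):

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')(x)   # y is an unspecified function of x
z = sp.exp(x * y)

dz = sp.diff(z, x)
claimed = (x * sp.diff(y, x) + y) * sp.exp(x * y)
print(sp.simplify(dz - claimed))  # 0, so the two expressions agree
```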
| {"url":"http://mathhelpforum.com/differential-equations/127540-solved-substitution-help.html","timestamp":"2014-04-23T16:37:15Z","content_type":null,"content_length":"43086","record_id":"<urn:uuid:34a555a0-91a2-4fb8-9301-ec69f6e741c6>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Advanced Parallel Programming: Models, Languages, Algorithms
Papers for Student Presentations
Choose one paper from the following list for presentation and another (preferably related) one for opposition.
TRANSACTIONAL MEMORY
PARALLEL ALGORITHMS
Task: Prepare a 20-minute presentation of your chosen paper and at least 3 questions on the other paper for opposition.
After the presentation, hand in a written summary (2-3 pages) of the paper you presented.
This page is maintained by Christoph Kessler (chrke \at ida.liu.se) | {"url":"http://www.ida.liu.se/~chrke/courses/APP/papers.html","timestamp":"2014-04-16T05:26:50Z","content_type":null,"content_length":"6492","record_id":"<urn:uuid:50efac2a-2a27-4b40-8290-485c801380af>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Aspects of Modeling Languages for Mathematical Programming
Wednesday, May 12
9:00 AM-11:00 AM
Room: Capitol North
Modeling languages and the computing environments that support them help simplify and speed the tasks of choosing, using, and understanding models whose prime focus is the solution of constrained
optimization problems. This minisymposium will present four perspectives on modeling environments and their interactions with solution algorithms. The speakers will give an overview of algebraic
modeling languages and some new directions; discuss modeling systems from the solver's perspective; tell how modeling systems can help solve applications in parallel; and discuss languages and
libraries for constraint-logic programming, for problems that partly involve discrete variables.
Organizers: David M. Gay
Bell Laboratories, Lucent Technologies
Alexander Meeraus
GAMS Development Corporation
9:00-9:25 Developments in Algebraic Modeling Languages for Optimization
Robert Fourer, Northwestern University; and David M. Gay, Organizer
9:30-9:55 Modeling Systems: The Benefits for Nonlinear Optimization Solvers
Arne Stolbjerg Drud, ARKI Consulting and Development, Bagsvaerd, Denmark
10:00-10:25 Modeling Languages and Condor: Metacomputing for Optimization
Michael C. Ferris and Todd S. Munson, University of Wisconsin, Madison
10:30-10:55 Constraint Programming Languages and Libraries
Ken McAloon, ILOG, Inc., Mountain View, California
MMD, 12/21/98 | {"url":"http://www.siam.org/meetings/op99/ms32.htm","timestamp":"2014-04-20T14:49:30Z","content_type":null,"content_length":"3898","record_id":"<urn:uuid:eaef02cb-2eeb-40f4-a896-4f4fe93e14a3>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00326-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: you estimated at least as many quantities as you have observations
From "Koksal, Bulent" <bkoksal@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject st: you estimated at least as many quantities as you have observations
Date Fri, 5 Oct 2007 14:55:19 +0300
What does the note below mean? Does it mean that the results are unreliable? Thanks.
Note: you estimated at least as many quantities as you have observations.
Bülent Köksal
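[The underlying statistical issue is general, not Stata-specific: if you estimate as many quantities as you have observations, the model can fit the data exactly, leaving zero residual degrees of freedom, so measures of uncertainty cannot be estimated. A small NumPy illustration of that general point (my own sketch; it does not reproduce Stata's internals):]

```python
import numpy as np

# n observations, n parameters: the linear fit is exact, residuals are
# (numerically) zero, and no residual degrees of freedom remain for
# estimating the error variance.
rng = np.random.default_rng(1)
n = 5
X = rng.normal(size=(n, n))   # design matrix: n obs x n params
yv = rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
residuals = yv - X @ beta
df_resid = n - X.shape[1]

print(np.max(np.abs(residuals)))  # ~0: perfect in-sample fit
print(df_resid)                   # 0: no residual degrees of freedom
```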
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2007-10/msg00167.html","timestamp":"2014-04-17T04:32:23Z","content_type":null,"content_length":"5575","record_id":"<urn:uuid:d96f925b-6b57-496e-ae86-3d6c3479bfc1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
Department of Mathematical Sciences
Professors: Larry P. Ammann, Michael Baron, Sam Efromovich, M. Ali Hooshyar, Patrick L. Odell (Emeritus), Istvan Ozsvath, Viswanath Ramakrishna, Ivor Robinson (Emeritus), Robert Serfling, Janos Turi,
John W. Van Ness (Emeritus), John Wiorkowski
Assistant Professors: Yan Cao, Pankaj Choudhary, Mieczyslaw Dabkowski
Adjunct Professors: Jose Carlos Gomez Larranage, Adolfo Sanchez Valenzuela
Affiliated Faculty: Herve Abdi (BBS), Raimund J. Ober (EE), Alain Bensoussan (SOM), Thomas Butts and Titu Andreescu (SME)
Senior Lecturers: Frank R. Allum, Bentley Garrett, Yuly Koshevnik, David L. Lewis, Charles R. McGhee, Randall E. Rausch, Joanna R. Robinson, William Scott, Paul Stanford
The Mathematical Sciences Department at The University of Texas at Dallas offers graduate study in five majors: applied mathematics, engineering mathematics, mathematics, statistics, and an
interdisciplinary degree in Bioinformatics and Computational biology. The degree programs offer students the opportunity to prepare for careers in these disciplines themselves or in any of the many
other fields for which these disciplines are such indispensable tools. As other sciences develop, problems which require the use of these tools are numerous and pressing.
In addition to a wide range of courses in mathematics and statistics, the Mathematical Sciences Department offers a unique selection of courses that consider mathematical and computational aspects of
engineering, biology and other scientific problems.
The Master of Science degree programs are designed for persons seeking specializations in applied mathematics, engineering mathematics, mathematics, statistics, bioinformatics and computational
The Master of Science degree is available also for those who plan to teach mathematical sciences above the remedial level at a community college or at a college or university. The Master of Science
degree is recommended as a minimum, since an earned doctorate is sometimes required.
For information concerning the Master of Arts in Teaching in Mathematics Education, designed for persons who are teaching in grades 6-12, see the Science and Mathematics Education section.
The Doctor of Philosophy degree programs cover two basic areas of concentration: statistics and applied mathematics. They are designed for those who plan to pursue academic, financial or industrial
The faculty, staff and students have access to a large network of Sun workstations and servers on campus. In addition, the Department has a classroom equipped with a cluster of 20 high-end Linux PCs
that are used for instruction and special research purposes.
Admission Requirements
The University’s general admission requirements are discussed here.
Specific additional admission requirements for students in Mathematical Sciences follow. Students lacking undergraduate prerequisites for graduate courses in their area must complete these
prerequisites or receive approval from the graduate adviser and the course instructor before registering.
One of the components of a student’s academic history which is evaluated when the student is seeking admission to the graduate program is his/her performance on certain standardized tests. Since
these tests are designed to indicate only the student’s potential for graduate study, they are used in conjunction with other measures of student proficiency (such as GPA, etc.) in determining the
admission status of a potential graduate student. Accordingly, there is no rigid minimum cut-off score for admission to the program. However, a student with at least a Graduate Record Examination
(GRE) combined score of 1050 with at least 550 on the math portion would have a reasonable probability of admission as a Master’s student, assuming that the student’s other credentials were in order.
Similarly, a student with a GRE score of 1200 (with at least 650 in the quantitative portion) would have a reasonable probability of admission as a Ph.D. student, assuming that all other credentials
were in order. Higher standards prevail for students seeking Teaching Assistantships.
Degree Requirements
Master of Science
The University’s general degree requirements are discussed here.
Students seeking a Master of Science in Mathematical Sciences must complete a total of 12 three-credit hour courses. In some cases, 3 hours of credit may be approved for a strong mathematics background. The
student may choose a thesis plan or a non-thesis plan. In the thesis plan, the thesis replaces two elective courses with completion of an approved thesis (six thesis hours). The thesis is directed by
a Supervising Professor and must be approved by the Head of the Mathematical Sciences Department.
Each student must earn a 3.0 minimum GPA in the courses listed for the student’s program.
Applied Mathematics Major
MATH 5301-5302 Elementary Analysis I and II (or equivalent)
MATH 6303 Theory of Complex Functions
MATH 6313 Numerical Analysis
MATH 6315 Ordinary Differential Equations
MATH 6318 Numerical Analysis of Differential Equations
MATH 6319-6320 Principles and Techniques in Applied Mathematics I and II
MATH 6308 Inverse Problems and their Applications
MATH 6321 Optimization
Plus two guided electives.
Engineering Mathematics Major
MATH 5301-5302 Elementary Analysis I and II (or equivalent)
MATH 6303 Theory of Complex Functions
MATH 6313 Numerical Analysis
MATH 6315 Ordinary Differential Equations
MATH 6318 Numerical Analysis of Differential Equations
MATH 6319-6320 Principles and Techniques in Applied Mathematics I and II
MATH 6331 Systems, Signals and Control
MATH 6305 Mathematics of Signal Processing
plus two guided electives.
Mathematics Major
MATH 5301-5302 Elementary Analysis I and II (or equivalent)
MATH 6303 Theory of Complex Functions
MATH 6313 Numerical Analysis
MATH 6315 Ordinary Differential Equations
MATH 6318 Numerical Analysis of Differential Equations
MATH 6301 Real Analysis
MATH 6302 Real and Functional Analysis
MATH 6306 Topology and Geometry
MATH 6311 Abstract Algebra I
plus two guided electives.
Statistics Major
Students seeking a Master of Science in Mathematical Sciences with a major in Statistics must complete the following core courses:
STAT 6331 Statistical Inference I
STAT 6337-38 Statistical Methods I, II
STAT 6339 Linear Statistical Models
STAT 6341 Numerical Linear Algebra and Statistical Computing
One course from each of any two of the following sets of courses:
{STAT 6329, STAT 6343, STAT 7334} Stochastic Processes or Experimental Design or Nonparametric and Robust Statistical Methods
{STAT 6348, STAT 7331} Multivariate Analysis
{STAT 6347, STAT 7338} Time Series Analysis
Students must choose remaining courses from among the following electives:
MATH 6301, MATH 6302, MATH 6313, MATH 6331 or any 6300- or 7300-level statistics courses. Also, a maximum of two of the following prerequisite 5000-level courses may be counted as electives: MATH
5301, 5302, Elementary Analysis I, II and STAT 5351, 5352 Probability and Statistics I, II.
Other Requirements
Electives must be approved by the graduate adviser. Typically, electives are 6000- and 7000-level mathematical sciences courses. Courses from other disciplines may also be used upon approval.
Substitutions for required courses may be made if approved by the graduate adviser. Instructors may substitute stated prerequisites for students with equivalent experience.
Master of Science in Bioinformatics and Computational Biology
The Master of Science in Bioinformatics and Computational Biology (BCBM) is offered jointly by the Departments of Mathematical Sciences and Molecular and Cell Biology. This program combines coursework
from the disciplines of biology, computer science, and mathematical sciences. The BCBM program seeks to answer the demand for a new breed of scientist with a fundamental understanding in the fields
of biology, mathematics, statistics, and computer science. With this interdisciplinary training, these scientists will be well prepared to meet the demand and challenges that have arisen and will
continue to develop in the biotechnology arena.
Faculty from both Mathematical Sciences (MMS) and Molecular and Cell Biology (MCB) participate in the Bioinformatics and Computational Biology program, with the Mathematical Sciences Department
serving as the administrative unit. Both departments participate in advising students.
For the Master's degree in Bioinformatics and Computational Biology, beginning students are expected to have completed multivariate calculus, linear algebra, two semesters of general chemistry,
two semesters of organic chemistry, two semesters of general physics, programming in C/C++, and two semesters of biology.
Requirements for completing a degree in BCBM are:
Core courses:
BIO 5410 Biochemistry
BIO 5420 Molecular Biology
BIO 5381 Genomics
STAT 5351 Probability and Statistics I
STAT 5352 Probability and Statistics II
MATH 6341 Bioinformatics
Additional core courses for the Computational Biology track:
MATH 6313 Numerical Analysis
MATH 6343 Computational Biology
MATH 6345 Mathematical Methods in Medicine & Biology
Additional core courses for the Bioinformatics track:
CS 5333 Discrete Structures
CS 5343 Algorithms Analysis and Data Structures
CS 6360 Database Design
Electives: A minimum of 7 semester credit hours of electives, approved by the student's adviser. Typically, electives are 6000- and 7000-level courses in mathematical sciences, biology or computer
science.
Doctor of Philosophy
The University’s general degree requirements are discussed here.
Each Doctor of Philosophy degree program is tailored to the student. The student must arrange a course program with the guidance and approval of the graduate adviser. Adjustments can be made as the
student’s interests develop and a specific dissertation topic is chosen. A minimum of 90 semester hours beyond the bachelor’s degree is required.
Applied Mathematics Major
MATH 6301 Real Analysis
MATH 6302 Real and Functional Analysis
MATH 6303 Theory of Complex Functions I
MATH 6306 Topology and Geometry
MATH 6311 Abstract Algebra I
MATH 6313 Numerical Analysis
MATH 6315 Ordinary Differential Equations
MATH 6316 Differential Equations
MATH 6318 Numerical Analysis of Differential Equations
MATH 6319-6320 Principles and Techniques in Applied Mathematics I and II
MATH 7313 Partial Differential and Integral Equations I
MATH 7319 Functional Analysis
Statistics Major
MATH 6301 Real Analysis
MATH 6302 Real and Functional Analysis
STAT 6331- 6332 Statistical Inference I, II
STAT 6337- 6338 Statistical Methods I, II
STAT 6339 Linear Statistical Models
STAT 6344 Probability Theory I
STAT 7330 Decision Theory
STAT 7331 Multivariate Analysis
STAT 7334 Nonparametric Statistics
STAT 7338 Time Series Modeling and Filtering
STAT 7345 Stochastic Processes
MATH 6303 Theory of Complex Functions I, or MATH 6313 Numerical Analysis, or
MATH 6315 Ordinary Differential Equations I, or MATH 7319 Functional Analysis
Electives and Dissertation
An additional 18-24 credit hours for Applied Math and 18-24 credit hours for Statistics designed for the student’s area of specialization are taken as electives in a degree plan designed by the
student and the graduate adviser. This plan is subject to approval by the Department Head. After completion of the first 3 or 4 academic semesters of the course program, the student must pass a Ph.D.
Qualifying Examination in order to continue on to the research and dissertation phase of the Ph.D. program.
Finally, a dissertation is required and must be approved by the graduate program. Areas of specialization include:
· Applied Mathematics: applied analysis, biomathematics, differential equations, relativity, scattering theory, systems theory, signal processing.
· Statistics: statistical inference, applied statistics, statistical computing, probability, stochastic processes, linear models, time series, statistical classification, multivariate analysis,
nonparametric and robust statistics, asymptotic theory.
Other specializations are possible, including interdisciplinary topics. There must be available a dissertation research adviser or group of dissertation advisers willing to supervise and guide the
student. A dissertation Supervising Committee should be formed in accordance with the U.T. Dallas policy memorandum (87-III.25-48). The dissertation may be in Mathematical Sciences exclusively or it
may involve considerable work in an area of application.
Within the Mathematical Sciences programs opportunities exist for work and/or research in applied mathematics, engineering mathematics, mathematics and statistics. The opportunity to take course work
in several of the other university programs also allows the student to prepare for interdisciplinary work. Special topics within research areas include functional analysis, operator theory,
differential and integral equations, optimization, numerical analysis, system theory and control with application in material and molecular sciences, inverse problems with applications in geosciences
and medical sciences, relativistic cosmology, differential geometry, applications of topology to biology, mathematical and computational biology with applications in cardiovascular physiology,
neurobiology and cell biology; probability theory, applied probability, stochastic processes, mathematical statistics, statistical inference, asymptotic theory, statistical time series, Bayesian
analysis, robust multivariate statistical methods, robust linear models, robust and nonparametric methods, sequential analysis, statistical computing, signal processing, remote sensing, change-point
problems, forecasting and applications in their respective areas such as energy finance, semiconductor manufacturing, psychology, actuarial sciences, physical and medical sciences.
For a complete list of faculty and their areas of research, visit the website www.utdallas.edu/nsm/math/faculty . | {"url":"http://www.utdallas.edu/dept/graddean/CAT2008/NSM/6.Final_Mathematical%20Sciences%20Grad%20Catalog.htm","timestamp":"2014-04-19T22:08:27Z","content_type":null,"content_length":"20109","record_id":"<urn:uuid:d0a329e4-fe06-4ea5-88b6-b2ca5fd7331d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kant's Philosophy of Mathematics
1. Kant’s definition of trapezium cited here is consistent with current usage in the United States and Canada, according to which a trapezium is a quadrilateral with no sides parallel and a trapezoid
is a quadrilateral with one pair of parallel sides. Current British usage of the two terms reverses these definitions.
2. Most commentators take Kant to have had a reasonably sophisticated understanding of the mathematical developments of his time. Paul Rusnock (Rusnock 2004) has argued provocatively against this
common view, claiming that because of his lack of technical sophistication, Kant did not have the resources to develop a philosophically interesting account of mathematical practice, and so that his
philosophy of mathematics is inadequate even in light of its historical context.
3. Jaakko Hintikka defends a contrary thesis with respect to the relation between the Discipline of Pure Reason in its Dogmatic Employment and the Transcendental Aesthetic according to which the
Discipline expresses Kant’s “preliminary” theory of mathematics, and the Transcendental Aesthetic his “full” theory. According to Hintikka, the former is the “background and the starting-point of”
the latter (Hintikka 1969, p.49). Hintikka argues thereby that the “preliminary” theory is independent of the “full” theory, and so that Kant’s philosophy of mathematics as he interprets it can be
defended without a commitment to Kant’s theory of intuition and Transcendental Idealism.
4. Indeed, the relevant passage from the Preamble to the Prolegomena about the synthetic apriority of mathematical judgments is added almost verbatim to the B-edition of the Critique of Pure Reason.
5. It is, of course, the use of such a science of arithmetic that is more general than a science of time. The individual propositions of arithmetic, or what Kant calls “numerical formulas,” are in
fact singular, which is why he claims that arithmetic does not have axioms as geometry does. (A164/B205)
6. A two volume successor to this collection, edited by Carl Posy and Ofra Rechter, is in production (Posy and Rechter forthcoming). | {"url":"http://plato.stanford.edu/entries/kant-mathematics/notes.html","timestamp":"2014-04-21T07:45:38Z","content_type":null,"content_length":"15043","record_id":"<urn:uuid:6f3882f5-bc2d-4934-a586-6b9b4d410799>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
Doubt: Expected Values of Bivariate Random Variables
March 25th 2009, 11:24 PM #1
E[X|Y, Z] = E[X|Y]
I am given a problem with this condition. Please tell me what the left-hand side term means, and what we can write for it:
E[X|Y, Z] = ??
(I expect something in terms of the joint PDF or conditional PDF. If that cannot be done, please tell me any other way.)
I know what to write for E[X|Y]. But what should we write for E[X|Y, Z]? (Tell me if my question is not clear!)
Hi there. I think (could be wrong after all) that it's something like:
E[X|Y, Z] = ∫ x f(x | y, z) dx, where f(x | y, z) = f(x, y, z) / f(y, z);
that is, you treat (Y, Z) as a random vector. I don't know if this helps, but you can condition a variable with respect to any random variable, also vectors.
E[X|Y,Z] means the expected value of X, given specific values of Y and Z. Saying that it is equal to E[X|Y], the expected value of X given a specific value of Y, tells you that once Y is known, Z carries no additional information about the mean of X.
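To make the identity concrete, here is a small simulation (a sketch, not from the original thread; the model X = Y + noise with Z independent is invented for illustration): when X depends only on Y, the empirical mean of X conditioned on (Y, Z) matches the one conditioned on Y alone.

```python
import random

random.seed(0)

# Hypothetical model: Y, Z independent coin flips, X = Y + Gaussian noise.
# Since X depends only on Y, we expect E[X | Y, Z] = E[X | Y].
samples = []
for _ in range(200_000):
    y = random.choice([0, 1])
    z = random.choice([0, 1])
    samples.append((y + random.gauss(0, 1), y, z))

def cond_mean(pred):
    # Empirical mean of X over the samples satisfying the condition.
    vals = [x for x, y, z in samples if pred(y, z)]
    return sum(vals) / len(vals)

e_given_y1 = cond_mean(lambda y, z: y == 1)                # ~ E[X | Y=1]
e_given_y1_z0 = cond_mean(lambda y, z: y == 1 and z == 0)  # ~ E[X | Y=1, Z=0]

# Both estimates land near 1: conditioning on Z changes nothing here.
print(e_given_y1, e_given_y1_z0)
```
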
Calculator Tab
Calculator Tab is a free online scientific calculator which works like your regular calculator. One feature that sets it apart from other calculators is that its memory bank can store an unlimited number of values, together with descriptions, for an indefinite length of time. The stored numbers can be sorted by date, by number of uses, or by name. Your saved numbers and their respective descriptions are saved directly on your computer, never entering the sphere of the internet, so that they are as secure as your workstation.
This calculator follows the standard order of operations. It can handle values from 4.9e-324 to 1.7e+308. To perform a calculation, press the calculator buttons with the mouse as you would write the symbols of the calculation. For example, to calculate 2 + 5 * 8, press the buttons in that order. To see the result (which in this case will be 42) and to terminate the calculation, press the "is equal to" button. To be able to use your keyboard instead of the mouse, you must first click anywhere on the calculator with the mouse.
Calculator Tab will not allow you to make ambiguous entries and will tell you what is not allowed if you try to make such an entry.
• unlimited long-term memory storage
• pop-up version (for working with other documents)
• saving of settings between sessions
• highlighting of the last valid function
• possibility of switching between comma and point as decimal separators
• ambiguous input filter
There are two possibilities to store a value in the memory bank:
1. Using the quick-save button to quickly store the displayed value in the memory bank. You don't need to enter a description.
2. Using the save button to store the displayed value together with a custom description.
To open the memory bank, press the memory bank button.
The memory bank will open and you will see your saved values. You can only see three values at a time. If you have more than three values saved, you can use the scroll bar to the right of the saved values to scroll through the memory bank. To insert a saved value into the calculator, click on the value or the grey field surrounding it. To erase a value, click on its delete button.
There are two possibilities to enter a negative number:
1. Press the minus (-) button before entering the number.
2. Or press the "toggle positive / negative" button after entering the number.
Making a bracket negative can only be accomplished using the first method, because the "toggle positive / negative" button only switches the sign of the last entered number, not of a bracket.
Functions which take one value are entered as [value X] [Function], meaning that the value is always entered before the function. For example, to enter sin(30), press 30 followed by the sine button.
Functions with two values are entered as [value X] [Function] [value Y]. For example, to enter 5^3 (5 cubed), press 5, the exponent button, then 3.
Percentages use the percentage buttons. Here the mnemonic is clear: "5% (of) 50" is reflected in the label "x % y", so enter 5, the "x % y" button, then 50.
If, on the other hand, you want to know how many percent of 50 the value of 5 constitutes (which is 10%), use the "how many percent of y is x?" button, described in the section "Functions with two (x and y) values": enter 5, that button, then 50.
To select your options for various settings, press the options button.
You can select, whether a point or a comma should be used as a decimal separator.
Order of saved values
You can select, how your saved values should be ordered: by date (most recent first); by most used (values used more frequently appear first); or by alphabet (values with descriptions first in the
alphabet will appear first).
Descriptions on roll-over
You can select whether a short description of the functionality should appear when you hover over a button. As you get more familiar with the calculator, you might not need this feature.
Once you have clicked inside the calculator, you can use your number buttons and function buttons like on any other calculator. Additionally, you then have the following shortcuts at your disposal:
Key Function
[enter] is equal to -or- enter/return
= is equal to
[insert] save a value without custom description
i save a value with custom description
m open/close memory bank
o options
a open/close left side functions
s open/close bottom side functions
[backspace] backspace button
[delete] reset button
The "plus" button is used for adding two values or for specifying a positive exponent in scientific notation (note: since the default exponent is positive, it does not need to be explicitly entered).
The "minus" button is used for subtracting one value from another, for entering a negative value, or for specifying a negative exponent in scientific notation. For information about entering a negative value see the above section "Negative numbers".
RESTRICTIONS: None.
Addition on Wikipedia
Negative numbers on Wikipedia
The "multiplication" button is used for multiplying two values.
RESTRICTIONS: None.
The "division" button is used for dividing one value (dividend) by another (divisor).
RESTRICTIONS: The divisor must not equal zero. A zero in the divisor will generate an error and terminate the calculation.
The "square root" button is used for calculating the square root of a value. The square root of a value is the number, which multiplied by itself, gives the original value.
RESTRICTIONS: The value must be greater than or equal to zero. A value smaller than zero will generate an error and terminate the calculation.
The "root" button is used for calculating the root of a value. Although it is theoretically possible to calculate the root of a negative value if the root is an odd number, this calculator only
calculates the roots of values which are greater than or equal to zero.
RESTRICTIONS: The value whose root you want to calculate must be greater than or equal to zero, except where the root is a number between -1 and 1 inclusive, which corresponds to raising the value to the reciprocal of the root. Otherwise a value smaller than zero will generate an error and terminate the calculation.
The "square" button is used for calculating the square of a value. The square of a value is the result of multiplying a value by itself.
RESTRICTIONS: None.
The "exponent" button is used for raising a value (base) to the power of another value (exponent).
RESTRICTIONS: Since raising a negative base to a power between -1 and 1 exclusive corresponds to the root reciprocal of the exponent, and trying to calculate a root of a negative number generates an
error, this configuration will generate an error and terminate your calculation.
The "logarithm" button is used for calculating the logarithm to base 10 of a value. To calculate the logarithm of a value to a base other than 10 divide the logarithm of the value by the logarithm of
the desired base.
RESTRICTIONS: The value must be greater than zero. A value less than or equal to zero will generate an error and terminate your calculation.
Logarithm on Wikipedia
Change of base on Wikipedia
The "natural logarithm" button is used for calculating the logarithm to base e of a value. To calculate the logarithm of a value to a base other than e divide the natural logarithm of the value by
the natural logarithm of the desired base.
RESTRICTIONS: The value must be greater than zero. A value less than or equal to zero will generate an error and terminate your calculation.
Natural logarithm on Wikipedia
Change of base on Wikipedia
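Both change-of-base rules above are easy to sanity-check; a quick sketch (note that Python's `math.log` also accepts the base directly as a second argument):

```python
import math

x, base = 50.0, 7.0

# log_base(x) via base-10 logarithms, as described for the "logarithm" button.
via_log10 = math.log10(x) / math.log10(base)

# The same value via natural logarithms, as described for "natural logarithm".
via_ln = math.log(x) / math.log(base)

# Both routes agree with computing the logarithm to the base directly.
print(via_log10, via_ln, math.log(x, base))
```
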
The "e" button is used for entering (an approximation of) the constant e into the calculator. e to the power of a value is the inverse of the natural logarithm of the same value.
RESTRICTIONS: None.
The "reciprocal" button is used for calculating the reciprocal of a value. The reciprocal of a value is 1 divided by the value.
RESTRICTIONS: The value must not equal zero. A value of zero will generate an error and terminate the calculation.
The "sine" button is used for calculating the sine of a value. For now, degrees are the only unit this function supports. It returns a number between -1 and 1 inclusive.
The "cosine" button is used for calculating the cosine of a value. For now, degrees are the only unit this function supports. It returns a number between -1 and 1 inclusive.
To convert degrees(deg) to radians(rad) and back, the following formulas can be used:
rad = deg * Pi / 180
deg = rad / Pi * 180
RESTRICTIONS: None.
Trigonometric function on Wikipedia
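The degree/radian conversion formulas quoted above translate directly into code (a sketch):

```python
import math

def deg_to_rad(deg):
    # rad = deg * Pi / 180
    return deg * math.pi / 180

def rad_to_deg(rad):
    # deg = rad / Pi * 180
    return rad / math.pi * 180

# 180 degrees is Pi radians, and the two conversions invert each other.
print(deg_to_rad(180), rad_to_deg(math.pi))
```
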
The "arcsine" button is used for calculating the arcsine of a value. Arcsine is the inverse function of sine. For now, degrees are the only unit this function supports. It returns a number between
-90 and 90 inclusive.
RESTRICTIONS: The value must be a number between -1 and 1 inclusive.
The "arccosine" button is used for calculating the arccosine of a value. Arccosine is the inverse function of cosine. For now, degrees are the only unit this function supports. It returns a number between 0 and 180 inclusive.
To convert degrees(deg) to radians(rad) and back, the following formulas can be used:
rad = deg * Pi / 180
deg = rad / Pi * 180
RESTRICTIONS: The value must be a number between -1 and 1 inclusive.
Inverse trigonometric function on Wikipedia
The "tangent" button is used for calculating the tangent of a value. For now, degrees are the only unit this function supports.
RESTRICTIONS: None.
Trigonometric function on Wikipedia
The "arctangent" button is used for calculating the arctangent of a value. Arctangent is the inverse function of tangent. For now, degrees are the only unit this function supports. It returns a number strictly between -90 and 90.
To convert degrees(deg) to radians(rad) and back, the following formulas can be used:
rad = deg * Pi / 180
deg = rad / Pi * 180
RESTRICTIONS: None.
Inverse trigonometric function on Wikipedia
The "Pi" button is used for entering (an approximation of) the constant Pi into the calculator. Pi is for example useful in trigonometric functions and for calculating the circumference, area and
volume of circular geometrical objects.
RESTRICTIONS: None.
The "modulo" button is used for calculating modulo of two values. Modulo is the remainder of one value (dividend) divided by another (divisor). The way it is implemented in this calculator, the
result has always the same sign as the dividend.
RESTRICTIONS: The divisor must not equal zero. A zero in the divisor will generate an error and terminate the calculation.
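The "same sign as the dividend" convention matches truncated division. In Python, for instance, `math.fmod` behaves this way, while the built-in `%` operator follows the divisor's sign instead (a sketch illustrating the difference, not code from the calculator itself):

```python
import math

# math.fmod: the remainder takes the sign of the dividend, as described above.
print(math.fmod(7, 3), math.fmod(-7, 3))   # 1.0 -1.0

# Python's % operator instead gives the remainder the sign of the divisor.
print(7 % 3, -7 % 3)                       # 1 2
```
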
The "scientific notation" button is used for entering values in scientific notation. A value written in scientific notation has the form: a x 10b. It is especially useful for entering very large or
very small values. For example instead of entering 123000 you can enter 1.23e+5. And instead of entering 0.000123 you can enter 1.23e-4.
To enter a negative exponent press the minus (-) button after pressing the "scientific notation" button. When entering a positive exponent, you do not need to enter the plus (+) sign, because the
exponent is positive by default.
RESTRICTIONS: Trying to place scientific notation in a point of an expression, where it cannot be placed, will generate an error message.
Scientific notation on Wikipedia
The "toggle positive / negative" button is used for switching the sign of the last entered number. The sign of a bracket cannot be switched. To make a bracket negative use the "minus" button before opening a bracket. For more information about negative values see the above section "Negative numbers".
RESTRICTIONS: None.
The "bracket" buttons are used for opening and closing brackets. Brackets are useful when the order of calculation should diverge from the standard order of operations. If for example in the expression 5 + 3 / 2 the term 5 + 3 should be evaluated first and then be divided by 2, then it must be entered as (5 + 3) / 2.
Once a bracket has been opened, numbers will appear beside the bracket symbols on the bracket buttons, which indicate how many brackets have been opened or closed. The number beside the "close bracket" will be red as long as not all opened brackets have been closed. Opened brackets will be closed automatically upon evaluating the expression using the "is equal to" button. Once a bracket has been closed the output will be updated to reflect the value of the closed bracket.
The sign of a bracket cannot be switched. To make a bracket negative use the "minus" button before opening a bracket. For more information about negative values see the above section "Negative numbers".
RESTRICTIONS: Trying to open or close a bracket at a point of an expression where a bracket cannot be placed will generate an error message.
The "x percent of y" button is used for calculating a percentage of a value. For example, to calculate 5% of 50 (which is 2.5), enter 5, the button, then 50.
The "how many percent of y is x?" button is used for calculating how many percent one value constitutes of another value. For example, to find out how many percent of 50 the value 5 constitutes (which is 10%), enter 5, the button, then 50.
For more information about percentages see the above section "Percentages".
RESTRICTIONS: None.
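The two percentage operations reduce to simple arithmetic; a minimal sketch using the worked numbers from the text:

```python
def x_percent_of_y(x, y):
    # The "x percent of y" button: x% of y.
    return x * y / 100

def what_percent_of_y_is_x(x, y):
    # The "how many percent of y is x?" button.
    return x / y * 100

print(x_percent_of_y(5, 50))          # 2.5, i.e. 5% of 50
print(what_percent_of_y_is_x(5, 50))  # 10, i.e. 5 is 10% of 50
```
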
The "ceiling" button is used for obtaining the ceiling of a value. The ceiling of a value is the smallest integer (positive or negative whole number) not less than the value. For example the ceiling of 4.2 is 5 and the ceiling of -4.2 is -4.
RESTRICTIONS: None.
The "floor" button is used for obtaining the floor of a value. The floor of a value is the largest integer (positive or negative whole number) less than or equal to the value. For example the floor of 4.2 is 4 and the floor of -4.2 is -5.
RESTRICTIONS: None.
The "decimal" button is used for obtaining the decimal part of a value. For example the decimal of 3.141 is 0.141.
RESTRICTIONS: None.
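The ceiling, floor, and decimal examples above can be reproduced with Python's standard library (a sketch; the "decimal" button corresponds to the fractional part):

```python
import math

# Ceiling and floor on the examples from the text: note the
# asymmetric behavior for negative values.
print(math.ceil(4.2), math.ceil(-4.2))    # 5 -4
print(math.floor(4.2), math.floor(-4.2))  # 4 -5

def decimal_part(x):
    # The "decimal" button: drop the integer part, keep the rest.
    return x - math.trunc(x)

print(round(decimal_part(3.141), 3))      # 0.141
```
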
The "factorial" button is used for calculating the factorial of a value (for integer values) or for calculating (an approximation of) the gamma function of the value + 1 (for non-integer values). The algorithm used for this operation is based on the "StieltjesLnFactorial" algorithm by Peter Luschny.
RESTRICTIONS: The value must be greater than or equal to zero, or a non-integer smaller than zero. Integer values smaller than zero will generate an error and terminate the calculation.
Math Help
July 14th 2009, 07:17 AM #1
Find the following limit if they exist, otherwise explain why they do not exist:
$\lim \frac{Re(z)}{z}$
with $z\rightarrow 0$
Since $Re(z) = x$
So $\lim \frac {x}{x+iy}$ ??
with $z\rightarrow 0$
How do i continue from here?
Consider the limit along the path $x=y$. What is it?
Consider the limit along the path $y=0$. What is it?
What does that tell you?
I'm not quite sure, but I would guess that the limit along the path x = y is 0 and the limit along the path y = 0 is 0 as well.
Thus the limit of this function does not exist?
z-->0 implies (x,y)-->(0,0)
$\lim_{(x,y) \to (0,0)} f(x,x) = \lim_{(x,y) \to (0, 0)} \frac {x}{x+ix} = \frac {1}{1+i}$
$\lim_{(x,y) \to (0,0)} f(x,0) = \lim_{(x,y) \to (0, 0)} \frac{x}{x} = 1$
Since different paths to the origin give different limits, the limit does NOT exist
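The two-path argument can also be checked numerically (a sketch, not part of the original thread): tabulating Re(z)/z as z → 0 along the two paths shows the values settling at two different numbers.

```python
def f(z):
    # Re(z) / z for a complex z.
    return z.real / z

for t in [0.1, 0.01, 0.001]:
    along_diag = f(complex(t, t))  # approach along the path x = y
    along_axis = f(complex(t, 0))  # approach along the path y = 0
    print(t, along_diag, along_axis)

# Along x = y every value is 1/(1+i) = 0.5 - 0.5i; along y = 0 every
# value is 1. Different paths, different limits: the limit does not exist.
```
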
What is slugging percentage in baseball?
Statistics play an important role in summarizing baseball performance and evaluating players in the sport.
Since the flow of a baseball game has natural breaks to it, and normally players act individually rather than performing in clusters, the sport lends itself to easy record-keeping and statistics.
Statistics have been kept for professional baseball since the creation of the National League and American League, now part of Major League Baseball.
In baseball statistics, slugging percentage (abbreviated SLG) is a popular measure of the power of a hitter. It is calculated as total bases divided by at bats:
$SLG = \frac{(\mathit{1B}) + (2 \times \mathit{2B}) + (3 \times \mathit{3B}) + (4 \times \mathit{HR})}{AB}$
Gross Production Average or GPA is a baseball statistic created in 2003 by Aaron Gleeman, as a refinement of On-Base Plus Slugging (OPS). GPA attempts to solve two frequently cited problems with OPS.
First, OPS gives equal weight to its two components, On Base Percentage (OBP) and Slugging Percentage (SLG). In fact, OBP contributes significantly more to scoring runs than SLG does. Sabermetricians
have calculated that OBP is about 80% more valuable than SLG. A second problem with OPS is that it generates numbers on a scale unfamiliar to most baseball fans. For all the problems with a
traditional stat like batting average (AVG), baseball fans immediately know that a player batting .365 is significantly better than average, while a player batting .167 is significantly below
average. But many fans don't immediately know how good a player with a 1.013 OPS is.
The basic formula for GPA is: $\frac{1.8 \times \mathit{OBP} + \mathit{SLG}}{4}$
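Both statistics are straightforward to compute; a sketch with a hypothetical batting line (the numbers below are invented for illustration):

```python
def slugging(singles, doubles, triples, home_runs, at_bats):
    # SLG = (1B + 2*2B + 3*3B + 4*HR) / AB
    return (singles + 2 * doubles + 3 * triples + 4 * home_runs) / at_bats

def gpa(obp, slg):
    # Gross Production Average: OBP weighted 1.8x relative to SLG.
    return (1.8 * obp + slg) / 4

# Hypothetical season: 100 singles, 30 doubles, 5 triples, 25 HR in 500 AB.
slg = slugging(100, 30, 5, 25, 500)
print(round(slg, 3), round(gpa(0.370, slg), 3))  # 0.55 and ~0.304
```
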
Consider the electric field due to an infinitely long, straight, uniformly charged wire. (There is no current running through the wire; all charges are fixed.) Assuming that the wire is infinitely long means that we can assume that the electric field is perpendicular to any cylinder that has the wire as an axis and that the magnitude of the field is constant on any such cylinder. Denote by Er the magnitude of the electric field due to the wire on a cylinder of radius r.
Imagine a closed surface S made up of two cylinders, one of radius a and one of larger radius b, both coaxial with the wire, and the two washers that cap the ends. The outward orientation of S means
that a normal on the outer cylinder points away from the wire and a normal on the inner cylinder points toward the wire.
i) Explain why the flux of E, the electric field, through the washers is 0.
ii) Explain why Gauss's Law implies that the flux through the inner cylinder is the same as the flux through the outer cylinder. [Hint: The charge on the wire is not inside the surface S.]
iii) Use part (ii) to show that Eb/Ea=a/b
iv) Explain why part (iii) shows that the strength of the field due to an infinitely long uniformly charged wire is proportional to 1/r.
b.) Now consider an infinite flat sheet uniformly covered with charge. As in part (a), symmetry shows that the electric field E is perpendicular to the sheet and has the same magnitude at all points
that are the same distance from the sheet. Use Gauss’s Law to explain why, on any one side of the sheet, the electric field is the same at all points in the space off the sheet.
Please explain thoroughly so I can grasp this information. Thank you!
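A hedged sketch of where the problem is headed (not a full solution): by the stated symmetry, the flux through a coaxial cylinder of radius r and length L is proportional to r, which is exactly what part (iii) exploits.

```latex
% Flux through a coaxial cylinder of radius r and length L, since E is
% perpendicular to the wire with constant magnitude E_r on the cylinder:
\Phi(r) = E_r \cdot 2\pi r L
% The wire's charge is not inside S, so by Gauss's Law the flux in through
% the inner cylinder equals the flux out through the outer cylinder:
E_a \cdot 2\pi a L = E_b \cdot 2\pi b L
\quad\Longrightarrow\quad
\frac{E_b}{E_a} = \frac{a}{b}
```

Since E_r is proportional to 1/r on these cylinders, the same relation yields part (iv).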
Between the points (-2,7) and (-7,-5) find:
a) The Distance
b) The Midpoint
a) To calculate the distance d between two points `(x_1,y_1)` and `(x_2,y_2)` use the formula:
`d=sqrt((x_2-x_1)^2+(y_2-y_1)^2)`
Here,` x_1=-2, x_2=-7, y_1=7 and y_2=-5.`
So, d`=sqrt({-7-(-2)}^2+(-5-7)^2)`
` =sqrt((-5)^2+(-12)^2)`
` =sqrt(25+144)`
` =sqrt169`
=13 units.
Therefore, the distance between the points (-2,7) and (-7,-5) is 13 units.
b) To find the mid point between the two points `(x_1,y_1)` and `(x_2,y_2)` use the formula:
Mid point=`((x_1+x_2)/2,(y_1+y_2)/2)`
Here, `x_1=-2, x_2=-7, y_1=7 and y_2=-5` . Now, plug in the values in the above formula.
Mid point=`((-2+(-7))/2,(7+(-5))/2)`
Therefore, the mid point between the points (-2,7) and (-7,-5) is `(-4.5,1)` .
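The same two formulas are easy to check in code (a sketch, not part of the original answer):

```python
import math

def distance(p, q):
    # d = sqrt((x2 - x1)^2 + (y2 - y1)^2)
    return math.hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    # ((x1 + x2)/2, (y1 + y2)/2)
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# The points from the problem.
p, q = (-2, 7), (-7, -5)
print(distance(p, q), midpoint(p, q))  # 13.0 (-4.5, 1.0)
```
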
Patent application title: SYSTEM AND METHOD FOR MEASURING SIMILARITY OF SEQUENCES WITH MULTIPLE ATTRIBUTES
A method (and structure) for quantifying an ordered sequence of data, includes receiving data of the ordered sequence and determining a skeleton of the ordered sequence. The skeleton includes a
plurality of perceptually important points (PIPs) of the ordered sequence, as derived by determining one or more points of local maxima of the data over the ordered sequence.
A computer configured to execute a process of quantifying an ordered sequence of data, said computer comprising:a data receiver to receive data of said ordered sequence; anda calculator to determine
a skeleton of said ordered sequence,wherein said skeleton comprises a plurality of perceptually important points (PIPs) of said ordered sequence, as derived by determining one or more points of local
maxima of said data over said ordered sequence.
The computer of claim 1, wherein said ordered sequence is multivariate.
The computer of claim 1, wherein said ordered sequence comprises a time series of data.
The computer of claim 1, wherein data of said ordered sequence is preliminarily converted into a metric space when said ordered sequence data is not presented in a manner allowing metric operations
on said data.
The computer of claim 4, wherein a successive PIP is determined by said calculator by constructing a line between two previous PIPs and a maximum relative to said line is identified for data between
said two previous PIPs, to become said successive PIP.
The computer of claim 5, wherein successive PIPs are sequentially determined by said calculator until a termination test determines that said skeleton is sufficiently developed.
The computer of claim 6, wherein said termination test comprises a local similarity measure.
The computer of claim 5, wherein a starting endpoint and an ending endpoint are identified for said ordered sequence of data and said starting and ending endpoints are assigned to be a first PIP and
a second PIP for said ordered sequence.
The computer of claim 1, said calculator further selectively determining a local similarity metric d for said ordered sequence, for use in determining said PIPs, and a global similarity metric, for
use in comparing said skeleton with a skeleton of another ordered sequence.
The computer of claim 9, said calculator further processing at least one of the following procedures:comparing a similarity of said skeleton with a skeleton of another ordered sequence;searching for
similarities within said ordered sequence;searching for similar ordered sequence in a database;recognizing or identifying events or specific sequences;searching for an event or similar event;
analyzing an ordered sequence expressed as a time series;discovering relationships within a time series or between two different time series;categorizing signals into groups or clusters;an
optimization processing;a time-series compression; andan indexing of data.
The computer of claim 10, wherein said procedure involves a time series of financial data.
A computerized method of quantifying an ordered sequence of data, comprising:receiving data of said ordered sequence; anddetermining a skeleton of said ordered sequence,wherein said skeleton
comprises a plurality of perceptually important points (PIPs) of said ordered sequence, as derived by determining one or more points of local maxima of said data over said ordered sequence.
The method of claim 12, further comprising preliminarily converting said ordered sequence data into a metric space when said ordered sequence data is not presented in a manner allowing metric
operations on said data.
The method of claim 12, wherein a successive PIP is determined by constructing a line between two previous PIPs and a maximum relative to said line is identified for data between said two previous
PIPs, to become said successive PIP.
The method of claim 14, wherein successive PIPs are sequentially determined until a termination test determines that said skeleton is sufficiently developed.
The method of claim 12, wherein a starting endpoint and an ending endpoint are identified for said ordered sequence of data and said starting and ending endpoints are assigned to be a first PIP and a
second PIP for said ordered sequence.
The method of claim 12, said method further selectively:determining a local similarity metric d for said ordered sequence, for use in determining said PIPs; anddetermining a global similarity metric,
for use in comparing said skeleton with a skeleton of another ordered sequence.
The method of claim 12, said method further comprising at least one of:comparing a similarity of said skeleton with a skeleton of another ordered sequence;searching for similarities within said
ordered sequence;searching for similar ordered sequence in a database;recognizing or identifying events or specific sequences;searching for an event or similar event;analyzing an ordered sequence
expressed as a time series;discovering relationships within a time series or between two different time series;categorizing signals into groups or clusters;an optimization processing;a time-series
compression; andan indexing of data.
The method of claim 12, as implemented into a service entity that provides consultation service to another entity.
A signal-bearing medium tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform a method of quantifying an ordered sequence of data, said
method comprising:receiving data of said ordered sequence; anddetermining a skeleton of said ordered sequence,wherein said skeleton comprises a plurality of perceptually important points (PIPs) of
said ordered sequence, as derived by determining one or more points of local maxima of said data over said ordered sequence.
BACKGROUND OF THE INVENTION

[0001] 1. Field of the Invention
The present invention generally relates to representing time sequences for such purposes as recognition, analysis, comparison, and relationship discovery. More specifically, a perceptual skeleton is
derived by determining the perceptually important points (PIPs), as being points of any number of different orders of maxima, to provide a method to measure such time sequences, including similarity
between two different sequences.
2. Description of the Related Art
A temporal sequence (e.g., time series or time sequence) is a sequence of values measured at certain time intervals. The time intervals may or may not be equally spaced. Non-limiting examples include
stock market data and exchange rates, biomedical measurements, weather data, history of product sales, audio, video, etc.
Time series constitute a large portion of the data stored in computers and the ability to efficiently search and organize such data is of growing importance in many applications. As a result,
significant effort has been directed towards developing methods that will enable computers to assist users in performing tasks such as: "find companies with similar stock prices", "find portfolios
that behave similarly", "find products with similar sell cycles", "cluster users with similar credit card utilization", or "search for music."
Prior works by others in this area include the application of the Discrete Fourier Transform, Discrete Wavelet Transform, Principal Component Analysis or Linear Predictive Coding cepstrum
representation to reduce sequences into points in low dimensional space and the use of the Euclidean distance between two sequences as a measure of similarity.
However, there are many similarity queries where Euclidean distances fail to capture the notion of similarity. A more intuitive idea has been explored that two series should be considered similar if
they have enough non-overlapping time-ordered pairs of similar subsequences. In another approach, a set of linear transformations on the Fourier series representation of a sequence is used as a basis
for similarity measurement, while yet another approach used a time warping distance.
A special class of problem is the analysis of multivariate time series. Examples of such series include electroencephalograms (where the EEG measurements are recorded up to dozens of channels),
weather data (with daily measurements of temperature, humidity, atmospheric pressure and wind), and stock market portfolios (with multiple stocks tracked over a period of time).
In one method, Taniguchi showed that similarities and differences between multivariate stationary time series can be characterized in terms of the structure of the covariance or spectral matrices. In
another method, Huan, et al. proposed using a library of smooth localized complex exponentials (SLEX) to extract computationally efficient local features of non-stationary time series.
A separate area of research has focused on the design of feature sets that will allow for more effective and "perceptually tuned" representation of time series based on the extraction of key
features, event detection, and extraction of important points.
These techniques are especially interesting, as they attempt to capture the notion of similarity from the perspective of a human observer. However, most of these perceptual techniques have difficulty handling multivariate data.
Thus, a need continues to exist for an apparatus, tool, and method of deriving a simple, compressed perceptual representation of multivariate time series and using it as a basis for efficient
indexing and similarity search. The present invention addresses this need.
SUMMARY OF THE INVENTION [0013]
In view of the foregoing, and other, exemplary problems, drawbacks, and disadvantages of the conventional systems, it is an exemplary feature of the present invention to provide a structure (and
method) in which an ordered sequence of data can be quantifiably represented in a manner similar to visual analysis by humans.
It is another exemplary feature of the present invention to provide a structure and method for comparing two ordered sequences of data in a manner similar to visual comparison by humans.
It is another exemplary feature of the present invention to provide a computerized method that mimics the visual processing by humans when performing functions involving visual representations of
ordered sequences and does so in a manner that provides quantitative measurements for comparison purposes.
Thus, in a first exemplary aspect of the present invention, to achieve the above features and objects, described herein is a computer configured to execute a process of quantifying an ordered
sequence of data, including a data receiver to receive data of the ordered sequence and a calculator to determine a skeleton of the ordered sequence, wherein the skeleton comprises a plurality of
perceptually important points (PIPs) of the ordered sequence, as derived by determining one or more points of local maxima of the data over the ordered sequence.
In a second exemplary aspect of the present invention, also described herein is a computerized method to determine a skeleton of an ordered sequence of data.
In a third exemplary aspect of the present invention, also described herein is a signal-bearing medium tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform the computerized method of quantifying an ordered sequence of data.
As will be explained in more detail, the present invention therefore provides the capability for efficient compression of a time signal, compression and representation in accordance with the human visual system, and simplification of a signal for efficient indexing, matching, similarity measurement, and retrieval.
There are many potential applications of the technique of the present invention, since any ordered sequence of data could be used for input data. Possible applications include, for example: financial
analysis and portfolio optimization; storage, indexing, and searching of medical signals and information, speech, music, seismological signals, and/or weather and climate data; business and marketing
analytics, such as analyzing product lifecycle, looking for products with similar lifecycles, looking for customers with similar behavior over time or other data mining, etc.
BRIEF DESCRIPTION OF THE DRAWINGS [0021]
The foregoing and other purposes, aspects and advantages will be better understood from the following detailed description of a preferred embodiment of the invention with reference to the drawings,
in which:
[0022] FIG. 1 shows a flowchart 100 of an exemplary embodiment of the present invention;
[0023] FIG. 2 shows visually the concept and derivation 200 of a perceptual skeleton 203 of exemplary waveform 201;
[0024] FIG. 3 shows the method 300 of deriving the perceptually important points (PIPs) of the waveform 201;
[0025] FIG. 4 shows a flowchart 400 of the process of deriving the PIPs of the present invention;
[0026] FIG. 5 shows derivation of PIPs for an exemplary multidimensional waveform 500;
[0027] FIG. 6 shows an exemplary embodiment 600 for measuring similarity of perceptual skeletons of three signals 601, 602, 603;
[0028] FIG. 7 shows three stock series 700 discussed for demonstration of an application of the method of the present invention;
[0029] FIG. 8 shows an exemplary block diagram 800 of a software-based system for a software tool that implements the methods of the present invention;
[0030] FIG. 9 illustrates an exemplary hardware/information handling system 900 for incorporating the present invention therein; and
[0031] FIG. 10 illustrates a signal-bearing medium 1000 (e.g., storage medium) for storing steps of a program of a method according to the present invention.
DETAILED DESCRIPTION OF AN EXEMPLARY EMBODIMENT OF THE INVENTION [0032]
Referring now to the drawings, and more particularly to FIGS. 1-10, an exemplary embodiment of the method and structures according to the present invention will now be described.
Algorithms that attempt to capture some elements of human perception and behavior have often shown excellent results in many applications. When performing similarity measurements, humans mine visual
data extensively to construct a representation that captures the most important aspects of a signal, the nature of the application and the task that needs to be achieved.
Although such a process is difficult to generalize, by including its key steps in a matching algorithm, one can greatly improve the accuracy and perceptual relevance of retrieved results. For example, humans are very good at constructing different representations of an object, simplifying them by "picking" the most important characteristics of the object, and using these "simplifications" to drive similarity judgments.
Therefore, in accordance with the concepts of the present invention, at the core of any similarity task is the computation of a perceptual skeleton, a set of points that an observer would "care about", which is then used selectively in, for example, a matching task. Thus, the present invention provides an exemplary general framework for similarity measurement of time-domain signals with multiple attributes, although it is noted that the concepts are more general.
That is, it will be clear that the methods of the present invention are applicable to any ordered sequence and are not confined to signals in the time domain, or even to data separated by a regular interval between data points. In cases in which the data are not time-based or have irregular intervals, a preliminary conversion might have to be executed to bring the data into a metric space capable of quantitative analysis, or possibly to convert analog data into an ordered sequence of discrete data.
The first step in the methodology 100 illustrated in FIG. 1 involves, therefore, transforming signals into a space with a metric (constructing the representation) 101, if necessary, so that operations, such as measuring distances between different points of the signal or identifying local maxima, can be performed.
In step 102, the skeleton of a signal is constructed as being a set of perceptually important points (PIPs) in that space, as will be discussed shortly. In step 103, if necessary for a specific task,
dimensions of the skeleton are calculated. In step 104, a distance between two skeletons of different signals can then be used as a similarity measurement.
[0039] FIG. 2 shows intuitively the concept 200, for an exemplary one-dimensional signal 201, of the PIPs 202 used in the present invention to construct a skeleton 203 obtained by connecting the PIPs 202. Although the present invention concerns primarily time sequences, so that the horizontal axis represents time, it should be apparent that the skeletons of the present invention can be extended to other types of signals and waveforms that have an order to the data.
As a preliminary matter for explaining the mathematics behind the present invention, let us consider two discrete time-domain signals, X = [x(t_1), . . . , x(t_{N_X})] and Y = [y(t_1), . . . , y(t_{N_Y})], of lengths N_X and N_Y, respectively. Each time instance is described with M attributes, x(t) = [x_1(t), . . . , x_M(t)] and y(t) = [y_1(t), . . . , y_M(t)]. Usually the attribute vectors represent different measurements, which are often either strongly correlated, or include features that are distinctly different in nature, so that a distance metric between two attribute vectors cannot be defined naturally.
Therefore, as a first step we apply a de-correlating transform F() and project X and Y onto a K-dimensional metric space S:

F = [f(t_1), . . . , f(t_N)], f(t) = [f_1(t), . . . , f_K(t)], K ≤ M.   (1)

We will also assume that S is a normed linear space with a norm ∥·∥ and metric d(f_1, f_2) = ∥f_1 − f_2∥ defined by the norm. It is noted that the goal of the mapping is not dimensionality reduction (although this is a useful step when dealing with highly correlated variables), but the projection of a signal into a space where a metric can be defined more naturally.
This metric will then constitute a local similarity metric, used to identify perceptual skeletons, compute the compression rate, and construct a global similarity metric (i.e., a true similarity distance between the two signals).
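As one concrete illustration of the de-correlating transform F(), principal component analysis can project the M attribute channels into a K-dimensional space in which the Euclidean metric is natural. The patent does not prescribe PCA specifically, so the NumPy sketch below is only a hypothetical choice of F(), not the invention's mandated transform:

```python
import numpy as np

def decorrelate(X, K):
    """Project an (N, M) multivariate series onto its top-K principal
    components -- one possible choice of the de-correlating transform F()."""
    Xc = X - X.mean(axis=0)                      # center each attribute
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:K].T                         # (N, K) series in metric space S

# Toy 2-attribute series whose channels are strongly correlated.
t = np.linspace(0, 1, 100)
X = np.column_stack([np.sin(6 * t), 2 * np.sin(6 * t) + 0.01 * t])
F = decorrelate(X, K=1)
assert F.shape == (100, 1)
```

After this projection, distances such as d(f(t_i), f(t_j)) can be taken as ordinary Euclidean norms in S, which is all the later skeleton construction requires.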
A body of research in cognitive psychology indicates that humans and animals depend on "landmarks" and "simplifications" in organizing their spatial memory. A subject asked to look at the time sequence 201 of FIG. 2 and duplicate the picture will typically memorize only the key turning points 202, as shown in the dashed representation, and then recreate the picture 203 by connecting these few points 202.
This idea of perceptually important features has been explored in a variety of applications. One of the first uses of this concept was in reducing a number of points required to represent a line in
cartoon making. Similar ideas have also been explored independently.
In the present invention, a perceptually important point (PIP) is defined as a local maximum of the transformed signal F. Depending on the nature of the problem, one can use maxima of different orders. At the coarsest level, each point in F potentially represents a PIP, and a key exemplary idea behind the perceptual skeletons of the present invention is to discard minor fluctuations and keep only major maxima. One possible PIP identification procedure for one-dimensional signals is described in Fu, et al.
The present invention refines these previous procedures and extends them to handle multi-dimensional feature representations, as exemplarily illustrated in FIG. 3 for an exemplary one-dimensional sequence 300.
As shown in the flowchart 400 of FIG. 4, we start with the signal representation F = [f(t_1), . . . , f(t_N)], as shown by sequence 300 in FIG. 3. In step 401, the first and the last points in F are selected as the first two PIPs (e.g., PIP 1 and PIP 2). In step 402, these first two PIPs are interconnected by a line 301. In step 403, every next PIP (e.g., PIP 3) is then identified as the point with the maximum distance 302 from the line (e.g., 301) interconnecting its two adjacent PIPs (e.g., PIP 1 and PIP 2). This process continues until, in step 404, a termination test described later indicates that the skeleton is sufficiently developed.
[0050] FIG. 5 illustrates a generalization to multiple dimensions. The PIP identification procedure can then be described as follows:

PIP_1 = [1, f(t_1)] = [z_1(1), z_2(1), . . . , z_{K+1}(1)],
PIP_2 = [N, f(t_N)] = [z_1(N), z_2(N), . . . , z_{K+1}(N)],
PIP_3 = [i, f(t_i)] = [z_1(i), z_2(i), . . . , z_{K+1}(i)], i = arg max_i d(f(t_i), fn(t_i)),

where fn(t_i) = [tn(i), fn_1(i), . . . , fn_K(i)] = [zn_1(i), . . . , zn_{K+1}(i)] is the normal projection of the point f(t_i) onto the line connecting the two neighboring PIPs. A line in (K+1)-dimensional space can be represented as

z_i = m_{i−1} z_{i−1} + n_{i−1}, i = 2, . . . , K+1.

For example, the line connecting PIP_1 and PIP_2 is defined by:

m_{i−1} = (z_i(N) − z_i(1)) / (z_{i−1}(N) − z_{i−1}(1)), n_{i−1} = z_i(1) − m_{i−1} z_{i−1}(1), i = 2, . . . , K+1.
From now on, we will assume the L_2 norm to be the local similarity metric in the space. In that case, for every point f(t_i), the projection fn(t_i) can be found by minimizing:

d^2 = Σ_{j=1}^{K+1} (z_j(i) − zn_j(i))^2,

subject to zn(i) lying on the line connecting the two neighboring PIPs. [0052]
Using Lagrange multipliers to solve this problem, we obtain fn(t_i) = [zn_1(i), . . . , zn_{K+1}(i)] as the solution to the following system of equations:

zn_1(i) + (1/2) λ_1 m_1 = z_1(i),
zn_j(i) − (1/2) λ_{j−1} + (1/2) λ_j m_j = z_j(i), j = 2, . . . , K,
zn_{K+1}(i) − (1/2) λ_K = z_{K+1}(i),
zn_{j+1}(i) = m_j zn_j(i) + n_j, j = 1, . . . , K.
The PIP identification process continues until a certain distortion measure is satisfied (e.g., step 404 in FIG. 4), or until the number of PIPs is equal to the length of the sequence. The local similarity measure d can also be used as a distortion measure. Assuming an original sequence F, a compressed sequence Fc, and the sequence F' interpolated from the compressed version, the distortion rate dr can be computed as:

dr = (1/N) Σ_{i=1}^{N} d(f(t_i), f'(t_i)).
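For concreteness, the greedy PIP selection described above can be sketched as a short Douglas-Peucker-style loop. This is an illustrative reimplementation rather than the patent's own code: the time index is treated as an extra coordinate (giving points in K+1 dimensions), the L2 point-to-line distance plays the role of d, and a fixed PIP budget stands in for the distortion-based termination test:

```python
import numpy as np

def pip_skeleton(f, n_pips):
    """Greedy PIP selection: start from the endpoints and repeatedly add
    the point with the largest perpendicular distance to the line through
    its two neighboring PIPs.  f is an (N, K) array of feature vectors."""
    N = len(f)
    pts = np.column_stack([np.arange(N, dtype=float), f])  # (N, K+1) points
    pips = [0, N - 1]
    while len(pips) < min(n_pips, N):
        best_d, best_i = -1.0, None
        pips.sort()
        for a, b in zip(pips, pips[1:]):
            for i in range(a + 1, b):
                # normal projection of pts[i] onto the line pts[a]-pts[b]
                ab = pts[b] - pts[a]
                ap = pts[i] - pts[a]
                proj = pts[a] + (ap @ ab) / (ab @ ab) * ab
                d = np.linalg.norm(pts[i] - proj)
                if d > best_d:
                    best_d, best_i = d, i
        pips.append(best_i)
    return sorted(pips)

f = np.abs(np.arange(11) - 5).reshape(-1, 1).astype(float)  # V-shaped series
print(pip_skeleton(f, 3))  # prints [0, 5, 10]
```

On the V-shaped toy series the apex (index 5) is the first interior point selected, matching the intuition that turning points dominate the skeleton.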
As previously mentioned, the skeletons of the present invention can be used for a number of practical application data, including, for example, stock market data and exchange rates, biomedical
measurements, weather data, history of product sales, audio, video, etc.
More generally, the present invention allows such functions as recognizing or identifying events or specific sequences, searching for an event or similar event, analyzing a time series, discovering
relationships within a time series or between two different time series, categorization of signals into groups or clusters, optimization processing, time-series compression, or indexing of data.
As a point in passing, measurements using the present invention will be different depending on the selection of the starting point and end point. The assumption is that the first and the last point
are selected so as to capture the signal of interest or a portion of a signal of interest. It is noted that this is quite similar to how humans perceive the signal.
Taking, for example, a time series of stock prices, one might be interested in the behavior over the last year, or over the last month only. Depending on which period is selected, the signal, although the same, will look very different to the observer, as the extreme points or PIPs have an entirely different meaning. However, it should also be clear that, if all signals of interest have the same end points, the resultant perceptual skeletons will be correspondingly related over the period of interest, including corresponding metrics of similarity, even if the perceptual skeletons would change somewhat if another endpoint had been selected.
In the example above, the PIPs represent first-order maxima, since this is how they were defined (e.g., by computing the metric d). However, it is noted that there could be applications where PIPs are defined as second- or higher-order maxima (e.g., if the change in the growth rate, or other discontinuities, were to be the focus).
If a desired task involves determining similarity between two functions X and Y, and the two functions are reduced to their perceptual skeletons F_s^X and F_s^Y, the final step is to compute the similarity between the simplified representations.
We will first consider the local similarity metric, d, as a global distance measure. However, as is often reported, Minkowski-based metrics have drawbacks in comparing time series. Therefore, we will also consider multivariate dynamic time warping (DTW) as an alternative measure.
We start with the perceptual skeletons F_s^X = [f^X(t_1), . . . , f^X(t_{N_X})] and F_s^Y = [f^Y(t_1), . . . , f^Y(t_{N_Y})], where N_X and N_Y are the number of points in each skeleton, respectively. To compute the similarity measure between the skeletons, we first construct an N_X × N_Y matrix M, where M(i, j) = d(f^X(t_i), f^Y(t_j)), and d is the local similarity metric. The warping path, W = w_1, w_2, . . . , w_L, where w_l = (a, b), is a contiguous set of matrix elements that defines a mapping between F_s^X and F_s^Y, subject to: the boundary conditions w_1 = (1, 1) and w_L = (N_X, N_Y); the continuity constraint that if w_l = (a, b) then w_{l−1} = (a', b'), where a − a' ≤ 1 and b − b' ≤ 1; and the monotonicity constraint a − a' ≥ 0 and b − b' ≥ 0. As there are many warping paths that satisfy these conditions, we are interested in finding the path that minimizes the warping cost:

D(F_s^X, F_s^Y) = min_W Σ_{l=1}^{L} M(w_l).
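Under these constraints the warping cost D can be computed by standard dynamic programming over the cost matrix M. The sketch below is a generic DTW implementation (not code from the patent), using the Euclidean local metric; the toy skeletons r, i1, and i2 loosely echo the reference-versus-two-inputs setup of FIG. 6:

```python
import numpy as np

def dtw_distance(FX, FY):
    """Dynamic time warping cost between skeletons FX (Nx, K) and FY (Ny, K),
    with Euclidean local metric d and the boundary, continuity, and
    monotonicity constraints of the text."""
    Nx, Ny = len(FX), len(FY)
    M = np.linalg.norm(FX[:, None, :] - FY[None, :, :], axis=-1)  # cost matrix
    D = np.full((Nx, Ny), np.inf)
    D[0, 0] = M[0, 0]
    for i in range(Nx):
        for j in range(Ny):
            if i == j == 0:
                continue
            prev = min(D[i - 1, j] if i else np.inf,
                       D[i, j - 1] if j else np.inf,
                       D[i - 1, j - 1] if i and j else np.inf)
            D[i, j] = M[i, j] + prev          # cheapest path into (i, j)
    return D[-1, -1]

r  = np.array([[0.], [2.], [0.]])             # reference skeleton
i1 = np.array([[0.], [2.], [2.], [0.]])       # input 1: same shape, stretched
i2 = np.array([[0.], [5.], [0.]])             # input 2: different amplitude
print(dtw_distance(r, i1), dtw_distance(r, i2))  # prints 0.0 3.0
```

As in the FIG. 6 discussion, the warping path absorbs the time-axis stretch of i1 entirely, so i1 is judged closer to r than i2.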
[0062] FIG. 6 demonstrates this method of similarity based on cost matrices and warping paths. Three time series 601, 602, 603, presumed to have PIPs as identified, are shown, and the perceptual skeletons are shown in graph 604. The question of interest 600 is to determine which of the two input signals, i_1 or i_2, is closer to the reference signal r.

The matrix 605 shows the M matrix between the reference signal 601 and input signal 1 (602). The numbers 1-5 on the left side of the matrix 605 correspond to the five PIPs of the reference signal, and the numbers 1-5 across the top correspond to the five PIPs of input signal 1 (602). The numbers in the grid cells of matrix 605 indicate the squared vertical distance between the two sets of PIPs. The gray cells indicate the warping path and provide a similarity measure (e.g., "distance") of 3.71 between the reference signal and input signal 1. Matrix 606 provides similar information between the reference signal and input signal 2, and its warping path shows a "distance" of 5.02.
The application and performance of the method of the present invention will now be demonstrated in a financial modeling application, using a dataset consisting of 1986-2006 daily stock prices for the Dow Jones Industrial (DJI) index. This index includes 32 stocks.

As a first demonstration, a search query is exercised to find a stock whose time series is similar to an input time series. FIG. 7 shows the result 700 of this search exercise when the query is the stock price series 701 for American Express in a three-month period starting on Nov. 14, 2005. Using the skeleton representation, the closest match (using both the Euclidean distance and DTW) was found to be the JP Morgan stock price series 702. The closest match using the Euclidean distance on the original series is the Hewlett Packard stock price series.
As a second demonstration of the processing potential of the present invention, we will now consider the following model of the stock market. We assume a market with Q assets (for our dataset, Q = 32). Market vectors p(t) = [p_1(t), . . . , p_Q(t)] and r(t) = [r_1(t), . . . , r_Q(t)] are vectors of nonnegative numbers representing asset prices and returns (price relatives) for every trading day.
Let us assume the following simple sequential "momentum" investment strategy. An investor starts investing at time t_0 and rebalances her portfolio every T_r days. The investor can invest all her wealth into only one stock. Let S_0 denote the investor's initial capital. Then, at the end of the trading period the investor's wealth becomes:

S_{t_0+T_r} = S_0 Π_{t=t_0}^{t_0+T_r} r_i(t),

where i is the index of the asset being invested in, since r represents the rate expressed as (current price)/(price of previous period).
In order to select the investment for the next trading period, the investor will consider the evolution of the market over the T_h days prior to the decision time, which is represented by a sequence of price vectors P(t) = [p(t−T_h), . . . , p(t−1)]. The investor will analyze the stock market history, find a period when the market behaved similarly to the current one, identify the asset that had the highest return in that period, and select that asset as the new investment.

In other words, for every trading period t_i, the investor finds the index of the new investment as

ind(i) = arg min_{j = t_i−T_h, . . . , t_i−1} D(P(t_i), P(t_j)),
and the investor's return after N trading periods becomes

R = S_N / S_0 = Π_{n=1}^{N} Π_{t=(n−1)T_r+1}^{n·T_r} r_{ind(n)}(t).
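The trading loop above can be simulated end to end on synthetic data. In this sketch the skeleton-based measure D is replaced by a plain Euclidean window distance as a stand-in (the patent would use the perceptual-skeleton similarity here), and the parameter names Q, T_r, T_h follow the text; the data and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, days, Tr, Th = 4, 400, 20, 60              # assets, horizon, rebalance, history
r = rng.normal(1.001, 0.01, size=(days, Q))   # synthetic daily price relatives
prices = np.cumprod(r, axis=0)                # price vectors p(t)

def most_similar_past(P, t, Th, dist):
    """Index j of the past Th-day window most similar to the current one.
    `dist` stands in for the skeleton-based measure D of the text."""
    cur = P[t - Th:t]
    best_j, best_d = None, np.inf
    for j in range(Th, t - Th + 1):
        d = dist(cur, P[j - Th:j])
        if d < best_d:
            best_j, best_d = j, d
    return best_j

euclid = lambda A, B: np.linalg.norm(A - B)

wealth = 1.0
for t0 in range(2 * Th, days - Tr, Tr):
    j = most_similar_past(prices, t0, Th, euclid)
    ind = int(np.argmax(prices[j - 1] / prices[j - Th]))  # best asset back then
    wealth *= float(np.prod(r[t0:t0 + Tr, ind]))          # hold it for Tr days
print(round(wealth, 3))
```

Swapping `euclid` for a skeleton-based distance is the only change needed to reproduce the PS+ED / PS+DTW variants compared in Table 1.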
The sequence of price vectors P(t) is a Q-dimensional time series, where each point represents a market vector at time t. Thus, the present invention can be used to find the most similar past market conditions. We will evaluate the performance of our method by comparing the achieved total return R to the returns obtained by using the Euclidean distance (ED) and dynamic time warping (DTW) as similarity metrics between the original signals. We will also compare the performance of the perceptual skeletons with DTW as the similarity metric (PS+DTW) and with the Euclidean distance as the similarity metric (PS+ED). Instead of the distortion rate, we control the quality of the representation via the parameter SL_min, which defines the minimum length of a segment between two PIPs.
Results for the different choices of (T_r, T_h, SL_min) shown in the first column of Table 1 below are given in the four right-hand columns of the table. The skeleton-based representation clearly outperforms the other methods, as demonstrated by the higher returns shown in the second and third columns relative to the returns in the fourth and fifth columns.
As expected, when used with the original signal, DTW in general performs better than ED. However, when using perceptual skeletons, both DTW and ED generate the same returns, indicating that the perceptual representation is robust enough to be used even with the simplest distance measures.
We also observe that the performance of the skeleton representations depends on the compression factor and deteriorates when the representation becomes too coarse (large SL_min, resulting in large distortion rates), or when the simplification is insufficient (too small SL_min, yielding a signal representation that is similar to the original signal).
TABLE 1

(T_r, T_h, SL_min)   PS+DTW   PS+ED   DTW    ED
(150, 150, 10)       1.35     1.36    1.36   1.18
(150, 150, 15)       2.11     2.11    1.36   1.18
(150, 150, 20)       2.33     2.33    1.36   1.18
(150, 150, 30)       1.57     1.77    1.36   1.18
(120, 120, 3)        1.57     1.57    1.96   1.57
(120, 120, 5)        2.36     2.36    1.96   1.57
(120, 120, 10)       2.13     2.13    1.96   1.57
(120, 120, 15)       2.60     2.60    1.96   1.57
(120, 120, 20)       2.17     2.17    1.96   1.57
(90, 90, 5)          2.17     2.17    1.26   2.17
(90, 90, 15)         2.36     2.36    1.26   2.17
(90, 90, 20)         1.81     1.81    1.26   2.17
(40, 90, 10)         2.28     2.28    1.82   2.09
(20, 90, 10)         2.01     1.92    1.82   1.34
[0074] FIG. 8 shows a block diagram 800 of a software-based implementation of the present invention. I/O interface module 801 provides the interface to receive ordered-sequence data for processing from an outside source, although such data could also be received via memory interface module 802 from a storage device 803. I/O interface 801 would also receive user inputs from a keyboard, mouse, or other input device, in coordination with the graphical user interface (GUI) module 804, and output results for user display, again in coordination with the GUI module 804.
GUI module 804 would also provide the capability for the user to control the software tool, including such tasks, depending upon the function to be performed, as identifying the ordered sequence to be reduced to a skeleton, entering data such as the endpoints of the ordered sequence if endpoints are manually entered by the user, and defining the termination test and/or parameters for this test.
Calculator module 805 provides the capability to execute the various mathematical procedures for such tasks as calculating the skeleton and similarity values. Control module 806 could be implemented
as the main function of an application program, serving to invoke various subroutines related to the other block diagram modules as appropriate.
Exemplary Hardware Implementation

[0077] FIG. 9 illustrates a typical hardware configuration of an information handling/computer system in accordance with the invention, which preferably has at least one processor or central processing unit (CPU) 911.
The CPUs 911 are interconnected via a system bus 912 to a random access memory (RAM) 914, read-only memory (ROM) 916, input/output (I/O) adapter 918 (for connecting peripheral devices such as disk
units 921 and tape drives 940 to the bus 912), user interface adapter 922 (for connecting a keyboard 924, mouse 926, speaker 928, microphone 932, and/or other user interface device to the bus 912), a
communication adapter 934 for connecting an information handling system to a data processing network, the Internet, an Intranet, a personal area network (PAN), etc., and a display adapter 936 for
connecting the bus 912 to a display device 938 and/or printer 939 (e.g., a digital printer or the like).
In addition to the hardware/software environment described above, a different aspect of the invention includes a computer-implemented method for performing the above method. As an example, this
method may be implemented in the particular environment discussed above.
Such a method may be implemented, for example, by operating a computer, as embodied by a digital data processing apparatus, to execute a sequence of machine-readable instructions. These instructions
may reside in various types of signal-bearing media.
Thus, this aspect of the present invention is directed to a programmed product, comprising signal-bearing media tangibly embodying a program of machine-readable instructions executable by a digital
data processor incorporating the CPU 911 and hardware above, to perform the method of the invention.
This signal-bearing media may include, for example, a RAM contained within the CPU 911, as represented by the fast-access storage for example. Alternatively, the instructions may be contained in
another signal-bearing media, such as a magnetic data storage diskette 1000 (FIG. 10), directly or indirectly accessible by the CPU 911.
Whether contained in the diskette 1000, the computer/CPU 911, or elsewhere, the instructions may be stored on a variety of machine-readable data storage media, such as DASD storage (e.g., a conventional "hard drive" or a RAID array), magnetic tape, electronic read-only memory (e.g., ROM, EPROM, or EEPROM), an optical storage device (e.g., CD-ROM, WORM, DVD, digital optical tape, etc.), paper "punch" cards, or other suitable signal-bearing media, including transmission media such as digital and analog communication links and wireless links. In an illustrative embodiment of the invention, the machine-readable instructions may comprise software object code.
From the above discussion, it can be seen that the benefits of the invention include efficient compression of a time signal (or other ordered sequence), compression and representation in accordance with the human visual system, and simplification of the signal for efficient indexing, matching, similarity measurement, and retrieval.
A few non-limiting applications of the present invention include: 1) financial analysis & portfolio optimization; 2) storage, indexing, and searching of medical signals and information, speech,
music, seismological signals, weather & climate data; and 3) applications in business analytics and marketing, such as analyzing product lifecycle, looking for products with similar lifecycles,
looking for customers with similar behavior over time, etc. However, it should be apparent to one having ordinary skill in the art, having taken the discussion herein as a whole, that the present
invention could be applied to any application in which an ordered sequence of data is involved.
In yet another aspect of the present invention, it should be apparent that the method described herein has potential application in widely varying areas for analysis of data, including such areas as business, manufacturing, government, etc. Therefore, the method of the present invention, particularly as implemented as a computer-based tool, can potentially serve as a basis for a business oriented toward analysis of such data, including consultation services. Such areas of application are considered as covered by the present invention.
While the invention has been described in terms of a single preferred embodiment, those skilled in the art will recognize that the invention can be practiced with modification within the spirit and
scope of the appended claims.
Further, it is noted that, Applicants' intent is to encompass equivalents of all claim elements, even if amended later during prosecution.
Patent applications by Aleksandra Mojsilovic, New York, NY US
Determining the high-pressure phase transition in highly-ordered pyrolitic graphite with time-dependent electrical resistance measurements
FIG. 1.
Photomicrographs of the sample at various pressures during Run 1. The sample began as a 70 μm square of HOPG remained nearly rectangular at all pressures. For reference, the diameter of the culet is
300 μm.
FIG. 2.
Resistance versus pressure data for Run 1. Upon initial compression, the pressure was increased rapidly (gray circles), with little time (∼15 min) between pressure increases. The phase change was
observed, indicated by the steep increase in resistance at ∼19 GPa (point A). Near the maximum pressure of 28 GPa (point B), the resistance began to increase noticeably with time, and each change in
pressure (black circles) was subsequently followed by a long observation period to detect these changes. Although the curve shows a large hysteresis in the resistance on decreasing pressure, with the
return transition occurring at ∼10 GPa (point D), it can be seen from the time-dependence of the resistance [Figs. 2 and 3] that the transition begins to reverse direction at point C, when the
pressure was decreased back to ∼19 GPa.
FIG. 3.
Resistance versus time data for Run 1 during compression and decompression surrounding the highest compression point (point B) in Fig. 2. The open circles represent measurements made during the
observation periods, while the black circles represent the measurements where the pressure was changed. Note the positive slope of the resistance versus time at high pressures on both compression
(left of dashed line) and decompression (right of dashed line). Corresponding pressures are listed for reference.
FIG. 4.
Resistance versus time data for Run 1 during decompression near points C and D in Fig. 2. The open circles represent measurements made during the long observation periods, while the black circles
represent the points where the pressure was changed. At point C, corresponding to the 19 GPa transition, the slope of the resistance versus time curve changes from zero to negative. At point D (∼10
GPa), it can be seen that the decrease in resistance over time is larger than the increase seen at each adjustment, leading to an overall decrease in the resistance versus pressure behavior.
Corresponding pressures are listed for reference.
FIG. 5.
Resistance versus pressure data for Run 2. Symbol notation is the same as in Fig. 2. The overall behavior was similar to that during Run 1, but several long observation periods were recorded during
compression as well as decompression. Again, we observe a large resistance increase near 18-19 GPa on compression (point E), and a large hysteresis to this curve, with resistance beginning to
decrease at ∼9 GPa on decompression (point H). However, we observe the change in the slope of the resistance versus time curve at points E and G [Figs. 6 and 7].
FIG. 6.
Resistance versus time data for Run 2 during compression up to point F in Fig. 5. Gray and black circles represent resistance measurements taken during pressure adjustment and correspond to the
respective points in Fig. 5. Open circles represent measurements taken during the long observation periods. During the loading phase of Run 2, the resistance decreased with pressure but remained
constant with time up until the transition pressure (point E, 18-19 GPa) when the slope of the resistance versus time began to increase, at first slowly, and then quickly after each pressure
adjustment. Corresponding pressures are listed for reference.
FIG. 7.
Resistance versus time data for Run 2 during decompression near points G and H in Fig. 5. Symbol notation is the same as in Fig. 6. During the unloading phase of Run 2, the sample resistance
exhibited nearly identical behavior to that in Fig. 4. As the transition pressure (point G, 18-16 GPa) was reached, the slope of the resistance versus pressure curve again became negative. It is
possible that this transition began ∼1-2 GPa earlier, where the slope becomes flat. At point H, these resistance drops became larger than the increase seen at each adjustment, and thus the resistance
drops become apparent in the overall resistance versus pressure behavior. Corresponding pressures are listed for reference.
FIG. 8.
Two models of gasket thickness used for resistivity calculations for measurements taken during the initial compression (Run 1). Gray symbols correspond to a model where gasket thickness changes
linearly with pressure, while black symbols correspond to a model where gasket material is allowed to flow out between the culet edges and to compress with applied pressure.
FIG. 9.
Geometric factor wt/l in the resistivity calculation as calculated from the gasket-flow model for Run 1 (black symbols) and Run 2 (gray symbols). Filled symbols represent compression, while open
symbols represent decompression. It can be seen that this factor is approximately the same at all pressures for both Run 1 and Run 2 and thus cannot be a factor in the overall change in resistance.
FIG. 10.
The total eDOS as computed by GGA for graphite, lonsdaleite, diamond, bct-C[4], and M-carbon at 0 GPa (light gray), 15 GPa (dark gray), and 25 GPa (black). All phases of carbon except for graphite
show a large band gap, yielding insulating behavior as compared to semimetallic graphite. The eDOS of all phases show little variation with pressure at least up to 25 GPa.
Probability Function
April 20th 2011, 08:20 PM #1
Super Member
Feb 2008
Let Y1 and Y2 be independent Poisson random variables with means x1 and x2 respectively. Find the
a) probability function of Y1 + Y2
b) conditional probability function of Y1, given that Y1 + Y2 = m
Please help. I will greatly appreciate it!! Thanks
Here is (a). (b) is left for you to do.
$\Pr(Y_1+Y_2 =n) = \sum_{k=0}^n \Pr(Y_1=k, Y_2 = n-k)$
(since the events are disjoint, that is, mutually exclusive)
$= \sum_{k=0}^n \Pr(Y_1 = k) \Pr(Y_2 = n-k)$
(using independence)
$= \sum_{k=0}^n \frac{e^{-\lambda}\lambda^k}{k!} \frac{e^{-\mu} \mu^{n-k}}{(n-k)!}$
(using the pmf of a Poisson random variable)
$= e^{-(\lambda+\mu)} \sum_{k=0}^n \frac{\lambda^k \mu^{(n-k)}}{k!(n-k)!}$
$= \frac{e^{-(\lambda+\mu)}}{n!} \sum_{k=0}^n {n \choose k} \lambda^k \mu^{(n-k)}$
$= \frac{e^{-(\lambda+\mu)}}{n!} (\lambda+\mu)^n$
(using binomial theorem)
which is recognised as the pmf of a Poisson random variable with parameter $\lambda+\mu$.
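As a quick numerical sanity check of part (a), one can sum the convolution directly and compare it with the Poisson pmf with parameter `lambda + mu` (a Python sketch; note the question writes the means as x1 and x2, while the solution calls them λ and μ; the values λ = 2, μ = 3, n = 4 below are arbitrary):

```python
import math

def pois_pmf(k, lam):
    # pmf of a Poisson random variable with parameter lam
    return math.exp(-lam) * lam**k / math.factorial(k)

lam, mu, n = 2.0, 3.0, 4
# left-hand side: sum over the disjoint events {Y1 = k, Y2 = n - k}
conv = sum(pois_pmf(k, lam) * pois_pmf(n - k, mu) for k in range(n + 1))
# right-hand side: Poisson pmf with parameter lam + mu
direct = pois_pmf(n, lam + mu)
print(abs(conv - direct) < 1e-12)  # True
```

The two numbers agree to floating-point precision, as the binomial-theorem step predicts.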
Last edited by mr fantastic; April 21st 2011 at 11:57 PM. Reason: Fixed latex.
April 20th 2011, 10:53 PM #2
Whitestone Calculus Tutor
Find a Whitestone Calculus Tutor
...Algebra 1 is a textbook title or the name of a course, but it is not a subject. It is often the course where students become acquainted with symbolic manipulations of quantities. While it can
be confusing at first (eg "how can a letter be a number?"), it can also broaden your intellectual scope.
25 Subjects: including calculus, chemistry, physics, statistics
...I try to guide students to understanding the material by trying to ground problems in real life situations: you can see whether an answer makes sense based on some sort of intuition, rather
than just going through the algorithm and hoping you don't mess up. I'm a big fan of unit analysis, where ...
18 Subjects: including calculus, physics, trigonometry, SAT math
...Over the past 3 years, I have been successfully tutoring students in the early Society of Actuary examinations through to MLC. I am close to reaching Associate status with the SOA. I also tutor
candidates for the VEE portion - Corporate Finance, Economics and Linear Regression.
55 Subjects: including calculus, reading, English, geometry
...In addition, I have worked as a K-12 substitute teacher in both the public and private school systems for six years. I see myself as more of an academic mentor; I love enabling students of all
ages to excel in math and the rest of their academic endeavors. I use my background as an actor to con...
27 Subjects: including calculus, reading, statistics, GRE
...Petersburg, Russia. I have also spent many years participating in computer programming competitions, and I competed almost exclusively in Pascal/Delphi (or Kylix, the unix version of pascal).
If you are just starting to learn computer programming, Pascal is a great first language. It is a bit outdated, though.
32 Subjects: including calculus, physics, geometry, statistics
Alameda Science Tutor
Find an Alameda Science Tutor
...I am experienced in conducting research using quantitative (statistics) and qualitative methods. My knowledge in psychology allows me to come up with unique and suitable study plans. I have
completed a Yoga Teacher Training Course at Yoga Tree San Francisco for 200 hours.
4 Subjects: including psychology, study skills, proofreading, yoga
...I also have published articles on the history of standardized testing. My degrees are in history, but I have always required and evaluated student writing in every course I have given. I have
helped scores of students improve their writing skills in order to prepare for the SAT essay section.
49 Subjects: including philosophy, sociology, psychology, GRE
...Then, we'll break your problem areas down into simple steps and walk through their applications. Finally, we'll reinforce your new skills while challenging you to apply your skill set to ever
more subtle and challenging applications. I believe that there's nothing you can't learn.
29 Subjects: including ACT Science, philosophy, reading, English
...However, I also try to instill in my students the intellectual motor necessary to develop deep understanding. They must learn how to teach themselves. Concretely, this means I constantly ask
questions to check understanding and select difficult problems that force them to apply concepts in multiple contexts.
6 Subjects: including chemistry, physics, philosophy, calculus
...I can't wait to work together! :)I tutored a Cal undergrad in introductory Statistics last Spring. This undergrad had dropped the course in the Fall due to a failing grade after the first
midterm. After regularly working with me during the Spring semester, my undergrad tutee ended up with an A in the Statistics class.
27 Subjects: including physics, chemistry, biology, physical science
Nearby Cities With Science Tutor
Berkeley, CA Science Tutors
Daly City Science Tutors
Emeryville Science Tutors
Hayward, CA Science Tutors
Oakland, CA Science Tutors
Piedmont, CA Science Tutors
Richmond, CA Science Tutors
San Francisco Science Tutors
San Leandro Science Tutors
San Lorenzo, CA Science Tutors
San Pablo, CA Science Tutors
San Rafael, CA Science Tutors
South San Francisco Science Tutors
Union City, CA Science Tutors
Walnut Creek, CA Science Tutors
MATH K305 ALL Statistical and Mathematical Techniques for Journalism
Mathematics | Statistical and Mathematical Techniques for Journalism
K305 | ALL | Tba
K305 Statistical and Mathematical Techniques for Journalism (3 cr.)
P: One college-level mathematics course and Journalism J200. R: M118.
Intended for journalism majors. An introduction to the mathematical
and statistical methods necessary in the practice of journalism.
Working with data, measures of central tendency and dispersion.
Statistical inference and hypothesis testing. The use of spreadsheets
in statistical work. Focus on the exposition of mathematical and
statistical results. Credit given only for one of the following: MATH
K300, K305, K310; CJUS P291; ECON E370; PSY K300; SOC S371; or SPEA
(Electrostatics) Energy of a configuration
1. The problem statement, all variables and given/known data
For a given configuration I found the scalar potential [tex]\phi(r)[/tex] (as you can see, it's a function only of r).
My question is about calculating the energy of the system.
2. Relevant equations
W=-\dfrac{\varepsilon_0}{2}\int |\nabla \phi|^2 d^{3}x =\dfrac{1}{2}\int \phi \rho \,d^{3}x
3. The attempt at a solution
I just don't know if I should integrate [tex]\phi(r)[/tex] as a triple integral with limits [tex](0,\infty)\times(0,2\pi)\times(0,\pi)[/tex], or whether I should perform the inverse substitution (from spherical coordinates to Cartesian) and then integrate [tex]\phi(x,y,z)[/tex] as a triple integral with limits [tex](-\infty,\infty)\times(-\infty,\infty)\times(-\infty,\infty)[/tex].
Moreover, if I perform the change of variables, what happens to the Jacobian of the substitution (|J| = [tex]r^2 \sin(\vartheta)[/tex])?
Advances in Mathematics Open Archive
List of the recent articles made freely available as part of this journal’s open archive. All articles published after 48 months have unrestricted access and will remain permanently free to read and download.
1 March 2010
Chieh-Yu Chang | Matthew A. Papanikolas | Dinesh S. Thakur | Jing Yu
We consider the values at proper fractions of the arithmetic gamma function and the values at positive integers of the zeta function for Fq[θ] and provide complete algebraic independence results
1 March 2010
Claus Hertling | Christian Sevenheck
We investigate variations of Brieskorn lattices over non-compact parameter spaces, and discuss the corresponding limit objects on the boundary divisor. We study the associated variation of
1 March 2010
Mark Pollicott | Howard Weiss | Scott A. Wolpert
We prove topological transitivity for the Weil–Petersson geodesic flow for real two-dimensional moduli spaces of hyperbolic structures. Our proof follows a new approach that combines the density
1 March 2010
Alfonso Gracia-Saz | Rajan Amit Mehta
A VB-algebroid is essentially defined as a Lie algebroid object in the category of vector bundles. There is a one-to-one correspondence between VB-algebroids and certain flat Lie algebroid
1 March 2010
Dmitry Kleinbock | Barak Weiss
We show that the sets of weighted badly approximable vectors in Rn are winning sets of certain games, which are modifications of (α,β)-games introduced by W.M. Schmidt in 1966. The latter winning
1 March 2010
Marina Ghisi | Massimo Gobbino
We consider the second order Cauchy problemu″+m(|A1/2u|2)Au=0,u(0)=u0,u′(0)=u1, where m:[0,+∞)→[0,+∞) is a continuous function, and A is a self-adjoint nonnegative operator with dense domain on a
1 March 2010
Patrick Dehornoy
We develop combinatorial methods for establishing lower bounds on the rotation distance between binary trees, i.e., equivalently, on the flip distance between triangulations of a polygon. These
1 March 2010
Shin-Yao Jow
Given a big divisor D on a normal complex projective variety X, we show that the restricted volume of D along a very general complete-intersection curve C⊂X can be read off from the Okounkov body
1 March 2010
Mark S. Ashbaugh | Fritz Gesztesy | Marius Mitrea | Gerald Teschl
We study spectral properties for HK,Ω, the Krein–von Neumann extension of the perturbed Laplacian −Δ+V defined on C0∞(Ω), where V is measurable, bounded and nonnegative, in a bounded open set Ω⊂Rn
1 March 2010
Michael Barot | Elsa Fernández | María Inés Platzeck | Nilda Isabel Pratti | Sonia Trepode
In this paper the relationship between iterated tilted algebras and cluster-tilted algebras and relation extensions is studied. In the Dynkin case, it is shown that the relationship is very strong
15 February 2010
Vyacheslav Futorny | Alexander Molev | Serge Ovsienko
We address two problems with the structure and representation theory of finite W-algebras associated with general linear Lie algebras. Finite W-algebras can be defined using either Kostant's
15 February 2010
Kwokwai Chan | Naichung Conan Leung
We construct and apply Strominger–Yau–Zaslow mirror transformations to understand the geometry of the mirror symmetry between toric Fano manifolds and Landau–Ginzburg models....
15 February 2010
Graciela Carboni | Jorge A. Guccione | Juan J. Guccione
We obtain a mixed complex, simpler than the canonical one, given the Hochschild, cyclic, negative and periodic homology of a crossed product E=A#fH, where H is an arbitrary Hopf algebra and f is a
15 February 2010
B. Feigin | E. Frenkel | V. Toledano Laredo
We introduce a class of quantum integrable systems generalizing the Gaudin model. The corresponding algebras of quantum Hamiltonians are obtained as quotients of the center of the enveloping
15 February 2010
David Brander | Wayne Rossman | Nicholas Schmitt
We give an infinite dimensional generalized Weierstrass representation for spacelike constant mean curvature (CMC) surfaces in Minkowski 3-space R2,1. The formulation is analogous to that given by
15 February 2010
Jean-Benoît Bost | Klaus Künnemann
We define and investigate extension groups in the context of Arakelov geometry. The “arithmetic extension groups” ExtˆXi(F,G) we introduce are extensions by groups of analytic types of the usual
15 February 2010
Andre Henriques | David E. Speyer
We introduce a recurrence which we term the multidimensional cube recurrence, generalizing the octahedron recurrence studied by Propp, Fomin and Zelevinsky, Speyer, and Fock and Goncharov and the
30 January 2010
Giovanni Catino | Zindine Djadli
In this paper we prove that, under an explicit integral pinching assumption between the L2-norm of the Ricci curvature and the L2-norm of the scalar curvature, a closed 3-manifold with positive
30 January 2010
De-Qi Zhang
For a compact Kähler manifold X and a strongly primitive automorphism g of positive entropy, it is shown that X has at most ρ(X) of g-periodic prime divisors. When X is a projective threefold,
30 January 2010
Mathias Beiglböck | Vitaly Bergelson | Alexander Fish
Jin proved that whenever A and B are sets of positive upper density in Z, A+B is piecewise syndetic. Jin's theorem was subsequently generalized by Jin and Keisler to a certain family of abelian
30 January 2010
Gavril Farkas
We determine the Kodaira dimension of the moduli space Sg of even spin curves for all g. Precisely, we show that Sg is of general type for g>8 and has negative Kodaira dimension for g<8....
30 January 2010
Selman Akbulut | Baris Efe | Sema Salur
Previously the two of the authors defined a notion of dual Calabi–Yau manifolds in a G2 manifold, and described a process to obtain them. Here we apply this process to a compact G2 manifold,
30 January 2010
Jeffrey Streets
We study the behavior of the Ricci Yang–Mills flow for U(1) bundles on surfaces. By exploiting a coupling of the Liouville and Yang–Mills energies we show that existence for the flow reduces to a
30 January 2010
J. Bell | S. Launois | J. Lutley
We develop a new approach to the representation theory of quantum algebras supporting a torus action via methods from the theory of finite-state automata and algebraic combinatorics. We show that
30 January 2010
Ricardo J. Alonso | Emanuel Carneiro
We extend the Lp-theory of the Boltzmann collision operator by using classical techniques based in the Carleman representation and Fourier analysis, allied to new ideas that exploit the radial
30 January 2010
Matteo Bonforte | Juan Luis Vázquez
We investigate qualitative properties of local solutions u(t,x)⩾0 to the fast diffusion equation, ∂tu=Δ(um)/m with m<1, corresponding to general nonnegative initial data. Our main results are
30 January 2010
Imre Bárány | Pavle Blagojević | András Szűcs
We show that for a given planar convex set K of positive area there exist three pairwise internally disjoint convex sets whose union is K such that they have equal area and equal perimeter....
30 January 2010
Ivan Cheltsov | Ilya Karzhemanov
For any smooth quartic threefold in P4 we classify pencils on it whose general element is an irreducible surface birational to a surface of Kodaira dimension zero....
30 January 2010
Boris Pittel
We consider a random (multi)graph growth process {Gm} on a vertex set [n], which is a special case of a more general process proposed by Laci Lovász in 2002. G0 is empty, and Gm+1 is obtained from
30 January 2010
Ursula Molter | Ezequiel Rela
In this paper we study the problem of estimating the generalized Hausdorff dimension of Furstenberg sets in the plane. For α∈(0,1], a set F in the plane is said to be an α-Furstenberg set if for
30 January 2010
Benjamin Steinberg
Let K be a commutative ring with unit and S an inverse semigroup. We show that the semigroup algebra KS can be described as a convolution algebra of functions on the universal étale groupoid
30 January 2010
Zhi-Wei Li | Pu Zhang
An Artin algebra A is said to be CM-finite if there are only finitely many isomorphism classes of indecomposable finitely generated Gorenstein-projective A-modules. Inspired by Auslander's idea on
30 January 2010
Luchezar L. Avramov | Srikanth B. Iyengar | Joseph Lipman | Suresh Nayak
We study functors underlying derived Hochschild cohomology, also called Shukla cohomology, of a commutative algebra S essentially of finite type and of finite flat dimension over a commutative
15 January 2010
Xijun Hu | Shanzhong Sun
We illustrate a new way to study the stability problem in celestial mechanics. In this paper, using the variational nature of elliptic Lagrangian solutions in the planar three-body problem, we
15 January 2010
Antonio Córdoba | Diego Córdoba | Francisco Gancedo
We study the free boundary evolution between two irrotational, incompressible and inviscid fluids in 2-D without surface tension. We prove local existence in Sobolev spaces when, initially, the
15 January 2010
Giorgio Patrizio | Andrea Spiro
A (bounded) manifold of circular type is a complex manifold M of dimension n admitting a (bounded) exhaustive real function u, defined on M minus a point xo, so that: (a) it is a smooth solution
15 January 2010
Ovidiu Munteanu | Natasa Sesum
In this paper we investigate the existence of a solution to the Poisson equation on complete manifolds with positive spectrum and Ricci curvature bounded from below. We show that if a function f
15 January 2010
Erwin Lutwak | Deane Yang | Gaoyong Zhang
Minkowski's projection bodies have evolved into Lp projection bodies and their asymmetric analogs. These all turn out to be part of a far larger class of Orlicz projection bodies. The analog of
15 January 2010
Jan Draisma
This paper deals with two families of algebraic varieties arising from applications. First, the k-factor model in statistics, consisting of n×n covariance matrices of n observed Gaussian random
15 January 2010
Isamu Iwanari
This paper is largely concerned with constructing coarse moduli spaces for Artin stacks. The main purpose of this paper is to introduce the notion of stability on an arbitrary Artin stack and
15 January 2010
Graeme Kemkes | Xavier Pérez-Giménez | Nicholas Wormald
In this work we show that, for any fixed d, random d-regular graphs asymptotically almost surely can be coloured with k colours, where k is the smallest integer satisfying d<2(k−1)log(k−1). From
15 January 2010
Dzmitry Badziahin
The goal of this paper is to develop a coherent theory for inhomogeneous Diophantine approximation on curves in Rn akin to the well established homogeneous theory. More specifically, the measure
15 January 2010
M. Wendt
In this paper, we provide combinatorial descriptions of A1-fundamental groups of smooth toric varieties. As a corollary, a smooth projective toric variety for which the irrelevant ideal in the Cox
15 January 2010
S. Artstein-Avidan | V. Milman
In this short note we give a characterization of the support map from classical convexity. We show it is the unique additive transformation from the class of closed convex sets in Rn which include
10 POINTS! Find the given integral and check your answer by differentiation. y^2/(y^3 + 4)^2 dy?
Find `int y^2/((y^3+4)^2)dy` :
Let `u=y^3+4, du=3y^2dy`
Then `int y^2/((y^3+4)^2)dy=1/3int(3y^2)/((y^3+4)^2)dy`
Using the general power rule we can integrate:
`=1/3 int u^(-2)du=-1/3u^(-1)+C` substituting for u we get:
`=-1/3(y^3+4)^(-1)+C` or `-1/(3(y^3+4))+C`
Check: `d/(dy) -1/(3(y^3+4))+C=d/(dy) -1/3(y^3+4)^(-1)+C`
`=y^2/((y^3+4)^2)` as required.
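If you also want a numerical double-check, a central-difference approximation of the derivative of the antiderivative should match the integrand at any test point (a sketch in Python; the point y = 1.5 is arbitrary):

```python
def F(y):
    # the antiderivative found above (constant C omitted)
    return -1.0 / (3 * (y**3 + 4))

def f(y):
    # the original integrand
    return y**2 / (y**3 + 4)**2

y, h = 1.5, 1e-6
approx = (F(y + h) - F(y - h)) / (2 * h)  # central-difference derivative
print(abs(approx - f(y)) < 1e-8)  # True
```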
Biological Data Analysis: Homework 3
Due Tuesday, Sept. 17
You must type this and all other homework assignments. Do not e-mail the assignment to me; turn it in early (at 322 Wolf) for a foreseeable absence, or turn it in late after an unexpected absence
from class.
1. As I was typing this assignment, our cat Gus wanted me to pet him, so he patted me on the arm with his left paw 9 times and his right paw 2 times. Is this significantly different from a 1:1 ratio?
Analyze the data using all three of the goodness-of-fit tests we've learned, report the P-values, and write a sentence interpreting any differences among the results.
exact binomial: P=0.065
chi-square: P=0.035
G-test: P=0.028
The chi-square and G tests give significant (P<0.05) P-values, which are too low.
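All three P-values can be reproduced with nothing but the standard library (a sketch; for one degree of freedom the chi-square survival function reduces to erfc(√(x/2)), and the expected counts assume the 1:1 ratio being tested):

```python
import math

def binom_two_sided(k, n):
    # exact two-sided binomial test against a 1:1 ratio
    # (doubling the tail is valid here because p = 0.5 makes the
    # distribution symmetric)
    tail = sum(math.comb(n, i) for i in range(max(k, n - k), n + 1)) / 2**n
    return min(1.0, 2 * tail)

def chi2_sf_df1(x):
    # P(chi-square with 1 df > x) = erfc(sqrt(x / 2))
    return math.erfc(math.sqrt(x / 2))

obs, exp = [9, 2], [5.5, 5.5]
chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, exp))          # Pearson statistic
g = 2 * sum(o * math.log(o / e) for o, e in zip(obs, exp))       # G statistic

print(round(binom_two_sided(9, 11), 3))  # 0.065
print(round(chi2_sf_df1(chi2), 3))       # 0.035
print(round(chi2_sf_df1(g), 3))          # 0.028
```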
2. Falk and Ayala (1971) collected data on 1187 individuals, recording whether each one clasped their hands with the right thumb on top (R) or the left thumb on top (L). There were 535 R individuals
and 652 L individuals. Is this significantly different from a 1:1 ratio of R and L individuals? Analyze the data using all three goodness-of-fit tests, report the P-values, and write a sentence
interpreting any differences among the results.
exact binomial: P=0.00075
chi-square: P=0.00068
G-test: P=0.00068
The chi-square and G tests give P-values that are only slightly too low, showing that with large sample sizes, all three tests give similar results.
3. Under certain conditions, animal cell lines can become "immortalized," meaning they will keep growing and dividing indefinitely in laboratory cultures. Nowak et al. (2004) looked at the effect of
the pro-apoptotic protein Bax on immortalization of mouse muscle cells. They made mice without the Bax protein (Bax^−/−), established cell lines from them, and compared them to cell lines from mice
with the Bax protein (Bax^+/− and Bax^+/+). After 50 days, all 7 lines of Bax^−/− cells were still growing, while only 3 out of 9 of the lines with Bax were growing. Test the data using all three
tests of independence, and compare the results of the three tests.
Fisher's exact test: P=0.011. Chi-square test: P=0.0063. G-test: P=0.0018. The different P-values illustrate that with small sample sizes, the three tests give different results; in this case, the
chi-square and G-tests give P-values that are too low.
The figure shows how to enter the numbers in the spreadsheet for the chi-square or G-test. The same numbers are used for the Fisher's exact test. Some people used 3 alive and 9 dead for the Bax+
numbers, but the total was 9 for Bax+, so when the question says 3 were alive, it means that 6 were dead.
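Fisher's exact P-value for this 2×2 table can also be reproduced directly: compute the hypergeometric probability of every table with the same margins, and sum those no more probable than the observed table (the common two-sided rule; a sketch in Python, with rows ordered Bax−/− then Bax+):

```python
import math

def fisher_exact_2x2(a, b, c, d):
    # two-sided Fisher's exact test: sum the hypergeometric probabilities
    # of all tables with the same margins that are no more likely than
    # the observed table [[a, b], [c, d]]
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def hyper(x):
        return math.comb(r1, x) * math.comb(r2, c1 - x) / math.comb(n, c1)
    p_obs = hyper(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(hyper(x) for x in range(lo, hi + 1) if hyper(x) <= p_obs + 1e-12)

# 7 alive / 0 dead without Bax, 3 alive / 6 dead with Bax
print(round(fisher_exact_2x2(7, 0, 3, 6), 3))  # 0.011
```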
4. McDonald (1989) collected amphipods (Platorchestia platensis) on a beach on Long Island, New York, and determined their genotype at the mannose-6-phosphate isomerase (Mpi) locus. Totalled across
several dates, there were 1002 Mpi^100/100, 1715 Mpi^100/90, and 761 Mpi^90/90 females; there were 676 Mpi^100/100, 1204 Mpi^100/90, and 442 Mpi^90/90 males. Is the difference in genotype proportions
between females and males significant? Test the data using the chi-squared and G-tests of independence, and compare the results of the two tests. Optional: If you want some extra fun, use SAS to
analyze the data using Fisher's exact test, as well. I'll talk about SAS in class in a couple of weeks, but if you want a challenge, you can read through the handbook page on SAS and try to figure it
out yourself.
Chi-square: P=0.026. G-test: P=0.026. This illustrates that the chi-square and G-tests, which gave different results with the small numbers in the first question, give about the same result with a
large sample size.
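The chi-square test of independence for this 2×3 table is straightforward to reproduce by hand; with two degrees of freedom the survival function simplifies to exp(−χ²/2) (a minimal sketch):

```python
import math

def chi2_independence(table):
    # Pearson chi-square statistic and df for an r x c contingency table
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    stat = sum((table[i][j] - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for i in range(len(rows)) for j in range(len(cols)))
    df = (len(rows) - 1) * (len(cols) - 1)
    return stat, df

# genotype counts: Mpi 100/100, 100/90, 90/90 for females, then males
table = [[1002, 1715, 761], [676, 1204, 442]]
stat, df = chi2_independence(table)
p = math.exp(-stat / 2)  # chi-square survival function for df = 2
print(round(p, 3))  # 0.026
```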
Only a couple of people tried the Fisher's exact test in SAS; here's how to set it up:
data amphipods;
input sex $ genotype $ count;
datalines;
female 100/100 1002
female 100/90 1715
female 90/90 761
male 100/100 676
male 100/90 1204
male 90/90 442
;
proc freq data=amphipods;
weight count / zeros;
tables sex*genotype / chisq;
exact chisq;
run;
The result is P=0.026, illustrating that with large sample sizes, the chi-square and G-test are accurate.
5. Plot the data from question 4 on a graph. You must create this graph using a computer; do not draw it by hand.
1. Look at the web page on drawing graphs with Excel.
2. Proportions used to summarize nominal variables are more informative than plotting the raw numbers. Plotting 1715 females and 1204 males with the Mpi^100/90 genotype isn't helpful; what's
interesting is that 49.3 percent of females and 51.9 percent of males had that genotype.
3. Always label both the X and Y axis.
4. Where possible, it's better to use X-axis labels, rather than a legend, to identify the different bars.
5. Put things you want to compare next to each other; in this case, put the male percentage next to the female percentage for each genotype.
6. If you're printing in black-and-white, don't use colors for the bars that will print as the same shade of gray.
6. Biological statistics is an important part of your life, of course, but it shouldn't be the only part of your life. Do something fun and adventurous this weekend, so fun and so adventurous that
you'll remember it 10 years from now. If you have the kind of fun adventure you can tell me about, then tell me about it; if you have the kind of fun you'd like to keep private, then don't tell me
about it.
Falk, C.T., and F.J. Ayala. 1971. Genetic aspects of arm folding and hand-clasping. Japanese journal of human genetics 15: 241-247.
McDonald, J.H. 1989. Selection component analysis of the Mpi locus in the amphipod Platorchestia platensis. Heredity 62: 243-249.
Nowak, J.A., J. Malowitz, M. Girgenrath, C.A. Kostek, A.J. Kravetz, J.A. Dominov, and J.B. Miller. 2004. Immortalization of mouse myogenic cells can occur without loss of p16INK4a, p19ARF, or p53 and
is accelerated by inactivation of Bax. BMC Cell Biology 5:1.
Return to the Biological Data Analysis syllabus
Return to John McDonald's home page
This page was last revised September 12, 2013. Its URL is http://udel.edu/~mcdonald/stathw3.html
The Hallmarks of Crackpottery, Part 1: Two Comments
Another chaos theory post is in progress. But while I was working on it, a couple of
comments arrived on some old posts. In general, I’d reply on those posts if I thought
it was worth it. But the two comments are interesting not because they actually lend
anything to the discussion to which they are attached, but because they are perfect
demonstrations of two of the most common forms of crackpottery – what I call the
“Education? I don’t need no stinkin’ education” school, and the “I’m so smart that I don’t
even need to read your arguments” school.
Let’s start with the willful ignorance. This is the kind of crackpottery
that frankly bugs me most. As an American, I’m used to the fact that our
culture distrusts intelligence and education. Politicians in America use
“intellectual” as an insult. Mentioning that someone attended one of the best
schools in America is commonly used as a criticism. That’s where this one, by
a guy who calls himself “Vorlath”, comes from. Enough introduction, here
it is (http://scienceblogs.com/goodmath/2009/10/sorry_denise_-_but_god_didnt_m.php#comment-2022628):
I can’t believe there are still people who believe in Cantor’s
theory. It’s complete silliness and makes the people who believe in
superstition look like the smart ones by comparison. Cantor was well known to
treat infinity as a finite number and that’s all he’s doing with his theory.
It doesn’t mean anything.
That argument reduces, roughly, to “I have no idea what Cantor’s argument was,
but I don’t like it, and therefore it’s wrong”.
Cantor didn’t treat infinity like a finite number. What he did was study the
structure of numbers, and realize that “infinity” isn’t a simple thing. You can show
that there are “infinities” that are larger than other “infinities”. In fact, it’s
pretty inescapable.
The rough sketch is: take any set, S. You can create another set, called the power set of S
(usually written 2^S), which consists of the set of all subsets of S. So if S is
the empty set, then 2^S has one value: the set containing the empty set. If S
is the set { a, b }, then 2^S = { {}, {a}, {b}, {a,b} }. The power set of any set S is
always larger than S. So – take the set of all natural numbers. How big is it? Well,
it’s infinite. How big is its powerset? It’s also infinite. But every powerset must be
bigger than the original set – so the powerset of the natural numbers must be larger
than the natural numbers. How can that be? It can be, because there are different kinds
of infinities.
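For finite sets, at least, you can check this mechanically. Here’s a quick Python sketch (my illustration, not anything from Cantor himself) that builds the power set and confirms that |2^S| = 2^|S|, which is always strictly bigger than |S|:

```python
from itertools import combinations

def power_set(s):
    """Return the set of all subsets of s, each as a frozenset."""
    items = list(s)
    return {frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)}

s = {"a", "b"}
ps = power_set(s)
# The empty set is always a member of the power set.
assert frozenset() in ps
# For every finite set, |2^S| = 2^|S| > |S|.  Cantor's theorem is the
# (much harder) statement that the strict inequality survives even
# when S is infinite.
assert len(ps) == 2 ** len(s) > len(s)
```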
Some of that actually has an effect in reality. I can create a perfect one
to one mapping between the natural numbers, and the set of all rational
numbers. They’re the same size. But I can’t do that for real numbers. My
mapping for the rationals won’t include π, unless I cheat and specifically add it
to the list. It won’t include square roots. No matter what I do, I can’t devise any
method for creating a one-to-one correspondence between the natural numbers and the real
numbers. That is what Cantor’s work really showed.
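That natural-to-rational correspondence isn’t magic, either. The standard zig-zag construction can be sketched in a few lines of Python (a toy version of mine, walking the diagonals of the numerator/denominator grid and skipping duplicates like 2/4):

```python
from fractions import Fraction

def rationals():
    """Enumerate every positive rational exactly once, sweeping the
    diagonals of the numerator/denominator grid (num + den constant
    on each diagonal) and skipping values already seen."""
    seen = set()
    d = 1  # on diagonal d, num + den == d + 1
    while True:
        for num in range(1, d + 1):
            q = Fraction(num, d + 1 - num)
            if q not in seen:
                seen.add(q)
                yield q
        d += 1

gen = rationals()
first = [next(gen) for _ in range(6)]
# Every positive rational shows up at some finite position, which is
# exactly what "countable" means.  Diagonalization shows that no such
# generator can exist for the reals.
```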
Intuitively, it seems wrong that there are degrees of infinity, that there
are bigger infinities and smaller infinities. The argument from Vorlath is, basically,
that the fact that it’s not intuitive means that it must be wrong. Vorlath has
no need to know what Cantor really said, or what it really means. Without knowing that,
he just knows it’s wrong, because it’s obviously silly.
Moving on, we come to the genius who doesn’t need to know an argument in
order to disprove it. This is an amazingly common form of argument. I see it
mostly in the form of Cantor disproofs, where it takes the form “Here’s a
one-to-one mapping between the natural numbers and the reals”. The mappings
are always wrong. But what makes the argument particularly annoying is that
the mappings are trivially wrong: Cantor’s diagonalization shows that
given any one-to-one mapping between the naturals and the reals, the
mapping will miss some real numbers. In every case where I’ve seen
one of these arguments, their enumerated mapping fails because Cantor’s
diagonalization shows how to produce a counterexample for it. You can’t disprove
Cantor’s diagonalization without showing that something is wrong with the
diagonalization proof – but these anti-Cantor geniuses constantly think that
they can disprove Cantor without actually addressing the proof.
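To make the diagonalization concrete, here’s a toy Python version (my own sketch, working on finite digit prefixes): hand it any claimed list of reals, and it mechanically produces a number the list missed.

```python
def diagonal_counterexample(digit_rows):
    """Given digit expansions of a claimed enumeration of reals in
    [0, 1), where row k holds the digits of the k-th real, build a
    number whose k-th digit differs from the k-th digit of the k-th
    real.  Using only the digits 4 and 5 sidesteps the usual
    0.999... = 1.000... ambiguity."""
    return [5 if row[k] != 5 else 4 for k, row in enumerate(digit_rows)]

claimed = [
    [1, 4, 1, 5],   # 0.1415...
    [7, 1, 8, 2],   # 0.7182...
    [3, 3, 3, 3],   # 0.3333...
    [5, 0, 0, 0],   # 0.5000...
]
d = diagonal_counterexample(claimed)
# d differs from row k in position k, so it cannot equal any listed real.
assert all(d[k] != claimed[k][k] for k in range(len(claimed)))
```

Any would-be enumeration of the reals is just another input to this function, which is why every crank mapping fails the same way.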
This comment though, has nothing to do with Cantor. It’s on
the subject of iterative compression. Here
it is:
Your talking linear maths not non-linear with non-linear representation you
can compress repeatedly because the input a is different to the output b which
you can then consider B in linear form ready to become non-linear C I know it
can be done as I have the non-linear twin bifurcation maths needed and have
tested it already on random digit data and yes you can unpack this information
too. not to worry I’m am just getting ready to produce the compressor myself
and no I’m not a Christian. There of course for a given data set will be
limits as to how compressed you can make the data but this will not be
relative to linear laws of data compression.
This is, basically, the same thing as the Cantor disproofs. The argument
against iterative compression is pretty simple: compression is intrinsically
limited – because most strings are non-compressible. It doesn’t
matter how clever you are.
The argument is incredibly simple. Imagine that you want to compress all
strings of 128 bits. You’re not very ambitious: you only want to compress them
by one bit – to reduce each of those strings of 128 bits to 127 bits.
You can’t do it.
There are 2^128 strings of 128 bits, and 2^127 strings
of 127 bits. That means that either: (1) you can’t compress half of the strings,
or (2) on average, every 127 bit string is the compressed form of two
128 bit strings. In case 1, you’ve admitted defeat: you can’t compress half of
the strings at all. In case 2, you’re screwed, because compression is
only valuable if it’s reversible – that is, when you compress a string, you
expect to be able to get the original uncompressed string back; but if
there are multiple input strings that can be mapped to the same compressed
string, then your decompressor can only, at best, return a set of
strings saying “The uncompressed string is one of these”.
The more you compress, the worse it gets. Want to compress things by half?
Then you’re talking about compressing 2^N strings into
2^(N/2) values – giving you 2^(N/2) possible uncompressed
values for each compressed string.
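You can even let a computer verify the counting argument for small sizes. This little Python check (my illustration) just counts both sides of the pigeonhole:

```python
def can_compress_all(n):
    """Could any lossless scheme map every n-bit string to a distinct
    shorter string?  It would need 2**n distinct outputs, but there
    are only 2**0 + 2**1 + ... + 2**(n-1) = 2**n - 1 strings shorter
    than n bits, even counting the empty string."""
    inputs = 2 ** n
    shorter_outputs = sum(2 ** k for k in range(n))  # lengths 0..n-1
    return shorter_outputs >= inputs

for n in range(1, 16):
    assert not can_compress_all(n)  # always one string short
```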
The way that this relates to iterative compression is that an iterative
compression system is no different than any other. Sure, you can devise
iterative compression systems – they’re commonly known as “lousy compressors”.
What they do is take some specific set of input strings, and compress them a
little bit. Then you run them again, and they compress a little bit more. Then
you run them again, and they compress a little bit more. Until eventually,
they stop working. And, at best, you’ve compressed your input as much as you
could have in a single pass with a non-iterative compression system.
The unavoidable fact is that the set of inputs is larger than the set of
outputs, and that means that compression is inevitably limited. The reason
that compression works in practice is because our input strings are
highly structured – and in this case, structure is another word for
“redundancy”. We can compress text because text is far from random – it
has redundant structure that we can remove. (For example, in the format
that I’m using to write this post, each letter takes 8 bits. But I’m using
a character set of about 64 characters. So really, even ignoring all of
the redundancy of English, I’m using 8 bits per character when I could
be using only 6. Fully 1/4 of the length of this document is completely
redundant. And that doesn’t even cover the fact that I use words like “that” all
the time – 41 times so far in this document, and each one takes 32 bits.)
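The 8-bits-to-6-bits saving I just described is easy to demonstrate. Here’s a sketch in Python (assuming a made-up 64-symbol alphabet; the exact character set doesn’t matter):

```python
import string

# A 64-symbol alphabet: 52 letters + 10 digits + space + period
# (an assumption for illustration; any 64 symbols would do).
ALPHABET = string.ascii_letters + string.digits + " ."
CODE = {ch: i for i, ch in enumerate(ALPHABET)}

def pack6(text):
    """Re-encode text at 6 bits per character instead of 8 by
    concatenating the 6-bit codes into one big integer."""
    bits = 0
    for ch in text:
        bits = (bits << 6) | CODE[ch]
    nbits = 6 * len(text)
    return bits.to_bytes((nbits + 7) // 8, "big"), len(text)

def unpack6(packed, count):
    bits = int.from_bytes(packed, "big")
    return "".join(ALPHABET[(bits >> (6 * k)) & 0x3F]
                   for k in reversed(range(count)))

msg = "a quarter of this document is redundant"
packed, n = pack6(msg)
assert unpack6(packed, n) == msg  # fully reversible
# len(packed) is roughly 3/4 of len(msg): the redundancy is gone.
```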
The problem with compression has nothing to do with linearity
versus non-linearity. It doesn’t matter how clever you are. If you can’t
explain just how you’re going to compress 2^N strings into
2^(N-1) strings without losing any information, then you
lose: it can’t work. You haven’t addressed the problem.
1. #1 AnyEdge October 28, 2009
I think these are essentially the same argument system: only one is self-deluded. One says: I don’t understand the argument, so it’s wrong. The other says: I pretend to understand the argument,
so it’s wrong. Note that 2 reduces to 1.
Counterintuitive is my favorite word. It means: I understand this, but you’re not going to.
2. #2 James Sweet October 28, 2009
Mostly OT, but I recall back in my BBS days there was a short-lived compression scheme called UC2 (if I recall correctly) that basically just took advantage of the inefficiencies in the
commonly-used compression formats of the day, and just iteratively LZ’d (probably LZ’d — I don’t actually know the details) a couple extra times to squeeze out another 10% or so above the
commodity formats. They got away with it by having a rip-roaring fast implementation, so the extra passes (and their accompanying diminishing returns) didn’t make it a non-starter from a
performance perspective.
Hrm, I wanted to muse about some pie-in-the-sky ideas this post gave me, but it occurs to me it is probably too closely related to my job, and I should stay mum. Ah well…
3. #3 Alex Besogonov October 28, 2009
Typo: “lousy” should be “lossy”.
4. #4 AnyEdge October 28, 2009
Take a digital file. Sort all the ones to the front, zeros to the back. Zeros are just zero: throw them away. Count up the ones.
Take resultant number, represent in binary. This is your compressed file.
Note that this system can be used recursively.
Decompression algorithm is still pending.
5. #5 James Sweet October 28, 2009
@Alex: No, I am pretty sure Mark meant “lousy”. “Lossy” is something completely different, and is typically not iterative. JPEG anyone?
6. #6 Nicolai Hähnle October 28, 2009
There are people who try to disprove Cantor? The world never ceases to amaze me.
The nicest compression crackpot argument I’ve heard of is the following: Okay, so you can’t compress every string, but you could just run your compression algorithm anyway. If it outputs
something shorter, you take that output; if it doesn’t, you take the original string.
Of course this also can’t give you anything useful, but at least it’s more creative than the usual nonsense.
7. #7 Mark C. Chu-Carroll October 28, 2009
I disagree with your definition of counter-intuitive.
Most of us develop a set of basic ideas about how things work based on our experiences. Many of those ideas are common to just about anyone who grows up as a human being on planet earth. A
typical example: If you lift something up and then let go of it, it
will fall down.
When we’re educated, we learn some more abstract ideas – but we rapidly develop a basic understanding of them, usually by relating them to our concrete experiences.
Intuition is that basic understanding that we develop by relating our concrete experience to abstract ideas. Some of it is obvious and correct. We can get commutativity by knowing
that if I pick one apple and you pick two, then we have the same number of apples as if I picked two and you picked one.
The problem with intuition is that our concrete understandings of abstract ideas frequently aren’t correct. We may not have learned all of the details of the abstract idea
before we formed our intuition. Or we may have learned them, but the way that we concretized our understanding into intuition left some out.
In the case of things involving numbers, most people never get beyond the very basic idea of numbers. Almost everyone develops their intuition about numbers based on a very incomplete
understanding. And that leads to things like the huge number of Cantor crackpots: It’s just obvious that you can’t say infinity one is bigger than infinity two. Our basic understanding of what
comparing things means doesn’t allow us to compare infinities.
8. #8 James Sweet October 28, 2009
BTW, I just want to mention that while a number of statements that Mark made in this and the other post are accurate from a theoretical standpoint, they are only “mostly” accurate from a practical
standpoint. Most practical lossless compression formats involve transmitting some preliminary data — usually a Huffman tree of sorts — and because of the way that preliminary data is
encoded and the fact that the compression method (generally) doesn’t take this data into account when figuring out how to compress the rest of the data, you typically are still going to wind up
with a “compressible string” as the output of the majority of practical compression formats.
Which is why if you ZIP something, and then ZIP the ZIP file, you might get a modest improvement, even though ZIP is not a “lousy” compression format.
Not that this contradicts the point of anything Mark is saying!
9. #9 AnyEdge October 28, 2009
I was, of course, being deliberately glib. Counterintuitive indeed conveys the nuances you ascribe to it. But it is often used (not here) as an arrogant hammer.
10. #10 Mark C. Chu-Carroll October 28, 2009
No, not a typo. I meant “lousy”. What I was trying to get at is that the way that a compression system works is basically by recognizing redundancies in input data. If you eliminate all of the
computable redundancies, you’ve got a maximally compressed format. Most compression systems try to capture all computable redundancies in a single pass. A good compression system is one which
does a good job of identifying and eliminating all redundancies in the input data.
For iterative compression to work, it has to mean that the first compression pass didn’t eliminate all of the redundancies that it was capable of recognizing. Instead, it left behind redundancies
that it could recognize, which it would then eliminate in another pass. The worse the compression system, the more likely it would be to benefit from multiple passes. A really good compression
system would work in a single pass.
A less good system might be able to get something out of a second pass. A not-very-good system might be able to get something out of a third pass. But the better the compressor, the sooner it
can’t get any more out of the file.
To be concrete: there’s a compression system called Huffman coding. In the simplest version of Huffman coding, you find the most common character in your file, and you represent it using a very
short format. For things less common, you use longer formats; the least common things, you just leave alone.
It’s possible to imagine an input file where the first pass of a Huffman coding compressor merged pairs of characters into a single byte – and that that single byte would be one of the most common
“characters” in a second pass. So you could benefit from repeating the Huffman coding compression twice. But the amount of compression that you’d get from that second pass would be very small.
And the odds of things falling out so that you could get any benefit from a third pass are vanishingly small.
In fact, in practice, simple Huffman coding for most texts isn’t a particularly great compression system. If you take a better compression system, like the modern LZW compressors that most of us
use, the odds of seeing any significant improvement from a second pass are virtually zero.
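You can see this with any real compressor. A quick demo using Python’s zlib (which is DEFLATE-based, a different algorithm than plain Huffman coding, but the same principle applies):

```python
import zlib

# Highly redundant input: a short phrase repeated many times.
text = b"that redundancy is just another word for structure " * 200

once = zlib.compress(text, 9)    # first pass: huge win
twice = zlib.compress(once, 9)   # second pass: nothing left to find

# The first pass removes essentially all the redundancy it can see;
# its output looks nearly random, so a second pass gains nothing and
# typically makes the data slightly larger (header overhead).
assert len(once) < len(text)
assert len(twice) >= len(once) - 16
```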
11. #11 B-Con October 28, 2009
If I understand the concept of an indescribable number properly, even though there are more Reals than Rationals we will never encounter more than aleph-null of the Reals, simply because they
cannot be expressed.
If this is true, then I don’t know if it would somewhat pacify or further confuse those who think that Cantor was mistaken. Perhaps if they knew that we cannot, technically, ever, in any way,
write down or measure more than aleph-null of the Reals they would feel like their intuitive grasp of numbers makes more sense.
12. #12 Mark C. Chu-Carroll October 28, 2009
Are you kidding? There are still tons and tons of people trying to disprove Cantor. I probably get an anti-Cantor flame at least twice a month, and I’m just a lowly math-blogger. Math journals
still get multiple submissions a year containing attempted refutations of Cantor!
13. #13 Derek R October 28, 2009
I think the fundamental trait of crackpots (the “zeroth” hallmark) is a deep inability to understand that their own knowledge has limits. In other words, if they cannot understand XYZ, then XYZ
must be a hoax. They are unable to believe that there are things beyond their comprehension.
It’s sort of a form of arrogance, because they think they know all there is to know.
14. #14 NitPicker October 28, 2009
Just nit picking, but algebraic numbers are countable. A mapping including square roots is totally legitimate.
15. #15 SciencePundit October 28, 2009
not to worry I’m am just getting ready to produce the compressor myself and no I’m not a Christian.
Huh??? What does being or not being a Christian have anything to do with that?
16. #16 KeithB October 28, 2009
I wonder if your commenter is confusing digital and analog compression schemes.
After all, if I convert all the numbers from 1 to 1,000,000 to logs, I can fit them all into a range from 0 – 60. (Hey! I am an engineer, these are converted to dB!) I can take that 0 – 60, take
the log of that (handling the zero as a special case) and reduce the range still further…
Of course, anybody can *compress* data that way, the trick is to *de-compress* it losslessly and not lose precision.
I bet your commenter is an Audio Engineer who designs companders. 8^)
17. #17 Vorlath October 28, 2009
“because there are different kinds of infinities”
Only during construction of another set and only as long as that set stay dependent on the original.
Look, you have set A and are constructing set B based on set A. So if you treat A as a finite set, you can make something that appears larger. Unfortunately, this is the fallacy of composition.
What applies to the elements does not automatically translate to the whole. That’s the flaw with the whole one-to-one mapping theory and how you apply it.
What you have is where the elements of set B are dependent on the elements of set A. You say so yourself.
“If S is the set { a, b }, then 2S = { {}, {a}, {b}, {a,b} }. The power set of any set S is always larger than S.”
What you don’t say is that this is true only while 2S is dependent on S when the sets are infinite. If you remove the relationship, then both sets are equal size.
Consider the following.
Set A is base2 while Set B is base 3. For any given amount of digits (this is our dependency), base 3 will always be able to represent numbers that the same amount of digits in base 2 cannot
represent. Is base 3 “larger” than base 2? Yes, as long as we’re comparing finite digits. But no if we treat each set as independent. That’s Cantor’s flaw right there. He’s using the finite
digits to say that the whole has a certain property. That’s the fallacy of composition.
As I said, you can have dependent sets that appear to be smaller or larger DURING CONSTRUCTION. This is what you’re saying when you say the power set is larger. Depends on POV whether you’re
seeing it as a dependent or independent set.
Take the set of reals up to 0.5. This is half the “size” of the set of reals. Has to be. But up to 0.5, we have an infinite amount of reals as well. Contradiction. Cantor’s contradiction.
Cantor’s argument does nothing except compare different bases and uses the digit level to produce a contradiction as I explained above. The dependency is the digits, so the two axis can never be
independent. Even in infinity, the two axis will be different sizes (just like 0-0.5 is smaller than 0-1.0 when it’s a subset) until you can treat both axis (or sets) as independent. Then you can
remap them.
By forcing this inequality at the digit level (where one axis will be longer than the other), Cantor constructs his diagonal. Then he states that both axis must be infinite when taken
independently, yet we can create a new element. Well, no shit Sherlock. I can create a new element outside of 0-0.5 at the element level too, but it doesn’t mean jack to the whole of the set.
Cantor’s argument is flawed and so is his first proof which is even more ridiculous.
Cantor uses the digits to create a finite relationship that is different from the relationship between infinite sets. It’s a bogus contradiction and you know it.
18. #18 Bob O'H October 28, 2009
Are you kidding? There are still tons and tons of people trying to disprove Cantor.
An infinity of them, indeed.
19. #19 Cyan October 28, 2009
re: 17
Hooray! I’m so looking forward to this thread!
20. #20 AnyEdge October 28, 2009
Fundamentally, you do not understand the difference between measure and cardinality. Looking at the interval [0,1], then yes, the measure of [0,0.5] is half the measure of [0,1]. However, the
cardinality, the ‘number of elements’ in each set is ‘c’. They are equal.
But I don’t think you WANT to be right. Because being right would mean that you have to change your current position. When your ego is more powerful than your desire to know the truth, you have
abandoned science.
21. #21 Reinier Post October 28, 2009
You have a good point about Cantor’s cardinalities, namely that they are not the same as a notion of set “size” according to which, for instance, the set of reals up to 0.5 is half the “size” of
the set of reals. These two sets would be of different “size”, while they have the same Cantor cardinality.
Your mistake is to assume that Cantor thought the two were equivalent. He didn’t; neither do textbooks that teach Cantor cardinality and the diagonalization argument. Mathematicians do discuss
your kind of “size”, but usually call it measure, and defining measures in general ways is tricky: see e.g. Wikipedia on the Lebesgue measure.
22. #22 Ketil Tveiten October 28, 2009
I will try the impossible:
@17: You are apparently confusing the concepts of *cardinality* (generalisation of “numbers of elements in a set”) with that of *measure* (generalisation of “geometric size of a set”).
It is true that (using standard measure) [0,0.5] has half the *measure* of [0,1], but they have the same *cardinality*. Consider the mapping that takes an element of [0,0.5] and sends it to [0,1]
by multiplying it by 2. This map is obviously both one-to-one and onto*, so for each element in [0,0.5] there is a unique element in [0,1]. Thus, they have the same *cardinality*. It really isn’t
that hard.
*(I hate it when I can’t expect people to understand what ‘bijective’ means.)
23. #23 dean October 28, 2009
It takes either a whole heap of ignorance, or a steaming heap of something else, to believe the ideas in #17.
24. #24 william e emba October 28, 2009
You do not need to understand a theorem’s claimed proof in order to invalidate it. A correct counterexample suffices.
Similarly, you do not need to actually read through a crank’s alleged counterexample to know it is false. Knowing the correct proof suffices.
For a few years, I believed there was one famous professional mathematician who was an utter crackpot regarding Cantor’s theorem. The William Dilworth of Dilworth v Dudley, who sued Dudley for
libel over being called a “crank” in Dudley’s Mathematical Cranks and lost (see the dismissal by Judge Richard Posner), turns out not to be the famed Robert Dilworth of lattice theory fame. The crank
Dilworth managed to publish his gibberish in a nothing journal.
See also Hodge’s article in The Bulletin of Symbolic Logic on the anti-Cantor papers he dealt with as an editor over the years.
25. #25 Mark Cook October 28, 2009
@24: “You do not need to understand a theorem’s claimed proof in order to invalidate it. A correct counterexample suffices.”
True, but it would be nice if those giving incorrect counterexamples would look at this particular proof, given that it can be used to show why their incorrect counterexample does not work.
26. #26 william e emba October 28, 2009
The problem is that these crackpots are incapable of understanding what Cantor’s theorem actually says, let alone actually reading the proof and applying it to their alleged counterexamples.
Vorlath simply makes no sense whatsoever, although some readers do try to parse a sentence here and there and then explain his mistake. That is misunderstanding what he has written, which is
ultimately just mathematical word salad. There’s no way in, no way out.
Old timers will recall James Harris on sci.math. Same crankery, different theorem.
27. #27 Vorlath October 28, 2009
Then that’s exactly what I’m saying. Cantor is mistaking measure (the finite digits) to create his diagonal.
Ketil, your multiplying by 2 is exactly my point. The Y axis in Cantor’s diagonal can be scaled to match the X axis by using base 2 (the same as the X axis). But Cantor’s mapping when picking
digits doesn’t allow it because the two axis are dependent on each other (his own personal custom mapping).
I do know what bijective means. Also, I’m not arguing that any sets have the same or different cardinality. I’m only arguing that Cantor’s argument is wrong.
Also, [0,0.5] does not have the same cardinality as [0,1.0] when the first set is a subset of the other. This is true because all elements of the first set can be found in the second, but not
vice versa. Only when the first is not a subset of the other and taken as independent sets are their cardinality the same.
I know people here won’t like what I’ve just said, but it’s true. As I said earlier, if I map base 2 to base 3 via digits, then base 3 does not have the same cardinality. Once you remove the
dependency on digits, then they DO have the same cardinality. If you want to call one measure, then fine. It’s all relative to the mapping in use at the time. This is what Cantor is doing. His
axis are dependent via digits and is what causes the apparent (but wrongful) difference in cardinality. Until he uses independent sets for both axis, then his argument is flawed.
Note that if you dislike my use of cardinality, then you see the problem with Cantor’s argument. He too is using measure instead of cardinality when building his diagonal.
28. #28 Mark C. Chu-Carroll October 28, 2009
That depends.
You need to be able to provide a counter-example to a proof. But you often can’t tell whether a particular counter-example is valid unless you understand the proof that it purports to be a
counter-example of. In Cantor’s case, the diagonalization shows that given any enumeration, it will be missing reals. Without understanding what Cantor’s proof does, you can’t create a
counter-example to it. Before anyone can take an enumeration of the reals seriously, you need to show why the diagonalization doesn’t work for your enumeration.
I’ve gotten *dozens* of these things from cranks, and every single time, they give me an enumeration of the reals, which can be disproven using the diagonalization.
29. #29 AnyEdge October 28, 2009
@26. Yes. I agree. I fell victim to it; now I will step back.
When someone produces a bijection from N< ->R, and produces the concurrent flaw in Cantor’s proof, I will discard a century of topology. Until then, I need to let this go.
30. #30 AnyEdge October 28, 2009
That was supposed to read ‘a bijection from N to R’ but my fancy ascii symbols got HTML’d away.
And two sets which are bijectable have the same cardinality. By definition. If you don’t like the definition of the term, make up a new term. Just because you want two sets’ cardinality to be
different, that doesn’t make it so.
31. #31 Pelli October 28, 2009
#27: It is not true that if A is a proper subset of B then A and B do not have the same cardinality. You are misapplying your intuition about finite sets to infinite ones.
32. #32 Pelli October 28, 2009
@27: Or you are just using a faulty definition of cardinality. The correct definition is that two sets have the same cardinality if there exists a bijection between them.
[0,0.5] is indeed a proper subset of [0,1], but that doesn’t stop there from being the obvious bijection f(x) = 2x between them.
33. #33 Douglas McClean October 28, 2009
@27 said:
Also, [0,0.5] does not have the same cardinality as [0,1.0] when the first set is a subset of the other. This is true because all elements of the first set can be found in the second, but not
vice versa. Only when the first is not a subset of the other and taken as independent sets are their cardinality the same.
Actually, they have the same cardinality whether they are considered together or independently. This may be the source of your confusion. The fact that “all elements of the first set can be found
in the second, but not vice versa” does not mean that two sets have different cardinality. For example, consider the set of all natural numbers except 37 and the set of all natural numbers. Their
cardinality is the same. What you say is true of finite sets, but it is not true of infinite sets. Two sets have the same cardinality if and only if there is a bijection between them.
In your bizarro-math, do the set of even naturals and the set of odd naturals have the same cardinality or not?
34. #34 Vorlath October 28, 2009
Pelli, you are missing the point. Your argument and mine are the same. What you just said, apply it to Cantor’s argument.
The vertical axis in Cantor’s grid is a subset of the horizontal axis (because the X axis is compressed using base 2). By your own admission, they have the same cardinality. But Cantor uses the
dependent aspect of the digits to create a false illusion that he can create a new element.
Suppose you have the set [0,1.0] and I DON’T create a new set. All I do is create a new division. I just put in a separator in the ORIGINAL set at 0.5 (before or after doesn’t matter). You can’t
tell me that all elements before the division can be mapped to the whole set. It’s impossible (the elements before the division can only map to themselves leaving the rest of the set without a
mapping). This is what Cantor is doing. When you have dependent sets, you have an equivalent situation. Apologies if I can’t explain this concept as clearly as I would like.
35. #35 Vorlath October 28, 2009
Apologies for the double post. What everyone is saying about the cardinality between [0,0.5] and [0,1.0] is EXACTLY MY POINT as to the flaw found in Cantor’s argument. You’re all correct on this
point (so no need to repeat that they have the same cardinality), but none of you are seeing the implications.
36. #36 Pelli October 28, 2009
@27: Start off with this diagonalisation that shows there is no bijection between the natural numbers and the set of sequences with elements 0 or 1: Suppose there is a bijection f between the
natural numbers and the set of sequences. Then create this sequence: Let the kth element in the sequence be the opposite of the kth element of f(k). This sequence cannot be f(m) for any m, since
if it is then it differs from itself in the mth position.
Do you wish to claim this proof is false, or just that a similar argument won’t apply to the reals?
I don’t completely understand your argument. I’m guessing you might have misunderstood the vertical axis. It consists of discrete points numbered 1, 2, 3, etc. It is not a continuum.
37. #37 SciencePundit October 28, 2009
Also, [0,0.5] does not have the same cardinality as [0,1.0] when the first set is a subset of the other. This is true because all elements of the first set can be found in the second, but not
vice versa. Only when the first is not a subset of the other and taken as independent sets are their cardinality the same.
That’s just utter nonsense! Infinite sets don’t behave like finite sets and you can’t draw conclusions about infinite sets using rules that only apply to finite sets. You are wrong: all elements
of the second set can indeed be found in the first set. Any element from [0,1.0] can be divided by 2 to yield a unique element of [0,0.5]. In fact, any subset of the real numbers can be directly
mapped onto any other subset of the reals. For example, the set [0,1.0] has a bijection to the set [1.0,∞). Any element in the first set has a unique corresponding element in the second set
defined by x→1/x and vice versa. There are no elements that don’t map. The set of reals has no “holes” in it.
38. #38 Pelli October 28, 2009
“You can’t tell me that all elements before the division can be mapped to the whole set.”
Yes I can. Map x to 2x. Then every point in [0,0.5] gets mapped to exactly one point in [0,1]. Check out Hilbert’s hotel. Do you agree there is a bijection between the set of positive integers
and the set of positive even integers?
39. #39 Shawn Smith October 28, 2009
Mathematics is a purely human exercise. We humans completely define the exercise. The definition we use for “sets being the same size” is “there exists a one-to-one onto mapping (bijection)
between the two sets.” I like that definition because it works for both finite and infinite sets.
Now, maybe you reject that definition. Maybe your rejection of that definition leads to an exercise that is at least as enjoyable for you as the common definition is to the rest of us. It’s my
impression that’s how non-Euclidean geometry came about. But unless you have done some work to flesh out the system that your definition leads to, don’t come around to people who use mathematics
on a daily basis and tell them, “you’re wrong, I’m right, and you all are so stupid to be using that stupid definition for describing equally sized sets.”
Take the set of reals up to 0.5.
Okay, I guess you mean “the reals between 0 and 0.5.”
This is half the “size” of the set of reals. Has to be.
Because you say so? Puuuhhleeeze. I am not convinced. I doubt many others reading this blog are convinced, either. In order to talk about “size” with regard to infinite numbers, you have to
define “size.” Produce a definition, and see what it leads to.
But up to 0.5, we have an infinite amount of reals as well.
Agreed, assuming again that you really mean “reals between 0 and 0.5”.
Contradiction. Cantor’s contradiction.
What? You haven’t even established what you mean by “size,” yet. Produce a definition, preferably one that will work with both finite and infinite sets, do the work of seeing what happens with
your definition, and present that to us before making bald assertions.
Cantor’s argument does nothing except compare different bases and uses the digit level to produce a contradiction as I explained above.
Cantor uses the same base (base 10, I believe) to produce a real that is not in any given list of reals. The thing that allows him to do that is that each real is infinitely long, so both axes
are infinitely long. The only way the two axes could be a different size is that the horizontal axis is countably infinite (aleph 0) and the vertical axis is not countably infinite (aleph 1). Oh,
wait…that was what Cantor was trying to show, and which you are saying is wrong.
40. #40 SciencePundit October 28, 2009
Okay, it looks like I misunderstood what Vorlath said. It’s still wrong for most of the reasons I said.
41. #41 Mark C. Chu-Carroll October 28, 2009
What I find striking about Vorlath’s argument here is that he’s arguing that Cantor is wrong for concluding that there are differently sized infinities, when that’s plainly ridiculous. But at the
same time, his argument about that relies on creating differently sized infinities.
How is that a refutation of Cantor?
42. #42 pjb October 28, 2009
Vorlath’s own article and comments might show a little more clearly where things are going wrong.
43. #43 Nate October 28, 2009
@Vorlath: What are, in your opinion, the cardinalities of the following sets? And what are the cardinalities as per Cantor’s arguments?
1. The natural numbers {0, 1, 2, …}
2. The even natural numbers {0, 2, 4, …}
3. The odd natural numbers {1, 3, 5, …}
4. The non-negative reals [0, infty)
5. The reals on a "small" interval: [0, 1]
6. The reals on a “large” interval: [0, 10^6]
44. #44 Vorlath October 28, 2009
Pelli: “Yes I can. Map x to 2x.”
That would make the subset into an independent set. Cantor is not using an independent set as you’re showing (which would be cool if he had). If you do what you claim to Cantor’s axis, then you
lose the ability to create a diagonal where this number is not found in the list.
And to everyone else saying you can map [0,0.5] to [0,1.0], please stop and reread what I’ve said. I agree with all of you. In fact, you’re all making my point for me. What you think I’m doing
wrong is exactly what Cantor is doing wrong and is in fact what I’ve been unsuccessfully trying to explain.
Mark: Finally, someone who is getting close. Brilliant! It’s the first time I’ve seen someone move closer to what I’m trying to explain.
Two different infinities, quite right. But I’m saying that the USE of different infinities is flawed by Cantor, and that he DECLARES them the same. In other words, Cantor is declaring that the
two axes are infinite “size” (sorry if that’s the wrong term). But his grid uses a relationship that created an uneven mapping digit-wise (aka FINITE relationship) much like mapping base 2 to
base 3 via digits. See what I’m getting at? That’s the flaw. That’s what’s wrong with Cantor’s diagonal argument. He’s mapping via digits instead of by element and we all know you can’t do that.
He thinks he’s mapping by element but he is not.
If you can’t map by digit between base 2 and base 3, why should we believe it between base 2 (X axis) and an enumeration (akin to base 1 if there were such a thing) (Y axis)?
45. #45 Nate October 28, 2009
A more thoughtful post: when I learn a new concept I sometimes get caught up on a part that I seem to have “disproved”. I then seek an explanation why my proof is wrong, which helps me to
understand the concept in general. If, for some reason, I can’t find a disproof, it is sufficient to know that almost all mathematicians accept the concept and that many other theories depend on
it. I have the humility to say “I don’t understand why my proof is wrong, but I understand that other people do, and that is good enough for me.”
Ultimately I need to be able to understand my mistake, but as a new learner, which Vorlath seems to be in this subject, I need to take certain things as true to proceed.
Vorlath’s claim that Cantor’s arguments are invalid doesn’t stand in some void. He’s claiming that countless mathematicians, engineers, computer scientists, and whoever else are all deluded – and
that, despite claiming that the disproof is simple – only he has discovered it. He’s further claiming that the tenets of topology, computability, and many other theories are in jeopardy. And he
has the audacity to make this extraordinarily sweeping statement with the triteness of “BTW, I’ve just disproved Cantor’s theory with the above image” (taken from his website).
Vorlath, have the humility to understand that everyone else understands why you are wrong, even if you don’t (yet).
46. #46 Blaise Pascal October 28, 2009
@32: I wouldn’t say he’s using a “faulty” definition of cardinality; he’s certainly using a non-Cantorian definition, but perhaps in the end his definition of cardinality for infinite sets
ultimately makes more sense than the standard Cantorian definition. I doubt it, though.
More fundamentally, it appears that the biggest complaint from Vorlath is about Cantor’s use of the decimal expansion of reals to generate the diagonalized counterexample. I personally feel there
are surmountable technical issues with the standard presentation (relating to the equivalence between pairs of digit sequences like 0.4999999… and 0.5), but as I said, they are surmountable.
Stepping away from reals, Cantor’s proof is very suitable for showing that there isn’t a surjection from S to 2^S for any set S (including N). For suppose there is such a surjection f:S->2^S.
Take the subset K of S such that n in K if and only if n not in f(n). Then K is not equal to f(i) for any i in S, because i is in exactly one of K or f(i). This contradicts the premise that f was
a surjection, so no such surjection can exist.
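For finite sets, this diagonal construction can be verified exhaustively by machine. The sketch below is my own illustration (the infinite case is what the proof above handles; a program can only cover a finite S): it enumerates every function f from a 3-element S to its power set and checks that K = {n in S : n not in f(n)} is always missing from the image.

```python
from itertools import combinations, product

# Exhaustive finite check of the diagonal set K (illustration only; a program
# cannot enumerate functions on an infinite domain).
S = [0, 1, 2]
subsets = [frozenset(c) for r in range(len(S) + 1)
           for c in combinations(S, r)]            # all 2^3 = 8 subsets of S

checked = 0
for images in product(subsets, repeat=len(S)):     # every f: S -> 2^S (8^3 maps)
    f = dict(zip(S, images))
    K = frozenset(n for n in S if n not in f[n])   # the diagonal set
    assert K not in f.values()                     # f never hits K: no surjection
    checked += 1

print(checked)  # -> 512
```

The assertion never fires, for exactly the reason in the proof: if K were f(i) for some i, then i would be in K if and only if i were not in K.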
With a proof that the relation |A| ≤ |B|, defined as “There exists a surjection from B to A”, is a well-defined ordering relation, that is sufficient to prove that there is a never-ending chain
of infinities, even without showing a bijection between 2^N and R.
47. #47 Pelli October 28, 2009
Vorlath, I don’t quite follow your arguments. I’d like to know what you think about my proof in 36 that there is no bijection between the set of natural numbers and the set of infinite binary
strings? Is it correct or not? If not, then which sentence is wrong?
48. #48 Radoslav Harman October 28, 2009
IMO Vorlath is not satisfied with the fact that the vertical axis corresponds to *enumeration* of reals, while the horizontal axis corresponds to *digits* of reals, i.e., the meaning and perhaps
the “sizes” (“scales”?) of the two axes are not the same. However, I do not understand how this “incomparability of axes” invalidates Cantor’s diagonal argument that the set of real numbers
is not countable.
49. #49 Ketil Tveiten October 28, 2009
@27 “… Also, I’m not arguing that any sets have the same or different cardinality.
[one sentence passes]
Also, [0,0.5] does not have the same cardinality as [0,1.0] when the first set is a subset of the other. …”
Now, obviously you don’t see how you’re wrong about the mathematics, but do you at least see how you contradict yourself?
Anyway, I dare say this thread has been a wonderful example and confirmation of everything Mark wrote in his post. Good job, Vorlath!
50. #50 pjb October 28, 2009
If you can’t map by digit between base 2 and base 3, why should we believe it between base 2 (X axis) and an enumeration (akin to base 1 if there were such a thing) (Y axis)?
Each row in the grid is an infinite bit sequence in a countably infinite set of bit sequences. The bits in each sequence are enumerated just as the sequences in the set are (if stacked to make a
grid, your X axis is enumerated just as your Y axis is). The individual columns in your grid are irrelevant to the point. There is no “map by digit between base 2 and base 1.” Any “base” in
itself is simply a map between a sequence and a number. The diagonal counterexample is simply a demonstration that there will necessarily be sequences (and therefore real numbers) excluded from
the set even though it is infinite.
51. #51 Vorlath October 28, 2009
@Blaise: “This contradicts the premise that f was a surjection, so no such surjection can exist.”
You’re talking about the power set. I can tackle that at a later time if you wish. But it comes down to having a dependent set. Apply a new transformation (ie. make the sets independent) and the
contradiction disappears.
@Pelli: Your statement is only true with dependent sets. When you use the stated relationship, you are modifying the properties of the original set to create a new set. So it’s expected that you
will get a “contradiction” between the two sets. Remove the relationship between elements and the contradiction disappears.
@Radoslav: The different scales on a finite level is what allows the new number to be created. However, this property disappears on infinite sets rendering the diagonal impossible to construct.
It’s like setting up a vertical axis on base 2 and a horizontal axis on base 3. I can list all the base 2 elements, create a diagonal and set all digits to 3. No matter how many digits you take,
this diagonal will always be different than what is found in the list.
Some people think this argument is flawed because only base 2 digits are used in Cantor’s argument, but this is not so. He uses the row itself as a digit much like cutting notches on a stick. The
difference in bases is a finite property that occurs digit-wise, but does not hold true when you take the entire sets into consideration independently of the base (element by element).
@Ketil:
“Also, I’m not arguing that any sets have the same or different cardinality.”
“Also, [0,0.5] does not have the same cardinality as [0,1.0] when the first set is a subset of the other. …”
“Now, obviously you don’t see how you’re wrong about the mathematics, but do you at least see how you contradict yourself?”
I should have been clearer and you’re right that it is a contradiction as I have written it. Sorry about that. My first statement was in relation to reals and naturals. I don’t want to debate
here whether or not they have the same cardinality. My second statement was not related to the first. It relates to what Cantor is doing incorrectly.
“There is no ‘map by digit between base 2 and base 1.’”
There is. You will have log(x) base 2 digits for x base 1 rows. That’s a relationship that happens at the finite digit level and is required to build the diagonal, but is not valid when you take
the whole set into consideration (fallacy of composition). IOW, these are two different mappings, and that is where the bogus contradiction stems from.
@Everyone: What Cantor has proven is that B = [0,0.5] has a different cardinality than A = [0,1.0] ONLY while set B is based on A. In other words, the elements before 0.5 are identical in both
sets. They are mapped one to one. Doesn’t matter if you can map them differently. That’s not the issue here. Cantor is using a DEPENDENT mapping. As long as that dependent mapping exists, the two
sets have different cardinality until the two sets are made independent (where one would apply a transformation, scaling or whatever else).
In Cantor’s argument, the finite digits are what creates the dependency between the two axes. Reals don’t even enter the scenario.
Also, I’m willing to be wrong, but I haven’t seen anything that contradicts my argument. In fact, everyone agrees with what I’m saying aside from the implications. If I can be shown how I’m
seeing the implication wrongly, then I’ll gladly admit it.
52. #52 dean October 28, 2009
“Also, I’m willing to be wrong”
That’s great – but you have a long way to go before what you say can be considered close enough to be wrong – it’s “not even wrong” now.
Cantor’s argument basically says this: no matter how careful you are when you arrange all the numbers in [0,1] into a countable list, you fail, because a cleverly simple (perhaps that’s the
part that confuses you) argument shows that you missed at least one. No different bases, no “dependent” and “independent” sets.
Why don’t you spend your time trying to square the circle?
53. #53 Mark C. Chu-Carroll October 28, 2009
You keep harping on “dependent” sets and such. But set theory has no notion of dependent sets the way you’re using the term; and the implications that you’re drawing from the “dependent set”
notion are just plain wrong.
The set [0..0.5] has the same size as the set [0..1]. Both sets already exist, by definition. You’re not creating the set [0..1] by pointing out that there’s a one-to-one relationship between
[0..0.5] and [0..1] – you’re just defining a relationship. No matter how you chose to define a relation, it doesn’t change the set that it maps to. Whether you map [0..1] to [0..0.5] with the
relation { (x, x/3) }, or [0..2] to [0..0.5] via the relation { (x, x/4) }, the set [0..0.5] remains exactly the same, with the same cardinality, the same members, etc.
You’re relying on the idea that relations construct sets, which is wrong. You’re using that error to create a weird version of dependent sets which is worse than wrong – it’s just completely
invalid. According to your theory, you can’t define the cardinality of any infinite set, because it varies depending on which relation you’re using on that set. In fact, your notion means that a
given set doesn’t even have identity operations unless you know the relation that defined it – and two sets with the same values are not if they’re defined using different relations.
That’s worse than wrong. That makes the entirety of set theory pretty much entirely useless.
54. #54 Davis October 28, 2009
Vorlath, it might help to provide some precise definitions of the terms you’re using, because certain of the words fundamental to your argument are either not standard mathematical words, or
seemingly not used in the standard way. Here’s a short list to start with:
□ dependent mapping;
□ independent (and dependent) sets;
□ cardinality.
Without precise definitions of your terms, you’ll end up just talking past everyone.
55. #55 Pelli October 28, 2009
@51 “@Pelli: Your statement is only true with dependent sets. When you use the stated relationship, you are modifying the properties of the original set to create a new set. So it’s expected that
you will get a “contradiction” between the two sets. Remove the relationship between elements and the contradiction disappears.”
I am proving no bijection exists by contradiction: Suppose there is one. Then I can exhibit an element not included in your bijection, so it wasn’t a bijection after all. Hence there cannot be a
bijection. Which exact sentence here is wrong?
It is true that my exhibited element will depend on what supposed bijection you are giving me, but that isn’t a problem at all. I don’t really see what you mean by “dependence” here. Are you
getting confused by the fact that the bijection is “specified”? Well, that’s how proof by contradiction works.
56. #56 Vorlath October 28, 2009
By dependent set, I mean using a set that exists based on a relationship to another set. A specific mapping that not only determines the “transformation” or criteria, but also how many elements
are found in the other set. (Exact one to one mapping does not count as it changes nothing).
Independent sets are sets that are free to be mapped to each other in any way you like, especially allowing someone to map them one to one. If you see things like “Set K only contains n where n
is not in f(n)”, that is NOT an independent set as set K cannot exist independent of f(n) as long as the quoted relationship exists. That set K could theoretically exist on its own is irrelevant
as that is not the issue. The issue is removing any relationships so as to enable one to one mapping.
Cardinality is if you can map all elements of one set to another and only applies to independent sets because otherwise, you would already have a pre-imposed mapping and are not free to map them
one to one.
But set theory has no notion of dependent sets the way you’re using the term
Of course it does. Your power set is constructed from another set, is it not? It has a mapping, right? Try removing this initial relationship and remap them.
The set [0..0.5] has the same size as the set [0..1].
I agree. This is my point. It’s been remapped from its initial relationship. The original mapping has thus been discarded (or modified). Cantor does not discard the original mapping, but imposes
one that is not one to one.
Both sets already exist, by definition.
That’s irrelevant because Cantor isn’t using it that way. I know you can do this or that. But Cantor is doing ONE very specific thing. He is CREATING a new set based on another. Or if you prefer
to treat all sets as pre-existing, then he is using ONE very specific mapping that is intentionally not one to one.
So he is doing the equivalent of mapping [0..0.5] of subset B to [0..0.5] of set A. Doesn’t matter that other mappings exists. Big deal. I know that. Everyone here knows that. In fact, this is my
argument. Without this particular mapping, Cantor’s argument falls apart.
According to your theory, you can’t define the cardinality of any infinite set, because it varies depending on which relation you’re using on that set.
Not my theory, Cantor’s. What you said is exactly the reasoning Cantor uses to say that the cardinalities are different. He uses one very specific mapping to show how cardinalities are different.
I agree with you that the “theory” is nonsense.
In fact, your notion means that a given set doesn’t even have identity operations unless you know the relation that defined it – and two sets with the same values are not if they’re defined
using different relations.
Two sets with the same values are not what? Identical? They can be, even if defined via different relations. That’s not the issue though. It’s the imposing of a relation that is not one to one
and then asking people to show a counter-example while retaining this skewed mapping. It’s astonishing how many people ask for this bogus counter-example.
I am proving no bijection exists by contradiction
I know, but there is no contradiction. It only looks like one. The kth element uses base 2 digits while f(k) uses what is essentially base 1 like notches on a stick. Comparing different bases
SHOULD result in one axis being able to represent more elements than the other when comparing finite digits. But the infinite case does not hold without using the fallacy of composition.
As I’ve said plenty of times, comparing bases via digits does not mean that cardinality of base 3 is greater than the cardinality of base 2.
If your “proof” held, then you would also prove that base 3 has higher cardinality than base 2 because this example is equivalent to your setup.
57. #57 Blaise Pascal October 28, 2009
By dependent set, I mean using a set that exists based on a relationship to another set. A specific mapping that not only determines the “transformation” or criteria, but also how many
elements are found in the other set. (Exact one to one mapping does not count as it changes nothing).
Independent sets are sets that are free to be mapped to each other in any way you like, especially allowing someone to map them one to one. If you see things like “Set K only contains n where
n is not in f(n)”, that is NOT an independent set as set K cannot exist independent of f(n) as long as the quoted relationship exists. That set K could theoretically exist on its own is
irrelevant as that is not the issue. The issue is removing any relationships so as to enable one to one mapping.
If I understand you correctly, there are three bits to dependence of sets, a set A, a set B, and a surjection f from A to B (such that y in B implies an x in A such that f(x) = y), so that B
“exists”/depends on A and f for its existence.
In that case, Cantor is saying that N and R are fundamentally independent, because no such surjection f can exist between N and R.
You are right in that the proof relies on a formulation similar to “Set K contains n if and only if n is not in f(n)”. You may not find that OK. But another way of looking at it is that Cantor is
trying to create a function in a much bigger domain. He is saying “There exists a function K from R^N to R such that K(f) is not in f(N)”, (where “R^N” is the set of all functions from N to R)
and then attempting to prove this by showing how to construct such a function. He’s not claiming that K is an injection or surjection or even unique. Just that at least one function K exists with
that property. Clearly, for a given f in R^N, K(f) depends on f, but that’s sort of standard with any function.
A bijection F between R and N implies that no such function K exists, for if it did then K(F) would have to be in F(N), since F(N) is R, and K(F) is in R. so the proof of the existence of K is
proof that no such bijection exists.
Now, do you disagree with my reasoning so far (specifically, do you agree that if such a function K exists, then no bijection from between R and N exists, even if you believe that Cantor failed
to properly demonstrate said function K)?
58. #58 Observer October 28, 2009
It is sad to see so many fine mathematicians arguing with a fool. He knows a lot of our words, true. But he uses them in unfamiliar ways. Like an aphasic.
Vorlath’s incorrectness does not harm us. Let us leave him be, with his happy little disproof of Cantor’s diagonalization argument. He will never publish it. The foundations of our mathematics
are secure.
It does a disservice to our dignity to bicker with a child as if children ourselves.
59. #59 Douglas McClean October 28, 2009
@37, who said:
In fact, any subset of the real numbers can be directly mapped onto any other subset of the reals.
This can’t be true without another condition, can it? How can {26, Pi} be mapped onto [0,1]? For that matter, how can {26} be mapped onto {26, Pi}? I assume the missing condition is either that
the subsets not have zero measure, or that the subsets have non-zero measure (which may not be equivalent conditions in the presence of AC, if I understand correctly)?
60. #60 Jonathan Vos Post October 28, 2009
There’s no valid Math in this region of lesser crackpotia.
This is, once again, a problem in Psychology. Or pair of linked problems:
(1) What is it about Euclid, Darwin, Cantor, and Einstein, that they have in common that attracts armies of the ignorant, over-confident, and deluded to attempt to kill them as father figures, or
hated authority? There are so many other great thinkers who attract no loons, by comparison. Who’s trying to debunk Archimedes, or Fibonacci, or Euler?
(2) What drives the twisted minds and shattered psyches of those who, respectively, try to Square the Circle, Deny Evolution by Natural Selection, Squash Infinities into a finite box, or make
Minkowski Space Euclidean?
61. #61 pjb October 29, 2009
You will have log(x) base 2 digits for x base 1 rows.
This makes no sense. Each row is an infinite digit sequence. Each row has countably infinitely many digits. The digits in each sequence (i.e. the columns) are enumerable. The sequences (i.e. the
rows) in a countably infinite set are enumerable by definition. Neither of these enumerations is “base 1,” whatever that means. The base used in the enumeration is completely irrelevant. The base
of your digit sequences is completely irrelevant. In fact, you don’t even have to think of them as numbers. By the diagonal argument, a countable set of these infinite digit sequences will
necessarily exclude at least one sequence, so the set of all infinite digit sequences (of any base) is larger than any countable subset. Notice that we haven’t even described any numbers yet.
NOW, if you wish, and if you are careful to exclude redundant digit expansions, you can create a one-to-one mapping between a set T of infinite digit sequences and the real numbers. Taking care
again with the redundant digit expansions, you can still use the diagonal argument to show that any countable subset of T will necessarily exclude at least one sequence also in T, and so the set T
is “larger” than any countable subset. Since T is in bijection with the reals, and each countable subset is, by the definition of countable, in bijection with the natural numbers, the reals are
then a “larger” set than the naturals.
62. #62 MPL October 29, 2009
Disproving Cantor is easy: just assume the axiom of choice doesn’t hold. Or, you can just throw out the axiom of infinity and make all sets finite.
You might do some damage to the rest of mathematics, but no big deal, right?
63. #63 MPL October 29, 2009
Oh, it’s been a while. I forgot that Cantor–Bernstein–Schroeder shows you don’t need choice to prove the reals are larger than the naturals. Still, take enough axioms away, and we can make it not
work. Clearly, it’s the only logical thing to do!
64. #64 Pelli October 29, 2009
@56: “I know, but there is no contradiction. It only looks like one. The kth element uses base 2 digits while f(k) uses what is essentially base 1 like notches on a stick. Comparing different
bases SHOULD result in one axis being able to represent more elements than the other when comparing finite digits. But the infinite case does not hold without using the fallacy of composition.”
Are you thinking f(k) is a number? f(k) is exactly the kth element, which is a string. There is no base. I’m proving things about strings. You can think of them as having symbols A and B instead
of 0 and 1. Whenever you give me an infinite enumerated list of strings, I can point to a string that is not there. It has got nothing to do with different representations of strings, whatever
that is. I’m not even doing anything with the sets, you’re the one doing the work when you give me your bijection. I’m just pointing at an element that’s not included in the image of your map.
Whether a string is an element of a set or not does not depend on where the set comes from.
1. Do you agree that a bijection between N and the set of infinite A,B-strings is an enumerated list of all the A,B-strings?
2. Do you agree that if two strings differ at some position, then they are different?
3. Do you agree if an element is different from all elements on the list, then it is not in the list?
4. Do you agree that given a list, I can construct an element different from every element on the list and hence not in the list?
5. Do you agree that if any list is missing some element, then there is no complete list?
Please be very specific about what you agree and do not agree with, otherwise we’ll just be talking past each other.
Here’s an example of how my construction works:
If the first ten elements of your list are
f(1) = A…
f(2) = BB…
f(3) = BBB…
f(4) = BABA…
f(5) = BAABA…
f(6) = BBBAAB…
f(7) = BBBBAAA…
f(8) = AABBABAA…
f(9) = ABABBABBB…
f(10) = ABBABABBBA…
then the first 10 letters in my counterstring will be:
BAABBABBAB… (at each position k I take the opposite of the kth letter of f(k), so my string differs from every listed string).
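Assuming the standard flip rule (the counterstring’s kth letter is the opposite of the kth letter of f(k)), the construction is easy to mechanize. A minimal Python sketch of mine, using only the letters shown above:

```python
# Diagonal counterstring for the sample list (a sketch, assuming the standard
# flip rule: position k of the counterstring is the opposite of the k-th
# letter of the k-th string, indexing from zero).
listing = [
    "A", "BB", "BBB", "BABA", "BAABA", "BBBAAB", "BBBBAAA",
    "AABBABAA", "ABABBABBB", "ABBABABBBA",   # shown prefixes of f(1)..f(10)
]

flip = {"A": "B", "B": "A"}
counter = "".join(flip[s[k]] for k, s in enumerate(listing))
print(counter)  # -> BAABBABBAB

# By construction it differs from every listed string at the diagonal position,
# so it cannot equal any f(k).
for k, s in enumerate(listing):
    assert counter[k] != s[k]
```

Note that each shown prefix of f(k) is at least k letters long, which is all the diagonal needs.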
65. #65 Rilke's Granddaughter October 29, 2009
Pelli, I’m not sure I understand how we know 4 is true. Could you explain?
66. #66 Vorlath October 29, 2009
If I understand you correctly, there are three bits to dependence of sets, a set A, a set B, and a surjection f from A to B (such that y in B implies an x in A such that f(x) = y), so that B
“exists”/depends on A and f for its existence.
In that case, Cantor is saying that N and R are fundamentally independent, because no such surjection f can exist between N and R.
I agree this is what he’s trying to show. But he fails.
He is saying “There exists a function K from R^N to R such that K(f) is not in f(N)”, (where “R^N” is the set of all functions from N to R) and then attempting to prove this by showing how to
construct such a function. He’s not claiming that K is an injection or surjection or even unique. Just that at least one function K exists with that property. Clearly, for a given f in R^N,
K(f) depends on f, but that’s sort of standard with any function.
A bijection F between R and N implies that no such function K exists, for if it did then K(F) would have to be in F(N), since F(N) is R, and K(F) is in R. so the proof of the existence of K
is proof that no such bijection exists.
I’m not sure where to begin. You’re using the power set, but it’s the same issue.
So let me extract a few sections again and it might become clearer. What you’re doing is STATING that f is a bijection, but then you use f by setting it up as a mapping that is not one to one.
When you use a mapping that is not one to one, elements are left out and it’s expected that you can find one of those left out elements. Hence, no contradiction.
such that K(f) is not in f(N)
So K(f) has two possibilities, correct? Either it IS in f(N) or it is NOT in f(N). That’s base 2.
But then you go through all N (or the other set, doesn’t matter since it’s assumed that a bijection exists, this is why you use f as an index I presume), correct? Since you can’t leave any of
those out, it’s base 1.
So f(N) vs. N sets up a mapping between base 2 and base 1. This is not a mapping that is one to one. The result you got is an expected one, not a contradiction.
@64 Pelli:
Are you thinking f(k) is a number? f(k) is exactly the kth element, which is a string. There is no basis. I’m proving things about strings.
You’re imposing a relationship as to how many f(k)’s there are relative to k through the use of finite digits. That means you’re using different bases. k indexes into a digit that uses base 2
while f(k) indexes into rows which is an enumeration (base 1). You can’t just handwave this away. Remove the relationship between finite digits in your “proof”, or use the same base.
Whenever you give me an infinite enumerated list of strings, I can point to a string that is not there.
Only with finite digits. Fallacy of composition when using infinite strings.
What you describe is an expected result for the specific mapping you are using which is not one to one. Use a one to one mapping like using the same base and your argument falls apart. You even
state that you’re using different bases. An enumeration is base 1 since there is only ONE option to include each and every element in that set. But each digit has TWO options for the other set.
It’s well known that you can’t compare bases and say that base X has higher cardinality than base Y.
The flaw in all these “proofs” is the same in that it’s mapping digits of different bases (or the equivalent version of it) one to one instead of mapping elements one to one. If you map digits,
you must use the same base.
1. Do you agree that a bijection between N and the set of infinite A,B-strings is an enumerated list of all the A,B-strings?
No. What you describe is a mapping that is not one to one and cannot ever be a bijection. It’s stated right in your definition. A,B strings use base 2 since there are two choices. And an
enumeration is base 1 by definition. If you used base 2 vs. base 3, it would be obvious, but somehow, just because you use an enumeration, you think the base just disappears. Each axis or each
set must be represented using a base. State what they are. In effect, everyone here is trying to convince me that base 2 has higher cardinality than base 1 (or the equivalent of base 3 having
higher cardinality than base 2). So it’s not even MY argument. There is a contradiction between the conclusion of Cantor’s argument and what is known about infinite sets when comparing different
bases. One of them has to be wrong. Pick one.
2. Do you agree that if two strings differ at some position, then they are different?
3. Do you agree if an element is different from all elements on the list, then it is not in the list?
4. Do you agree that given a list, I can construct an element different from every element on the list and hence not in the list?
Not even close. The process you use to create this new element only works with finite digits. It does not extend to the case of infinite digits unless you use the fallacy of composition. Besides,
it doesn’t even work in the finite case if you use the same base demonstrating that you’re using a mapping that is not one to one.
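(For what it’s worth, the finite case can be checked mechanically, in the same base as the list itself: for any n strings of length n over the alphabet {A, B}, flipping the diagonal yields a string that differs from the kth string at position k, so it cannot be on the list. A small Python sketch, with an arbitrary example list:)

```python
def diagonal_flip(strings):
    # Flip the k-th letter of the k-th string: the result differs from
    # string k at position k, so it cannot equal any string on the list.
    flip = {'A': 'B', 'B': 'A'}
    return ''.join(flip[s[k]] for k, s in enumerate(strings))

strings = ["AABA", "BBBB", "ABAB", "BAAA"]  # any 4 strings of length 4 work
d = diagonal_flip(strings)                  # -> "BABB", not on the list
```

The same construction works for any list of n strings of length n, not just this example.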
5. Do you agree that if any list is missing some element, then there is no complete list?
The digits in each sequence (i.e. the columns) are enumerable.
The digits are base 2.
The sequences (i.e. the rows) in a countably infinite set are enumerable by definition.
There’s your base 1.
So you’re mapping base 2 digits to base 1 rows. Fascinating that if it were base 3 compared to base 2, there would be no issue. But base 1 doesn’t need any specific digit for you to see. It can
use anything at all and it uses the rows themselves as the vertical digits in this case (like notches).
So when you construct your diagonal, the row index can only use base 1 digits (notches). However, the string can use base 2 digits. If you map base 1 digits to base 2 digits in this manner, then
base 2 will always be able to represent more elements than base 1 when using the same amount of digits. However, this does not hold up when using infinite sets. Otherwise, Cantor’s argument would
mean that base 2 has higher cardinality than base 1 (or any base higher than the other would have higher cardinality).
67. #67 Vorlath October 29, 2009
Correction: “f(N) vs. N” should be “element of f(N) vs. enumeration of N (or f(N))”
68. #68 Pelli October 29, 2009
Vorlath, thank you for your cooperation.
I think you’ve misunderstood the common definitions of what a set and a bijection are.
Also, reading through what you said to Blaise, I think you’ve misunderstood how the proof works. (“So let me extract a few sections again and it might become clearer. What you’re doing is STATING
that f is a bijection, but then you use f by setting it up as a mapping that is not one to one.”)
What you say there is correct. We prove there is no bijection, by SUPPOSING (stating) there is one. Then we go on to show that it is not one-to-one. This is a contradiction, so the assumption
that f existed was false. Note that our process of showing the failure DOES NOT ALTER f. We’re NOT taking a bijection f, changing it to make it fail, and then say it failed so it’s a
contradiction – we ARE taking a bijection f, showing that it fails, and say that is a contradiction.
I still don’t understand what you mean by basis – do you mean that “base k” is k^N (infinite strings of k symbols)? In that case what we’re trying to convince you of is that “base 1″ = N is smaller than “base 2″ = 2^N. However, this is not equivalent to “base 2″ being smaller than “base 3″, which is false. In fact k^N is the same size for all finite k > 1.
69. #69 James Sweet October 29, 2009
An infinitely-long conference is going to be held on disproofs of Cantor’s theorem. The conference begins Friday at noon, and each of the infinite number of lectures lasts one hour.
Vorlath approaches the conference organizer and asks to present a lecture. The organizer says, “I’m sorry, but the entire schedule is full.” Vorlath is about to leave when the organizer
says, “Wait, I think there’s something we can do.”
He reschedules the Friday noon talk to 1PM, the 1PM talk is rescheduled for 2PM, the 2PM talk is rescheduled for 3PM, and so on down the line, with Vorlath scheduled to speak at noon. Thus
Vorlath’s lecture can be accommodated in this infinitely-long anti-Cantor conference, despite the entire agenda being full already.
“That’s still no good,” says Vorlath. “You see, I have an infinite number of disproofs of Cantor, and I will need one hour to present each proof.” Vorlath regretfully turns to go, but the
conference organizer stops him again. “I still think we can fit you in,” he says.
The talk that had been rescheduled for 1PM is now rescheduled for 2PM. But the talk that was scheduled for 2PM is now rescheduled until 4PM. The talk at 3PM is rescheduled to 6PM, 4PM to 8PM, 5PM
to 10PM, 6PM rescheduled to midnight, 7PM rescheduled until 2AM the following morning, and so on down the line.
Thus, even though Vorlath has an infinite number of disproofs to present, and the conference schedule was already full to begin with, he can still present each of his infinitely-many lectures.
Isn’t infinity fun?
70. #70 AnyEdge October 29, 2009
The value here is in recognizing mania. Vorlath has latched on to a meaningless issue: an enumerated list is different from a number. This is true: an enumerated list is, in fact, different from a number. All the nonsense about bases is obfuscatory for no important reason.
None of it suggests, in any way, that we cannot make an enumerated list of numbers. Or that a diagonalization argument on that list does not produce an element previously not on that list.
71. #71 James Sweet October 29, 2009
The poor organizer of the infinitely-long anti-Cantor conference has another problem: After an infinite number of phone calls, he has discovered that ALL of the infinite number of conference
participants each have an infinite number of proofs to present, each of which requires a one-hour lecture. The conference is postponed indefinitely while he tries to figure this out.
Suddenly, he has a eureka moment: Each of the conference participants is assigned a number, beginning with 1. At noon on the first day of the conference, presenter #1 may give his 1st lecture. At
1PM, presenter #1 gives his 2nd lecture. At 2PM, presenter #2 gives his 1st lecture. At 3PM, presenter #3 gives his 1st lecture. At 4PM, presenter #2 gives his 2nd lecture. At 5PM, presenter #1
gives his 3rd lecture. At 6PM, presenter #4 gives his 1st lecture. At 7PM, presenter #3 gives his 2nd lecture. At 8PM, presenter #2 gives his 3rd lecture. At 9PM, presenter #1 gives his 4th
lecture. And so on in this fashion for ever and ever, amen.
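One way to formalize this scheduling (a sketch of the idea, not necessarily the exact hour-by-hour order in the story) is to walk the (presenter, lecture) pairs along anti-diagonals of constant presenter + lecture, so that every lecture of every presenter lands at some finite hour:

```python
from itertools import count, islice

def schedule():
    # Enumerate every (presenter, lecture) pair exactly once by walking
    # anti-diagonals (constant presenter + lecture number).
    for total in count(2):
        for presenter in range(1, total):
            yield presenter, total - presenter  # (presenter, lecture number)

hours = list(islice(schedule(), 10))
# hours[h] says who speaks at hour h and which of their lectures it is;
# no (presenter, lecture) pair is ever skipped.
```

This is the usual countability-of-N×N trick: each diagonal is finite, so any given pair is reached after finitely many hours.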
72. #72 James Sweet October 29, 2009
Subsequently, the organizer realized that this wouldn’t work after all. All of his careful planning depended on one thing: being able to map each of the conference participants to a pre-assigned
natural number. However, he discovered much to his chagrin that all of the participants presenting disproofs of Cantor were completely irrational. And everyone knows you can’t denumerate
irrationals with the set of natural numbers….
Thanks, I’ll be here all night.
73. #73 Clayton October 29, 2009
I think Vorlath’s trouble is even simpler than has been hypothesized: he doesn’t understand proof by contradiction. Cantor’s argument relies on assuming that something is a bijection and then
discovering that it is not. (as pointed out by Pelli @68) This is, of course, a valid proof technique. I think Vorlath believes that our discovery of a contradiction invalidates the *entire*
proof, not just the later assumption.
If that’s the case, Vorlath, I sympathize: It’s a pretty weird concept when first encountered. You should consult other examples of proof by contradiction to see how it’s done.
74. #74 Shawn Smith October 29, 2009
which, if any, of the following propositions do you consider to be true:
1. the cardinality of the set of natural numbers is greater than the cardinality of the set of reals.
2. the cardinality of the set of natural numbers is less than the cardinality of the set of reals.
3. the cardinality of the set of natural numbers is equal to the cardinality of the set of reals.
If it’s anything but 2, please describe why.
As another exercise, please answer the same question, except that instead of “reals,” substitute “integers,” and describe any answer other than 3. Are the set of integers and the set of natural
numbers independent? If they are independent, then what if I define the integers as {floor(n/2)*((1-2)^n) | n is a natural number}? Are the integers suddenly transformed into a dependent set, or
have I just made an invalid dependency because -1 is not a natural number?
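The set-builder above is just a compact way of writing f(n) = floor(n/2)·(1−2)^n, and (1−2)^n is simply (−1)^n. A quick Python sketch (assuming the naturals start at 1) shows the map walks through every integer exactly once:

```python
def to_integer(n):
    # floor(n/2) * (-1)**n; note (1 - 2)**n == (-1)**n.
    return (n // 2) * (-1) ** n

values = [to_integer(n) for n in range(1, 10)]
# -> [0, 1, -1, 2, -2, 3, -3, 4, -4]: each integer appears exactly once
```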
75. #75 Jonathan Vos Post October 29, 2009
I still think that this is really about Psychology, rather than Math. I’ve spent HUNDREDS of hours arguing about people with “idées fixes” about Euclid, Darwin, Cantor, and Einstein. Most of the
time, evidence does not change their mind. Logic does not change their mind. Once in a LONG while, I do get to see the light go on over their head. I’ve dropped out of some email groups I’d been
in for years, because I just burned out on suffering fools gladly. My bad.
76. #76 Ketil Tveiten October 29, 2009
I would like to quote the so-called ‘Myers’ Axiom’: ‘You can’t use reason to talk someone out of a position they didn’t use reason to arrive at.’
But anyway, allow me to attempt a derailing of the discussion at hand by proposing the following topic:
I have noticed, in my various encounters with persons of the crankish persuasion, that there appear to exist several levels of crank, each a little more educated than the last. For instance, one
has the crank who does not believe in infinite sets, whose counterargument to any explanation is ‘[a given infinitary construction] would never finish.’ Then, further up along the ladder of
education, one has cranks such as our present Mr. Vorlath (begging your pardon if you are a ‘Ms’), who (apparently) does not accept the existence of different ‘sizes’ of infinities (which we
mathematicians know as ‘cardinalities’). And finally, at the boundary of academically acceptable mathematical activity, one has the fellow who, while possessing a doctoral degree in algebra
(e.g.) occasionally shows up at the algebra seminar and asks the lecturer really braindead questions (about the group (Z/2)^2; ‘what scheme structure do you apply to this?’).
Have you, ladies & gentlemen, any thoughts on the topic?, do you know of any results or research on the classification of cranks?, and finally, do you have any amusing stories to share about
encounters with such personalities?
77. #77 william e emba October 29, 2009
There are various flavors of constructivists who are definitely not cranks in terms of mathematical ability. Some of them get real crankish about mathematics that does not follow their rules.
Part of it is their views were extremely unfashionable for a time, as in, tenure and grants and mere publishability were made unfairly difficult for them. But part of it is they really do get
crankish about things outside their philosophy.
As for math department cranks, UC Berkeley once admitted a seriously incompetent grad student who Did Not Get Anything. His presence in any class or lecture was usually a disaster, as he couldn’t
help but ask psychobabble crackpot questions. The first year Prelim Exam study group that year deliberately held secret meetings just to avoid him.
But one question stands out, genius of a different order. Robert Griess had recently constructed the Monster, and when visiting, gave a general mathematics audience colloquium talk. Of course
everyone went, what an event! Along the way, Griess mentioned various aspects of his construction, including how the known theory of the Monster had implied that if there was a 196883 dimensional
representation, he could probably reduce it down to six parameters that had to be very carefully chosen. Which he did, and the rest was history.
Towards the end, he discussed the relationship of the Monster to the other sporadic simple groups, mentioned that the 26 sporadics split into 20 that were subquotients of the Monster, and 6
unrelated groups, which he was calling the “pariahs”. At the end, during the question and answer session, departmental crackpot got called upon (no one thought to run interference for Griess),
and asked the immortal question: “is there a relationship between the six parameters and the six pariahs?” Despite the fact that very few people really followed Griess, everyone cracked up
laughing, and Griess certainly was flabbergasted.
78. #78 Vorlath October 29, 2009
We prove there is no bijection, by SUPPOSING (stating) there is one. Then we go on to show that it is not one-to-one.
No. You think that’s what you’re doing but you’re not. You are supposing that there is a bijection, but then DEFINE that bijection to not be a one to one mapping BEFORE the conclusion. Before the
“contradiction”. Then you use this mapping that is NOT one to one to come to the conclusion that this mapping is not one to one. Circular logic.
You may also look at it another way. You assume there is a bijection and then assume a particular mapping for that bijection. At the end, you find out your mapping wasn’t actually one to one.
However, it’s trivially obvious that the particular assumed, chosen and arbitrary mapping is not one to one. There’s no need to work through anything.
It doesn’t matter if the mapping you chose isn’t one to one. You have to prove that there is NO one to one mapping at all. Unfortunately, you believe this is what you have done.
It’s very difficult to show the flaw in circular reasoning because it goes round and round, especially when one doesn’t see that the bijection is defined to not be one to one. Then you come back
and say “it’s assumed to be one to one”. Then I say “No, when you use the mapping, it is one particular mapping and NOT a bijection at all”. Then you say “This is what we show through the
conclusion that it isn’t a one to one mapping”. Then I go, “no, BEFORE the conclusion. You actually define your own custom mapping. It may or may not end up being a bijection though it’s
trivially obvious it doesn’t map one to one.” Then you say… on and on.
How do I break the cycle? And everybody reading this has already heard your side. They’ve seen it and are accustomed to it. But no one has yet seen what I’m describing. So I look crazy! That’s
fine. I’ll keep trying.
Note that our process of showing the failure DOES NOT ALTER f.
We’re NOT taking a bijection f, changing it to make it fail, and then say it failed so it’s a contradiction
Actually, this is exactly what you are doing. You’re assuming a bijection and then using it in a way that is not a bijection BEFORE the conclusion.
You’re essentially saying the following.
1. Assume X is 4.
2. Define X as 5 in a second assumption.
Then you show the contradiction that 4 != 5 and thus X cannot be 4.
That’s what you’ve done. It’s circular logic.
we ARE taking a bijection f, showing that it fails, and say that is a contradiction.
No, you make it fail BEFORE the contradiction. You use f as a mapping that is not one to one when you construct (or define) K.
Also, the construction of K is using the fallacy of composition when using infinite sets.
do you mean that “base k” is k^N (infinite strings of k symbols)?
I don’t think so. Base K is any string (infinite or otherwise) where the digits can use K symbols.
However, this is not equivalent to “base 2″ being smaller than “base 3″ which is false.
This is what you are “proving” though. The power set “bijection” mapping you’re using is an equivalent setup. That’s why either |base 3| > |base 2| or your proof is wrong. It’s one or the other.
an enumerated list is different from a number
Not if you use the list as notches that can be counted.
As an aside, please explain how you would select a particular row without using an index and where you cannot count anything (as that would create number as well).
Cantor’s argument relies on assuming that something is a bijection and then discovering that it is not.
There is no discovery. The bijection “proof” has two assumptions.
1. Assume there is a bijection.
2. Define the bijection to be a mapping that is not one to one.
Then it goes on to show what appears to be a “contradiction”.
There is no discovery. Only circular reasoning.
I think Vorlath believes that our discovery of a contradiction invalidates the *entire* proof, not just the later assumption.
I’m saying the contradiction is ASSUMED at the start by the way K is defined or the way the grid is defined depending on the version of the “proof” used.
On this blog, I’m operating on the assumption that |R| > |N| for the sake of argument.
Also, my answer isn’t clear cut, but it’s definitely not 1.
As to the other two, it depends on how the elements are defined. If you only use independent elements in the set N, then |R| > |N|, but if you handle more esoteric definitions of naturals like X
is the center of the set N, then |R| = |N|. When naturals are defined this way, they are actually reals. For example, all reals can be represented by their relative position along the set for
[0,1.0). It’s simply a percentage. So each real is defined as a ratio between its position and the size of the set. If you define a natural as being located in the center of the set, then it too
is defined relative to the size of the set. The two sets are then one and the same. You don’t even need to map them. But if you don’t use or allow those kinds of numbers, then |R| > |N|.
If you allow esoteric naturals (not sure what they’d be called), then you can do stuff like:
R = {X >> (log2(|Z|)/2) | X ∈ Z}
(>> is a binary shift, but the base isn’t important).
And you have a one to one mapping between naturals and reals.
Are the set of integers and the set of natural numbers independent?
From each other? Sure.
If they are independent, then what if I define the integers as {floor(n/2)*((1-2)^n) | n is a natural number}? Are the integers suddenly transformed into a dependent set, or have I just made
an invalid dependency because -1 is not a natural number?
N is still independent, but your new set is dependent on N if you continue to use the mapping in the future. If you now throw away the definition, but keep the set, then it’s independent. You may
also make them independent if you’re allowed to replace or update the mapping.
I know what you’re saying, but that’s not it at all. I can’t get anyone to see what I’m describing. If they saw it, that’d be one thing. But so far, I can’t even be right or wrong. No one has yet
talked about what I’m talking about.
Mark is the one that came closest to seeing what I was talking about. But what normally happens in cases like this is that they explain it in a different way with the assumption that I don’t
understand. But there isn’t a single thing in this thread I haven’t seen and understood a million times over.
So what’s one to do but appear crazy until the day comes when someone takes a chance and looks at what I’m describing instead of assuming I don’t understand.
BTW, I’m hoping I’m wrong. Just for the record. But I haven’t seen anything yet that indicates I’m wrong.
79. #79 Ketil Tveiten October 29, 2009
@78: Allow me to quote myself at #49.
Moreover: “BTW, I’m hoping I’m wrong. Just for the record. But I haven’t seen anything yet that indicates I’m wrong.”
No, you’re not, and yes you have.
Your ‘hope’ that you are wrong is perhaps the most transparent rhetorical device since the last time a Republican spoke publicly. You do not ‘hope’ that you are wrong. You believe, with great
conviction, that you are right, and bolster yourself with great fortitude against arguments to the contrary, whether they be right or not.
You have been presented with multiple very good explanations as to why you are wrong, but you steadfastly refuse to accept them, for reasons which are entirely related to your lack of
understanding of set theory. You say ‘you haven’t seen them’, but you have, only you do not understand what it is you see.
80. #80 p October 29, 2009
There are people who try to disprove Cantor? The world never ceases to amaze me.
A couple years ago I had an amusing discussion with someone who tried to dissuade me from Cantor’s diagonal argument arguing that it was disproven by Marx’s philosophy. Yes, really.
81. #81 Pelli October 29, 2009
“You are supposing that there is a bijection, but then DEFINE that bijection to not be a one to one mapping BEFORE the conclusion.”
“You may also look at it another way. You assume there is a bijection and then assume a particular mapping for that bijection.”
“You’re assuming a bijection and then using it in a way that is not a bijection BEFORE the conclusion.”
No. When I say we are given a bijection, then the bijection has been defined. You give me an infinite sheet of paper saying “f(1) = …., f(2) = …., f(3) = …., etc”. That is specifying f uniquely
and at that point it is already decided whether f is a bijection (every element appears exactly once on your paper) or not (do you agree?). Then I will just say “hey, I’ve got an infinite slip of
paper where I’ve written an element that’s not on your paper”. This does not alter f, as you agree, and there is no ‘redefining’, ‘reassuming’ or ‘wrong using’ of f going on.
“No, when you use the mapping, it is one particular mapping and NOT a bijection at all”
This is the point. A bijection is a function. It is a specific function. If I show this specific function misses an element, then it is not a bijection. You seem to think that a bijection is some
magic entity that can change according to what I do.
“You’re essentially saying the following.
1. Assume X is 4.
2. Define X as 5 in a second assumption.
Then you show the contradiction that 4!=5 and thus X cannot be 4.”
No, it’s more like this:
Proof that no even odd number exists: Assume X is an even odd number. Since it is even, it is divisible by 2. Then it is not not divisible by 2, so it is not odd. Contradiction. There is no such X.
What you’re saying sounds to me like “You are assuming X is an even and odd number, and then you show it is not odd. That’s a circular argument.”
“Base K is any string (infinite or otherwise) where the digits can use K symbols.”
OK, that’s a clear enough definition for me.
Here’s the reason the proof cannot show that base 3 is bigger than base 2: Suppose we have f mapping from base 2 to base 3. We want to show it is not onto (some element in base 3 is missed). So
we try to do the usual thing, starting by putting down base 2 along the vertical axis. Unfortunately, since I have shown base 2 is larger than base 1, we cannot put them down one by one. We have
to draw a solid line (real axis), and use every point on it. Now our grid isn’t a lattice anymore, and there is no diagonal, so we can’t do the diagonalization.
Basically, when we compared base 1 to base 2, the key thing that made it work was that the “length” of a base 2 string “is” base 1. So every letter in my counterexample could correspond to
exactly one base 1 element, which in turn was supposed to correspond to a base 2 element. By choosing each letter to ensure it differs from that corresponding base 2 element, I made a string
different from every base 2 element that was in the correspondence.
The length of base 3 elements is also base 1, and I have shown that it is smaller than base 2. Hence there are more base 2 elements than you have letters in your base 3 string, so you can’t use
this idea to avoid them all.
If you really think base 3 > base 2, could you please write out a detailed proof?
82. #82 p October 29, 2009
Not quite on-topic, but here’s a nifty proof that the rationals have the same cardinality as the naturals:
I will prove that the set of (positive) rational numbers can be associated one-to-one with a subset of the natural numbers. Since it is easy to associate the natural numbers with a subset of the
rationals (the reciprocal function will do), the demonstration below completes the proof.
By definition, each positive rational number can be expressed in a unique manner as p/q, where p and q are natural numbers with no common factors. Consider the string of symbols composed of
(digits of p)/(digits of q), with a slash between the strings of digits. Now read that string (which is unique for every rational) as an integer in base 11, with the slash being the symbol for
10. There’s your integer associated with that rational number. QED.
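The encoding is easy to sketch in Python (the function name here is mine, not from the comment):

```python
from math import gcd

def rational_to_natural(p, q):
    # Read the string "p/q" as a base-11 numeral, with '/' standing for
    # the digit ten. Distinct reduced fractions give distinct strings,
    # hence distinct natural-number codes.
    assert gcd(p, q) == 1, "p/q must be in lowest terms"
    n = 0
    for ch in str(p) + "/" + str(q):
        n = n * 11 + (10 if ch == "/" else int(ch))
    return n

code = rational_to_natural(1, 2)   # "1/2" -> 1*11**2 + 10*11 + 2 = 233
```

Since the map is injective, the positive rationals sit inside the naturals, which (together with the easy reverse direction) gives the bijection.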
83. #83 daedalus2u October 29, 2009
Whenever someone says a concept is counterintuitive, what they are really saying is that their intuition is so lousy that it fails with this concept. That is why when my intuition doesn’t produce
the right answer I change my intuition so that it doesn’t fail the next time I need it.
This is how you make your intuition better. When you have an intuitive idea, you test the idea with an algorithm, and then if the algorithm (i.e. a proof) is correct, then you can change your
intuition to accommodate the correct idea.
I see intuition as a non-algorithmic method of arriving at an approximate answer to something. It might be correct; it might not be correct, but it is a lot faster than an algorithm. Of course
there are many ideas that cannot be tested with an algorithm.
84. #84 scineram October 29, 2009
Why assume a bijection at all? The diagonal argument shows directly that any function whatsoever fails to be surjective.
85. #85 jre October 29, 2009
Holy cow! Here I thought this thread was going to stay on the lighter side; maybe score this goofball using John Baez’ crackpot index or something. Instead, you’re all blinding me with your
formal logic and stuff. Too heavy for me, man.
86. #86 Vorlath October 29, 2009
No. When I say we are given a bijection, then the bijection has been defined
By using what could be called a sub-element of f(i), you are creating a new mapping that is not one to one. There is no reason to expect a mapping that is not one to one to be a bijection.
Producing a new element (or set) K should be expected. No contradiction.
Proof that no even odd number exists: Assume X is an even odd number. Since it is even, it is divisible by 2. Then it is not not divisible by 2, so it is not odd. Contradiction. There is no
such X.
This is not what you are doing with K. What you are doing is like trying to figure out if X is 4. So you assume X is 4. Then you define X as 5. By showing that 4 != 5, you’ve demonstrated a contradiction.
Do you at least agree that what I’ve just described cannot be used in a proof by contradiction? If so, then this is the very flaw I see in your proof.
Basically, when we compared base 1 to base 2, the key thing that made it work was that the “length” of a base 2 string “is” base 1.
I think you mean that you have an enumeration of digits (which can be used as base 1) and are thus mapping digits of both axis one to one (I’ve agreed to this from the beginning). But this is not
a requirement. It’s a personal choice and is a mapping that is not one to one on the elements.
Unfortunately, since I have shown base 2 is larger than base 1, we cannot put them down one by one.
How is base 2 larger than base 1? You mean one can represent more elements in base 2 with the same number of digits, sure. But it has fewer digits. You can still list them one by one. Why would this no longer be possible, exactly?
Do you agree that taking the diagonal of base 1 and altering it in some fashion will not give a new number? And that since you can create a new number from the list of base 2 digits, then one set
MUST have more elements than the other?
Good so far?
But consider what happens when you go to infinity with base 1. You can take the diagonal and you will create a new number. Not sure if I’m using this correctly, but doesn’t Dedekind’s theorem say that a proper subset MUST map to the set in order for it to be infinite?
Out of curiosity, when you do go to infinity, what is the ratio of digits in your diagonal vs. the number of rows?
The length of base 3 elements is also base 1, and I have shown that it is smaller than base 2. Hence there are more base 2 elements than you have letters in your base 3 string, so you can’t
use this idea to avoid them all.
I didn’t catch that last part. Avoid what?
87. #87 Pelli October 29, 2009
“By using what could be called a sub-element of f(i), you are creating a new mapping that is not one to one. There is no reason to expect a mapping that is not one to one to be a bijection.
Producing a new element (or set) K should be expected. No contradiction.”
What do you mean by sub-element? I am not creating a new map. I am pointing at the same unaltered f, except I have constructed an ELEMENT, a string S, such that S is not equal to any f(i). Hence
f was not onto. I don’t understand what you mean by “producing … should be expected”.
“This is not what you are doing with K. What you are doing is like trying to figure out if X is 4. So you assume X is 4. Then you define X as 5. By showing that 4 != 5, you’ve demonstrated a contradiction.”
No, I’m trying to prove X cannot be 4. Hence I assume the opposite, that X = 4. I then show this leads to X = 5. Obviously this is a contradiction, and since my logic is correct, the only thing
that can be wrong is the assumption, which must be wrong. Hence X is not 4.
One realistic setting for this would be to prove X=4 is not a solution for X = 2X-3.
You could also consider the equation X = X+1. A proof that no such X exists could go along the lines of: Suppose there is such an X. Then X = X+1. This is a contradiction, so no such X existed.
“Do you at least agree that what I’ve just describe cannot be used in a proof by contradiction? If so, then this is the very flaw I see in your proof.”
Yes, what you described was not a proof by contradiction. But that doesn’t prove your point. Apart from what I just said, another objection is that “trying to figure out if X is 4″ and “proving X
is not 4″ are different things. Only for the latter can you assume the opposite and prove it false.
“I think you mean that you have an enumeration of digits (which can be used as base 1) and are thus mapping digits of both axis one to one (I’ve agreed to this from the beginning). But this is
not a requirement. It’s a personal choice and is a mapping that is not one to one on the elements.”
I don’t quite follow. I was questioning your claim that my proof for (base) 2 > 1 will also prove 3 > 2, by showing the key point where the argument would fail. If you want to prove 3 > 2, you’ll
have to fix that somehow.
“How is base 2 larger than base 1? You mean one can represent more elements in base 2 with the same number of digits, sure. But it has fewer digits. You can still list them one by one. Why would this no longer be possible, exactly?”
No, whether or not base 2 is larger than base 1 is what we are discussing, i.e. whether there is a bijection between the infinite set of strings over a 1-letter alphabet (which is clearly equivalent to the set of natural numbers) and the set of strings over a 2-letter alphabet (which is clearly equivalent to the power set of the natural numbers, and unclearly equivalent to the set of real numbers). We are not talking about the number of digits/letters either, since the strings are allowed to be infinite. What I’m saying is you can’t match up all 1-letter strings with all 2-letter strings (both of which there are infinitely many of) – there will always be at least one 2-letter string left over.
“Do you agree that taking the diagonal of base 1 and altering it in some fashion will not give a new number? And that since you can create a new number from the list of base 2 digits, then one
set MUST have more elements than the other?”
I don’t know what you mean by the diagonal of base 1, or altering the diagonal. I agree that there are more (finite or infinite) 2-letter strings than 1-letter strings, in the sense that if you
try to match them up there will always be 2-letter strings left over.
“But consider what happens when you go to infinity with base 1. You can take the diagonal and you will create a new number. Not sure if I’m using this correctly, but doesn’t Dedekind’s theorem say that a proper subset MUST map to the set in order to be infinite?”
When I try to do the “standard” diagonal argument with base 1, I will fail because I can’t pick a letter to make a string different from another given string at a given position – there is just
one letter! Dedekind’s theorem says that if a set is infinite, then there EXISTS a proper subset that maps to the whole set. And if there exists such a set then the set is infinite.
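The Dedekind characterisation can be spot-checked in a few lines (a sketch of my own, not from the thread): the map n -> 2n is a bijection from N onto its proper subset of even numbers, which is exactly the kind of self-mapping the theorem says an infinite set must admit.

```python
# Sketch (mine): N is Dedekind-infinite because n -> 2n pairs N
# bijectively with its proper subset of even numbers.
def f(n):
    return 2 * n

def f_inverse(m):
    assert m % 2 == 0
    return m // 2

# Spot-check on a finite prefix of N: one-to-one, lands in the evens,
# and every even image is recovered by the inverse.
naturals = range(1, 1001)
images = [f(n) for n in naturals]
assert len(set(images)) == len(images)            # one-to-one
assert all(m % 2 == 0 for m in images)            # lands in the evens
assert all(f(f_inverse(m)) == m for m in images)  # inverse round-trips
```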
“Out of curiosity, when you do go to infinity, what is the ratio of digits in your diagonal vs. the number of rows?”
In my diagonalisation proof that 2^n is bigger than n, I am making a construction which is not a process. My construction says “The kth letter of my string is the opposite of the kth letter in the
string f(k)”. Although it is practical to think of it as choosing the first letter, then the second, then the third, etc., this actually does everything at once. Many misconceptions about
infinite limits come from thinking that infinity is what you get if you do a finite thing at a time forever. As for your question – what I can say is that the first n letters in my string will
only depend on the first n strings (rows) f(1), f(2), …, f(n).
” The length of base 3 elements is also base 1, and I have shown that it is smaller than base 2. Hence there are more base 2 elements than you have letters in your base 3 string, so you can’t use
this idea to avoid them all.
I didn’t catch that last part. Avoid what?”
My construction of a counterexample avoids all the strings in the list by avoiding the kth string in the kth position. If you have more strings than positions, then you can’t do that. There will
need to be a new idea introduced in order to prove that base 3 > base 2 (which is actually false).
88. #88 Pelli October 29, 2009
To aid your understanding, here are some examples of sets that have bijections between them:
N (positive integers) and Z (integers): Map 1 to 0, then map even numbers 2k to k and odd numbers 2k+1 to -k.
N (positive integers) and Q (rational numbers).
The reason Cantor’s diagonalisation fails is that the counterexample number you construct will inevitably be irrational and hence not a problem.
N (positive integers) and finite subsets of N.
Diagonalisation fails because the counterexample subset you construct will inevitably be infinite and hence not a problem.
By the way, I was reading the post on your blog and it struck me you are only talking about finite strings, or equivalently strings that begin with an infinite string of H’s. Those are indeed in
bijection with N, as in the case of finite subsets of N.
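Two of these pairings can be spot-checked with a short script (my own sketch, not from the thread). It uses a standard N↔Z map (1 to 0, even 2k to k, odd 2k+1 to -k), and encodes each finite subset of N by the bits of a natural number:

```python
# Sketches (mine) of two bijections with N.

# N <-> Z: 1 -> 0, even numbers 2k -> k, odd numbers 2k+1 -> -k.
def n_to_z(n):
    return n // 2 if n % 2 == 0 else -(n // 2)

# N <-> finite subsets of N: read the bits of n-1 as membership flags,
# so every finite subset is produced exactly once.
def n_to_finite_subset(n):
    bits = n - 1
    return {i for i in range(bits.bit_length()) if (bits >> i) & 1}

# On the prefix 1..100, the N <-> Z map is one-to-one and hits an
# unbroken stretch of integers with nothing repeated.
zs = [n_to_z(n) for n in range(1, 101)]
assert len(set(zs)) == len(zs)
assert set(zs) == set(range(-49, 51))

# The subset encoding never lists the same subset twice.
subsets = [frozenset(n_to_finite_subset(n)) for n in range(1, 17)]
assert len(set(subsets)) == len(subsets)
```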
89. #89 Afgncaap October 29, 2009
Vorlath, I have a simple question. Does a bijective mapping exist between the set of natural numbers and the “independent” set of reals?
90. #90 Michael October 29, 2009
I’ve always understood Cantor’s proof as follows:
1) Assume there is a One-to-one mapping from the natural numbers to the reals
2) if it exists therefore you can write it out
example: 1->0.1000…
etc. etc.
3) However, Cantor then goes, using the diagonalisation method (described well enough in the comments above that I needn’t go over it again), to show that there is a real number that is not on that list at all, and therefore the mapping can’t have been complete
4) Since one-to-one mapping is how we define things being the same cardinality, since no one-to-one mapping exists (as shown above) then the cardinality of the Reals must be different to the
cardinality of the Naturals
And since the mapping wasn’t specified, it could be any possible mapping, and therefore |R| =/= |N|.
Which part of his proof do you disagree with, Vorlath?
91. #91 EastwoodDC October 29, 2009
@4 — Didn’t Dembski and Marks just publish something about that?
92. #92 tux October 30, 2009
“There is no discovery. The bijection “proof” has two assumptions.
1. Assume there is a bijection.
2. Define the bijection to be a mapping that is not one to one.
Then it goes on to show what appears to be a “contradiction”.
There is no discovery. Only circular reasoning.”
Assume that f:N –> R is one-to-one. List the values of f: f(1), f(2), etc. Use the diagonalization process to construct a number x so that x is not equal to f(n) for any n. Then f is not onto.
Therefore any one-to-one function from N to R is not onto, so no bijection exists.
Does this clear up the issue?
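The construction can be acted out on a finite prefix of any claimed list (a sketch of my own, not from the thread; the sample f below is an arbitrary stand-in for whatever list the adversary offers):

```python
from fractions import Fraction
from math import floor

# Diagonalization sketch (mine): given any claimed list f(1), f(2), ...
# of reals in [0,1), build x whose nth digit differs from the nth digit
# of f(n). Digits 5 and 6 avoid the 0.999... = 1.000... ambiguity.
def nth_digit(x, n):  # nth digit after the decimal point
    return floor(x * 10**n) % 10

def diagonal_counterexample(f, depth):
    digits = [5 if nth_digit(f(n), n) != 5 else 6 for n in range(1, depth + 1)]
    return sum(Fraction(d, 10**n) for n, d in enumerate(digits, start=1))

# An arbitrary sample "list"; any other map from N to [0,1) works the same.
def f(n):
    return Fraction(1, n + 1)

x = diagonal_counterexample(f, 10)
# x disagrees with f(n) in the nth decimal place for every n checked,
# so x is not equal to any f(n) on the list.
assert all(nth_digit(x, n) != nth_digit(f(n), n) for n in range(1, 11))
```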
93. #93 Vorlath October 30, 2009
@87 Pelli:
What do you mean by sub-element? I am not creating a new map. I am pointing at the same unaltered f, except I have constructed an ELEMENT, a string S, such that S is not equal to any f(i).
Hence f was not onto. I don’t understand what you mean by “producing … should be expected”.
You use the notation for a power set, so I assumed the bijection was between an infinite set and its power set. Did I read that incorrectly?
If I read that correctly, then sub-element is when you say that n is only in K when n is not in f(n). I just said n is (or is not) a sub-element of f(n). I didn’t know what word to use because f(i) is both an element and a set depending on how you look at it. So I said sub-element.
But you are creating a new map. You’re checking if n is found (or not found) in f(n). That’s base 2 since you have two choices. And you check all f(i) leaving only one choice, base 1. This is a
mapping that is not one to one. So when you construct K, f(i) cannot be a bijection even though you tried to assume it was. Then you go on to show the expected result how a mapping that is not
one to one generates K that is not in f(i).
No, I’m trying to prove X cannot be 4. Hence I assume the opposite, that X = 4.
Yeah, sorry about that.
I then show this leads to X = 5.
Actually, you define X as 5. It doesn’t actually come from anywhere. When you check n in f(n), nobody is forcing you to do that. You can use a different mapping.
But that doesn’t prove your point.
Not trying to prove a point just yet. Taking it one step at a time.
So if it can be shown that your “proof” suffers from the same flaw, then it too would be flawed, correct?
I was questioning your claim that my proof for (base) 2 > 1 will also prove 3 > 2
Wait a sec. You actually agree that you’re proving |base 2| > |base 1|? And you can show how this does not imply |base 3| > |base 2|? If you can show this, I’d be satisfied. Oh wait, you’re
talking about infinite base 2 digits used as a power set. Never mind. I already know the arguments.
Although it is practical to think of it as choosing the first letter, then the second, then the third, etc., this actually does everything at once.
That’s what I’m asking. What is the ratio of digits when you take the whole thing into account at once?
As far as base 1 and base 2, you see this as mapping |N| vertically to |N^2| horizontally, right? This would be equivalent to the power set, correct? So because you can create a new string not
found in the list, then |N^2| > |N|. I’m good so far?
|N^2| is dependent on |N| through the digits. This is a very specific mapping. It’s certainly not a one to one mapping. This is known from the beginning. Use a different mapping and you might get
better results. Just because you “prove” that one particular mapping isn’t a bijection doesn’t mean that no bijection can exist.
@88 Pelli: I understand bijection.
The reason Cantor’s diagonalisation fails is that the counterexample number you construct will inevitably be irrational and hence not a problem.
Does this really hold up past the fallacy of composition though? If you actually could build a diagonal using infinite digits, would you not produce infinitely many diagonals? If so, then these
would not map to a single natural with finite digits, but to a proper subset of N which requires |N| digits. So we would not be able to conclude that any one of them is irrational by the mere
fact that infinite digits are in use.
@89 Afgncaap:
Does a bijective mapping exist between the set of natural numbers and the “independent” set of reals?
Short answer is no. Don’t ask me to prove it
@90 Michael:
I only disagree with:
since no one-to-one mapping exists (as shown above
94. #94 Vorlath October 30, 2009
@92 tux:
Does this clear up the issue?
The setup that the diagonalization process uses imposes a mapping that is not one to one. So your “proof” can stop before you even do anything. No need for a diagonal. Assuming that a mapping
(known to not be one to one) to be a bijection is where the faux contradiction comes from.
95. #95 Douglas McClean October 30, 2009
@93, Vorlath, who wrote:
Does a bijective mapping exist between the set of natural numbers and the “independent” set of reals?
Short answer is no. Don’t ask me to prove it
So, you reject Cantor’s method, but you do accept his result?
Truly this is a new and fascinating kind of crackpottery.
96. #96 Robert October 30, 2009
My intuitive feeling (definitely nowhere near a proof, or even useful in the construction of one) about Cantor’s different infinities is that if you were able to enumerate the reals, you should be able to take a point on the real number line, and then go on to the NEXT point. Since the real number line is a continuous line, this is mind-bogglingly counter-intuitive, therefore my intuition is that Cantor is right.
Of course, my intuition has been mind-bogglingly wrong before, so I’ve read and understood the proof and now I know for certain.
(Of course, my intuition assumes some kind of ‘nice’ probably monotone enumeration. Cantor proved that some kind of monstrously complicated enumeration which jumps back and forth and so fills the
line will also fail.)
97. #97 AnyEdge October 30, 2009
Sadly, the intuition is not quite true. Consider that the Real Line, R, is separable. The rationals are enumerable, but there is no ‘next’ rational point on the line.
98. #98 Pelli October 30, 2009
@93 Vorlath,
“You use the notation for a power set, so I assumed the bijection was between an infinite set and its power set. Did I read that incorrectly?”
I’m trying to prove there is no bijection between the set of 1-letter strings and the set of 2-letter strings, which is roughly equivalent to both that there is no bijection between N and PN and
that there is no bijection between N and R.
“If I read that correctly, then sub-element is when you say that n is only in K when n is not in f(n). I just said n is (or is not) a sub-element of f(n). I didn’t know what word to use because f(i) is both an element and a set depending on how you look at it. So I said sub-element.”
So in the proof for N vs PN, you do indeed construct a set K, which is an element of PN, such that n lies in K iff n does not lie in the set f(n), which is an element of PN. Hence K is different
from all f(n)’s, which contradicts that f hits every element in PN. I’d prefer to talk about 1-letter and 2-letter strings though, since that way you don’t have sets of sets.
“But you are creating a new map. You’re checking if n is found (or not found) in f(n). That’s base 2 since you have two choices. And you check all f(i) leaving only one choice, base 1. ”
Apart from some of your reasoning, this is basically the reason that there is no bijection f between N and PN. It is not based on the fact that 2 choices is more than 1, because the analogy that
3 choices is more than 2 fails to produce an analogous proof for 3-letter strings vs 2-letter strings.
“This is a mapping that is not one to one. So when you construct K, f(i) cannot be a bijection even though you tried to assume it was.”
Exactly! You give me an f and claim it is a bijection, i.e. every element of PN, or equivalently every subset of N, is equal to f(i) for some i. I then say “look at K, you fail”.
“Then you go on to show the expected result how a mapping that is not one to one generates K that is not in f(i).”
No, then I go on to show that any mapping that is claimed to be a bijection has a counterelement K that is not equal to any f(i). So no bijective mapping exists.
“Actually, you define X as 5. It doesn’t actually come from anywhere.”
In my equation example, this is what goes on:
Proof that X=4 does not satisfy X=X+1. 1) Suppose that X=4 indeed satisfies X=X+1. 2) Since X=X+1, it follows that X=5. 3) X cannot be 4 and 5 at the same time, contradiction. 4) Hence our
assumption was false.
The corresponding steps for me are: 1) Suppose there is an f that indeed is a bijection. 2) Since it is a bijection, we can list the elements and walk down the diagonal to get a counterexample.
This is not hit by f, so f is not onto. 3) f cannot be both a bijection and not onto, contradiction. 4) Hence our assumption was false.
“When you check n in f(n), nobody is forcing you to do that. You can use a different mapping.”
f could be any mapping, but as I said f is not magic. When I assume there exists a bijection f, then f becomes a specific mapping. Put another way, do you agree that if you give me a SPECIFIC map
f, I can use the diagonal argument to find a counterelement for that specific map? If so, wouldn’t you agree that no map f is a bijection, since every map has a counterelement?
“So if it can be shown that your “proof” suffers from the same flaw, then it too would be flawed, correct?”
If you can show me my proof suffers from a flaw, then it would be flawed. If you manage to point out a flaw in a proof for something else that I agree applies to an analogous point in my proof,
then I will accept my proof is flawed.
“Wait a sec. You actually agree that you’re proving |base 2| > |base 1|? And you can show how this does not imply |base 3| > |base 2|? If you can show this, I’d be satisfied. Oh wait, you’re
talking about infinite base 2 digits used as a power set. Never mind. I already know the arguments.”
In this line of argument, I’m claiming the following:
1. You CANNOT pair up 1-letter strings with 2-letter strings without there being 2-letter strings left over.
(2. You CAN pair up 2-letter strings with 3-letter strings without there being any strings left over.)
“That’s what I’m asking. What is the ratio of digits when you take the whole thing into account at once?”
The ratio between infinities is too complicated to discuss before we agree on what sets have the same size. It is easy to see that “intuitively”, the following ought to be true: inf + inf = inf
and inf*inf = inf whereas inf/inf and inf-inf can be anything.
“Just because you “prove” that one particular mapping isn’t a bijection doesn’t mean that no bijection can exist.”
True. But if I prove that all mappings are not bijections, then indeed no bijections exist.
“Does this really hold up past the fallacy of composition though? If you actually could build a diagonal using infinite digits, would you not produce infinitely many diagonals? If so, then these
would not map to a single natural with finite digits, but to a proper subset of N which requires |N| digits. So we would not be able to conclude that any one of them is irrational by the mere
fact that infinite digits are in use.”
I’m sorry but I don’t understand what you are saying. The reason I can conclude that the number must be irrational is because I know the function is a bijection: I have something I know is a
bijection between natural numbers and decimal expansions of rational numbers. You walk down the diagonal to find a decimal expansion for a number not hit by my function. Since I know my function
hits all rationals, your number must be irrational.
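This can be made concrete with a genuine enumeration of the positive rationals, the Calkin-Wilf sequence (the script is my own sketch, not from the thread): the diagonal recipe yields digits that disagree with every listed rational, so whatever number those digits define cannot appear anywhere in the list.

```python
from fractions import Fraction
from math import floor

# Calkin-Wilf sequence: q1 = 1, q_{k+1} = 1/(2*floor(q_k) - q_k + 1).
# It lists every positive rational exactly once.
def calkin_wilf(count):
    q = Fraction(1)
    out = []
    for _ in range(count):
        out.append(q)
        q = 1 / (2 * floor(q) - q + 1)
    return out

def nth_digit(q, n):  # nth digit after the decimal point
    return floor(q * 10**n) % 10

# Diagonal digits chosen from {5, 6} so they always disagree with the
# nth digit of the nth rational (and sidestep the 0.999... ambiguity).
rationals = calkin_wilf(12)
diag = [5 if nth_digit(q, n) != 5 else 6
        for n, q in enumerate(rationals, start=1)]
assert all(d != nth_digit(q, n)
           for n, (d, q) in enumerate(zip(diag, rationals), start=1))
```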
99. #99 AnyEdge October 30, 2009
Are you in fact saying that R and N are NOT bijectable, and yet Cantor is STILL WRONG?!
100. #100 Stephen Wells October 30, 2009
Vorlath asks: “If you actually could build a diagonal using infinite digits, would you not produce infinitely many diagonals? ”
You produce one diagonal. It’s the diagonal.
Similarly, you’re wrong about the proof involving “a bijection that’s not one to one”. It doesn’t. It shows that, if you attempt to define _any_ bijection- nature unspecified- between the
rationals and the reals, you can always (by diagonal argument) define a real number that’s not covered by your bijection; so you can’t count the reals using rationals.
101. #101 Anonymous October 30, 2009
“@92 tux:
Does this clear up the issue?
The setup that the diagonalization process uses imposes a mapping that is not one to one. So your “proof” can stop before you even do anything. No need for a diagonal. Assuming that a mapping
(known to not be one to one) to be a bijection is where the faux contradiction comes from.”
No it doesn’t. For example, take a particular mapping, say f(n) = 00…010…, where the 1 appears in the nth position. This is clearly one to one, and diagonalization will produce a string that was not on the original list. The process doesn’t force the map to not be one to one, but it shows that the map can’t be onto.
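For this particular map the diagonal can even be computed exactly (a sketch of mine; strings are represented as position-to-bit functions so the infinite objects fit in finite code): f(n) has a 1 only in position n, so the diagonal is all 1s and the flipped string is all 0s, which equals no f(n).

```python
# Sketch (mine) of the comment's concrete map: f(n) is the 0/1 string
# with a single 1 in position n, modeled as a function position -> bit.
def f(n):
    return lambda pos: 1 if pos == n else 0

# One-to-one: f(n) and f(m) differ at position n whenever n != m.
assert all(f(n)(n) != f(m)(n)
           for n in range(1, 20) for m in range(1, 20) if m != n)

# Diagonal flip: position n of the new string is the opposite of
# position n of f(n). Here that is always 1 -> 0: the all-zeros string.
def flipped(pos):
    return 1 - f(pos)(pos)

# The flipped string disagrees with every f(n) at position n,
# so it is not on the list even though the list is infinite.
assert all(flipped(n) != f(n)(n) for n in range(1, 50))
```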
102. #102 Vorlath October 30, 2009
@95 Douglas:
So, you reject Cantor’s method, but you do accept his result?
Truly this is a new and fascinating kind of crackpottery.
It serves two purposes.
1. It keeps the crazies from coming out and asking for a one to one mapping.
2. It’s funny seeing how people who call others crackpot don’t understand (or never even think) how a proof can have the correct conclusion and still be bogus.
Oh and thanks for that reaction! Looks like I’m in good company here.
@98 Pelli:
Exactly! You give me an f and claim it is a bijection, i.e. every element of PN, or equivalently every subset of N, is equal to f(i) for some i. I then say “look at K, you fail”.
But K uses a different mapping than f for its construction. All you’d be showing is how this different mapping is not the same as f. You wouldn’t be showing that f is not a bijection.
K is built from ONE specific mapping. I want to know if ALL mappings can’t be f.
1. You assume that f is the bijection.
2. You then use a mapping where you take n from f(n). Call this mapping g. This cannot possibly be f because you’re using f in that very mapping. So we know ahead of time that g is not f.
3. g != f (this is a known fact)
4. You assume g = f.
5. You construct K from g.
6. K cannot exist in f
At this point, you say that f cannot exist, but there is no contradiction on f (existing). The only contradiction is between statement 3 and 4. The rest is a consequence of this.
You’ve proven that g!=f. But g is ONE specific mapping that is known to not be f.
When I assume there exists a bijection f, then f becomes a specific mapping.
So you know what it is even though you believe it doesn’t exist? WOW!
Put another way, do you agree that if you give me a SPECIFIC map f, I can use the diagonal argument to find a counterelement for that specific map?
Not even close. The diagonal would require the use of a different mapping than f. So all you’d be showing is that this different mapping is not f.
You can ALWAYS show that your own mapping is not one to one. So the diagonal is POINTLESS.
But if I prove that all mappings are not bijections, then indeed no bijections exist.
But you don’t do that. You only prove that your custom mapping is not a bijection.
Suppose I do give you a mapping that I claim is f. Then you want to create a diagonal, but in order to create this diagonal, you need to use a custom mapping that USES f (so mapping g USES f).
This is YOUR mapping. It is custom-made. And it can’t be f if it also uses f. So you take a DIFFERENT mapping than what I gave you and then say “look, it fails.” Well, DUH! You always go back to a mapping that is known to not be f.
Since I know my function hits all rationals, your number must be irrational.
Only if you can create a SINGLE diagonal with infinite digits. I haven’t seen any proof of what you claim. I still don’t see how you get around the fallacy of composition.
@99 AnyEdge:
Are you in fact saying that R and N are NOT bijectable, and yet Cantor is STILL WRONG?!
SHOCKER!!! C’mon. Seriously? WTF?
103. #103 Pelli October 30, 2009
“But K uses a different mapping than f for its construction. All you’d be showing is how this different mapping is not the same as f. You wouldn’t be showing that f is not a bijection.”
Let’s get things straight here: Let A = N and B = PN. I want to show there is no bijection f : A -> B. If you give me a function f, then I will exhibit an element b in B such that there is no
element a in A for which f(a) = b. This will prove the function f is not onto and hence not a bijection. Agree?
Note how b is not a function. b is an element. f is still the same function as before.
“K is built from ONE specific mapping. I want to know if ALL mappings can’t be f.”
Yes, K is built from one mapping. WHICHEVER map f you give me, I can construct a counterelement for that specific map f.
“1. You assume that f is the bijection.”
Yes. I want to show there is a contradiction if f is a bijection. Hence I assume it is and show something goes wrong.
“2. You then use a mapping where you take n from f(n).”
No. There are no more functions flying around than f itself. I then define K = {n : n is not in f(n)}. This is a set. Not a function.
“Call this mapping g.”
There is no other map.
“This cannot possibly be f because you’re using f in that very mapping.”
I am not defining a new map. I would be wrong to define f in terms of f, but I’m not. I am assuming f is already defined, by some adversary trying to give me a counterexample.
“So we know ahead of time that g is not f.
3. g != f (this is a known fact)
4. You assume g = f.”
There is no g.
“5. You construct K from g.”
I construct K from f.
“6. K cannot exist in f”
K is not an element in the IMAGE of f, i.e. there is no n such that f(n) = K.
“So you know what it is even though you believe it doesn’t exist? WOW!”
Yes, I assume the adversary is correct and that there is such a map f. Then I will show that its properties make a contradiction. It’s similar to the argument “Suppose an omnipotent and
benevolent god exists. Then there would be no suffering. Yet there is, so we have a contradiction. Hence there is no such god.” (Please don’t go into details about where this argument goes
wrong.) Even if I don’t believe in a god, I can “show” that the claim “there is an omnipotent and benevolent god” is false by assuming god exists.
“Not even close. The diagonal would require the use of a different mapping than f. So all you’d be showing is that this different mapping is not f.”
No, the diagonal constructs an element not in the image of f.
” But if I prove that all mappings are not bijections, then indeed no bijections exist.
But you don’t do that. You only prove that your custom mapping is not a bijection.”
I prove that any mapping you give me is not a bijection.
“Suppose I do give you a mapping that I claim is f. Then you want to create a diagonal, but in order to create this diagonal, you need to use a custom mapping that USES f (so mapping g USES f).”
No, I create a set (or in my case, a string) directly from f.
“Only if you can create a SINGLE diagonal with infinite digits. I haven’t seen any proof of what you claim. I still don’t see how you get around the fallacy of composition.”
I don’t get this. You put all expansions down row by row. Then you draw a line down the diagonal. Now you have an infinite line with numbers on it. Pick a new expansion by making sure it is
different from this line at every position.
I don’t like talking about sets and power sets and bijections, because there is lots of mathematical terminology that we may or may not agree on. Why don’t we talk about this instead:
Proof that you cannot write down all infinite A,B-strings row by row on an infinite sheet of paper: Suppose you can. Then I let you do it. I take your infinite sheet of paper, which looks something like

ABAAB…
BBABA…
AABBB…
…

(one infinite A,B-string per row),
and I take another infinite slip of paper, writing down the following A,B-string: In the kth position, I’ll write down an A if the string in the kth row contains a B in the kth position,
otherwise I write a B. This string is clearly not equal to the string in any row, so it is not on the list. Thus your list is not complete.
Could you please point out where exactly my proof goes wrong?
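The slip-of-paper recipe can be acted out on a finite corner of the sheet (a sketch of my own, with arbitrary sample rows standing in for the adversary’s list):

```python
# Sketch (mine) of the slip-of-paper construction: look at the kth letter
# of the kth row and write down the opposite letter.
def counter_string(rows):
    flip = {"A": "B", "B": "A"}
    return "".join(flip[row[k]] for k, row in enumerate(rows))

rows = ["ABAAB", "BBABA", "AABBB", "BABAB", "AAAAA"]  # any sample rows work
s = counter_string(rows)

# s differs from row k in position k, so it matches no row on the list.
assert all(s[k] != row[k] for k, row in enumerate(rows))
assert s not in rows
```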
104. #104 Rilke's Granddaughter October 30, 2009
@103. Pelli, my intuition says that ANY combination of ABs will be found on the first page, since it is infinite. Help me understand WHY your constructed string’s not there.
Molly the not-so-mathy-one
105. #105 Pelli October 30, 2009
@104: This is where intuition fails. You think that since there are infinitely many rows and infinitely many A,B-strings, then they should match up. This is not the case though because there are
many different infinite cardinalities (“set sizes”). (Colloquially, two sets are the same size if you can match up their elements. One set is bigger than the other if however you try to match up
the elements there are always elements from the bigger set left over. This works as you’d expect for finite sets and extends to infinite sets without introducing any notion of infinite numbers,
whatever that is.)
To get a string that’s not in the list, I do not use counting. Instead, I construct it to make it different from all the strings on the list: The nth letter in my string is chosen to be the
OPPOSITE of the nth letter in the nth string.
So if the list looks like this:

A****…
*B***…
**A**…
***B*…
…

(where * denotes A or B), my string will look like this:

BABA…

It differs from the 1st string in the first letter, the 2nd string in the 2nd, etc. Hence it differs from all elements, and is therefore not on the list.
106. #106 Nobody Important October 30, 2009
The reason why it isn’t there is because you’re going down the list row by row and systematically making the constructed string different than the string in each row in such a way that it is
guaranteed to also be different than the strings in all the prior rows.
Put more long-windedly, your job is to make a string that is *not* already on the list. The way to do that is to start with the first string in the list and make your string different than that
one. So you start with row 1. Making the first character of the constructed string not equal to the first character of the row 1 string is sufficient to ensure that the constructed string doesn’t
match it. So far, so good. Now proceed to row 2. You need to do the same thing, but the catch is that you have to do it in such a way that the constructed string is now guaranteed to be different
than the row 1 string *and* the row 2 string. In other words, you can’t go back and change the first character of the constructed string because you might be changing it *back* to what it was in
the row 1 string. So instead you change the *second* character of the constructed string to be different than the *second* character of the row 2 string. Now the constructed string is guaranteed
to be different than both the row 1 and row 2 strings because the first character in the constructed string doesn’t match the first character in the row 1 string and the second character in the
constructed string doesn’t match the second character in the row 2 string. Imagine going through the same process for each row and it should be at least relatively plain (provided I explained it
well at all) that the constructed string *must* be different than all the strings in the list.
107. #107 Nobody Important October 30, 2009
…and Pelli beats me to it. Curses!
108. #108 Shawn Smith October 30, 2009
@Rilke’s Granddaughter #104,
my intuition says that ANY combination of ABs will be found on the first page, since it is infinite.
If the strings were all finite size, then your intuition would be correct. However, all the strings are infinitely long. The key thing to know is that when it comes to infinity, our intuition
doesn’t do a very good job describing what is actually happening. Infinities are just plain weird. When it comes to sets with an infinite number of elements, you can have two sets where one looks obviously smaller, like the counting numbers (positive integers) versus all the integers (positive, negative, and 0), and yet they still have the same cardinality, which is kind of like saying they have the same number of elements.
Pelli’s constructed string is different because no matter which string you point to on the first sheet of paper, Pelli’s string on the second sheet will be different from it. If Pelli’s string is
different than every string on the first sheet, then it can’t be on it. The main property that allows that to happen is that all the strings are infinitely long. If it helps your intuition, maybe
you can justify it by saying that that infinitely long string gives an infinite amount of chances to get a different string. Maybe that won’t be enough for your intuition to justify it, but no
one said that set theory (and infinities, specifically) has to match anyone’s intuition.
109. #109 Nobody Important October 30, 2009
Re-reading #104, it might be useful to her for us to explicitly state that just because you have an infinite list of some “Thing” doesn’t mean your list automatically includes every possible
“Thing.” I think making that assumption is how her intuition failed.
For example, think about sets of integers. The set of all integers except “3” is still an infinite set of integers even though it’s missing one.
In fact, there are some things that are actually impossible to list (or “enumerate” as we nerds like to call it), which is precisely what Cantor’s Diagonalization proves and Pelli’s string
example demonstrates.
110. #110 Rilke's Granddaughter October 30, 2009
Thanks, thanks, thanks! Makes much more sense now… except @109. But if the “infinity” is meant to be an “exhaustive list”, i.e. it is supposed to include all permutations of something, my
intuition says that an infinite list of all permutations should include, well, all permutations. I see now how my intuition fails….
And I can see at least one order of infinity. But how the heck do you get to the next one? What on earth does it MEAN to have the multiple orders of infinity? Are they useful for anything?
111. #111 Nobody Important October 30, 2009
But if the “infinity” is meant to be an “exhaustive list”, i.e. it is supposed to include all permutations of something, my intuition says that an infinite list of all permutations should
include, well, all permutations.
That’s just it, infinity is not necessarily an exhaustive list. Cantor’s Theorem is a proof by contradiction that says “for certain infinite sets (specifically the “uncountable” ones), if you
think you’ve got a complete list, you’re wrong and I can prove it by generating an element of the set that isn’t on your list (the Diagonalization) at will.”
And I can see at least one order of infinity. But how the heck do you get to the next one? What on earth does it MEAN to have the multiple orders of infinity? Are they useful for anything?
If by “order of an infinite set” you mean “number of elements in an infinite set,” then your intuition is correct. All infinite sets have the same order. “Infinity plus one” really is a
meaningless phrase. The Reals and the Integers, for example, are the same size (note that this is true even though the Integers are in fact a proper subset of the Reals).
What we’re talking about in this thread is something different than Order called Cardinality. Two sets are said to have the same cardinality if and only if there exists a bijection (a one-to-one
and onto mapping) between them. Cardinality doesn’t say anything at all about the relative size of the two sets (unless the sets are finite, of course). The Reals and the Integers, for example,
do not have the same cardinality while the Integers and the Rationals do (note that the Integers are a proper subset of both the Reals and the Rationals).
People often-times mistake different cardinalities of infinite sets for different sizes because, frankly, of sloppy language. We have a habit of saying that an infinite set A is “bigger” than an
infinite set B if the cardinality of A is larger than the cardinality of B (largely because in the case of finite sets, that’s actually how it works), but that’s not really what it means. The
problem is that the very idea of “size” breaks down when you’re dealing with infinite sets. There is a sense in which, say, the Rationals are “bigger” than the Integers; namely that the Integers
are a proper subset of the Rationals, but again, that notion doesn’t apply to the actual size of the two sets, which is still just infinity.
As for practical uses, I’m a computer scientist and all of this stuff bears directly on the question of computability (what computers can and cannot do), so I think it’s very useful indeed. I
cannot write a program that enumerates the Reals, but I can write one that enumerates the Integers, just to pick a trivial example. I’m sure it’s relevant to other stuff as well, but someone else
will have to step in and fill in those blanks.
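To make that computability point concrete: the Integers claim is a five-line program, while Cantor’s diagonal argument is exactly the proof that no analogous program exists for the Reals. A minimal sketch (my own illustration, not code from this thread):

```python
def integers():
    """Enumerate every integer exactly once: 0, 1, -1, 2, -2, 3, -3, ...
    Every integer appears at some finite position in this stream."""
    yield 0
    n = 1
    while True:
        yield n
        yield -n
        n += 1

gen = integers()
print([next(gen) for _ in range(7)])   # [0, 1, -1, 2, -2, 3, -3]
```

No such generator can exist for the Reals: whatever stream of reals a program produces, diagonalization constructs a real the stream misses.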
112. #112 Jonathan Vos Post October 30, 2009
Wikipedia, on Georg Ferdinand Ludwig Phillip Cantor, gives a hint of the counter-intuitive psychology in the History of this branch of Mathematics.
Cantor’s theory of transfinite numbers was originally regarded as so counter-intuitive—even shocking—that it encountered resistance from mathematical contemporaries such as Leopold Kronecker and
Henri Poincaré [Dauben 2004, p. 1] and later from Hermann Weyl and L. E. J. Brouwer, while Ludwig Wittgenstein raised philosophical objections. Some Christian theologians (particularly
neo-Scholastics) saw Cantor’s work as a challenge to the uniqueness of the absolute infinity in the nature of God,[Dauben, 1977, p. 86; Dauben, 1979, pp. 120 & 143] on one occasion equating the
theory of transfinite numbers with pantheism.[Dauben, 1977, p. 102] The objections to his work were occasionally fierce: Poincaré referred to Cantor’s ideas as a “grave disease” infecting the
discipline of mathematics,[Dauben 1979, p. 266] and Kronecker’s public opposition and personal attacks included describing Cantor as a “scientific charlatan”, a “renegade” and a “corrupter of
youth.”[ Dauben 2004, p. 1. See also Dauben 1977, p. 89 15n] Writing decades after Cantor’s death, Wittgenstein lamented that mathematics is “ridden through and through with the pernicious idioms
of set theory,” which he dismissed as “utter nonsense” that is “laughable” and “wrong”.[Rodych 2007] Cantor’s recurring bouts of depression from 1884 to the end of his life were once blamed on
the hostile attitude of many of his contemporaries,[Dauben 1979, p. 280:"...the tradition made popular by [Arthur Moritz Schönflies] blamed Kronecker’s persistent criticism and Cantor’s inability
to confirm his continuum hypothesis” for Cantor’s recurring bouts of depression] but these episodes can now be seen as probable manifestations of a bipolar disorder.[Dauben 2004, p. 1. Text
includes a 1964 quote from psychiatrist Karl Pollitt, one of Cantor's examining physicians at Halle Nervenklinik, referring to Cantor's mental illness as "cyclic manic-depression"].
# Aczel, Amir D. (2000). The mystery of the Aleph: Mathematics, the Kabbala, and the Human Mind. New York: Four Walls Eight Windows Publishing. ISBN 0760777780. A popular treatment of infinity,
in which Cantor is frequently mentioned.
# Dauben, Joseph W. (1977). Georg Cantor and Pope Leo XIII: Mathematics, Theology, and the Infinite. Journal of the History of Ideas 38.1.
# Dauben, Joseph W. (1979). Georg Cantor: his mathematics and philosophy of the infinite. Boston: Harvard University Press. The definitive biography to date. ISBN 978-0-691-02447-9
# Dauben, Joseph W. (1983). Georg Cantor and the Origins of Transfinite Set Theory. Scientific American 248.6:122-131
# Dauben, Joseph (1993, 2004). “Georg Cantor and the Battle for Transfinite Set Theory” in Proceedings of the 9th ACMS Conference (Westmont College, Santa Barbara, CA) (pp. 1–22). Internet
version published in Journal of the ACMS 2004.
113. #113 Douglas McClean October 31, 2009
It’s funny seeing how people who call others crackpot don’t understand (or never even think) how a proof can have the correct conclusion and still be bogus.
I certainly do understand how a proof can have a correct conclusion and still be bogus. I was merely observing that it is new-to-me to see an anti-Cantor crackpot who accepts Cantor’s conclusion
but rejects his argument. There is no logical inconsistency inherent in such a position, and I never claimed there was.
I will again call you a crackpot, because that is what you are. Just, as I said, a new (to me) kind of crackpot.
114. #114 Douglas McClean October 31, 2009
I said @92, I meant @102. Oops.
115. #115 Ivan October 31, 2009
@96 Robert,
Assuming the axiom of choice, one can in fact choose a well-ordering of the real numbers, where each real number has a successor.
This of course doesn’t contradict Cantor’s diagonal argument, because ordinal numbers come in all cardinalities (a well-ordering of a set X being essentially an isomorphism between X and an
ordinal number).
116. #116 p October 31, 2009
@111: As for practical uses, I’m a computer scientist and all of this stuff bears directly on the question of computability (what computers can and cannot do), so I think it’s very useful indeed.
I cannot write a program that enumerates the Reals, but I can write one that enumerates the Integers, just to pick a trivial example. I’m sure it’s relevant to other stuff as well, but someone
else will have to step in and fill in those blanks.
Here’s a pretty nifty trick: “Seemingly impossible functional programs” and, for a more general treatment, “A Haskell monad for infinite search in finite time”.
Using the fact that every total computable predicate on Cantor space is uniformly continuous, we can do searches of “infinite” sets (i.e. implicitly defined infinities) in finite time. Just about all the work is done by the type
system, which exploits the fact that Cantor space is topologically compact, so that we can decide propositions over the type (Cantor -> y) where y is a decidable type, but not (Integer ->
Integer), which is excluded due to the Halting Problem.
Very cool, trippy stuff. I’ve got a strong feeling that such techniques will pay off in correctness proofs of programs and related applications, as these searches don’t need access to the “guts”
of a function, only relying on its type for proofs.
The fact that such techniques work is evidence for Cantor’s view of infinities: we can use finitistic programs to deal handily with implicitly defined infinities.
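For readers without Haskell, the core search function from Escardó’s papers can be transcribed roughly into Python. This is my own loose sketch (exponentially slower than the real thing, which memoizes, and correct only for total predicates that inspect finitely many bits of the sequence):

```python
def cons(bit, rest):
    """Prepend one bit to an infinite 0/1 sequence (a sequence is
    modeled as a function from index to bit)."""
    return lambda n: bit if n == 0 else rest(n - 1)

def find(p):
    """Escardo-style search over Cantor space (illustrative sketch).
    If the total predicate p inspects only finitely many positions,
    this terminates; if any sequence satisfies p, the result does."""
    def branch(bit):
        # Lazily search among the sequences that start with `bit`.
        return cons(bit, lambda n: find(lambda rest: p(cons(bit, rest)))(n))
    return branch(0) if p(branch(0)) else branch(1)

def exists(p):
    """Decide whether ANY infinite 0/1 sequence satisfies p."""
    return p(find(p))

witness = find(lambda s: s(1) == 1 and s(3) == 0)
assert witness(1) == 1 and witness(3) == 0        # a satisfying sequence
assert exists(lambda s: s(2) == 1)                # satisfiable
assert not exists(lambda s: s(0) == 1 and s(0) == 0)  # unsatisfiable
```

The point stands even in this toy form: a finitistic program genuinely decides an existential question over an uncountable set.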
117. #117 p October 31, 2009
Or, to put things very simplistically, while the type (Integer->Integer) is undecidable, we can do a surprising amount of work with the fact that (Integer->Bool) is decidable.
118. #118 p October 31, 2009
@102: K is built from ONE specific mapping. I want to know if ALL mappings can’t be f.
f is defined very generically, to such an extent that most mathematicians find the proof valid.
If you could provide a mapping g that is not commensurable with or reducible to f, it would be very strong evidence for your argument. Otherwise, you will be left with the option of declaring
that such a g exists with no evidence backing your assertion.
119. #119 Christopher November 1, 2009
In Cantor’s proof, it is not necessary to assume that there is a bijection from N onto R. You need only let f be any arbitrary function with domain N. You can enumerate f(x) because the domain is
N. Then using the enumeration you can find an element of R that is not in the image of f (using diagonalization if you like). So f is not onto R. (Indeed, the codomain need not be restricted to
R, not even restricted at all except to the universe of discourse.)
This is not about finding a contradiction.
There is no reason to use a reductio ad absurdum method in this proof.
I can see Vorlath has been harping on the reductio aspect of Cantor’s proof. But there is no reductio aspect to his proof. (Actually I don’t want to bother looking for the original, but I just
explained that the reductio is not needed.)
So the only criticism left would be directed toward diagonalization. The construction used is explicit and constructive. The constructed number is a real number. It is not in the enumeration of f(x) (i.e. it is not in the image of f).
On topic: some of these misunderstandings arise from incomplete or informal proofs given in classes (even at the college level). For the proof in question, there are a lot of bad illustrations
used to make the point quickly in a class that really only wants to use the result. Specifically in entry level computer science, math, and philosophy classes. Even psychology classes do it.
One bit of evidence that someone has received an informal proof is that they talk about the diagonalization as if it had something to do with a grid. The grid is an illustration of an
enumeration. Also, if someone thinks that the diagonalization is a way of discovering a “new real”, most likely they have been shown a proof that uses reductio ad absurdum.
One reason to avoid the reductio style proof, is that the end of the proof goes like this:
“If f is a bijection, then f is not a bijection.
Therefore, f is not a bijection.”
If you avoid the assumption that f is a bijection, and let f be any function, the end of the proof is more straightforward:
“If f is a function, then f is not a bijection.
Therefore, f is not a bijection.”
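Christopher’s no-reductio version of the argument can be written out as a program. The sketch below is my own (it models a real loosely as an infinite 0/1 sequence, i.e. a function from position to digit): it takes an arbitrary f with domain N and constructs an element not in its image, with no bijection assumption anywhere.

```python
def diagonal(f):
    """Given ANY function f from N to infinite 0/1 sequences (each
    sequence modeled as a function from position to digit), return a
    sequence that differs from f(n) at position n.  It is therefore
    not equal to f(n) for any n -- i.e. it is not in the image of f."""
    return lambda n: 1 - f(n)(n)

# An arbitrary example f (the construction works for any f whatsoever):
f = lambda n: (lambda k: (n >> k) & 1)
d = diagonal(f)
assert all(d(n) != f(n)(n) for n in range(1000))   # d disagrees with every row
```

So f is not onto, directly; the contradiction only appears if you additionally assumed f was a bijection.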
120. #120 Pelli November 1, 2009
Christopher: You’re right. So our claims are: “If f is a function from N to R, then there is an element in R that is not hit by f” and “Given a(n enumerated) list of A,B-strings, there is an
A,B-string not on the list.”
121. #121 Vorlath November 1, 2009
@103 Pelli:
If you give me a function f, then I will exhibit an element b in B such that there is no element a in A for which f(a) = b.
Only by ditching f and using your own custom mapping that is known to not be f. Big deal. Doesn’t prove a thing.
This will prove the function f is not onto and hence not a bijection. Agree?
Not even close. It will show that your custom mapping is not a bijection. But it says nothing about the bijection itself.
Yes, K is built from one mapping. WHICHEVER map f you give me, I can construct a counterelement for that specific map f.
HAHAHA! What? You don’t use f, but rather digits of f. This imposes a specific mapping that is different from f.
Let’s say set P is the set of ALL mappings. We want to know if F (a bijection) exists in P. All other mappings are mutually exclusive with F since all other mappings are not one to one.
You want to tell me that by USING F in your own custom mapping G, making it a different mapping than F, you are showing how F is not a bijection? That’s not even wrong. All you’re showing is how
G != F.
1. Assume there exists a function F that produces naturals other than 0 or 1.
2. If F exists, then I can take F/F and get 1 every time. Note that I’m ONLY USING F.
3. 1 cannot be any other natural number, so contradiction.
4. Conclusion: F cannot exist.
Do you accept that proof? No. This is what you are doing, so why should I accept your proof? F/F is a new function different from F and so is taking digits from F in base 2 with respect to base 1
digits in i.
There is no other map.
Yes there is. If there wasn’t, you wouldn’t be able to construct K.
I construct K from f.
Not exactly. You are constructing K from g which uses f. That makes g!=f. If you used f, you would use it DIRECTLY. You would not be taking a digit from f which alters the mapping.
I prove that any mapping you give me is not a bijection.
Saying so doesn’t make it so.
No, I create a set (or in my case, a string) directly from f.
No you don’t. You create it from the DIGITS of f. That’s a different mapping since it retains the original mapping used when defining the power set which is known to not be one to one.
Suppose you can. Then I let you do it. I take your infinite sheet of paper, that looks like
I will tell you that this is no longer the mapping I gave you and you are cheating.
@118 p:
f is defined very generically, to such an extent that most mathematicians find the proof valid.
If you could provide a mapping g that is not commensurable with or reducible to f, it would be very strong evidence for your argument. Otherwise, you will be left with the option of declaring
that such a g exists with no evidence backing your assertion.
g is the mapping used to define the power set.
That’s what is being used to form K. This mapping cannot be f by definition. Really, can you show how taking digits of f(i) is any different than the mapping used to define the power set? It’s not.
The mapping of f isn’t really being used, but rather the proof is using one side of f (as all f(i)) as a substitution for N^2 in order to falsely claim that only f is used, but then it takes base
2 digits from N^2 and maps them to base 1 digits (enumeration) from i to form K. Are we supposed to KNOW that this (original mapping used to define the power set) is the only mapping that can
exist for f?
If there is a mapping f, it won’t use a base 2 to base 1 relationship as this is g used when defining the power set (except that the mapping is used from N^2 to N instead of N to N^2). So by
taking digits of f(i), you are re-introducing mapping g used when defining the power set.
Suppose I give you a mapping where I tell you that it only works if you don’t map different bases (because this would re-introduce a mapping that is mutually exclusive with the mapping that I’m
giving you). So you can’t take digits of f(i) unless you also take digits of i in the SAME base.
How good is your proof now?
I see four things.
1. The mapping used to define the power set is assumed to be a bijection.
2. I don’t see the proof where taking digits does not create a new mapping.
3. How do you create K without a very specific mapping?
4. When you take digits of f(i), how do you know that f uses it this way? If it doesn’t, then the proof is meaningless as this would mean that f(i) isn’t actually being used, but rather that f is
used as a placeholder for N^2 (and not actually the mapping) where you then map N^2’s digits in your own way to N thereby eliminating f completely from the proof. IOW, there is nothing stating
that the mapping of digits in base 2 from N^2 to digits in base 1 in N is what f uses.
122. #122 Stephen Wells November 1, 2009
Vorlath, for the nth time: for any proposed 1:1 mapping from the integers to the reals, the diagonalisation produces a real not mapped to any integer. Therefore there is no such mapping. How can
this be problematic, and why are you talking about “G” and “N^2”?
123. #123 Pelli November 1, 2009
Vorlath, I am not looking at the digits of the function f, whatever that is – I am looking at the digits of the elements!
” Suppose you can. Then I let you do it. I take your infinite sheet of paper, that looks like
I will tell you that this is no longer the mapping I gave you and you are cheating.”
Do you not agree that if you have a function f that maps from naturals to strings, then you can write it down uniquely like this:
1: ABABBBABAB….
2: ABBBABAB….
(Just like the function f(x) = x^2 on naturals can be written “1: 1, 2: 4, 3: 9, …”.)
If I’m not allowed to write down the values taken by your function, then what can I do?
And this time I wasn’t even talking about a function. I wanted to prove “that you cannot write down all infinite A,B-strings row by row on an infinite sheet of paper”. The adversary claims he can
do it and does it, but as soon as I touch the paper I’m cheating?!
124. #124 Douglas McClean November 1, 2009
In your allegedly analogous “proof”, you wrote:
1. Assume there exists a function F that produces naturals other than 0 or 1.
2. If F exists, then I can take F/F and get 1 every time. Note that I’m ONLY USING F.
3. 1 cannot be any other natural number, so contradiction.
4. Conclusion: F cannot exist.
What, exactly, is 3 supposed to contradict? None of your premises claims that F(x)/F(x) won’t equal 1 for any or all x; your premise is only that F(x) won’t equal 1 for any x. This, I’m sure you’ll agree, is not the same thing.
125. #125 Pelli November 1, 2009
Here’s another (rather contrived) example of a similar proof:
Proof that there is no bijection between {0,1} and {0,1,2}:
Let any f from {0,1} to {0,1,2} be given. If f(0) = f(1), then f is not injective and we are done. Otherwise consider the number k = 3 - f(0) - f(1). We check that k lies in {0,1,2} and differs from both f(0) and f(1), so f is not onto and hence not a bijection.
Vorlath, do you think your argument that we’re ‘using some other map’ instead of f applies here too? What would that map be in this case?
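Pelli’s toy proof is small enough to verify exhaustively. The check below is my own sketch: for every injective f, the witness k = 3 - f(0) - f(1) is a codomain element f misses, and non-injective f’s fail to be bijections on their own.

```python
from itertools import product

CODOMAIN = (0, 1, 2)

# All 9 functions f from {0,1} to {0,1,2}, encoded as pairs (f(0), f(1)):
for f0, f1 in product(CODOMAIN, repeat=2):
    if f0 == f1:
        continue          # f is not injective, hence not a bijection
    k = 3 - f0 - f1
    # k is in the codomain and is hit by neither f(0) nor f(1):
    assert k in CODOMAIN and k != f0 and k != f1
```

So none of the nine functions is onto, exactly as counting predicts, and the witness k is built from nothing but f itself.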
126. #126 Vorlath November 1, 2009
@122 Stephen:
for any proposed 1:1 mapping from the integers to the reals, the diagonalisation produces a real not mapped to any integer.
For any proposed 1:1 mapping, you can ditch said mapping and use the power set mapping to show that you can create a new number. However, it does not mean that this new number doesn’t have a
mapping in f.
@123 Pelli:
Do you not agree that if you have a function f that maps from naturals to strings, then you can write it down uniquely likes this:
1: ABABBBABAB….
2: ABBBABAB….
A million times no. This is the power set mapping. It’s not f by definition.
(Just like the function f(x) = x^2 on naturals can be written
Yeah, but you’re using the same base here.
Take two sets where all elements are represented by infinite digits. Pretend we’re mapping R to R or something like that.
We know for certain that base 2 R maps one to one with base 3 R.
Now have the left side use base 2 and the right side use base 3. Write exactly the same digits on both side. No matter what, you can always create a new number on the right side by switching any
digit to 2. All digits on the right have covered all combinations possible on the left. So this means you can always have more numbers on the right that aren’t mapped.
I can also ask people for their own custom mappings. I will swap out all numbers on the right that use 2 in a digit with a number that does not use a 2. There should be no issue here as I can
cover the exact same amount of numbers as on the left using only two symbols per digit (as that is all the left side uses). When I’m done, I will again have only numbers on the right that use
digits 0 and 1. This will leave all numbers with 2 as a digit being unmapped.
Does this mean that |R| > |R|? No. But it does prove my point that using different bases creates a different mapping that is not f. In my proof, I showed the one to one mapping and then proceeded
to show a contradiction where one side did not map completely to the other side.
How did this happen? Where was the flaw? Whatever you answer is exactly how Cantor’s argument is flawed. The snag is that people get sideswiped by the use of naturals.
If I’m not allowed to write down the values taken by your function, then what can I do?
You can take the values (on both sides) as long as you don’t change the mapping f (and retain such a mapping). Make sure that f stays f without introducing an arbitrary mapping. If you don’t
change f and through existing properties of such a mapping it ends up being in contradiction with each other, then that would be a valid proof.
Right now, f is not used directly. Instead, a new mapping g is constructed which uses the output of f and changes how it maps the indexes to the elements of N^2 via digits.
Prove that the ONLY mapping that f can have, if it exists, is g (the power set mapping). If you did that, then you would also have a valid proof.
The adversary claims he can do it and does it, but as soon as I touch the paper I’m cheating?!
You’re cheating if you change the mapping I’m giving you. And all the proofs mentioned so far do in fact change the mapping. I can even tell you exactly what mapping you’re using. It’s the
original power set mapping that you’re asking if there exists ANOTHER mapping which is one to one. Of all the ironies, I find it amazing that all the proofs go ahead and use the original power
set mapping when they’re trying to find something different and are then up in arms when it’s discovered that f != power set mapping.
You do realize that a one to one mapping, if it exists, will be inconsistent with the power set mapping? You do understand that, right?
127. #127 Stephen Wells November 1, 2009
Vorlath, the construction of a number which is not covered by f is exactly and precisely a demonstration that the “new number” does not have a mapping in f. We construct a number which differs from every number covered by f in at least one digit. Ergo it’s not covered by f, and you are wrong.
We didn’t “ditch” f, we didn’t use a “power set mapping”, we didn’t switch from base 2 to 3 or n or anything. Try to grasp this before responding.
128. #128 Stephen Wells November 1, 2009
Incidentally, Vorlath, your argument about base 2/base 3 proves only that if you construct something that isn’t a bijection, it isn’t a bijection. Cantor’s proof does something entirely different
by showing that for _any_ claimed bijection between integers and reals, you can make a real that doesn’t match any integer. Is this the source of your problem?
129. #129 Vorlath November 1, 2009
@124 Douglas:
What, exactly, is 3 supposed to contradict? None of your premises claims that F(x) / F(x) won’t equal 1 for any or all x, your premise is only that F(x) won’t equal 1 for any x. This, I’m
sure you’ll agree, is not the same thing.
You, my friend, have just entered the crackpot zone because this is 100% my argument. This is precisely the flaw in Cantor’s argument.
F(x) != F(x)/F(x)
We agree, right?
We know that g is the power set mapping, so f cannot be g just like F(x) cannot be F(x)/F(x).
I would have to prove that F(x) is the same as F(x)/F(x). In the same manner, you must prove that f and the power set mapping are one and the same.
@128 Stephen:
Incidentally, Vorlath, your argument about base 2/base 3 proves only that if you construct something that isn’t a bijection, it isn’t a bijection.
Cantor’s proof does something entirely different by showing that for _any_ claimed bijection between integers and reals, you can make a real that doesn’t match any integer.
NO!!! It does no such thing. You only think it does.
Cantor’s argument uses a mapping that isn’t a bijection and then goes on to show that it isn’t a bijection.
130. #130 Pelli November 1, 2009
Vorlath, how do you represent an A,B-string if not by what it is, namely e.g. “ABABBABBA….”? It is true that the A,B-strings can be put in bijective correspondence with the subsets of N, but that
is no problem.
You’re talking about different bases, but functions don’t care about bases. If one of my sets is {cat, dog, horse} that does not mean a bijection between that and the set {1,2,3} has to be
written “cat <-> one, horse <-> two, dog <-> three” rather than “cat <-> 1, horse <-> 2, dog <-> 3”.
“This is the power set mapping. It’s not f by definition.”
As I said, an infinite string written down does not transform it into a power set and change a function.
“Take two sets where all elements are represented by infinite digits. Pretend we’re mapping R to R or something like that.”
“We know for certain that base 2 R maps one to one with base 3 R.”
“Now have the left side use base 2 and the right side use base 3. Write exactly the same digits on both side. No matter what, you can always create a new number on the right side by switching any
digit to 2. All digits on the right have covered all combinations possible on the left.
So this means you can always have more numbers on the right that aren’t mapped.”
Yes to all. What you are saying can be generalized to “if you write down a function that maps reals in base 2 to the numbers in base 3 with no 2s, then there are reals (on the right) left over.”
The proof fails because there are reals not in the form “base 3 without 2s”.
The corresponding argument against our proof would be: “You have only shown that for any function from N to 2^N that maps numbers to strings of the form 010111…, there are elements in 2^N left
over.” This is not a problem since every element in 2^N is exactly of the form 0101011…. Hence any function f from N to 2^N will indeed map numbers to strings of the form 01101…
Is this your objection? That we haven’t proven all elements of 2^N are of the form 0100101…? That’s true almost by definition!
” The adversary claims he can do it and does it, but as soon as I touch the paper I’m cheating?!
You’re cheating if you change the mapping I’m giving you.”
If I claim you cannot write down all A,B-strings row by row, then can I not safely assume if you do it your paper will contain rows and rows of ABABABAB….?
“You do realize that a one to one mapping, if it exists, will be inconsistent with the power set mapping? You do understand that, right?”
I don’t know what you mean by “the power set mapping”. What I can tell you is that if you find a bijection, it will be inconsistent with my definition of a function.
131. #131 Ivan November 1, 2009
Is this the source of your problem?
Vorlath clearly has many problems. Your problem is that you’re trying to understand his problems as though they were merely mathematical in nature.
132. #132 Vorlath November 1, 2009
@130 Pelli:
Vorlath, how do you represent an A,B-string if not by what it is, namely e.g. “ABABBABBA….”?
You can do that as long as the other side of the mapping is expressed in a similar manner.
You’re talking about different bases, but functions don’t care about bases.
So why are you using different bases if they don’t matter? Use the same base and prove me wrong. In all these discussions, no one is ever willing to use the same base.
We all know that bases matter to functions when said functions maps via digits. You’re mapping base 1 digits to base 2 digits. That makes all the difference in the world.
If one of my sets is {cat, dog, horse} that does not mean a bijection between that and the set {1,2,3} has to be written “cat <-> one, horse <-> two, dog <-> three” rather than “cat <-> 1, horse <-> 2, dog <-> 3”.
You’re matching by element, not digits.
As I said, an infinite string written down does not transform it into a power set and change a function.
Of course it does, otherwise I can write infinite strings in base 2 and the same strings in base 3 and then match them by digits and you’ll get a different mapping. I proved it in my comment.
The proof fails because there are reals not in the form “base 3 without 2s”.
Exactly right.
With your proof, there are reals not in the form “base 2 without 1s”.
Is this your objection? That we haven’t proven all elements of 2^N are of the form 0100101…? That’s true almost by definition!
No, no. Not that. I’m saying you haven’t proven that all elements of 2^N are of the form 11111111…. (base 1).
I don’t know what you mean by “the power set mapping”.
The power set mapping is what you used to define N^2 from N.
133. #133 Pelli November 1, 2009
“The power set mapping is what you used to define N^2 from N.”
N^2 is the set of integer pairs. Do you mean 2^N, the set of infinite strings 0110.., which is in bijection with PN, the power set of N? How do I use any map to define PN from N? I define PN = {A
: A is a subset of N}. There is no function.
” Vorlath, how do you represent an A,B-string if not by what it is, mainly e.g. “ABABBABBA….”?
You can do that as long as the other side of the mapping is expressed in a similar manner.”
When there is no function at all, and you are just looking at strings, don’t you agree A,B-strings are exactly the things that are “BABABA…”?
” You’re talking about different bases, but functions don’t care about bases.
So why are you using different bases if they don’t matter? Use the same base and prove me wrong. In all these discussions, no one is ever willing to use the same base.”
Because there is as little reason for there to exist a common basis (whatever that is) for N and 2^N as there should exist a common basis for {1,2,3} and {cat, dog, horse}.
“We all know that bases matter to functions when said functions maps via digits. You’re mapping base 1 digits to base 2 digits. That makes all the difference in the world.”
No. You can define functions by specifying what they do to digits, e.g. let f : N -> N be given by reversing the digits in base 10. The function doesn’t care about what bases you think of when
you look at its elements though, so for this f we have f(10000_binary) = f(16) = 61 = 111101_binary.
” If one of my sets is {cat, dog, horse} that does not mean a bijection between that and the set {1,2,3} has to be written “cat <-> one, horse <-> two, dog <-> three” rather than “cat <-> 1, horse <-> 2, dog <-> 3”.
You’re matching by element, not digits.”
Huh? I am matching natural numbers n with strings of the form ABAB…. Forcing them to be represented in the same way is like forcing cat and 1 to be represented in the same way when matched up.
” As I said, an infinite string written down does not transform it into a power set and change a function.
Of course it does, otherwise I can write infinite strings in base 2 and the same strings in base 3 and then match them by digits and you’ll get a different mapping. I proved it in my comment.”
No. What you proved is you can define _A_ function from R to R by mapping x to the real whose base 3 rep is the base 2 rep of x. An example of another function from R to R is f(x) = x^2. And as I
said above, feeding the function numbers that you in your mind consider to be in some other basis does not change what the function does.
” The proof fails because there are reals not in the form “base 3 without 2s”.
Exactly right. With your proof, there are reals not in the form “base 2 without 1s”.”
No, “base 3 without 2s” is on the RIGHT side, not on the LEFT side (base 2). My right side is “ABABA….”, not the left side “natural numbers / base 1”.
“I’m saying you haven’t proven that all elements of 2^N are of the form 11111111…. (base 1).”
Wrong side.
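Pelli’s point, that reading base-2 digit strings as base-3 numbers defines *a* function, just not a surjective one, has a finite analogue on the naturals that is easy to run. A hypothetical illustration (the function name is mine):

```python
def base2_digits_as_base3(n):
    """Read the base-2 digit string of n, then reinterpret that same
    digit string in base 3.  E.g. 5 = 101 in base 2, and "101" read
    in base 3 is 9 + 0 + 1 = 10."""
    return int(bin(n)[2:], 3)

assert base2_digits_as_base3(5) == 10
# The map is injective (distinct inputs give distinct digit strings)
# but not surjective: no output ever contains the digit 2 in base 3,
# so e.g. the number 2 itself is never hit.  That shows nothing about
# the "sizes" of the two copies of N -- only that THIS map isn't onto.
outs = {base2_digits_as_base3(n) for n in range(200)}
assert len(outs) == 200 and 2 not in outs
```

Exhibiting one non-surjective map between two sets never proves anything about other maps; the force of the diagonal argument is that it defeats every proposed map, not one.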
134. #134 Pelli November 1, 2009
By the way, a “basis” says what number a string is, right? E.g. in base 2, we know “ab.cde” is 2a+b+c/2+d/4+e/8, whereas in base 3 it is 3a+b+c/3+d/9+e/27. Lo and behold, a basis maps strings of
digits to real numbers! If you demand a basis to exist before you talk about functions between strings and numbers, then you will get in trouble because a basis is exactly a function between
strings and numbers.
You need to realise a function f is just something that takes elements x of a set to elements f(x) of another set, which is formalised by saying a function is a set of pairs (x,f(x)). Since
elements don’t care about bases (the number 2 is the same even if you denote it by “cow”), functions don’t care about bases. A function mapping integers to their base 10 reversed forms is just a
collection of pairs {(cow,cat), (dog, horse), (horse, dog), (lamb, lamb), …} where I have denoted the numbers 13 by cow, 31 by cat, 597 by dog, 795 by horse, 11 by lamb, etc. No basis is required
for the function to exist, and no basis will affect this specific function. If you replace base 10 in the definition of f by base 11, you get ANOTHER function, just as replacing base 2 by base 3
in the definition of “101 in base 2” (= 5) will result in the different number “101 in base 3” (= 10).
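The digit-reversal example above is easy to make executable. A quick sketch (the function name is mine) showing that the function, once defined, is indifferent to the notation you feed it in:

```python
def reverse_base10(n):
    """Reverse an integer's base-10 digits.  Defined VIA digits, but
    once defined it is just a set of (input, output) pairs with no
    notation attached to the numbers themselves."""
    return int(str(n)[::-1])

assert reverse_base10(16) == 61
# Writing the argument "in binary" changes nothing: 0b10000 IS the
# number 16, so the function still returns 61 (which is 0b111101).
assert reverse_base10(0b10000) == 61
assert bin(61) == '0b111101'
```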
135. #135 Douglas McClean November 1, 2009
Vorlath wrote:
You, my friend, have just entered the crackpot zone because this is 100% my argument. This is precisely the flaw in Cantor’s argument.
F(x) != F(x)/F(x)
We agree, right?
We know that g is the power set mapping, so f cannot be g just like F(x) cannot be F(x)/F(x).
I agree that F(x) may not necessarily equal F(x)/F(x) (for many choices of F and x).
I have no idea what you are claiming that this has to do with anything. Or what g is. Or how we know it to be the power set mapping. Or why it is “just like” how F(x) can’t be F(x)/F(x) (there
are plenty of other problems with your claimed analogy).
You’re wrong, Vorlath. It’s really simple. Ignore the reals and answer Pelli’s questions about the binary strings. All your ramblings about bases and how defining this or that construction
“changes” some other sets is just that, nonsensical rambling.
136. #136 Vorlath November 1, 2009
@133 Pelli:
Do you mean 2^N, the set of infinite strings 0110..
I define PN = {A : A is a subset of N}. There is no function.
Not fully, but since you mention that A is a subset, then we know that an element from N is either included or not included. That’s base 2. So we know that this is not one to one with base 1
when restricted to the same amount of digits.
So why are you using different bases if they don’t matter?
Because there is as little reason for there to exist a common basis (whatever that is) for N and 2^N as there should exist a common basis for {1,2,3} and {cat, dog, horse}.
Sure there is. If you don’t then you get a different mapping as I’ve proven in my earlier comment.
No. You can define functions by specifying what they do to digits, e.g. let f : N -> N be given by reversing the digits in base 10.
I agree, but you’re using a single base, 10. My point is that when you use different bases, then you get things like 011 in binary is not the same as 011 in octal.
so for this f we have f(10000_binary) = f(16) = 61 = 111101_binary.
Here, you are either mapping the same base (base 2 or base 10) or you are allowing the different bases to have different amounts of digits. Again, I agree with your examples, but they reinforce
the point I was making.
Huh? I am matching naturals numbers n with strings of the form ABAB… .
Illusion only. You don’t have enough symbols to cover the full range and it amounts to one digit with three symbols in all cases.
Forcing them to be represented in the same way is like forcing cat and 1 to be represented in the same way when matched up.
Same base just means same number of symbols per digit. Each set can use its own set of symbols as long as the amount is the same per digit. If aliens were to use 10 hieroglyphs per digit, it’s
still base 10.
What you proved is you can define _A_ function from R to R by mapping x to the real whose base 3 rep is the base 2 rep of x.
YES!!! THIS is the flaw in Cantor’s argument. Right there. Your argument, apply it to Cantor’s argument.
You can map them with the proper base 3 rep and I can swap those elements one for one with elements that don’t have 2’s for digits. So your argument doesn’t hold water. Besides, do you not agree that if the base 3 rep doesn’t use any 2’s, that it can map to all elements with base 2 rep when matching digit by digit? So there’s no problem swapping out all the elements that use 2’s.
With Cantor’s grid, I can do the same thing. I can swap out any element that doesn’t have the form 111111000… (all ones at the start and all zeros at the end) with an element that does (and is
not already in the list). This means we can reorder the list so that the 1 count of any entry will match the row count. However, there are elements that are not in the form “11111000…” just as
you’ve said earlier.
How come this makes my proof invalid, but makes Cantor’s valid?
An example of another function from R to R is f(x) = x^2. And as I said above, feeding the function numbers that you in your mind consider to be in some other basis does not change what the
function does.
f(x) = x^2 doesn’t use digits. So it’s not a valid example.
@135 Douglas:
I agree that F(x) may not necessarily equal F(x)/F(x) (for many choices of F and x).
Not necessarily? What number, other than 0, divided by itself will give an answer that is not 1?
Or how we know it to be the power set mapping.
A list of infinite binary strings is the same as the list of elements in a power set, no?
Or why it is “just like” how F(x) can’t be F(x)/F(x)
We know that F(x) is not F(x)/F(x), just like we know that mapping g, which is a list of infinite binary digits, does not map one to one with a list of infinite singular digits when using the same amount of digits. This means we know that g != f. So Cantor uses _A_ mapping that is not f. Just like I was using _A_ function F(x)/F(x) that is not F(x).
137. #137 Rilke's Granddaughter November 1, 2009
Does anybody have any clear idea what Vorlath means when he talks about ‘bases’? They don’t appear to have [i]anything[/i] to do with the discussion.
138. #138 Rilke's Granddaughter November 1, 2009
Vorlath, you made a strange comment:
Pelli: I define PN = {A : A is a subset of N}. There is no function.
Vorlath: Not fully, but since you mention that A is a subset, then we know that an element from N is either included or not included. That’s base 2. So we know that this is not one to one with base 1 when restricted to the same amount of digits.
That’s silly. He’s not defining a function; therefore there is no function. To claim that it’s a case of “not fully” makes no sense. Are you somehow claiming that every definition of a set is
“not fully” a function?
139. #139 Douglas McClean November 1, 2009
@135 Douglas:
I agree that F(x) may not necessarily equal F(x)/F(x) (for many choices of F and x).
Not necessarily? What number divided by itself and is not 0 will give an answer that is not 1?
The reason I said “not necessarily” is because you didn’t universally quantify over both F and x. For example, let F be the function that always returns 1. Then, for all x, F(x) = F(x)/F(x) = 1.
None of this has ANYTHING to do with the topic, because your analogy isn’t an analogy.
You also wrote:
With Cantor’s grid, I can do the same thing. I can swap out any element that doesn’t have the form 111111000… (all ones at the start and all zeros at the end) with an element that does (and
is not already in the list). This means we can reorder the list so that the 1 count of any entry will match the row count. However, there are elements that are not in the form “11111000…”
just as you’ve said earlier.
How come this makes my proof invalid, but makes Cantor’s valid?
Light dawning? You’re absolutely right, there is a bijection between naturals and binary strings of the form “some number of 1s followed by infinite 0s”. As you noted, there are lots of binary
strings that aren’t included. It doesn’t invalidate Cantor’s proof because he is TRYING to prove that some binary strings aren’t included!
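Douglas’s observation can be made concrete with a small sketch (hypothetical code; the infinite tails of 0s are truncated to a fixed width for display): the map n ↦ “n ones then zeros” is injective, but its image omits strings like 0101…, and exhibiting such an omitted string is exactly what Cantor’s proof sets out to do.

```python
# n maps to the string of n ones followed by zeros; the infinite tail of
# zeros is truncated to `width` characters for display purposes only.
def ones_then_zeros(n, width=10):
    return "1" * n + "0" * (width - n)

listing = [ones_then_zeros(n) for n in range(11)]
assert len(set(listing)) == len(listing)   # injective: all entries distinct
assert "1110000000" in listing             # e.g. n = 3 is reached
assert "0101010101" not in listing         # but an alternating string is
                                           # never of this form
```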
140. #140 Vorlath November 2, 2009
@139 Douglas:
You’re absolutely right, there is a bijection between naturals and binary strings of the form “some number of 1s followed by infinite 0s”. As you noted, there are lots of binary strings that
aren’t included. It doesn’t invalidate Cantor’s proof because he is TRYING to prove that some binary strings aren’t included!
You didn’t answer the question. Why is it ok to say |base 2| > |base 1| but it’s not ok to say |base 3| > |base 2| when exactly the same technique is used? There are lots of base 3 strings that aren’t included in that “proof” either.
Pelli said earlier that my proof was flawed because there were numbers not in the form of not using 2’s. Here you have numbers not in the form “111110000….”. Where is the difference? How can you use the same statement to validate one proof and dismiss the other?
@138 Rilke’s Granddaughter:
That’s silly. He’s not defining a function; therefore there is no function. To claim that it’s a case of “not fully” makes no sense. Are you somehow claiming that every definition of a set is
“not fully” a function?
But we can determine some features of the function even if it’s not stated directly. For example, allowing an element of N to either be included or not included tells us this is base 2. So if you
try to enumerate this list using the same binary digits, you will have a mapping between base 1 and base 2. This is known to not be a one to one mapping when using equal amounts of digits, but
it’s this very mapping that is used in the “proof”.
141. #141 Rilke's Granddaughter November 2, 2009
But we can determine some features of the function even if it’s not stated directly. For example, allowing an element of N to either be included or not included tells us this is base 2. So if
you try to enumerate this list using the same binary digits, you will have a mapping between base 1 and base 2. This is known to not be a one to one mapping when using equal amounts of
digits, but it’s this very mapping that is used in the “proof”.
There IS NO FUNCTION. There is a set, and a definition of a set.
Answer this question: are you claiming that every definition of a set is “not fully” a function, but is some kind of function?
142. #142 Vorlath November 2, 2009
Answer this question: are you claiming that every definition of a set is “not fully” a function, but is some kind of function?
I don’t know what those other definitions would be.
All I’m saying is that when you enumerate your set and you use base 2, then you create _A_ mapping when you match digits. There is enough there to know certain properties that this mapping has.
Such as not being a one to one mapping.
143. #143 Pelli November 2, 2009
“Pelli said earlier that my proof was flawed because there were numbers not in the form of not using 2’s. Here you have numbers not in the form “111110000….”. Where is the difference? How can you use the same statement to validate one proof and dismiss the other?”
Where your 3 > 2 proof fails is that you say f : A -> B, and then you assume f maps from A to elements that do not fill B. Your argument against my proof is that I don’t care that the elements of B do not fit nicely into A. That is not the same thing. Do you agree?
(Your map is from base 2 to base 3 without 2s. You fail because base 3 without 2s is not base 3. My map is from n to ABAB… strings. I do not fail, since ABAB… strings are just ABAB… strings. There is no reason for ABAB… strings on the right to have the same basis as n on the left, just as a map between animals and numbers does not force the numbers to be expressed as animals.)
“Not fully, but since you mention that A is a subset, then we know that an element from N is either included or not included. That’s base 2. So we know that this is not one to one with base 1 when restricted to the same amount of digits.”
True. The set of finite A,B-strings of length n is greater than the set of finite A-strings of length n. Assuming this argument can be taken to infinity is the fallacy of composition.
” No. You can define functions by specifying what they do to digits, e.g. let f : N -> N be given by reversing the digits in base 10.
I agree, but you’re using a single base, 10. My point is that when you use different bases, then you get things like 011 in binary is not the same as 011 in octal.”
There is a difference between defining a function in terms of bases and looking at what a function’s elements are in different bases. Suppose you say “f(x) = y, where y is obtained by reading the base 2 expansion of x as base 3”. Then the function is defined in terms of a basis. Changing the bases in the definition will probably change the function. Changing what basis you think about
when you give the function a number will not affect it, just as f(x) = x^2 does not care about bases.
” so for this f we have f(10000_binary) = f(16) = 61 = 111101_binary.
Here, you are either mapping the same base (base 2 or base 10) or you are allowing the different bases to have different amounts of digits. Again, I agree with your examples, but they reinforce
the point I was making.”
The number of digits is irrelevant. A number is the same no matter what string of digits or other symbols you use to denote it.
” Huh? I am matching naturals numbers n with strings of the form ABAB… .
Illusion only. You don’t have enough symbols to cover the full range and it amounts to one digit with three symbols in all cases.”
Are you talking about my bijection between the set {1,2,3} and three animals? Talking about symbol representation here is just stupid! The function is the same even if I write the numbers in some
alien language and the animals in Hindi!
” Forcing them to be represented in the same way is like forcing cat and 1 to be represented in the same way when matched up.
Same base just means same number of symbols per digit. Each set can use its own set of symbols as long as the amount is the same per digit.”
OK. But isn’t f(1) = cat, etc a bijection, even though I haven’t written 1,2,3 in a basis with 26 symbols or “cat”, etc in a basis with 10 symbols?
“If aliens were to use 10 hieroglyphs per digit, it’s still base 10.”
” What you proved is you can define _A_ function from R to R by mapping x to the real whose base 3 rep is the base 2 rep of x.
YES!!! THIS is the flaw in Cantor’s argument. Right there. Your argument, apply it to Cantor’s argument.”
No. You assume every element from R is an element in R base 3 without 2, which is false. Cantor’s assumption that every element in 2^N is of the form 010101.. is true.
“You can map them with the proper base 3 rep and I can swap those elements one for one with elements that don’t have 2’s for digits. So your argument doesn’t hold water. Besides, do you not agree that if the base 3 rep doesn’t use any 2’s, that it can map to all elements with base 2 rep when matching digit by digit? So there’s no problem swapping out all the elements that use 2’s.”
I don’t understand what you’re trying to say. There is indeed almost (because base 2 expansion is not unique) a bijection between reals and reals that have no 2 in base 3, by defining f(x) to be
x in base 2 read as base 3.
“With Cantor’s grid, I can do the same thing. I can swap out any element that doesn’t have the form 111111000… (all ones at the start and all zeros at the end) with an element that does (and is
not already in the list).”
No. Here’s a list where you can’t (unless you swap rows around, in which case all you’re saying is “I can replace the first element by 1000.., the second element by 1100.., etc which is clearly
1: 1010101010….
2. 1111111111…. (just in case that’s a valid 111000… string)
3: 000…
4: 1000….
5: 11000…
6: 111000…
7: 1111000…
“This means we can reorder the list so that the 1 count of any entry will match the row count.”
“However, there are elements that are not in the form “11111000…” just as you’ve said earlier.”
Do you mean there are elements in 2^N missed by your list? Then yes.
“How come this makes my proof invalid, but makes Cantor’s valid?”
Because you’re swapping around elements. Cantor is just saying “look at the 0101.. representation of f(k)”. That does not alter f.
If you can swap elements, then you get things like “There is no bijection from N to N. Given any such, replacing every f(k) by f(k)+1 won’t change bijectiveness, but then no f(n) is equal to 1.”
Cantor is saying “Writing all the elements of 2^N as 0101… does not change bijectiveness”, and indeed it doesn’t, because functions don’t care about bases.
“f(x) = x^2 doesn’t use digits. So it’s not a valid example.”
Sure it is. The function f(x)=x^2 cares about the basis of your input just as much as the read base 2 as base 3 function. If it helps, think of the function as “translate x to base 2. then read
as base 3.” No matter what basis you give the number in, it will be translated to base 2 in the first step.
144. #144 Pelli November 2, 2009
A little clarification. My argument against the generality of the “read base 2 as base 3” map is not that there are base 3 strings not in base 2. It is that the image of your map is still numbers with only 0s and 1s.
Similarly, suppose you take base 2, double each digit, and read as base 3. Then exactly the same argument says you’ll miss base 3 numbers with 1s.
Or if you take base 2, and replace every other 1 with a 2, you will miss all numbers whose base 3 rep does not have 1 and 2 alternating.
If you take base 3, and replace all the 2s (and 1s) with 1 and read as base 3, you’ll once again miss the numbers with 2s in their base 3 rep.
Do the same with base 10 in the domain, and you’ll still miss the same numbers, even though 10 has more than enough digits.
Take base 2 and insert 0s between every digit and read as base 2. You still miss things, and this map is injective. (If you try to say you get double the number of digits and take that to infinity then you’re compositioning again.)
Conclusion: My argument has nothing to do with the basis of the domain of the function not working well with the image.
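Pelli’s clarification can be illustrated with a finite sketch (restricted to natural numbers, an assumption of the illustration, not part of the thread): the “read base 2 as base 3” map is injective, yet its image contains only numbers whose base-3 digits are 0s and 1s.

```python
def read_base2_as_base3(n):
    # write n in base 2, then reinterpret that digit string as base 3
    return int(bin(n)[2:], 3)

image = {read_base2_as_base3(n) for n in range(512)}
assert len(image) == 512   # injective on this range: no collisions
assert 2 not in image      # base-3 digits of 2 are "2"
assert 5 not in image      # base-3 digits of 5 are "12"
# every missed number has a 2 somewhere in its base-3 expansion; that
# says nothing about |base 2| vs |base 3|, only about THIS particular map
```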
145. #145 Pelli November 2, 2009
Sorry for the multiple posts, but:
A fallacious argument for base 3 > base 2 would go like this:
Let f : base 2 -> base 3 be given. Then f maps to numbers whose base 3 rep contains only 0s and 2s (no, base 3 contains more than that). Considering 1, f is not a bijection. (I picked 2s instead
of 1s to show there is no link to the basis of the domain.)
The Cantor argument says:
Let f : N -> 2^N be given. Then f maps to string of the form 0101… (true, 2^N contains exactly these strings). Diagonal, contradiction.
146. #146 Stephen Wells November 2, 2009
Vorlath, you so nearly got it in your post 129! You complain that “Cantor’s proof uses a function that isn’t a bijection and then shows that it isn’t a bijection”. That’s exactly what the proof
does; it shows that for f which somebody comes up to you and claims is a bijection, you can show (using the diagonal argument) that f is NOT a bijection by defining a real which isn’t covered by
f. We didn’t have to know anything about the specifics of f, so it’s valid for all proposed f.
THIS IS A FEATURE, NOT A BUG. Capisce?
Your complaints about base 2/base 3 representations are a red herring; all you’ve shown is that ONE proposed G, of your own devising, is not a bijection. That proves nothing about different cardinalities. If I propose to map rationals to reals by associating every rational P with the real (Q=P), I miss some reals, in a very similar way to how your proposed (real base 2/real base 3) mapping misses some reals in base 3. But there are OTHER ways of mapping (real, base 2) to (real, base 3) which ARE bijections, so your argument about base 2/base 3 fails. There is no way, however subtle, of mapping rationals to reals with a bijection, which is what Cantor proved.
Ho hum.
147. #147 Robert November 2, 2009
What I think some people fail to realize (perhaps this has been mentioned before in one of the hundreds of posts above though) is that a digital (binary, ternary, octal, whatever) representation of a number is not necessarily unique.
In decimal:
0.09999…. = 0.100000
This non-uniqueness occurs only for those numbers which have a finite representation in a certain base. The infinite string of zeros can also be seen as an infinite string of 9’s (the base minus 1).
Cantor’s diagonalisation works around this problem by creating a number of only 4’s and 5’s. Doing this, you know that the created number has a unique digital representation, and that this representation can not be found in your enumeration of reals. In base two you cannot choose a digit between 0 and 1 and the process doesn’t work (although there are ways to work around this).
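Robert’s point about non-unique representations can be checked with exact rational arithmetic (a sketch; the 40-digit cutoff is arbitrary): the partial sums of 0.0999… differ from 0.1 by a tail that shrinks to zero, so the two infinite digit strings denote the same number, and choosing diagonal digits from {4, 5} sidesteps the ambiguity entirely.

```python
from fractions import Fraction as F

# 0.0999... = 9/100 + 9/1000 + ...; compare partial sums with 1/10
partial = F(0)
for k in range(2, 40):
    partial += F(9, 10 ** k)

gap = F(1, 10) - partial
assert gap == F(1, 10 ** 39)   # exactly the unwritten tail of the series
assert gap < F(1, 10 ** 38)    # and it shrinks to 0 as digits are added
```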
Given an enumeration, and using base 10, you find one number not in your enumeration. Using base 8 you would find another, etc… In fact there are infinitely many numbers not in your enumeration; Cantor’s proof just gives you a specific way of finding one. He does not claim that, given any enumeration f, there is a unique number which is not in the enumeration; he shows there is at least one.
148. #148 Vorlath November 2, 2009
The argument has degenerated into “Cantor is trying to show that numbers are missing” while the argument against base 2 vs. base 3 is “You have numbers missing.” Or the alternate, but equivalent, claim is that I’ve used _A_ particular mapping when Cantor did not, yet I used HIS technique. It’s identical.
If none of you can explain why the exact same argument validates one proof, but not the other, this discussion is over.
149. #149 Pelli November 2, 2009
Vorlath, as I have said, it’s different if the numbers that should be on the right side are missing from the RIGHT side (in the case you are supposed to map to all of base 3 but only use base 3
without 2) or “missing” from the LEFT side (in the case I map to strings of the form ABAB… and all I assume is that the strings I map to are of the form ABAB…, without bothering about what basis
the integers on the left side use since that is irrelevant).
150. #150 Stephen Wells November 2, 2009
Vorlath, it’s quite simple. You brought to the table _one specific_ proposed mapping between reals base 2 and reals base 3. We have established that this mapping is not a bijection. This,
incidentally, shows that the reals can be mapped to a subset of the reals, which we knew, so yay consistency.
Cantor’s diagonal argument shows that for ANY proposed mapping between the rationals and the reals there is at least one real which isn’t mapped to any rational. The argument is completely
general and does not rely on any detail about any one particular proposed mapping.
It’s the difference between “This car is not working” and “no car can drive to the moon”.
So no, your arguments about bases and Cantor’s argument about cardinality are not remotely identical.
Get it?
151. #151 g November 2, 2009
Vorlath, I think this discussion has become a bit bogged down in side-issues. Let’s briefly try again from scratch and try to understand exactly where the divergence between your view and the
mainstream begins. I’ll go through the steps in Cantor’s proof, subdividing as finely as seems reasonable (so what follows is long; sorry), and I’d like to know at what point you think I first
say something false, and exactly why it’s false given that everything before it is true.
(I understand that your objection is a “high-level” one to the effect that it’s some kind of category error to combine “base-1-ish” and “base-something-else” things, or that somehow Cantor is
treating infinite sets as finite, or that the proof doesn’t take into account how some sets are “dependent” on one another. But if there’s such a high-level error in the argument then it must
show up as a false statement in the sequence that follows; I think it will be easier to understand your objection when we have a concrete example of an inference that you think is broken.)
1. Cantor claims: There is no bijection between the positive integers and the real numbers.
2. This will be true provided there is no *surjection* from the positive integers to the real numbers.
3. In other words, provided that there is no function f from N to R such that everything in R is f(n) for some n. (I am defining N to be the set of positive integers, and R to be the set of real numbers.)
4. So it suffices to do the following. (a) Let f be any function from N to R. (b) Describe a procedure that, given such a function f, produces a real number. (c) Show that, whatever f was, this
number is not equal to f(n) for any n. (In other words, whatever f was, it was not in fact a surjection, let alone a bijection.)
5. OK, so let’s assume there is a function f from N to R such that everything in R is f(n) for some n.
6. For each n, f(n) is a real number, and can be written in decimal notation. Some numbers have more than one decimal representation (e.g., 0.3 = 0.29999…); when f(n) is one of these, choose the
one that ends with an infinite string of 0s rather than an infinite string of 1s. Define d(n,k) to be the k’th digit of f(n) when written out in this way.
7. (For clarification: let’s say that d(n,0) is the units digit, d(n,1) the 0.1 digit, etc. Of course k can be negative too, though it happens that we won’t bother looking at any d(n,k) for k<0.)
8. We're now ready for step (b) from the list above: describing a new number which will turn out not to equal f(n) for any n. I'm going to call the number x, and I'm going to describe it by
giving its decimal digits.
9. I'm free to define this number however I want. Of course, if I define it stupidly then step (c) -- where I try to prove that it doesn't equal any f(n) -- may fail. So, what shall I make its
9. All the digits before the decimal point are going to be 0. That is, when k < = 0, digit k is 0.
10. If k>0, then look at d(k,k), the k’th digit of f(k). If this is any of 0,1,2,3,4 then digit k of x will be 7. Otherwise, digit k of x will be 2.
11. So I have described a number x. All its digits before the decimal point are 0. All its digits after the decimal point are 2 or 7.
12. In particular, it doesn’t end with an infinite string of 0s or 9s, so its decimal representation is not ambiguous.
13. So if for any n it equals f(n), then its decimal digits are given by d(n,k).
14. But digit n of x differs from d(n,n) by at least 3, by the construction in step 10. In particular, it does not equal d(n,n), the n’th digit of f(n).
15. Therefore, x has only one decimal representation (by step 12) which doesn’t equal that of f(n) (by step 14).
16. Therefore, x is not equal to f(n).
17. Steps 13-16 apply for all n, so x does not equal any f(n).
18. (Note that the definition of x doesn’t depend on what n is. First, we defined all the digits of x; then, we said “choose any n and look at the n’th digit”.)
19. Since x does not equal any f(n), f was not in fact a surjection.
20. Note that the above construction applies no matter what f was; whatever f might be, we can define x using steps 8-11 above, and then by steps 12-17 we find that x is not equal to f(n) for any n.
21. So *every* f from N to R turns out not to be a surjection.
22. In other words, there is no surjection from N to R.
23. In particular, there is no bijection from N to R (since every bijection is a surjection).
24. Therefore N and R are not of equal cardinality.
I know you think statement 24 is wrong. What’s the first statement in that list that you think is wrong?
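The digit construction in steps 8-17 can be sketched in a few lines of Python (the enumeration d(n, k) below is a hypothetical stand-in for whatever f someone proposes; the point is that the check in step 14 holds no matter what the digits are):

```python
def x_digit(d_kk):
    # step 10: digit k of x is 7 if d(k,k) is in 0..4, otherwise 2
    return 7 if d_kk <= 4 else 2

# step 14: whatever the digit d(k,k) is, x's k'th digit differs from it
# by at least 3, so x cannot share a decimal expansion with any f(k)
for d_kk in range(10):
    assert abs(x_digit(d_kk) - d_kk) >= 3

# a hypothetical enumeration: d(n, k) = k'th digit of f(n); any rule works
def d(n, k):
    return (3 * n + 7 * k) % 10

for k in range(1, 200):
    assert x_digit(d(k, k)) != d(k, k)   # x differs from f(k) at digit k
```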
* * * * * * * * * *
Incidentally, although Cantor’s argument is usually expressed in terms of digits (in binary, base 2, or whatever), one can do essentially the same thing without any mention of digits. Like this,
for instance:
1. As above, we’ll let f be any function from N to R, describe a real number x in terms of f, and show that x doesn’t equal any f(n).
2. The first step in defining x is to construct a sequence of intervals on the real number line. We start with I(0), which we define to be [0,1], the closed interval whose endpoints are 0 and 1.
3. Now for n=1,2,3,… in turn, we construct intervals I(n). The idea is to make sure that I(n) (a) is contained in I(n-1) and (b) stays away from f(n).
4. So. Suppose the length of I(n) is L(n). (Note that L(0) is positive; I shall define later intervals so as to make sure that L(n) is always positive.) Let J(n) be an interval of length L(n-1)/4, centred on f(n).
5. The difference I(n-1) \ J(n) — that is, all numbers that are in I(n-1) but not in J(n) — consists of either one interval or two, of total length at least 3/4 L(n-1).
6. In particular, it contains an interval of length at least 3/8 L(n-1). (The whole thing, if it’s a single interval; or the larger half; let’s choose the left half if they are of equal length.)
7. So it contains a *closed* interval — one that includes its endpoints — of length at least 1/4 L(n-1). (Move each endpoint in by 1/16 L(n-1).)
8. This interval is what we shall call I(n).
9. I(n) is a closed interval of positive length.
10. I(n) is contained in I(n-1).
11. I(n) does not contain f(n).
12. As n runs through the positive integers, the left-hand endpoints of the I(n) never decrease and the right-hand endpoints never increase. (By 10 above.)
13. The left-hand and right-hand endpoints never meet or cross over. (By 9 above. Their limits as n->oo may coincide, though.)
14. Therefore, there is at least one number that is >= all the left-hand endpoints and <= all the right-hand endpoints. (For instance, the smallest number that's >= all the left-hand endpoints will do.)
15. This number does not equal any f(n), because it’s in all the I(n) and f(n) is not contained in any I(n).
16. Therefore, whatever f was, there is a number that doesn’t equal any f(n).
17. Therefore, whatever f was, it wasn’t a surjection, hence not a bijection; hence N and R are not of equal cardinality. (As before.)
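The interval version can also be sketched with exact rational arithmetic (a simplified variant of steps 2-11: instead of the quarter-length J(n) centred on f(n), it keeps an outer third of the current interval whose closure avoids f(n); the nesting and avoidance properties are the same):

```python
from fractions import Fraction as F

def shrink_away_from(lo, hi, y):
    """Return a closed subinterval of [lo, hi] with positive length
    that does not contain y: y cannot lie in both outer thirds."""
    a = lo + (hi - lo) / 3
    b = lo + 2 * (hi - lo) / 3
    return (b, hi) if lo <= y <= a else (lo, a)

# a hypothetical enumeration f(n) = 1/n; any sequence of reals would do
lo, hi = F(0), F(1)
for n in range(1, 60):
    lo, hi = shrink_away_from(lo, hi, F(1, n))
    assert lo < hi                      # intervals stay non-degenerate
    assert not (lo <= F(1, n) <= hi)    # f(n) has been excluded

# by nesting, every f(1)..f(59) misses the final interval, so any point
# of it is a real that this initial segment of the enumeration never hits
```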
152. #152 Vorlath November 2, 2009
@150 Stephen:
So no, your arguments about bases and Cantor’s argument about cardinality are not remotely identical.
Get it?
I used Cantor’s argument, but used base 2 and base 3 instead of base 2 and base 1. It’s the exact same proof so anything you say against one must also be against the other. You can’t play on both
sides of the fence. If you say I built a custom mapping, then so did Cantor. It’s the EXACT same proof with the exact same construction. Only thing that’s different is how I build the new number,
but I’ve shown how you can build a new number using the exact same method using Cantor’s setup.
@151 g:
I’ve seen all that a million times over. You’re comparing base 10 to base 1 here. Doesn’t change the fact that this is _A_ particular mapping. You can’t do that. It’s just a difference in base.
Things start going wrong at point 6 where you fail to indicate the row number in base 10 as well.
Cantor’s first proof is even more asinine than the diagonal argument. He’s specifically using dependent sets when he says that between any two members exists another. What makes this possible is
remapping and hence the use of dependent sets. This has to do with the way that reals are defined. Naturals are not defined in the same manner, so this proof fails automatically. Really, this is
Like I said before, if you create a division at 0.5 and call the section from 0 to 0.5 as S, then |R| > |S|. It has to be since all elements in S are the exact same elements in R for that same
range. However, once you remove the original mapping, S can now map to R, |R| = |S| and this is how you obtain a new number in between existing numbers. You can even use this technique on R by
using R as a subset of itself so that you can always find a new number between two other numbers.
All Cantor is doing is using
1. |R| > |S| of dependent sets;
2. |R| = |S| of independent sets;
as a contradiction when there is no such thing.
What Cantor does is define two sequences A and B that are assumed to map to the entire range of R and then maps two subsets N1 and N2 of N that map to each range. Cantor then says that there is
an element c between A and B, but that there is no element in between of N1 and N2 to map to c since it would already be included in either N1 or N2.
What you can do is create two dependent and proper subsets N1 and N2 of N that are complements of each other. Then map N1 and N2 to A and B of the reals. When you obtain c, remap N1 and N2 to a
different pair of proper subsets of N and you will have no problem finding a natural to map to c while mapping N1 and N2 to A and B. And Cantor’s first proof falls apart, not to mention that he
again uses _A_ specific mapping.
In any case, as I said earlier this discussion has devolved to three tactics.
1. Saying I’m wrong just because.
2. Playing on both sides of the fence where Cantor’s argument applies to base 1 vs base 2, but not to base 2 vs. base 3 when it’s the exact same proof.
3. Re-explaining Cantor’s argument in countless ways thereby changing the topic. Why do people do this?
I want to know why Cantor’s proof applies to base 1 vs base 2, but not to base 2 vs. base 3. That’s it. I agree with everyone else on pretty much everything else.
I won’t reply to anything that qualifies as one of the three points I’ve mentioned in my list because it’s a waste of time. I’ve already responded in enough detail to cover those areas, not to
mention that I’ve gotten enough evidence that people will agree that Cantor’s argument is flawed just as long as you use the same proof, but change the representation used. That’s the last issue
remaining and if anyone wants to tackle it, fine. Otherwise, I’m more than satisfied that Cantor’s argument is trivially flawed.
153. #153 Rilke's granddaughter November 3, 2009
Vorlath, I still don’t understand how and why you’re using “base” in this context. So far as I can see there is no point at which cantor is showing a mapping from some set expressed in base 1 and
some set expressed in base 10. Can you explain?
154. #154 CuBr November 3, 2009
“I want to know why Cantor’s proof applies to base 1 vs base 2, but not to base 2 vs. base 3.”
First, let’s standardize the notation:
base 1 = set of all {1} strings (finite or infinite)
base 2 = set of all {0,1} strings (finite or infinite)
base 3 = set of all {0,1,2} strings (finite or infinite)
Also, let finite base n = set of all finite base n strings. I think we agree that |finite base 1|=|finite base n| for any natural number n.
What Cantor’s proof shows is that |finite base n|<|base 2| for any n. The argument relies on the FINITE part. You list every element of “finite base n” in the “left-hand list”, and the argument requires that any given element from the set “finite base n” can be found in this list after finitely many steps.
The argument does not show that |base 2|<|base 3| because we don’t have the finiteness here. If we attempted to list every element of base 2 in the “left-hand list” we could try something like
… etc.
But we won’t be able to find any given element of base 2 in this list after finitely many steps – that’s the difference.
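CuBr’s finiteness point, that every finite string appears at a finite position in a length-ordered enumeration, can be sketched in Python (the generator and helper below are illustrative, not from the thread):

```python
from itertools import count, product

def finite_binary_strings():
    """Enumerate every finite {0,1} string in length order.

    A string of length L appears before index 2**(L+1), so any given
    finite string is reached after finitely many steps -- exactly the
    property the "left-hand list" argument needs.
    """
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def index_of(s):
    """Return the (finite) index of s in the enumeration.

    Only terminates for finite {0,1} strings, which is the point:
    an infinite string would never be reached.
    """
    for i, t in enumerate(finite_binary_strings()):
        if t == s:
            return i

print(index_of("101"))
```

Running this shows "101" at index 11: after the two length-1 strings, the four length-2 strings, and five of the length-3 strings.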
155. #155 Stephen Wells November 3, 2009
Vorlath, do you seriously imagine that anything which is true about one proof (which happens to use a diagonal argument) must also be true of all other proofs that happen to use a diagonal argument? Wow. Apparently you _don’t_ see the difference between “this car is broken” and “no car can drive to the moon”.
Your argument about bases proposes ONE SPECIFIC mapping between two ways of writing the reals. You say specifically: my mapping is such that every real in base 2 is mapped to the real in base 3
that has the same digits.
We prove that this isn’t a bijection by finding any real in base 3 which your mapping doesn’t match to any real in base 2. That’s trivial. Fine. Your mapping is not a bijection.
This proves nothing about the cardinalities of “real base 2” and “real base 3” because you only considered one proposed mapping. You specified it yourself.
Cantor’s argument does not involve specifying any particular mapping between the integers and the reals. f(i) stands for _any_ proposed mapping such that a real n is given by f(i). The only
property that we know _any_ proposed mapping must have is that 1 is associated to a real f(1) and 2 is associated to f(2) and so on, so we can make a list of reals f(1), f(2), f(3) etc.
The application of the diagonal argument then generates a real R which is not f(i) for any i.
This proof is valid for _any_ proposed mapping f because we did not appeal to any property of f except that for every i there’s a real f(i), which must be true of any proposed mapping.
So your argument about the property of one mapping is not the same as Cantor’s proof about the property of any mapping.
A proof that Paul has two legs is not the same as a proof that nobody has 27 legs.
156. #156 scineram November 3, 2009
Guys, reasoning cannot cure his neurological problems.
157. #157 Rilke's granddaughter November 3, 2009
Vorlath, I still don’t understand how and why you’re using “base” in this context. So far as I can see there is no point at which Cantor is showing a mapping from some set expressed in base 1 and some set expressed in base 10. Can you explain?
158. #158 Stephen Wells November 3, 2009
Rilke’s granddaughter, apparently Vorlath thinks that a list of the integers is “base 1” (because you could represent them by tickmarks or something) and so the mapping in Cantor’s proof is “a mapping between base 1 and base 10”.
There are no words.
159. #159 Afgncaap November 3, 2009
I’m wondering if the confusion is in the whole tickmarks thing. The fun part of a list of the rationals (Or integers, of course) is that every element on the list is finitely large. You can write
a finite number of tickmarks. But Cantor’s demonstrating primarily that you cannot create a list that can be enumerated using only finitely large ticklengths. I THINK that may be where some of
the confusion is?
Then he rambles on about base 3, and I have no gorram clue what he’s going on about. Base 3 is not larger than base 2. There is no inequality relation defined on bases. I don’t think even size is defined for bases.
160. #160 g November 3, 2009
Vorlath, I wasn’t purporting to show you something you haven’t seen before; I was trying to arrive at some common understanding of where your disagreement with the mainstream actually begins.
When you say things start to go wrong at step 6, do you mean that in step 6 I have said something false? (In which case: what false thing have I said?) Or that in step 6 I have committed some
other error (e.g., not writing something in base 10 that you would like me to write in base 10) that makes a later statement false? (In which case: what later statement is false?) Or that in step
6 I have said “now do X” where X cannot be done? (In which case: what specific thing have I said to do that cannot be done?) Or something else? (In which case: what?)
I appreciate that you would prefer to have a discussion about “base 1 versus base 2”, etc., but unfortunately I am unable to make much sense of what you’ve said about bases at present and I therefore don’t think having such a discussion will help us until I’ve understood as concretely as possible what specific step in the reasoning you think is wrong and how it fails.
I am not “re-explaining” Cantor’s argument in order to change the subject (the subject *is* Cantor’s argument, after all), nor because I think a different or more detailed explanation will
convince you; I am splitting it up into tiny steps because I want to find exactly where the disagreement begins. Do you think there is something wrong with this procedure?
I am not attempting to diagnose what, if anything, is wrong in (e.g.) your arguments about “independent sets” because, again, it is not yet clear to me exactly what you are saying. Let’s first
understand exactly where you think Cantor’s argument breaks — where there is a step from something true to something false or meaningless. If we can do that, then perhaps we can make some progress.
161. #161 Rilke's Granddaughter November 4, 2009
“Rilke’s granddaughter, apparently Vorlath thinks that a list of the integers is ‘base 1’ (because you could represent them by tickmarks or something) and so the mapping in Cantor’s proof is ‘a mapping between base 1 and base 10’.”
But that’s insane. Base one would be something with only 1 digit: 0 (or 1, I suppose). You could only represent one number.
Base 1 has nothing to do with Cantor’s problem. And Cantor’s problem doesn’t even have anything to do with bases.
162. #162 Stephen Wells November 4, 2009
And god only knows what he thinks he means by “fails to indicate the row number in base 10”. The row number is an integer. You could write it in base 537 and it wouldn’t make the slightest difference.
163. #163 AnyEdge November 4, 2009
Long ago, in my point-set topology course (there was a lot of tennis), our instructor, a well regarded fellow at Washington University in St. Louis, used “base 1” to describe a series of tick marks the way Vorlath is. But he made it a point to say that it was merely an ancient way of representing a number; it was not part of formal rigour and had no place in a formal proof in Point-Set Topology.
It was useful for things like keeping score at sports that score by ones.
164. #164 jdkbrown November 4, 2009
Base 1 is a perfectly good base for representing the integers, and it’s actually very useful when thinking about Turing machines. (Mark recently had a post in which he talked about base 1, too,
if you’d like to know more.)
Like the rest of you, though, I don’t quite understand what use Vorlath is making of the term ‘base’. Though perhaps it is the following. There are denumerably many base 1 strings–strings of the character ‘1’. (This is so even when we throw in the infinite base 1 strings, since, of course, there’s only one of them). One way to view the Cantor proof is as showing that there are more base two strings–strings composed of ‘0’ and ‘1’–than base 1 strings. This might be what Vorlath means when he talks about base 1 being larger than base 2.
165. #165 Robert November 4, 2009
With base 1 you mean simply ticking, 1, 11, 111, etc… This is fundamentally different from the other bases for two reasons. There is no zero, and you can’t do fractions. This is possible in all other bases.
While ‘base 1’ can be useful for certain number theoretic purposes, or for Turing machines, in this context talking about base 1 is only confusing the issue since it is not really a base.
I suspect Vorlath is confused by the fact that a number can have different representations in different bases. Perhaps he doesn’t realize that a number exists independently of any base. It is however possible to write down a unique representation of a number in any given base by writing a (possibly infinite) string of digits. Cantor’s proof doesn’t rely on a base, only on the fact that it is always possible to write down a representation of a number (in any base) in a unique way.
166. #166 mike November 4, 2009
Let’s run through Cantor’s proof your way:
You give me a function, f, claiming that it is a bijective mapping between two specific sets.
Now, it is my job to show that f does not map to every element in the image by pointing out an element that f does not map to, and I may ONLY use your description of f, nothing more; I cannot
restrict f or use a different function. To recap, the description of f is ‘a bijective mapping between two specific sets.’
If I am successful, I will show that f is not bijective. If my method can be used for not just f but any bijective mapping between two specific sets, then I will have shown that there does not
exist a bijective mapping between the two sets.
Is this acceptable? If so, let’s begin!
All we know about f is that it is a bijective mapping. This means that every element of the domain maps to an element of the codomain because f is a mapping. Take an element of the domain and
find its image in the codomain. Can you find another element of the codomain that is not this image? If yes, then take another element in the domain of f and find its image. Can you find another
element in the codomain that is neither of these two images? If yes, then continue, can you find an element in the codomain that is not the image of any element of the domain? If yes, then the
element you found is the element I want to point out which shows f is not bijective.
Acceptable so far? That would prove that f is not bijective. Also, because f is only defined as a bijective mapping and we have not added to that definition or changed f, it would work for any
bijective mapping. That would mean the sets have no bijective mapping, i.e. the sets have unequal cardinality.
So far, I have not defined the domain and codomain of f. I wish to prove that the cardinality of N, the set of natural numbers, is unequal to the cardinality of R, the set of real numbers.
Naturally I will have f map from N to R. The function f is still just a bijective mapping between two specific sets. Now the sets are named.
The question is “Can you find an element in the codomain that is not the image of any element of the domain?” If yes, then f is not bijective and the cardinality of N is unequal to the
cardinality of R.
I must come up with a number that is not the image of any element of the domain in order to say ‘yes.’ I must describe my number without ambiguity so that you know exactly what my number is. I
must clearly state what is in every decimal place of my number so that everyone can see that it is distinctly different from any of the images in the codomain. In order to do this I will simply
describe my number one digit at a time to be different from each image in the codomain. This is possible because each image in the codomain is mapped from an element in the domain. An element of
the domain is a natural number. My digits will correspond to the natural numbers.
Acceptable so far?
We must check whether my number matches any of the images. Choose an image, being an image it is mapped from a natural number, which corresponds to one of my digits. I defined that digit to be
different of the one from that image, so my number does not match the chosen image, nor any other image for that matter by the same reasoning.
My number checks out.
Function f is not a bijection.
There does not exist a bijection between N and R.
The cardinality of N is different from the cardinality of R.
Two infinite sets may have different cardinalities.
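Mike’s digit-by-digit construction can be sketched for a finite prefix of the list (a toy illustration with invented names; `digit_of` stands in for any proposed listing of reals):

```python
def diagonal_counterexample(digit_of, n):
    """Return the first n decimal digits of a real that differs from
    the i-th listed real at decimal place i, so it can match no listed
    real anywhere in the list.

    digit_of(i, j) must give the j-th decimal digit of the i-th real.
    """
    # Any digit other than digit_of(i, i) works; staying inside {5, 6}
    # avoids 0s and 9s and hence the 0.0999... = 0.1000... ambiguity.
    return [5 if digit_of(i, i) != 5 else 6 for i in range(n)]

# Toy "list" of reals: the i-th listed real has digit (i + j) % 10
# at decimal place j. Any other listing would do just as well.
def digit_of(i, j):
    return (i + j) % 10

d = diagonal_counterexample(digit_of, 8)
print(d)  # differs from listed real i at digit i, for every i checked
```

The key property the thread keeps circling is visible in the last comment: for every i, `d[i] != digit_of(i, i)`, and nothing about `digit_of` was used except that it produces a digit for each (i, j).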
167. #167 Raph Levien November 5, 2009
One twist to this is that Cantor’s proof has been formalized in absolutely rigorous detail, and is theorem “ruc” in Metamath.
The proof is not all that difficult to follow, and I have fairly high confidence in the underlying system (having written a verifier for it from scratch, and various other things). I’m curious
what exactly the cranks would find in this proof to disagree with. It’s hard to believe it’s an individual step, as those are, as mentioned before, rigorous. My personal guess would be
definitions, but again, I’d love to know which one(s) in particular.
168. #168 g November 5, 2009
In this instance, I think Vorlath is thinking that what functions there are from one set to another depend on how you happen to be describing the sets at any given time. (So, e.g., he thinks that
if you talk about “[0,1/2], a subset of [0,1]” then you’re somehow committing to using {x->x} to map the one to the other, which means that *in that context* the former set has smaller
cardinality than the latter; and that once you start talking about base-whatever digits, you’re constraining what functions you’re allowed to use, which again will make N look smaller than R even
if it really isn’t.)
In other words, I think he’s thinking of set theory as being rather like software development using a statically-typed language: the operations you’re allowed to perform on a given bit pattern
depend on whether you’re thinking of that bit pattern as an integer, a floating-point number, a pointer, a string, etc.
At least, I *think* that’s roughly what’s motivating his complaints. I could, of course, have misunderstood.
169. #169 mike November 5, 2009
I think I’ve identified two good stumbling points. First, I think Vorlath doubts the existence or characteristics of sets when we define them in terms of other sets. Vorlath defined such sets as
“dependent” because they seem to intuitively ‘depend’ on other sets for their definition and maybe even existence. Of course this is silly, the elements of a powerset of an infinite set are just
as well defined as any infinite set; yes, we might refer to the definition of another set to determine whether something is an element of a powerset but we also refer to that very definition to
determine whether something is an element of the original set. Furthermore we could have alternatively defined a powerset of A without referring to A by including the definition of A in our
definition of the powerset.
Also, Vorlath might not be familiar with most popular constructions of number sets because he described N as ‘independent’ of R, which, based on his definition of “dependent”, doesn’t seem right.
Secondly I think Vorlath has a problem with logical quantifiers when talking about diagonalization in that he interprets our proof as saying ‘for every’ bijective function, ‘there exists’
function f which meets standards of diagonalization (which may not represent bijective functions). What we are really saying is that ‘for every’ bijective function f, f meets the standards of
diagonalization. There is no loss of generality during the diagonalization.
170. #170 Takis Konstantopoulos November 6, 2009
Mark C. Chu-Carroll:
I applaud your patience in trying to explain the obvious to people who, simply, can’t get it. Like the religious ones, it doesn’t matter what you prove to them. If it goes against their
hard-wired, pre-existing, beliefs and “intuition”, your argument will, in their opinion, be wrong. Try as much as you can, they’ll remain unmoved.
171. #171 hellblazer November 7, 2009
This comment and the ensuing, erm, noise, suggest that the problems lie rather deeper or earlier than the details of terminology in (the usual portrayal of) Cantor’s proof.
I wonder if Mark C^3 was getting flashbacks…
172. #172 Blake Stacey November 9, 2009
Another hallmark of crackpottery: suing people who criticize you.
173. #173 Vorlath November 9, 2009
Most of what you said is correct. Cantor’s argument won’t let go of that specific mapping.
So, e.g., he thinks that if you talk about “[0,1/2], a subset of [0,1]” then you’re somehow committing to using {x->x} to map the one to the other, which means that *in that context* the
former set has smaller cardinality than the latter; and that once you start talking about base-whatever digits, you’re constraining what functions you’re allowed to use, which again will make
N look smaller than R even if it really isn’t.
Yes, Cantor is doing exactly what you said here. That’s the contradiction he’s using. He’s not allowing the remapping of [0,0.5] to [0,1] to happen. So no matter how many lists I give you, you will map my list to [0,0.5] as proper subsets are wont to do and you will show how I’m missing an element.
@169 mike:
yes, we might refer to the definition of another set to determine whether something is an element of a powerset but we also refer to that very definition to determine whether something is an
element of the original set.
What if the way you use your definition of the powerset to verify the list is incompatible with the one to one mapping?
You are using _A_ mapping that is known to be different from f when you do the verification.
One way to view the Cantor proof is as showing that there are more base two strings–strings composed of ‘0’ and ‘1’–than base 1 strings. This might be what Vorlath means when he talks about base 1 being larger than base 2.
And there are more base 3 strings than base 2 strings. Explain why Cantor can use base 2 vs. base 1, but when we use the exact same proof on base 3 vs. base 2, the proof fails?
When you have infinite digits, there is another way to represent base 1. Simply have one and only one digit that is 1. All other digits are 0. For Cantor’s argument to work, it must be possible
to produce ALL base 2 strings regardless if it’s countable or not. We do not concern ourselves with N or any of that. If you believe you need more than |N| rows (and digits), then use that. Right
now, we simply want to make sure ALL infinite strings are there.
But no matter what the cardinality is of this set, I can only produce |base 1| strings. I can do this by placing all 1’s on the diagonal and 0’s everywhere else.
I’ve just proven that using Cantor’s diagonal, we can never use more than |base 1| strings as there are no free spots for strings of any other format. No matter how many infinite strings you use,
this setup does not allow us to list more than |base 1| strings. This is true regardless of the cardinality of S.
So it does not matter what list I give you. You will always be able to produce more strings because you can swap out one to one any string that is in base 2 format with one that is in base 1
format (not already in the list) producing an equivalent situation.
So Cantor’s proof amounts to restricting the list to only use base 1 strings. We are not in fact free to use whatever strings we want. It’s an illusion only. And since we know it’s 100% trivially
obvious that we can map infinite digits base 1 strings to base 2 strings since they’re just different representations of the same set (not to mention that’s how I defined them), something is out
of whack with Cantor’s proof in that it can’t even map one to one S to S regardless of what S is or what the cardinality of S is.
Note that I did not go into N or R or whatever else. It doesn’t matter what the set is.
174. #174 chasmotron November 11, 2009
You probably won’t read this, since your last comment is two days old, but you are really right on the verge of accepting Cantor’s proof.
Step 1. We can make a nice infinite list of all the base 1 strings. We can show that all the base 1 strings are accounted for in this list. You cannot create a base 1 string that is not on this
list. Give me a base 1 string, I’ll tell you where it is on the list.
Step 2. *If* we can map all the base 1 strings to all the base 2 strings, then we can create a second list of base 2 strings and line it up with our list of base 1 strings. You are utterly
convinced that we cannot construct such a list of base 2 strings, because we will always have to leave some out. You are correct. So, one list is complete, and the other list cannot be complete.
This is how Cantor proves that the base 2 strings have a higher cardinality than base 1.
Cantor’s proof cannot work the same way comparing base 2 and base 3, because we already know that we cannot make an infinite list of all the base 2 strings. Similarly, we cannot make an infinite
list of all the base 3 strings. With stuff missing from both lists, we can’t decide whether there is a complete mapping or not by comparing lists.
We can, however, treat the base 2 strings as coefficients C_i in SUM_i(C_i/2^i), which will construct all the real numbers between 0 and 1. We can treat base 3 strings as coefficients D_i in SUM_i(D_i/3^i), and also construct distinct real numbers between 0 and 1. We can create an algorithm that will translate between base 2 strings and base 3 strings by making them construct the
same real number. If every base 2 string corresponds to a unique real between 0 and 1, and every base 3 string corresponds to a unique real between 0 and 1, and we can define the algorithm to
translate between them, we have a successful mapping between base 2 and base 3.
Once we have that mapping, then we can redefine the math such that the base 2 strings define reals between 0.0 and 50.0, or between 0 and 0.000001. The previous base 2 to base 3 mapping still
holds, therefore the real numbers in a smaller or larger range than 0.0 to 1.0 are still the same cardinality as the 0.0 to 1.0 range.
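chasmotron’s translation scheme, mapping each digit string to the real it represents and then reading the same real off in the other base, can be sketched like this (function names are mine; only finite truncations are handled, and the dual-representation ambiguity such as 0.0222… = 0.1 in base 3 is ignored):

```python
def string_to_real(s, base):
    """Interpret digit string s as SUM_i s[i] / base**(i+1)."""
    return sum(int(c) / base ** (i + 1) for i, c in enumerate(s))

def real_to_string(x, base, ndigits):
    """Greedy digit extraction: the first ndigits of x in the given base."""
    digits = []
    for _ in range(ndigits):
        x *= base
        d = int(x)       # next digit is the integer part
        digits.append(str(d))
        x -= d           # keep only the fractional remainder
    return "".join(digits)

x = string_to_real("101", 2)   # 1/2 + 1/8 = 0.625
s3 = real_to_string(x, 3, 6)   # the same real, read off in base 3
print(x, s3)
```

Here the base-2 string "101" and the base-3 string it translates to name the same real, which is the bridge between the two string sets that the comment describes.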
175. #175 Robert November 11, 2009
I am a bit confused by the discussion about base 1. While I can imagine how you would express an integer in base 1 (3 in base 10 = 111 in base 1), how would you express a rational number (or an
approximation of a real number) in base 1?
How do you write one third in base one?
The way I understand it, Cantor’s proof uses a base (generally base 10) to construct a counterexample. This number is a counterexample whether you write it in base 2 or base 10, however it’s
easiest to show it’s not in the image of f using the base the number was constructed in.
176. #176 chasmotron November 11, 2009
The difficulty of translating base 1 strings to real numbers is kind of the point of Cantor’s proof, really. There’s an obvious mapping from base 1 strings to integers, and any higher base string
can map to the reals, and those sets are not the same size.
I think Vorlath implied that there was some trickery involving matching base 10 integers with base 10 reals, and so wanted to discuss base 1 strings and base 2 strings. It ultimately works out
the same way, IMO.
177. #177 Stephen Wells November 12, 2009
Vorlath is still wrong. Cantor is considering a mapping between the _positive integers_ from 1 upwards and the reals, and he shows that for any such mapping, we can create a real that isn’t
mapped by any integer. The base in which the integers are written is irrelevant.
As others have pointed out, “base 1” isn’t even a meaningful concept for the reals. You can write the integers as tickmark counts and call that “base 1”. So what?
178. #178 anon-nbr November 12, 2009
One thing Vorlath is claiming is that when you have an enumeration, then you’re required to state the base of it (even though you may never write it down, and represent it only with an index ‘i’ for example), and furthermore that that base is required to be base 1.
I looked around and you can find him stating this in a blog post from a few months ago ( http://my.opera.com/Vorlath/blog/2009/06/22/cantors-theorem )
Enumerations can only ever be listed in what I’ve termed base 1. For example, when you count the number of rows, their values don’t matter. All you’re interested in is how many there are. So
you could have them all as dots, or X’s or whatever else. That’s base 1. But the rows are represented as base 2 (or another base) in Cantor’s examples which alters the numbers of digits. You
can’t do that and expect a valid comparison.
He doesn’t just say it CAN be represented that way, he says you HAVE TO do that.
I thought it was unusual when he introduced axes on the grid diagram and claimed you have to specify their base. The above quote shows that he thinks an expression of more than base 1 will be
more numerous or more expressive (his intuitive take on cardinality I think) than an index of (only) base 1. Thus he thinks a false comparison was made.
Also interesting is his take on what he calls dependent and independent sets, and the indications from above of being “allowed” to map something, (paraphrasing) that a set is ‘claimed’ by a map, or that something is tied to something else and can’t be used for something beyond that, etc. Interesting because his other writings indicate that he has a lot of background in software writing. He seems to have taken some concerns from software into his ideas about math proofs: the software ideas of memory ownership (i.e. responsibility to free an alloc), the notion of resources pointed-to or held-onto, generally the time-wise concerns behind software, and translated these into worries in math that a set is ‘held’ by a mapping, for example, and can’t be the domain or range of another function then, until it’s “freed” (time-wise again) and then you would be “allowed” to use it.
179. #179 Anonymous November 12, 2009
In other words, his definition of “number” (real or integer) is not our definition, and his definition of “set” is not ours?
Well… I can imagine that if you redefine number and set to mean different things, Cantor’s proof might not work. But if he redefines things to be different from what they’re generally accepted to
be, he’s just talking gibberish.
180. #180 mike November 13, 2009
@173 Vorlath,
“What if the way you use your definition of the powerset to verify the list is incompatible with the one to one mapping?”
1. You need to show how that’s even possible. 2. If it is possible, then we could define the same set differently in the manner I described. 3. I did not use the definition of powerset in my
proof. I was careful to avoid the parts of math you are uneasy about.
“You are using _A_ mapping that is known to be different from f when you do the verification.”
No, I’m not. Prove it.
In my proof, f represents any bijective function from N to R and I only used that definition to describe what f could do.
181. #181 Vorlath February 1, 2010
The above quote shows that he thinks an expression of more than base 1 will be more numerous or more expressive (his intuitive take on cardinality I think) than an index of (only) base 1.
Thus he thinks a false comparison was made.
Only if you match by digits (infinite or otherwise). So if the cardinality of digits for base 1 is the same as the cardinality of digits for base 2, then yes, base 2 will be more expressive because base 1 can only be expressed by finite consecutive 1’s or a single 1. All other combinations are not allowed.
So it’s not just base 1 and base 2 by themselves. It’s when they are matched by digit as is the case in Cantor’s argument.
182. #182 174isRight February 6, 2010
You keep mucking around with this digit nonsense.
How would you even compare base-2 with base-3 using diagonalization? Show that comparing base-2 and base-3 produces a contradiction. You can’t.
183. #183 Anonymous with Aspergers July 9, 2010
@12 (yes, I know this was a long time ago), don’t you mean that Math Journals get multiple submissions a month?
@Vorlath, I am going to try to define what the standard definition of size/cardinality is into your terms.
Two sets have the same cardinality if it is possible, once removing all dependencies, to create a complete mapping between the two sets. This mapping cannot leave out any element of the target set (that is what makes the mapping surjective), and cannot have any element mapped to twice (that is what makes it injective).
An injective surjective mapping (an injection that is also a surjection) is called bijective or a bijection.
It does not mean that all mappings between the two sets are bijections; only that at least one bijection exists.
So Cantor’s proof goes like:
1. If there is a bijection between the natural numbers and the real numbers between 0 and 1, it is possible (given infinite time) to write the real numbers in a list. Let’s just call those numbers a[i].
Do you agree with this?
2. If the digit function (in some base) is d(a,i), defined as the ith digit after the decimal point of a, then it is possible to find all the d(a[i],i). Let’s call those numbers b[i]
Note that b[i] is between 0 and the base, inclusive on 0, not on the base.
3. There is a digit other than b[1]. There is a digit other than b[2]. Etc.
4. Construct a real number given by the lowest possible other digit than the b[i].
Do you agree that this is possible?
Is this real number on the list?
Thus, we find that one of the steps or one of the assumptions is false.
Most people believe that all the steps are sound, showing that there are multiple infinities.
My question to Vorlath is, which step was wrong?
This is just so I can understand what you are saying.
Other note: All bases other than base 1 can be mapped to the interval [0,1].
Cantor shows it is not possible to map base 1 onto [0,1].
Solve a System of Linear Equations by Substitution
Substitution Means Expression Swapping
Substitution means expression swapping. One expression is substituted or used in place of another expression of identical value so that a variable is eliminated.
Though this method always works, it is often easier to use another method unless one equation is already solved for one variable.
IF ONE EQUATION IS ALREADY SOLVED FOR ONE VARIABLE, use substitution.
Compare the two systems.
It's really the same system. The terms and format are just ordered differently.
The light blue one, on the left, is ideal for solution by substitution since one equation is already solved for one variable.
The light green one, on the right, is ideally set up for linear combination, or determinants.
Below the light blue one is solved by substitution.
Use Substitution to Solve Systems of Equations
1st: Pick 1 equation and solve for 1 of the variables.
2nd: Substitute this expression in the other equation.
3rd: Solve.
4th: Use this new value in the step 1 equation to get the other variable.
5th: State the solution and include values for both variables.
Here's an example, using the system 5x + 4y = 6 and y = -2x + 3.
The second equation is solved for y.
"-2x+3" is the expression needed.
2nd: Substitute this expression in the other equation.
5x+4 y =6
5x+4 ( ) =6
5x+4( -2x+3 )=6
3rd: Solve.
5x+4(-2x+3)= 6
5x-8x+12= 6
-3x+12= 6
-12 -12
-3x= -6
-3x/(-3)= -6/(-3)
x= 2
4th: Use this new value in the step 1 equation to get the other variable.
y= -2x+3
y= -2(x)+3
y= -2( )+3
y= -2(2)+3
y= -4+3
y= -1
5th: State the solution and include values for both variables.
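The five steps above can be packed into a short function for checking answers (a sketch for the common case where one equation is already in the form y = mx + k; the function name is illustrative):

```python
def solve_by_substitution(a, b, c, m, k):
    """Solve a*x + b*y = c given y = m*x + k, by substitution.

    Substituting y = m*x + k into the first equation gives
    (a + b*m)*x = c - b*k, then y follows from the second equation.
    Assumes a + b*m != 0 (i.e. the lines are not parallel).
    """
    x = (c - b * k) / (a + b * m)
    y = m * x + k
    return x, y

# The worked example: 5x + 4y = 6 with y = -2x + 3.
x, y = solve_by_substitution(5, 4, 6, -2, 3)
print(x, y)  # 2.0 -1.0
```

This reproduces the hand calculation: -3x = -6 gives x = 2, and back-substituting gives y = -1.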
Pick the systems below that are best suited to solution by substitution, then solve them for practice.
Practice systems (each pair of equations forms one system):
1. x - 2y = -2 and -3x - 6y = -6
2. y = 4x - 2 and x = 3
3. 5x + 5y = 10 and 10x + 10y = 12
4. 4x - 3y = 12 and x = 3y - 1
5. 4x + 10y = -22 and y = 3x - 9
6. y = -2x + 4 and y = -2x + 5
7. y = 4x + 2 and 2y = 8x + 4
8. y = 4x - 1 and 8y = 8x + 8
9. -2x - 2y = -4 and 3x - 8y = -12
10. x + 3y = 6 and 3x + 9y = 18
11. 4x + 4y = 10 and x + y = 12
12. y = x + 2 and y = 4x + 2
13. 2x + 2y = 6 and y = -x + 3
14. -3x + 3y = 6 and 6x + 9y = 12
15. 3x + 3 = y and 9x - 3y = -9
16. 5x + 15y = 10 and 3x + 9y = 6
May 1998
"Nothing is more interesting than nothing" — or so says Ian Stewart, Professor of Mathematics at Warwick University. Many people have difficulty with the concept of zero. In fact, it has only really
been used as a number for the last 1500 years or so. Before this time it seems that zero was simply not that important. At the end of the day, a herd of no camels is not worth much.
Perhaps our ancestors were better off? Once you start using zero as a number, you can easily get into difficulty. Adding and taking away don't cause too much trouble, and multiplication is straightforward (though a little unrewarding), but division by zero simply has to be disallowed.
In a previous issue of PASS Maths we were asked what infinity multiplied by zero was. Our answer was that infinity cannot be multiplied by anything in the usual sense of the word because it is not a
number. It's harder to explain away one divided by zero, because one and zero are both numbers; you're simply not allowed to do it.
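The article's point about arithmetic with zero can be seen directly. Here is a tiny sketch (Python, used purely as a modern illustration, obviously not part of the 1998 article):

```python
# Adding, subtracting, and multiplying by zero are all fine,
# but division by zero must be disallowed, and the language
# enforces that with an error.
print(5 + 0, 5 - 0, 5 * 0)          # 5 5 0
try:
    1 / 0
except ZeroDivisionError as error:
    print("disallowed:", error)     # disallowed: division by zero
```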
Schrödinger’s cash
There’s an article in this week’s New Scientist by Justin Mullins about unforgeable quantum money. By the standards of “quantum mechanics journalism,” the article is actually really good; I’d
encourage you to read it if you want to know what’s going on in this area. In particular, Mullins correctly emphasizes that the point of studying quantum money is to understand quantum mechanics
better, not to mint practical QCash anytime soon (to do the latter, you’d first have to solve the minor problem of the money decohering within microseconds…).
My main quibble is just that I think the article overstates my own role! In my Complexity’09 paper, the main thing I showed is that secure quantum money that anyone can verify is possible, assuming
the counterfeiters only have black-box access to the device for verifying the money. I also showed that, to get quantum money that anyone can verify, you have to make computational assumptions. (By
contrast, Stephen Wiesner’s scheme from the 1960s, in which only the bank could verify the money, was information-theoretically secure.) But in terms of coming up with actual candidate quantum money
schemes (as well as breaking those schemes!), the other members of the “quantum money club”—Andy Lutomirski, Avinatan Hassidim, David Gosset, Ed Farhi, Peter Shor—have been more active than me.
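For readers curious how Wiesner's private-key scheme works in outline: the bank stores, for each banknote, a secret list of random BB84 bases and bits, and verification measures each qubit in the recorded basis. The classical simulation below is my own illustrative sketch (not from the post; the function names are invented), relying only on the standard fact that measuring a BB84 state in the wrong basis yields a uniformly random outcome:

```python
import random

def mint(n):
    # The bank's secret record for one banknote: a random basis
    # (0 = computational, 1 = Hadamard) and a random bit per qubit.
    bases = [random.randint(0, 1) for _ in range(n)]
    bits = [random.randint(0, 1) for _ in range(n)]
    return bases, bits

def measure(qubit_basis, qubit_bit, measure_basis):
    # Measuring a BB84 state: the matching basis returns the encoded
    # bit; a mismatched basis returns a uniformly random outcome.
    return qubit_bit if qubit_basis == measure_basis else random.randint(0, 1)

def counterfeit(bases, bits):
    # A counterfeiter who never sees the bank's record guesses a basis
    # for each qubit, measures, and re-prepares a forged note from the
    # outcomes.
    note = []
    for basis, bit in zip(bases, bits):
        guess = random.randint(0, 1)
        note.append((guess, measure(basis, bit, guess)))
    return note

def verify(bases, bits, note):
    # The bank measures each qubit of the submitted note in its
    # recorded basis and checks every outcome against the recorded bit.
    return all(measure(qb, qbit, bases[i]) == bits[i]
               for i, (qb, qbit) in enumerate(note))

random.seed(0)
n, trials = 20, 2000
forged_ok = 0
for _ in range(trials):
    bases, bits = mint(n)
    assert verify(bases, bits, list(zip(bases, bits)))  # genuine notes always pass
    forged_ok += verify(bases, bits, counterfeit(bases, bits))
print(forged_ok, "of", trials, "forgeries passed")
```

Each forged qubit passes verification with probability 1/2·1 + 1/2·1/2 = 3/4, so a forgery of an n-qubit note survives with probability (3/4)^n — roughly 0.3% for n = 20 — and no computational assumption is needed, which is what "information-theoretically secure" means above.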
Two other quibbles:
(1) Mullins writes: “Then last year, Aaronson proposed a new approach that does away with the banknote and concentrates instead on the stream of information that represents quantum cash.” In
Wiesner’s scheme, too, I think it was pretty clear that the “banknote with qubits stuck to it” was just a fun way to tell the story…
(2) The article does a good job of explaining the distinction between information-theoretic and computational security. But it doesn’t stress that, with the latter, we can’t actually prove that any
of the “hard problems” are hard, without also proving P≠NP! (I’ll admit that the importance of this point is slightly hard to convey in a popular article, possibly because many people, or so I’m
told, go about their lives without proving anything.) The best we can do is show that, if you could solve this problem, then you could also solve this other problem that people have studied for a
long time. But in the case of quantum money, we don’t even know how to do that—which is what we meant when we wrote in our ICS paper that “it seems possible that public key quantum money
intrinsically requires a new mathematical leap of faith.”
Considered as research topics in complexity theory, uncloneable quantum money, copy-protected quantum software, and so on are almost as wide-open today as public-key encryption was in the 1970s.
That is, we don’t have a compelling intuition as to whether these tasks are possible at all: all quantum mechanics does is open up the possibility of them, which wasn’t there in the classical world.
Unfortunately, in the case of quantum money, most of the ideas we’ve had for realizing the possibility have turned out to be insecure—often for non-obvious reasons. Assuming quantum money is
possible, we don’t know what the right protocols are, what types of math to base them on, or how to argue for their security. So if you’re not impressed by the results we have, why don’t you try
your hand at this quantum money business? Maybe you’ll have better luck than we did.
(Addendum: I also have a PowerPoint presentation on quantum money, which ironically goes into more detail than my Complexity paper.)
Joe Fitzsimons Says:
Comment #1 April 15th, 2010 at 8:21 pm
Great post. I really like these problems, and I suspect there are a ton more that we haven’t even thought of yet. Our blind computation paper came out of a (failed) attempt to do copy-protected quantum software.
I keep swinging backwards and forwards between whether I think it is possible or not. One day I think I have an impossibility proof, the next I think I have a scheme. Infuriating, but fun and
interesting none the less.
As regards the leap needed, I don’t know for sure, but I feel MBQC is probably part of the answer.
Matt Leifer Says:
Comment #2 April 15th, 2010 at 8:33 pm
I won’t be working on quantum copy protection any time soon. However, if you can come up with a quantum way of enforcing the terms of the GPL then I might be interested.
Joe Fitzsimons Says:
Comment #3 April 15th, 2010 at 8:45 pm
Yes, I know DRM is evil. But until we have to upgrade our brains with DRM chips, there is a pretty obvious problem with media DRM.
Matt Leifer Says:
Comment #4 April 16th, 2010 at 5:58 am
You know, I was only half joking in my last post. Of course, you can’t possibly enforce all copyleft clauses via quantum theory. In particular, open-source would be ruled out unless all programs were
orthogonal in a known basis, which would make it impossible to enforce anything else. However, perhaps you could enforce a sharealike clause on its own by making it possible to copy a state, but only
by including a license to make more copies in each copy.
harrison Says:
Comment #5 April 16th, 2010 at 6:23 am
Your post got me to reading about the history of counterfeiting, which got me reading about the history of money. So this quote from Wikipedia is pretty tangential to quantum money:
“Coins were typically minted by governments in a carefully protected process, and then stamped with an emblem that guaranteed the weight and value of the metal. It was, however, extremely common for
governments to assert the value of such money lay in its emblem and thus to subsequently debase the currency by lowering the content of valuable metal.”
Which is marginally interesting (especially since fiat money as we now know it only came about in the past couple hundred years), but a few minutes later I read this:
“In the 1990s, the portrait of Chairman Mao Zedong was placed on the banknotes of the People’s Republic of China to combat counterfeiting, as he was recognised better than the generic designs on the
renminbi notes.”
What this suggests, to me, is that faces of gods or rulers were first put on coins in part as a natural “public-key anti-counterfeiting system”; it relies on the increased processing that the human
brain does when it sees a face to detect small differences between legitimate and counterfeit coins.
Sandra T. Says:
Comment #6 April 16th, 2010 at 7:47 pm
I don’t understand the motivation for quantum money. The basic analogy — uncloneable quantum banknotes — seems stupid. Is there a better story?
Why should we be willing to make a new mathematical leap of faith? Complexity theorists are too willing to build towers on shaky foundations.
“Assuming quantum money is possible, we don’t know what the right protocols are, what types of math to base them on, or how to argue for their security. So if you’re not impressed by the results we
have, why don’t you try your hand at this quantum money business? Maybe you’ll have better luck than we did.”
There are many unsolved problems. That by itself does not give motivation.
rrtucci Says:
Comment #7 April 16th, 2010 at 11:07 pm
Sandra, I’m not too interested in quantum money either. For me, the ultimate goal is to build a quantum computer and algorithms/software for it. If building a quantum computer is like the Apollo 11
mission, quantum money seems to me the equivalent of designing really cool sunglasses for the astronauts.
Yatima Says:
Comment #8 April 17th, 2010 at 12:20 pm
Of course this is not about quantum “money” at all, money being any commodity that people can stash away (from water bottles, sheep or pieces of supernova-forged metal). It is either about quantum
warehouse receipts that are about real money, or it’s about quantum “fiat money” that no one is supposed to counterfeit except the usual suspects.
I predict the existence of the new dictum “not worth a Quantum Continental” in finitely many of our possible futures.
Scott Says:
Comment #9 April 17th, 2010 at 2:16 pm
Sandra T.:
“There are many unsolved problems. That by itself does not give motivation.”
Based on the above two sentences, I’m guessing you’re not a theoretical computer scientist?
Seriously, let me give you my personal perspective: quantum money and quantum software copy-protection are interesting not because of the possible applications (which are remote and not even all that
interesting to me), but because they’re good testing-grounds for basic philosophical questions about the nature of quantum mechanics. One of the central properties of classical information is that
you can copy it an unlimited number of times—e.g., if you tell someone something, you have no idea how many of their friends they’re going to repeat it to. It’s such an obvious property that it seems
weird even to state it explicitly. But then in quantum mechanics, it’s not true!
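For readers who haven’t seen it, the standard one-paragraph proof of the No-Cloning Theorem (textbook material, not specific to this post) makes the point concrete. Suppose some unitary U cloned every state, U(|ψ⟩⊗|0⟩) = |ψ⟩⊗|ψ⟩:

```latex
% On the basis states, cloning gives
U(|0\rangle|0\rangle) = |0\rangle|0\rangle, \qquad
U(|1\rangle|0\rangle) = |1\rangle|1\rangle.
% Linearity then forces, for |{+}\rangle = (|0\rangle + |1\rangle)/\sqrt{2},
U(|{+}\rangle|0\rangle)
  = \tfrac{1}{\sqrt{2}}\left(|0\rangle|0\rangle + |1\rangle|1\rangle\right),
% while cloning |{+}\rangle would instead require
U(|{+}\rangle|0\rangle) = |{+}\rangle|{+}\rangle
  = \tfrac{1}{2}\left(|00\rangle + |01\rangle + |10\rangle + |11\rangle\right).
% These two states differ, so no such U exists.
```

The same linearity obstruction is what leaves room for uncloneable quantum money states in the first place.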
OK, you might say, so we know the No-Cloning Theorem—but is it really all that important? After all, you might have some uncloneable state |ψ〉, but if you know how |ψ〉 was prepared, then you can
just run the preparation procedure again to copy it. And if you don’t know how |ψ〉 was prepared—well then, who cares about copying it? It’s not a useful state to you anyway.
Ah, but is that really true? Or is it conceivable that there could exist a “precious and irreplaceable” quantum state—a state |ψ〉 which we could measure in such a way as to learn or verify something
useful, but couldn’t measure in such a way as to produce more copies of |ψ〉? A state |ψ〉 that you could imagine a civilization storing in a special high-security vault, to prevent it from
decohering—and that would be taken out only on a few special occasions, when it needed to be measured? A quantum Ark of the Covenant, if you will?
Actually, now that I think about it, maybe that’s a better story than quantum money…
rrtucci Says:
Comment #10 April 17th, 2010 at 3:48 pm
I’m quite willing to believe that such unclonable verifiers exist in the quantum domain. But don’t such things already exist, to a very good approximation, classically? (For instance, isn’t a van Gogh painting a verifier that is essentially impossible to clone?)
It’s the same problem that afflicts quantum cryptography. An important contribution to theory, but of little practical utility, because you can achieve almost as good security in the classical domain using much cheaper and much less fragile classical states.
rrtucci Says:
Comment #11 April 17th, 2010 at 4:10 pm
I think you should call your unclonable states “white elephants”, precious, but impossibly expensive to keep alive.
Scott Says:
Comment #12 April 17th, 2010 at 8:35 pm
rrtucci: Van Gogh paintings are actually a terrible example — paintings can be, and are, forged all the time! There are regularly paintings that sell for millions of dollars, until their value
plummets when it’s discovered that they were merely the work of some schmoe and not a famous person.
It’s conceivable that quantum paintings, on the other hand, could genuinely be unforgeable—you could measure them in different bases to experience their beauty from exp(n) possible angles, but it
would be computationally intractable to determine, from the measurement results, how to prepare another state with the same behavior. Admittedly, we don’t actually know whether this sort of thing is
possible or not … which is why we’re, y’know, researching it.
jonas Says:
Comment #13 April 18th, 2010 at 8:53 am
There are these kinds of CS problems where you have a working communications protocol to solve some task if all the agents cooperate, but you want to design a protocol where it’s not worth for any
agent not to cooperate. I don’t really know much about these kinds of problems, but see e.g. “http://gilkalai.wordpress.com/2010/01/26/michael-schapira-internet-routing-distributed-computation-game-dynamics-and-mechanism-design-ii/”. I wonder if these quantum objects you are all searching for (“quantum money”, but not as actual money, just similar schemes) could be used as building blocks helping such protocols, supposing of course that both quantum computers and these objects exist.
AnonCSProf Says:
Comment #14 April 18th, 2010 at 4:12 pm
I wonder about these arguments that “quantum money is not very interesting or realistic, but it’s interesting because it’s a good testing ground for fundamental philosophical questions”. So why does
the community work on quantum money, then? Why not just study those fundamental philosophical questions directly?
I’m not a theoretical computer scientist, either.
(I also wonder about allowing a news reporter to write an article hyping quantum money as though it could be useful, if you believe it is unlikely to be useful. Shouldn’t the first comment one makes
to the reporter be “this will probably never be useful, but it’s an interesting philosophical puzzle?”)
Evan Jeffrey Says:
Comment #15 April 18th, 2010 at 4:36 pm
Sandra T.:
The more practical and less philosophical answer to your question is that the whole problem is we don’t know what types of things quantum communication may be good for. Quantum money and quantum copy
protection are metaphors that researchers use to describe the mathematics they are studying. They may or may not relate to applications that turn out to be practical. The metaphors are not mere
window dressing, either. If they are good, they will inspire people to think in different and hopefully better ways about quantum information, which is the kind of thing that leads to figuring out a
useful way to apply it.
The example that is so painfully relevant that I hate to even bring it up is quantum cryptography. The quantum money protocol described by Stephen Wiesner eventually became quantum cryptography. I
rather think quantum cryptography itself is overrated, but it is a genuine practical application. You can go out and buy a pair of quantum communication nodes and they will more-or-less do what it
says on the tin.
math idiot Says:
Comment #16 April 19th, 2010 at 9:23 am
Scott: Quantum computers are based on the superposition of states. In the Many Worlds interpretation, there is no superposition of states. It says that we have many many worlds and in World A, the
state of the bit is in that of bit 0 and in World B, the state of the same bit is in that of bit 1. So, do you think that quantum computers rule out the Many Worlds interpretation or you would tell
me that the Many Worlds Interpretation can actually explain the operation of quantum bits?
Scott Says:
Comment #17 April 19th, 2010 at 10:16 am
I wonder about these arguments that “quantum money is not very interesting or realistic, but it’s interesting because it’s a good testing ground for fundamental philosophical questions”. So why does
the community work on quantum money, then? Why not just study those fundamental philosophical questions directly?
My personal answer: because the philosophical questions, as stated, are too vague to study directly! I see quantum computing and information, not merely as helping us address what used to be
considered philosophical questions, but as helping us make precise what those questions even mean.
I also wonder about allowing a news reporter to write an article hyping quantum money as though it could be useful, if you believe it is unlikely to be useful. Shouldn’t the first comment one makes
to the reporter be “this will probably never be useful, but it’s an interesting philosophical puzzle?”
I think that was the first comment I made to the reporter! Certainly it’s a comment I repeated over and over, and I was delighted to see that it actually got reflected in the story.
Look, I made zero effort to contact reporters about quantum money. They called me … and when they did, I repeatedly stressed how comically remote from practicality it was (while also answering the
question of how my colleagues and I ended up studying it!).
So what else would you have had me do? Run away and hide like Grisha Perelman? As you know, that’s also not a guarantee against sensationalized news stories…
mitchell porter Says:
Comment #18 April 20th, 2010 at 2:50 am
“Quantum money and quantum copy protection are metaphors that researchers use to describe the mathematics they are studying.”
And part of their attraction is that they naturally lead to whimsical generalization. It’s a short step from quantum money to quantum investment, a quantum subprime financial crisis, and a quantum
lawsuit by a quantum SEC against quantum Goldman Sachs for quantum securities fraud. It sounds like a joke, but we are really talking about topics like information, communication, and decision theory
in a quantum framework, and from that perspective I can imagine even that last bit of whimsy leading to theoretically valuable work.
John Sidles Says:
Comment #19 April 20th, 2010 at 8:13 am
Scott, I have a ton of respect for research on “money that can’t be cloned”, and the closely related problem of “experiments that can’t be simulated” (e.g., your linear optics project).
A respectful question—which I ask solely because it might have an interesting answer—is how much of these ideas remains valid if Ashtekar and Schilling’s view turns out to be correct, that “The linear structure of quantum mechanics is, primarily, only a technical convenience”?
Obviously, the proof technology would have to be upgraded … but the key ideas of “money that can’t be cloned”, and “experiments that can’t be simulated” IMHO may well survive … and perhaps they would
even survive in a form that offers us (in your phrase) “a more compelling intuition” of what’s going on, than linear QM theory has so far given us.
One path by which this intuition is evolving in simulation theory—both classical and quantum—is a dawning recognition of the practical advantages for simulation of pulling-back onto simulation
state-spaces of larger dimension (the canonical example being pullback of three Euler angles onto four quaternion variables).
From this point-of-view, the linear structure of Hilbert spaces would arise from an Ashtekar-and-Schilling quest to efficiently simulate the physical processes we see in our universe: the real
quantum state-space having a non-Euclidean geometry, but for simulation purposes it being technically convenient—irresistibly convenient—to pull quantum dynamics back onto a larger linear space …
even at the expense of requiring that fictional QM simulation space to have extravagantly many dimensions.
As for the real geometry of QM, we don’t yet know it … but the string theorists are trying to guess it.
From this point of view, fundamental research—both theoretical and experimental—relating to “money that can’t be cloned” and “experiments that can’t be simulated” can be regarded as a promising
avenue of investigation into key questions relating to the state-space geometry of QM.
That’s *one* way to read articles on these topics, anyway!
Scott Says:
Comment #20 April 20th, 2010 at 11:37 am
John: Everything remains valid, regardless of your beliefs about the foundations of QM (unless, of course, quantum mechanics were actually overturned, which would be much more interesting than
quantum money).
AnonCSProf Says:
Comment #21 April 21st, 2010 at 2:49 am
Scott, Thanks for the honest and impassioned response. I appreciate that you took the time to respond. I respect your position and find your arguments compelling. I’m convinced!
Evan, You write: “I rather think quantum cryptography itself is overrated, but it is a genuine practical application.” This is an ill-fated example to list. I am a cryptographer, and I would say that
quantum cryptography solves a problem that almost no one has [1], and does so at great cost. (Have you seen the companies trying to sell $50,000 quantum encryptors? It’s a bad joke.) Calling quantum
cryptography “overrated” is a bit of an understatement. And why is it overrated? Because it’s been overhyped, and because (in retrospect) quantum crypto researchers haven’t done a great job of being
upfront about the practical reasons why quantum cryptography is basically irrelevant in most practical settings when they wrote research papers on the subject. Basically, the people who tend to be
most excited about quantum cryptography these days are physicists, and the people who tend to be the least impressed are practitioners.
I’m much more sympathetic to arguments like the ones that Scott has presented, that we don’t know the right questions to ask and it’s important to ask fundamental questions.
I do have a gripe with the names the field has chosen. When experts choose names like “quantum money” and write serious papers about the subject, non-experts take them at their word and assume
they’re being serious and are seriously talking about serious protocols for money that could someday be practically useful. (This is especially problematic when people write papers whose introductions motivate the work by talking about the need for provably secure primitives that do not rely upon unproven complexity-theoretic assumptions, or whatever.) To this outsider, it looks like the field is
to some extent allowing these misconceptions to spread, and profiting from them, similarly to what has happened with quantum cryptography. In this respect, I worry the field is unnecessarily creating
a potential credibility problem for itself. If the field is going to use serious-sounding names whimsically, I think it would behoove the quantum community to find a better way to convey that fact to outsiders.
[1] See, e.g.,
and the comments after this interview:
John Sidles Says:
Comment #22 April 21st, 2010 at 8:40 am
Hmmm … I guess “foundations of quantum mechanics” means different things to different folks.
In particular, is the statement “the state-space of quantum mechanics is a Hilbert space” best regarded—in descending order of certainty—as an axiom so natural as to be irreplaceable; as a
well-established law of nature; as a good working approximation in practical calculations; or as a mere technical convenience?
I used to embrace “a well-established law of nature”, but have pretty much switched over to “a good working approximation” … this view was impressed upon me by (1) the geometric view of dynamics
pioneered by folks like Arnol’d, Mac Lane, Ashtekar and Schilling, Berezin (and many others), coupled with (2) the otherwise inexplicable success of modern quantum simulation codes.
It appears to be a rule of nature that every state-space that arises from thermodynamical processes has a non-trivial geometry … from protein conformation all the way up to space-time itself.
Until proven otherwise, why should the state-space of QM be any different? After all, didn’t it too arise thermodynamically, in the Big Bang?
The key phrase is “until proven otherwise” … that’s why technologies—both experimental and mathematical—for demonstrating that the state-space of QM is a Hilbert space are so very interesting.
It appears (to me) that the mathematical and physical evidence for QM being necessarily a Hilbert-space theory is not really all that strong … especially since today’s QM simulation codes are
astoundingly accurate, and yet (upon inspection) none of them fully embrace linear Hilbert-space orthodoxy.
Scott Says:
Comment #23 April 21st, 2010 at 9:34 am
John, if the state space of QM is not a Hilbert space, that would overturn quantum mechanics, and be the most exciting development in physics in almost a century.
At present, we don’t even have sensible toy theories that agree with quantum mechanics on known experiments, which are based on anything other than Hilbert space. (Where “sensible” means respecting
locality, causality, conservation of probability, etc.)
John Sidles Says:
Comment #24 April 21st, 2010 at 1:58 pm
Scott, if “sensible” means “respecting locality, causality, conservation of probability, etc.”, then is Hilbert-space QM a sensible theory sensu stricto?
I’m just making the mild observation that (linear) QM field theory has some pretty severe defects with regard to locality and causality. So is Hilbert space linearity a safeguard against even worse
problems … or is the linearity itself a contributing cause?
Without intending to formally develop a viable non-Hilbert state-space, haven’t today’s quantum simulationists come a long way toward creating such a theory de facto?
As Mac Lane has pointed out, “mechanics developed by the treatment of many specific problems.” He was referring to classical mechanics, but perhaps QM will prove to be no different.
Raoul Ohio Says:
Comment #25 April 21st, 2010 at 3:17 pm
That is one great figure! Where can I get the T Shirt?
John Sidles Says:
Comment #26 April 22nd, 2010 at 3:35 am
Raoul, t-shirts are ephemeral … but a simple, tasteful tattoo would endure.
Consulting my database for the aesthetic viewpoint on geometric quantum mechanics, I find Karl Hofmann’s Notices of the {AMS} article “Commutative diagrams in the fine arts”, David Henderson and
Diana Taimina’s Mathematical Intelligencer article “Crocheting the hyperbolic plane”, and Taimina’s book Crocheting Adventures with Hyperbolic Planes.
Golly … I see Taimina’s book was this year’s winner of the Diagram Prize … which some British literary folk seem to think is more prestigious even than the Booker!
These works provide an interesting cognitive perspective on the geometric ideas in Abhay Ashtekar and Troy A. Schilling’s article “Geometrical Formulation of Quantum Mechanics” (arXiv:gr-qc/9706069v1). These ideas are well summarized by the wonderful introduction to Taimina’s book (the introduction itself was written by Bill Thurston):
“Our brains are complicated devices, with many specialized modules working behind the scenes to give us an integrated understanding of the world. Mathematical concepts are abstract, so it ends up
that there are many different ways that they can sit in our brains.”
A given mathematical concept might be primarily a symbolic equation, a picture, a rhythmic pattern, a short movie—or best of all, an integrated combination of several different representations. These
non-symbolic mental models for mathematical concepts are extremely important, but unfortunately, many of them are hard to share.
Mathematics sings when we feel it in our whole brain. People are generally inhibited about even trying to share their personal mental models. People like music, but they are afraid to sing. You only
learn to sing by singing.
Vladimir Arnol’d has written in a similar vein:
“Our brain has two halves: one half is responsible for the multiplication of polynomials and languages, and the other half is responsible for orientation of figures in space and all the things
important in real life. Mathematics is geometry when you have to use both halves.”
Perhaps the best single reason to study geometric quantum mechanics, therefore, is to understand better what geometers like Taimina, Thurston, and Arnol’d, Mac Lane, Grothendieck, etc. are talking
about … and thus (slowly and incompletely) come to understand QM in their integrated cognitive style, within their geometric framework, and do practical QM calculations via their powerful
mathematical toolset.
The point is simply that geometric quantum mechanics definitely is beautiful, useful, and fun … and as for whether it is right physically … well … that’s a question for the future!
@article{Hofmann:2002kx, Author = {Karl Heinrich Hofmann}, Journal = {Notices of the {AMS}}, Number = {6}, Pages = {663--668}, Title = {Commutative Diagrams in the Fine Arts}, Volume = {49}, Year = {2002}}
@article{Henderson:01, Author = {D. Henderson and D. Taimina}, Journal = {Mathematical Intelligencer}, Number = {2}, Pages = {17--18}, Title = {Crocheting the hyperbolic plane}, Volume = {23}, Year = {2001}}
@article{Lui:1997eu, Author = {Lui, S. H.}, Journal = {Notices Amer. Math. Soc.}, Number = {4}, Pages = {432--438}, Title = {An interview with {V}ladimir {A}rnol'd}, Volume = {44}, Year = {1997}}
Raoul Ohio Says:
Comment #27 April 22nd, 2010 at 12:28 pm
In a couple hundred million years the {brain, eyes} system has evolved great power for recognizing patterns. Good figures allow this power to be used for more abstract models. My favorite example
concerns the family of Bessel functions, which, like many undergrads, I once did not grok. A sketch in (I think) Magnus and Oberhettinger, “Formulas and Theorems for the Functions of Math Physics” *,
instantly shows how they work, and moves you across the “everything is easy once you know it” divide.
* http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=functions+of+mathematical+physics&x=0&y=0
Greg Kuperberg Says:
Comment #28 April 22nd, 2010 at 8:05 pm
Basically, the people who tend to be most excited about quantum cryptography these days are physicists, and the people who tend to be the least impressed are practitioners.
I am neither a cryptography practitioner nor a physicist, I’m a mathematician. I would be the first to agree that quantum key distribution solves a problem that almost no one has, and that
commercialization of this technique is at best an obscure market and at worst snake oil. I agree that there has been a lot of hype.
Nonetheless, I think that it’s really interesting that quantum key distribution does something that previously seemed impossible. (Namely, it achieves communication secrecy even if the eavesdropper
has unlimited computational resources.) It may sometimes happen that one of the most interesting tools in the toolbox is also one of the least useful. For the time being. When a tool is so
interesting, it might well be useful later for an unforeseen reason.
Besides, this tension between what is useful and what is fascinating is not a new thing in applied mathematics. One clear example is in error-correcting codes, where there are a lot of beautiful
constructions using algebra and number theory, but in the end simpler or less elegant constructions are almost always the most useful. Is it a waste of time to study algebraic geometry over finite
fields if you are a coding theorist? I don’t design cell phones, so I am not qualified to say, but again, speaking as a mathematician, I think that some coding theorists should learn about it.
Or, does algebraic geometry over finite fields create a credibility problem for coding theory? Does quantum key distribution create a credibility problem for quantum computation? Maybe, but not
because of mathematicians. There isn’t much incentive among mathematicians to spew that kind of hype. (We have plenty of incentive for other kinds of hype and bad behavior, but not that kind.) Hype
in quantum computation also bugs me, but it also seems alien to a mathematical audience, alien enough that I don’t think that it makes me look bad.
If one defines mathematicians to be people who prove theorems, then Scott also is one. I don’t mean to speak for him, but I think that he could say the same as me.
Raoul Ohio Says:
Comment #29 April 22nd, 2010 at 11:42 pm
Greg’s definition of a Mathematician misses a major point:
1. A mathematician’s job is using math to solve problems, useful or otherwise. This often involves proving theorems.
2. A PURE mathematician's job is proving theorems, possibly useful, but preferably not. The stylish take pride in never having solved a problem, G.H. Hardy being the standard bearer.
Guess which camp Newton, Gauss, Riemann, Euler, Laplace, Knuth, etc., are in?
In an unfortunate coup 100-odd years ago, the Pure cult, by proving more and more theorems about less and less, took over the establishment. They kept the funding provided by teaching calculus to
future scientists and engineers, who have been complaining from day one that math profs can't calculate the Laplace transform of a cosine.
John Sidles Says:
Comment #30 April 23rd, 2010 at 3:37 am
In the writings of geometers, one often finds vigorous disagreement with the statement “mathematicians are people who prove theorems.”
Here is Vladimir Arnol’d speaking of René Thom:
“I was always delighted by the way in which Thom discussed mathematics, using sentences obviously having no strict logical meaning at all. While I was never able to completely free myself from the
straitjacket of logic, I was forever poisoned by the dream of the irresponsible mathematical speculation with no exact meaning. `One can always find imbeciles to prove theorems’ was, according to
Thom’s students, his principle. “
In Sergei Novikov we find:
“The percentage of good things that may be done rigorously is going to zero; the number of good theorems is increasing, but the ratio is going down rapidly.”
Mac Lane explains at greater length:
Analysis is full of ingenious changes of coordinates, clever substitutions, and astute manipulations. In some of these cases, one can find a conceptual background. When so, the ideas so revealed help
us understand what’s what. We submit that this aim of understanding is a vital aspect of mathematics.
In his discussion of quantum money, when Scott asks for “a compelling intuition”, my own reading of that passage (which possibly differs *greatly* from the meaning that Scott intended) is an
expression of a Mac Lane-style desire for a broader thematic grasp of “what’s what” in quantum information theory, similar to the broader thematic grasp that pioneering algebraic geometers like
Arnol’d, Novikov, Thom, Mac Lane, Grothendieck (and many more) have long been seeking.
Obviously, it is not necessary that everyone agree with the geometers’ point-of-view, or with any other single point-of-view. In fact, universal agreement on thematic questions would indicate that
mathematics/science was slipping into bad health … because healthy ecosystems are diverse ecosystems.
@article{***, Author = {Lui, S. H.}, Journal = {Notices Amer. Math. Soc.}, Number = {4}, Pages = {432–438}, Title = {An interview with {V}ladimir {A}rnol’d}, Volume = {44}, Year = {1997}}
@incollection{***, Address = {Berlin}, Author = {Novikov, Sergei}, Booktitle = {Mathematical research today and tomorrow ({B}arcelona, 1991)}, Pages = {13–28}, Publisher = {Springer}, Series =
{Lecture Notes in Math.}, Title = {R\^ole of integrable models in the development of mathematics}, Volume = {1525}, Year = {1992}}
@book{***, Address = {New York}, Author = {Mac Lane, Saunders}, Publisher = {Springer-Verlag}, Title = {Mathematics, form and function}, Year = {1986}}
Ross Snider Says:
Comment #31 April 24th, 2010 at 11:18 am
I was wondering if you could clear something up that hit me quite hard this morning. Why doesn’t the proof by Baker, Gill, and Solovay that the P versus NP question does not relativize actually settle the question? Couldn’t one imagine (for example) having an oracle to solve the Subset Sum problem in an attempt to solve 3SAT? If we suppose that the Subset Sum problem has a polynomial
time solution then we run into the contradiction set up by Baker et al – so it must have an exponential run time.
I know I’m wrong. I just don’t know how.
Scott Says:
Comment #32 April 24th, 2010 at 1:16 pm
Ross: Don’t worry, you’re indeed wrong. If all you have is a black box that recognizes solutions to a combinatorial search problem, and the only way to gain information about the problem is to call the black box on
various candidate solutions, then an NP machine can find a solution exponentially faster than a P machine (which is limited to trial-and-error).
But NP-complete problems, like Subset Sum and 3SAT, are emphatically not black boxes. They have structure, arising from the actual clauses and variables—and indeed, algorithms like DPLL and simulated
annealing can and do exploit that structure to do somewhat better than brute-force search (e.g., to achieve more moderate exponential runtimes). BGS says nothing about algorithms that exploit the
structure of NP-complete problems, even though understanding such algorithms is the whole substance of the P vs. NP question.
The message of BGS, if you like, is that in the case of P vs. NP, there’s not going to be any clever way around a detailed consideration of problem structure, analogous to how Turing’s self-reference
argument avoided really “understanding the structure” of the halting problem.
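Scott's black-box picture can be made concrete with a small sketch. The toy counter below is my own illustration, not from the comment (the function and variable names are invented): with nothing but a membership test, deterministic trial-and-error may have to query every one of the 2^n candidates, whereas structured instances like 3SAT expose clauses that real algorithms exploit.

```python
def brute_force_search(oracle, n):
    """Search all n-bit candidates, counting oracle queries.

    With black-box access only, the searcher learns nothing except
    yes/no answers, so in the worst case it makes 2**n queries.
    """
    queries = 0
    for candidate in range(2 ** n):
        queries += 1
        if oracle(candidate):
            return candidate, queries
    return None, queries

# Worst case for n = 10: the single solution is the last candidate tried,
# so all 1024 queries are spent.
solution, cost = brute_force_search(lambda x: x == 2 ** 10 - 1, 10)
```

An oracle-relative NP machine, by contrast, can simply guess the witness and verify it with one query, which is the exponential gap BGS formalizes.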
Ross Snider Says:
Comment #33 April 25th, 2010 at 11:50 am
That’s all BGS is saying? Okay, I think that’s where my problem was stemming from. I haven’t read the paper and am still relying on gaining understanding from second-hand sources. My understanding
previous to your explanation was that they showed how to transform any oracle where P=NP into an oracle where P!=NP. This is why I thought it showed that Subset Sum (for example) must be an
exponential time problem (to avoid contradiction). I now understand that they showed results in both directions and that this highlighted for the TCS community the fact that relativizing proof
techniques can’t be used (at least independently) to decide the problem.
Thanks for getting back so quickly.
Sim Says:
Comment #35 May 13th, 2010 at 1:11 pm
@ Scott Aaronson
Hi, I’m afraid it’s nothing related to the present post, but do you plan to say something about this?
Martin Says:
Comment #36 May 13th, 2010 at 5:00 pm
I saw someone mention the Many Worlds interpretation so I thought maybe I could smuggle in a question on that here. I’m a computer scientist so I don’t claim to understand quantum mechanics or any
interpretation thereof very well. Fortunately, I feel Scott and others here tend to discuss the topic in terms that a computer scientist (but layman quantum-wise) can relate to (at least some of the time).
What does the existence/realization of “many worlds” in the many worlds interpretation add? Consider the following transformation of the “many worlds interpretation” (MWI) into a theory I will call
“The random world interpretation” (RWI): In all situations where MWI asserts that the universe “spawns” into multiple universes with different configuration, the RWI interpretation asserts that the
universe randomly selects one of the potential universes with probability proportional to the number of such universes asserted to be created by MWI, and that universe comes to exist. There are no
multiple universes, no branching or anything – just random selection.
To me it seems that if the multiple universes claimed to exist by MWI are inaccessible from each other, there’s no difference compared to the RWI interpretation. Further, RWI is much
easier for me to digest since I already accepted that some processes are truly random.
I’m sure there’s something I have overlooked. Maybe the “spawning” happens in a way that can’t be mapped to a probability distribution? Maybe it is super continuous (but I think that would cause
similar problems for MWI itself – I can’t visualize the sort of continuous tree structure this would imply) and that the many worlds posited to exist by MWI really do add something. I also sometimes
get the feeling that some MWI-people don’t truly think all these universes really exist (whatever it means for something totally inaccessible to you “to exist”) but that it is just a way to explain
it. Other times I get the feeling that they really do believe they exist, and that also seems to make sense given the name of the interpretation. Further, if they don’t believe them to exist, it
becomes even harder to see what they add to the explanation and how the view differs from what all the old quantum guys, say, Schrödinger, thought about it all along?
Maybe someone could recommend a down-to-earth yet not dumbing-down book on the subject?
Martin Says:
Comment #37 May 13th, 2010 at 6:57 pm
OK, to put things more bluntly:
My view on this is that the nice property of MWI that sets it apart from the Copenhagen Interpretation is that a measurement isn’t any different from any other interaction. Your brain/consciousness
doesn’t trigger a magical collapse of a wave function, but clearly as it interacts with an object its own probability distribution (in layman’s terms) becomes more and more
connected to that of the object being observed.
But this crucial property of the theory has nothing to do with the “many worlds” part of the theory – the crucial part is the relative-states / entanglement part of the interpretation which doesn’t
imply anything about “spawning” or the existence of multiple worlds, whatever it might mean.
Then MWI happens to have this unrelated and unfortunate other interpretation in it, namely the part of the many worlds. I view this as a way of interpreting randomness and probabilities in general.
It has no special place in the other part (the relative-state part) of the interpretation. It might as well belong in an interpretation of classical probability theory where it is saying that there’s
some sort of physical reality to the probability trees used there.
I have two points on this:
First, it isn’t an interpretation of randomness that appeals much to me.
Secondly, no matter what one thinks of this, it really ought to be separate (dare I say untangled?) from the interpretation. There should be one interpretation called “the relative state
interpretation” and one called “the many worlds interpretation of randomness”. It is the former that is important as an alternative to the Copenhagen interpretation – not the latter. To make matters
worse, much of the controversy about MWI really derives from the latter! Are you uncomfortable with thought experiments like quantum suicide or whatever quack things people come up with? Just abolish
the “many worlds” part of the interpretation and stick to the relative state part of it. You can go to bed safely in the knowledge that there’s not some copy of you being tortured by Dr. Evil – and
you can be sure that all the quacks that committed quantum suicide really are dead!
asdf Says:
Comment #38 May 14th, 2010 at 6:26 am
This claims that long-lasting quantum entanglement figures into photosynthesis:
Sim Says:
Comment #39 May 14th, 2010 at 9:11 pm
@37: gravity allows closed timelike curves (well… theoretically) so that P/gravity = P/CTC = PSPACE, a bit (!) larger than the NP-complete problems. The puzzle with your answer is that it seems to have nothing
to do with singularities. Or does it?
Sim Says:
Comment #40 May 20th, 2010 at 5:51 pm
Your interpretation seems very close to
(sorry for the delay, I mentioned it a couple of days ago but it’s still “awaiting moderation.”) | {"url":"http://www.scottaaronson.com/blog/?p=444","timestamp":"2014-04-17T01:10:23Z","content_type":null,"content_length":"64354","record_id":"<urn:uuid:d5b75ad1-06b3-4f41-9c0f-6a0096e1bd9f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: August 2004 [00456]
[Date Index] [Thread Index] [Author Index]
Re: Do-loop conversion
• To: mathgroup at smc.vnet.net
• Subject: [mg50265] Re: Do-loop conversion
• From: Paul Abbott <paul at physics.uwa.edu.au>
• Date: Mon, 23 Aug 2004 06:34:54 -0400 (EDT)
• Organization: The University of Western Australia
• References: <cg6s9i$o9l$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
In article <cg6s9i$o9l$1 at smc.vnet.net>,
"Rich Matyi" <rjmatyi at comcast.net> wrote:
> I have a problem that involves converting a Do-loop structure into
> faster-executing version using functional programming. This is an optics
> problem -- specifically, calculating the angle dependence of the X-ray
> reflectivity of an n-layered medium on an infinitely thick substrate. The
> calculation relies on a well-known recursion relation:
> X(j) = [ r(j) + X(j+1) p^2(j+1) ]/[1+ r(j) X(j+1) p^2(j+1)]
> where X(j) is the ratio of the reflected and transmitted X-ray amplitudes
> at the bottom of layer j, p(j+1) is a phase factor (given below), and r(j)
> is the Fresnel coefficient for reflection at the interface between layers j
> and j+1:
> r(j) = [k(z,j) - k(z,j+1)]/[k(z,j) + k(z,j+1)]
> where the wavevector k(z,j) = (2*Pi/wavelength)Sqrt[n^2(j) - Cos(phi)^2],
You have k(z,j) on the left-hand side but wavelength and phi on the
right-hand side. To turn this into a Mathematica function the
relationship between (implicit) variables needs to be made clear.
> n(j) is the complex index of refraction for X-rays for the layer n(j) = 1 -
> delta(j) + I*beta(j), and phi is incident angle of the X-rays. The phase
> factor mentioned above is given by
> p(j) = r(j) exp[-2 (k(j) k(j+1)) s(j+1)] with s being the roughness at
> interface j+1.
Shouldn't these be k(z,j) and k(z,j+1)?
> The recursion layer works because with an infinitely thick substrate, the
> reflected amplitude coming up from the substrate = 0, so at a given angle
> of incidence, you work from the bottom up starting with X(bottom) = 0 and
> add the amplitudes at each interface until you get to the top surface.
Entering the recurrence formula for X(j),
x[j_] := (x[j + 1] p[j + 1]^2 + r[j])/(r[j] x[j + 1] p[j + 1]^2 + 1)
and the bottom condition,
x[5] = 0;
you can compute the results for intermediate layers, say
or for the top layer (depending on your numbering system),
in terms of r[j] and p[j]. These results are valid for _any_ angle phi
and are not all that complicated. You could use dynamic programming
(x[j_] := x[j] = ...) to make this more efficient, if required.
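As a rough cross-check of the recurrence outside Mathematica, here is a bottom-up evaluation of the same relation, X(j) = [r(j) + X(j+1) p^2(j+1)] / [1 + r(j) X(j+1) p^2(j+1)], with X = 0 at the bottom. This is my own sketch: the function name and the sample values in the usage note are invented, and real inputs would be complex numbers derived from delta, beta, phi, the roughness, and the wavelength.

```python
def reflectivity_ratio(r, p):
    # Evaluate X(j) = (r[j] + X(j+1)*p[j+1]**2) / (1 + r[j]*X(j+1)*p[j+1]**2)
    # from the bottom layer up, starting from X = 0 below the substrate.
    # r and p are equal-length sequences indexed by layer (0 = top surface);
    # complex values work unchanged.
    x = 0.0
    for j in range(len(r) - 2, -1, -1):
        t = x * p[j + 1] ** 2
        x = (r[j] + t) / (1 + r[j] * t)
    return x
```

With a single reflecting interface (r = [0.5, 0.0], p = [1.0, 1.0]) this returns 0.5, as expected. Memoization is unnecessary in this form because the loop already visits each layer exactly once, which is also why the per-angle cost is so small once the parameters are numeric.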
> The various functions above -- wavetabc, leftvectmapc, thtestcomp,
> roughrevcomp, coeffcomp, phasecomp, fcoeffcomp, intensitycomp, and
> newdatacomp -- are compiled functions that take structural parameters for
> each layer (index of refraction parameters delta and beta, layer thickness
z, I assume?
> interfacial roughness) by reading them out of a list {params} that is input
> at the start of the program.
Both r[j] and p[j] depend (implicitly) on z, lambda, and phi. Computing
these functions should be very fast for specific parameter values.
> As I said, the above Do-loop works just fine by calculating at each angle
> phi the final intensity at that angle, normalizing it with a background and
> maximum peak factor (also in the {params} list and applied by the
> newdatacomp line) and appending it to the intensity datafile
> newc. Executing this loop for a five-layer system and a couple of hundred
> angular points takes around 0.2 seconds -- not bad, except that I need to
> use this in a genetic algorithm minimization routine which needs many
> hundreds of these curves to be calculated with GA- dictated changes to the
> initial parameter matrix. So: if anyone can give me some guidance on how to
> eliminate the Do-loop over the angular range so I can speed things up, I
> would be very grateful.
If I understand what you're trying to do, I think that you can leave phi
as a symbolic parameter. It appears that all other parameters will be
numerical so that the final result, although complicated, should be
Paul Abbott Phone: +61 8 9380 2734
School of Physics, M013 Fax: +61 8 9380 1014
The University of Western Australia (CRICOS Provider No 00126G)
35 Stirling Highway
Crawley WA 6009 mailto:paul at physics.uwa.edu.au
AUSTRALIA http://physics.uwa.edu.au/~paul | {"url":"http://forums.wolfram.com/mathgroup/archive/2004/Aug/msg00456.html","timestamp":"2014-04-21T02:09:11Z","content_type":null,"content_length":"38548","record_id":"<urn:uuid:16a68e50-86a6-4a97-a4ba-febd83709051>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
Microsoft Office Excel 2007 Formulas and Functions For Dummies
Cheat Sheet
To write formulas and functions in Excel 2007 that manage your data, make sure you understand common, text, and array functions and take advantage of some helpful keyboard shortcuts to work quickly.
To develop the correct data when you write formulas in Excel 2007, be sure to use the right operators, understand references, and follow the correct order of operations. Excel 2007 sends error
messages to help correct problems you may encounter in formulas or functions.
Excel Functions You're Likely to Use
Some Excel functions apply to specific subject areas, but others are general and apply to all needs. The following list shows an array of Excel functions used by one and all. Check here for a quickie
reference to the purpose of each Excel function.
Excel Function: Description
SUM: Calculates the sum of a group of values
AVERAGE: Calculates the mean of a group of values
COUNT: Counts the number of cells in a range that contain numbers
INT: Removes the decimal portion of a number, leaving just the integer portion
ROUND: Rounds a number to a specified number of decimal places or digit positions
IF: Tests for a true or false condition and then returns one value or another
NOW: Returns the system date and time
TODAY: Returns the system date, without the time
SUMIF: Calculates a sum from a group of values, but just of values that are included because a condition is met
COUNTIF: Counts the number of cells in a range that match a criterion
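The two conditional functions at the bottom of the table are the ones that trip people up most. As a rough analogy (the sample data here is invented), SUMIF is a filtered sum and COUNTIF is a filtered count:

```python
values = [10, 25, 3, 25, 8]

# =SUMIF(A1:A5, ">5")  -> add only the values greater than 5
sum_if = sum(v for v in values if v > 5)

# =COUNTIF(A1:A5, 25)  -> count the cells equal to 25
count_if = sum(1 for v in values if v == 25)
```

Here sum_if is 68 (10 + 25 + 25 + 8) and count_if is 2; the 3 is excluded from the sum because it fails the condition.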
Excel Text Functions You'll Find Helpful
Excel's text functions are very helpful when you're working with names, addresses, customer lists, or any other text-based data. Here is list of Excel functions associated with text, along with a
description of what each function does:
Function: Description
LEFT: Extracts one or more characters from the left side of a text string
RIGHT: Extracts one or more characters from the right side of a text string
MID: Extracts characters from the middle of a text string; you specify which character position to start from and how many characters to include
CONCATENATE: Assembles two or more text strings into one
REPLACE: Replaces part of a text string with other text
LOWER: Converts a text string to all lowercase
UPPER: Converts a text string to all uppercase
PROPER: Converts a text string to proper case
LEN: Returns a text string’s length (number of characters)
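A quick way to see what the extraction functions return; note one difference, namely that Excel's MID counts positions from 1, while the Python slices below count from 0 (the sample string is invented):

```python
s = "Spreadsheet"

left_6  = s[:6]       # =LEFT(s, 6)   -> "Spread"
right_5 = s[-5:]      # =RIGHT(s, 5)  -> "sheet"
mid_3_4 = s[2:2 + 4]  # =MID(s, 3, 4) -> "read" (start at position 3, length 4)
length  = len(s)      # =LEN(s)       -> 11
```

The 1-based versus 0-based offset is why MID(s, 3, 4) becomes the slice s[2:6] rather than s[3:7].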
Excel Error Messages to Get to Know
If you create a formula in Excel that contains an error or circular reference, Excel lets you know about it with an error message. A handful of errors can appear in a cell when a formula or function
in Excel cannot be resolved. Knowing their meaning helps correct the problem.
Error: Meaning
#DIV/0!: Trying to divide by 0
#N/A: A formula or a function inside a formula cannot find the referenced data
#NAME?: Text in the formula is not recognized
#NULL!: A space was used in formulas that reference multiple ranges; a comma separates range references
#NUM!: A formula has invalid numeric data for the type of operation
#REF!: A reference is invalid
#VALUE!: The wrong type of operand or function argument is used
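The first error in the table is usually pre-empted in the sheet with =IFERROR(A1/B1, 0). Below is a sketch of the same guard; safe_div and its fallback parameter are invented names for illustration, not Excel's:

```python
def safe_div(a, b, fallback=0):
    """Return a / b, or the fallback when b is zero.

    Mirrors =IFERROR(a/b, fallback); a plain =a/b with b = 0 would
    display the divide-by-zero error instead.
    """
    return a / b if b != 0 else fallback
```

So safe_div(10, 2) gives 5.0, while safe_div(1, 0) quietly gives the fallback 0 instead of raising an error.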
Excel Order of Operations to Keep in Mind
Mathematics dictates a protocol of how formulas are interpreted, and Excel follows that protocol. The following is the order in which mathematical operators and syntax are applied both in Excel and
in general mathematics. You can remember this order by memorizing the mnemonic phrase, Please excuse my dear aunt Sally.
1. Parentheses
2. Exponents
3. Multiplication and division
4. Addition and subtraction
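Most programming languages follow the same order, so it is easy to check how parentheses change a result; the Python lines below mirror =2+3*4^2 and =(2+3)*4^2. (Excel has quirks of its own, e.g. =-2^2 evaluates to 4, so don't assume the two agree in every corner.)

```python
# Exponent first (4 ** 2 = 16), then multiplication (3 * 16 = 48),
# then addition (2 + 48 = 50).
assert 2 + 3 * 4 ** 2 == 50

# Parentheses override the default order: (2 + 3) = 5, then 5 * 16 = 80.
assert (2 + 3) * 4 ** 2 == 80
```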
Excel Cell References Worth Remembering
In Excel formulas, you can refer to other cells either relatively or absolutely. When you copy and paste a formula in Excel, how you create the references within the formula tells Excel what to
change in the formula it pastes. The formula can either change the references relative to the cell where you're pasting it (relative reference), or it can always refer to a specific cell. You can
also mix relative and absolute references so that, when you move or copy a formula, the row changes but the column does not, or vice versa.
Preceding the row and/or column designators with a dollar sign ($) specifies an absolute reference in Excel.
Example: Comment
=A1: Complete relative reference
=$A1: The column is absolute; the row is relative
=A$1: The column is relative; the row is absolute
=$A$1: Complete absolute reference
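The copy-and-paste behaviour the table summarizes can be sketched as a tiny reference-shifting function. This is an illustration only: shift_ref is an invented name, and for brevity it handles single-letter columns only.

```python
import re

def shift_ref(ref, d_row, d_col):
    """Shift an A1-style reference as Excel does when a formula is copied.

    A $ before the column or row pins that part (absolute), so it does
    not move when the formula is pasted d_row rows down and d_col
    columns to the right.
    """
    col_abs, col, row_abs, row = re.fullmatch(
        r"(\$?)([A-Z])(\$?)(\d+)", ref).groups()
    if not col_abs:
        col = chr(ord(col) + d_col)   # relative column shifts
    if not row_abs:
        row = str(int(row) + d_row)   # relative row shifts
    return col_abs + col + row_abs + row
```

Copying one cell down and one right: "A1" becomes "B2", "$A1" becomes "$A2", "A$1" becomes "B$1", and "$A$1" stays "$A$1", matching the four rows of the table.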