Slobodan Vujošević
Prirodno-matematički fakultet, Podgorica, Yugoslavia
Abstract: A subalgebra {\bf A} of a Boolean algebra {\bf B} is {\it large} (in {\bf B}) if there exists a $b\in B$ such that the algebra generated by the set $A\cup \{b\}$ is the whole algebra {\bf B}. In this paper we give a complete description of large subalgebras of a Boolean algebra.
Classification (MSC2000): 06E05
Full text of the article:
Electronic fulltext finalized on: 2 Nov 2001. This page was last modified: 16 Nov 2001.
© 2001 Mathematical Institute of the Serbian Academy of Science and Arts
© 2001 ELibM for the EMIS Electronic Edition
Potrero Math Tutor
Find a Potrero Math Tutor
...I look forward to working with you! I have formally taught 2 years of 10th grade math. I lived in France for 5 years, minored in French in college and lived in West Africa for 2 years, teaching math in French at a local high school.
14 Subjects: including trigonometry, probability, algebra 1, algebra 2
...Today I have hundreds of hours of experience, with the majority in Algebra and Statistics, and I would be comfortable well into college math. During the learning process, small knowledge gaps
from past courses tend to reappear as roadblocks down the line. By identifying and correcting these problems, I help students become effective independent learners for both current and future
14 Subjects: including geometry, linear algebra, probability, algebra 1
...I have always had a passion for science and math and I love to help teach the subject to other students. I have 1 year of tutoring experience through my high school and received perfect math
scores on both my SAT and ACT! I'm very easy to talk to and try to not only make math easier to understand but also make it fun!
6 Subjects: including linear algebra, painting, prealgebra, algebra 1
...During my PhD, I designed many software systems using Python. I have given many academic lectures in Python. I have taught Python to many colleagues at the PhD level.
26 Subjects: including algebra 2, calculus, vocabulary, grammar
...I also taught all subjects at a primary school in Papua New Guinea for 3 months after my freshman year at Stanford, and I am currently a substitute at a Child Development Center. I have been
playing and writing music since I was ten years old. I am currently a professional musician in an Indie/Folk/Island/Jazz band called Ed Ghost Tucker, and I am the predominant songwriter for the
38 Subjects: including algebra 1, biology, vocabulary, grammar
markov chain-fair die
April 3rd 2010, 01:50 AM #1
Feb 2008
markov chain-fair die
A fair die is thrown repeatedly. Let $S_{n}$ be the sum of the outcomes, and let $R_{n}$ be the remainder when $S_{n}$ is divided by 4 (that is, $R_{n}$ is the sum of the first n throws reduced modulo 4).
a) Show that $R_{n}$ is a Markov chain on state space {0,1,2,3}.
b) What are the transition probabilities for this chain?
Can you please give me a numerical example of $R_{n}$ to see what is going on in this chain, as I find it difficult to understand the question.
thanks for any help.
I don't know if this is what you mean by a numerical example but here it goes. Suppose you rolled 4,2,6,1 then R looks like (for n=1,2,3,4) 0,2,0,1.
a) should be relatively simple (use the Markov property of the S_n) and for b), think about it this way: suppose you are at 0 and want to go to 1. You have to roll either a 1 or a 5 (both give remainder 1). So from 0 to 1 has probability 2/6 = 1/3. Now try to figure the other ones out using the same method.
Hope this helps.
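To see the full picture, the transition matrix the reply hints at can be tabulated in a few lines of Python (a sketch, not from the thread): each face d of the die moves the chain from state i to (i + d) mod 4, so every row of the matrix contains the same four probabilities.

```python
from fractions import Fraction

# P[i][j] = probability that one throw moves the remainder from i to j
P = [[Fraction(0)] * 4 for _ in range(4)]
for i in range(4):
    for d in range(1, 7):              # the six faces of a fair die
        P[i][(i + d) % 4] += Fraction(1, 6)

print(P[0])  # [Fraction(1, 6), Fraction(1, 3), Fraction(1, 3), Fraction(1, 6)]
```

So from any state the chain stays put with probability 1/6, and the 0 → 1 step worked out above indeed has probability 2/6 = 1/3.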
April 3rd 2010, 04:45 AM #2
Assignment #1 Computing Prime Numbers Hi, I made a program for Assignment #1, Computing Prime Numbers. I don't know if there is any better program than this? Source code -> http://pastebin.com/91Y5WEnL
The Source Code:
------------------------------------------------------------------
for candiPrimes in range(2,410): #Test Primes Numbers from 2 to 410
    divisor = 1
    NumOfReTimes = 0 #Count how many remainder 0 will show up
    while (divisor<=candiPrimes):
        '''Analogy: numbers start from 2; if exactly two remainder-0 results show up, the number is prime'''
        if candiPrimes % divisor == 0:
            NumOfReTimes += 1
        divisor += 1
    if NumOfReTimes == 2: #test if two remainder 0 did show up
        print ' ' + str(candiPrimes),
------------------------------------------------------------------
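For the "any better program?" question: a common improvement (my suggestion, not from the thread) is to stop trial division at √n, since any composite number must have a divisor no larger than its square root. A Python 3 sketch:

```python
def primes_upto(limit):
    """Return all primes below `limit` using trial division up to sqrt(n)."""
    primes = []
    for n in range(2, limit):
        is_prime = True
        d = 2
        while d * d <= n:          # no proper divisor can exceed sqrt(n)
            if n % d == 0:
                is_prime = False
                break              # stop at the first divisor found
            d += 1
        if is_prime:
            primes.append(n)
    return primes

print(primes_upto(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

This does far less work than counting every remainder up to the candidate itself, and an early `break` skips the rest of the divisors as soon as one is found.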
Need help from a top notch gearhead
I am trying to build a barge with a boom on it. I want a motor to be in charge of pulling up about 6-7 lbs. of weight, holding it for about 5-10 sec., and then releasing it back into the water automatically, continuing the process without stopping. Can someone help me with what size and type of DC motor and other accessories I will need to make this automatically pull and release after a short number of seconds? It's a pretty simple process, I just don't know much about robotics. Are any timers needed to allow the motor to run for a particular amount of time and then release? Is there a motor out there that will run, then stop and spin freely, letting the object drop back into the water? There may be one pulley involved. Letting me know the motor type and size will help tremendously. I know I am asking a lot, but a 2nd question: where can I find a site that will allow me to purchase a geared track that a motor can travel on? Basically a site with different accessories for gears and tracks. Any help will be much appreciated. Thanks
Riverdale, IL Precalculus Tutor
Find a Riverdale, IL Precalculus Tutor
In May 2006 I earned a Bachelor of Science degree at Rose-Hulman Institute of Technology in Biomedical Engineering. There I became proficient in engineering and mathematical applications and was
a TA for a computer graphing course. After college I was employed at a biomedical software engineering ...
13 Subjects: including precalculus, chemistry, algebra 1, algebra 2
...I love to help. Math is my specialty - including calculus, geometry, precalculus, and statistics! I have a Bachelor's of Science from California Institute of Technology (CIT), an incredibly
challenging university.
21 Subjects: including precalculus, chemistry, calculus, statistics
I am a teacher's assistant who is looking to tutor students part-time. I have been a teacher's assistant for 1st and 3rd grade for the past 2 years. I am currently a student at Purdue University
Calumet, and I am majoring in math education.
9 Subjects: including precalculus, calculus, vocabulary, phonics
...I tutor because I love working with children. I am happy to work with anyone who is willing to work and am very patient with students as they try to understand new concepts. I have been in the
Glenview area the past four years and have tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands Academy of the Sacred Heart.
20 Subjects: including precalculus, chemistry, calculus, physics
...I also had the privilege of taking some Refresher Courses meant for Senior teachers. I have nearly 40+ years of teaching mathematics at Senior level (grades 11 and 12) to students from India,
Pakistan, Kuwait and the U.S.A. Besides this I have been preparing students for Engineering Entrance Ex...
14 Subjects: including precalculus, calculus, geometry, statistics
Related Riverdale, IL Tutors
Riverdale, IL Accounting Tutors
Riverdale, IL ACT Tutors
Riverdale, IL Algebra Tutors
Riverdale, IL Algebra 2 Tutors
Riverdale, IL Calculus Tutors
Riverdale, IL Geometry Tutors
Riverdale, IL Math Tutors
Riverdale, IL Prealgebra Tutors
Riverdale, IL Precalculus Tutors
Riverdale, IL SAT Tutors
Riverdale, IL SAT Math Tutors
Riverdale, IL Science Tutors
Riverdale, IL Statistics Tutors
Riverdale, IL Trigonometry Tutors
Nearby Cities With precalculus Tutor
Blue Island precalculus Tutors
Burnham, IL precalculus Tutors
Calumet Park, IL precalculus Tutors
Crestwood, IL precalculus Tutors
Dixmoor, IL precalculus Tutors
Dolton precalculus Tutors
East Hazel Crest, IL precalculus Tutors
Flossmoor precalculus Tutors
Harvey, IL precalculus Tutors
Hazel Crest precalculus Tutors
Merrionette Park, IL precalculus Tutors
Phoenix, IL precalculus Tutors
Posen, IL precalculus Tutors
Robbins, IL precalculus Tutors
South Holland precalculus Tutors
Course Description
Next: Basis of Grade Up: Syllabus and Course Rules Previous: Useful Texts and Web Contents
In this year's course we will cover the following basic topics:
• Very rapid review of Maxwell's equations, wave equation for EM potentials, Green's functions for the wave and Helmholtz equations, magnetic monopoles. You should all know this already, but it
never hurts to go over Maxwell's equations again...
• Plane waves and wave guides. Polarization, propagating modes. (Jackson chapters 7 and 8). This year (fall 2007) Ronen tells me that he got through about the first half of chapter 7, but we'll
probably review this quickly for completeness.
• Radiating systems and multipolar radiation (Jackson chapter 9). We will cover this material thoroughly. We'll do lots of really hard problems for homework and you'll all just hate it. But it'll
be soooo good for you. The new edition of Jackson no longer covers multipoles in two places, but its treatment of vector harmonics is still quite inadequate. We will add a significant amount of
material here and go beyond Jackson alone. We may do a tiny bit of material from the beginning of chapter 10 (scattering) - just enough to understand e.g. blue skies and polarization, and perhaps
to learn of the existence of e.g. critical opalescence. We will not cover diffraction, apertures, etc. as those are more appropriate to a course in optics.
• Relativity (Jackson chapters 11 and 12). We will do a fairly complete job of at least special relativity that will hopefully complement the treatments some of you have had or are having in other courses, but those of you who have lived in a Euclidean world all your lives need not be afraid. Yes, I'll continue to beat you to death with problems. It's so easy. Five or six should take you
• Radiation by moving charges (Jackson chapters 14 and 16). Basically, this uses the Green's functions deduced during our discussion of relativity to show that accelerated charges radiate, and that
as they do so a somewhat mysterious "self-force" is exerted that damps the motion of the particle. This is important, because the (experimental) observation that bound charges (which SHOULD be
accelerating) don't radiate leads to the collapse of classical physics and the logical necessity of quantum physics.
• Miscellaneous (Jackson chapters 10, 13, 15). As noted above, we may look a bit at sections here and there in this, but frankly we won't have time to complete the agenda above as it is without
working very hard. Stuff in these chapters you'll likely have to learn on your own as you need it.
Robert G. Brown 2007-12-28
Hoffman Estates Statistics Tutor
Find a Hoffman Estates Statistics Tutor
...I use accounting data a great deal in finance applications. I am not a CPA, and I have not worked as an accountant in my career, but, I am familiar with basic financial accounting techniques,
and I have helped several students with this subject matter. Generally, I ask that students email me qu...
13 Subjects: including statistics, geometry, algebra 2, algebra 1
...My teaching style is one of: * LISTENING to see what your student is doing; to learn how he or she thinks. * ENCOURAGING students to push a little farther; to show them what they're really
capable of. * EXPLAINING to teach what students need, in ways they can understand and remember! I have o...
21 Subjects: including statistics, chemistry, calculus, geometry
...Having both an Engineering and Architecture background, I am able to explain difficult concepts to either a left or right-brained student, verbally or with visual representations. I am also
great at getting students excited about the subject they are learning by relating it to something relevant...
34 Subjects: including statistics, reading, writing, English
...I also have vast experience in teaching statistical programming languages and softwares which include SPSS, R, SAS, MINITAB, etc. I began my career in statistics as a Graduate Assistant, where I taught Business Statistics for four years. That was while I completed my masters degree and PhD in Stat...
6 Subjects: including statistics, calculus, algebra 2, SPSS
...My passion is for probability/statistics, both in theory and applied material. I took AP Statistics in high school, and received a 5 on the AP Exam. I also have experience programming in SAS
for larger data analysis problems.
5 Subjects: including statistics, algebra 1, prealgebra, probability
derivative with respect to a variable
little rusty with my derivatives...
Question: differentiate En with respect to r
En = -C/r + D^(-r/p)
C, D and p are all constants...
so far I have Eo = -Cr^-2... but then I'm not sure about the exponential.
any help would be appreciated.
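For what it's worth, assuming the second term was meant to be the exponential $D e^{-r/p}$ (the usual Born-Mayer-type form these constants suggest) rather than a literal power of the constant $D$:

$\frac{dE_n}{dr} = \frac{C}{r^{2}} - \frac{D}{p} e^{-r/p}$

Note the sign on the first term: $\frac{d}{dr}\left(-\frac{C}{r}\right) = +\frac{C}{r^{2}}$, so the partial answer above has it flipped; and the chain rule gives $\frac{d}{dr} e^{-r/p} = -\frac{1}{p} e^{-r/p}$. If $D^{-r/p}$ really is meant literally as a power of $D$, then instead $\frac{d}{dr} D^{-r/p} = -\frac{\ln D}{p} D^{-r/p}$.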
Basic Diff EQ question: finding a function with given properties
September 8th 2009, 10:41 PM
Basic Diff EQ question: finding a function with given properties
Find a function f(y) with the property that: for each solution $y(x)$ of $\frac{dy}{dx} = f(y)$, the limiting value $\lim_{x \to +\infty} y(x)$ equals 3 if $y(0) > 0$.
Any help?
September 9th 2009, 04:09 AM
A possible solution of the problem is a function $y(x)$ with the following derivative $y^{'} (x)$...
$y^{'} < 0$ with $y>3$ or $y<0$
$y^{'} =0$ with $y=3$ or $y=0$
$y^{'} >0$ with $0<y<3$
A DE the solution of which has these properties is, among others, the following ...
$y^{'} = 3 y - y^{2}, y(0)=y_{0}>0$
... as you can easily verify...
Kind regards
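The "easily verify" step can also be checked numerically; a quick Euler-method sketch (mine, not part of the original reply) shows every positive starting value being pulled to 3:

```python
def solve_euler(f, y0, x_end, h=0.001):
    """Integrate y' = f(y) from x = 0 to x_end with Euler's method."""
    y = y0
    for _ in range(int(x_end / h)):
        y += h * f(y)                  # one Euler step
    return y

rhs = lambda y: 3 * y - y ** 2         # the proposed right-hand side
for y0 in (0.5, 3.0, 10.0):            # assorted values of y(0) > 0
    print(round(solve_euler(rhs, y0, 10.0), 6))  # 3.0 each time
```

Whether the chain starts below 3, at 3, or above it, the sign pattern of $y^{'}$ listed above drives the solution toward the equilibrium $y = 3$.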
How to Calculate the Mean
Edit Article
Calculating the MeanMean Calculation Help
Edited by Anne Merritt, KnowItSome, Flickety, Eric and 21 others
In mathematics, the "mean" is a kind of average found by dividing the sum of a set of numbers by the count of numbers in the set. While it isn't the only kind of average, the mean is the one most people think of when speaking about an average. You can use means for all kinds of useful purposes in your daily life, from calculating the average time it takes you to get home from work, to working out how much money you spend in an average week.^[1]
Calculating the Mean
1. 1
Determine the set of values you want to average. These numbers can be big or small, and there can be as many of them as you want. Just make sure you are using real numbers and not variables.
2. 2
Add your values together to find the sum. You can use a calculator or a spreadsheet, or do it by hand if the set is simple enough.
3. 3
Count the quantity of values in your group. If you have values that repeat in your set, each one still counts in determining your total.
□ Example: 2,3,4,5, and 6 make for a total of five values.
4. 4
Divide the sum of the set by the count of values. The result is the mean, or average, of your set. This means that if each number in your set was the mean, they would add up to the same total.
□ Example: 20 ÷ 5 = 4
Therefore 4 is the mean of the numbers.
Mean Calculation Help
• Other kinds of averages include the "mode" and the "median." The mode is the value repeated most often in any set. The median is the number in a set with an equal quantity of values in the set greater and smaller than it. These averages will often produce different results than the mean from the same set of numbers.
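All three averages described above are built into Python's standard statistics module; a quick check (the second data set is illustrative, chosen with a repeated value so the mode is defined):

```python
from statistics import mean, median, mode

print(mean([2, 3, 4, 5, 6]))   # the worked example above: 20 / 5 = 4

data = [1, 2, 2, 3, 4]
print(mean(data))    # 2.4
print(median(data))  # 2  (the middle value of the sorted set)
print(mode(data))    # 2  (the most-repeated value)
```

As the note points out, the three averages can disagree: here the mean is 2.4 while the median and mode are both 2.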
College, NY Algebra 2 Tutor
Find a College, NY Algebra 2 Tutor
...In a classroom setting of approximately seven students, I provided students with additional problems and explained theories in order to enhance their understanding of the material. In addition
to MERRP, I also tutored physics, chemistry, and mathematics for UIC campus housing. I am qualified to...
13 Subjects: including algebra 2, chemistry, calculus, physics
...I believe constant interaction with students is crucial to determine what they are absorbing and what needs to be repeated. I try to bolster student confidence with positive reinforcement
whenever possible. Finally, I believe relating the subject matter being taught to their everyday experience helps to keep students interested.
19 Subjects: including algebra 2, reading, algebra 1, SAT math
...I have instructed students in Algebra, Geometry, Algebra II and Trigonometry, and Pre-Calculus. Experienced high school math teacher available to tutor SAT math. I have instructed classified
students at all levels and in most content area subjects.
14 Subjects: including algebra 2, reading, Spanish, accounting
...As a tutor, my unique approach lies in the ability to dissect key concepts so students can first grasp the fundamentals on which they will successfully build the skills necessary to master the
subjects they struggle with. My philosophy is that no one is a failure seeing my belief is that, "failu...
20 Subjects: including algebra 2, English, reading, writing
...I prepare students for tests, help them with their homework, and get them prepared for the regents and the SAT. My teaching makes math feel easy and pleasant. I specialize in SAT, SAT-II(math 1
and 2), Pre-algebra, Algebra I, Geometry, Algebra II, Pre-Calculus, GRE, and GMAT.
14 Subjects: including algebra 2, reading, geometry, SAT math
The Theoretical Interpretation of Spacetime/
Science writers like analogies. Here is one that explains why the different colors of visible|white light travel at different wavelengths and frequencies. The reason has to do with avoiding
interference with regard to the transmission and reception of visible light specifically, and with regard to the entire spectrum of electromagnetic particle-waves [EMPW] in general.
Consider the similarities between a four-minute mile foot race and the speed of electromagnetic particle-waves [EMPW], i.e., the speed of visible light in a vacuum. This is commonly referred to in the science literature, inversely, as the wave-particle duality. Physically one must begin with the particle and then the wave of particles. Since the wave theory was developed first in science writing, the common usage continues to be presented inversely [wave-particle] with respect to how these occur temporally in spacetime.
With that said, consider the 4-minute mile foot race compared to the speed of light traveled in a vacuum during a measured period of one second between two points.
Science writing today defines exactly the speed of visible light as traveled in one second, as being 299,792,458 meters/second. The measurement is obtained as an abstracted straight line between two
selected points: A and B. The speed of light in one second determines the relationship of distance|time. However, given the fact that the electromagnetic particle-wave travels along a curved line, as
in a sine wave, there is no presence of matter-energy along the abstracted straight line; nothing exists there.
In a sense, something similar occurs with the 4-minute mile foot race as measured distance|time, between the Starting and the Finish lines.
Runners occupy different lanes within the race track as shown, with adjustments made at the Starting Line in order that each runner runs exactly one mile distance. The object is to beat the 4-minute mile goal. All runners are required to leave the starting line at the same split-second or be disqualified. But the object is also to beat the other runners to the Finish Line first, ahead of all the other runners.
Visible light: frequencies 4–7.5×10^14 Hz; wavelengths: 750–400 nm
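As a cross-check on the wavelength and frequency figures quoted for visible light above: in standard physics they pair up through the relation λf = c. A short computation with the exact defined value of c (a sketch added here for verification):

```python
c = 299_792_458.0   # defined speed of light in vacuum, m/s

for wavelength_nm in (750, 400):        # red and violet ends of the visible band
    f = c / (wavelength_nm * 1e-9)      # frequency = speed / wavelength
    print(f"{wavelength_nm} nm -> {f / 1e14:.2f} x 10^14 Hz")
```

This reproduces the quoted endpoints: 750 nm comes out near 4×10^14 Hz and 400 nm near 7.5×10^14 Hz.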
With regard to the speed of white light, all electromagnetic particle-waves (different colors) leave at the same time and arrive at the same time. Necessarily for visible light to exercise its being,
the different electromagnetic particle-waves of color leave point A at the same begin moment/time and arrive at point B at the same end moment/time, simultaneously together.
With the 4-minute mile race, winning racers who reach the finish line at the same end moment/time must have a run-off race, a tie-breaker. The colors in visible light must always produce the same arrival time.
The racers in the 4-minute mile foot race have to stick to their own corridor and be sure not to invade the lane of the other racers, or risk being disqualified. With electromagnetic particle-waves,
something similar happens. Each color (EMPW) has its own path, or corridor within which it travels so as not to interfere with the other colors. Each color achieves this by having its own defined
wavelength and frequency.
Each of the foot racers travels along a different corridor that is supposedly the same length, although of a different shape along a curved path on the racetrack. Each color travels about a perceived
straight line from point A to B in one second, each color along a different curvilinear path with a unique wavelength and frequency.
The different corridors established by different wavelengths and frequencies traveled by the white|color electromagnetic particle-waves avoid interference among the different particle-waves.
And, this also means that the particular particle-waves of different paths, wavelengths and frequencies travel the defined one-second course at greater/lesser velocities among themselves. Further,
all of them travel faster than the defined speed of light in a vacuum abstracted as of the straight line between points A and B.
The obvious conclusion, as pointed out in other essays, is that the currently defined speed of light in a vacuum is a limited definition, and does not represent the maximum speed of matter-energy in
spacetime/motion. Also, all electromagnetic particle-waves of visible light (white|color light) travel at superluminal speeds ---meaning above the defined speed of light in a vacuum. Beyond these
limited observations, all electromagnetic particle-waves [EMPWs] travel at velocities greater than the defined speed of light in a vacuum.
In essence, then, it is necessary to take into consideration the superluminal velocities of the different electromagnetic particle-waves and their unique paths, wavelengths and frequencies, in order
to understand the nature of light and its purported measured velocity as restrictively defined by today's science writers.
©2014 Copyrighted by Charles William Johnson, Earth/matriX Editions, P.O. Box 231126, New Orleans, LA 70183-1126
Mill Valley Calculus Tutor
Find a Mill Valley Calculus Tutor
...Many students, who are new to statistics, think of it as “pure math” type of a subject; however there is a lot of real world application in statistics, and not just math. I teach my students
both the mathematical concepts of statistics/probability and how to deal with word problems: recognize th...
14 Subjects: including calculus, statistics, geometry, algebra 2
...I have taught my younger sister piano and my friends' kids Mandarin and math. For the past several years of working as a professional tutor, I have tutored students of all ages and backgrounds
in elementary math, algebra, geometry, statistics/biostatistics, and Mandarin. One of my recent assignments was tutoring a 14-year-old girl Mandarin.
22 Subjects: including calculus, geometry, statistics, biology
...I have spent the past 2 years as a volunteer math instructor. I am very patient and have lots of experience helping struggling math students. My hours are flexible.
7 Subjects: including calculus, algebra 1, algebra 2, SAT math
...While I was there, I also took various Calculus courses and courses in other areas of math that built on what I learned in high school. I'm a definite believer in the value of knowing the ways
the world works, and the value of a good education. That said, I’ve been through the education system, and have seen its flaws, and places where it could work better.
6 Subjects: including calculus, physics, algebra 1, algebra 2
...My first goal is to help them with their current difficulties and homework. But then I take them back to the beginning, find out what they missed learning, and correct that. Math is like
building a brick wall, each layer relies on a solid foundation.
10 Subjects: including calculus, geometry, precalculus, algebra 1 | {"url":"http://www.purplemath.com/Mill_Valley_calculus_tutors.php","timestamp":"2014-04-17T01:29:41Z","content_type":null,"content_length":"24024","record_id":"<urn:uuid:8a4bddb8-fb0f-4adf-a7fd-034a6550b2c5>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wednesday, August 18, 2004
12:15 am
Ah, it won't take 'gerd.'
12:27 am
:( Snarf is SO a word!
12:29 am
runningforhome - I feel your pain. Ever since I discovered it was not accepted, I don't even see it in the grids anymore. Snarf, snarf.
12:38 am
50 for 464. Five 6's so far... tchauzinho a todos (bye-bye, everyone)...
12:40 am
45 for 495. seven 6's and an _8_.
it's nice to see an S again.
12:40 am
between the fraus, the trois, and the fois, i'm going loopy.
12:48 am
9 6's at this point
1:15 am
78/789 no 8; 11/6's g'night
1:28 am
I can only find three sixes... :(
Goodnight all.
1:37 am
My score appears to have gone down to 80? While I dont care about the rankings, how the heck did i manage that?
1:48 am
79/800: 1/8, 1/7, 8/6's
Night all...
1:54 am
56 words, only 4 6s, and I'm headed for bed. 'Night.
2:55 am
6 X 6
3 start with R, 3 with S
5:58 am
72 words 4.90 1000
6:43 am
I have a 6 starting with T
6:50 am
and an 8 for 45
7:02 am
I'm not playing today. Here are stats:
170 words
8:33 am
Here is a hint.... 7 for 28.. its an animal that louisiana imported and regrets it.
8:41 am
Swtbtrcup - 'fraid its not much of a clue for us europeans... ;(
8:47 am
or us Northerners.
9:06 am
sure, but a google search finds it. :)
9:31 am
found it but it took several different google searches before I had the correct one.
9:51 am
I couldnt make it too easy :) I figured a small search and you would find it.... all I can say is ewwwwwwwwwww one of the girls I work with has eatten it.
10:08 am
I found the 8 for 45!
10:09 am
nice one!
given up on Louisiana nasty pest animal thingy.
10:30 am
Its nutritious but not delicious...
10:42 am
Sorry, brain won't work the clue. maybe I need a break.
10:43 am
re: the 7: thanks...read a bit about it, how very interesting! ergh
9 g's??? i have 5
10:45 am
Here's an annoyance for British Babblers - 'todger' isn't allowed! :(
10:48 am
"Its nutritious but not delicious... "
They make coats out of them...
10:49 am
So JLM_MI how do you get to find out all the words in the grid?
10:52 am
aj - I used a boggle solver. There are many on the web. I use 2.
If I'm going to play a puzzle, I use one that will give me a total word count for each word length. I make a very absolute point to not look at the words, otherwise what's the point of playing. I
just want the number. BUT, that solver does not use the same dictionary as babble, so it can be wrong by a few words, which can be frustrating.
Today, since I'm not playing, I used a solver that does use the same dictionary, but just gives the list of words ordered by word length and alphabetically, then I counted. This obviously doesn't
work if you want to play, since you can't help but see most of the words.
I'm sure there are or have been people who use these to "cheat" and just put in all the words for a perfect score. Doesn't matter to me, but talk about boring! :-D
10:53 am
Well I guess the word Traif isn't as widely used as Kosher is. Go figure. Lawyer-1, you here?
10:59 am
ah ha! I found the 7/28 animal a completely different way... building on a word i knew often was combined with others. When I got a 7/28 I searched on Google for that word and found it. Weird and
wonderful :) Reminds me of capybaras...
11:32 am
If you watch "The Insomniac", Dave Attell hunted these (the 7 letter rodent) with the local SWAT team one night.
12:15 pm
I found an 8 for 55, 9 for 66 and 10 for 84. The ten is a plural of the nine.
12:18 pm
There's a really great 8 for 45 (starts with a d)
12:19 pm
What does the 9 start with?
12:27 pm
Well, I've got the ROUS (Rodent Of Unusual Size.) What does the other 7 start with?
12:27 pm
The 9 starts with a P
12:34 pm
Don't anyone tell my children that unsort is not a word! Bah!
12:58 pm
could anyone give a clue for the 9
1:49 pm
Thanks for the 10 Claws. I've had the 9 for ages and was SO excited about it i totally forgot to plural it. ;)
The ROUS I'm still stuck on (any other clues muchly appreciated) and am looking for the 8 starting with P (I have the one starting with U)
1:52 pm
I mean the 8 starting with D (not P)
the 9 clue:
A civil servant in the dept. of employment could very likely be one of these.
2:01 pm
tent is back again!!!
2:08 pm
yikes finally got the 9 and 10 had a related 8, hours ago that seemed like it made sense but wouldnt take
2:31 pm
124/1404 and thats enuf
2:44 pm
awwww doesn't take Pretoria
2:55 pm
Has anybody used the 'x'?
3:16 pm
Atlante - you're on the right track for a 9 and 10 with that, though...
Hints for any of the eights, anyone?
3:25 pm
oh thanks rfh!! got 'em :)
6:28 pm
I would love clues for the 8 letter words please.
6:46 pm
HELLLOOOOOO, is any one out there?
6:59 pm
Re one 8 letter word--Starts with "D", means loose rock. Gotta run. Good luck.
7:02 pm
Thanks for the clue. Finally found that AND that nasty rous :)
9:48 pm
If you can unseat someone, why can't they be unsat? huh, huh, huh?
10:02 pm
Good Grief, I finally got the 9 and 10 and then had to look it up. Had no clue what it was.
10:11 pm
Anybody have a clue for the 8 for 55?
10:19 pm
Lilecama - R.O.U.S.'s is a reference from a movie called The Princess Bride, just in case you did not already know that :)
10:20 pm
Never mind- we just found it!
10:27 pm
Prosun - means in favor of the sun... won't take it tho.
10:34 pm
Loose rock he says. I've always called loose rocks scree.
10:47 pm
50 4.68 610, not bad for a newbie. I'm off to bed, the baby gets up early. | {"url":"http://www.playbabble.com/512/08/18/2004","timestamp":"2014-04-16T14:00:16Z","content_type":null,"content_length":"75800","record_id":"<urn:uuid:fde8ab24-d0db-4e3f-ae5e-bab3b182e4a2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rhombified Stellations of Snub Uniform Polyhedra
George Hart's 'propellohedra', (http://www.georgehart.com/propello/propello.html) introduce a form of polyhedra containing both regular polygons and kite shaped quadrilaterals.
An alternative way to form these polyhedra is to stellate a uniform snub polyhedron. A uniform snub polyhedron typically consists of m-gons, n-gons and snub triangles. If the snub triangles edge
adjacent to either the {m} or {n} faces are extended such that they meet in an {m} or {n}-acral peak above the centre of the original face, then a polyhedron consisting of the remaining {n} or {m}
-gonal faces and kites is the result.
These figures can be relaxed such that the kites become rhombi. George Hart discusses the icosahedral case at http://www.georgehart.com/zomebook/green-giant.html and in http://www.georgehart.com/
A systematic search through the snub uniform polyhedra generates a number of rhombified stellations, in each case, where the vertex figure is given, the bracketed polygon is the one that has been
stellated. It does appear to be the case that if the replaced polygon ({m} or {n} above) is a triangle, then the resulting kites relax into three coplanar rhombi which form a compound hexagon. The
overall polyhedron which results is then similar to a uniform polyhedron:
Convex Snubs
Icosahedron (3)-3-3-3-3 (Snub tetratetrahedron, Tetrahedrally stellated) - 'Rhombi-propello-tetrahedron' - hexagonal faces
Snub Cuboctahedron (4)-3-3-3-3 'Rhombi-propello-octahedron' (above left)
Snub Cuboctahedron 4-3-(3)-3-3 'Rhombi-propello-cube' - hexagonal faces
Snub Icosidodecahedron (5)-3-3-3-3 'Rhombi-propello-icosahedron' (above right)
Snub Icosidodecahedron 5-3-(3)-3-3 'Rhombi-propello-dodecahedron' - hexagonal faces
Non-convex Snubs
Great Vertisnub Icosidodecahedron (^5/[3])-3-3-3-3 'Rhombi-propello-great-icosahedron ?' (above top left)
Snub Dodecadodecahedron (^5/[2])-3-5-3-3 (above top right)
Snub Dodecadodecahedron ^5/[2]-3-(5)-3-3 (above bottom left)
Snub Disicosidodecahedron (^5/[2])-3-3-3-3-3 (above bottom right)
Snub Disicosidodecahedron ^5/[2]-3-(3)-3-3-3 - hexagonal faces
Great Snub Icosicosidodecahedron (^5/[2])-3-^5/[3]-3-3-3 - degenerate?
An omission from the above list is the Great Snub Icosidodecahedron (^5/[2])-3-3-3-3, as the stellated form of this figure will not relax into a rhombified form.
Other snub figures can be similarly processed to produce pseudo-hexagonal faces, but where non-triangular faces are considered they call for augmentation of the edges around a retrograde face, or
around a prograde face where the dihedral angles involved are > 90 degrees. The adjacent edges will not meet at a point so no 'kite' faces are possible.
The anti-prisms can also be considered as snub polyhedra, and whereas all cases with n>2 can be stellated to give kite shaped faces, only in the case of n=3 can the resulting figure be relaxed into
having rhombic faces, in which case the result is a cube.
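The n = 3 antiprism case can be checked with a short computation. The sketch below is not from the original page (the coordinates and function names are illustrative choices): it realises the triangular antiprism as the octahedron with vertices at the unit axis points, extends the planes of the three side triangles adjacent to the top cap until they meet in a peak, and confirms that each enlarged kite already has four equal edges, i.e. is a rhombus.

```python
# Stellate the top triangular cap of a triangular antiprism (realised as
# the octahedron with vertices at the six unit axis points): the three side
# triangles edge-adjacent to the cap are extended until their planes meet.

def plane(p, q, r):
    """Plane through p, q, r, returned as (normal n, offset d) with n . x = d."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    return n, sum(n[i] * p[i] for i in range(3))

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def meet(planes):
    """Common point of three planes, by Cramer's rule."""
    rows = [n for n, _ in planes]
    rhs = [d for _, d in planes]
    D = det3(rows)
    point = []
    for col in range(3):
        m = [list(r) for r in rows]
        for k in range(3):
            m[k][col] = rhs[k]
        point.append(det3(m) / D)
    return point

# The three side triangles sharing an edge with the cap {(1,0,0),(0,1,0),(0,0,1)}.
side = [[(1, 0, 0), (0, 1, 0), (0, 0, -1)],
        [(0, 1, 0), (0, 0, 1), (-1, 0, 0)],
        [(0, 0, 1), (1, 0, 0), (0, -1, 0)]]
apex = meet([plane(*t) for t in side])          # peak above the cap: (1, 1, 1)

# One of the three kites: the first side triangle enlarged up to the apex.
kite = [(0, 0, -1), (1, 0, 0), tuple(apex), (0, 1, 0)]
dist = lambda p, q: sum((p[i] - q[i]) ** 2 for i in range(3)) ** 0.5
edges = [dist(kite[i], kite[(i + 1) % 4]) for i in range(4)]   # all sqrt(2)
```

All four edges come out equal, so each kite is already a rhombus; the six rhombi (three per cap) bound a rhombohedron, which is combinatorially a cube and relaxes to one.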
I am indebted to Mason Green for suggesting that various polyhedra with kite shaped faces could be relaxed into rhombic forms.
The original stellations were produced using Great Stella and relaxed using HEDRON. | {"url":"http://www.orchidpalms.com/polyhedra/rhombic/rhomstel.htm","timestamp":"2014-04-21T09:50:43Z","content_type":null,"content_length":"7692","record_id":"<urn:uuid:2228182e-c9c1-464f-9e8e-c3f6b0a820ac>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra II question drawing below | {"url":"http://openstudy.com/updates/50bbec59e4b0017ef6254cfe","timestamp":"2014-04-20T16:19:29Z","content_type":null,"content_length":"68845","record_id":"<urn:uuid:329f1887-943a-42be-8317-7dc5bd5cf88b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability Question
Something is wrong with the wording there. You expect 1 outcome, 1 out of a possible 52 outcomes. They might mean for the number of outcomes where the card is red. Clearly, this is 26. As for "what
type of event..." you should probably look in your book, maybe they've given some definitions or classifications for types of events, and they want you to categorize this event as one of those. | {"url":"http://www.physicsforums.com/showthread.php?p=251274","timestamp":"2014-04-16T10:23:54Z","content_type":null,"content_length":"24683","record_id":"<urn:uuid:2bab63cf-8e22-424e-87c0-aa792dea3085>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
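For concreteness, the counting in the reply above can be spelled out in a trivial sketch (mine, not from the thread):

```python
# Count the outcomes for the event "the drawn card is red" in a standard deck.
ranks = 13
red_suits = ["hearts", "diamonds"]
black_suits = ["clubs", "spades"]

total_outcomes = ranks * (len(red_suits) + len(black_suits))  # 52 cards
favourable = ranks * len(red_suits)                           # 26 red cards
probability = favourable / total_outcomes                     # 26/52 = 0.5
```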
Equational reasoning via partial reflection
"... We present C-CoRN, the Constructive Coq Repository at Nijmegen. It consists of a library of constructive algebra and analysis, formalized in the theorem prover Coq. In this paper we explain the
structure, the contents and the use of the library. Moreover we discuss the motivation and the (possible) ..."
Cited by 18 (9 self)
Add to MetaCart
We present C-CoRN, the Constructive Coq Repository at Nijmegen. It consists of a library of constructive algebra and analysis, formalized in the theorem prover Coq. In this paper we explain the
structure, the contents and the use of the library. Moreover we discuss the motivation and the (possible) applications of such a library.
- Journal of Symbolic Computation, Special Issue on the Integration of Automated Reasoning and Computer Algebra Systems , 2002
"... Abstract. We describe a framework for algebraic expressions for the proof assistant Coq. This framework has been developed as part of the FTA project in Nijmegen, in which a complete proof of
the fundamental theorem of algebra has been formalized in Coq. The algebraic framework that is described her ..."
Cited by 14 (7 self)
Add to MetaCart
Abstract. We describe a framework for algebraic expressions for the proof assistant Coq. This framework has been developed as part of the FTA project in Nijmegen, in which a complete proof of the
fundamental theorem of algebra has been formalized in Coq. The algebraic framework that is described here is both abstract and structured. We apply a combination of record types, coercive subtyping
and implicit arguments. The algebraic framework contains a full development of the real and complex numbers and of the rings of polynomials over these fields. The framework is constructive. It does
not use anything apart from the Coq logic. The framework has been successfully used to formalize non-trivial mathematics as part of the FTA project.
"... We describe a framework of algebraic structures in the proof assistant Coq. We have developed this framework as part of the FTA project in Nijmegen, in which a constructive proof of the
Fundamental Theorem of Algebra has been formalized in Coq. The algebraic hierarchy that is described here is both ..."
Cited by 11 (0 self)
Add to MetaCart
We describe a framework of algebraic structures in the proof assistant Coq. We have developed this framework as part of the FTA project in Nijmegen, in which a constructive proof of the Fundamental
Theorem of Algebra has been formalized in Coq. The algebraic hierarchy that is described here is both abstract and structured, defining e.g. a ring as a tuple consisting of a group, a binary operation and a
constant that together satisfy the properties of a ring. In this way, a ring automatically inherits the group properties of the additive subgroup. The algebraic hierarchy is formalized in Coq by
applying a combination of labeled record types and coercions. In the labeled record types of Coq, one can use dependent types: the type of one label may depend on another label. This allows one to give a
type to a dependent-typed tuple like ⟨A, f, a⟩, where A is a set, f an operation on A and a an element of A. Coercions are
"... We have finished a constructive formalization in the theorem prover Coq of the Fundamental Theorem of Calculus, which states that differentiation and integration are inverse processes. In this
formalization, we have closely followed Bishop's work ([4]). In this paper, we describe the formalization i ..."
Cited by 8 (0 self)
Add to MetaCart
We have finished a constructive formalization in the theorem prover Coq of the Fundamental Theorem of Calculus, which states that differentiation and integration are inverse processes. In this
formalization, we have closely followed Bishop's work ([4]). In this paper, we describe the formalization in some detail, focusing on how some of Bishop's original proofs had to be refined, adapted
or redone from scratch.
"... Abstract. We estimate the cost of formalizing a proper standard library for proof checking of mathematics in the spirit of the QED project. Apparently it will take approximately 140 man-years.
This estimate does not include the development of the proof checking program, nor does it include work on t ..."
Cited by 6 (0 self)
Add to MetaCart
Abstract. We estimate the cost of formalizing a proper standard library for proof checking of mathematics in the spirit of the QED project. Apparently it will take approximately 140 man-years. This
estimate does not include the development of the proof checking program, nor does it include work on the metatheory of that program. This should discourage any individual or small research group to
think they can reach anything like the goal of the QED project on their own.
- Types for Proofs and Programs, Intl. Workshop (TYPES 2000), LNCS 2277 , 2000
"... In type-theory based proof systems that provide inductive structures, computation tools are automatically associated to inductive de nitions. Choosing a particular representation for a given
concept has a strong inuence on proof structure. We propose a method to make the change from one represe ..."
Cited by 3 (0 self)
Add to MetaCart
In type-theory based proof systems that provide inductive structures, computation tools are automatically associated to inductive definitions. Choosing a particular representation for a given concept
has a strong influence on proof structure. We propose a method to make the change from one representation to another easier, by systematically translating proofs from one context to another. We show
how this method works by using it on natural numbers, for which a unary representation (based on Peano axioms) and a binary representation are available. This method leads to an automatic translation
tool that we have implemented in Coq and successfully applied to several arithmetical theorems.
"... Abstract. The technique of reflection is a way to automate proof construction in type theoretical proof assistants. Reflection is based on the definition of a type of syntactic expressions that
gets interpreted in the domain of discourse. By allowing the interpretation function to be partial or even ..."
Cited by 3 (3 self)
Add to MetaCart
Abstract. The technique of reflection is a way to automate proof construction in type theoretical proof assistants. Reflection is based on the definition of a type of syntactic expressions that gets
interpreted in the domain of discourse. By allowing the interpretation function to be partial or even a relation one gets a more general method known as ``partial reflection''. In this paper we show
how one can take advantage of the partiality of the interpretation to uniformly define a family of tactics for equational reasoning that will work in different algebraic structures. The tactics then
follow the hierarchy of those algebraic structures in a natural way.
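The reflection technique described in this abstract can be illustrated outside Coq. The sketch below is my own Python analogue, not code from the paper: it builds a type of syntactic expressions, an interpretation function into a domain of discourse, and a syntactic normalizer that decides equations up to associativity by comparing normal forms — the core idea behind reflective tactics for equational reasoning.

```python
# Toy "proof by reflection" for an associative binary operation:
# syntax is normalized once, and equal normal forms imply equal
# interpretations in any semigroup.
from dataclasses import dataclass

class Expr:
    pass

@dataclass
class Var(Expr):
    index: int          # position in the environment

@dataclass
class Op(Expr):
    left: Expr
    right: Expr

def interp(e, env, op):
    """Interpretation function: map syntax into the domain of discourse."""
    if isinstance(e, Var):
        return env[e.index]
    return op(interp(e.left, env, op), interp(e.right, env, op))

def flatten(e):
    """Normal form for an associative operation: the list of variable leaves."""
    if isinstance(e, Var):
        return [e.index]
    return flatten(e.left) + flatten(e.right)

def same_modulo_assoc(e1, e2):
    """Decide e1 = e2 up to associativity by comparing normal forms."""
    return flatten(e1) == flatten(e2)

# (x * y) * z versus x * (y * z): syntactically different, but equal
# after normalization, so their interpretations agree for any
# associative op and any environment.
lhs = Op(Op(Var(0), Var(1)), Var(2))
rhs = Op(Var(0), Op(Var(1), Var(2)))
```

In the actual technique, the soundness of the normalizer is proved once and for all, so each concrete equation is discharged by a single computation instead of a chain of rewrites; "partial" reflection additionally allows the interpretation to be a partial function or even a relation, as the abstract explains.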
- in `Theorem Proving in Higher Order Logics, TPHOLs 2003', Vol. 2758 of LNCS , 2001
"... The correctness of proofs is increasingly being veried with computer programs called `proof checkers'. Examples of such proof checkers are Mizar, ACL2, PVS, Nuprl, HOL, Isabelle and Coq. This
paper addresses what is one of the most important problems for that kind of system, which is how to deal wit ..."
Cited by 2 (0 self)
Add to MetaCart
The correctness of proofs is increasingly being verified with computer programs called `proof checkers'. Examples of such proof checkers are Mizar, ACL2, PVS, Nuprl, HOL, Isabelle and Coq. This paper
addresses what is one of the most important problems for that kind of system, which is how to deal with partial functions and the related issue of how to treat undefined terms. In many systems the
problem is avoided by artificially making all functions total. However that does not correspond to the practice of everyday mathematics. In type theory partial functions are modeled by giving
functions extra arguments which are proof objects. Because of that it is not possible to apply a function outside its domain. However having proofs as first class objects makes the logic non-standard.
This has the disadvantages that it is unfamiliar to most mathematicians and that many proof tools won't be usable for it. For instance a theorem prover like Otter cannot be easily used for this kind
of logic. Also expressions in type theoretical systems get clumsy because they contain proof objects. The PVS system solves the problem of partial functions differently. PVS generates type-correctness
conditions or TCCs for statements in its language. These are proof obligations that have to be satisfied `on the side' to show that the statements are well-formed. In this paper we relate the type
theoretical approach to one resembling the PVS approach. We add domain conditions to ordinary first order logic (which in this paper will be classical and one-sorted) and we show that the combination
corresponds precisely to a first order system that treats partial functions in the style of type theory.
, 2002
"... We have finished a constructive formalization in the theorem prover Coq of the Fundamental Theorem of Calculus, which states that differentiation and integration are inverse processes. This
formalization is built upon the library of constructive algebra created in the FTA (Fundamental Theorem of Alg ..."
Cited by 2 (0 self)
Add to MetaCart
We have finished a constructive formalization in the theorem prover Coq of the Fundamental Theorem of Calculus, which states that differentiation and integration are inverse processes. This
formalization is built upon the library of constructive algebra created in the FTA (Fundamental Theorem of Algebra) project, which is extended with results about the real numbers, namely about
(power) series. Two important issues that arose in this formalization and which will be discussed in this paper are partial functions (different ways of dealing with this concept and the advantages
of each different approach) and the high level tactics that were developed in parallel with the formalization (which automate several routine procedures involving results about real-valued | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=132749","timestamp":"2014-04-17T05:24:30Z","content_type":null,"content_length":"34306","record_id":"<urn:uuid:561bffe7-d0ca-4cbf-9f80-93de28b30984>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00227-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reference Request: Perspective Painting
up vote 6 down vote favorite
What is a good book/article explaining the mathematics behind perspective painting? I have already looked at the wikipedia article on the topic, so I am looking for something more advanced than this.
I am a research mathematician of limited artistic ability and knowledge.
books geometry soft-question reference-request
2 The fascinating case about Mathematics, Murder and Art behind its (re)invention in the Renaissance: perlentaucher.de/buch/25202.html AMS Notices on the painting: ams.org/notices/200703/comm-cass.pdf – Thomas Riepe May 9 '10 at 8:30
6 Answers
The geometry of an art by Kirsti Andersen (amazon)
Mathematics for the non-mathematician by Morris Kline (See Chapter 10- math and painting in the renaissance)
Mathematics and its history by John Stillwell (See chapter 8 on Projective Geometry)
George Francis' "A Topological Picturebook" provides somewhat of an overview about perspective, and is very helpful for learning to draw. It's very easy to ogle his diagrams.
Google Books (option full view) has many books on perspective, free for downloading or perusing. Perspective being a rather ancient subject should be very adequately covered in a number of these books (among those that you can actually download). The quality and completeness of the scans should be verified as they are often defective.
More on the artistic side I appreciated
The Invention of Infinity: Mathematics and Art in the Renaissance by J.V. Field
Its theme is the interaction of mathematical and artistic inquiries as characteristic of Western art in the Renaissance, with perspective and precise description of geometrical forms (such
as polyhedra) as turning point, and embodied in several key artists such as Piero della Francesca, Leonardo da Vinci, Albrecht Dürer.
The same author has written a book dedicated to Piero della Francesca:
Piero Della Francesca: A Mathematician's Art
Other sources are books about the camera obscura and pinhole photography.
More contemporary: the techniques used to enhance digital images and their perspective with mathematical models of camera lens deformation have given birth to relatively sophisticated
applied mathematics. There are a few private companies such as DxO selling software to correct (among other things) perspective in image files by reversing the nonlinear effects of
multiple lens systems found in cameras.
You might look at:
which provides the contributions of Brook Taylor and places them in historical perspective.
According to Leonardo da Vinci, the best way to learn perspective painting is to get a framed pane of glass (an empty pictureframe) and something you can mark it with. Hold it up to the
perspective you want to draw and trace the lines (as quoted in Ruskin, The Elements of Drawing).
Ruskin himself says that most of the 'great artists' had a minimal grasp of perspective.
Another great book on this topic is Secret Knowledge by David Hockney.
My opinion is that you should never let your judgment of a drawn object be determined solely by the underlying geometry. We have two eyes, not one -- that means that, in order to appear 3d, objects drawn on a flat piece of paper must somehow appear more than what they are.
The only way to do this is to distort the perspective/dimensions of the object and 'harmonize' it between what the left eye sees and what the right eye sees. Some enterprising
mathematician could come and formalize this someday, but for now it's enough to simply 'eyeball' as well as using geometry. Your eyes and hands will naturally choose the shape that
appears more 3d in binocular vision.
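Leonardo's glass-pane exercise mentioned in the first answer is exactly a central projection, which is easy to state in code (an illustrative sketch of my own, not taken from any of the cited books): the eye sits at the origin, the pane is the plane z = d, and a scene point is traced to where its sight line meets the pane.

```python
# Central projection onto a picture plane: the mathematical content of
# tracing a scene on a pane of glass held in front of one eye.

def project(point, d=1.0):
    """Project a 3D scene point (x, y, z), with z > 0, onto the plane z = d."""
    x, y, z = point
    t = d / z                     # the sight line t * (x, y, z) hits z = d
    return (t * x, t * y)

# Two scene points at the same (x, y) but different depths land at different
# pane positions; as z grows, the image moves toward the centre, which is
# how vanishing points arise for lines parallel to the viewing direction.
near = project((1.0, 1.0, 2.0))   # (0.5, 0.5)
far = project((1.0, 1.0, 10.0))   # (0.1, 0.1)
```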
| {"url":"http://mathoverflow.net/questions/23977/reference-request-perspective-painting/23979","timestamp":"2014-04-20T21:15:01Z","content_type":null,"content_length":"68391","record_id":"<urn:uuid:43662ae3-3b09-4b75-b289-1e5695ad47f5>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
13 search hits
Refactoring the UrQMD model for many-core architectures (2013)
Jochen Gerhard
Ultrarelativistic Quantum Molecular Dynamics is a physics model to describe the transport, collision, scattering, and decay of nuclear particles. The UrQMD framework has been in use for nearly 20
years since its first development. In this period computing aspects, the design of code, and the efficiency of computation have been minor points of interest. Nowadays an additional issue arises
due to the fact that the run time of the framework does not diminish any more with new hardware generations. The current development in computing hardware is mainly focused on parallelism.
Especially in scientific applications a high order of parallelisation can be achieved due to the superposition principle. In this thesis it is shown how modern design criteria and algorithm
redesign are applied to physics frameworks. The redesign with a special emphasise on many-core architectures allows for significant improvements of the execution speed. The most time consuming
part of UrQMD is a newly introduced relativistic hydrodynamic phase. The algorithm used to simulate the hydrodynamic evolution is the SHASTA. As the sequential form of SHASTA is successfully
applied in various simulation frameworks for heavy ion collisions its possible parallelisation is analysed. Two different implementations of SHASTA are presented. The first one is an improved
sequential implementation. By applying a more concise design and avoiding unnecessary memory copies, the execution time could be reduced to half of the FORTRAN version’s execution time. Memory usage could be reduced by 80% compared to the original version. The second implementation concentrates fully on the usage of many-core architectures and deviates
usage of memory could be reduced by 80% compared to the memory needed in the original version. The second implementation concentrates fully on the usage of many-core architectures and deviates
significantly from the classical implementation. Contrary to the sequential implementation, it follows the recalculate instead of memory look-up paradigm. By this means the execution speed could
be accelerated up to a factor of 460 on GPUs. Additionally a stability analysis of the UrQMD model is presented. Applying metaprogramming, UrQMD is compiled and executed in a massively parallel
setup. The resulting simulation data of all parallel UrQMD instances were thereafter gathered and analysed. Hence UrQMD could be shown to be highly stable with respect to the uncertainty of experimental data.
As a further application of modern programming paradigms a prototypical implementation of the worldline formalism is presented. This formalism allows for a direct calculation of Feynman
integrals and constitutes therefore an interesting enhancement for the UrQMD model. Its massively parallel implementation on GPUs is examined.
Dileptons and resonances as probes for hot and dense nuclear matter (2009)
Sascha Vogel
The thesis describes the possibilities of exploring the hot and dense phase in heavy ion collisions. To this end, hadronic and leptonic decays of resonances are investigated.
Direct photons in heavy ion collisions (2010)
Björn Bäuchle
Direct photon emission from heavy-ion collisions has been calculated and compared to available experimental data. Three different models have been combined to extract direct photons from
different environments in a heavy-ion collision: Thermal photons from partonic and hadronic matter have been extracted from relativistic, non-viscous 3+1-dimensional hydrodynamic calculations.
Thermal and non-thermal photons from hadronic interactions have been calculated from relativistic transport theory. The impact of different physics assumptions about the thermalized matter has
been studied. In pure transport calculations, a viscous hadron gas is present. This is juxtaposed with ideal gases of hadrons with vacuum properties, hadrons which undergo a chiral and
deconfinement phase transition and with a system that has a strong first-order phase transition to a deconfined ideal gas of quarks and gluons in the hybrid model calculations with the various
Equations of State. The models used for the determination of photons from both hydrodynamic and transport calculations have been elucidated and their numerical properties tested. The origin of
direct photons, itemised by emission stage, emission time, channel and baryon number density, has been investigated for various systems, as have the transverse momentum spectra and elliptic flow
patterns of direct photons. The differences of photon emission rates from a thermalized transport box and the hadronic photon emission rates that are used in hydrodynamic calculations are found
to be very similar, as are the spectra from calculations of heavy-ion collisions with transport model and hybrid model with hadronic Equation of State. Taking into account the full (vacuum)
spectral function of the rho-meson decreases the direct photon emission by approximately 10% at low photon transverse momentum. The numerical investigations show that the parameter with the
largest impact on the direct photon spectra is the time at which the hydrodynamic description is started. Its variation shows deviations of one to two orders of magnitude. In the regime that can
be considered physical, however, the variation is less than a factor of 3. Other parameters change the direct photon yield by up to approximately 20%. In all systems that have been considered --
heavy-ion collisions at E_lab = 35 AGeV and 158 AGeV, (s_NN)**1/2 = 62.4 GeV, 130 GeV and 200 GeV -- thermal emission from a system with partonic degrees of freedom is greatly enhanced over that
from hadronic systems, while the difference between the direct photon yields from a viscous and a non-viscous hadronic system (transport vs. hydrodynamics) is found to be very small. Predictions
for direct photon emission in central U+U-collisions at 35 AGeV have been made. Since non-soft photon sources are very much suppressed at this energy, experimental results should very easily be
able to distinguish between a medium that is entirely hadronic and a system that undergoes a phase transition from partonic to hadronic matter. In the case of lead-lead collisions at 158 AGeV,
the situation is not so clear. In central collisions, the complete direct photon spectra including prompt photons seem to favour hadronic emission sources, while the partonic calculations only
slightly overpredict the data. In peripheral collisions at the same energy, the hadronic contribution is more than one order of magnitude smaller than the prompt photon contribution, which fits
the available experimental data. A similar picture presents itself at higher energies. At RHIC energies, however, the difference between transport calculations and hadronic hybrid model
calculations is largest. Hybrid model calculations with partonic degrees of freedom can describe the experimental results in gold-gold collisions at 200 GeV. The elliptic flow component of direct
photon emission is found to be consistently positive at small transverse momenta. This means that the initial photon emission from a non-flowing medium does not completely outshine the emission
patterns from later stages. High-pt photons dominantly come from the beginning of a heavy-ion collision and therefore do not carry the directed information of an evolving medium.
Fluctuations in ultra-relativistic heavy-ion collisions from microscopic descriptions (2007)
Stéphane Haussler
Quantum chromodynamics predicts the existence of a phase transition from hadronic to quark-gluon matter when temperature and pressure are sufficiently high. Colliding heavy nuclei at
ultra-relativistic speeds allows large amounts of energy to be deposited in a small volume of space, and is the only available experimental means of producing the extreme conditions necessary to obtain
the deconfined state. Numerous models and ideas were developed in the last decades to study heavy ion physics and understand the properties of extremely heated and compressed nuclear matter. With
the ever increasing energy available in the center of mass frame (and thus number of particles produced) and the development of large acceptance detectors, it has become possible to study the
fluctuations of physical quantities on an event-by-event basis, and access thermodynamical properties not present in particle spectra. The characteristics of the highly excited matter produced,
e.g. thermalization, effect of resonance decay. . . can be investigated by fluctuation analyses. In fact, fluctuations are good indicators for a phase transition and a plethora of fluctuation
probes have been proposed to pin down the existence and the properties of the QGP. We study various fluctuation quantities within the Ultra-relativistic Quantum Molecular Dynamics UrQMD and the
quantum Molecular Dynamics qMD models. UrQMD is based on hadron and string degrees of freedom and allows purely hadronic effects to be disentangled. In contrast, the qMD model includes an explicit
transition from quark to hadronic matter and can serve to test adequate probes of the initial QGP state. We show that the qMD model can reasonably reproduce various experimentally measured particle rapidity distributions and transverse mass spectra over a wide energy range. Within the frame of the dynamical recombination procedure used in qMD, we study the enhancement of the proton-over-pion (p/π) ratio in the intermediate pt range (1.5 < pt < 2.5). We show that qMD can reproduce the large p/π ≈ 1 observed experimentally at RHIC energies at hadronization. However, the
subsequent decay of resonances makes the ratio fall to values incompatible with experimental data. We thus conclude that resonance decay might have a drastic influence on this observable in the
quark recombination picture. Charged particles multiplicity fluctuations measured at SPS by the NA49 collaboration are enhanced in midperipheral events for Pb+Pb collisions at Elab = 160 AGeV.
This feature is not reproduced by hadron-string transport approaches, which show a flat centrality dependence, within the proper experimental acceptance and with the proper centrality selection
procedure. However, we show that the behavior of multiplicity fluctuations in transport codes is similar to the experimental result in full 4π acceptance. We identify the centrality
selection procedure as the reason for the enhanced particle multiplicity fluctuations in midperipheral reactions and argue that it can be used to distinguish between different scenarios of
particle productions. We show that experimental data might indicate a strong mixing of projectile and target related production sources. Strangeness over entropy K/π and baryon number over
entropy p/π ratio fluctuations have been measured by the NA49 experiment in the SPS energy range, from Elab = 20 AGeV up to Elab = 160 AGeV. We investigate the sensitivity of this observable
to kinematical cuts and discuss the influence of resonance decay. We find the dynamical p/π ratio fluctuations to increase with beam energy, in agreement with the measured data points. On
the contrary, the dynamical K/π ratio fluctuations are essentially flat as a function of centrality and depend only weakly on the kinematical cuts applied. Our results are in line with the
simulations performed earlier by the NA49 collaboration in their detector acceptance filter. Finally, we focus on the correlations and fluctuations of conserved charges. It was proposed that
these fluctuations are sensitive to the fractional charge carried by the quarks in the initial QGP stage and survive the whole course of heavy ion reactions. A crucial point is the influence of
hadronization that may relax the initial QGP fluctuation/correlation signals to their hadronic values. We use the quark Molecular Dynamics qMD model to disentangle the effect of
recombination-hadronization on charged particles ratio fluctuations, charge transfer fluctuations, baryon number-strangeness correlation coefficient and various ratios of susceptibilities (i.e.
correlations over fluctuations). We find that the dynamical recombination procedure implemented in the qMD model destroys all studied initial QGP fluctuations and correlations and might explain
why no signal of a phase transition based on event-by-event fluctuations was found in the experimental data until now.
Experimentelle Konsequenzen einer Minimalen Länge (2007)
Ulrich Harbach
An effective model for incorporating a minimal length into quantum field theory is presented. In the case that large extra dimensions exist, this can lead to testable modifications of various experiments. Phenomena such as the Casimir effect, neutrino-nucleon reactions, and neutrino oscillations are discussed.
Black hole production and graviton emission in models with large extra dimensions (2007)
Benjamin Koch
This thesis studies the possible production of microscopic black holes and the emission of gravitational radiation under the assumption of large extra dimensions. We derive observables for the Large Hadron Collider and for ultra-high-energy cosmic rays.
Open heavy flavor and other hard probes in ultra-relativistic heavy-ion collisions (2014)
Jan Uphoff
In this thesis hard probes are studied in the partonic transport model BAMPS (Boltzmann Approach to MultiParton Scatterings). Employing Monte Carlo techniques, this model describes the 3+1
dimensional evolution of the quark gluon plasma phase in ultra-relativistic heavy-ion collisions by propagating all particles in space and time and carrying out their collisions according to the
Boltzmann equation. Since hard probes are produced in hard processes with a large momentum transfer, the value of the running coupling is small and their interactions should be describable within
perturbative QCD (pQCD). This work focuses on open heavy flavor, but also addresses the suppression of light parton jets, in particular to highlight differences due to the mass. For light
partons, radiative processes are the dominant contribution to their energy loss. For heavy quarks, we show that also binary interactions with a running coupling and an improved Debye screening
matched to hard-thermal-loop calculations play an important role. Furthermore, the impact of the mass in radiative interactions, commonly known as the dead cone effect, and the interplay with the
Landau-Pomeranchuk-Migdal (LPM) effect are studied in great detail. Since the transport model BAMPS has access to all medium properties and the space time information of heavy quarks, it is the
ideal tool to study the dissociation and regeneration of J/psi mesons, which is also investigated in this thesis.
An integrated Boltzmann + hydrodynamics approach to heavy ion collisions (2009)
Hannah Petersen
In this thesis the first fully integrated Boltzmann+hydrodynamics approach to relativistic heavy ion reactions has been developed. After a short introduction that motivates the study of heavy ion
reactions as the tool to get insights about the QCD phase diagram, the most important theoretical approaches to describe the system are reviewed. To model the dynamical evolution of the
collective system assuming local thermal equilibrium ideal hydrodynamics seems to be a good tool. Nowadays, the development of either viscous hydrodynamic codes or hybrid approaches is favoured.
For the microscopic description of the hadronic as well as the partonic stage of the evolution, transport approaches have been successfully applied, since they generate the full phase-space
dynamics of all the particles. The hadron-string transport approach that this work is based on is the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) approach. It constitutes an effective
solution of the relativistic Boltzmann equation and is restricted to binary collisions of the propagated hadrons. Therefore, the Boltzmann equation and the basic assumptions of this model are
introduced. Furthermore, predictions for the charged particle multiplicities at LHC energies are made. The next step is the development of a new framework to calculate the baryon number density
in a transport approach. Time evolutions of the net baryon number and the quark density have been calculated at AGS, SPS and RHIC energies and the new approach leads to reasonable results over
the whole energy range. Studies of phase diagram trajectories using hydrodynamics are performed as a first move into the direction of the development of the hybrid approach. The hybrid approach
that has been developed as the main part of this thesis is based on the UrQMD transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision. The
initial energy and baryon number density distributions are not smooth and not symmetric in any direction and the initial velocity profiles are non-trivial since they are generated by the
non-equilibrium transport approach. The full (3+1)-dimensional ideal relativistic one-fluid dynamics evolution is solved using the SHASTA algorithm. For the present work, three different
equations of state have been used, namely a hadron gas equation of state without a QGP phase transition, a chiral EoS and a bag model EoS including a strong first order phase transition. For the
freeze-out transition from hydrodynamics to the cascade calculation, two different set-ups are employed: either a freeze-out that is isochronous in the computational frame or a gradual freeze-out that
mimics an iso-eigentime criterion. The particle vectors are generated by Monte Carlo methods according to the Cooper-Frye formula and UrQMD takes care of the final decoupling procedure of the
particles. The parameter dependences of the model are investigated and the time evolution of different quantities is explored. The final pion and proton multiplicities are lower in the hybrid
model calculation due to the isentropic hydrodynamic expansion while the yields for strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic flow
values at SPS energies are shown to be in line with an ideal hydrodynamic evolution if a proper initial state is used and the final freeze-out proceeds gradually. The hybrid model calculation is
able to reproduce the experimentally measured integrated as well as transverse momentum dependent $v_2$ values for charged particles. The multiplicity and mean transverse mass excitation function
is calculated for pions, protons and kaons in the energy range from E_lab = 2-160A GeV. It is observed that the different freeze-out procedures have almost as much influence on the mean
transverse mass excitation function as the equation of state. The experimentally observed step-like behaviour of the mean transverse mass excitation function is only reproduced, if a first order
phase transition with a large latent heat is applied or the EoS is effectively softened due to non-equilibrium effects in the hadronic transport calculation. The HBT correlation of the negatively
charged pion source created in central Pb+Pb collisions at SPS energies is investigated with the hybrid model. It has been found that the latent heat visibly influences the emission of particles
and hence the HBT radii of the pion source. The final hadronic interactions after the hydrodynamic freeze-out are very important for the HBT correlation since a large amount of collisions and
decays still takes place during this period.
Hanbury-Brown-Twiss interferometry within the UrQMD transport approach (2013)
Gunnar Gräf
In this thesis, Hanbury-Brown-Twiss (HBT) interferometry is used together with the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) to analyse the time and space structure of heavy-ion
collisions. The first chapter after the introduction gives an overview of the different types of models used in the field of heavy-ion collisions and an introduction to the UrQMD model in more detail. The next chapter explains the basics of Hanbury-Brown-Twiss correlations, including azimuthally sensitive HBT (asHBT). The results chapters are:
4. Charged multiplicities from UrQMD
5. Formation time via HBT from pp collisions at LHC
6. HBT analysis of Pb+Pb collisions at LHC energies
7. HBT scaling with particle multiplicity
8. Compressibility from event-by-event HBT
9. Tilt in non-central collisions
10. Shape analysis of strongly-interacting systems
11. Measuring a twisted emission geometry
This thesis covers the standard integrated HBT analyses, extracting the
Pratt-Bertsch radii, at LHC energies. The analyses at these energies showed too soft an expansion in UrQMD, probably related to the absence of a partonic phase in the model. The most promising results
in this thesis at these energies are the restriction of the formation time to a value smaller than 0.8 fm/c and furthermore, the results from the asHBT analyses. In simulations of non-central
heavy-ion collisions at energies of Elab = 6, 8 and 30 AGeV, the validity of the formulae to calculate the tilt angle via asHBT has been checked numerically, even for the case of non-Gaussian, flowing sources. On this basis, a method has been developed and tested in the course of this thesis that allows a scale-dependent tilt angle to be measured experimentally. The signal should be strongest at FAIR.
Dynamical equilibration and transport coefficients of strongly interacting matter (2013)
Vitalii Ozvenchuk
Scientific Computation - CS3210, Spring 2013
TR 1:10 - 2:25pm
Room: TBA
Instructor:Joseph Traub
Office Address: 456 CSB
Office Hours: Tuesday 2:30 - 3:00 pm, Thursday 3:30 - 4:00 pm and by appointment
Email: traub@cs.columbia.edu
TAs: TBA
Class Info:
Required Text: Numerical Methods, Third Edition, Faires and Burden. I suggest you buy the 3rd edition used.
Detailed information about homeworks, solution sets, handouts, grades etc. will be posted in Courseworks.
30% homework
30% midterm
40% final
10% extra credit homework
You are responsible for the material covered in: lectures, readings and homeworks.
Continuous Problems
Many problems in physics, chemistry, biology, engineering, vision, graphics, animation, weather prediction, etc. have continuous mathematical models
Example: Ecosystems.
Continuous problems usually have to be solved numerically
The most important law in computing:
Moore's law
Why Moore's law is ending for current technology and what can be done about it.
The world's fastest computers
Scaling laws
Brief review of calculus results we'll need.
Solution of nonlinear equations
Bisection algorithm
Newton iteration
Error formula
Termination criteria
Applications of Newton
Square root
Secant algorithm
Fibonacci sequence
Polynomial interpolation
Spline interpolation
Linear recurrences with constant coefficients
Uncertainty, Undecidability
Nonlinear recurrences
Logistic equation
Strange attractors
Limits to weather prediction
Butterfly effect
Univariate integration
Why such an important problem
Trapezoid module
Simpson module
Composite algorithm
High dimensional integration
Curse of dimensionality
Monte Carlo algorithm
Dynamical systems
Linear ordinary differential equations (ODE)
Nonlinear ODE
Separation of variables
Numerical solution
Euler algorithm
Error of Euler
Higher order Taylor
Condition of problem
Wilkinson polynomial
Implications of finite precision algorithm
Stability of algorithm
Backward error analysis
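Several topics in the outline above (Newton iteration, its application to the square root, and termination criteria) can be illustrated with a short sketch. This is illustrative code, not course material; the tolerance and starting guess are arbitrary choices:

```python
import math

def newton_sqrt(a, tol=1e-12, max_iter=100):
    """Newton's method applied to f(x) = x**2 - a; the update
    x <- x - f(x)/f'(x) simplifies to x <- (x + a/x) / 2."""
    x = a if a > 1 else 1.0   # any positive starting guess works
    for _ in range(max_iter):
        x_new = 0.5 * (x + a / x)
        if abs(x_new - x) < tol:   # a simple termination criterion
            return x_new
        x = x_new
    return x

print(newton_sqrt(2.0), math.sqrt(2.0))
```

Each iteration roughly doubles the number of correct digits (quadratic convergence), which is why the loop terminates after only a handful of steps for well-behaved inputs.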
Early stage researcher (PhD student) in Mathematics (Number Theory)
Early stage researcher (PhD student) in Mathematics (Number Theory) (M/F) F1-110048
The University of Luxembourg is looking, for its Faculty of Science, Technology and Communication (Mathematics Research Unit, Research Group of Gabor Wiese), for 1 Early stage researcher (PhD student) in Mathematics (Number Theory) (M/F), Ref: F1-110048. Temporary contract, initially for 2 years with a maximal prolongation of up to 4 years, 40 hours/week.
Starting 1 November 2013, or later. Employee status.
Research in one of the areas Number Theory, Arithmetic Geometry, Computer Algebra with the aim of studying towards a PhD. Limited teaching activity of 2 hours per week.
Your Profile
Master's degree (or an equivalent degree) in Mathematics of high quality.
Dedication to doing Mathematics on a research level.
Willingness to integrate in a team.
Solid knowledge in Algebra.
Knowledge in Algebraic Number Theory, Algebraic Geometry, Modular Forms, Representation Theory, programming skills and experience with computer algebra systems will be an advantage. Good written and
oral skills in English. Some knowledge of German and/or French is helpful, but not required.
We offer
Outstanding research conditions in an international and inspiring environment.
Possibility to become a member of a young team of dedicated researchers.
A competitive salary.
Additional information
Candidates should submit the following documents:
- Letter of motivation
- Detailed curriculum vitae
- Two reference letters
- Copies of diplomas
Please apply online until 15 September 2013:
The University of Luxembourg is an equal opportunity employer.
For further information please contact: Prof. Dr. Gabor Wiese (gabor.wiese@uni.lu) Tel: +352 46 66 44 5760
Website of the Mathematics Research Unit of the University of Luxembourg:
What Is the Optimal Threshold at Which to Recommend Breast Biopsy?
A 2% threshold, traditionally used as a level above which breast biopsy is recommended, has been generalized to all patients from several specific situations analyzed in the literature. We use a
sequential decision analytic model considering clinical and mammography features to determine the optimal general threshold for image guided breast biopsy and the sensitivity of this threshold to
variation of these features.
Methodology/Principal Findings
We built a decision analytical model called a Markov Decision Process (MDP) model, which determines the optimal threshold of breast cancer risk to perform breast biopsy in order to maximize a
patient’s total quality-adjusted life years (QALYs). The optimal biopsy threshold is determined based on a patient’s probability of breast cancer estimated by a logistic regression model (LRM) which
uses demographic risk factors (age, family history, and hormone use) and mammographic findings (described using the established lexicon–BI-RADS). We estimate the MDP model's parameters using SEER
data (prevalence of invasive vs. in situ disease, stage at diagnosis, and survival), US life tables (all cause mortality), and the medical literature (biopsy disutility and treatment efficacy) to
determine the optimal “base case” risk threshold for breast biopsy and perform sensitivity analysis. The base case MDP model reveals that 2% is the optimal threshold for breast biopsy for patients
between ages 42 and 75; however, the threshold below age 42 is lower (1%) and the threshold above age 75 is higher (3–5%). Our sensitivity analysis reveals that the optimal biopsy threshold varies most notably
with changes in age and disutility of biopsy.
Our MDP model validates the 2% threshold currently used for biopsy but shows this optimal threshold varies substantially with patient age and biopsy disutility.
Citation: Burnside ES, Chhatwal J, Alagoz O (2012) What Is the Optimal Threshold at Which to Recommend Breast Biopsy? PLoS ONE 7(11): e48820. doi:10.1371/journal.pone.0048820
Editor: Kazuaki Takabe, Virginia Commonwealth University School of Medicine, United States of America
Received: April 24, 2012; Accepted: October 5, 2012; Published: November 7, 2012
Copyright: © 2012 Burnside et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the National Institutes of Health (grants K07-CA114181, R01-CA127379, and R21-CA129393). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
The overall annual utilization rate of breast biopsies, 62.6 per 10,000 patients per year, translates to just over 700,000 breast biopsies per year in the United States [1], [2]. While image-guided core needle biopsy of the breast has certainly become an integral part of breast cancer diagnosis, little is known about the optimal breast cancer risk threshold that radiologists should use to
recommend this procedure. Understanding the optimal threshold for breast biopsy is important for several reasons. Breast biopsy, which reveals benign findings approximately 75% of the time, is the
most costly per capita component of a breast cancer screening program [3]. Furthermore, each patient has a unique risk tolerance and co-morbidities to weigh in contemplating the decision for breast
biopsy. Shared decision-making through physician-patient communication in order to tailor health care decisions to individual patient preferences [4] is becoming more prevalent in the context of
novel [5], [6] and established screening tests [7]. This increased interest in personalized medicine in general [8], [9] and in the domain of breast cancer in particular [10] motivates an understanding
of the variables that may affect the optimal level of risk at which to recommend healthcare interventions like breast biopsy.
A threshold for breast biopsy has evolved based on several high quality publications in the literature that established certain mammographic findings to have a low estimated malignancy risk (<2%)
enabling researchers to recommend short-term interval follow-up rather than biopsy as the standard of care for these particular scenarios [11]–[14]. The formal “Probably Benign” category, based on
this literature, was established in the Breast Imaging Reporting and Data System (BI-RADS) lexicon thereby standardizing a 2% level below which biopsy need not be recommended [15]. This evidence has
led to a more general application of this threshold for breast biopsy to all lesions thought to have a probability of malignancy less than or equal to 2% (Table 1).
Table 1. BI-RADS final assessment codes with recommendations.
Modeling is becoming increasingly important in evaluating health care interventions and assessing utility and effectiveness [16]. In fact, such models are now being used to suggest health care
policies [7], [17]. In the past, decision analytic modeling has been used in the breast imaging literature, primarily for cost-effectiveness analysis in order to determine the optimal use of
competing healthcare interventions [18]–[21]. These manuscripts have used a technique called Markov modeling to evaluate interventions like staging MR lymphangiography [21], computer-aided detection
[20], breast MRI with core biopsy [18] and MRI screening in patients with BRCA1 mutations [19]. However, standard Markov models can evaluate only one set of decision rules at a time and a single
model must be created for each strategy being analyzed. Moreover, when there are a large number of embedded decision nodes (e.g. when a large number of decisions occur repetitively over time
with a vast array of possible permutations) standard Markov models or simulation techniques become computationally impractical. Situations that require sequential decision making, such as recurrent
screening mammography and biopsy decisions, are better addressed with Markov decision processes (MDPs), which have the computational capability to solve sequential decisions making problems that
involve uncertainty [22], [23].
The purpose of this study is two-fold. We wish to determine whether a 2% threshold is reasonable within an accepted decision-analytic framework considering clinically relevant variables. We
also aim to establish which variables most profoundly affect this decision threshold. From a clinical perspective, our model is designed to personalize the risk threshold at which to recommend breast
biopsy in the interest of improving decision-making based on a patient’s risk of breast cancer.
The University of Wisconsin Health Sciences Institutional Review Board (UW-IRB) approved this HIPAA-compliant study. The UW-IRB did not require informed consent to utilize the clinical data that
informed our model because there were no direct identifiers associated with the data, thereby minimizing any risk (specifically, the risk to patient confidentiality). The clinical data set that we
used is described elsewhere but summarized here for the convenience of the reader [24]. We collected data for consecutive screening and diagnostic mammography examinations between April 5, 1999 and
February 9, 2004 which included 48,744 mammography examinations on 18,270 patients. All mammographic findings were described and recorded using BI-RADS by the interpreting radiologist at the time of
mammography interpretation using the PenRad® system which records patient demographic risk factors and mammography findings in a structured format. We matched our mammography data with our state’s
population-based registry for cancer incidence data. A finding matched to a registry report of ductal carcinoma in situ or any invasive carcinoma within 365 days was considered positive and a finding
with no match in the same time frame was considered negative. Patients diagnosed with a high risk lesion called lobular carcinoma in situ in the registry were also considered benign, however we did
not have access to other high risk lesions atypical ductal hyperplasia, atypical lobular hyperplasia, papilloma, radial scar, and others, since these biopsy results were not recorded in the registry.
Patient features reflect a representative clinical population referred for breast cancer screening and diagnosis [24], [25].
In order to analyze the optimal threshold at which to recommend breast biopsy, we developed a finite-horizon, discrete-time MDP [22], [26], which provides a mathematical framework for modeling
decision-making in situations where outcomes are partly uncertain (e.g. the development of breast cancer) and partly under the control of the decision maker (e.g. the decision to perform breast
biopsy). An MDP has five components including decision epochs, states, decisions, rewards, and transition probabilities.
Decision Epochs
In an MDP, a decision epoch is defined as the unit of time in which a decision is typically made. In our model, we assumed that decisions are made annually.
States
An MDP is characterized by a set of states that completely define the possibilities for a patient's health at a given time. There are 104 states in our MDP model, illustrated as “nodes” (circles or
ovals) as in Figure 1. One-hundred-one of these states are defined by the risk of breast cancer (0%, 1%, 2%,…100%) as determined by a validated logistic regression model (LRM), which uses patient
demographic factors and mammographic features summarized in Table 2 [24]. These “risk score states” are integer values that represent risk scores directly converted from our LRM estimate of the
probability of cancer. For example, if a 55-year-old patient with coarse heterogeneous microcalcifications has a 12.2% probability of having breast cancer according to the LRM, this patient is placed
in the “risk score state” of 12. In addition to risk score states, we define two biopsy related states–corresponding to a malignant biopsy outcome (Figure 1–Biopsy-M) and a benign biopsy outcome (
Figure 1–Biopsy-B). Finally, we include “Death” as a state in our model.
Figure 1. The state transition diagram of our MDP model shows transitions between various stages depending on the decision made.
Nodes represent the state of the model and arcs represent the transition of patient from one state to another. Round nodes in the first column represent the risk-score-states consisting or
probability of cancer (e.g. 1%, 2%, …, 100%) of the patient of age 40. Round nodes in the second column represent the risk after 1 year. At each decision epoch, depending on the risk of cancer, the
radiologist needs to make one of the two decisions–biopsy (BX), or annual mammography (AM). If biopsy is elected, the patient will then move to either the malignant biopsy state (Biopsy-M) or the
benign biopsy state (Biopsy-B).
Table 2. Variable Definitions with Sensitivity Analysis Ranges.
States in an MDP model can be categorized as transient states (including the risk score and Biopsy-B states) or absorbing states (including the Biopsy-M and Death states). The patient exists in a
transient state temporarily and has the opportunity to move to another state with each epoch. Once the patient enters an absorbing state, she does not change states thereafter, i.e. the radiologist
does not have the opportunity to make further decisions in the states of Biopsy-M or Death.
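As a concrete illustration, the partition into transient and absorbing states can be written down directly. The sketch below is our own encoding for illustration only, not the authors' implementation; the state names and the 0–100 risk grid come from the text:

```python
# Illustrative encoding of the MDP state space: 101 integer risk score
# states (risk of 0%..100%), a benign-biopsy state, a malignant-biopsy
# state, and Death. Decisions are only made in transient risk score
# states; absorbing states admit no further decisions.
RISK_STATES = list(range(101))                      # 0, 1, ..., 100
TRANSIENT_STATES = set(RISK_STATES) | {"Biopsy-B"}
ABSORBING_STATES = {"Biopsy-M", "Death"}

def available_decisions(state):
    """Return the decisions the radiologist can make in a given state."""
    if state in ABSORBING_STATES or state == "Biopsy-B":
        return []            # absorbing, or a state the patient only passes through
    return ["AM", "BX"]      # annual mammography or biopsy
```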
For each risk score state, a radiologist, the decision-maker, can choose from the following decisions: biopsy (Figure 1–BX) or annual mammography (Figure 1–AM). If annual mammography is recommended,
the patient’s risk changes as defined by the transition probabilities (description forthcoming) in our model. Alternatively, if biopsy is recommended, then the patient either goes to the malignant
biopsy state (Figure 1–Biopsy-M) or the benign biopsy state (Figure 1–Biopsy-B).
Patients accrue rewards depending upon the time spent in each state. In our model, rewards correspond to quality-adjusted life years (QALYs), which are commonly used in medical decision making [27].
Living to maximal life expectancy in perfect health is the goal of any health care system. Diagnostic tests, like mammography or breast biopsy, can affect QALYs positively by increasing the number of
years (through early detection enabling cure) or negatively by diminishing quality (causing anxiety or discomfort). To preserve the simplicity of the model, we included only disutilities associated
with breast biopsy in our model and excluded all disease-related, treatment-related and age-related disutilities to focus on the tradeoff between the disutility of biopsy and potential life-year
savings of diagnostic decisions in isolation. Since there is not a consistent literature on age-related or breast-cancer-treatment related disutilities, we did not model these variables realizing
that our conclusions may be somewhat conservative (explored further in the discussion) [28]–[31].
Our model considers two types of rewards–intermediate and lump-sum–corresponding to transient states and absorbing states, respectively. Patients accrue intermediate rewards each time they enter a
transient state, like a risk-score-state after a routine mammography recommendation is made or if a biopsy reveals benign findings. For a risk-score-state in which annual mammography is recommended (Figure 1–AM), the patient's intermediate reward depends on her probability of dying from breast cancer or other causes during that year [32]. We use a parametric model [33] to adjust the probability of dying from breast cancer based on a patient's current risk score, taking into account the probability of death if a patient with breast cancer is not treated in that year–discussed in detail in [34]. We
estimate the treatment effectiveness factor (defined as the ratio of the probability of death with treatment versus that without treatment) from the parametric model to estimate the probability of
death if a patient with breast cancer is not treated. Using this probability, we estimate intermediate reward as 1 year, if a patient is alive at the end of that year, and 1/2 year, if the patient
dies during that year (from breast cancer or other causes). Assigning a 1/2 year reward for death reflects an accepted modeling convention called "half-cycle correction", which balances the fact that
death can occur at any time over the year. The intermediate reward for a benign biopsy is calculated in a manner similar to the risk-score-state with the additional penalty for biopsy added. For
absorbing states like a malignant biopsy (Figure 1–Biopsy-M), the patient receives a “lump sum” reward equivalent to her post-treatment expected life with breast cancer estimated using the
Surveillance, Epidemiology, and End Results (SEER) program of the National Cancer Institute, which takes into account the stage at diagnosis and probability of dying associated with that stage [35],
[36]. In estimating the post-cancer lump-sum rewards, we assume that 75% of the cancers at diagnosis are invasive while the remainder are DCIS [36]. A patient gets a lump-sum reward of 0 if she moves
to the “Death” state.
In our model, we estimate the reduction in QALYs for diagnostic tests by introducing a time penalty, a disutility, to account for discomfort, anxiety, and complications. Based on the data available
in the medical literature [29], we assume the disutility of biopsy at age 40 is 2 weeks for our base case, and it increases linearly with age at a rate dictated by a variable called the "disutility factor", which determines the disutility at age 100. In other words, this disutility factor (2 in our base case) is multiplied by the disutility at age 40 to determine the disutility at age 100
(calculated to be 4 weeks in our base case), and the disutility of biopsy for ages between 40 and 100 increases linearly between these established values (Figure 2). Our base case reflects a higher disutility for older patients because of increasing co-morbidities and biopsy complications in this age group. However, equal and lower disutility based on age is also considered in our sensitivity analysis.
Figure 2. Disutility factors used in sensitivity analysis.
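The age-dependent disutility described above is a straight line anchored at ages 40 and 100. A sketch of that interpolation with the base-case numbers (2 weeks at age 40, disutility factor of 2, hence 4 weeks at age 100); the function name and signature are ours:

```python
def biopsy_disutility_weeks(age, d40=2.0, disutility_factor=2.0):
    """Biopsy disutility (in weeks) interpolated linearly between its
    value at age 40 (d40) and its value at age 100 (d40 * factor)."""
    d100 = d40 * disutility_factor
    fraction = (age - 40.0) / (100.0 - 40.0)   # 0 at age 40, 1 at age 100
    return d40 + fraction * (d100 - d40)
```

With a disutility factor of 1, the line is flat and the disutility is the same at every age, matching the sensitivity-analysis scenario in which disutility does not grow with age.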
Transition Probabilities
The transition probabilities determine the state of the patient in the next decision epoch based on the state and decision at the current decision epoch. The state transitions of an MDP possess the
Markov property which states that the state of the patient at the next decision epoch depends only on her current state and decision and is independent of all previous states or decisions.
We estimated risk-score-state transition probabilities by tracking the average change in risk scores (the states of our model) for patients as they undergo annual mammography in our clinical dataset
using a previously constructed and validated logistic regression model [24]. We matched findings in the same breast and quadrant from the same patients and then calculated the change in risk for all
patients who had findings observed over more than one time point. If patients were not seen annually, we estimated the yearly risk change using linear interpolation. For example, consider a
40-year-old patient who is estimated (with our LRM) to have a risk score of 1% at the time of her baseline mammogram. When she returns for a routine screening exam at age 42, our LRM uses demographic
and mammographic features to estimate that she has a breast cancer risk of 5%. Assuming a linear increase in risk with time, we estimate (or impute) the risk at age 41 was 3%. If a patient had only a
single observation, the data was not used in the calculation of transition probabilities. After observing risk score changes (either using LRM or by imputation), we calculated the average change in
each risk score over 1 year. This consolidated list of average transition probabilities for each risk score comprised the transition probabilities used in our MDP model to calculate the optimal
biopsy threshold.
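The imputation step in the worked example above (risk 1% at age 40, 5% at age 42, hence 3% at age 41) is plain linear interpolation between observed risk scores. A sketch of how yearly risks could be filled in before the transition averaging; the variable names and data layout are our own assumptions:

```python
def impute_yearly_risks(observations):
    """Linearly interpolate sparse (age, risk_score) observations for a
    finding so that a risk score exists at every intermediate age.
    Findings with a single observation yield nothing, mirroring their
    exclusion from the transition-probability calculation."""
    yearly = {}
    for (a0, r0), (a1, r1) in zip(observations, observations[1:]):
        for age in range(a0, a1 + 1):
            yearly[age] = r0 + (r1 - r0) * (age - a0) / (a1 - a0)
    return yearly
```

With the worked example from the text — risk 1% at age 40 and 5% at age 42 — the imputed risk at age 41 is 3%.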
We assume that the biopsy has perfect sensitivity and specificity, and that a patient's risk-score-state completely defines her current risk of breast cancer. Therefore, if a patient with a 5% risk of cancer is recommended for biopsy, she has a 5% chance of moving to the Biopsy-M state and a 95% chance of moving to the Biopsy-B state, from which she moves to one of the risk-score-states in the next epoch
based on her risk-score-state transition probabilities defined above.
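Under the perfect-test assumption just stated, the biopsy branch is a direct function of the risk score. A one-line sketch (illustrative only):

```python
def biopsy_outcome_probabilities(risk_score):
    """With perfect sensitivity and specificity, a biopsy at risk score r
    moves the patient to Biopsy-M with probability r/100 and to the
    benign Biopsy-B state otherwise."""
    p_malignant = risk_score / 100.0
    return {"Biopsy-M": p_malignant, "Biopsy-B": 1.0 - p_malignant}
```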
We make a series of assumptions to construct the MDP. We take the patient’s perspective with an objective of finding a policy that would maximize patients’ QALYs, and therefore do not model costs. We
consider all participants (including the patient and the radiologist) to be risk neutral, which means that participants would always choose a policy that maximizes their expected QALYs. Routine yearly mammography and biopsy are the only decision options that completely describe the "state space", thereby excluding short-term interval follow-up or utilization of other imaging modalities (like breast
ultrasound or breast MRI). We only consider percutaneous core needle biopsy and do not consider excisional biopsy as an option for diagnosis.
Once a biopsy is performed, if the patient comes back into the system, the record of the prior biopsy is not preserved. This assumption does not imply that patients may not have multiple biopsies; it
simply assumes that they are not distinguished from the patients who do not have a history of biopsy. We assume that patients adhere to the decisions made by the radiologists, i.e. the patient will
get her annual mammogram (or biopsy) with certainty if the radiologist recommends annual mammography (or biopsy).
Determining Optimal Policy
The objective of our MDP model is to identify the optimal policy, i.e. the optimal decision (BX or AM) for a patient of a particular age and risk score that will maximize her total expected QALYs. We
solve a series of recursive equations (Bellman equations) for all ages and risk-score-states to identify the optimal policy [22].
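Concretely, the optimal policy of a finite-horizon MDP like this one is found by backward induction on the Bellman equations: starting from the oldest age and working backwards, the value of each risk-score-state is the better of the biopsy and annual-mammography actions. The toy sketch below uses made-up rewards and transition probabilities purely to show the recursion, and for brevity treats biopsy as a single absorbing lump-sum action (in the paper a benign biopsy returns the patient to a risk-score-state); it is not the authors' implementation:

```python
def solve_by_backward_induction(ages, risks, transitions, reward_am, reward_bx):
    """Solve the finite-horizon Bellman equations backwards in age.

    transitions[(age, r)] -> list of (next_risk, probability) under AM.
    reward_am(age, r)     -> expected intermediate QALYs for that year.
    reward_bx(age, r)     -> expected lump-sum QALYs of biopsy (treated
                             here as absorbing, a simplification).
    Returns the optimal decision and value for every (age, risk) state.
    """
    V = {(ages[-1] + 1, r): 0.0 for r in risks}   # nothing accrues after horizon
    policy = {}
    for age in reversed(ages):
        for r in risks:
            # Value of annual mammography: this year's reward plus the
            # expected value of next year's risk-score-state.
            v_am = reward_am(age, r) + sum(
                p * V[(age + 1, nr)] for nr, p in transitions[(age, r)])
            # Value of biopsy: lump-sum only (absorbing in this sketch).
            v_bx = reward_bx(age, r)
            V[(age, r)], policy[(age, r)] = max((v_am, "AM"), (v_bx, "BX"))
    return policy, V
```

The optimal biopsy threshold at each age is then simply the smallest risk score whose optimal decision is BX.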
Sensitivity Analysis
In addition to finding the optimal base case policy based on the constructed MDP, we performed sensitivity analysis using ranges for variables that had the potential to alter our conclusions (Table 2
). We tested high and low values for biopsy disutility at age 40, the biopsy disutility factor, the percent of invasive versus in situ cancers, and the treatment effectiveness factor in order to determine the effect
on our optimal biopsy threshold.
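Mechanically, one-way sensitivity analysis is a loop that re-solves the model with one parameter perturbed at a time. A generic sketch (the parameter names and the `solve_threshold` function are placeholders, not the authors' code):

```python
def one_way_sensitivity(solve_threshold, base_params, name, trial_values):
    """Re-solve the model once per trial value of a single parameter,
    holding all other parameters at their base-case values, and record
    the resulting optimal biopsy threshold for each trial value."""
    results = {}
    for value in trial_values:
        params = dict(base_params)   # copy so the base case is untouched
        params[name] = value
        results[value] = solve_threshold(params)
    return results
```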
In summary, we constructed our MDP model using patient data from a clinical breast imaging practice to determine breast cancer risk (via the LRM) and transition probabilities. The remaining parameters
in the model including rewards, survival statistics, and assumptions are derived from population-based data and the literature.
The mean age of women undergoing mammography–the population used to develop our model–was 56.5 years (range = 17.7–99.1, SD = 12.7). There were 477 cancer diagnoses in the 48,744 mammograms included
in our model (for a cancer detection rate of 9.7 per 1000 patients). Of all the 477 cancers, 417 had staging information from our cancer registry and 60 did not. Of the cases with stage available,
71.9% (300/417) were early stage (stage 0 or 1) and 25.9% (108/417) had lymph node metastasis.
We found the optimal threshold for biopsy to be 2% for patients between 42 and 75 years of age. However, the threshold below age 42 was lower (1%) and above age 75 was higher (range 3–5%). Note that the optimal probability threshold to biopsy increases with age. This implies that older patients would be less likely, and younger patients more likely, to benefit from a biopsy recommendation in terms of total QALYs (Figure 3).
Sensitivity analysis revealed that the optimal biopsy threshold varied most substantially as we varied age and disutility of biopsy. As the disutility of biopsy at age 40 increases (also increasing
the disutility at age 100 because the disutility factor remains the same), the threshold for biopsy also increases (Figure 4). Similarly, if we increase the disutility factor (Figure 5), we observe
the same trend of increased optimal biopsy threshold. Interestingly, even if the disutility of biopsy remains constant for all ages (i.e. the disutility factor is 1), the biopsy threshold still
increases with age. We find that as the proportion of invasive cancers (relative to in situ disease) increases, the optimal biopsy threshold decreases (Figure 6) because the life expectancy is lower
for invasive cancer as compared to in situ disease. Finally, as the treatment effectiveness factor increases (Figure 7), the biopsy threshold decreases.
Adoption of a 2% probability of breast cancer as a threshold for biopsy has been useful from a practical standpoint but, until now, has not been supported by decision analytic theory. Our MDP
model demonstrates that 2% is the optimal breast biopsy threshold for most women of screening age (between 42 and 75) based on the desire to maximize QALYs. However age is important in determining an
optimal biopsy threshold: for younger patients (<42) biopsy thresholds given by our MDP model are lower than for older patients because younger patients accrue more QALYs as a result of early
diagnosis and cure. We must carefully consider several aspects of our model including sensitivity analysis, modeling decisions, and assumptions as we judge its clinical accuracy and applicability.
After age, the disutility of biopsy most profoundly affects the optimal biopsy threshold. In our sensitivity analysis, any time we increase the disutility of biopsy (at age 40 or by increasing the factor that governs its increase with age), the threshold for biopsy also increases because the "harm" of biopsy increases while the benefit remains the same. However, even if the disutility of biopsy is
the same for younger and older patients, the biopsy threshold still increases with age because of the limited benefit in terms of QALYs for older patients. When we increase the fraction of invasive
cancers or increase the treatment effectiveness factor, biopsy threshold decreases because an early diagnosis is more valuable for increased length of life and overcomes the disutility of biopsy.
However, underlying the details of our sensitivity analysis, a larger theme emerges: if in fact personalized breast cancer screening is a desired goal [10], perhaps we should be tailoring the biopsy
threshold to individual patients based on their unique risk tolerance and their judgment of the “harm” of biopsy. Our MDP model provides the framework to offer that individualized threshold.
We decided to adopt the patient's perspective and not model costs in our MDP, in contradistinction to prior literature, which concentrated on the cost-effectiveness of interventions from the societal perspective [18]–[21]. We chose to include only biopsy-related disutility to estimate rewards, and to exclude disutilities associated with malignancy, treatment and age for several reasons. First, our approach
allows us to explicitly capture the influence (harm or benefit) of breast biopsy on the expected life years in isolation. Second, there are no well accepted utility weights for breast cancer
treatment and a wide range has been reported [28]–[31]. Third, since our model compares the lump-sum post-biopsy rewards with the sum of intermediate post-mammography rewards to inform a policy, the
inclusion of other disutilities would require the calibration of the utility weights in both rewards for a fair comparison, which is beyond the scope of this work. In general, our approach will have
the tendency to conservatively estimate the difference in optimal biopsy threshold between older and younger women. Decreasing the value of life with breast cancer disutility during treatment or in
older age groups would disproportionately lower the value (increase the threshold) of biopsy in older age groups, making the discrepancy between the biopsy threshold in older versus younger women more pronounced.
The limitations of any decision analytic model lie in the assumptions made. We have made several assumptions to simplify our MDP, which abbreviate the full complexity inherent in clinical breast
imaging practice. For example, we do not consider other screening or follow-up methods like breast MRI, ultrasound or mammographic watchful waiting (short-term interval follow-up). Furthermore, of
the demographic risk factors that we evaluated in our logistic regression model (age, family history of breast cancer, personal history of breast cancer, hormone replacement therapy, and prior breast
surgery) only patient age and personal history of breast cancer were found to be statistically significant and were ultimately included in the logistic regression model. However, breast density,
prior history of atypia on breast biopsy, and BRCA mutations, among other risk factors have certainly been found to confer breast cancer risk from an epidemiologic standpoint in the larger medical
literature. It would be interesting to include a more extensive list of breast cancer risk factors in a risk prediction model to determine whether they influence the threshold of biopsy.
We have not incorporated risk aversion into the model and therefore do not observe its effect on optimal policies. We do not consider that a patient's utilities may change if she has undergone
more than one biopsy (true-positive or false-positive) or incorporate the possibility that a patient may not adhere to the radiologist recommendation. All of these modeling decisions may influence
conclusions and we hope to incorporate such scenarios in our model in the future.
Based on our analysis, a 2% threshold for breast biopsy appears to be optimal for most women of screening age with the important caveat that age and biopsy disutility influence this threshold most
profoundly. If personalized care is our goal, we need accurate estimates for malignancy risk and evidence-based, optimal decision thresholds for interventions to most effectively diagnose disease.
Decision analytic models, like MDPs, are critically important in defining these levels and are increasingly pervasive.
Our future work will include increasing the complexity of our model to more accurately reflect actual clinical practice. For example, including six-month follow-up and other imaging procedures like
breast ultrasound and breast MRI will more adequately reflect the myriad of current tools available for breast cancer diagnosis. In addition, we also plan to include costs in our model to evaluate
the cost-effectiveness of current and proposed policies to perform breast biopsy. Once our model is sufficiently validated, we plan to make it available to radiologists and patients in order to aid
decisions to biopsy breast findings. While validation will entail testing the generalizability of the model on a wide range of breast imaging practices, ideally in the form of a multi-institutional
trial, this validation will represent a critical next step for translation of our methodologies to clinical practice.
We thank Charles Kahn, MD and Katherine Shaffer, MD whose assistance with the mammography database has been an invaluable resource.
Author Contributions
Conceived and designed the experiments: EB OA JC. Performed the experiments: EB OA JC. Analyzed the data: EB OA JC. Contributed reagents/materials/analysis tools: EB OA JC. Wrote the paper: EB OA JC.
1. United States Census Bureau (2000) Projections of the Total Resident Population by 5-Year Age Groups, and Sex with Special Age Categories: Middle Series, 2001 to 2005. Population Projections Program, Population Division, U.S. Census Bureau, Washington, D.C. 20233. Available: http://www.census.gov/population/www/projections/natsum-T3.html. Accessed: 9 October 2012.
2. Ghosh K, Melton LJ 3rd, Suman VJ, Grant CS, Sterioff S, et al. (2005) Breast biopsy utilization: a population-based study. Arch Intern Med 165: 1593–1598. doi: 10.1001/archinte.165.14.1593
3. Poplack SP, Carney PA, Weiss JE, Titus-Ernstoff L, Goodrich ME, et al. (2005) Screening mammography: costs and use of screening-related services. Radiology 234: 79–85. doi: 10.1148/
4. Swan JS, Lawrence WF, Roy J (2006) Process utility in breast biopsy. Med Decis Making 26: 347–359. doi: 10.1177/0272989X06290490
5. Chan EC (2005) Promoting an ethical approach to unproven imaging tests. J Am Coll Radiol 2: 311–320. doi: 10.1016/j.jacr.2004.09.012
6. Hillman BJ (2005) Informed and shared decision making: An alternative to the debate over unproven screening tests. Journal of the American College of Radiology 2: 297–298. doi: 10.1016/
7. U.S. Preventive Services Task Force (2009) Screening for breast cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 151: 716–726, W-236.
8. Snyderman R, Dinan MA (2010) Improving health by taking it personally. JAMA 303: 363–364. doi: 10.1001/jama.2010.34
9. Williams RS, Willard HF, Snyderman R (2003) Personalized health planning. Science 300: 549. doi: 10.1126/science.300.5619.549
10. Schousboe JT, Kerlikowske K, Loh A, Cummings SR (2011) Personalizing mammography by breast density and other risk factors for breast cancer: analysis of health benefits and cost-effectiveness. Ann Intern Med 155: 10–20. doi: 10.7326/0003-4819-155-1-201107050-00003
11. Sickles EA (1991) Periodic mammographic follow-up of probably benign lesions: results in 3,184 consecutive cases. Radiology 179: 463–468. doi: 10.1148/radiology.179.2.2014293
12. Varas X, Leborgne F, Leborgne JH (1992) Nonpalpable, probably benign lesions: role of follow-up mammography. Radiology 184: 409–414. doi: 10.1148/radiology.184.2.1620838
13. Varas X, Leborgne J, Leborgne F, Mezzera J, Jaumandreu S, et al. (2002) Revisiting the mammographic follow-up of BI-RADS category 3 lesions. AJR Am J Roentgenol 179: 691–695. doi: 10.2214/
14. Vizcaino I, Gadea L, Andreo L, Salas D, Ruiz-Perales F, et al. (2001) Short-term follow-up results in 795 nonpalpable probably benign lesions detected at screening mammography. Radiology 219: 475–483. doi: 10.1148/radiology.219.2.r01ma11475
15. American College of Radiology (2003) ACR BI-RADS® – Mammography. 4th Edition. In: ACR Breast Imaging Reporting and Data System, Breast Imaging Atlas. Reston, VA. American College of
16. Berry DA, Cronin KA, Plevritis SK, Fryback DG, Clarke L, et al. (2005) Effect of screening and adjuvant therapy on mortality from breast cancer. N Engl J Med 353: 1784–1792. doi: 10.1056/
17. Mandelblatt JS, Cronin KA, Bailey S, Berry DA, de Koning HJ, et al. (2009) Effects of mammography screening under different screening schedules: model estimates of potential benefits and harms. Ann Intern Med 151: 738–747. doi: 10.7326/0003-4819-151-10-200911170-00010
18. Hrung JM, Langlotz CP, Orel SG, Fox KR, Schnall MD, et al. (1999) Cost-effectiveness of MR Imaging and Core-Needle Biopsy in the Preoperative Work-up of Suspicious Breast Lesions. Radiology 213: 39–49. doi: 10.1148/radiology.213.1.r99oc5139
19. Lee JM, Kopans DB, McMahon PM, Halpern EF, Ryan PD, et al. (2008) Breast Cancer Screening in BRCA1 Mutation Carriers: Effectiveness of MR Imaging–Markov Monte Carlo Decision Analysis. Radiology 246: 763–771. doi: 10.1148/radiol.2463070224
20. Lindfors KK, McGahan MC, Rosenquist CJ, Hurlock GS (2006) Computer-aided Detection of Breast Cancer: A Cost-effectiveness Study. Radiology 239: 710–717. doi: 10.1148/radiol.2392050670
21. Pandharipande PV, Harisinghani MG, Ozanne EM, Specht MC, Hur C, et al. (2008) Staging MR Lymphangiography of the Axilla for Early Breast Cancer: Cost-Effectiveness Analysis. AJR Am J Roentgenol 191: 1308–1319. doi: 10.2214/AJR.07.3861
22. Alagoz O, Hsu H, Schaefer AJ, Roberts MS (2010) Markov Decision Processes: A Tool for Sequential Decision Making under Uncertainty. Med Decis Making. Epub ahead of print.
23. Chhatwal J, Alagoz O, Burnside ES (2010) Optimal Breast Biopsy Decision Making Based on Mammographic Features and Demographic Factors. Operations Research 58: 1577–1591. doi: 10.1287/
24. Chhatwal J, Alagoz O, Lindstrom MJ, Kahn CE Jr, Shaffer KA, et al. (2009) A logistic regression model based on the national mammography database format to aid breast cancer diagnosis. AJR Am J Roentgenol 192: 1117–1127. doi: 10.2214/AJR.07.3345
25. Burnside ES, Davis J, Chhatwal J, Alagoz O, Lindstrom MJ, et al. (2009) Probabilistic computer model developed from clinical data in national mammography database format to classify mammographic findings. Radiology 251: 663–672. doi: 10.1148/radiol.2513081346
26. Puterman ML (1994) Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA.
27. Drummond MF (2005) Methods for the Economic Evaluation of Health Care Programmes. Oxford University Press.
28. Brennan V, Wolowacz S (2008) A Systematic Review of Breast Cancer Utility Weights. International Society for Pharmaceutical and Outcomes Research (ISPOR) 13th Annual International Meeting. Toronto, Ontario, Canada.
29. Gram IT, Lund E, Slenker SE (1990) Quality of life following a false positive mammogram. Br J Cancer 62: 1018–1022. doi: 10.1038/bjc.1990.430
30. Lidgren M, Wilking N, Jonsson B, Rehnberg C (2007) Health related quality of life in different states of breast cancer. Qual Life Res 16: 1073–1081. doi: 10.1007/s11136-007-9202-8
31. Peasgood T, Ward S, Brazier J (2010) Health-state utility values in breast cancer. Expert Review of Pharmacoeconomics and Outcomes Research 10: 553–566. doi: 10.1586/erp.10.65
33. Haybittle JL (1998) Life expectancy as a measurement of the benefit shown by clinical trials of treatment for early breast cancer. Clin Oncol (R Coll Radiol) 10: 92–94. doi: 10.1016/
34. Chhatwal J (2008) Optimal management of mammography findings for breast cancer diagnosis: Patient’s perspective. Madison: University of Wisconsin-Madison. 204 p.
35. Ries LAG, Melbert D, Krapcho M, Stinchcomb DG, Howlader N, et al. (2007) SEER Cancer Statistics Review, 1975–2005. Bethesda MD: National Cancer Institute.
Pacheco, CA Math Tutor
Find a Pacheco, CA Math Tutor
...My Multiple Subject Teaching Credential for the state of California is a CLAD certified credential which included coursework about teaching exceptional children and those with special needs.
Before earning my credential, I spent 3 years working with special needs students as a paraeducator in an...
16 Subjects: including algebra 2, algebra 1, geometry, English
...I look forward to helping people to learn new subjects and overcoming their fears regarding learning and testing. My background includes a Bachelor of Science degree in Physics and a Masters
in Business Administration, plus various courses in data processing, programming, technical training and ...
21 Subjects: including calculus, grammar, Microsoft Excel, Microsoft Word
...For the past two years, I have been teaching a math enrichment class. I have been teaching Biology and related Life Sciences for most of the past 9 years, mainly high school Biology, but also
college Field Biology (at Cornell University) and AP Environmental Science. I have also taught other re...
43 Subjects: including trigonometry, GED, GRE, public speaking
...In addition, he has a lifelong passion for mathematics and, in addition to tutoring all grade levels in math, has volunteered for 6 years in the local public schools in San Rafael (including
mathematics instruction and Odyssey of the Mind coach). Dr. G. has a daughter who is currently in high school. He enjoys music, hiking and geocaching.Dr.
13 Subjects: including discrete math, differential equations, algebra 1, algebra 2
...I received a 5/5 on my AP Calculus Test, and ever since then I knew that I really wanted to help people, like you, understand and love Math as I do. I always loved Algebra more than any other
subject though. I love numbers and I love Math and I hope to help you understand why Math is so amazing and how to master the subject!
12 Subjects: including precalculus, algebra 1, algebra 2, geometry
Day One
1. The Natural numbers and zero
2. A well-defined collection of items
3. To calculate the numeric value
4. A number that is divisible by itself and one only
5. An element that leaves a term unchanged [a × 1 = a] or [b + 0 = b]
8. Any decimal that has a pattern that repeats forever
9. Is a relation in which no two ordered pairs have the same first element
11. ________ notation is a convenient way to write very small or large numbers
12. _____ property states that a(x + y) = ax + ay
13. The set of all possible specified elements from which subsets are formed
14. All real numbers that can be input into a function
17. The measure of the steepness of a line
19. ________ property of real numbers that states that the sum or product of a set of
22. if B = {1,2,3,4,5,6,7} and A = {1,2,5}, then A is a ______ of B
24. _______ set is a set whose elements can be counted
26. A decimal that ends is also known as
27. If a number has a factor other than one and itself, it is a _______
30. The overlap of two or more sets
31. When you multiply one number by another number, the result is the _______
32. whole numbers are _______ under addition and multiplication but NOT under subtraction and division
34. A whole number that is a divisor of another number
36. Any number that can be expressed as a ratio of two integers
37. An object contained in a set
39. The elements of a universe not contained in a given set
41. All real numbers that can be output from a function
42. a defined series of steps for carrying out a computation or process
43. ______ pair are the two numbers that are used to identify the position of a point in a plane
46. The ____ set is the set with no elements
50. When you add two or more numbers, the result is a _______ | {"url":"http://www.armoredpenguin.com/crossword/Data/2013.09/0814/08140701.075.html","timestamp":"2014-04-18T21:12:24Z","content_type":null,"content_length":"130790","record_id":"<urn:uuid:78959274-03ba-4c0b-a46c-7280a81f5b24>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00308-ip-10-147-4-33.ec2.internal.warc.gz"} |
what will be the output of a MOSFET with a square wave as input?
a square-wave, within some frequency bounds
MathGroup Archive: September 2007 [00034]
Re: Rule-based programming: declare variables to be a number
• To: mathgroup at smc.vnet.net
• Subject: [mg80797] Re: Rule-based programming: declare variables to be a number
• From: David Bailey <dave at Remove_Thisdbailey.co.uk>
• Date: Sun, 2 Sep 2007 02:51:53 -0400 (EDT)
• References: <fb83lf$82d$1@smc.twtelecom.net>
Hendrik van Hees wrote:
> I still work with Mathematica 4.0. I hope, somebody can answer my
> question despite this.
> I have written a simple rule-based program to evaluate traces of
> SU(2)-Lie algebra (Pauli matrices) to help me to obtain the Lagrangian
> of a chiral model, but that's not so important for my question.
> My problem is the following: To get the usual rules with algebraic
> expressions containing numbers and Lie-algebra variables, one needs to
> define what happens when a Lie-algebra variable is multiplied by a
> number. This works fine as long as I use really constant numbers like
> 1, 2, 1/2, etc.
> However, of course, one needs this feature also for variable numbers,
> say a coupling constant g. So I wrote Unprotect[NumberQ] and then said
> NumberQ[g]:=True,
> but then NumberQ[g^2] evaluates to False. So I have written a whole
> bunch of rules to make powers of g also numbers. It works already quite
> well, but is there a possibility to just declare a variable (like g) to
> be a number, and then make Mathematica know, that expressions like g^2,
> Sqrt[g], 1+g, etc. are also numbers?
Unless you have too much code to change, I would tackle this slightly
differently. I would represent your lie algebra variables in some way
you can recognise, and use non-commutative multiply (not Times) to
create your expressions. That way, everything that is not a lie algebra
variable is an ordinary number and it is fairly easy to define rules
that will combine these and apply the appropriate commutation rules to
the lie algebra variables.
As others have commented, it is not a good idea to write code like
NumericQ[g]=True because this changes the basic operation of
Mathematica. For example, such code might work OK until you try to
combine it with some more code that needs NumericQ for something else!
David Bailey
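The failure mode described in this thread — declaring NumberQ[g] true but finding NumberQ[g^2] false — comes from the predicate not recursing over compound expressions (Mathematica's NumericQ does propagate through numeric functions, which is one reason overriding NumberQ is discouraged). A minimal sketch of the recursive idea, in Python over a toy expression tree; all names here are illustrative, not Mathematica's API:

```python
# Toy expression tree: numbers, symbols (strings), and compound nodes
# like ('pow', 'g', 2) or ('plus', 1, 'g').
declared_numeric = {"g"}  # symbols the user has declared to be numeric

def is_numeric(expr):
    """A flat check on the expression head alone would fail for g**2;
    recursing over the arguments makes numericness propagate automatically."""
    if isinstance(expr, (int, float)):
        return True
    if isinstance(expr, str):            # a symbol
        return expr in declared_numeric
    op, *args = expr                     # compound node
    return op in {"plus", "times", "pow", "sqrt"} and all(is_numeric(a) for a in args)

# g is numeric, so g^2, sqrt(g) and 1 + g all come out numeric too:
checks = [is_numeric("g"), is_numeric(("pow", "g", 2)),
          is_numeric(("sqrt", "g")), is_numeric(("plus", 1, "g"))]
```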
Algorithms book
Free Algorithms textbook – because you never know when you’ll need a different algorithm.
3 thoughts on “Algorithms book”
2. The last such book I saw was the seminal work done by Donald Knuth called "The Art of Computer Programming", Volumes 1-3. I would still recommend this book to anyone who is interested in this
3. //(c)Yarco; all possible combinations of N-1-nes and K-0-es; enjoy..and..yeh…I //know it’s a crapy code…writen in less than 30 mins though;)))))
void Combinations(int zero, int one);
void Recurs(int zero, int one);
void main()
{ int zero, one;
printf("Please enter the amount of zeros and ones!!!:_)");
void Combinations(int zero, int one)
{ static char *string = (char*)calloc(1, zero + one + 2), *StartPoint;
int counter, possition = 1;
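The same output — all strings containing N ones and K zeros — falls out much more cleanly by choosing the positions of the ones instead of recursing by hand. A hedged Python sketch (the function name is mine, not from the comment above):

```python
from itertools import combinations

def one_zero_strings(ones, zeros):
    """Yield every string containing exactly `ones` 1s and `zeros` 0s,
    by choosing which positions hold the 1s."""
    n = ones + zeros
    for positions in combinations(range(n), ones):
        yield "".join("1" if i in positions else "0" for i in range(n))

result = sorted(one_zero_strings(2, 1))   # e.g. 2 ones, 1 zero
```

For N ones and K zeros this produces C(N+K, N) strings, each exactly once.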
August 13th 2009, 01:58 PM
Jan had the same summer job for four summers, 1993 through 1996, earning $250 in 1993, $325 in 1994, $400 in 1995, and $475 in 1996.
a. what is the slope of this line? what does it represent?
It is 75 and it represents the yearly increase.
b. What points on this line are meaningful in this context?
Aren't they all equally meaningful?
c. Guess what Jan's earnings were for 1992 and 1998, assuming the same summer job.
1992 = $175 1998 = $625
d. Write an inequality that states that Jan's earnings in 1998 were within 10% of the amount you guessed.
This is the question i really needed help on.
I set up l 625 - X l < 62.5
-62.5 < 625 - X < 62.5
562.5 < X < 687.5
Is this correct?
August 13th 2009, 02:03 PM
Matt Westwood
I believe so. That's how I would answer it (although I may be naughty and leave it as |625-X|< 62.5 because (a) it's technically an inequality, and (b) I'm lazy and can't be bothered to do
arithmetic, and (c) it doesn't *say* you've got to work it out into a particular form).
As for "meaningful" I don't know what that means in this context.
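The arithmetic in this thread is easy to verify mechanically: fit the line through the four data points, extrapolate to 1992 and 1998, and check the 10% band. A quick Python sketch:

```python
# Jan's earnings data from the problem.
years = [1993, 1994, 1995, 1996]
earnings = [250, 325, 400, 475]

# Slope of the line: yearly increase.
slope = (earnings[-1] - earnings[0]) / (years[-1] - years[0])

def predict(year):
    return earnings[0] + slope * (year - years[0])

guess_1992 = predict(1992)
guess_1998 = predict(1998)

# "Within 10% of the guess": |625 - X| <= 62.5, i.e. 562.5 <= X <= 687.5.
low = guess_1998 - 0.10 * guess_1998
high = guess_1998 + 0.10 * guess_1998
```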
August 13th 2009, 02:09 PM
I think so; any predicted points before or after this range may not be as meaningful. For example, if you were to find a value for 1989 it would be negative.
It is!
August 13th 2009, 02:13 PM
Thank you!!!
And I really appreciate your help on the lattice point question.
I kept on drawing the graph thinking I got the points wrong.
Kumo Generated Tatami Demos
Kumo Generated Tatami Demos Homepage
The proofs linked to this page were all automatically generated by an early prototype of the Kumo system. They allow you to browse tatami proofs, but not to build them with Kumo; for more on Kumo,
see the Kumo homepage.
Note: "Tatami" are natural fiber mats used in traditional Japanese homes. The size of a room is measured by the number of tatami on its floor, where each tatami is a rectangle, about 5 by 3 feet.
Thus a 2 tatami room, like a 2 tatami proof, is pretty small, an 8 tatami room (or proof) is ok, but a 12 tatami room (or proof) is getting large, and should probably be subdivided.
Tatami are cool, refreshing and aromatic; we hope you find the tatami backgrounds helpful while browsing our proofs.
These demos assume familiarity with many (order) sorted algebra and with OBJ; without this background, you will not be able to understand the proofs. Suitable material can be found in the following
two books: The coinduction proofs will also require familiarity with hidden (order) sorted algebra. See: The following examples are currently available for browsing:
Note: At this moment, examples 4, 6 and 7 are still in a preliminary form; in particular, we have not yet installed their homepages. Also, example 6 was produced by a now obsolete version of Kumo and
therefore has somewhat different formats from the other examples. To the Links Project homepage.
For information on the CafeOBJ project.
To the UCSD Meaning and Computation Lab homepage. 3 May 1998 | {"url":"http://cseweb.ucsd.edu/groups/tatami/demos/","timestamp":"2014-04-20T13:21:17Z","content_type":null,"content_length":"3816","record_id":"<urn:uuid:9b462c1a-06d3-45bb-8e0c-8346b22221d8>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
Browsing Department of Mathematics by Title
• (Massachusetts Institute of Technology, 2008)
In the first part of this work, we study a long standing open problem on the mixing time of Kac's random walk on SO(n, R) by random rotations. We obtain an upper bound t_mix = O(n^{2.5} log n) for
the weak convergence which ...
• (Massachusetts Institute of Technology, 1990)
• (Massachusetts Institute of Technology, 1996)
• (Massachusetts Institute of Technology, 1991)
• (Massachusetts Institute of Technology, 1994)
• (Massachusetts Institute of Technology, 1990)
• (Massachusetts Institute of Technology, 2009)
In recent years there has been a great deal of new activity at the interface of biology and computation. This has largely been driven by the massive influx of data from new experimental
technologies, particularly ...
• (Massachusetts Institute of Technology, 1947)
• (Massachusetts Institute of Technology, 1995)
• (Massachusetts Institute of Technology, 1976)
• (Massachusetts Institute of Technology, 2004)
Consider the unnormalized Ricci flow ...Richard Hamilton showed that if the curvature operator is uniformly bounded under the flow for all times ... then the solution can be extended beyond T. In
the thesis we prove that ...
• (Massachusetts Institute of Technology, 2004)
(cont.) yield a new proof of a result of Mochizuki on Frobenius-unstable bundles for C general, and hence obtaining a self-contained proof of the resulting formula for the degree of V₂.
formula for the degree of V₂.
• (Massachusetts Institute of Technology, 2003)
In this thesis, the Hilbert scheme of lines on smooth hypersurfaces is studied. The main result is that the Hilbert scheme of lines on any smooth Fano hypersurface of degree d ≤ 6 in ... has the expected dimension 2n - ...
the expected dimension 2n - ...
• (Massachusetts Institute of Technology, 2004)
In this thesis, I studied the stability of local complex singularity exponents (lcse) for holomorphic functions whose zero sets have only isolated singularities. For a given holomorphic function
f defined on a neighborhood ...
• (Massachusetts Institute of Technology, 2002)
We show that for a large class of torsionfree classifying spaces, K-theory filtered ring is an invariant of the genus. We apply this result in two ways. First, we use it to show that the
powerseries ring on n indeterminates ...
• (Massachusetts Institute of Technology, 1998)
• (Massachusetts Institute of Technology, 1991)
• (Massachusetts Institute of Technology, 2005)
The category of Segal spaces was proposed by Charles Rezk in 2000 as a suitable candidate for a model category for homotopy theories. We show that Quillen functors induce morphisms in this
category and that the morphisms ...
• (Massachusetts Institute of Technology, 2001)
In this thesis, two algorithms for protein structural motif recognition are presented. A program is described which successfully recognizes the occurrence of the right-handed parallel β-helix fold from protein sequence ...
fold from protein sequence ... | {"url":"http://dspace.mit.edu/handle/1721.1/7841/browse?rpp=20&order=ASC&sort_by=1&etal=-1&type=title&starts_with=J","timestamp":"2014-04-19T22:25:38Z","content_type":null,"content_length":"41085","record_id":"<urn:uuid:5e2bf524-46f4-49f0-907e-1fda12dbef95>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00140-ip-10-147-4-33.ec2.internal.warc.gz"} |
I am having a hard time with 3 Algebra questions, here's one of them: - 1/2x + 2 = -x + 7 (I figured out one of the other questions, but I can't figure out these 2, the other one is below)
About the other 3 questions, I figured out one of them, but the last one is: 4d − 12 = 12 − 4d solve for d. I really dont want answers, just help
nice flag >>.>>
Haha, thanks @amber96
No need to thank me for your gorgeous flag, I like a nice cup of bigotry when I wake up in the morning!
get all the x terms on one side of the equation
\[- \frac12x + 2 = -x + 7\] add \(x\) to both sides, then take away \(2\) from both sides
4d - 24 = -4d you want the terms with d on one side you have -24 on the left side, so "move" the 4d to the right side. you do this by adding -4d to both sides of the equation -4d + 4d -24= -4d-4d
now simplify
you should get -24= -8d now divide both sides by -8 to find d
So, d is equal to 3
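Both equations in this thread have the form a·x + b = c·x + d, so one small helper covers them. A sketch in Python, for checking answers rather than doing the homework:

```python
def solve_linear(a, b, c, d):
    """Solve a*x + b = c*x + d for x (assumes a != c)."""
    return (d - b) / (a - c)

x = solve_linear(-0.5, 2, -1, 7)   # -1/2 x + 2 = -x + 7
d = solve_linear(4, -12, -4, 12)   # 4d - 12 = 12 - 4d
```

Substituting back confirms both: x = 10 gives -3 on each side, and d = 3 gives 0 on each side.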
Haskell Code by HsColour
{-# OPTIONS -fno-implicit-prelude -fglasgow-exts #-}
{- |
Copyright : (c) Mikael Johansson 2006
Maintainer : mik@math.uni-jena.de
Stability : provisional
Portability : requires multi-parameter type classes
Routines and abstractions for Matrices and
basic linear algebra over fields or rings.
module MathObj.Matrix where
import qualified Algebra.Module as Module
import qualified Algebra.Vector as Vector
import qualified Algebra.Ring as Ring
import qualified Algebra.Additive as Additive
import Algebra.Module((*>))
import Algebra.Ring((*), fromInteger, scalarProduct)
import Algebra.Additive((+), (-), zero, subtract)
import Data.Array (Array, listArray, elems, bounds, (!), ixmap, range)
import qualified Data.List as List
import Control.Monad (liftM2)
import Control.Exception (assert)
import NumericPrelude.List (outerProduct)
import NumericPrelude(Integer)
import PreludeBase hiding (zipWith)
{- |
A matrix is a two-dimensional array of ring elements, indexed by integers.
-}
data {-(Ring.C a) =>-}
T a = Cons (Array (Integer, Integer) a) deriving (Eq,Ord,Read)
{- |
Transposition of matrices is just transposition in the sense of
exchanging the two indices.
-}
-- candidate for Utility
twist :: (Integer,Integer) -> (Integer,Integer)
twist (x,y) = (y,x)
transpose :: T a -> T a
transpose (Cons m) =
let (lower,upper) = bounds m
in Cons (ixmap (twist lower, twist upper) twist m)
rows :: T a -> [[a]]
rows (Cons m) =
let ((lr,lc), (ur,uc)) = bounds m
in outerProduct (curry(m!)) (range (lr,ur)) (range (lc,uc))
columns :: T a -> [[a]]
columns (Cons m) =
let ((lr,lc), (ur,uc)) = bounds m
in outerProduct (curry(m!)) (range (lc,uc)) (range (lr,ur))
fromList :: Integer -> Integer -> [a] -> T a
fromList m n xs = Cons (listArray ((1,1),(m,n)) xs)
instance (Ring.C a, Show a) => Show (T a) where
show m = List.unlines $ map (\r -> "(" ++ r ++ ")")
$ map (unwords . map show) $ rows m
dimension :: T a -> (Integer,Integer)
dimension (Cons m) = uncurry subtract (bounds m) + (1,1)
numRows :: T a -> Integer
numRows = fst . dimension
numColumns :: T a -> Integer
numColumns = snd . dimension
-- These implementations may benefit from a better exception than
-- just assertions to validate dimensionalities
instance (Additive.C a) => Additive.C (T a) where
(+) = zipWith (+)
(-) = zipWith (-)
zero = zeroMatrix 1 1
zipWith :: (a -> b -> c) -> T a -> T b -> T c
zipWith op mM@(Cons m) nM@(Cons n) =
let d = dimension mM
em = elems m
en = elems n
in assert (d == dimension nM) $
uncurry fromList d (List.zipWith op em en)
zeroMatrix :: (Additive.C a) => Integer -> Integer -> T a
zeroMatrix m n = fromList m n $
List.repeat zero
-- List.replicate (fromInteger (m*n)) zero
instance (Ring.C a) => Ring.C (T a) where
mM * nM = assert (numColumns mM == numRows nM) $
             fromList (numRows mM) (numColumns nM)
                 (liftM2 scalarProduct (rows mM) (columns nM))
fromInteger n = fromList 1 1 [fromInteger n]
instance Functor T where
fmap f (Cons m) = Cons (fmap f m)
instance Vector.C T where
zero = zero
(<+>) = (+)
(*>) = Vector.functorScale
instance Module.C a b => Module.C a (T b) where
x *> m = fmap (x*>) m
{- |
What more do we need from our matrix class? We have addition,
subtraction and multiplication, and thus composition of generic
free-module-maps. We're going to want to solve linear equations with
or without fields underneath, so we're going to want an implementation
of the Gaussian algorithm as well as most probably Smith normal
form. Determinants are cool, and these are to be calculated either
with the Gaussian algorithm or some other goodish method.
-}
{- |
We'll want generic linear equation solving, returning one solution,
any solution really, or nothing. Basically, this is asking for the
preimage of a given vector over the given map, so
a_11 x_1 + .. + a_1n x_n = y_1
a_m1 x_1 + .. + a_mn a_n = y_m
has really x_1,...,x_n as a preimage of the vector y_1,..,y_m under
the map (a_ij), since obviously y_1,..,y_m = (a_ij) x_1,..,x_n
So, generic linear equation solving boils down to the function
-}
preimage :: (Ring.C a) => T a -> T a -> Maybe (T a)
preimage a y = assert
   (numRows a == numRows y && -- they match
    numColumns y == 1) -- and y is a column vector
   (error "preimage: not yet implemented") -- placeholder body until Gaussian elimination is added
-- Cf. /usr/lib/hugs/demos/Matrix.hs
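Matrix multiplication pairs each row of the left matrix with each column of the right, so the inner dimensions must agree and the product has shape (rows of m) × (columns of n). A minimal Python sketch of that shape logic, independent of the Haskell module above:

```python
def mat_mul(m, n):
    """Multiply row-major matrices m (p x q) and n (q x r) -> p x r.
    Asserts that the inner dimensions agree before computing dot products."""
    p, q = len(m), len(m[0])
    q2, r = len(n), len(n[0])
    assert q == q2, "numColumns m must equal numRows n"
    return [[sum(m[i][k] * n[k][j] for k in range(q)) for j in range(r)]
            for i in range(p)]

prod = mat_mul([[1, 2, 3],
                [4, 5, 6]],          # 2 x 3
               [[1, 0],
                [0, 1],
                [1, 1]])             # 3 x 2  ->  product is 2 x 2
```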
A number is selected at random from first 200 natural numbers. Find the probability that the number is divisible by 6 or 8
What is the probability of selecting a number divisible by 6?
Look at the number of multiples of 6 in the first 200 natural numbers and the number of multiples of 8. But then you have double counted some numbers (for example 24), so you must take those out.
Which numbers will you double count?
You are missing some. You got the multiples of 48, but there are others, like 72.
thanks jaebond actually we have to consider the numbers divisible by 6 + numbers divisible by 8 - Numbers divisible by LCM of 6 and 8 which is 24 I was only considering numbers divisible by 48
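The corrected reasoning — count multiples of 6, plus multiples of 8, minus multiples of lcm(6, 8) = 24 — is quick to verify by brute force:

```python
from math import lcm  # Python 3.9+

N = 200
mult6 = N // 6             # multiples of 6 up to 200
mult8 = N // 8             # multiples of 8 up to 200
overlap = N // lcm(6, 8)   # multiples of 24, counted twice above

count = mult6 + mult8 - overlap            # inclusion-exclusion
brute = sum(1 for n in range(1, N + 1)
            if n % 6 == 0 or n % 8 == 0)   # direct check
probability = count / N
```

Both counts agree, and the probability comes out to 50/200 = 1/4.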
Math::NumSeq -- number sequences
# only a base class, use one of the actual classes, such as
use Math::NumSeq::Squares;
my $seq = Math::NumSeq::Squares->new;
my ($i, $value) = $seq->next;
This is a base class for some number sequences. Sequence objects can iterate through values and some sequences have random access and/or a predicate test.
The idea is to generate things like squares or primes in a generic way. Some sequences, like squares, are so easy there's no need for a class except for the genericness. Other sequences are trickier
and an iterator is a good way to go through values. The iterating tries to be progressive, so not calculating too far ahead yet doing reasonable size chunks for efficiency.
Sequence values have an integer index "i" starting either from i=0 or i=1 or whatever best suits the sequence. The values can be anything, positive, negative, fractional, etc.
The intention is that all modules Math::NumSeq::Foo are sequence classes, and that supporting things are deeper, such as under Math::NumSeq::Something::Helper or Math::NumSeq::Base::SharedStuff.
The various methods try to support Math::BigInt and similar overloaded number types. So for instance pred() might be applied to test a big value, or ith() on a bigint to preserve precision from some
rapidly growing sequence. Infinities and NaNs give some kind of NaN or infinite return (some unspecified kind as yet).
In the following "Foo" is one of the subclass names.
Create and return a new sequence object.
Return the next index and value in the sequence.
Most sequences are infinite and for them there's always a next value. But if $seq is finite then at the end the return is no values. So for example
while (my ($i, $value) = $seq->next) {
print "$i $value\n";
Rewind the sequence to its starting point. The next call to next() will be the initial $i,$value again.
See "Optional Methods" below for possible arbitrary "seeks".
Return the current i position. This is the i which the next call to next() will return.
Return the first index $i in the sequence. This is the position rewind() returns to.
Return a human-readable description of the sequence. This might be translated into the locale language, though there are no message translations yet.
Return the minimum or maximum value taken by values in the sequence, or undef if unknown or infinity.
Return something if the sequence has $key (a string) characteristic, or undef if not. This is intended as a loose set of features or properties a sequence can have to describe itself.
digits integer or undef, the radix if seq is digits
count boolean, true if values are counts of something
smaller boolean, true if value[i] < i generally
integer boolean, true if all values are integers
increasing boolean, true if value[i+1] > value[i] always
non_decreasing boolean, true if value[i+1] >= value[i] always
increasing_from_i integer, i for which value[i+1] > value[i]
non_decreasing_from_i integer, i for which value[i+1] >= value[i]
value_is_radix boolean, value is radix for i
value_is_radix means each value is a radix applying to the i index. For example RepdigitRadix value is a radix for which i is a repdigit. These values might also be 0 or 1 or -1 or some such
non-radix to indicate no radix.
Return the A-number (a string) for $seq in Sloane's Online Encyclopedia of Integer Sequences, or return undef if not in the OEIS or not known. For example
my $seq = Math::NumSeq::Squares->new;
my $anum = $seq->oeis_anum;
# gives $anum = "A000290"
The web page for that is then
Sometimes the OEIS has duplicates, ie. two A-numbers which are the same sequence. When that's accidental or historical $seq->oeis_anum() is whichever is reckoned the primary one.
Return an arrayref or list describing the parameters taken by a given class. This meant to help making widgets etc for user interaction in a GUI. Each element is a hashref
name => parameter key arg for new()
share_key => string, or undef
description => human readable string
type => string "integer","boolean","enum" etc
default => value
minimum => number, or undef
maximum => number, or undef
width => integer, suggested display size
choices => for enum, an arrayref
choices_display => for enum, an arrayref
type is a string, one of
"filename" is separate from "string" since it might require subtly different handling to ensure it reaches Perl as a byte string, whereas a "string" type might in principle take Perl wide chars.
For "enum" the choices field is an arrayref of possible values, such as
{ name => "flavour",
type => "enum",
choices => ["strawberry","chocolate"],
choices_display, if provided, is human-readable strings for those choices, possibly translated into another language (though there's no translations yet).
minimum and maximum are omitted (or undef) if there's no hard limit on the parameter.
share_key is designed to indicate when parameters from different NumSeq classes can be a single control widget in a GUI etc. Normally the name is enough, but when the same name has slightly
different meanings in different classes a share_key keeps different meanings separate.
The following methods are only implemented for some sequences since it's sometimes difficult to generate an arbitrary numbered element etc. Check $seq->can('ith') etc before using.
Move the current i so next() will return $i or $value on the next call. If $value is not in the sequence then move so as to return the next higher value which is.
Usually seek_to_value() only makes sense for sequences where all values are distinct, so that a value is an unambiguous location.
Return the $i'th value in the sequence. Only some sequence classes implement this method.
Return two values ith($i) and ith($i+1) from the sequence. This method can be used whenever ith() exists. can('ith_pair') says whether ith_pair() can be used in the usual way (and gives a
For some sequences a pair of values can be calculated with less work than two separate ith() calls.
Return true if $value occurs in the sequence. For example for the squares this returns true if $value is a square or false if not.
Return the index i of $value. If $value is not in the sequence then value_to_i() returns undef, or value_to_i_ceil() returns the i of the next higher value which is, value_to_i_floor() the i of
the next lower value.
These methods usually only make sense for monotonic increasing sequences, or perhaps non-decreasing so with some repeating values.
Return an estimate of the i corresponding to $value.
The accuracy of this estimate is unspecified, but can at least hint at the growth rate of the sequence. For example if making an "intersection" checking for given values in the sequence then if
the estimated i is small it may be fastest to go through the sequence by next() and compare, rather than apply pred() to each target.
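The interface described above (next/rewind, plus optional ith/pred) maps naturally onto a small iterator class. Here is a sketch in Python of what a Squares sequence looks like under this protocol — the method names mirror the POD, but this is an illustration, not the Perl module:

```python
import math

class Squares:
    """NumSeq-style sequence of squares: i starts at 0, value = i*i."""
    def __init__(self):
        self.rewind()

    def i_start(self):
        return 0

    def rewind(self):
        # Reset so the next call to next() returns the initial (i, value).
        self._i = self.i_start()

    def next(self):
        i = self._i
        self._i += 1
        return i, self.ith(i)

    def ith(self, i):
        # Random access: the i'th value in the sequence.
        return i * i

    def pred(self, value):
        # Predicate: does `value` occur in the sequence?
        if value < 0:
            return False
        r = math.isqrt(value)
        return r * r == value

seq = Squares()
first = [seq.next() for _ in range(4)]
```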
Math::NumSeq::Squares, Math::NumSeq::Cubes, Math::NumSeq::Pronic, Math::NumSeq::Triangular, Math::NumSeq::Polygonal, Math::NumSeq::Tetrahedral, Math::NumSeq::StarNumbers, Math::NumSeq::Powerful,
Math::NumSeq::PowerPart, Math::NumSeq::PowerFlip
Math::NumSeq::Even, Math::NumSeq::Odd, Math::NumSeq::All, Math::NumSeq::AllDigits, Math::NumSeq::ConcatNumbers, Math::NumSeq::Runs
Math::NumSeq::Primes, Math::NumSeq::TwinPrimes, Math::NumSeq::SophieGermainPrimes, Math::NumSeq::AlmostPrimes, Math::NumSeq::DeletablePrimes, Math::NumSeq::Emirps, Math::NumSeq::MobiusFunction,
Math::NumSeq::LiouvilleFunction, Math::NumSeq::DivisorCount, Math::NumSeq::GoldbachCount, Math::NumSeq::LemoineCount, Math::NumSeq::PythagoreanHypots
Math::NumSeq::PrimeFactorCount, Math::NumSeq::AllPrimeFactors
Math::NumSeq::ErdosSelfridgeClass, Math::NumSeq::PrimeIndexOrder, Math::NumSeq::PrimeIndexPrimes
Math::NumSeq::Totient, Math::NumSeq::TotientCumulative, Math::NumSeq::TotientSteps, Math::NumSeq::TotientStepsSum, Math::NumSeq::TotientPerfect, Math::NumSeq::DedekindPsiCumulative,
Math::NumSeq::DedekindPsiSteps, Math::NumSeq::Abundant, Math::NumSeq::PolignacObstinate
Math::NumSeq::Factorials, Math::NumSeq::Primorials, Math::NumSeq::Fibonacci, Math::NumSeq::LucasNumbers, Math::NumSeq::FibonacciWord, Math::NumSeq::FibonacciRepresentations,
Math::NumSeq::PisanoPeriod, Math::NumSeq::PisanoPeriodSteps, Math::NumSeq::Fibbinary, Math::NumSeq::FibbinaryBitCount
Math::NumSeq::Catalan, Math::NumSeq::BalancedBinary, Math::NumSeq::Pell, Math::NumSeq::Tribonacci, Math::NumSeq::Perrin, Math::NumSeq::SpiroFibonacci
Math::NumSeq::FractionDigits, Math::NumSeq::SqrtDigits, Math::NumSeq::SqrtEngel, Math::NumSeq::SqrtContinued, Math::NumSeq::SqrtContinuedPeriod, Math::NumSeq::AlgebraicContinued
Math::NumSeq::DigitCount, Math::NumSeq::DigitCountLow, Math::NumSeq::DigitCountHigh
Math::NumSeq::DigitLength, Math::NumSeq::DigitLengthCumulative, Math::NumSeq::SelfLengthCumulative, Math::NumSeq::DigitProduct, Math::NumSeq::DigitProductSteps, Math::NumSeq::DigitSum,
Math::NumSeq::DigitSumModulo, Math::NumSeq::RadixWithoutDigit, Math::NumSeq::RadixConversion, Math::NumSeq::MaxDigitCount
Math::NumSeq::Palindromes, Math::NumSeq::Beastly, Math::NumSeq::Repdigits, Math::NumSeq::RepdigitAny, Math::NumSeq::RepdigitRadix, Math::NumSeq::UndulatingNumbers, Math::NumSeq::HarshadNumbers,
Math::NumSeq::MoranNumbers, Math::NumSeq::HappyNumbers, Math::NumSeq::HappySteps
Math::NumSeq::CullenNumbers, Math::NumSeq::ProthNumbers, Math::NumSeq::WoodallNumbers, Math::NumSeq::BaumSweet, Math::NumSeq::GolayRudinShapiro, Math::NumSeq::GolayRudinShapiroCumulative,
Math::NumSeq::MephistoWaltz, Math::NumSeq::HafermanCarpet, Math::NumSeq::KlarnerRado, Math::NumSeq::UlamSequence, Math::NumSeq::ReRound, Math::NumSeq::ReReplace, Math::NumSeq::LuckyNumbers
Math::NumSeq::CollatzSteps, Math::NumSeq::ReverseAdd, Math::NumSeq::ReverseAddSteps, Math::NumSeq::JugglerSteps, Math::NumSeq::SternDiatomic, Math::NumSeq::NumAronson, Math::NumSeq::HofstadterFigure,
Math::NumSeq::Kolakoski, Math::NumSeq::GolombSequence, Math::NumSeq::AsciiSelf, Math::NumSeq::Multiples, Math::NumSeq::Modulo
Math::NumSeq::Expression, Math::NumSeq::File, Math::NumSeq::OEIS
Math::NumSeq::AlphabeticalLength, Math::NumSeq::AlphabeticalLengthSteps, Math::NumSeq::SevenSegments (in the Math-NumSeq-Alpha dist)
Math::NumSeq::Aronson (in the Math-Aronson dist)
Math::NumSeq::PlanePathCoord, Math::NumSeq::PlanePathDelta, Math::NumSeq::PlanePathTurn, Math::NumSeq::PlanePathN (in the Math-PlanePath dist)
Math::Sequence and Math::Series, for symbolic recursive sequence definitions
math-image, for displaying images with the NumSeq sequences
Copyright 2010, 2011, 2012, 2013, 2014 Kevin Ryde
Math-NumSeq is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your
option) any later version.
Math-NumSeq is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
Public License for more details.
You should have received a copy of the GNU General Public License along with Math-NumSeq. If not, see <http://www.gnu.org/licenses/>. | {"url":"http://search.cpan.org/dist/Math-NumSeq/lib/Math/NumSeq.pm","timestamp":"2014-04-21T13:49:27Z","content_type":null,"content_length":"40664","record_id":"<urn:uuid:25e3a7a4-b327-4f6d-b3c2-d7c89d79d72b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00373-ip-10-147-4-33.ec2.internal.warc.gz"} |
I need help with mathematical coding.......911
09-30-2001 #1
Registered User
Join Date
Sep 2001
I need help with mathematical coding.......911
Hello fellow programmers
Hello, it is Semma. I have an assignment where I have to incorporate an integer for meters, where the user has to answer these questions:
(Ex.A) What are the dimensions of your yard in meters? 50 75
(Ex.B) What are the dimensions of your house in meters? 33 25
This is what I have written :
// Questions(?)
cout<<"what are the dimensions of your yard in meteres..(?)";<<endl;
cin >>YardMeters;
cout <<"Yard in meters:"<<YardMeters;<<endl;
cout <<"What are the dimensions of your house in meters..(?)";
cin >>HouseMeters;
cout <<"House in meters:"<<HouseMeters;<<endl;
// Meter Calculation Integer
int YardMeters;
int HouseMeters;
// Calculating Dimensions
int Calculation_A;
int Calculation_B;
int Rounding_B;
int ConversionHours;
int ConversionSecond;
int ConversionMinute;
Then I have to convert the yard in meters and the house in meters to a time: at a
rate of 2 square meters per second it will take you 1462.5
seconds. Then I have to round it to the nearest second. After that it
will display 1463 seconds, which converts to 0 hour(s), 24 minute(s)
and 23 second(s).
Therefore, what code can I use to make these conversions: applying the
rate of 2 square meters per second and rounding to the
nearest second?
Thank you..........
I'm more than willing to help. But I'd like to see you make an attempt to lay out the logic, not just the output statements. When it's done you will have learned a great deal more.... Enough from the
cout<<"what are the dimensions of your yard in meteres..(?)";<<endl;
cout <<"Yard in meters:"<<YardMeters;<<endl;
cout <<"House in meters:"<<HouseMeters;<<endl;
Too many semicolons here. This is correct:
cout<<"what are the dimensions of your yard in meteres..(?)"<<endl;
cout <<"Yard in meters:"<<YardMeters<<endl;
cout <<"House in meters:"<<HouseMeters<<endl;
Making error is human, but for messing things thoroughly it takes a computer
Category of modules over a coPoisson-bialgebra
Fix a ground commutative ring $k$. A coPoisson-bialgebra is a bialgebra $H$ equipped with a linear mapping $\pi:H \rightarrow H \otimes_k H$ s.t.
• $\pi$ is a coLie bracket
• $\pi$ is a coderivation
• $\pi(ab) = \Delta(a)\pi(b)+\pi(a)\Delta(b)$
I'm mostly interested in the case of cocommutative $H$. For such $H$, coPoisson structure is morally an infinitesimal deformation away from cocommutativity (the structure needed for "coquantization"
preserving bialgebra structure). Also, I'm mostly interested in the Hopf algebra case but this doesn't seem important for the question
The category of left modules over $H$ is a $k$-linear Abelian category (just because H is a $k$-algebra) equipped with a tensor product functor (due to the coproduct $\Delta$ on H). For cocommutative
$H$ the tensor product is symmetric. The question is:
What additional structure(s) on this category are obtained from $\pi$?
EDIT: I think I figured out the answer in case $k$ is a field. In this case the tensor product functor is exact (rather than just right exact)
Introduce $h$ a formal parameter satisfying $h^2=0$. $\pi$ defines a "deformed" coproduct on $H[h]$ given by $\Delta_h(a) = \Delta(a) + h\,\pi(a)$
Thus the category of left $H[h]$-modules becomes a tensor category
Denote $\mathcal{M}$ our symmetric tensor category. The tensor product functor is $\otimes$ and its symmetric braiding is $b: X \otimes Y \rightarrow Y \otimes X$
We construct the Abelian category $\mathcal{M}[h]$. An object $X$ in $\mathcal{M}[h]$ is an object in $\mathcal{M}$ equipped with an endomorphism $h: X \rightarrow X$ s.t. $h^2=0$. A morphism in $\
mathcal{M}[h]$ is a morphism in $\mathcal{M}$ which commutes with $h$
$\mathcal{M}[h]$ comes with the following functors of interest:
• $Ker \space h: \mathcal{M}[h] \rightarrow \mathcal{M}$
• $Coker \space h: \mathcal{M}[h] \rightarrow \mathcal{M}$
• $i: \mathcal{M} \rightarrow \mathcal{M}[h]$ which sends $X$ to itself with $h=0$
• $[h]: \mathcal{M} \rightarrow \mathcal{M}[h]$; Given $X \in \mathcal{M}$ we define $X[h]$ to be $X \oplus X$ equipped with the obvious action of $h$
The desired additional structure is:
• An exact $k[h]$-linear tensor product functor $\hat{\otimes}$ on $\mathcal{M}[h]$
• Tensor functor structure on $Coker \space h$
The antisymmetry of $\pi$ imposes the following additional condition:
Note that $i(X) \hat{\otimes} Y \cong X \otimes Coker \space h_Y$ which follows by observing that $h$ annihilates $i(X) \hat{\otimes} Y$ due to $k[h]$-linearity of $\hat{\otimes}$, applying $Coker \
space h$ and using its tensor functor structure
We have the short exact sequence
$$0 \rightarrow X \xrightarrow{Ker \space h} X[h] \xrightarrow{Coker \space h} X \rightarrow 0$$
We apply $\hat{\otimes} Y[h]$ and use its exactness to get the short exact sequence
$$0 \rightarrow X \otimes Y \rightarrow X[h] \hat{\otimes} Y[h] \rightarrow X \otimes Y \rightarrow 0$$
This sequence yields an element $\epsilon(X, Y)$ of $Ext^1(X \otimes Y, X \otimes Y)$
Due to $b$, $\epsilon(X, Y)$ and $\epsilon(Y, X)$ belong to canonically isomorphic spaces. We demand
$$\epsilon(X, Y)+\epsilon(Y, X)=0$$
The problem is I need the case in which $\otimes$ is merely right exact. What currently confuses me is that in the bialgebra picture the last sequence is still exact for general $k$ but I can't prove
it in the category-theoretic language
ct.category-theory qa.quantum-algebra rt.representation-theory
I may be a little naive here but since any bialgebra is coPoisson with respect to the null coderivation is there really something to be expected on the category of all H-modules? – Nicola Ciccoli
Feb 13 '12 at 10:38
@Nicola, the structure depends on the choice of coPoisson bracket, not merely on its existence. See also my recent edit – Squark Feb 13 '12 at 21:17
Summary: Harmonic Polynomials and Dirichlet-Type Problems
Sheldon Axler and Wade Ramey
30 May 1995
Abstract. We take a new approach to harmonic polynomials via differentiation.
Surprisingly powerful results about harmonic functions can be obtained simply
by differentiating the function |x|^{2-n} and observing the patterns that emerge. This
is one of our main themes and is the route we take to Theorem 1.7, which leads
to a new proof of a harmonic decomposition theorem for homogeneous polynomials
(Corollary 1.8) and a new proof of the identity in Corollary 1.10. We then discuss
a fast algorithm for computing the Poisson integral of any polynomial. (Note: The
algorithm involves differentiation, but no integration.) We show how this algorithm
can be used for many other Dirichlet-type problems with polynomial data. Finally,
we show how Lemma 1.4 leads to the identity in (3.2), yielding a new and simple
proof that the Kelvin transform preserves harmonic functions.
1. Derivatives of |x|^{2-n}
Unless otherwise stated, we work in R^n, n > 2; the function |x|^{2-n} is then harmonic and nonconstant on R^n \ {0}. (When n = 2 we need to replace |x|^{2-n} with log |x|; the minor modifications needed in this case are discussed in Section 4.)
The Region Merging Algorithm
Next: Creating the Initial Regions Up: Theory Previous: Cost Function Formulation
An alternative solution to the problem is to treat it as a set of potentially inconsistent constraints:
The problem is then one of finding the set of consistent constraints which lead to a low (ideally minimal) cost.
Consider the difference in cost between
Since, by definition of rounding,
The smallest cost difference is:
Choose the interface where this cost difference is the largest. That is, choose the pair of regions where getting the offset ``wrong'' is the most disastrous.
Therefore, select the pair of regions according to:
Once this pair is selected, merge them into a new, single region
From these, calculate
Keep merging regions until there are no more interfaces between regions (usually when only one region remains if there is a single connected set of regions initially).
As each iteration of this algorithm merges two regions, the number of iterations required is N - 1 when there are N initial regions.
Mark Jenkinson 2001-10-12
Embedding in Linear Time
Results 1 - 10 of 19
- PROC. 5TH ANNUAL EUROPEAN SYMP. ON ALGORITHMS (ESA '01), 2001
"... We consider graph drawings in which vertices are assigned to layers and edges are drawn as straight line-segments between vertices on adjacent layers. We prove that graphs admitting
crossing-free h-layer drawings (for fixed h) have bounded pathwidth. We then use a path decomposition as the basis for ..."
Cited by 21 (9 self)
We consider graph drawings in which vertices are assigned to layers and edges are drawn as straight line-segments between vertices on adjacent layers. We prove that graphs admitting crossing-free
h-layer drawings (for fixed h) have bounded pathwidth. We then use a path decomposition as the basis for a linear-time algorithm to decide if a graph has a crossing-free h-layer drawing (for fixed
h). This algorithm is extended to solve a large number of related problems, including allowing at most k crossings, or removing at most r edges to leave a crossing-free drawing (for fixed k or r). If
the number of crossings or deleted edges is a non-fixed parameter then these problems are NP-complete. For each setting, we can also permit downward drawings of directed graphs and drawings in which
edges may span multiple layers, in which case the total span or the maximum span of edges can be minimized. In contrast to the so-called Sugiyama method for layered graph drawing, our algorithms do
not assume a preassignment of the vertices to layers.
- Journal of Graph Algorithms and Applications , 2005
"... A graph with a given partition of the vertices on k concentric circles is radial level planar if there is a vertex permutation such that the edges can be routed strictly outwards without
crossings. Radial level planarity extends level planarity, where the vertices are placed on k horizontal lines an ..."
Cited by 19 (9 self)
A graph with a given partition of the vertices on k concentric circles is radial level planar if there is a vertex permutation such that the edges can be routed strictly outwards without crossings.
Radial level planarity extends level planarity, where the vertices are placed on k horizontal lines and the edges are routed strictly downwards without crossings. The extension is characterised by
rings, which are level non-planar biconnected components. Our main results are linear time algorithms for radial level planarity testing and for computing an embedding. We introduce PQR-trees as a
new data structure where R-nodes and associated templates for their manipulation are introduced to deal with rings. Our algorithms extend level planarity testing and embedding algorithms which use
- In Proc. 7th International Workshop on Algorithms and Data Structures (WADS’01 , 2003
"... In this paper, we consider the problem of finding a mixed upward planarization of a mixed graph, i.e., a graph with directed and undirected edges. The problem is a generalization of the
planarization problem for undirected graphs and is motivated by several applications in graph drawing. ..."
Cited by 15 (2 self)
In this paper, we consider the problem of finding a mixed upward planarization of a mixed graph, i.e., a graph with directed and undirected edges. The problem is a generalization of the planarization
problem for undirected graphs and is motivated by several applications in graph drawing.
- 14TH SYMPOSIUM ON GRAPH DRAWING (GD), VOLUME 4372 OF LECTURE NOTES IN COMPUTER SCIENCE , 2006
"... Consider a graph G drawn in the plane so that each vertex lies on a distinct horizontal line ℓj = {(x, j) | x ∈ R}. The bijection φ that maps the set of n vertices V to a set of distinct
horizontal lines ℓj forms a labeling of the vertices. Such a graph G with the labeling φ is called an n-level gr ..."
Cited by 13 (7 self)
Consider a graph G drawn in the plane so that each vertex lies on a distinct horizontal line ℓj = {(x, j) | x ∈ R}. The bijection φ that maps the set of n vertices V to a set of distinct horizontal
lines ℓj forms a labeling of the vertices. Such a graph G with the labeling φ is called an n-level graph and is said to be n-level planar if it can be drawn with straight-line edges and no crossings
while keeping each vertex on its own level. In this paper, we consider the class of trees that are n-level planar regardless of their labeling. We call such trees unlabeled level planar (ULP). Our
contributions are three-fold. First, we provide a complete characterization of ULP trees in terms of a pair of forbidden subtrees. Second, we show how to draw ULP trees in linear time. Third, we
provide a linear time recognition algorithm for ULP trees.
, 2007
"... A geometric simultaneous embedding of two graphs G1 = (V1, E1) and G2 = (V2, E2) with a bijective mapping of their vertex sets γ: V1 → V2 is a pair of planar straight-line drawings Γ1 of G1 and
Γ2 of G2, such that each vertex v2 = γ(v1) is mapped in Γ2 to the same point where v1 is mapped in Γ1, wh ..."
Cited by 7 (2 self)
A geometric simultaneous embedding of two graphs G1 = (V1, E1) and G2 = (V2, E2) with a bijective mapping of their vertex sets γ: V1 → V2 is a pair of planar straight-line drawings Γ1 of G1 and Γ2 of
G2, such that each vertex v2 = γ(v1) is mapped in Γ2 to the same point where v1 is mapped in Γ1, where v1 ∈ V1 and v2 ∈ V2. In this paper we examine several constrained versions and a relaxed version
of the geometric simultaneous embedding problem. We show that if the input graphs are assumed to share no common edges this does not seem to yield large classes of graphs that can be simultaneously
embedded. Further, if a prescribed combinatorial embedding for each input graph must be preserved, then we can answer some of the problems that are still open for geometric simultaneous embedding.
Finally, we present some positive and negative results on the near-simultaneous embedding problem, in which vertices are not forced to be placed exactly in the same, but just in “near” points in
different drawings.
- Proc. Graph Drawing, GD 2007, volume 4875 of LNCS , 2007
"... Abstract. We add two minimum level nonplanar (MLNP) patterns for trees to the previous set of tree patterns given by Healy et al. [3]. Neither of these patterns match any of the previous
patterns. We show that this new set of patterns completely characterize level planar trees. 1 ..."
Cited by 6 (3 self)
Abstract. We add two minimum level nonplanar (MLNP) patterns for trees to the previous set of tree patterns given by Healy et al. [3]. Neither of these patterns match any of the previous patterns. We
show that this new set of patterns completely characterize level planar trees. 1
"... A level graph is a directed acyclic graph with a level assignment for each node. Such graphs play a prominent role in graph drawing. They express strict dependencies and occur in many areas,
e.g., in scheduling problems and program inheritance structures. In this paper we extend level graphs to cyc ..."
Cited by 6 (4 self)
A level graph is a directed acyclic graph with a level assignment for each node. Such graphs play a prominent role in graph drawing. They express strict dependencies and occur in many areas, e.g., in
scheduling problems and program inheritance structures. In this paper we extend level graphs to cyclic level graphs. Such graphs occur as repeating processes in cyclic scheduling, visual data mining,
life sciences, and VLSI. We provide a complete study of strongly connected cyclic level graphs. In particular, we present a linear time algorithm for the planarity testing and embedding problem, and
we characterize forbidden subgraphs. Our results generalize earlier work on level graphs.
- IN PROC. 14TH INTERN. SYMP. ON GRAPH DRAWING, VOLUME 4372 OF LNCS , 2006
"... We consider the problem of simultaneous embedding of planar graphs. We demonstrate how to simultaneously embed a path and an n-level planar graph and how to use radial embeddings for curvilinear
simultaneous embeddings of a path and an outerplanar graph. We also show how to use star-shaped levels to ..."
Cited by 5 (3 self)
We consider the problem of simultaneous embedding of planar graphs. We demonstrate how to simultaneously embed a path and an n-level planar graph and how to use radial embeddings for curvilinear
simultaneous embeddings of a path and an outerplanar graph. We also show how to use star-shaped levels to find 2-bends per path edge simultaneous embeddings of a path and an outerplanar graph. All
embedding algorithms run in O(n) time.
, 2006
"... Abstract. We present the set of planar graphs that always have a simultaneous geometric embedding with a strictly monotonic path on the same set of n vertices, for any of the n! possible
mappings. These graphs are equivalent to the set of unlabeled level planar (ULP) graphs that are level planar ove ..."
Cited by 5 (2 self)
Abstract. We present the set of planar graphs that always have a simultaneous geometric embedding with a strictly monotonic path on the same set of n vertices, for any of the n! possible mappings.
These graphs are equivalent to the set of unlabeled level planar (ULP) graphs that are level planar over all possible labelings. Our contributions are twofold. First, we provide linear time drawing
algorithms for ULP graphs. Second, we provide a complete characterization of ULP graphs by showing that any other graph must contain a subgraph homeomorphic to one of seven forbidden graphs. 1
- PROC. SOFTWARE SEMINAR: THEORY AND PRACTICE OF INFORMATICS, SOFSEM 2004 , 2004
"... A track graph is a graph with its vertex set partitioned into horizontal levels. It is track planar if there are permutations of the vertices on each level such that all edges can be drawn as
weak monotone curves without crossings. The novelty and generalisation over level planar graphs is that ..."
Cited by 4 (3 self)
A track graph is a graph with its vertex set partitioned into horizontal levels. It is track planar if there are permutations of the vertices on each level such that all edges can be drawn as weak
monotone curves without crossings. The novelty and generalisation over level planar graphs is that horizontal edges connecting consecutive vertices on the same level are allowed. We show that track
planarity can be reduced to level planarity in linear time. Hence, there are linear-time algorithms for the track planarity test and for the computation of a track planar embedding.
calculus integration problem
Indefinite integral of e^(7x)(sin(4x))dx. Can't seem to get the substitution/integration by parts to work for me. Thanks in advance!
Use integration by parts twice and treat the integral as the unknown. Then you can solve for the integral as if it's an algebra problem.
Integration by parts should work, but sometimes one of the most powerful methods of integration is by taking derivatives. What we are trying to find is $I=\int (e^{7x}\sin(4x))\,dx$. If we find the derivative of $e^{7x}\sin(4x)$, we get $\frac{d}{dx}(e^{7x}\sin(4x))=4e^{7x}\cos(4x) + 7e^{7x}\sin(4x)$. In reverse, we get that $\int (4e^{7x}\cos(4x) + 7e^{7x}\sin(4x))\,dx=e^{7x}\sin(4x)$ and with a bit of
rearranging we find $7I= e^{7x}\sin(4x) - 4\int e^{7x}\cos(4x)\,dx$ With me so far? Now, if we were to divide both sides of the equation by 7, we'd have another expression for I (the function we are
integrating). But there's another integral... Let's use the same process to solve for this integral. Let's call this integral J... so $7I=e^{7x}\sin(4x)-4J$ if $J=\int e^{7x}\cos(4x)\,dx$. Taking the derivative of $e^{7x}\cos(4x)$, we get $\frac{d}{dx}(e^{7x}\cos(4x))=-4e^{7x}\sin(4x) + 7e^{7x}\cos(4x)$ or in reverse, that $\int (-4e^{7x}\sin(4x) + 7e^{7x}\cos(4x))\,dx = e^{7x}\cos(4x)$. With a bit of rearranging we find that... $J=\frac{1}{7}e^{7x}\cos(4x) + \frac{4}{7}\int e^{7x}\sin(4x)\,dx$ or $J=\frac{1}{7}e^{7x}\cos(4x) + \frac{4}{7}I$. Substituting J into our original equation gives... $7I= e^{7x}\sin
(4x) - 4[\frac{1}{7}e^{7x}\cos(4x) + \frac{4}{7}I]$ which, when expanded and manipulated, reduces to $\frac{65}{7}I=e^{7x}\sin(4x) - \frac{4}{7}e^{7x}\cos(4x)$ And finally, we can solve for I, which
is what we were finding... $I=\frac{7}{65}e^{7x}\sin(4x)-\frac{4}{65}e^{7x}\cos(4x) + C$ (the integration constant). Yes, it is a fair bit of writing, but is extremely powerful if all else fails...
Last edited by Prove It; September 9th 2008 at 08:38 PM. | {"url":"http://mathhelpforum.com/calculus/48411-calculus-integration-problem.html","timestamp":"2014-04-19T12:05:42Z","content_type":null,"content_length":"42919","record_id":"<urn:uuid:104c0ef1-4849-4437-ad1e-a1372393885c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00361-ip-10-147-4-33.ec2.internal.warc.gz"} |
Continuity of Functions Exercises
Continuity of Functions Exercises
Continuity at a Point via Formulas
It's good to have a feel for what continuity at a point looks like in pictures. However, sometimes we are asked about the continuity of a function for which we're given a formula, instead of a
Continuity on an Interval via Pictures
Remember, f is continuous on an interval if we can finger paint over f on that interval without lifting our drawing digit. Sample Problem: Look at the function f drawn below: The function f is c...
Continuity on an Interval via Formulas
When we are given problems asking whether a function f is continuous on a given interval, a good strategy is to assume it isn't. Try to find values of x where f might be discontinuous. If we're
Continuity on Closed and Half-Closed Intervals
When looking at continuity on an open interval, we only care about the function values within that interval. If we're looking at the continuity of a function on the open interval (a,b), we don't i...
The Informal Version
Have a graphing calculator ready. Sample ProblemGraph the function f(x) = 2x. This is a polynomial, which is continuous at every real number. In particular, it's continuous at x = 4, with...
The Formal Version
When we graph continuous functions, three things happen:We are given a continuous function f and a value c. We decide how far we wanted to let f(x) move away from f(c).We restricte the values of x...
Boundedness Theorem: A continuous function on a closed interval [a,b] must be bounded on that interval.There are two numbers - a lower bound M and an upper bound N - such that every value of f on
Extreme Value Theorem
Maximum and Minimum ValuesThe maximum value of a function on an interval is the largest value the function takes on within that interval. Similarly, the minimum value of a function on an inter...
Intermediate Value Theorem
Intermediate Value Theorem (IVT): Let f be continuous on a closed interval [a,b]. Pick a y-value M with f(a) | {"url":"http://www.shmoop.com/continuity-function/exercises.html","timestamp":"2014-04-16T20:21:33Z","content_type":null,"content_length":"27547","record_id":"<urn:uuid:22394ee7-298d-46f8-9d9c-c5a9ed6e3692>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00612-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: February 2005 [00266]
Re: Derivative of g(z) = Arg(z) and f(z) = Re(z) + Im(z)?
• To: mathgroup at smc.vnet.net
• Subject: [mg54120] Re: Derivative of g(z) = Arg(z) and f(z) = Re(z) + Im(z)?
• From: "Valeri Astanoff" <astanoff at yahoo.fr>
• Date: Fri, 11 Feb 2005 03:33:32 -0500 (EST)
• References: <cuf4rq$gk0$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Marie wrote:
> How do I solve a derivative of a complex function Arg(z) or Re(z) +
> Im(z) by definition? (f(z) - f(w))/(z-w) -> f'(w) as z->w...
Let f(z) = P + I Q be the value of a complex function.
I remember that if P is constant and Q variable
(or P variable and Q constant)
then f can't be complex differentiable because the
Cauchy-Riemann conditions can't be met.
Hence Arg, Re, Im, Abs are not complex differentiable.
Curiously, Mathematica [5.0] doesn't refuse to evaluate Arg' :
In[1]:= Arg'[1 + 2 I] // N
Out[1]= -0.4
f(x)≡0 (mod 15) Find all roots mod 15
February 1st 2010, 06:43 PM
f(x)≡0 (mod 15) Find all roots mod 15
f(x) = x^2 + 2x +1
f(x)≡0 (mod 15)
Find ALL roots mod 15.
Consider f(x)≡0 (mod3).
mod 3: check 0,1,2. Only 2 solves it.
Consider f(x)≡0 (mod5).
mod 5: check 0,1,2,3,4. Only 4 solves it.
So x≡2(mod 3) and x≡4(mod 5). Looking at x≡4(mod 5), we take x=4,9,14 and check each of these in x≡2(mod 3). Only 14 works.
Thus, x≡2(mod 3) and x≡4(mod 5) => x≡14 (mod 15)
and the final answer is x≡14 (mod 15).
My questions:
1) The above says that x≡2(mod 3) and x≡4(mod 5) => x≡14 (mod 15). But to show that x≡14 (mod 15) is a solution to the system x≡2(mod 3),x≡4(mod 5), don't we have to show the CONVERSE? i.e. x≡14
(mod 15) => x≡2(mod 3) and x≡4(mod 5)? Do we need to show both directions(iff)?
2) So x≡14 (mod 15) solves the system f(x)≡0 (mod3),f(x)≡0 (mod5). Why does this imply that x≡14 (mod 15) also solves f(x)≡0 (mod 15)?
3) Why are there no other roots mod 15? i.e. why can we be sure that x≡14 (mod 15) is the only solution to f(x)≡0 (mod 15)?
Any help is much appreciated!
[note: also under discussion in Math Links forum]
February 1st 2010, 07:09 PM
f(x) = x^2 + 2x +1
f(x)≡0 (mod 15)
Find ALL roots mod 15.
Consider f(x)≡0 (mod3).
mod 3: check 0,1,2. Only 2 solves it.
Consider f(x)≡0 (mod5).
mod 5: check 0,1,2,3,4. Only 4 solves it.
So x≡2(mod 3) and x≡4(mod 5). Looking at x≡4(mod 5), we take x=4,9,14 and check each of these in x≡2(mod 3). Only 14 works.
Thus, x≡2(mod 3) and x≡4(mod 5) => x≡14 (mod 15)
and the final answer is x≡14 (mod 15).
My questions:
1) The above says that x≡2(mod 3) and x≡4(mod 5) => x≡14 (mod 15). But to show that x≡14 (mod 15) is a solution to the system x≡2(mod 3),x≡4(mod 5), don't we have to show the CONVERSE? i.e. x≡14
(mod 15) => x≡2(mod 3) and x≡4(mod 5)? Do we need to show both directions(iff)?
2) So x≡14 (mod 15) solves the system f(x)≡0 (mod3),f(x)≡0 (mod5). Why does this imply that x≡14 (mod 15) also solves f(x)≡0 (mod 15)?
3) Why are there no other roots mod 15? i.e. why can we be sure that x≡14 (mod 15) is the only solution to f(x)≡0 (mod 15)?
Any help is much appreciated!
Much simpler, imo: $0=x^2+2x+1=(x+1)^2\Longrightarrow$ either there's some nilpotent element in $\mathbb{Z}_{15}$ or else $x+1=0\!\!\!\pmod {15}\Longleftrightarrow x=-1=14\!\!\!\pmod {15}$.
But we know that an element $a\in\mathbb{Z}_n$ is nilpotent iff every prime that divides n divides a, so in $\mathbb{Z}_{15}$ there are no nonzero nilpotent element and thus the only solution is
$14\!\!\!\pmod {15}$.
I think this answers all your questions.
February 1st 2010, 07:20 PM
Thanks, but your method is beyond my background of number theory that I have so far. I haven't learnt nilpotent and Z15, etc. Your method is likely better, but for me it's best to use the topics
that I've learnt up to this point.
So could somebody kindly explain the solution provided in the example (most importantly, the answer to question 2)?
Thank you!
February 2nd 2010, 05:44 PM
I believe questions of the type "Find ALL roots mod 15" are about "if and only if" statements. We can't just show one direction.
I think for the above solution, they actually meant iff for the last step, i.e. x≡2(mod 3) and x≡4(mod 5) <=> x≡14 (mod 15), because they showed that x≡14 (mod 15) works, i.e. is a solution, and
also nothing else works (4 and 9 does not work), i.e. it's the ONLY solution, so we have iff.
So I think we can say that:
f(x)≡0 (mod3) and f(x)≡0 (mod5)
<=> x≡2(mod 3) and x≡4(mod 5)
<=> x≡14 (mod 15)
So x≡14 (mod 15) is the solution to f(x)≡0 (mod3),f(x)≡0 (mod5).
But why is x≡14 (mod 15) the solution to f(x)≡0 (mod 15)? I still don't understand this...
Thanks for any help!
February 2nd 2010, 06:41 PM
I believe questions of the type "Find ALL roots mod 15" are about "if and only if" statements. We can't just show one direction.
I think for the above solution, they actually meant iff for the last step, i.e. x≡2(mod 3) and x≡4(mod 5) <=> x≡14 (mod 15), because they showed that x≡14 (mod 15) works, i.e. is a solution, and
also nothing else works (4 and 9 does not work), i.e. it's the ONLY solution, so we have iff.
So I think we can say that:
f(x)≡0 (mod3) and f(x)≡0 (mod5)
<=> x≡2(mod 3) and x≡4(mod 5)
<=> x≡14 (mod 15)
So x≡14 (mod 15) is the solution to f(x)≡0 (mod3),f(x)≡0 (mod5).
But why is x≡14 (mod 15) the solution to f(x)≡0 (mod 15)? I still don't understand this...
Thanks for any help!
Because $f(x)=x^2+2x+1=(x+1)^2\Longrightarrow f(14)=(14+1)^2=15^2=0\!\!\!\pmod {15}$
February 2nd 2010, 09:11 PM
Is it true in general that f(x)≡0 (mod 3) AND f(x)≡0 (mod 5) IMPLIES f(x)≡0 (mod 15)?
Is it true in general that f(x)≡0 (mod m) AND f(x)≡0 (mod n) IMPLIES f(x)≡0 (mod mn)? Why or why not?
Thanks for explaining!
February 3rd 2010, 01:31 AM
Is it true in general that f(x)≡0 (mod 3) AND f(x)≡0 (mod 5) IMPLIES f(x)≡0 (mod 15)?
Is it true in general that f(x)≡0 (mod m) AND f(x)≡0 (mod n) IMPLIES f(x)≡0 (mod mn)? Why or why not?
$6= 0\!\!\!\pmod 6\,\,\,and\,\,\,6= 0\!\!\!\pmod 3$ but $6\neq 0\!\!\!\pmod {6\cdot 3=18}$ , so the proposition isn't true in that generality.
What is true is that $a=0\!\!\!\pmod m\,,\,\,a=0\!\!\!\pmod n\,,\,\,and\,\,\,(n,m)=1\Longrightarrow a=0\!\!\!\pmod{mn}$ , and in general, without requiring $(n,m)=1$ , we have $a=0\!\!\!\pmod {\operatorname{lcm}(m,n)}$ .
Thanks for explaining!
February 3rd 2010, 07:16 PM
What if f(x)≡0 (mod m) AND f(x)≡0 (mod n) AND f(x)≡0 (mod q)?
Under what conditions will this guarantee that f(x)≡0 (mod mnq)? Do we need (m,n)=(m,q)=(n,q)=1? or just (m,n,q)=1? and why? | {"url":"http://mathhelpforum.com/number-theory/126693-f-x-0-mod-15-find-all-roots-mod-15-a-print.html","timestamp":"2014-04-18T14:17:08Z","content_type":null,"content_length":"16748","record_id":"<urn:uuid:ae115954-a8c8-45c5-8acd-d804e3420e33>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00592-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coprime automorphism-invariant subgroup of group of prime power order
This article describes a property that arises as the conjunction of a subgroup property: coprime automorphism-invariant subgroup with a group property imposed on the ambient group: group of prime
power orderView a complete list of such conjunctions | View a complete list of conjunctions where the group property is imposed on the subgroup
A subgroup of a group of prime power order (i.e., a finite p-group for some prime number p) is termed a coprime automorphism-invariant subgroup if it is invariant under all the automorphisms of the whole group whose order is coprime to p.
21 (2008)
Robert Luketic (Director)
As I understand it, the book by Ben Mezrich which inspired this film is non-fiction. It told the true story (though using pseudonyms) of a team comprised of an MIT math professor and six MIT
students... (more)
7 Steps to Midnight (1993)
Richard Matheson
In this unnerving, `Kafka-esque' suspense novel by well known horror author Richard Matheson, a government mathematician sees reality collapse around him as his life is turned into a surrealistic
version... (more)
Advanced Calculus of Murder (1988)
Erik Rosenthal
In the second book in the Dan Brodsky series (following Calculus of Murder by the same author), Brodsky is invited to COTCA (the Conference on Operator Theory and C*-Algebras at Oxford University).
While... (more)
The Adventure of the Russian Grave (1995)
William Barton / Michael Capobianco
Even in the old Arthur Conan Doyle stories, Sherlock Holmes' arch-nemesis was a mathematician. Moriarty was said to be a math professor who (when he wasn't being evil) worked on the binomial
theorem and... (more)
Aurora in Four Voices (1998)
Catherine Asaro
Jato is trapped in Nightingale, a city in permanent darkness, inhabited by mathematical artists who mostly ignore him. Soz arrives to repair her ship, meets Jato, and finds... (more)
Bad Boy Brawley Brown (2002)
Walter Mosley
This is the sixth book in the highly praised Easy Rawlins mysteries that began with DEVIL IN A BLUE DRESS. They are set in post-WWII black Los Angeles, and unfold over the years. (The... (more)
Beyond the Limit: The Dream of Sofya Kovalevskaya (2002)
Joan Spicci
This book is a novelized account of the life of Sofia Kovalevskaya (aka Sonia Kovalevskey and infinitely many alternative spellings), famous today as the first woman to receive a Ph.D. in
mathematics.... (more)
Bloom (1998)
Wil McCarthy
In between blooms of a deadly manmade fungus, the humans discuss cellular automata (especially Conway's Game of Life) and complexity theory. Thanks to Rob Milson for suggesting this book. (more)
Book of Knut: a novel by Knut Knudson (2012)
Halvor Aakhus
Halvor Aakhus, who has an undergraduate degree in math and an MFA in writing, wrote this unusual work of fiction that takes the form of a novel by an apparently dead author named Knut Knudson which
has... (more)
Brain Wave (1954)
Poul Anderson
This debut novel from SF superstar Anderson explains that the human intelligence is far more powerful than we have thus far seen. In fact, once we escape from the effects of a force field that is
limiting... (more)
Calculus (Newton's Whores) (2004)
Carl Djerassi
The credit for the invention of calculus has long been contested, being claimed by both Isaac Newton and Gottfried Leibniz. A committee established by the Royal Society in 1712 concluded that
Newton was... (more)
Calculus and Pizza (2003)
Clifford Pickover
A pizza chef teaches calculus to his restaurant patrons. Romance and hilarity ensue. (more)
Calculus of Murder (1986)
Erik Rosenthal
"The hero is a part-time instructor and researcher at Berkeley and moonlights as a PI. He solves his cases using calculus. The narrative is excellent, humorous, and believable." Actually, I just...
Cálculo Infinitesimal de una variable (1994)
Juan de Burgos Román
Apparently, this Spanish calculus textbook begins each chapter with a "tale". I have not yet had a chance to see the book myself, and so I cannot say for certain whether these really are "fiction"
or... (more)
Cálculo Infinitesimal de varias variables (1995)
Juan de Burgos Román
Apparently, this Spanish calculus textbook begins each chapter with a "tale". I have not yet had a chance to see the book myself, and so I cannot say for certain whether these really are "fiction"
or... (more)
Continuity (1999)
Buzz Mauro
This short story cleverly uses the δ-ε definition of continuity of a function to discuss the changing self-esteem of a character over time. After briefly recalling the rigorous definition, it
introduces... (more)
Convergent Series (1979)
Larry Niven
According to the liner notes, Niven received an undergraduate degree in mathematics. Mostly the degree has only apparently inspired his titles (note also the book called "The Integral Trees")
without noticeably... (more)
Coyote Moon (2003)
John A. Miller
Well, this book is hard to describe! It's certainly different and not easily categorizable. It is a novel that addresses the question "What if a young, nerdy, MIT mathematics professor died of
cancer... (more)
The Day the Earth Stood Still (1951)
Robert Wise (director) / Harry Bates (story) / Edmund H. North
One must wonder how aliens might communicate with humans when and if they arrive on Earth. In the 1951 film The Day the Earth Stood Still, the extraterrestrial Klaatu (Michael Rennie) introduces
himself... (more)
D'Alembert's Principle: A Novel in Three Panels (2000)
Andrew Crumey
A fictionalized presentation of the life (and love) of Jean le Rond D'Alembert (1717-1783), best known -- to me at least -- as the first to study and solve the famous linear wave equation u_tt = c^2 u_xx. See the online book review at MAA Online. (more)
El matemático (1988)
Arturo Azuela
It is a kind of bildungsroman narrated by a sexagenarian mathematician who makes a mathematical discovery on the verge of the year 2000. Of course, there is the detail of considering the year 2000
the... (more)
An Elegant Solution (2013)
Paul Robertson
A fictionalized account of the life of Leonhard Euler, focusing on his relationship with the Bernoullis and told from the perspective of Christian theology. The novel also takes on aspects of a
murder... (more)
Fear of Math (1985)
Peter Cameron
A feather-touch story about a young woman who comes to New York to do an MBA - and has to pass a Calculus course, a pre-requisite for an MBA. A brief description of how utterly lost she is after
her... (more)
Forever Changes (2008)
Brendan Halpin
A very somber novel written for young adults about a mathematically talented teenager with cystic fibrosis. Her math teacher helps comfort her by making an analogy between the important role of the
infinitesimals in calculus and the importance of even a short life. (more)
G103 (2006)
Oliver Tearne (director)
This short film "shows a surreal day in the life of a mathematics undergraduate" taking the math course G103 at the University of Warwick. In fact, the Website makes it sound as if it is an
informational... (more)
Gallactic Alliance - Translight! (2009)
Doug Farren
A human scientist invents a new branch of mathematics, "continuum calculus", as the basis for a stardrive. At one point, he compares his mathematical constructions with those of an alien species
who have... (more)
The Gangs of New Math (2005)
Robert W. Vallin
This humorous short story about a brawl in a pub of mathematicians appeared in the November 2005 issue of Math Horizons magazine. There is quite a bit of "mathematical name-dropping" in the form of
quick... (more)
The Genius (1901)
Nikolai Georgievich Garin-Mikhailovskii
The Russian Engineer N.G. Mikhailovskii (1852-1906) was also an accomplished author using the pseudonym "N.G. Garin". His short story, "The Genius", tells about a Jewish man who fills his
notebooks with... (more)
Georgia on My Mind (1995)
Charles Sheffield
The story has to do with Babbage's Analytical Engine and a remote region of Antarctica (the "Georgia" of the title). The mathematics bit, aside from Babbage, consists of a nonlinear optimization...
The God Patent (2009)
Ransom Stephens
After his life falls apart, an engineer tries to revive a collaboration with the fundamentalist Christian with whom he once wrote two patents based on the Bible. While he viewed these patents for
what... (more)
The Hollow Man (1993)
Dan Simmons
A psychic mathematician is driven to the edge of insanity as his life partner approaches death. The mathematician's research is described explicitly -- as are some of the horrific events that
befall... (more)
Infinite Jest (1996)
David Foster Wallace
The twenty page passage on Eschaton, with the Mean Value Theorem footnote, is possibly the best use of mathematics in fiction I've ever seen. This book has some of the most interesting and
complete... (more)
Infinitely Near (1999)
Anthony Cristiano
An 8 minute long, black and white film with no dialogue showing intertwined scenes of a student having trouble with the concept of a limit in his calculus class and other scenes from his life. The
director... (more)
The Integral: A Horror Story (2009)
Colin Adams
This story, which he claims is an attempt to emulate Stephen King, is different from many of Adams' others. This may explain why it was published for the first time in his 2009 collection Riot at
the... (more)
The Lady's Code (2006)
Samantha Saxon
The third in a series of romance novels about intelligent, confident women, The Lady's Code features Lady Juliet Pervell, who has ruined her reputation in social circles but earned an honorary
degree in... (more)
Leap (2004)
Lauren Gunderson
This play explores the inspiration for Isaac Newton's amazing discoveries in 1664, personifying it in the form of two young girls whose playful interaction leads to the results we remember Newton
for today.... (more)
Let Newton Be! (2011)
Craig Baxter
The three actors in this play portray Isaac Newton at three different stages of his life, as well as occasionally representing other people. Interestingly, the three Newtons interact with each
other,... (more)
Letters to a Young Mathematician (2006)
Ian Stewart
I listed this one here before I had a chance to read it and am now wondering whether it should be counted as fiction at all. This is an excellent book which provides a lot of useful information
about... (more)
The Limit of Delta Y Over Delta X (1994)
Richard Cumyn
Here is a calculus example from a book with a title that can not be more mathematical. I printed this one in a calculus book that I wrote for my business/economics calculus class. I also read it
out... (more)
Little Zero the Seafarer [Captain One's frigate] (1968)
Vladimir Levshin
[This Russian children's novel] is about the titular character (who appears in the other books [by Levshin]), sailing from the A bay through arithmetical, algebraical and geometrical seas,
learning... (more)
The Mask of Zeus (1992)
Desmond Cory
Math is discussed a lot in this "Professor Dobie Mystery" novel because both the `detective' (Dobie) and the victim (his former Ph.D. student) are mathematicians. Of course, the math doesn't have
much... (more)
Math Girls (2007)
Hiroshi Yuki
Three high school friends work through some difficult mathematical ideas in this book, recently translated into English from the Japanese original. The author is apparently well known in Japan for
his... (more)
Math Takes a Holiday (2001)
Paul Di Filippo
Saint Hubert and Saint Barbara, the two patron saints of mathematics, pay a visit to a devout Catholic mathematics professor who has been praying for a mathematical miracle to silence his
mockers.... (more)
Maths on a Plane (2008)
Phil Trinh
This story, about a student flirting with the attractive woman in the seat next to him on a plane, won the student category of the 2008 New Writers Award from Cambridge University's ``Plus+
Magazine''.... (more)
The Maxwell Equations (1969)
Anatoly Dnieprov
The math in this story seems very real, though the specifics of it are inconsequential to the plot. A mathematical physicist in an isolated city needs help finding a solution to a linearized
version... (more)
Maxwell's Equations (2005)
Alex Kasman
James Clerk Maxwell was the 19th century theoretician who discovered electro-magnetic waves. He is often described as a "physicist", but I would argue that he was a mathematician. Certainly some of
his... (more)
Mean Girls (2004)
Tina Fey (screenplay) /Mark S. Waters (director)
In this movie about teenage girls -- written by Tina Fey (Saturday Night Live, 30 Rock) and inspired by the non-fiction book Queen Bees and Wannabes -- a previously home schooled student (played by
Lindsay... (more)
The Mirror Has Two Faces (1996)
Barbra Streisand (director)
Love story with Jeff Bridges and Barbra Streisand as math and English professors (respectively) at Columbia University. We get a detailed description of the Twin Prime Conjecture (concerning the
number... (more)
Morte di un matematico napoletano (1992)
Mario Martone (director)
"This movie describes the last day in [the] life of a famous Italian mathematician: Renato Caccioppoli. He was a fascinating and discussed person in Naples' political and cultural life. [A]
member... (more)
Murder, She Conjectured (2005)
Alex Kasman
A police psychologist attending a conference in Cambridge, England is pulled into an unsolved murder mystery by her mathematician boyfriend. An important theme of the story is the oppressive sexism
that... (more)
The Music of the Spheres (2001)
Elizabeth Redfern
A highly praised (a la Caleb Carr) historical thriller set in Europe in 1795, involving lots of astronomy. This includes Laplace musing over his theorem that gravitational perturbations are
bounded, and his wondering if a similar theorem applies to history. (more)
Newton's Hooke (2004)
David Pinner
A play about Isaac Newton and Robert Hooke which presents "the dark side" of Newton. Emphasis is put on his egotism (not only does he think that he is incomparably brilliant, but he also seems to
think... (more)
The Number Devil (Der Zahlenteufel) (1997)
Hans Magnus Enzensberger
"The title may be translated as The Counting Devil, or maybe The Number Devil, and it has a subtitle that translates to 'a pillowbook for everyone who is afraid of math'. Enzensberger is a
respected... (more)
Old Fillikin (1982)
Joan Aiken
A farm boy who hates his math class seemingly calls upon his grandmother's "familiar" to get revenge on his teacher. This reads like an old fashioned ghost story, but it is the kind where you can
imagine... (more)
On the Nature of Human Romantic Interaction (2003)
Karl Iagnemma
The title of the story was the title of a chapter in the Ph.D. thesis that Joseph, the main character, was working on...but never finished. Instead, he wound up living with his advisor's daughter,
working... (more)
On the Quantum Theoretic Implications of Newton's Alchemy (2007)
Alex Kasman
A postdoc at the mysterious "Institute for Mathematical Analysis and Quantum Chemistry" is surprised to learn that his work on Riemann-Hilbert Problems is being used as part of his employer's crazy
alchemy... (more)
The Parrot's Theorem (2000)
Denis Guedj
This is an ambitious novel, a magical fantasy about a talking parrot bought at a flea market in France who, with the help of the personal library of a reclusive mathematical genius, teaches some
children... (more)
Professor and Colonel (1987)
Ruth Berman
In this unusual story, we get to see another side to Sherlock Holmes' arch enemy, the brilliant but evil mathematician Professor Moriarty. Here, rather than perpetrating a crime, Moriarty is merely
visiting with his brother, discussing the significance of his research into asteroid dynamics. (See also Asimov's take on this same subject.) (more)
Quicksilver: The Baroque Cycle Volume 1 (2003)
Neal Stephenson
This long novel from the author of Cryptonomicon does for 17th Century mathematics what that earlier novel did for the 20th century. Namely, it deifies some great historical mathematicians (this
time... (more)
The Rose Acacia (1995)
Ralph P. Boas, Jr.
"A computer makes a deal with the devil, with the usual escape clause: if it can ask a question the devil cannot answer, the computer gets the information for free. As the devil puts it, no logical
paradoxes,... (more)
Round the Moon (1870)
Jules Verne
This early science fiction novel about space travel (published originally in French, of course) contains two chapters with explicit (and very nice) mathematical content. In Chapter 4 (A Little
Algebra)... (more)
The Secret Integration (1964)
Thomas Pynchon
The title is a pun relating the operation from calculus (the definite integral of a function) to the controversial attempt to solve many of the problems of race relations in America (the
integration... (more)
The Shiloh Project (1993)
David R. Beaucage
This is a Christian science fiction novel with mathematical undertones written by an author with a doctorate in mathematics. In it, a Jewish math teacher falsely accused of sexually abusing a
student... (more)
Signal to Noise (1999)
Eric S. Nylund
The protagonist in this science fiction novel, Jack Potter, is a tenure track math professor in a future where San Francisco has sunk under the ocean, all non-academic employment in the United
States... (more)
Silence Please (1954)
Arthur C. Clarke
In this "White Hart" story, Purvis tells about an experimental physicist who invents a highly successful antinoise generator. The Fourier analysis underpinning of antinoise is explicitly ... (more)
Singleton (2002)
Greg Egan
This story involves a physicist and a mathematician who have a child -- well, sort of -- that they have specially designed to remain in a "classical" state (as opposed to a quantum superposition of
states)... (more)
Sophie's Diary (2004)
Dora Musielak
Sophie Germain famously studied mathematics at night by candlelight despite her parents' insistence that she give up this unfeminine discipline. She then went on to become one of the great
mathematician's... (more)
Sorority House (1956)
Jordan Park (Cyril M. Kornbluth and Frederik Pohl)
Sorority House is a lesbian pulp novel written in 1956 by Cyril M. Kornbluth (1923-1958) and Frederik Pohl (1919- ) under the pen name "Jordan Park". The main character is a mentally unstable
young... (more)
The Spacetime Pool (2008)
Catherine Asaro
Janelle, recently graduated from MIT with a degree in math, is pulled through the "branch cut" between two universes to an alternate Earth where two sword wielding brothers rule half the world.
There,... (more)
Spherical Harmonic (2001)
Catherine Asaro
As a child, Dyhianna Selei created a transformation, just a mathematical construct, mapping the real world into an abstract space of "thoughts" (whatever that means) spanned by an infinite set of
spherical... (more)
Stand and Deliver (1987)
Ramon Menendez
Edward James Olmos plays Jaime Escalante, "a real-life math teacher in East L.A. This is really unique. The hero's heroism consists in teaching mathematics! Obviously, I've gotta love this one.
So... (more)
Story of Your Life (1998)
Ted Chiang
What sort of mathematics would Vonnegut's Tralfamadoreans like to do? Or, alternatively, what sort of worldview would a sentient species have if their idea of simple mathematics was the calculus
of... (more)
Strange Attractors (1993)
Rebecca Goldstein
"Strange attractors: Collection of short stories, some of which have mathematical content. Two stories (the geometry of soap bubbles and impossible love and strange attractors) figure the same
main... (more)
Those Who Can, Do (1965)
Bob Kurosaka
In this short-short classic, a mathematics professor ends the first day of a Differential Equations class asking for questions. One student is irksome, even peculiar, in his wish to know what
practical... (more)
The Three Body Problem (2004)
Catherine Shaw
A cleverly titled novel that uses a historical mathematical contest and several characters based on real mathematicians as the basis for a murder mystery. Of special interest is the novel's
presentation... (more)
Torn Curtain (1966)
Alfred Hitchcock (Director)
Professor Armstrong (Paul Newman) pretends to defect to the other side of the iron curtain to learn of the secret "star wars"-like defense plan discovered by the brilliant (by his own account) Dr.
Lindt. Fiancee... (more)
Turbulence (2010)
Giles Foden
A British meteorologist is stationed in Scotland during World War II not to simply run a weather station (which is his cover), but to get to know the brilliant Wallace Ryman and learn to use his
mathematical... (more)
Turing (A Novel About Computation) (2003)
Christos Papadimitriou
The four vertices of an unlikely love "rectangle" are (a) a dying, maverick cryptographer, (b) a pregnant Internet wiz, (c) a romantic middle-aged Greek archaeologist and (d) Turing, an
artificially intelligent... (more)
Turnabout (1955)
Gordon R. Dickson
It's a story about a physics professor who is investigating a device that creates planar force-fields. In its first run, an explosion destroys the device and the physicist is trying to obtain an
answer... (more)
Two Trains Running (1990)
August Wilson
This play is set in Pittsburgh, 1969. An economically depressed area of the city is facing urban renewal, and the specter of eminent domain seizure hangs over the main character's future. The
other... (more)
Verrechnet (2009)
Carl Djerassi/Isabella Gregor
With the help of playwright/director Isabella Gregor, Djerassi updated his play Calculus (Newton's Whores). The plot still revolves around the question of priority on the invention of calculus, and
especially... (more)
War and Peace (1869)
Lev Tolstoy
Tolstoy's famous novel about...well, about war and peace (!) contains long passages explaining an analogy he makes between history and calculus. In particular, he argues that we should view history
as... (more)
The Years of Rice and Salt (2002)
Kim Stanley Robinson
This alternative history is based on the assumption that the Great Plague of the 1300s that decimated Europe's population was much worse, and that it in fact led to the extinction of almost all
of... (more)
Zilkowski's Theorem (2003)
Karl Iagnemma
This is a story of a love triangle with a definite mathematical twist. Henderson's roommate, Czogloz, steals away his girlfriend, Milla, when all three were math graduate students. Years later,
seeking... (more) | {"url":"http://kasmana.people.cofc.edu/MATHFICT/search.php?go=yes&topics=acd&orderby=title","timestamp":"2014-04-17T18:24:39Z","content_type":null,"content_length":"63905","record_id":"<urn:uuid:2a6c5845-8368-4de7-a7d2-865a0da8ebe0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00267-ip-10-147-4-33.ec2.internal.warc.gz"} |
Limitation of my c++ program? How would
Size limitations are particularly evident with integers. On some computers, integers can only be about 32,000, on others, they can go up to a little more than 2,000,000,000. What is the largest value
your compiler will accept for a signed integer? What is the most negative allowed value?
How exactly would I find this out?
I'm not very sure about this topic, but...
Well, try starting with a large number and see how large it could get. Try using something like 10.000.000.000, if it accepts it then go up to 13.000.000.000, if it doesn't, try something like
8.000.000.000. Then start searching for billion max. values, then million max. values, then with thousands, hundreds and finally get the exact maximum number you can use for your computer. Same with
negative numbers, i think.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/beginner/81623/","timestamp":"2014-04-17T22:12:43Z","content_type":null,"content_length":"9789","record_id":"<urn:uuid:71bba173-0f7e-4b33-99fa-2a58b9936a6a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
Motion Control - A shortcut to sizing motors
A shortcut to sizing motors
George A. Beauchemin
Corporate Product Manager
MicroMo Electronics Inc.
Clearwater, Fla.
Brushed and brushless dc motors are a good choice in power sensitive or efficiency craving applications.
A lot of times, a dc motor or generator data sheet will include the motor constant K[m], which is the torque sensitivity divided by the square root of the winding resistance. Most designers view this
intrinsic motor property as an esoteric figure of merit useful only to the motor designer, with no practical value in selecting dc motors.
But K[m] can help reduce the iterative process in selecting a dc motor because it is generally winding independent in a given case or frame size motor. Even in ironless dc motors, where K[m] depends
on the winding (due to variations in the copper fill factor) it remains a solid tool in the selection process.
Because K[m] does not address the losses in an electromechanical device in all circumstances, the minimum K[m] must be larger than calculated to address those losses. This method is also a good
reality check because it forces the user to compute both the input and output power.
The motor constant addresses the fundamental electromechanical nature of a motor or generator. Selecting a suitable winding is simple after determining an adequately powerful case or frame size.
The motor constant K[m] is defined as:
K[m] = K[T]/R^0.5
In a dc motor application with limited power availability and a known torque required at the motor shaft, the minimum K[m] will be set.
For a given motor application the minimum K[m] will be:
K[m] = T / (P[IN] - P[OUT])^0.5
The power into the motor will be positive. P[IN] is simply the product of the current and voltage, assuming no phase shift between them.
P[IN] = V X I
The power out of the motor will be positive, since it supplies mechanical power and is simply the product of the rotational speed and torque.
P[OUT] = ω X T
A motion-control example includes a gantry-type drive mechanism. It uses a 38-mm-diameter coreless dc motor. The decision is made to double the slew speed with no change in the amplifier. The
existing operating point is 33.9 mN-m (4.8 oz-in.) and 2,000 rpm (209.44 rad/sec) and the input power is 24 V at 1 A. Furthermore, no increase in motor size is acceptable.
The new operating point will be at twice the speed and the same torque. Acceleration time is a negligible percentage of the move time, and slew speed is the critical parameter.
Calculating the minimum K[m]
K[m] = T / (P[IN] - P[OUT])^0.5
K[m] = 33.9 X 10^-3 N-m / (24 V X 1A -
418.88 rad/sec X 33.9 X 10^-3 N-m) ^0.5
K[m] = 33.9 X 10^-3 N-m / (24 W - 14.2 W)^ 0.5
K[m] = 10.83 X 10^-3 N-m/√W
Account for the tolerances of the torque constant and winding resistance. For example, if the torque constant and the winding resistance have ±12% tolerances, K[m] worst case will be:
K[MWC] = 0.88 K[T]/√(R X 1.12) = 0.832 K[m]
or almost 17% below nominal values with a cold winding.
Winding heating will further reduce K[m] since copper resistivity rises almost 0.4%/°C. And to exacerbate the problem, the magnetic field will attenuate with rising temperatures. Depending on the
permanent-magnet material, this could be as much as 20% for a 100°C rise in temperature. The 20% attenuation for 100°C magnet temperature rise is for ferrite magnets. Neodymium-boron-iron has 11%,
and samarium cobalt about 4%.
Interestingly, for the same mechanical output power, if the target is 88% efficiency, then the minimum K[m] would go from 18.63 mN-m/√W to 24.06 mN-m/√W. That is equivalent to having the same winding
resistance but a 29% greater torque constant. The higher the efficiency desired, the higher the K[m] required.
If in the case of the motor application the maximum current available and the worst-case torque load is known, compute the lowest acceptable torque constant by using
K[T] = T/I
After finding a motor family with sufficient K[m], select a winding that has a torque constant that slightly exceeds the minimum. Then commence determining if the winding will, in all cases of
tolerances and application constraints, perform satisfactorily.
Clearly, choosing a motor or generator by first determining the minimum K[m] in power-sensitive motor and efficiency-challenging generator applications can speed the selection process. The next step
will then be to select a suitable winding and ensure that all application parameters and motor/generator limitations are acceptable, including winding-tolerance considerations.
Because of manufacturing tolerances, thermal effects, and internal losses, one should always choose a K[m] somewhat larger than the application requires. A certain amount of latitude is needed since
there aren't an infinite number of winding variations available from a practical point of view. The larger the Km, the more forgiving it is in satisfying a given application's requirements.
In general, practical efficiencies above 90% may be virtually unobtainable. Larger motors and generators have larger mechanical losses. This is due to bearing, windage, and electromechanical losses
like hysteresis and eddy currents. Brush-type motors also have losses from the mechanical commutation system. In the case of precious metal commutation, popular with coreless motors, losses can be
extremely small, less than the bearing losses.
Ironless dc motors and generators have virtually no hysteresis and eddy current losses in the brush variant of this design. In the brushless versions, these losses, although low, do exist. This is
because the magnet is usually rotating relative to the back iron of the magnetic circuit. This induces eddy current and hysteresis losses. However, there are brushless dc versions that have the
magnet and back iron moving in unison. In these cases, losses are usually low.
SYMBOL DEFINITION UNIT(S)
I Current to motor A
K[ER] Back emf constant *V-sec
K[m] Motor constant N-m/√W
K[MWC] Motor constant, worst case N-m/√W
K[T] Torque Sensitivity N-m/A
η Efficiency of electrical-to-mechanical or mechanical-to-electrical conversion %
P[IN] Input Power W
P[OUT] Output power W
R Cold-winding resistance specified Ω
T Load torque or driving torque N-m
ω Motor rotational speed Rad/sec
V Motor applied voltage V
*V-sec is equivalent to V/(rad/sec)
PARAMETERS MOTOR A MOTOR B MOTOR C MOTOR D
Motor constant (mN-m/√W) 14.44 19.45 17.48 30.33
Speed (rpm) 2,000 4,000 4,000 4,000
Torque (mN-m) 33.9 33.9 33.9 33.9
Voltage (V) 19.99 19.82 24.49 21.93
Current (A) 0.69 0.92 0.80 0.79
Efficiency 51.7% 77.7% 72.5% 81.5%
Power delivered (W) 7.1 14.2 14.2 14.2
The Natural Numbers
Cardinal numbers describe the size of a collection of objects; two such collections have the same (cardinal) number of objects if their members can be matched in a one-to-one correspondence. Ordinal
numbers refer to position relative to an ordering, as first, second, third, etc. The finite cardinal and ordinal numbers are called the natural numbers and are represented by the symbols 1, 2, 3, 4,
etc. Both types can be generalized to infinite collections, but in this case an essential distinction occurs that requires a different notation for the two types (see transfinite number).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Program in Applied and Computational Mathematics
Director
Peter Constantin (spring)
Acting Director
Philip J. Holmes (fall)
Departmental Representative
Paul D. Seymour
Executive Committee
René A. Carmona, Operations Research and Financial Engineering
Emily A. Carter, Mechanical and Aerospace Engineering
Peter Constantin, Mathematics
Weinan E, Mathematics
Philip J. Holmes, Mechanical and Aerospace Engineering
Yannis G. Kevrekidis, Chemical and Biological Engineering
Paul D. Seymour, Mathematics
Amit Singer, Mathematics
James M. Stone, Astrophysical Sciences
Jeroen Tromp, Geosciences
Sergio Verdú, Electrical Engineering
Associated Faculty
Yacine Aït-Sahalia, Economics
Michael Aizenman, Physics, Mathematics
William Bialek, Physics, Lewis-Sigler Institute for Integrative Genomics
David M. Blei, Computer Science
Carlos D. Brody, Molecular Biology, Princeton Neuroscience Institute
Adam Burrows, Astrophysical Sciences
Roberto Car, Chemistry
Moses S. Charikar, Computer Science
Bernard Chazelle, Computer Science
Patrick Cheridito, Operations Research and Financial Engineering
Mung Chiang, Electrical Engineering
Erhan Çınlar, Operations Research and Financial Engineering
Iain D. Couzin, Ecology and Environmental Biology
Bradley W. Dickinson, Electrical Engineering
David P. Dobkin, Computer Science
Jianqing Fan, Operations Research and Financial Engineering
Jason W. Fleischer, Electrical Engineering
Christodoulos A. Floudas, Chemical and Biological Engineering
Mikko P. Haataja, Mechanical and Aerospace Engineering
Gregory W. Hammett, Plasma Physics Lab, Astrophysical Sciences
Isaac M. Held, Geosciences, Atmospheric and Oceanic Sciences
Sergiu Klainerman, Mathematics
Naomi Ehrich Leonard, Mechanical and Aerospace Engineering
Simon A. Levin, Ecology and Evolutionary Biology
Elliott H. Lieb, Mathematics, Physics
Luigi Martinelli, Mechanical and Aerospace Engineering
William A. Massey, Operations Research and Financial Engineering
Jeremiah P. Ostriker, Astrophysical Sciences
H. Vincent Poor, Electrical Engineering
Warren B. Powell, Operations Research and Financial Engineering
Frans Pretorius, Physics
Jean-Hervé Prévost, Civil and Environmental Engineering
Herschel A. Rabitz, Chemistry
Peter J. Ramadge, Electrical Engineering
Jennifer L. Rexford, Computer Science
Clarence W. Rowley, Mechanical and Aerospace Engineering
Robert E. Schapire, Computer Science
José A. Scheinkman, Economics
Yakov G. Sinai, Mathematics
Jaswinder P. Singh, Computer Science
K. Ronnie Sircar, Operations Research and Financial Engineering
Howard Stone, Mechanical and Aerospace Engineering
John D. Storey, Molecular Biology, Lewis-Sigler Institute for Integrative Genomics
Sankaran Sundaresan, Chemical and Biological Engineering
Salvatore Torquato, Chemistry
Olga G. Troyanskaya, Computer Science, Lewis-Sigler Institute for Integrative Genomics
Geoffrey K. Vallis, Geosciences, Atmospheric and Oceanic Sciences
Robert J. Vanderbei, Operations Research and Financial Engineering
Applied Mathematics at Princeton. There has never been a better time to be a mathematician. The combination of mathematics and computer modeling has transformed science and engineering and is
changing the nature of research in the biological sciences. The requirements for the mathematics major are a minimum of eight upperclass courses in mathematics or applied mathematics, including three
basic courses on real analysis, complex analysis, and algebra. It is possible to design a course of undergraduate study aimed more strongly toward applications. Applied and computational mathematics/
mathematics faculty have developed core courses in applied mathematics and several courses where the emphasis is mathematical modeling. The latter is central to applied mathematics where it is not
only necessary to acquire mathematical techniques and skills, but also important to learn about the application domain.
The Undergraduate Certificate. The certificate is designed for students from engineering and from the physical, biological, and social sciences who are looking to broaden their mathematical and
computational skills. It is also an opportunity for mathematically oriented students to discover the challenges presented by applications from the natural sciences and engineering. Students
interested in the undergraduate certificate should contact the program's undergraduate representative in the spring semester of their sophomore year to discuss their interests and to lay out a plan for
their course selection and research component.
Program of Study
The requirements for the undergraduate certificate in applied and computational mathematics consist of:
1. A total of five courses normally 300 level or higher (requires letter grade; pass/D/fail not accepted), at least two of which are not included in the usual requirements for the candidate's major
concentration; and
2. Independent work consisting of a paper in one of the following formats: (a) a course project/computational laboratory (possibly in the context of a course offered by Program in Applied and
Computational Mathematics [PACM] faculty); (b) a project that you are working on with a professor; or (c) a summer research project that you are planning on undertaking. However, you may not use your
junior paper or senior thesis to satisfy the independent work for the certificate program. Your paper should have a significant applied mathematics component (subject to approval of the PACM
undergraduate representative). The independent work may not be used to satisfy the requirements of any other certificate. Students interested in the PACM certificate program must apply on or before
December 31 of their junior year.
Regardless of which option is selected in (2), students will also be required to participate during their junior and senior years in a not-for-credit colloquium offered by PACM. This will provide a
forum for presentation and discussion of independent work among all certificate students and will introduce them to other areas of applied mathematics.
The five required courses may vary widely from department to department in order to include a broad spectrum of science and engineering students throughout the University. These courses should fit
readily within the degree requirements of the respective departments of the engineering school or the economics, mathematics, physics, chemistry, molecular biology, and ecology and evolutionary
biology, or other relevant departments, but will require a particular emphasis in applied mathematics.
The five required courses must be distributed between the following two areas, with at least two from each area:
1. Mathematical foundations and techniques, including differential equations, real and complex analysis, discrete mathematics, probability, and statistics, typically offered by the Department of Mathematics;
2. Mathematical applications, including signal processing, control theory and optimization, and mathematical economics, typically offered by the economics, science, and engineering departments.
Specific choices must be approved by the PACM undergraduate representative.
The paper/course project/computational laboratories can be done as part of a course offered by applied and computational mathematics faculty or associated faculty on a wide range of topics of current
interest in applied mathematics. Such courses vary from year to year and are designated to satisfy automatically the independent work requirement. These courses should be taken in your junior year if
you intend to use them as a paper for your independent work. Four courses developed and staffed by applied and computational mathematics faculty and offered regularly are the following:
CBE 448/MAT 448 Introduction to Nonlinear Dynamics
MAE 541/APC 571 Applied Dynamical Systems
MAT 594/APC 584 Wavelets: Applications of Wavelets in Mathematics and Other Fields
MAT 595/APC 586 Topics in Discrete Mathematics: Discrete Math
Any other course that students might use to satisfy the independent work requirement must have prior approval from the applied and computational mathematics undergraduate representative. Students may
satisfy the independent work requirement outside of a course after consultation with and approval by the undergraduate representative. If the senior thesis option is selected, attempts will be made
to coordinate it with departmental requirements.
Certificate of Proficiency
Students who fulfill all requirements of the program will receive a certificate of proficiency in applied and computational mathematics upon graduation.
Relevant Advanced Courses. A list of representative advanced undergraduate and some graduate courses that meet the certificate requirements can be found on the program website. This list is primarily
illustrative and is by no means complete. Specific programs should be tailored by the program undergraduate representative in consultation with the student to meet individual and/or departmental needs.
APC 150 Introduction to Statistics QR
This course is an introduction to probability and statistical methods, and covers topics in probability, random variables, sampling, descriptive statistics, probability distributions, estimation and
hypothesis testing, and an introduction to the regression model. The course emphasizes practice, and students will learn how to perform data analysis using modern computational tools. L. Martinelli
APC 151 Introduction to Mathematical Modeling (also MAT 151) QR
This course is an introduction to mathematical modeling in physical and social sciences. Topics covered include modeling via simple first and second order differential equations, fitting experimental
data, optimization and an introduction to modeling probabilistic events. One substantial goal of the course is to learn MATLAB through homework, weekly group projects and an individual final project.
Equal emphasis will be put on practical implementations of the models through MATLAB scripts and on theoretical underpinnings of the models. K. Rogale Plazonic
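As a taste of the kind of first-order model the description mentions, here is a sketch in Python rather than the course's MATLAB (the parameter values are arbitrary, chosen only for illustration): logistic growth dP/dt = r·P·(1 - P/K), integrated with explicit Euler steps.

```python
# Explicit Euler integration of the logistic equation dP/dt = r*P*(1 - P/K).
def logistic_euler(p0, r, K, dt, steps):
    p = p0
    for _ in range(steps):
        p += dt * r * p * (1.0 - p / K)
    return p

# Starting far below the carrying capacity K, the population approaches K.
final = logistic_euler(p0=10.0, r=0.5, K=1000.0, dt=0.1, steps=400)
print(round(final, 1))   # very close to 1000
```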
APC 199 Math Alive (also MAT 199) QR
An exploration of some of the mathematical ideas behind important modern applications, from banking and computing to listening to music. Intended for students who have not had college-level
mathematics and are not planning to major in a mathematically based field. The course is organized in independent two-week modules focusing on particular applications, such as bar codes, CD-players,
population models, and space flight. The emphasis is on ideas and mathematical reasoning, not on sophisticated mathematical techniques. Two 90-minute classes, one computer laboratory. Staff
APC 307 Combinatorial Mathematics (see MAT 307)
APC 350 Introduction to Differential Equations (also CEE 350/MAT 350) QR
An introduction to differential equations, covering both applications and fundamental theory. Basic second-order differential equations (including the wave, heat, and Poisson equations); separation
of variables and solution by Fourier series and Fourier integrals; boundary value problems and Green's functions; variational methods; normal mode analysis and perturbation methods; nonlinear first
order (Hamilton-Jacobi) equations and method of characteristics; reaction-diffusion equations. Application of these equations and methods to finance and control. Prerequisites: MAT 102, 103, and 202.
Two 90-minute lectures. W. E
APC 351 Topics in Mathematical Modeling (see MAT 351)
APC 441 Computational Geophysics (see GEO 441) | {"url":"http://www.princeton.edu/ua/archive/departmentsprograms/index-dyn.xml?dept=apc&year=2011-12","timestamp":"2014-04-18T01:32:57Z","content_type":null,"content_length":"20131","record_id":"<urn:uuid:9f7ea976-ba76-45fc-b712-6f9fcce64d06>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
Projection & Rasterization [Archive] - OpenGL Discussion and Help Forums
Josejulio Ma
07-01-2010, 09:41 AM
I am trying to do a simple rasterizer (only shapes, no light, no colors) for an occlusion test. I give 3 points (in screen coordinates) to the rasterizer and it draws the polygon.
So, before that i need to process my 3d polygons' vertices in object coordinates and convert them to screen coordinates. I googled around and came up with an algorithm that is similar to
gluProject (which i found in the mesa source).
gluProject documentation: http://www.opengl.org/sdk/docs/man/xhtml/gluProject.xml
v = (object.x, object.y, object.z, 1.0) // components (x, y, z, w)
P = Projection Matrix
M = ModelView Matrix
Viewport = (0, 0, 512, 512)
v' = P x M x v
v' = v' / v'.w // We divide every component (x, y, z, w) by w; this step is not in the gluProject documentation.
screen.x = Viewport(0) + Viewport(2) * (v'.x + 1) * 0.5
screen.y = Viewport(1) + Viewport(3) * (v'.y + 1) * 0.5
screen.z = (v'.z + 1) * 0.5
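The steps above, written out as a runnable sketch (pure Python, row-major 4x4 matrices as nested lists; the names are mine, not gluProject's). Note the w <= 0 guard: after a perspective projection, a point behind the camera gets w <= 0, and dividing by a negative w mirrors x and y, so the point shows up on the opposite side of the screen. That is one common cause of the symptom described below, which is why clipping must happen before the divide.

```python
# Multiply a 4x4 matrix (nested lists, row-major) by a 4-vector.
def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project(obj, model, proj, viewport):
    v = mat_vec(proj, mat_vec(model, [obj[0], obj[1], obj[2], 1.0]))
    if v[3] <= 0.0:
        return None                               # behind the camera: clip
    x, y, z = (c / v[3] for c in v[:3])           # perspective divide by v'.w
    sx = viewport[0] + viewport[2] * (x + 1.0) * 0.5
    sy = viewport[1] + viewport[3] * (y + 1.0) * 0.5
    sz = (z + 1.0) * 0.5
    return sx, sy, sz

# Sanity check with identity matrices: the origin maps to the viewport center.
I = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(project((0.0, 0.0, 0.0), I, I, (0, 0, 512, 512)))   # (256.0, 256.0, 0.5)
```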
Well, after that, my problem is the following:
When i have a scenario like this ([ ] is a polygon, C is a camera, and > is the frustum):
[ ]
[ ]
[ ] > C
[ ]
Everything is ok, the screen values are inside and the triangles get drawn. But if i rotate around the up axis, the polygon starts to disappear and the points get outside the screen. For instance: a point
starts popping out on the right, tends to get positive, and grows bigger and bigger (no big problem), but then it gets to a point where it is on the LEFT side of the screen... I'm not sure what is really
happening or how to fix it. Any help would be really useful.
Some screens:
This is when i start moving and things are good.
This is when i rotate the camera and things get screwed up: http://dl.dropbox.com/u/7600660/bad.png
Notice that the points get to the left side.
Hope i made myself clear, thanks and sorry for the big rant. | {"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-171446.html","timestamp":"2014-04-20T10:57:41Z","content_type":null,"content_length":"5632","record_id":"<urn:uuid:5ee45329-51c5-4c9f-8d4f-210912c2ef45>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00472-ip-10-147-4-33.ec2.internal.warc.gz"} |
A fully abstract semantics for concurrent graph reduction (Extended Abstract)
Results 1 - 10 of 11
- In Proceedings, Tenth Annual IEEE Symposium on Logic in Computer Science , 1995
"... Abstraction for the Lazy λ-calculus Samson Abramsky Guy McCusker Department of Computing Imperial College of Science, Technology and Medicine 180 Queen's Gate London SW7 2BZ United Kingdom Abstract We
define a category of games G, and its extensional quotient E. A model of the lazy λ-calculus, a type-fre ..."
Cited by 134 (9 self)
Add to MetaCart
Abstraction for the Lazy λ-calculus Samson Abramsky Guy McCusker Department of Computing Imperial College of Science, Technology and Medicine 180 Queen's Gate London SW7 2BZ United Kingdom Abstract We define
a category of games G, and its extensional quotient E. A model of the lazy λ-calculus, a type-free functional language based on evaluation to weak head normal form, is given in G, yielding an
extensional model in E . This model is shown to be fully abstract with respect to applicative simulation. This is, so far as we know, the first purely semantic construction of a fully abstract model
for a reflexively-typed sequential language. 1 Introduction Full Abstraction is a key concept in programming language semantics [9, 12, 23, 26]. The ingredients are as follows. We are given a
language L, with an 'observational preorder' ≲ on terms in L such that P ≲ Q means that every observable property of P is also satisfied by Q; and a denotational model M⟦·⟧. The model M is then
said to be f...
, 1998
"... Machines Th. STREICHER Fachbereich 4 Mathematik, TU Darmstadt, Schlossgartenstr. 7, 64289 Darmstadt, streiche@mathematik.th-darmstadt.de B. REUS Institut fur Informatik,
Ludwig-Maximilians-Universitat, Oettingenstr. 67, D-80538 Munchen, reus@informatik.uni-muenchen.de Abstract One of the ..."
Cited by 52 (4 self)
Add to MetaCart
Machines Th. STREICHER Fachbereich 4 Mathematik, TU Darmstadt, Schlossgartenstr. 7, 64289 Darmstadt, streiche@mathematik.th-darmstadt.de B. REUS Institut fur Informatik,
Ludwig-Maximilians-Universitat, Oettingenstr. 67, D-80538 Munchen, reus@informatik.uni-muenchen.de Abstract One of the goals of this paper is to demonstrate that denotational semantics is useful for
operational issues like implementation of functional languages by abstract machines. This is exemplified in a tutorial way by studying the case of extensional untyped call-byname -calculus with
Felleisen's control operator C. We derive the transition rules for an abstract machine from a continuation semantics which appears as a generalization of the ::-translation known from logic. The
resulting abstract machine appears as an extension of Krivine's Machine implementing head reduction. Though the result, namely Krivine's Machine, is well known our method of deriving it from
continuation semantics is new and applicable to other languages (as e.g. call-by-value variants).
- Proc. POPL'99, ACM , 1999
"... Machine The semantics presented in this section is essentially Sestoft's "mark 1" abstract machine for laziness [Sestoft 1997]. In that paper, he proves his abstract machine
⟨Γ{x = M}, x, S⟩ → ⟨Γ, M, #x : S⟩ (Lookup) ⟨Γ, V, #x : S⟩ → ⟨Γ{x = V}, V, S⟩ (Update) ⟨Γ, ..."
Cited by 40 (7 self)
Add to MetaCart
Machine The semantics presented in this section is essentially Sestoft's "mark 1" abstract machine for laziness [Sestoft 1997] (Fig. 1):
⟨Γ{x = M}, x, S⟩ → ⟨Γ, M, #x : S⟩ (Lookup)
⟨Γ, V, #x : S⟩ → ⟨Γ{x = V}, V, S⟩ (Update)
⟨Γ, M x, S⟩ → ⟨Γ, M, x : S⟩ (Unwind)
⟨Γ, λx.M, y : S⟩ → ⟨Γ, M[y/x], S⟩ (Subst)
⟨Γ, case M of alts, S⟩ → ⟨Γ, M, alts : S⟩ (Case)
⟨Γ, c_j ~y, {c_i ~x_i → N_i} : S⟩ → ⟨Γ, N_j[~y/~x_j], S⟩ (Branch)
⟨Γ, let {~x = ~M} in N, S⟩ → ⟨Γ{~x = ~M}, N, S⟩, ~x ∉ dom(Γ, S) (Letrec)
Fig. 1. The abstract machine semantics for call-by-need.
In that paper, he proves his abstract machine semantics sound and complete with respect to Launchbury's natural semantics, and we will not repeat those proofs here. Transitions are over configurations
consisting of a heap containing bindings, the expression currently being evaluated, and a stack. The heap is a partial function from variables to terms, and denoted in an identical manner to a
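Transition rules in this style can be run directly. Below is my own small Python rendering of such a machine (not Sestoft's or the authors' code); it demonstrates the sharing that distinguishes call-by-need: a thunk bound in the heap is forced once and then updated, so later uses hit the memoized value.

```python
# Terms: ("var", x), ("lam", x, body), ("app", fun, argvar) with
# variable-only arguments, ("let", {x: term}, body).
# Stack entries: ("arg", x) pending arguments, ("upd", x) pending updates.

def subst(t, old, new):
    """Replace free occurrences of variable `old` by variable `new`."""
    tag = t[0]
    if tag == "var":
        return ("var", new) if t[1] == old else t
    if tag == "lam":
        return t if t[1] == old else ("lam", t[1], subst(t[2], old, new))
    if tag == "app":
        return ("app", subst(t[1], old, new), new if t[2] == old else t[2])
    binds, body = t[1], t[2]                      # ("let", binds, body)
    if old in binds:                              # rebound: nothing is free
        return t
    return ("let", {x: subst(m, old, new) for x, m in binds.items()},
            subst(body, old, new))

def run(term):
    heap, ctrl, stack, trace = {}, term, [], []
    while True:
        tag = ctrl[0]
        if tag == "var":                                        # (Lookup)
            trace.append("Lookup")
            name = ctrl[1]
            ctrl = heap.pop(name)          # blackhole the binding
            stack.append(("upd", name))    # remember to update it
        elif tag == "lam" and stack and stack[-1][0] == "upd":  # (Update)
            trace.append("Update")
            heap[stack.pop()[1]] = ctrl    # memoize the value
        elif tag == "app":                                      # (Unwind)
            trace.append("Unwind")
            stack.append(("arg", ctrl[2]))
            ctrl = ctrl[1]
        elif tag == "lam" and stack:                            # (Subst)
            trace.append("Subst")
            ctrl = subst(ctrl[2], ctrl[1], stack.pop()[1])
        elif tag == "let":                                      # (Letrec)
            trace.append("Letrec")
            heap.update(ctrl[1])           # assumes the names are fresh
            ctrl = ctrl[2]
        else:
            return ctrl, trace             # value with an empty stack

# let i = \a.a; x = i i in x x : the thunk bound to x is forced once,
# then both uses of x share the memoized value (only 2 Unwind steps).
prog = ("let", {"i": ("lam", "a", ("var", "a")),
                "x": ("app", ("var", "i"), "i")},
        ("app", ("var", "x"), "x"))
value, trace = run(prog)
print(value, trace.count("Unwind"))   # ('lam', 'a', ('var', 'a')) 2
```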
, 1995
"... We investigate functional computation as a special form of concurrent computation. ..."
, 1994
"... . This paper is about the relationship between the theory of monadic types and the practice of concurrent functional programming. We present a typed functional programming language CMML, with a
type system based on Moggi's monadic metalanguage, and concurrency based on Reppy's Concurrent ML. We pre ..."
Cited by 9 (3 self)
Add to MetaCart
. This paper is about the relationship between the theory of monadic types and the practice of concurrent functional programming. We present a typed functional programming language CMML, with a type
system based on Moggi's monadic metalanguage, and concurrency based on Reppy's Concurrent ML. We present an operational and denotational semantics for the language, and show that the denotational
semantics is fully abstract for may-testing. We show that a fragment of CML can be translated into CMML, and that the translation is correct up to weak bisimulation. Contents: 1 Introduction (p. 2); 2 Mathematical preliminaries (p. 4); 2.1 Categories and monads (p. 4); 2.2 Partial orders ...
- Proc. CONCUR '95, volume 962 of Lecture Notes in Computer Science , 1995
"... This paper introduces an operational semantics for call-by-need reduction in terms of Milner's π-calculus. The functional programming interest lies in the use of π-calculus as an abstract yet
realistic target language. The practical value of the encoding is demonstrated with an outline for a paralle ..."
Cited by 6 (1 self)
Add to MetaCart
This paper introduces an operational semantics for call-by-need reduction in terms of Milner's π-calculus. The functional programming interest lies in the use of π-calculus as an abstract yet
realistic target language. The practical value of the encoding is demonstrated with an outline for a parallel code generator. From a theoretical perspective, the π-calculus representation of
computational strategies with shared reductions is novel and solves a problem posed by Milner [13]. The compactness of the process calculus presentation makes it interesting as an alternative
definition of call-by-need. Correctness of the encoding is proved with respect to the call-by-need λ-calculus of Ariola et al. [3]. 1 Introduction Graph reduction of extended λ-calculi has become a
mature field of applied research. The efficiency of the implementations is due in great measure to a technique known as `sharing', whereby argument values are computed (at most) once and then
memoized for future reference. Both...
, 2007
"... Call-by-need lambda calculi with letrec provide a rewriting-based operational semantics for (lazy) call-by-name functional languages. These calculi model the sharing behavior during evaluation
more closely than let-based calculi that use a fixpoint combinator. In a previous paper we showed that the ..."
Cited by 3 (0 self)
Add to MetaCart
Call-by-need lambda calculi with letrec provide a rewriting-based operational semantics for (lazy) call-by-name functional languages. These calculi model the sharing behavior during evaluation more
closely than let-based calculi that use a fixpoint combinator. In a previous paper we showed that the copy-transformation is correct for the small calculus LRλ. In this paper we demonstrate that the
proof method based on a calculus on infinite trees for showing correctness of instantiation operations can be extended to the calculus LRCCλ with case and constructors, and show that copying at
compile-time can be done without restrictions. We also show that the call-by-need and call-by-name strategies are equivalent w.r.t. contextual equivalence. A consequence is correctness of all the
transformations like instantiation, inlining, specialization and common subexpression elimination in LRCCλ. We are confident that the method scales up for proving correctness of copy-related
transformations in non-deterministic lambda calculi if restricted to “deterministic” subterms.
, 1999
"... Indeterminism is typical for concurrent computation. If several concurrent actors compete for the same resource then at most one of them may succeed, whereby the choice of the successful actor
is indeterministic. As a consequence, the execution of a concurrent program may be nonconfluent. Even worse ..."
Cited by 2 (1 self)
Add to MetaCart
Indeterminism is typical for concurrent computation. If several concurrent actors compete for the same resource then at most one of them may succeed, whereby the choice of the successful actor is
indeterministic. As a consequence, the execution of a concurrent program may be nonconfluent. Even worse, most observables (termination, computational result, and time complexity) typically depend on
the scheduling of actors created during program execution. This property contrast concurrent programs from purely functional programs. A functional program is uniformly confluent in the sense that
all its possible executions coincide modulo reordering of execution steps. In this paper, we investigate concurrent programs that are uniformly confluent and their relation to eager and lazy
functional programs.
"... We present a purely syntactic theory of graph reduction for the canonical combinators S, K, and I, where graph vertices are represented with evaluation contexts and let expressions. We express
this first syntactic theory as a storeless reduction semantics of combinatory terms. We then factor out the ..."
Cited by 1 (0 self)
Add to MetaCart
We present a purely syntactic theory of graph reduction for the canonical combinators S, K, and I, where graph vertices are represented with evaluation contexts and let expressions. We express this
first syntactic theory as a storeless reduction semantics of combinatory terms. We then factor out the introduction of let expressions to denote as many graph vertices as possible upfront instead of
on demand. The factored terms can be interpreted as term graphs in the sense of Barendregt et al. We express this second syntactic theory, which we prove equivalent to the first, as a storeless
reduction semantics of combinatory term graphs. We then recast let bindings as bindings in a global store, thus shifting, in Strachey’s words, from denotable entities to storable entities. The
store-based terms can still be interpreted as term graphs. We express this third syntactic theory, which we prove equivalent to the second, as a store-based reduction semantics of machine. The
architecture of this store-based abstract machine coincides with that of Turner’s original reduction machine. The three syntactic theories presented here therefore properly account for combinatory
graph reduction As We Know It. These three syntactic theories scale to handling the Y combinator. This article therefore illustrates the scientific consensus of theoreticians and implementors about
graph reduction: it is the same combinatory
"... Abstract. This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need
lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for ..."
Add to MetaCart
Abstract. This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda
calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a
natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer and surjective translation. This also shows that the natural
embedding of Abramsky’s lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in
this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.25.6768","timestamp":"2014-04-18T19:25:01Z","content_type":null,"content_length":"38805","record_id":"<urn:uuid:542ba7a4-e640-438c-9492-10c8af4f47b5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
CSA - Discovery Guides
Quantum Cryptography: Privacy Through Uncertainty
(Released October 2002)
by Salvatore Vittorio
Key Citations
1. "Circular type" quantum key distribution
Nishioka, T; Ishizuka, H; Hasegawa, T; Abe, J
IEEE Photonics Technology Letters (1041-1135), vol. 14, no. 4, Apr. 2002, p. 576-578
A circular type interferometric system for quantum key distribution is proposed. The system, which adopts a Sagnac loop, is highly stable and simple. The stability derives from its
self-alignment and compensation for birefringence, and the system's simplicity allows it to achieve an extremely fast bit rate. Moreover, it is easily applicable to a multiparty setup. Key creation
with 0.1 photon per pulse at a rate of 1.2 kHz with a 5.4 percent quantum bit-error rate over a 200-m fiber was realized. (Author)
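As background for the key-distribution citations in this section, here is an idealized sketch (mine, for illustration only) of BB84-style basis sifting; it does not model the paper's Sagnac-loop interferometer, channel noise, or an eavesdropper, so the sifted keys agree exactly and the bit-error rate is zero.

```python
# Idealized BB84 basis sifting: keep only the positions where Alice's and
# Bob's randomly chosen bases match.
import random

def bb84_sift(n, seed=0):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]  # rectilinear / diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    # Matching bases: Bob recovers Alice's bit; mismatched bases give a
    # random result and are discarded in the public basis comparison.
    bob_bits = [a if ba == bb else rng.randint(0, 1)
                for a, ba, bb in zip(alice_bits, alice_bases, bob_bases)]
    key_a = [a for a, ba, bb in zip(alice_bits, alice_bases, bob_bases) if ba == bb]
    key_b = [b for b, ba, bb in zip(bob_bits,  alice_bases, bob_bases) if ba == bb]
    return key_a, key_b

ka, kb = bb84_sift(1000)
print(len(ka), ka == kb)   # about half the pulses survive sifting; keys agree
```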
2. Experimental quantum cloning of single photons
Lamas-Linares, A; Simon, C; Howell, J C; Bouwmeester, D
Science (0036-8075), vol. 296, no. 5568, 26 Apr. 2002, p. 712-714
Although perfect copying of unknown quantum systems is forbidden by the laws of quantum mechanics, approximate cloning is possible. A natural way of realizing quantum cloning of photons is by
stimulated emission. In this context, the fundamental quantum limit to the quality of the clones is imposed by the unavoidable presence of spontaneous emission. In our experiment, a single
input photon stimulates the emission of additional photons from a source on the basis of parametric down-conversion. This leads to the production of quantum clones with near-optimal fidelity.
We also demonstrate universality of the copying procedure by showing that the same fidelity is achieved for arbitrary input states. (Author)
3. The effect of multi-pair signal states in quantum cryptography with entangled photons
Dusek, M; Bradler, K
Journal of Optics B: Quantum and Semiclassical Optics (1464-4266), vol. 4, no. 2, Apr. 2002, p. 109-113
Real sources of entangled photon pairs (such as parametric down conversion) are not perfect. They produce quantum states that contain more than only one photon pair with some probability.
Several aspects of the use of such states for the purpose of quantum key distribution are discussed. It is shown that the presence of 'multi-pair' signals (together with low detection
efficiencies) causes errors in transmission even in the absence of an eavesdropper. Moreover, it is shown that even eavesdropping that draws information only from these 'multi-pair' signals
increases the error rate. This fact represents the important advantage of entanglement-based quantum key distribution. Information that can be obtained by an eavesdropper from the 'multi-pair'
signals is also calculated. (Author)
4. Superpositions of the orbital angular momentum for applications in quantum experiments
Vaziri, A; Weihs, G ; Zeilinger, A
Journal of Optics B: Quantum and Semiclassical Optics (1464-4266), vol. 4, no. 2, Apr. 2002, p. S47-S51
Two different experimental techniques for preparing and analyzing superpositions of Gaussian and Laguerre-Gaussian modes are presented. These involve exploiting an interferometric method in one
case and using computer-generated holograms in the other. It is shown that by shifting a hologram with respect to an incoming Gaussian beam, different superpositions of the Gaussian and the
Laguerre-Gaussian beam can be produced. An analytical expression connecting the relative phase, the amplitudes of the modes and the displacement of the hologram is given. The application of
such orbital angular momenta superpositions in quantum experiments such as quantum cryptography is discussed. (Author)
5. Generation of correlated photons via four-wave mixing in optical fibres
Wang, L J; Hong, C K; Friberg, S R
Journal of Optics B: Quantum and Semiclassical Optics (1464-4266), vol. 3, no. 5, Oct. 2001, p. 346-352
Correlated photons have been used in quantum optics for the generation of squeezed light, demonstration of quantum interference effects, quantum cryptography, Einstein-Podolsky-Rosen
experiments and, most recently, quantum teleportation. Usually, they are generated using second-order (chi(2)) parametric processes, to take advantage of large chi(2) nonlinearities and to wavelength-separate the correlated photons from the pump photons. Here, we examine the generation of correlated photons using third-order (chi(3)) parametric processes in optical fibers. We show that using such
processes provides a simple and inexpensive correlated photon source readily compatible with communications technologies. (Author)
back to top
6. Generation and manipulation of squeezed states of light in optical networks for quantum communication and computation
Raginsky, Maxim; Kumar, Prem
Journal of Optics B: Quantum and Semiclassical Optics (1464-4266), vol. 3, no. 4, Aug. 2001, p. L1-L4
We analyze a fiber-optic component which could have multiple uses in novel information processing systems utilizing squeezed states of light. Our approach is based on the phenomenon of
photon-number squeezing of soliton noise after the soliton has propagated through a nonlinear optical fiber. Applications of this component in optical networks for quantum computation and
quantum cryptography are discussed. (Author)
back to top
7. A note on invariants and entanglements [for quantum communication]
Albeverio, Sergio; Fei, Shao-Ming
Journal of Optics B: Quantum and Semiclassical Optics (1464-4266), vol. 3, no. 4, Aug. 2001, p. 223-227
Quantum entanglements are studied in terms of the invariants under local unitary transformations. A generalized formula of concurrence for N-dimensional quantum systems is presented. This
generalized concurrence has potential applications in studying separability and calculating the entanglement of formation for high-dimensional mixed quantum states. (Author)
back to top
8. Quantum key distribution over 1.1 km in an 850-nm experimental all-fiber system [for encryptions]
Liang, C; Fu, D-H ; Liang, B ; Wu, L-A ; Yao, D-C ; Lu, S-W
Acta Physica Sinica (1000-3290), vol. 50, no. 8, Aug. 2001, p. 1429-1433
A 1.1 km long all-fiber quantum key distribution experimental setup has been realized for the first time at 850 nm. The system employs the BB84 protocol to establish a secret key between two
parties, the security of which is guaranteed by Heisenberg's uncertainty relationship and the quantum noncloning principle. Phase modulated single photons are used to carry the key. The
effective transmission rate is 3 bit/s, with a bit error rate of 9 percent. (Author)
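The BB84 step summarized above, in which key bits survive only where the two parties' random basis choices agree, can be sketched in a few lines. This is a toy model of an ideal, noiseless channel, not the authors' phase-modulated implementation:

```python
import secrets

def bb84_sift(n_pulses):
    """Toy BB84 sifting: keep only the positions where Alice's and
    Bob's randomly chosen bases (0 = rectilinear, 1 = diagonal) agree.
    An ideal, noiseless channel is assumed, so the kept bits match."""
    alice_bits = [secrets.randbelow(2) for _ in range(n_pulses)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_pulses)]
    bob_bases = [secrets.randbelow(2) for _ in range(n_pulses)]
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
            if a == b]

key = bb84_sift(1000)
# On average, about half of the pulses survive the sifting step.
```

Since the bases agree with probability 1/2, the sifted key is roughly half the length of the raw transmission, before error correction and privacy amplification shrink it further.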
back to top
9. Entanglement of the orbital angular momentum states of photons
Mair, Alois; Vaziri, Alipasha; Weihs, Gregor; Zeilinger, Anton
Nature (0028-0836), vol. 412, no. 6844, 19 July 2001, p. 313-316
We demonstrate quantum entanglement involving the spatial modes of the electromagnetic field carrying orbital angular momentum. As these modes can be used to define an infinite-dimensional discrete Hilbert space, this approach provides a practical route to entanglement that involves many orthogonal quantum states, rather than only two. Multi-dimensional entangled states could be of considerable importance in the field of quantum information, enabling, for example, more efficient use of communication channels in quantum cryptography. (Author)
back to top
10. Strange attractor in optical logic cells
Gonzalez-Marcos, A P; Martin-Pereda, J A
IN:Photonic Devices and Algorithms for Computing III; Proceedings of the Conference, San Diego, CA, July 29-30, 2001, Bellingham, WA, Society of Photo-Optical Instrumentation Engineers, 2001,
p. 65-73
Optical logic cells, employed in such tasks as optical computing or optically controlled switches for photonic switching, display very particular behavior when the working conditions are
slightly modified. One of the more striking changes occurs when some delayed feedback is applied between one of the possible output gates and a control input. A chaotic behavior results, and
its possible applications range from communications to cryptography. But the main problem with this behavior is the binary character of the resulting signal. Most of the techniques employed
today to analyze chaotic signals concern analog signals, where algebraic equations are applicable. There are no specific tools for studying digital chaotic signals, although some methods have been proposed. One, equivalent to the phase diagram used in studying analog chaos, converts the binary signal to hexadecimal before analysis; this method provides more information than previous approaches. (Author)
back to top
11. Entanglement purification for quantum communication
Pan, Jian-Wei; Simon, Christoph; Brukner, Caslav; Zeilinger, Anton
Nature (0028-0836), vol. 410, no. 6832, 26 Apr. 2001, p. 1067-1070
The distribution of entangled states between distant locations will be essential for the future large-scale realization of quantum communication schemes such as quantum cryptography and quantum
teleportation. Existing general purification protocols are based on the quantum controlled-NOT (CNOT) or similar quantum logic operations, which are very difficult to implement experimentally.
Present realizations of CNOT gates are much too imperfect to be useful for long-distance quantum communication. Here we present a scheme for the entanglement purification of general mixed
entangled states, which achieves 50 percent of the success probability of schemes based on the CNOT operation, but requires only simple linear optical elements. Because the perfection of such
elements is very high, the local operations necessary for purification can be performed with the required precision. Our procedure is within the reach of current technology, and should
significantly simplify the implementation of long-distance quantum communication. (Author)
back to top
12. Undeniable cryptographic protocol for both sender and receiver and its applications
Li, Xian-xian; Huai, Jin-peng
Beijing University of Aeronautics and Astronautics, Journal (1001-5965), vol. 27, no. 2, Apr. 2001, p. 182-185
In data communications, digital signature schemes have been used to ensure the non-repudiation of sent data and the integrity of the data. The non-repudiation of received data is also very
important in secure communication. In past years, this kind of cryptographic protocol was mainly implemented by the intervention of a trusted third party in the transmission and encryption of
data. Thus, the dependability and security of the trusted third party was a bottleneck in these secure systems. To solve this problem, an undeniable cryptographic protocol for both sender and
receiver is proposed that requires no trusted third party and is more efficient. Finally, its applications in electronic mail are discussed. (Author)
back to top
13. Visual cryptography based on optical interference encryption technique
Seo, D-H; Kim, J-Y; Lee, S-S; Park, S-J; Cho, W-H; Kim, S-J
IN:Photonic and quantum technologies for aerospace applications III; Proceedings of the Conference, Orlando, FL, Apr. 17, 18, 2001 (A02-10251 01-74), Bellingham, WA, Society of Photo-Optical
Instrumentation Engineers (SPIE Proceedings. Vol. 4386), 2001, p. 172-180
We propose a new visual cryptography scheme based on optical interference that can improve the contrast and signal to noise ratio of reconstructed images when compared to conventional visual
cryptography methods. The binary image being encrypted is divided into an arbitrary number n of slides. For encryption, (n-1) independent random keys are generated, along with one further random key obtained by an XOR process over these keys. An XOR process between each divided image and each random key then produces n encrypted images. These encrypted images are then used to make
encrypted binary phase masks. For decryption, the phase masks are placed on the paths of a Mach-Zehnder interferometer. (Author)
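The XOR share construction described above can be sketched as follows. This models only the digital share-generation and reconstruction step, not the interferometric phase-mask decryption; the byte-level representation of the binary image is an illustrative assumption:

```python
import secrets

def make_shares(image_bits, n):
    """Split a binary image (here, bytes) into n shares: n - 1 random
    shares plus one share equal to the XOR of the image with all of
    them. XOR-ing all n shares together reconstructs the image."""
    shares = [secrets.token_bytes(len(image_bits)) for _ in range(n - 1)]
    last = bytes(image_bits)
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def reconstruct(shares):
    """XOR all shares together to recover the original image."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

img = bytes([0x0f, 0xf0, 0xaa])
assert reconstruct(make_shares(img, 4)) == img
```

Each individual share is statistically random, so any subset of fewer than n shares reveals nothing about the image; only the XOR of all n shares reproduces it.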
back to top
14. Experimental entanglement distillation and 'hidden' non-locality
Kwiat, Paul G; Barraza-Lopez, Salvador; Stefanov, Andre; Gisin, Nicolas
Nature (0028-0836), vol. 409, no. 6823, 22 Feb. 2001, p. 1014-1017
Entangled states are central to quantum information processing, including quantum teleportation, efficient quantum computation, and quantum cryptography. We demonstrate experimentally the
distillation of maximally entangled states from non-maximally entangled inputs. Using partial polarizers, we perform a filtering process to maximize the entanglement of pure
polarization-entangled photon pairs generated by spontaneous parametric down-conversion. We have also applied our methods to initial states that are partially mixed. After filtering, the
distilled states demonstrate certain non-local correlations, as evidenced by their violation of a form of Bell's inequality. Because the initial states do not have this property, they can be
said to possess 'hidden' non-locality. (Author)
back to top
15. Photonic and quantum technologies for aerospace applications III; Proceedings of the Conference, Orlando, FL, Apr. 17, 18, 2001
Donkor, E, Ed; Pirich, A R, Ed; Taylor, E W, Ed
Bellingham, WA, Society of Photo-Optical Instrumentation Engineers (SPIE Proceedings. Vol. 4386), 2001
The present volume on photonic and quantum technologies for aerospace applications discusses quantum computer realization and design issues, quantum optics and optical quantum computing, field
theory and electrodynamics in quantum computing, and cryptography, security, and encryption. Attention is given to measurement and analysis of chirp for four-wave mixing, high-power laser
material for 944-nm emission, a high-speed photonic analog-to-digital converter, an optical subcarrier generation and multiplexing scheme for all-optical networks, and enabling photonic
technologies based on electrochromic and photochromic tungsten oxide. Other topics addressed include high-repetition-rate spectrally synthesized modelocked laser pulses, a stepped conical zone
plate antenna, fast quantum Fourier-Weyl-Heisenberg transforms, prospects of electric-dipole forbidden transitions for qubit logic, visual cryptography based on an optical interference
encryption technique, and the behavior of a persistent current qubit in a time-dependent EM field. (CSA)
back to top
16. Single photons on demand from a single molecule at room temperature
Lounis, B; Moerner, W E
Nature (0028-0836), vol. 407, no. 6803, 28 Sept. 2000, p. 491-493
The generation of nonclassical states of light is of fundamental scientific and technological interest. For example, 'squeezed' states enable measurements to be performed at lower noise levels
than possible using classical light. Deterministic (or triggered) single-photon sources exhibit nonclassical behavior in that they emit, with a high degree of certainty, just one photon at a
user-specified time. (In contrast, a classical source such as an attenuated pulsed laser emits photons according to Poisson statistics.) A deterministic source of single photons could find
applications in quantum information processing, quantum cryptography, and certain quantum computation problems. Here we realize a controllable source of single photons using optical pumping of
a single molecule in a solid. Triggered single photons are produced at a high rate, whereas the probability of simultaneous emission of two photons is nearly zero - a useful property for secure
quantum cryptography. Our approach is characterized by simplicity, room temperature operation, and improved performance compared to other triggered sources of single photons. (Author)
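The contrast with an attenuated pulsed laser can be made quantitative: for Poissonian pulses of mean photon number mu, the probability of emitting two or more photons is 1 - e^(-mu)(1 + mu), which is exactly what a deterministic single-photon source drives toward zero. A brief sketch (mu = 0.1 is an illustrative weak-pulse value, not a figure from the paper):

```python
import math

def p_multiphoton(mu):
    """Probability that a Poissonian pulse with mean photon number mu
    contains two or more photons: 1 - P(0) - P(1)."""
    return 1.0 - math.exp(-mu) * (1.0 + mu)

# For mu = 0.1 the multi-photon fraction is about 0.0047; an ideal
# triggered single-photon source would reduce this to nearly zero.
p = p_multiphoton(0.1)
```

Multi-photon pulses matter for secure quantum cryptography because an eavesdropper can in principle split off the extra photon without disturbing the transmission.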
back to top
17. POVM inconclusive rate
Brandt, Howard E
IN:Quantum computing; Proceedings of the Conference, Orlando, FL, Apr. 26, 27, 2000 (A00-43078 12-59), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers (SPIE Proceedings.
Vol. 4047), 2000, p. 69-88
The inconclusive rate is considered as a disturbance measure in key distribution in quantum cryptography. Bennett's two-state protocol is addressed for the case in which a positive operator
valued measure is implemented by the legitimate receiver in the presence of individual attack by a general unitary disturbing eavesdropping probe. The maximum Renyi information gain by the
disturbing probe is calculated for given receiver error and inconclusive rates. It is demonstrated explicitly that less information is available to an eavesdropper at fixed inconclusive rate
and error rate than is available at fixed error rate only. (Author)
back to top
18. Experimental entanglement of four particles
Sackett, C A; Kielpinski, D; King, B E; Langer, C; Meyer, V; Myatt, C J; Rowe, M; Turchette, Q A; Itano, W M; Wineland, D J
Nature (0028-0836), vol. 404, no. 6775, 16 Mar. 2000, p. 256-259
Quantum mechanics allows for many-particle wavefunctions that cannot be factorized into a product of single-particle wavefunctions, even when the constituent particles are entirely distinct.
Such 'entangled' states explicitly demonstrate the non-local character of quantum theory, having potential applications in high-precision spectroscopy, quantum communication, cryptography, and
computation. In general, the more particles that can be entangled, the more clearly nonclassical effects are exhibited - and the more useful the states are for quantum applications. Here we
implement a recently proposed entanglement technique to generate entangled states of two and four trapped ions. Coupling between the ions is provided through their collective motional degrees
of freedom, but actual motional excitation is minimized. Entanglement is achieved using a single laser pulse, and the method can in principle be applied to any number of ions. (Author)
back to top
19. Free-space quantum cryptography in daylight
Hughes, Richard J; Buttler, William T; Kwiat, Paul G; Lamoreaux, Steve K; Morgan, George L; Nordholt, Jane E; Peterson, C G
IN:Free-space laser communication technologies XII; Proceedings of the Conference, San Jose, CA, Jan. 24, 2000 (A00-35451 09-74), Bellingham, WA, Society of Photo-Optical Instrumentation
Engineers (SPIE Proceedings. Vol. 3932), 2000, p. 117-126
Quantum cryptography is an emerging technology in which two parties may simultaneously generate shared, secret cryptographic key material using the transmission of quantum states of light. In
this paper we describe the theory of quantum cryptography, and the most recent results from our experimental free-space system, with which we have demonstrated the feasibility of quantum key
generation over a point-to-point outdoor atmospheric path in daylight. We achieved a transmission distance of 0.5 km, which was limited only by the length of the test range. Our results provide
strong evidence that cryptographic key material could be generated on demand between a ground station and a satellite (or between two satellites), allowing a satellite to be securely rekeyed on
orbit. We present a feasibility analysis of surface-to-satellite quantum key generation. (Author)
back to top
20. Quantum Cryptography for Secure Satellite Communications
Hughes, R J; Buttler, W T
RECON no. 20010104946.
Quantum cryptography is an emerging technology in which two parties may simultaneously generate shared, secret cryptographic key material using the transmission of quantum states of light. The
security of these transmissions is based on the inviolability of the laws of quantum mechanics and information-theoretically secure post-processing methods. An adversary can neither
successfully tap the quantum transmissions, nor evade detection, owing to Heisenberg's uncertainty principle. In this paper we have demonstrated the feasibility of quantum key generation over a
point-to-point outdoor atmospheric path in daylight.
back to top
21. Strategies for Steganalysis of Bitmap Graphics Files
Fogle, Christopher J
NASA no. 19990036740
Steganography is the art and science of communicating through covert channels. The goal of steganography is to hide the fact that a message is even being transmitted. In the context of today's
digital world, this ancient practice is enjoying a resurgence due to the plethora of hiding places made possible by modern information media. Of particular concern is the use of graphics image
files to conceal both legitimate and criminal communications.
back to top
22. Some models of chaotic motion of particles and their application to cryptography
Szczepanski, J; Gorski, K; Kotulski, Z; Paszkiewicz, A; Zugaj, A
Archives of Mechanics - Archiwum Mechaniki Stosowanej (0373-2029), vol. 51, nos. 3-4, 1999, p. 509-528
Reflection law models describing the motion of a free particle in a bounded domain are considered. Properties of such dynamical systems are strongly related to the boundary conditions,
expressed by a map called a reflection law. We discuss recent results concerning the problem of transferring important properties like chaos, ergodicity, and mixing from the reflection law to
the motion of the particle. Then we present in a consistent way a method of constructing block cryptosystems, using chaotic reflection law models with appropriate properties. We also propose an
application of the mechanical particle model to constructing a pseudorandom number generator which can be applied in stream ciphers. The security of the cryptosystem based on particle motion is
due to the property of the statistical independence of the actual location of the particle, after a number of reflections, from its initial location. (Author)
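As a rough illustration of driving a stream cipher from a chaotic system, the sketch below substitutes the well-known logistic map for the paper's reflection-law particle models. The map, the threshold bit extraction, and the seed are all illustrative assumptions, and the result is not a cryptographically secure generator:

```python
def logistic_keystream(seed, nbytes):
    """Toy keystream from the chaotic logistic map x -> 4x(1 - x),
    taking one bit per iteration by thresholding at 1/2. This is a
    stand-in for the paper's reflection-law models and is NOT a
    cryptographically secure generator."""
    x = seed
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            x = 4.0 * x * (1.0 - x)
            byte = (byte << 1) | (1 if x > 0.5 else 0)
        out.append(byte)
    return bytes(out)

def stream_xor(keystream, data):
    """XOR a message with the keystream (encryption and decryption
    are the same operation, as in any stream cipher)."""
    return bytes(a ^ b for a, b in zip(keystream, data))

msg = b"secret"
ks = logistic_keystream(0.3141592653589793, len(msg))
assert stream_xor(ks, stream_xor(ks, msg)) == msg
```

The seed plays the role of the secret key: the sensitive dependence on initial conditions that the paper exploits means two nearby seeds quickly produce uncorrelated keystreams.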
back to top
23. Quantum cryptography on optical fiber networks
Townsend, Paul D
IN:Photonic quantum computing II; Proceedings of the Meeting, Orlando, FL, Apr. 15, 16, 1998 (A98-36101 09-74), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers (SPIE
Proceedings. Vol. 3385), 1998, p. 2-13
Quantum cryptography exploits the fact that an unknown quantum state cannot be accurately copied (cloned) or measured without disturbance. By using such elementary quantum states to represent
binary information, it is possible, therefore, to construct communication systems with verifiable levels of security that are 'guaranteed' by fundamental quantum mechanical laws. This paper
describes recent progress at BT Laboratories in the development of practical optical fiber-based quantum cryptography systems. These developments include interferometric systems operating in
the 1.3-micron wavelength fiber transparency window over point-to-point links up to 50 km in length and on multi-user passive optical networks. We describe how this technology performs on fiber
links installed in BT's public network and discuss issues such as cross-talk with conventional data channels propagating at different wavelengths in the same fiber. (Author)
back to top
24. Free-space quantum key distribution at night
Buttler, W T; Hughes, R J; Kwiat, P G; Lamoreaux, S K; Luther, G G; Morgan, G L; Nordholt, J E; Peterson, C G; Simmons, C M
IN:Photonic quantum computing II; Proceedings of the Meeting, Orlando, FL, Apr. 15, 16, 1998 (A98-36101 09-74), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers (SPIE
Proceedings. Vol. 3385), 1998, p. 14-22
An experimental free-space quantum key distribution (QKD) system has been tested over an outdoor optical path of 1 km under nighttime conditions at Los Alamos National Laboratory. This system
employs the Bennett 92 protocol. Here, we give a brief overview of this protocol and describe our experimental implementation of it. An analysis of the system efficiency is presented, as well
as a description of our error detection protocol, which employs a two-dimensional parity check scheme. Finally, the susceptibility of this system to eavesdropping by various techniques is
determined, and the effectiveness of privacy amplification procedures is discussed. Our conclusions are that freespace QKD is both effective and secure; possible applications include the
rekeying of satellites in low Earth orbit. (Author)
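The two-dimensional parity check mentioned above can be sketched generically: a single flipped bit is located at the intersection of the one failing row parity and the one failing column parity. The actual protocol discloses parities over a public channel and discards bits accordingly, which is not modeled here:

```python
def parities(grid):
    """Row and column parities of a two-dimensional bit array."""
    rows = [sum(r) % 2 for r in grid]
    cols = [sum(c) % 2 for c in zip(*grid)]
    return rows, cols

def locate_single_error(sent, received):
    """Compare the two parties' parities: a single flipped bit shows
    up as exactly one failing row parity and one failing column
    parity, whose intersection locates the error."""
    sr, sc = parities(sent)
    rr, rc = parities(received)
    bad_rows = [i for i, (a, b) in enumerate(zip(sr, rr)) if a != b]
    bad_cols = [j for j, (a, b) in enumerate(zip(sc, rc)) if a != b]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0], bad_cols[0]
    return None

sent = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
received = [row[:] for row in sent]
received[1][2] ^= 1                 # one transmission error
assert locate_single_error(sent, received) == (1, 2)
```

An even number of errors in the same row or column defeats a single pass, which is why such schemes are typically iterated with reshuffled blocks.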
back to top
25. Positive operator valued measure in quantum information processing
Brandt, Howard E
IN:Photonic quantum computing II; Proceedings of the Meeting, Orlando, FL, Apr. 15, 16, 1998 (A98-36101 09-74), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers (SPIE
Proceedings. Vol. 3385), 1998, p. 23-35
The positive operator valued measure (POVM), also known as the probability operator valued measure, is useful in quantum information processing. The POVM consists of a set of nonnegative
quantum-mechanical Hermitian operators that add up to the identity. The probability that a quantum system is in a particular state is given by the expectation value of the POVM operator
corresponding to that state. Following a brief review of the mathematics and history of POVMs in quantum theory, and a pedagogical discussion of the quantum mechanics of photonic qubits, a
particular implementation of a POVM for use in the measurement of photonic qubits is reviewed. (Author)
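The defining properties stated above (nonnegative operators summing to the identity, with outcome probabilities given by expectation values) can be checked numerically. The three-outcome "trine" POVM used below is a standard textbook example chosen for illustration, not the receiver implementation reviewed in the paper:

```python
import math

def proj(theta):
    """Rank-one projector |v><v| for the real unit vector
    v = (cos theta, sin theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [s * c, s * s]]

# Three-outcome 'trine' POVM on a qubit: E_k = (2/3)|v_k><v_k| with
# the vectors spaced 120 degrees apart. The elements are nonnegative
# operators that sum to the identity, as required.
povm = [[[2.0 / 3.0 * x for x in row]
         for row in proj(2.0 * math.pi * k / 3.0)] for k in range(3)]

def prob(element, state):
    """Outcome probability <psi|E|psi> for a real state vector."""
    out = [sum(element[i][j] * state[j] for j in range(2))
           for i in range(2)]
    return sum(state[i] * out[i] for i in range(2))

state = [1.0, 0.0]
probs = [prob(e, state) for e in povm]   # three probabilities, summing to 1
```

Because the POVM has more outcomes than the dimension of the Hilbert space, it is not a von Neumann measurement, which is precisely what makes POVMs useful for discriminating nonorthogonal photonic qubit states.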
back to top
26. Quantum information processing with cavity QED
Hood, C J; Lynn, T W; Mabuchi, H; Chapman, M S; Ye, J; Kimble, H J
IN:Photonic quantum computing II; Proceedings of the Meeting, Orlando, FL, Apr. 15, 16, 1998 (A98-36101 09-74), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers (SPIE
Proceedings. Vol. 3385), 1998, p. 95-100
Strongly coupled cavity QED systems show great promise for coherent processing of quantum information in the contexts of quantum computing, communication, and cryptography. We present here
current progress in experiments for which single atoms are strongly coupled to the mode of a high finesse optical resonator. (Author)
back to top
27. Photonic quantum computing II; Proceedings of the Meeting, Orlando, FL, Apr. 15, 16, 1998
Bellingham, WA, Society of Photo-Optical Instrumentation Engineers (SPIE Proceedings. Vol. 3385), 1998
The papers presented in this volume focus on quantum communications and cryptography, quantum computing, and quantum structures and algebras. Specific topics discussed include quantum
cryptography on optical fiber networks; positive operator valued measure in quantum information processing; vibrational decoherence in ion-trap quantum computers; and optical approach to
quantum computing. Papers are also presented on quantum information processing with cavity QED; logic design for field-effect quantum transistors; and Q-extension of the linear harmonic
oscillator. (AIAA)
back to top
28. Explorations in quantum computing [Book]
Williams, Colin P; Clearwater, Scott H
Santa Clara, CA, Springer TELOS, 1998
This book explains quantum computing in simple terms and describes the key technological hurdles that must be overcome in order to make quantum computers a reality. The book uses executable
software simulations to help explain the material and is accompanied by a multiplatform CD-ROM containing Mathematica (Trademark) Version 2.2 and 3.0 notebooks that provide simulations and
tutorials on most topics covered in the book. The topics addressed include: quantum mechanics and computers, simulating a simple quantum computer, the effects of imperfections, breaking
unbreakable codes, true randomness, quantum cryptography, quantum teleportation, quantum error correction, and how to make a quantum computer. (AIAA)
back to top
29. New Result in Quantum Cryptography
Brandt, Howard E
NASA no. 19990025973
In the entangled translucent eavesdropping scenario of key generation in quantum cryptography, I demonstrate that the unsafe error rate based on standard mutual information comparisons is
equivalent to the maximum allowable error rate based on perfect mutual information for the eavesdropper. In this case, the unsafe error rate is not in fact overly conservative, as is commonly believed.
back to top
30. Secure communications using quantum cryptography
Hughes, Richard J; Buttler, William T; Kwiat, Paul G; Luther, Gabriel G; Morgan, George L; Nordholt, Jane E; Peterson, C G; Simmons, Charles M
IN:Photonic quantum computing; Proceedings of the Meeting, Orlando, FL, Apr. 23, 24, 1997 (A97-35954 09-70), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers (SPIE
Proceedings. Vol. 3076), 1997, p. 2-11
The secure distribution of the secret random bit sequences known as 'key' material is an essential precursor to their use for the encryption and decryption of confidential communications.
Quantum cryptography is an emerging technology for secure key distribution with single-photon transmissions: Heisenberg's uncertainty principle ensures that an adversary can neither
successfully tap the key transmissions, nor evade detection (eavesdropping raises the key error rate above a threshold value). We have developed experimental quantum cryptography systems based
on the transmission of nonorthogonal single-photon states to generate shared key material over multikilometer optical fiber paths and over line-of-sight links. In both cases, key material is
built up using the transmission of a single-photon per bit of an initial secret random sequence. A quantum-mechanically random subset of this sequence is identified, becoming the key material
after a data reconciliation stage with the sender. In our optical fiber experiment we have performed quantum key distribution over 24-km of underground optical fiber using single-photon
interference states, demonstrating that secure, real-time key generation over 'open' multi-km node-to-node optical fiber communications links is possible. (Author)
back to top
31. New results on entangled translucent eavesdropping in quantum cryptography
Brandt, Howard E; Myers, John M; Lomonaco, Samuel J, Jr
IN:Photonic quantum computing; Proceedings of the Meeting, Orlando, FL, Apr. 23, 24, 1997 (A97-35954 09-70), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers (SPIE
Proceedings. Vol. 3076), 1997, p. 12-28
We present a mathematical physics analysis of entangled translucent eavesdropping in quantum cryptography, based on the recent work of Ekert et al. (1994). The key generation procedure involves
the transmission, interception, and reception of two nonorthogonal photon polarization states. At the receiving end, a positive operator valued measure (POVM) is employed in the measurement
process. The eavesdropping involves an information-maximizing von Neumann-type projective measurement. We propose a new design for a receiver that is an all-optical realization of the POVM,
using a Wollaston prism, a mirror, two beam splitters, a polarization rotator, and three photodetectors. We present a quantitative analysis of the receiver. We obtain closed-form algebraic
expressions for the error rates and mutual information, expressed in terms of the POVM-receiver error rate and the angle between the carrier polarization states. We also prove a significant
result, namely, that in the entangled translucent eavesdropping approach, the unsafe error rate based on standard mutual information comparisons is equivalent to the maximum allowable error
rate based on perfect mutual information for the eavesdropper. In this case, the above unsafe error rate is in fact not overly conservative. (Author)
back to top
32. Relativistic corrections to the Ekert test for eavesdropping
Czachor, Marek
IN:Photonic quantum computing; Proceedings of the Meeting, Orlando, FL, Apr. 23, 24, 1997 (A97-35954 09-70), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers (SPIE
Proceedings. Vol. 3076), 1997, p. 141-145
The degree of violation of the Bell inequality depends on the momenta of massive particles with respect to the laboratory if spin plays the role of a 'yes-no' observable. For ultrarelativistic particles a standard Ekert test has to take into account this velocity-dependent suppression of the degree of violation of the inequality. Otherwise 'Alice' and 'Bob' may 'discover' a nonexistent eavesdropper, where 'Alice' is the sender and 'Bob' is the recipient of cryptographic messages. (Author)
back to top
33. Prospects for quantum computation with trapped ions
Hughes, R J; James, D F V
NASA no. 19980210966
Over the past decade information theory has been generalized to allow binary data to be represented by two-state quantum mechanical systems. (A single two-level system has come to be known as a
qubit in this context.) The additional freedom introduced into information physics with quantum systems has opened up a variety of capabilities that go well beyond those of conventional
information. For example, quantum cryptography allows two parties to generate a secret key even in the presence of eavesdropping. But perhaps the most remarkable capabilities have been
predicted in the field of quantum computation. Here, a brief survey of the requirements for quantum computational hardware, and an overview of the in trap quantum computation project at Los
Alamos are presented. The physical limitations to quantum computation with trapped ions are discussed.
back to top
34. Photonic Imaging Networks
Fainman, Y; Kellner, Albert
NASA no. 19980020946
We have selected diagnostic medical imaging and visualization as a prototype application. We have developed several radiological visualization stations, and have evaluated methods for the lossless compression of images and image-format data over a lossy packet network. We have studied noise mechanisms in transparent photonic networks. We have demonstrated parallel-to-serial and serial-to-parallel conversion using spectral-domain four-wave mixing with 150 fs laser pulses, reaching serial data rates of over 1 Tbit/s. We have constructed and demonstrated a nonlinear optical processor based on three-wave mixing in nonlinear LBO crystals, and have employed the parallel-to-serial and serial-to-parallel processors for an experimental demonstration of the transmission of image information through an optical fiber channel. We have analyzed the secrecy capacity of a quantum cryptographic protocol for secret key generation and found that it depends primarily on estimates of the information in the eavesdropper's possession and on the expected fraction of inconclusive outcomes. We have experimentally investigated a novel frequency-division long-distance interferometer for implementing a quantum cryptographic protocol and found that its signals are not affected by transmission over optical fiber. Finally, we have developed, for the first time, rigorous definitions and a mathematical formalism for information leakage through possible eavesdropping on the quantum channel, and we quantify effective defense frontiers against eavesdroppers' attacks.
back to top
35. Two-photon geometric optical imaging and quantum 'cryptoFAX'
Sergienko, A V; Shih, Y H; Pittman, T B; Strekalov, D V; Klyshko, D N
IN:ICONO '95 - Atomic and quantum optics: High-precision measurements; Proceedings of the Conference, St. Petersburg, Russia, June 27-July 1, 1995 (A96-33587 08-74), Bellingham, WA, Society of
Photo-Optical Instrumentation Engineers (SPIE Proceedings. Vol. 2799), 1996, p. 164-171
A nonlocal two-photon quantum effect which surprisingly shows some features analogous to geometrical imaging optics is observed. The remote transfer of 2D analog information with a high degree
of security (quantum 'cryptoFAX') is demonstrated. (Author)
back to top
36. Fibre-optics quantum cryptography
Sochor, Vaclav
IN:ICONO '95 - Atomic and quantum optics: High-precision measurements; Proceedings of the Conference, St. Petersburg, Russia, June 27-July 1, 1995 (A96-33587 08-74), Bellingham, WA, Society of
Photo-Optical Instrumentation Engineers (SPIE Proceedings. Vol. 2799), 1996, p. 185-187
Quantum cryptography is a new, multidisciplinary technique that distributes information to two or more parties in a way that guarantees security against unauthorized eavesdropping. Three schemes for implementing this goal by means of fiber optics, namely photon polarization, delayed interferometry, and quantum correlations of nonorthogonal states, are discussed. The realization of the proposed schemes is discussed, as well as the fundamental limits imposed on quantum cryptography by fiber-optic techniques. (Author)
back to top
37. Quantum cryptography over underground optical fibers
HUGHES, R J; LUTHER, G G; MORGAN, G L; PETERSON, C G; SIMMONS, C; et al
Quantum cryptography is an emerging technology in which two parties may simultaneously generate shared, secret cryptographic key material using the transmission of quantum states of light
whose security is based on the inviolability of the laws of quantum mechanics. An adversary can neither successfully tap the key transmissions, nor evade detection, owing to Heisenberg's
uncertainty principle. In this paper the authors describe the theory of quantum cryptography, and the most recent results from their experimental system with which they are generating key
material over 14 km of underground optical fiber. These results show that optical-fiber based quantum cryptography could allow secure, real-time key generation over 'open' multi-km node-to-node
optical fiber communications links between secure 'islands.' (DOE)
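The shared-key generation described above can be illustrated with a toy, purely classical simulation of the sifting stage of a BB84-style protocol. This is an illustrative sketch only, not the authors' experimental system; the random bit and basis choices are placeholders:

```python
import random

def sift_key(n_pulses, seed=0):
    """Toy BB84-style sifting: Alice sends random bits in random bases;
    Bob measures in random bases; only matching-basis positions are kept."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_pulses)]
    alice_bases = [rng.choice("+x") for _ in range(n_pulses)]
    bob_bases   = [rng.choice("+x") for _ in range(n_pulses)]
    # With no eavesdropper and matching bases, Bob's bit equals Alice's,
    # so the sifted bits form the raw shared key.
    return [b for b, a, m in zip(alice_bits, alice_bases, bob_bases) if a == m]

key = sift_key(16)
# On average about half of the pulses survive sifting.
```

In a real system the sifted key would still be subjected to error estimation and privacy amplification before use.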
38. Generation of Antibunched Light by Excited Molecules in a Microcavity Trap
DEMARTINI, F; DIGIUSEPPE, G; MARROCCO, M; et al
In Rome Univ., Fourth International Conference on Squeezed States and Uncertainty Relations p 565-573 (SEE N96-27114 10-74)
The active microcavity is adopted as an efficient source of non-classical light. With this device, excited by a mode-locked laser at a rate of 100 MHz, single photons are generated over a single field mode with a nonclassical sub-Poissonian distribution. The process of adiabatic recycling within a multi-step Franck-Condon molecular optical-pumping mechanism, characterized in our case by a quantum efficiency very close to one, implies a pump self-regularization process leading to a striking n-squeezing effect. By a replication of the basic single-atom excitation process, a beam of quantum photons (Fock states) can be created. The new process represents a significant advance in the modern fields of basic quantum-mechanical investigation, quantum communication and quantum cryptography. (Author)
39. High-Rate Strong-Signal Quantum Cryptography
YUEN, HORACE P; et al
In Northwestern Univ., Fourth International Conference on Squeezed States and Uncertainty Relations p 363-368 (SEE N96-27114 10-74)
Several quantum cryptosystems utilizing different kinds of nonclassical light, which can accommodate high-intensity fields and high data rates, are described. However, they are all sensitive to loss, and both the high rate and the strong-signal character rapidly disappear. A squeezed-light homodyne detection scheme is proposed which, with present-day technology, leads to more than two orders of magnitude improvement in data rate over other current experimental systems for moderate loss. (Author)
40. A Secure Key Distribution System of Quantum Cryptography Based on the Coherent State
GUO, GUANG-CAN; ZHANG, XIAO-YU; et al
In University of Science and Technology of China, Fourth International Conference on Squeezed States and Uncertainty Relations p 297-300 (SEE N96- 27114 10-74)
Cryptographic communication has many important applications, particularly in private communication. As one knows, the security of a cryptographic channel depends crucially on the secrecy of the key. The Vernam cipher is the only cipher system with guaranteed security; in that system the key must be as long as the message and must be used only once. Quantum cryptography is a method whereby key secrecy can be guaranteed by a physical law, so it is impossible, even in principle, to eavesdrop on such channels. Quantum cryptography has been developed in recent years, and many schemes have been proposed. One of the main problems in this field is how to increase the transmission distance. In order to exploit the quantum nature of light, the schemes proposed so far all use very dim light pulses, with an average photon number of about 0.1. Because of the loss of the optical fiber, it is difficult for quantum cryptography based on single-photon-level or dim light to realize quantum key distribution over long distances. A quantum key distribution scheme based on coherent states is introduced in this paper. Here we discuss the feasibility and security of this scheme. (Derived from text)
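The Vernam-cipher property mentioned above (a key as long as the message, used only once) is easy to demonstrate with a minimal XOR one-time-pad sketch. This is illustrative only and unrelated to the paper's coherent-state scheme:

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    """Vernam cipher: XOR each message byte with a key byte.
    Perfect secrecy requires the key to be exactly as long as the
    message and to never be reused."""
    if len(key) != len(message):
        raise ValueError("key must be exactly as long as the message")
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"quantum"
key = secrets.token_bytes(len(msg))  # fresh random key, same length
ct = otp_encrypt(msg, key)
assert otp_encrypt(ct, key) == msg   # decryption is the same XOR
```

Quantum key distribution addresses exactly the hard part of this scheme: delivering that fresh, secret key to both parties.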
41. Scaling of entanglement close to a quantum phase transition
Osterloh, A; Amico, L; Falci, G; Fazio, R
Nature (0028-0836), vol. 416, no. 6881, 11 Apr. 2002, p. 608-610
Classical phase transitions occur when a physical system reaches a state below a critical temperature characterized by macroscopic order. Quantum phase transitions occur at absolute zero; they are induced by the change of an external parameter or coupling constant, and are driven by quantum fluctuations. Examples include transitions in quantum Hall systems, localization in Si-MOSFETs (metal oxide silicon field-effect transistors; Kravchenko et al., 1994), and the superconductor-insulator transition in two-dimensional systems. Both classical and quantum critical points are governed by a diverging correlation length, although quantum systems possess additional correlations that do not have a classical counterpart. This phenomenon, known as entanglement, is the resource that enables quantum computation and communication. The role of entanglement at a phase transition is not captured by statistical mechanics. A complete classification of the critical many-body state requires the introduction of concepts from quantum information theory. In this paper we connect the theory of critical phenomena with quantum information by exploring the entangling resources of a system close to its quantum critical point. We demonstrate, for a class of one-dimensional magnetic systems, that entanglement shows scaling behavior in the vicinity of the transition point. (Author)
42. Optimal signal detection in entanglement-assisted quantum communication systems
Ban, M
Journal of Optics B: Quantum and Semiclassical Optics (1464-4266), vol. 4, no. 2, Apr. 2002, p. 143-148
Minimization of error probability is considered in entanglement-assisted quantum communication systems. It is shown that although the quantum-state signals being sent are not symmetric at the sender side, the square-root measurement becomes optimal when they are made symmetric at the receiver side. For communication systems of coherent signals, where a two-mode squeezed-vacuum state is used as the entanglement resource, the quantum entanglement greatly reduces the average probability of error. The relation to the quantum dense coding of continuous variables is also discussed.
43. Relativity, entanglement and the physical reality of the photon
Tiwari, S C
Journal of Optics B: Quantum and Semiclassical Optics (1464-4266), vol. 4, no. 2, Apr. 2002, p. S39-S46
Recent experiments on the classic Einstein-Podolsky-Rosen (EPR) setting claim to test the compatibility between nonlocal quantum entanglement and the (special) theory of relativity.
Confirmation of quantum theory has led to the interpretation that Einstein's image of physical reality for each photon in the EPR pair cannot be maintained. A detailed critique of two
representative experiments is presented following the original EPR notion of local realism. It is argued that relativity does not enter into the picture; however, for the Bell-Bohm version of
local realism in terms of hidden variables such experiments are significant. Of the two alternatives, namely, incompleteness of quantum theory for describing an individual quantum system, and
the ensemble view, it is only the former that has been ruled out by the experiments. An alternative approach gives a statistical ensemble interpretation of the observed data, and the
significant conclusion that these experiments do not deny physical reality of the photon is obtained. After discussing the need for a photon model, a vortex structure is proposed based on the
space-time invariant property, spin, and pure gauge fields. To test the prime role of spin for photons and the angular-momentum interpretation of electromagnetic fields, experimental schemes
feasible in modern laboratories are suggested. (Author)
44. Quantum information processing with atoms and photons
Monroe, C
Nature (0028-0836), vol. 416, no. 6877, 14 Mar. 2002, p. 238-246
Quantum information processors exploit the quantum features of superposition and entanglement for applications not possible in classical devices, offering the potential for significant
improvements in the communication and processing of information. Experimental realization of large-scale quantum information processors remains a long-term vision, as the required nearly pure
quantum behavior is observed only in exotic hardware such as individual laser-cooled atoms and isolated photons. But recent theoretical and experimental advances suggest that cold atoms and
individual photons may lead the way towards bigger and better quantum information processors, effectively building mesoscopic versions of 'Schroedinger's cat' from the bottom up. (Author)
45. Quantum switch for continuous variable teleportation
Zhang, J; Xie, C; Peng, K
Journal of Optics B: Quantum and Semiclassical Optics (1464-4266), vol. 3, no. 5, Oct. 2001, p. 293-297
We propose a quantum teleportation scheme in which a quantum state is teleported from the sending station (Alice) to either of two receiving stations (Bob1, Bob2). In this scheme, two pairs of
EPR beams with identical frequency and constant phase relation are used to produce two pairs of conditional entangled beams by composing their modes on two beamsplitters. One output of a
beamsplitter is sent to Alice and the two outputs of the other beamsplitter are sent to Bob1 and Bob2. Which receiving station actually receives the teleported state can be decided by
correlating the in-phase or out-of-phase quadrature components of two two-mode squeezed vacuum states. The switch system manipulated by squeezed state light might be developed as a practical
quantum switch device for the communication and teleportation of quantum information. (Author)
46. Stimulated emission of polarization-entangled photons
Lamas-Linares, A; Howell, J C; Bouwmeester, D
Nature (0028-0836), vol. 412, no. 6850, 30 Aug. 2001, p. 887-890
We use stimulated parametric downconversion to study entangled states of light that bridge the gap between discrete and macroscopic optical quantum correlations. We demonstrate experimentally
the onset of laserlike action for entangled photons, through the creation and amplification of the spin-1/2 and spin-1 singlet states consisting of two and four photons, respectively. This
entanglement structure holds great promise in quantum information science, where there is a strong demand for entangled states of increasing complexity. (Author)
47. A method to protect quantum entanglement against certain kinds of phase and exchange errors
Yang, Chui-Ping; Gea-Banacloche, Julio
Journal of Optics B: Quantum and Semiclassical Optics (1464-4266), vol. 3, no. 1, Feb. 2001, p. S30-S33
We present a method to protect the entangled states of distant particles against decoherence due to local (but collective) phase errors, and local exchange-type interactions, by pairing up the
entangled particles. The method is based on a four-qubit code which forms a decoherence-free subspace for collective phase errors and exchange errors affecting the qubits in pairs. We also show
how the scheme can be generalized to protect certain entangled states of more than two particles. (Author)
Maths notation
October 25th 2009, 10:43 AM #1
Oct 2009
I live in London, UK
Greetings, I am a new member and I've come here to get some help with maths.
I have a degree in Chemistry and work as a chemist in industry, but I'm also studying to make a transition into chemical engineering. This means lots of advanced applied mathematics, and I'm
having trouble understanding my notes. Can you help with the following points, as my tutor is next to useless at the moment.
1. A matrix is denoted by a letter e.g. A with a line underneath (can't really show on here easily) but what is meant by A with a double underline?
2. What is meant by this:
E (as in "is an element of") R^nxn
The R is a very ornate looking letter, a bit like old calligraphy. Very odd.
I really hope someone can help.
1. A matrix is denoted by a letter e.g. A with a line underneath (can't really show on here easily) but what is meant by A with a double underline?
I have never seen that notation, could you give the title and author of the text you're using in that class?
E (as in "is an element of") R^nxn
The R is a very ornate looking letter, a bit like old calligraphy. Very odd.
Is it something like $\mathbb{R} ^{n \times n}$ or $\mathcal{R} ^{n \times n}$ ? If it's one of these, it usually means an $n \times n$ matrix with real entries.
The problems come in the lecture notes, not a specific text. I'm studying by distance learning as the university is 200 miles away from where I live - the only one in the UK that delivers the
course I want in distance learning form. The traditional engineering content is easy to understand and fun to study but the maths...
The notes for this module are presented as powerpoint slides which are fine with a narrative eg in a face to face lecture, but I don't have that sitting in my apartment/flat in front of the
laptop, and to compound the issue I don't have any grounding in engineering mathematics. Bit of a steep learning curve at the moment!
I suspected it meant real numbers so thanks for confirming that - those two symbols you put up are both used.
The double-underscore thing is used in the context of applying Newton's method to a system of nonlinear equations in 2 dimensions, i.e. evaluating F(x) and F'(x), where F is a vector of functions of x. The equation is written as F'(x) = J(x), with F having a single underscore (standard vector notation) and J having a double underscore. All I can think is that this is standard notation for the derivative (the Jacobian matrix) of a vector of functions....
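For what it's worth, a common engineering convention (an assumption here, since conventions vary by author and text) is a single underline for vectors and a double underline for matrices, so a double-underlined J would be the Jacobian matrix of the vector-valued F. A minimal sketch of Newton's method for a 2-D system, with a made-up pair of example equations:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for a system F(x) = 0:
    solve J(x_k) * dx = -F(x_k), then update x_{k+1} = x_k + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example system: x^2 + y^2 = 4 and x = y, whose positive root is x = y = sqrt(2)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],   # Jacobian: matrix of
                        [1.0,        -1.0]])        # partial derivatives
root = newton_system(F, J, [1.0, 2.0])
```

Here J(x) plays exactly the role of the double-underlined quantity in the notes: the matrix of partial derivatives of the vector F.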
Publication list
• Aurélie C. Lozano, Sanjeev R. Kulkarni and Robert E. Schapire.
Convergence and consistency of regularized boosting with weakly dependent observations.
IEEE Transactions on Information Theory, 60(1):651-660, 2014.
• Robert E. Schapire.
Explaining AdaBoost.
In Bernhard Schölkopf, Zhiyuan Luo, Vladimir Vovk, editors, Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik, Springer, 2013.
• Indraneel Mukherjee and Robert E. Schapire.
A theory of multiclass boosting.
Journal of Machine Learning Research 14:437-497, 2013.
Preliminary version appeared in Advances in Neural Information Processing Systems 23, 2011.
• Indraneel Mukherjee, Cynthia Rudin and Robert E. Schapire.
The rate of convergence of AdaBoost.
Journal of Machine Learning Research 14:2315-2347, 2013.
Preliminary version appeared in The 24th Conference on Learning Theory, 2011.
• Robert E. Schapire and Yoav Freund.
Boosting: Foundations and Algorithms.
MIT Press, 2012.
• Alekh Agarwal, Miroslav Dudík, Satyen Kale, John Langford and Robert E. Schapire.
Contextual bandit learning with predictable rewards.
In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012.
• Cynthia Rudin, Robert E. Schapire and Ingrid Daubechies.
Does AdaBoost always cycle? [open problem].
In Proceedings of the 25th Annual Conference on Learning Theory, 2012.
• Sina Jafarpour, Volkan Cevher and Robert E. Schapire.
A game theoretic approach to expander-based compressive sensing.
In Proceedings, IEEE International Symposium on Information Theory, 2011.
• Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin and Robert E. Schapire.
Contextual bandit algorithms with supervised learning guarantees.
In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.
• Wei Chu, Lihong Li, Lev Reyzin and Robert~E. Schapire.
Contextual bandits with linear payoff functions.
In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.
• Sina Jafarpour, Robert E. Schapire and Volkan Cevher.
Compressive sensing meets game theory.
In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2011.
• Umar Syed and Robert E. Schapire.
A reduction from apprenticeship learning to classification.
In Advances in Neural Information Processing Systems 23, 2011.
• Satyen Kale, Lev Reyzin and Robert E. Schapire.
Non-stochastic bandit slate problems.
In Advances in Neural Information Processing Systems 23, 2011.
• Indraneel Mukherjee and Robert E. Schapire.
Learning with continuous experts using drifting games.
Theoretical Computer Science, 411:2670-2683, 2010.
• Berk Kapicioglu, Robert E. Schapire, Martin Wikelski and Tamara Broderick.
Combining spatial and telemetric features for learning animal movement models.
In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, 2010.
• Lihong Li, Wei Chu, John Langford and Robert E. Schapire.
A contextual-bandit approach to personalized news article recommendation.
In Proceedings of the 19th International Conference on World Wide Web, 2010.
• Robert E. Schapire.
The convergence rate of AdaBoost [open problem].
In The 23rd Conference on Learning Theory, 2010.
• Cynthia Rudin and Robert E. Schapire.
Margin-based ranking and an equivalence between AdaBoost and RankBoost.
Journal of Machine Learning Research 10:2193-2232, 2009.
• Yongxin Taylor Xi, Zhen James Xiang, Peter J. Ramadge, Robert E. Schapire.
Speed and sparsity of regularized boosting.
In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, 2009.
• Zafer Barutcuoglu, Edoardo M. Airoldi, Vanessa Dumeaux, Robert E. Schapire, Olga G. Troyanskaya.
Aneuploidy prediction and tumor classification with heterogeneous hidden conditional random fields.
Bioinformatics 25(10):1307-1313, 2009.
• Ioannis Avramopoulos, Jennifer Rexford and Robert Schapire.
From optimization to regret minimization and back again.
In Proceedings of the Third Workshop on Tackling Computer System Problems with Machine Learning Techniques, 2008.
• Umar Syed, Michael Bowling and Robert E. Schapire.
Apprenticeship learning using linear programming.
In Proceedings of the 25th International Conference on Machine Learning, 2008.
• Yoav Freund and Robert E. Schapire.
Response to Mease and Wyner, Evidence contrary to the statistical view of boosting, JMLR 9:131-156, 2008.
Journal of Machine Learning Research, 9:171-174, 2008.
• Chris Bourke, Kun Deng, Stephen D. Scott, Robert E. Schapire and N.V. Vinodchandran.
On reoptimizing multi-class classifiers.
Machine Learning, 71:219-242, 2008.
• Joseph K. Bradley and Robert E. Schapire.
FilterBoost: Regression and classification on large datasets.
In Advances in Neural Information Processing Systems 20, 2008.
• Cynthia Rudin, Robert E. Schapire and Ingrid Daubechies.
Analysis of boosting algorithms using the smooth margin function.
The Annals of Statistics, 35(6):2723-2768, 2007.
• Umar Syed and Robert E. Schapire.
A game-theoretic approach to apprenticeship learning.
In Advances in Neural Information Processing Systems 20, 2008.
• Umar Syed and Robert E. Schapire.
Imitation learning with a value-based prior.
In Uncertainty in Artificial Intelligence: Proceedings of the Twenty-Third Conference, 2007.
• Miroslav Dudík, David M. Blei and Robert E. Schapire.
Hierarchical maximum entropy density estimation.
In Proceedings of the 24th International Conference on Machine Learning, 2007.
• Cynthia Rudin, Robert E. Schapire and Ingrid Daubechies.
Precise statements of convergence for AdaBoost and arc-gv.
AMS-IMS-SIAM Joint Summer Research Conference on Machine and Statistical Learning, Prediction and Discovery, pages 131-145, 2007.
• Luis E. Ortiz, Robert E. Schapire and Sham M. Kakade.
Maximum entropy correlated equilibria.
In Eleventh International Conference on Artificial Intelligence and Statistics, 2007.
• Miroslav Dudík, Steven J. Phillips and Robert E. Schapire.
Maximum entropy density estimation with generalized regularization and an application to species distribution modeling.
Journal of Machine Learning Research, 8(Jun):1217-1260, 2007.
• Lev Reyzin and Robert E. Schapire.
How boosting the margin can also boost classifier complexity.
In Proceedings of the 23rd International Conference on Machine Learning, 2006.
• Amit Agarwal, Elad Hazan, Satyen Kale and Robert E. Schapire.
Algorithms for portfolio management based on the Newton method.
In Proceedings of the 23rd International Conference on Machine Learning, 2006.
• Miroslav Dudík and Robert E. Schapire.
Maximum entropy distribution estimation with generalized regularization.
In 19th Annual Conference on Learning Theory, 2006.
• Jane Elith, Catherine H. Graham, Robert P. Anderson, Miroslav Dudík, Simon Ferrier, Antoine Guisan, Robert J. Hijmans, Falk Huettmann, John R. Leathwick, Anthony Lehmann, Jin Li, Lucia G.
Lohmann, Bette A. Loiselle, Glenn Manion, Craig Moritz, Miguel Nakamura, Yoshinori Nakazawa, Jacob McC. M. Overton, A. Townsend Peterson, Steven J. Phillips, Karen Richardson, Ricardo
Scachetti-Pereira, Robert E. Schapire, Jorge Soberón, Stephen Williams, Mary S. Wisz and Niklaus E. Zimmermann.
Novel methods improve prediction of species' distributions from occurrence data.
Ecography, 29:129-151, 2006.
• Zafer Barutcuoglu, Robert E. Schapire and Olga G. Troyanskaya.
Hierarchical multi-label prediction of gene function.
Bioinformatics, 22:830-836, 2006.
• Jordan Boyd-Graber, Christiane Fellbaum, Daniel Osherson and Robert Schapire.
Adding dense, weighted connections to WordNet.
In Proceedings of the Third International WordNet Conference, 2006.
• Miroslav Dudík, Robert E. Schapire and Steven J. Phillips.
Correcting sample selection bias in maximum entropy density estimation.
In Advances in Neural Information Processing Systems 18, 2006.
• Aurélie C. Lozano, Sanjeev R. Kulkarni and Robert E. Schapire.
Convergence and consistency of regularized boosting algorithms with stationary beta-mixing observations.
In Advances in Neural Information Processing Systems 18, 2006.
• Cynthia Rudin, Corinna Cortes, Mehryar Mohri and Robert E. Schapire.
Margin-based ranking meets boosting in the middle.
In 18th Annual Conference on Computational Learning Theory, 2005.
• Steven J. Phillips, Robert P. Anderson and Robert E. Schapire.
Maximum entropy modeling of species geographic distributions.
Ecological Modelling, 190:231-259, 2006.
Gokhan Tur, Dilek Hakkani-Tür and Robert E. Schapire.
Combining active and semi-supervised learning for spoken language understanding.
Speech Communication, 45(2):171-186, 2005.
• Robert E. Schapire, Marie Rochery, Mazin Rahim and Narendra Gupta.
Boosting with prior knowledge for call classification.
IEEE Transactions on Speech and Audio Processing, 13(2), March, 2005.
• Cynthia Rudin, Ingrid Daubechies and Robert E. Schapire.
The dynamics of AdaBoost: Cyclic behavior and convergence of margins.
Journal of Machine Learning Research, 5: 1557-1595, 2004.
• Cynthia Rudin, Robert E. Schapire and Ingrid Daubechies.
Boosting based on a smooth margin.
In 17th Annual Conference on Computational Learning Theory, 2004.
• Steven J. Phillips, Miroslav Dudík and Robert E. Schapire.
A maximum entropy approach to species distribution modeling.
In Proceedings of the Twenty-First International Conference on Machine Learning, pages 655-662, 2004.
• Miroslav Dudík, Steven J. Phillips and Robert E. Schapire.
Performance guarantees for regularized maximum entropy density estimation.
In 17th Annual Conference on Learning Theory, 2004.
• Cynthia Rudin, Ingrid Daubechies and Robert E. Schapire.
On the dynamics of boosting.
In Advances in Neural Information Processing Systems 16, 2004.
• Yoav Freund, Yishay Mansour and Robert E. Schapire.
Generalization bounds for averaged classifiers.
The Annals of Statistics, 32(4):1698-1722, 2004.
• Peter Stone, Robert E. Schapire, Michael L. Littman, János A. Csirik and David McAllester.
Decision-theoretic bidding based on learned density models in simultaneous, interacting auctions.
Journal of Artificial Intelligence Research, 19:209-242, 2003.
• Gokhan Tur, Robert E. Schapire and Dilek Hakkani-Tür.
Active learning for spoken language understanding.
In IEEE International Conference on Acoustics, Speech and Signal Processing, 2003.
• Yoav Freund and Robert E. Schapire.
A discussion of "Process consistency for AdaBoost" by Wenxin Jiang, "On the Bayes-risk consistency of regularized boosting methods" by Gábor Lugosi and Nicolas Vayatis, and "Statistical behavior and consistency of classification methods based on convex risk minimization" by Tong Zhang.
The Annals of Statistics, 32(1), 2004.
• Robert E. Schapire.
Advances in boosting.
In Uncertainty in Artificial Intelligence: Proceedings of the Eighteenth Conference, 2002.
• Giuseppe Di Fabbrizio, Dawn Dutton, Narendra Gupta, Barbara Hollister, Mazin Rahim, Giuseppe Riccardi, Robert Schapire and Juergen Schroeter.
AT&T help desk.
In 7th International Conference on Spoken Language Processing, 2002.
• Robert E. Schapire, Peter Stone, David McAllester, Michael L. Littman and János A. Csirik.
Modeling auction price uncertainty using boosting-based conditional density estimation.
In Machine Learning: Proceedings of the Nineteenth International Conference, 2002.
• Robert E. Schapire, Marie Rochery, Mazin Rahim and Narendra Gupta.
Incorporating prior knowledge into boosting.
In Machine Learning: Proceedings of the Nineteenth International Conference, 2002.
• Peter Stone, Robert E. Schapire, János A. Csirik, Michael L. Littman and David McAllester.
ATTac-2001: A learning, autonomous bidding agent.
In Agent Mediated Electronic Commerce IV: Designing Mechanisms and Systems. Springer Verlag, 2002.
• Robert E. Schapire.
The boosting approach to machine learning: An overview.
In D. D. Denison, M. H. Hansen, C. Holmes, B. Mallick, B. Yu, editors, Nonlinear Estimation and Classification. Springer, 2003.
• M. Rochery, R. Schapire, M. Rahim, N. Gupta, G. Riccardi, S. Bangalore, H. Alshawi and S. Douglas.
Combining prior knowledge and boosting for call classification in spoken language dialogue.
In International Conference on Acoustics, Speech and Signal Processing, 2002.
• Marie Rochery, Robert Schapire, Mazin Rahim and Narendra Gupta.
BoosTexter for text categorization in spoken language dialogue.
Accepted to Automatic Speech Recognition and Understanding Workshop, 2001 (but withdrawn due to travel restrictions following September 11).
• Michael Collins, Sanjoy Dasgupta and Robert E. Schapire.
A generalization of principal component analysis to the exponential family.
In Advances in Neural Information Processing Systems 14, 2002.
• Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire.
The non-stochastic multi-armed bandit problem.
SIAM Journal on Computing, 32(1):48-77, 2002.
• David McAllester and Robert E. Schapire.
Learning theory and language modeling.
In Gerhard Lakemeyer and Bernhard Nebel, editors, Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann, 2002.
• Raj D. Iyer, David D. Lewis, Robert E. Schapire, Yoram Singer and Amit Singhal.
Boosting for document routing.
In Proceedings of the Ninth International Conference on Information and Knowledge Management, 2000.
• Michael Collins, Robert E. Schapire and Yoram Singer.
Logistic regression, AdaBoost and Bregman distances.
Machine Learning, 48(1/2/3), 2002.
• David McAllester and Robert E. Schapire.
On the convergence rate of Good-Turing estimators.
Preliminary version appeared in Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, 2000.
Postscript or gzipped postscript of journal submission (5/17/01).
• Erin L. Allwein, Robert E. Schapire and Yoram Singer.
Reducing multiclass to binary: A unifying approach for margin classifiers.
Journal of Machine Learning Research, 1:113-141, 2000.
• Yoav Freund, Yishay Mansour and Robert E. Schapire.
Why averaging classifiers can protect against overfitting.
Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics, 2001.
• Yoav Freund and Robert E. Schapire.
Discussion of the paper "Additive logistic regression: a statistical view of boosting" by Jerome Friedman, Trevor Hastie and Robert Tibshirani.
The Annals of Statistics, 28(2):391-393, April, 2000.
• Robert E. Schapire.
Theoretical views of boosting and applications.
In Tenth International Conference on Algorithmic Learning Theory, 1999.
• Yoav Freund and Robert E. Schapire.
A short introduction to boosting.
Journal of Japanese Society for Artificial Intelligence, 14(5):771-780, September, 1999. (Appearing in Japanese, translation by Naoki Abe.)
• Robert E. Schapire.
A brief introduction to boosting.
In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, 1999.
• Robert E. Schapire.
Drifting games.
Machine Learning, 43(3):265-291, 2001.
• Steven Abney and Robert E. Schapire and Yoram Singer.
Boosting applied to tagging and PP attachment.
In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 1999.
• Robert E. Schapire.
Theoretical views of boosting.
In Computational Learning Theory: Fourth European Conference, EuroCOLT'99, pages 1-10, 1999.
• Yoav Freund, Raj Iyer, Robert E. Schapire and Yoram Singer.
An efficient boosting algorithm for combining preferences.
Journal of Machine Learning Research, 4:933-969, 2003.
• Robert E. Schapire and Yoram Singer.
BoosTexter: A boosting-based system for text categorization.
Machine Learning, 39(2/3):135-168, 2000.
• Yoav Freund and Robert E. Schapire.
Large margin classification using the perceptron algorithm.
Machine Learning, 37(3):277-296, 1999.
• Robert E. Schapire and Yoram Singer.
Improved boosting algorithms using confidence-rated predictions.
Machine Learning, 37(3):297-336, 1999.
• Robert E. Schapire, Yoram Singer and Amit Singhal.
Boosting and Rocchio applied to text filtering.
In SIGIR '98: Proceedings of the 21st Annual International Conference on Research and Development in Information Retrieval, pages 215-223, 1998.
Postscript or compressed postscript.
• Yoav Freund and Robert E. Schapire.
Adaptive game playing using multiplicative weights.
Games and Economic Behavior, 29:79-103, 1999.
• William W. Cohen, Robert E. Schapire and Yoram Singer.
Learning to order things.
Journal of Artificial Intelligence Research, 10:243-270, 1999.
Postscript or compressed postscript.
• Robert E. Schapire, Yoav Freund, Peter Bartlett and Wee Sun Lee.
Boosting the margin: A new explanation for the effectiveness of voting methods.
The Annals of Statistics, 26(5):1651-1686, 1998.
Postscript or compressed postscript.
• Yoav Freund and Robert E. Schapire.
Discussion of the paper "Arcing Classifiers" by Leo Breiman.
The Annals of Statistics, 26(3):824-832, 1998.
Postscript or compressed postscript.
• Robert E. Schapire.
Using output codes to boost multiclass learning problems.
In Machine Learning: Proceedings of the Fourteenth International Conference, pages 313-321, 1997.
Postscript or compressed postscript.
• Yoav Freund, Robert E. Schapire, Yoram Singer and Manfred K. Warmuth.
Using and combining predictors that specialize.
In Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing, pages 334-343, 1997.
Postscript or compressed postscript.
• Yoav Freund and Robert E. Schapire.
Experiments with a new boosting algorithm.
In Machine Learning: Proceedings of the Thirteenth International Conference, pages 148-156, 1996.
Postscript or compressed postscript.
• Yoav Freund and Robert E. Schapire.
Game theory, on-line prediction and boosting.
In Proceedings of the Ninth Annual Conference on Computational Learning Theory, pages 325-332, 1996.
• David D. Lewis, Robert E. Schapire, James P. Callan, and Ron Papka.
Training algorithms for linear text classifiers.
In SIGIR '96: Proceedings of the 19th Annual International Conference on Research and Development in Information Retrieval, 1996.
Postscript or compressed postscript.
• David P. Helmbold, Robert E. Schapire, Yoram Singer, and Manfred K. Warmuth.
On-line portfolio selection using multiplicative updates.
Mathematical Finance, 8(4):325-347, 1998.
Postscript or compressed postscript.
• Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire.
Gambling in a rigged casino: The adversarial multi-armed bandit problem.
Extended abstract appeared in 36th Annual Symposium on Foundations of Computer Science, pages 322-331, 1995.
Postscript or compressed postscript of significantly revised journal submission (6/8/98).
• Yoav Freund, Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, and Robert E. Schapire.
Efficient algorithms for learning to play repeated games against computationally bounded adversaries.
In 36th Annual Symposium on Foundations of Computer Science, pages 332-341, 1995.
Postscript or compressed postscript.
• David P. Helmbold and Robert E. Schapire.
Predicting nearly as well as the best pruning of a decision tree.
Machine Learning, 27(1):51-68, 1997.
Postscript or compressed postscript.
• David P. Helmbold, Robert E. Schapire, Yoram Singer, and Manfred K. Warmuth.
A comparison of new and old algorithms for a mixture estimation problem.
Machine Learning, 27(1):97-119, 1997.
Postscript or compressed postscript.
• Yoav Freund and Robert E. Schapire.
A decision-theoretic generalization of on-line learning and an application to boosting.
Journal of Computer and System Sciences, 55(1):119-139, 1997.
Postscript or compressed postscript.
• Robert E. Schapire and Manfred K. Warmuth.
On the worst-case analysis of temporal-difference learning algorithms.
Machine Learning, 22(1/2/3):95-121, 1996.
Postscript or compressed postscript.
• Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert E. Schapire, and Linda Sellie.
On the learnability of discrete distributions.
In Proceedings of the Twenty-Sixth Annual ACM Symposium on the Theory of Computing, pages 273-282, 1994.
Postscript or compressed postscript.
• Robert E. Schapire and Linda M. Sellie.
Learning sparse multivariate polynomials over a field with queries and counterexamples.
Journal of Computer and System Sciences, 52(2):201-213, April, 1996.
Postscript or compressed postscript.
• Yoav Freund, Michael Kearns, Dana Ron, Ronitt Rubinfeld, Robert E. Schapire, and Linda Sellie.
Efficient learning of typical finite automata from random walks.
Information and Computation, 138(1):23-48, 1997.
• Nicolò Cesa-Bianchi, Yoav Freund, David P. Helmbold, David Haussler, Robert E. Schapire, and Manfred K. Warmuth.
How to use expert advice.
Journal of the Association for Computing Machinery, 44(3):427-485, 1997.
Postscript or compressed postscript.
• Harris Drucker, Robert Schapire, and Patrice Simard.
Boosting performance in neural networks.
International Journal of Pattern Recognition and Artificial Intelligence, 7(4):705-719, 1993.
• Harris Drucker, Robert Schapire, and Patrice Simard.
Improving performance in neural networks using a boosting algorithm.
In Advances in Neural Information Processing Systems 5, pages 42-49. Morgan Kaufmann, 1993.
• Michael J. Kearns, Robert E. Schapire, and Linda M. Sellie.
Toward efficient agnostic learning.
Machine Learning, 17:115-141, 1994.
• David Haussler, Michael Kearns, Manfred Opper, and Robert Schapire.
Estimating average-case learning curves using Bayesian, statistical physics and VC dimension methods.
In Advances in Neural Information Processing Systems 4, pages 855-862. Morgan Kaufmann, 1992.
• Robert E. Schapire.
Learning probabilistic read-once formulas on product distributions.
Machine Learning, 14(1):47-81, 1994.
• David Haussler, Michael Kearns, and Robert E. Schapire.
Bounds on the sample complexity of Bayesian learning using information theory and the VC dimension.
Machine Learning, 14:83-113, 1994.
• Robert E. Schapire.
The Design and Analysis of Efficient Learning Algorithms.
MIT Press, 1992.
• Sally A. Goldman, Michael J. Kearns, and Robert E. Schapire.
Exact identification of read-once formulas using fixed points of amplification functions.
SIAM Journal on Computing, 22(4):705-726, August 1993.
• Michael J. Kearns and Robert E. Schapire.
Efficient distribution-free learning of probabilistic concepts.
Journal of Computer and System Sciences, 48(3):464-497, 1994.
• Sally A. Goldman, Michael J. Kearns, and Robert E. Schapire.
On the sample complexity of weak learning.
Information and Computation, 117(2):276-287, March 1995.
• Robert E. Schapire.
The emerging theory of average-case complexity.
Technical Report Technical memo MIT/LCS/TM-431, MIT Laboratory for Computer Science, June 1990.
• Sally A. Goldman, Ronald L. Rivest, and Robert E. Schapire.
Learning binary relations and total orders.
SIAM Journal on Computing, 22(5):1006-1034, 1993.
• Robert E. Schapire.
The strength of weak learnability.
Machine Learning, 5(2):197-227, 1990.
• Robert E. Schapire.
Pattern languages are not learnable.
In Proceedings of the Third Annual Workshop on Computational Learning Theory, pages 122-129, August 1990.
• Ronald L. Rivest and Robert E. Schapire.
Diversity-based inference of finite automata.
Journal of the Association for Computing Machinery, 41(3):555-589, May 1994.
• Ronald L. Rivest and Robert E. Schapire.
Inference of finite automata using homing sequences.
Information and Computation, 103(2):299-347, April 1993.
• Ronald L. Rivest and Robert E. Schapire.
A new approach to unsupervised learning in deterministic environments.
In Yves Kodratoff and Ryszard Michalski, editors, Machine Learning: An Artificial Intelligence Approach, volume III, pages 670-684. Morgan Kaufmann, 1990.
Free software for viewing postscript files is available here. | {"url":"http://www.cs.princeton.edu/~schapire/publist.html","timestamp":"2014-04-20T23:29:38Z","content_type":null,"content_length":"38888","record_id":"<urn:uuid:1958caa7-01f2-4423-a313-9b2acaec641e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00101-ip-10-147-4-33.ec2.internal.warc.gz"} |
I have recently started programming and have to learn a bit about Haskell. I was told that we need to define the types of our functions. I have a simple function that works out the factorial of numbers,
but when I add a type signature above it, the function fails to load properly in WinGHCi. We are told to just write the function in Notepad, save it as .hs and open it in Haskell. This is the function:
function :: Int -> Int
factorial n = factorialWorker n 1 where
factorialWorker n res | n > 1 = factorialWorker (n - 1) (res * n)
| otherwise = res
It works fine without the first line. Have I just not written the type signature correctly, or should I just leave it out altogether? The error I get is
The type signature for `function' lacks an accompanying binding
I really appreciate anyone that can help me out | {"url":"http://www.dreamincode.net/forums/topic/270199-haskell-functions/page__pid__1572469__st__0","timestamp":"2014-04-17T05:08:18Z","content_type":null,"content_length":"127188","record_id":"<urn:uuid:bd504548-e359-405c-aaac-97a7c26f5cb8>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
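For what it's worth, here's a guess at the fix (an assumption based on the error text, not a confirmed answer from the thread): GHC complains because the signature names `function`, while the binding below it is called `factorial`; the two names must match. Renaming the signature makes it load:

```haskell
factorial :: Int -> Int
factorial n = factorialWorker n 1
  where
    factorialWorker n res | n > 1     = factorialWorker (n - 1) (res * n)
                          | otherwise = res
```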
What's the "Yoga of Motives"?
There are some things about geometry that show why a motivic viewpoint is deep and important. A good indication is that Grothendieck and others had to invent some important and new
algebraico-geometric conjectures just to formulate the definition of motives.
There are things that I know about motives on some level, e.g. I know what the Grothendieck ring of varieties is or, roughy, what are the ingredients of the definition of motives.
But, how would you explain the Grothendieck's yoga of motives? What is it referring to?
motives ag.algebraic-geometry
There are two excellent answers, hard to select one... – Ilya Nikokoshev Oct 26 '09 at 22:40
3 Answers
So this is a crazy question, but I will try to give at least a partial answer. This question about the Beilinson regulator is also relevant, and this is also an attempt to reply to the
comments of Ilya there. I apologize for simplifying and glossing over some details, see the references for the full story.
First of all, some references: A leisurely but still far from content-free exposition by Kahn on the yoga of motives is available here (in French). For Grothendieck's idea of pure motives,
see Scholl: Classical motives, available on his webpage in zipped dvi format. For mixed motives, see this survey article of Levine. There is also lots of stuff in the Motives volumes,
edited by Jannsen, Kleiman and Serre, here is the Google Books page. Finally, I would strongly recommend the book by André: Introduction aux motifs - this has lots of background and
"yoga", as well as precise statements about what is known and what one conjectures.
Pure motives
The standard way of explaining what motives are is to say that they form a "universal cohomology theory". To make this a bit more precise, let's start with pure motives. We fix a base
field, and consider the category of smooth projective varieties, and various cohomology functors on this category. The precise notion of cohomology functor in this context is given by the
axioms for a Weil cohomology theory, see this blog post of mine for more details.
There are (at least) three key points to mention here: one is that a Weil cohomologies are "geometric" theories, as opposed to "absolute". For example, when considering etale cohomology,
we are considering the functor given by base changing the variety to the absolute closure of the ground field, and then taking sheaf cohomology with respect to the constant sheaf Z/l for
some prime l, in the etale topology. The "absolute" theory here would be the same, but without base changing in the beginning. In the classical literature, and in number theory, the
geometric version is the most important, partly because it carries an action of the Galois group of the base field, and hence gives rise to Galois representations. On the other hand, the
absolute version is important for example in the work of Rost and Voevodsky on the Bloch-Kato conjecture, and in comparison theorems with motivic cohomology. Similarly, it seems like
cohomology theories in general come in geometric/absolute pairs.
The second key point to mention is that the Weil cohomology groups come with "extra structure", such as Galois action or Hodge structure. For example, l-adic cohomology takes values in the
category of l-adic vector spaces with Galois action, and Betti cohomology takes values in a suitable category of Hodge structures. A nice reference for some of this is Deligne: Hodge I, in
the ICM 1970 volume.
The third key point is that Weil cohomology theories are always "ordinary" in some sense, i.e. in some framework of oriented cohomology theories they would correspond to the additive
formal group law (see Lurie: Survey on elliptic cohomology). If we allowed more general (oriented) cohomology theories, the universal cohomology would not be pure motives, but algebraic cobordism.

Now all these cohomology theories are functors on the category of smooth projective varieties, and the idea is that they should all factor through the category of pure motives, and that the category of pure motives should be universal with this property. We know how to construct the category of pure motives, but there is a choice involved, namely choosing an equivalence relation on algebraic cycles; see the article by Scholl above for more details. For many purposes, the most natural choice is rational equivalence, and the resulting notion of pure motives
is usually called Chow motives. For a precise statement about the universal property of Chow motives, see André, page 36: roughly (omitting some details), any sensible monoidal
contravariant functor on the category of smooth projective varieties, with values in a rigid tensor category, factors uniquely over the category of Chow motives.
Now to the point of realizations raised by Ilya in the question about regulators. Given a category of pure motives with a universal property as above, there must be functors from the
category of motives to the category of (pure) Hodge structures, to the category of Q_l vector spaces with Galois action, etc, simply because of the universal property. These functors are
called realization functors.
Mixed motives
It seems like all the cohomology functors one typically considers can be defined not only for smooth projective varieties, but also for more general varieties. The right notion of
cohomology here seems to be axiomatized by some version of the Bloch-Ogus axioms. One could again hope for a category which has a similar universal property as above, but now with respect
to all varieties. This category would be the category of mixed motives, and in the standard conjectural framework, one hopes that it should be an abelian category. It is not clear whether
this category exists or not, but see Levine's survey above for a discussion of some attempts to construct it, by Nori and others. If we had such a category, a suitable universal property
would imply that there are realization functors again, now to "mixed" categories, for example mixed Hodge structures. The realization functors would induce maps on Ext groups, and a
suitable such map would be the Beilinson regulator, from some Ext groups in the category of mixed motives (i.e. motivic cohomology groups) to some Ext groups which can be identified
with Deligne-Beilinson cohomology.
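Schematically (a rough sketch only, with details suppressed, since the abelian category MM of mixed motives is conjectural): motivic cohomology should be an Ext group in MM, and the Beilinson regulator the map induced by the realization functor,

```latex
% schematic only: MM is the conjectural abelian category of mixed motives
H^i_{\mathcal{M}}(X,\mathbb{Q}(n))
  \;\simeq\; \operatorname{Ext}^i_{\mathrm{MM}}\bigl(\mathbb{Q}(0),\, h(X)(n)\bigr)
  \;\xrightarrow{\ \text{realization}\ }\;
  H^i_{\mathcal{D}}(X,\mathbb{R}(n)).
```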
We do not have the abelian category of mixed motives, but we have an excellent candidate for its derived category: this is Voevodsky's triangulated categories of motives. They are also
presented very well in the survey of Levine. A really nice recent development is the work of Déglise and Cisinski, in which they construct these triangulated categories over very general
base schemes (I think Voevodsky's original work was mainly focused on fields, at least he only proved nice properties over fields).
To end by reconnecting to the Beilinson conjectures, there is extremely recent work of Jakob Scholbach (submitted PhD thesis, maybe on the arXiv soon) which seems to indicate that the
Beilinson conjectures should really be formulated in the setting of the Déglise-Cisinski category of motives over Z, rather than the classical setting of motives over Q.
The yoga of motives involves far more than what I have mentioned so far, for example things related to periods and special values of L-functions, the standard conjectures, and the idea of
motivic (and maybe even "cosmic") Galois groups, but all this could maybe be the topic for another question, some other day :-)
That's amazing! I'll read the whole thing tomorrow (it's late night here, time to go off) but you're already obviously highly knowledgeable and motivated to write. – Ilya Nikokoshev Oct
23 '09 at 22:54
It's such a lot of things to learn that I'll probably be posting lots of questions on the topic of motives this month. – Ilya Nikokoshev Oct 26 '09 at 22:36
In one sentence: the theory of cohomology theories on algebraic varieties and the idea that there is a universal such thing.
Of course, this is not a very satisfying answer, unless we specify what a cohomology theory is. Examples are l-adic cohomologies, singular cohomology, de Rham cohomology, Deligne cohomology,
rigid (or Monsky-Washnitzer) cohomology. The idea is that any computation which seems to hold in all these nice cohomology theories should be motivic (which means that it should be obtained
from the analogous computation in the (conjectural) category of motives by the suitable realization functor): example of such computations are those which involve only intersection theory
(using cup products of cycle classes). Conjecturally, the theory of motives is essentially determined by intersection theory of schemes, while higher Chow groups (i.e. motivic cohomology)
should be to motives what Deligne cohomology is to mixed Hodge structures.
1) Pure motives --- Historically, such a cohomology theory was thought as one which behaves like singular cohomology (with rational coefficients) or de Rham cohomology on smooth and projective
varieties over complex numbers, so that we would have cycle classes, the Künneth formula, Gysin maps as well as the projective bundle formula (hence Poincaré duality), and, as a consequence, a
Lefschetz fixed point formula (i.e. everything needed to do intersection theory there). If the cohomology theory takes its values in the category of (graded) vector spaces over some field of
characteristic zero, this led to the notion of Weil cohomology (they were named after Weil because of his insights that the existence of such a cohomology for smooth and projective
varieties over a field of characteristic p>0 would imply the Weil conjectures, i.e. the good behaviour of the zeta functions associated to the Frobenius action). However, there is not any
universal Weil cohomology: over finite fields, the existence of l-adic cohomologies would imply that this universal theory would be with coefficients in the category of Q-vector spaces, and it
is known that there is no Q-linear Weil cohomology for varieties over a field k which contains a non-trivial extension of the field with p elements (this follows from a computation of
Serre which shows that supersingular elliptic curves over such a k cannot be realized with Q-linear coefficients). (One of) the observations of Grothendieck was that, in practice, Weil
cohomology theories takes their value in more complex categories, namely tannakian categories (e.g. Galois representations, mixed Hodge structures), which made him conjecture the existence of
a universal cohomology theory with values in a tannakian category. His candidate for this universal tannakian category is the category of pure motives up to numerical equivalence (which is
completely determined by intersection theory in classical Chow groups of smooth and projective varieties over a field).

2) Mixed motives --- But this is only a small part of the story (or, if you wish, of the yoga). Cohomology theories, like l-adic cohomology or de Rham cohomology, are not defined only for smooth and projective varieties, and they don't come alone: they come with a whole bunch of derived categories of coefficients (in our examples, l-adic sheaves and D-modules), which have very strong functoriality properties, reflecting dualities and gluing data (expressing decompositions into a closed subscheme and its open complement) as well as nice descent properties (mainly
étale and proper descent). The idea is that any computation or construction which involves only these functorialities (known as the "6 Grothendieck operations") and which holds in all the
known examples should be motivic as well (in particular, intersection theory should appear naturally from there; non trivial structures on cohomology groups, like weight filtrations for
instance, should also be explained by these functorialities). I mean that there should exists a theory of motivic sheaves which should be the universal system of coefficients M over schemes
(not necessarily over a field). Given another system of coefficients A (like l-adic sheaves) we should get tensor exact functors M(X) --> A(X) (for any scheme X) which are compatible with
Grothendieck's 6 operations (i.e. pullbacks, direct images with or without compact support, etc). These realization functors are also conjectured to be faithful. All the regulator maps are
expected to come from such realization functors. Finally, the category of pure motives mentioned above should be a full tensor subcategory of the abelian category of mixed motives over the
ground field.
The existence of such motivic sheaves has been conjectured in some way or another by Grothendieck, Deligne, and Beilinson. However, as they noticed themselves, we can weaken these requirements
by replacing the categories of coefficients by their derived categories D(A), and only require that we have triangulated categories of mixed motives over schemes (without asking that they are derived categories of an abelian category). The good news is that, if we allow these categories of coefficients to be abstract triangulated categories, then such a universal functorial theory of mixed motives over arbitrary schemes is not completely out of reach: the work of Voevodsky, Suslin, Levine, Morel, Ayoub et al. on homotopy theory of schemes makes it already quite close to us: this theory allows one to produce triangulated categories DM(X) such that triangulated tensor functors DM(X) --> D(A)(X) actually exist (and are compatible with Grothendieck's 6
operations), while the Hom's in DM compute exactly higher Chow groups (but we don't know if they are conservative, as expected). Hence a significant part of the Yoga is becoming actual
mathematics nowadays, via the homotopy theory of schemes.
I think I'll be posting now more questions to be able to understand such a deep topic. – Ilya Nikokoshev Oct 25 '09 at 21:43
I too would recommend looking into André's book very much, and several articles by Deligne, esp. "Hodge I", "Valeurs de fonctions L et périodes d'intégrales", "A quoi servent les motifs?". I found Nekovar's slides and Barbieri-Viale's "Pamphlet" useful too.

Edit: Goncalo Tabuada held a talk on "the construction of the categories of noncommutative motives (pure and mixed) in the spirit of Drinfeld Kontsevich's noncommutative algebraic geometry program. In the process, I will present the first conceptual characterization of Quillen's higher K-theory since Quillen's foundational work in the 70's" (link). Edit: New preprints (1, 2)
Edit: Nori's unpublished notes on motives.
Most of Tabuada's work is published, and all of it is available on the arXiv. The first paper, about a universal triangulated category computing K-theory of dg categories, is: Higher K-theory via universal invariants, Duke Math. J. 145 (2008), 121-206. The analogous corepresentability theorem for non-connective K-theory is proved in: Non-connective K-theory via universal invariants, Compositio Math. 147 (2011), 1281-1320. The paper in which the category of motives à la Kontsevich is described explicitly is Symmetric monoidal structure on Non-commutative motives, arXiv:1001.0228 – Denis-Charles Cisinski Jul 31 '11 at 19:44
Differentiation of an integral with implicitly defined variable
November 14th 2012, 01:25 AM #1
Nov 2012
London, UK
Differentiation of an integral with implicitly defined variable
Hey folks,
I am currently writing a paper in economics and ran into the following problem, which I haven't been able to solve so far.
I would like to differentiate the integral from the following equation with regards to q_i
while ^z_i is implicitly defined by the following equation
The result is supposed to be the following equation, though I do not understand the way to arrive at it.
I would be glad if someone could help me understand the way to arrive at the derivative. Please take into consideration that I have no background in mathematics.
Re: Differentiation of an integral with implicitly defined variable
There are a lot of subscripts and functions that make this look harder than it really is.
You have: $V^i=\int_{\hat{z}_i}^{z} (R^i(q_i,q_j,z_i)-D_i) f(z_i) dz_i$
Now you want to take the partial derivative with respect to $q_i$. In other words, you want to see how $V_i$ changes when $q_i$ changes and all the other variables are held constant.
$\frac{\partial{V^i}}{\partial{q_i}} = \frac{\partial}{\partial{q_i}} \int_{\hat{z}_i}^{z} (R^i(q_i,q_j,z_i)-D_i) f(z_i) dz_i$
Assuming you can exchange the derivative with the integral,
$\frac{\partial{V^i}}{\partial{q_i}} = \int_{\hat{z}_i}^{z} \frac{\partial}{\partial{q_i}} \left[ (R^i(q_i,q_j,z_i)-D_i) f(z_i) \right] dz_i$
Since $f(z_i)$ doesn't vary with respect to $q_i$, we can treat it like a constant. The derivative of a constant times a function is the constant times the derivative:
$\frac{\partial{V^i}}{\partial{q_i}} = \int_{\hat{z}_i}^{z} \frac{\partial}{\partial{q_i}} \left[ (R^i(q_i,q_j,z_i)-D_i) \right] f(z_i) dz_i$
And lastly, $D_i$ is a constant, so the derivative of a function minus a constant is just the derivative of that function:
$\frac{\partial{V^i}}{\partial{q_i}} = \int_{\hat{z}_i}^{z} \frac{\partial{R^i(q_i,q_j,z_i)}}{\partial{q_i}} f(z_i) dz_i$
And I assume that in your notation, $V_i^i=\frac{\partial{V^i}}{\partial{q_i}}$ and $R_i^i(q_i,q_j,z_i)=\frac{\partial{R^i(q_i,q_j,z_i) }}{\partial{q_i}}$, so the result is:
$V_i^i = \int_{\hat{z}_i}^{z} R_i^i(q_i,q_j,z_i) f(z_i) dz_i$
- Hollywood
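To see hollywood's point numerically, here is a toy check (my own example, not the economics model above) that differentiating under the integral sign agrees with a direct finite-difference derivative, using R(q, z) = q²z and f(z) = 1 on [0, 1], so that both sides should equal q:

```python
# Toy check of d/dq ∫ R(q,z) f(z) dz = ∫ (∂R/∂q) f(z) dz
# with R(q, z) = q^2 * z and f(z) = 1 on [0, 1].

def V(q, n=20000):
    # midpoint-rule quadrature of ∫_0^1 R(q, z) f(z) dz = q^2 / 2
    h = 1.0 / n
    return sum(q * q * ((i + 0.5) * h) * h for i in range(n))

def dV_dq_under_integral(q, n=20000):
    # move the derivative inside: ∫_0^1 (∂R/∂q) f(z) dz = ∫_0^1 2 q z dz = q
    h = 1.0 / n
    return sum(2 * q * ((i + 0.5) * h) * h for i in range(n))

q, eps = 1.7, 1e-5
finite_diff = (V(q + eps) - V(q - eps)) / (2 * eps)  # direct numerical dV/dq
print(abs(finite_diff - dV_dq_under_integral(q)) < 1e-6)  # -> True
```

Both derivatives come out to q (here 1.7) up to floating-point noise, which is exactly the exchange of derivative and integral used in the answer.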
Re: Differentiation of an integral with implicitly defined variable
Thanks a lot, Hollywood! You have been a great help to me.
SC2 Map Analyzer
SC2 Map Analyzer extracts a ton of data from an SC2 melee map; does that excite you as much as it does me?! The manual explains how to generate output and fiddle with some of the analysis parameters.
What the analysis reads in
If you just want to read about what the analyzer computes, skip to the "What the analysis computes" section. If you want to know where the tool gets its input data and what the caveats are, it's
right here. Some future improvements will certainly concentrate on the input data.
The awesome community here at SC2Mapster discovered a lot of information about the internal file formats for SC2 maps. SC2 Map Analyzer reads some of these files to do its computations:
From MapInfo it gets the map name, map size, and boundaries of the playable area. This isn't too bad, although there is a section of the MapInfo where the first two character strings appear (in
SC2Mapster's documentation they're named 'theme' and 'planet' probably for the tileset) and I've found that the rules for parsing these strings are hard to deduce, such as an unknown number of
zero-bytes padding the end of the string. Future format revisions by Blizzard may very well break the map analyzer (it will tell you where it failed), so don't hesitate to start a ticket if that
The tool gets the major cliff-level info from t3HeightMap. This is straightforward, no problems, easy to ignore all the cosmetic ripples and bumps that make a map look nice. You end up with terrain
data in the analyzer that is rendered underneath all of the other images:
Now we've got an issue when it comes to pathing data. What you'd like is a nice, cell-fine matrix saying "ground units can/cannot walk here" and "cliff-jumpers can/cannot jump over here," but that
doesn't appear to be available-even though in the editor you can actually view those two types of pathing directly, so it's possible, somehow. But what about t3SyncPathingInfo? Yes, that has
cell-fine ground pathing info, and you could get a good approximation of cliff-jumping from guessing how the cliff-jumping algorithm works, but: t3SyncPathingInfo is just not included in every map.
It makes me cry. If you open a bunch of Blizzard's maps you'll find it sometimes, sometimes not, and no melee map I've ever made appears to have it. My guess is they changed the overall internal map
format so this file is no longer required, maybe it's calculated when you load in the game engine. In any case, I want the map analyzer to produce consistent results and therefore I approximate
pathing by using t3CellFlags.
t3CellFlags is included in every map I've opened, and I posted some info about it on SC2Mapster's wiki. Basically, each cell has a bit vector of flags, and right now it appears only two flags are
being used: one is for auto-generated foliage and the other is for a cell where the major cliff-level is changing. So I take this second bit and use it to make unpathable cells for ground units. How
bad is the approximation? Well the cell flags are four-cell clumps so the pathing data is a little conservative, meaning that distances measured by this tool are probably a little longer than if
units can hug cliffs in the game engine. Here's a close up of the pathing near the bottom starting position on Steppes of War. Notice how tight the main ramp is in terms of pathing:
I think the approximation is fine in the end, and I have some data in the shortest paths section to back that up.
Cliff-walk pathing I just allow between any two cells that aren't the lowest elevation, so there may be some cliffs a cliff-walker can't jump in the engine that my tool will let them jump. So far I
haven't seen an example of this that breaks the results, but I think it is possible with a very long stretch of squiggly cliff.
The Objects file is XML that places start locations, resources, destructible rocks and other doodads/units/points on the map. This file is really easy to deal with, but only when you know what to
grab out of it. Start locations and resources are easy because there is a very small number of variations to grab (like the space platform geysers are like normal vespene geysers). But what about
destructible rocks? There are a lot of shapes and variations, and the XML doesn't give you the object's footprint, so anything you want to integrate with the map data has to be manually encoded. I
laboriously placed every destructible on a map and figured out the footprints, then added the known footprints to the map analyzer. As of now, I'm ignoring the vast majority of normal doodads, some
of which block paths! Unless someone comes up with a way to automatically discover the object footprints, I'll just have to add them as they are needed. For the Blizzard ladder maps so far there has
been no problem because doodads generally don't form long, important barriers on the maps.
What the analysis calculates
The goal of SC2 Map Analyzer is to compute information about a melee map that is useful for a competitive player or a map designer. For instance, you know that the distance between start locations on
Desert Oasis is long, but how long? How much longer is it than the farthest spawns on Kulas Ravine? Have you ever tried to measure the chokes around a map? What if we just calculate the "openness" of
the entire map so it is very easy to see how the chokes and open spaces compare? Let's find out the answers to these questions and more!
Simple Facts
We can grab all kinds of simple facts about a map that are 1) useful by themselves and 2) useful for computing more interesting information. SC2 Map Analyzer finds facts like what is the playable
area of the map? What percentage of the cells are pathable? How many bases are there, and how many resources are at each one? This stuff is pretty easy for a human to do by hand; it just takes a little effort, and it's nice to have it all tabulated. The analysis also does some extra legwork to classify bases as mains, naturals, thirds, islands or semi-islands.
Shortest Paths
The original motivation for creating SC2 Map Analyzer was to calculate shortest paths between locations. This is useful for comparing the relative distance between start locations and/or naturals in
the various spawn configurations of a map, or to compare the distances between maps. Also, the shortest paths by ground, cliff-walkers and by air are used later to calculate the influence of a start
location on a base versus another start location.
In order to simplify the shortest path algorithm I only consider a few connections between map cells, rather than allow a unit to stand anywhere, or walk at any angle. A cell connects to the cells it
shares a side with, the diagonal neighbors, and cells that are a knight's jump away.
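In code, that connectivity is just a fixed set of step offsets; a minimal Dijkstra over them might look like this (a sketch of the idea, not the tool's actual implementation; for simplicity it does not check the cells a knight's jump passes over):

```python
import heapq

# The connections described above: 4 side neighbors, 4 diagonals,
# and 8 knight's jumps, each weighted by its Euclidean length.
STEPS = ([(1, 0), (-1, 0), (0, 1), (0, -1)]
         + [(1, 1), (1, -1), (-1, 1), (-1, -1)]
         + [(a, b) for a in (-2, -1, 1, 2) for b in (-2, -1, 1, 2)
            if abs(a) != abs(b)])

def shortest_path(pathable, start, goal):
    """Dijkstra on a grid; pathable[y][x] is True where ground
    units may stand. Returns the path length in cells."""
    h, w = len(pathable), len(pathable[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            return d
        if d > dist[(x, y)]:
            continue
        for dx, dy in STEPS:
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and pathable[ny][nx]:
                nd = d + (dx * dx + dy * dy) ** 0.5
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    heapq.heappush(pq, (nd, (nx, ny)))
    return float("inf")
```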
Another caveat is that the pathing approximation, as explained in the t3CellFlags section above, affects the shortest path calculations. Unless we use the exact pathing map as the game engine, and
for that matter, the exact movement algorithm from the game engine, we will never be able to compute the true "game distance" between locations.
However, I think the tool does a fine approximation of computing game distances by comparing the analysis results with an experiment I did within the game itself. This spreadsheet shows the
comparison of the analysis to the experimental values. The in-game experiment measures game seconds for an SCV to move from one start location to another. SC2 Map Analyzer computes the distance in
cells between two start locations. The experiment values and the analysis values should be directly proportional (in other words, a distance divided by time should be the speed of the SCV) and the
comparison shows this. There are a few anomalous data points but everything is within 5% so I'm pretty confident the analysis distances are useful.
Openness
Openness is a term I'm defining as "the distance (in cells) from a cell to the nearest unpathable cell." It measures how "open" the cell is, and if you plot the openness values along a color scale
you get nice images that really give a feel for the space in a map.
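A sketch of the computation: seed a multi-source BFS with every unpathable cell and expand outward (I use 8-connected steps here, so distances are in the Chebyshev metric; the tool's actual metric may differ):

```python
from collections import deque

def openness(pathable):
    """Distance (in cells) from each cell to the nearest
    unpathable cell, via multi-source BFS. Unpathable cells
    themselves get 0; cells on a fully pathable map stay None
    (the real tool presumably also treats the map edge as
    unpathable)."""
    h, w = len(pathable), len(pathable[0])
    dist = [[None] * w for _ in range(h)]
    queue = deque()
    for y in range(h):
        for x in range(w):
            if not pathable[y][x]:
                dist[y][x] = 0
                queue.append((x, y))
    while queue:
        x, y = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] is None:
                    dist[ny][nx] = dist[y][x] + 1
                    queue.append((nx, ny))
    return dist
```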
We can do a lot of cool stuff with openness. Clearly you can judge the positional balance of slightly asymmetrical maps by checking the openness. You can also get the average openness across an
entire map, or compare how tight or open a map is compared to other maps. SC2 Map Analyzer samples the openness values in a radius around bases to get a rough idea of whether the bases are balanced.
More or less openness isn't necessarily better or worse, but when the values differ we can ask how different they really are.
Influence/Positional Balance
Now for the grand finale: can we use the extracted data to compute positional balance? Notice I did not say racial balance; if Brood War taught us anything, it's that the relative style/strength/advantage of races due to terrain is a hard thing to predict or measure. But positional balance we might be able to do something about.
Consider a perfectly symmetrical map with respect to pathing, start locations, and bases. This map has perfect positional balance because no matter which start locations players spawn in, they are
faced with exactly the same situation. Okay, but perfectly symmetrical maps are boring both to look at and to play on. So let's see if we can't measure positional balance to judge whether an
asymmetrical map is still fair for competitive play.
SC2 Map Analyzer attacks this problem in two phases. The first phase calculates influence, which is the percent influence that one start location exerts on a base versus another start location. The
start locations are considered in pairs because the influence calculation is meant to reflect which player has greater potential influence in a 1v1 game.
Consider a base, B. What is the influence of Start Location 1 versus Start Location 2? First measure the shortest distances from B to 1 by ground, cliff-walk and air. These distances are weighted:
ground 55%, cliff-walk 10% and air 35%. Then compute the same weighted distances from B to 2. The influence of 1 on B versus 2 is inversely proportional to the percentage of 1's weighted distance
over the total with 2's. If that's just mumbo jumbo, here's an example:
Consider a base that is exactly the same distance between two start locations by ground, cliff-walk and air. These start locations will exert exactly 50% influence on the base versus each other,
because players in those spawns have an equal potential, by distances, to control that area of the map.
Another example:
A base near start location 1 (like the natural) has a SHORT distance by ground, cliff-walk and air. The distance from start location 2 to the natural of 1 is LONG by all accounts. The influence of 1 on B versus 2 is inversely proportional, so SHORT becomes a HIGH influence (for a natural it's usually something like 75%).
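Written out, the calculation is just a weighted ratio. This sketch uses the weights quoted above; whether the tool normalizes or clamps the distances further is an assumption I have not verified:

```python
WEIGHTS = {"ground": 0.55, "cliff": 0.10, "air": 0.35}

def weighted_distance(d):
    # d maps "ground"/"cliff"/"air" to shortest-path distances
    return sum(WEIGHTS[k] * d[k] for k in WEIGHTS)

def influence(d1, d2):
    """Percent influence of start location 1 on a base versus
    start location 2: inversely proportional to 1's share of
    the total weighted distance, so a SHORT d1 gives a HIGH
    influence."""
    w1, w2 = weighted_distance(d1), weighted_distance(d2)
    return 100.0 * w2 / (w1 + w2)
```

An equidistant base comes out at 50%, and a base three times closer to one start location comes out at 75%, matching the examples above.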
Images show influence more intuitively, so:
Here you can see the start locations exert almost 100% influence over the main base. The naturals are roughly 75%. Notice the island in between the mains and the gold expo at the far end; these bases
are supposed to be along the axis of reflection so they are perfectly contested between the players. But the analysis tells us start location 2 o'clock is a hair closer to the island (no problem) and
the gold expo is a bit closer to the 12 o'clock base. Scrap Station has asymmetry, and in the grand scheme of things these fluctuations in influence are reasonably small.
So we calculated influence. Great. What does that tell us about positional balance? Not the whole story yet. What if you have 75% influence over your natural and I have 75% over mine, but my natural
only has one mineral patch?! Ah! This is where phase two comes in: using influence and attributes of bases together to compute positional balance.
Positional Balance by Resources
We can add up the total resources for bases, then calculate the difference in influence percentage over the bases for start location 1 versus 2, then compute what percentage of resources start
location 1 has more influence over than start location 2! Awesome! So if a base is 50%/50% contested it won't factor into the balance at all, and a base 75%/25% (like a natural) will contribute 75 - 25 = 50% of its resource value to the balance of the dominant start location.
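As a sketch of that phase-two calculation (the exact aggregation in the tool is my guess from the description):

```python
def resource_balance(bases):
    """bases: list of (resources, influence_pct_of_loc1).
    Each base contributes its influence margin times its
    resource value; the result is the percentage of the map's
    resources that start location 1 dominates over location 2.
    A symmetric map nets out to 0."""
    total = sum(r for r, _ in bases)
    # margin = inf1 - (100 - inf1) = 2*inf1 - 100
    net = sum(r * (2 * inf1 - 100) / 100.0 for r, inf1 in bases)
    return 100.0 * net / total
```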
Positional Balance by Openness
Let's consider a map that has perfect positional balance by resources, but my natural has a protective moat to make it harder to attack (without changing the shortest paths from the start locations,
of course!). Clearly the positional balance should reflect that as well. SC2 Map Analyzer does a separate positional balance calculation by taking the same influence percentages for each base and
applying them to the average openness of each base. This calculation (unlike the resource calc) doesn't say whether openness is good or bad; it is just one way to measure how different two positions are.
The map's overall positional balance can be thought of as the lowest positional balance computed by any separate method out of all the possible start location pairs.
Date created
Jun 23, 2010
Last updated
Jun 25, 2010
(A word of explanation for those not familiar with the concept of fractal geometry: recent work by Mandelbrot, his colleagues and students suggests that the mathematical regularities of much of the natural world are not integer-dimensional, but of dimensionalities which can be expressed as fractions of whole numbers. So what is being suggested here is that the synthesizing ego tends to reduce these fractal spaces to integer-dimensional spaces - commonly two- or three-dimensional spaces - in order to bring them into consciousness. This, we suggest, is explained by the geometry of phenomenological space.)
15 days till AP calc AB exam!!! how to prepare?!?!?!
we've had a sucky year in calc. There are only 5 ppl in my calc class, and we've pretty much stopped for the year. I know quite a bit, but i dont think its nearly enough for the test. I have the
barrons book. How should I prepare? btw i only have 2 weeks...
Take a practice test, for every question you miss or don't know very well do ALL the practice problems for.
A 70% is usually the cut-off score for a 5. Remember, the test makers do not expect you to be an expert in calc, just have a good understanding of basic ideas. Know that the derivative is the rate of
change. The integral is an accumulation function. And know how to use your graphing calculator. Time yourself on the test -- no more than two minute per mc problem and 15 minutes per free response.
Do not make stupid algebra mistakes and know basic trig (unit circle, basic identities).
thanx...this info will help. a 70% is the cutoff for a 5? that doesnt seem too hard...i'd be extremely happy if i got a 4!
My teacher told us anything above a 60 on our practice tests is VERY good. I think with the right amount of studying you'll be in good shape
Using Stata’s random-number generators, part 2, drawing without replacement
Last time I told you that Stata’s runiform() function generates rectangularly (uniformly) distributed random numbers over [0, 1), from 0 to nearly 1, and to be precise, over [0,
0.999999999767169356]. And I gave you two formulas,
1. To generate continuous random numbers between a and b, use
generate double u = (b-a)*runiform() + a
The random numbers will not actually be between a and b: they will be between a and nearly b, but the top will be so close to b, namely 0.999999999767169356*b, that it will not matter.
2. To generate integer random numbers between a and b, use
generate ui = floor((b-a+1)*runiform() + a)
I also mentioned that runiform() can solve a variety of problems, including
• shuffling data (putting observations in random order),
• drawing random samples without replacement (there’s a minor detail we’ll have to discuss because runiform() itself produces values drawn with replacement),
• drawing random samples with replacement (which is easier to do than most people realize),
• drawing stratified random samples (with or without replacement),
• manufacturing fictional data (something teachers, textbook authors, manual writers, and blog writers often need to do).
Today we will cover shuffling and drawing random samples without replacement — the first two topics on the list — and we will leave drawing random samples with replacement for next time. I’m going to
tell you
1. To place observations in random order — to shuffle the observations — type
. generate double u = runiform()
. sort u
2. To draw without replacement a random sample of n observations from a dataset of N observations, type
. set seed #
. generate double u = runiform()
. sort u
. keep in 1/n
I will tell you that there are good statistical reasons for setting the random-number seed even if you don’t care about reproducibility.
If you do care about reproducibility, I will mention (1) that you need to use sort to put the original data in a known, reproducible order, before you generate the random variate u, and I will
explain (2) a subtle issue that leads us to use different code for N≤1,000 and N>1,000. The code for N≤1,000 is
. set seed #
. sort variables_that_put_data_in_unique_order
. generate double u = runiform()
. sort u
. keep in 1/n
and the code for N>1,000 is
. set seed #
. sort variables_that_put_data_in_unique_order
. generate double u1 = runiform()
. generate double u2 = runiform()
. sort u1 u2
. keep in 1/n
You can use the N>1,000 code for the N≤1,000 case.
3. To draw without replacement a P-percent random sample, type
. set seed #
. keep if runiform() <= P/100
There’s no issue in this case when N is large.
As I mentioned, we’ll discuss drawing random samples with replacement next time. Today, the topic is random samples without replacement. Let’s start.
Shuffling data
I have a deck of 52 cards, in order, the first four of which are
. list in 1/4
| rank suit |
1. | Ace Club |
2. | 1 Club |
3. | 2 Club |
4. | 3 Club |
Well, actually I just have a Stata dataset with observations corresponding to playing cards. To shuffle the deck — to place the observations in random order — type
. generate double u = runiform()
. sort u
Having done that, here’s your hand,
. list in 1/5
| rank suit u |
1. | Queen Club .0445188 |
2. | 5 Diamond .0580662 |
3. | 7 Club .0610638 |
4. | King Heart .0907986 |
5. | 6 Spade .0981878 |
and here’s mine:
. list in 6/10
| rank suit u |
6. | 8 Diamond .1024369 |
7. | 5 Club .1086679 |
8. | 8 Spade .1091783 |
9. | 2 Spade .1180158 |
10. | Ace Club .1369841 |
All I did was generate random numbers — one per observation (card) — and then place the observations in ascending order of the random values. Doing that is equivalent to shuffling the deck. I used
runiform() random numbers, meaning rectangularly distributed random numbers over [0, 1), but since I’m only exploiting the random-numbers’ ordinal properties, I could have used random numbers from
any continuous distribution.
This simple, elegant, and obvious solution to shuffling data will play an important part of the solution to drawing observations without replacement. I have already more than hinted at the solution
when I showed you your hand and mine.
Drawing n observations without replacement
Drawing without replacement is exactly the same problem as dealing cards. The solution to the physical card problem is to shuffle the cards and then draw the top cards. The solution to randomly
selecting n from N observations is to put the N observations in random order and keep the first n of them.
. use cards, clear
. generate double u = runiform()
. sort u
. keep in 1/5
(47 observations deleted)
. list
| rank suit u |
1. | Ace Diamond .0064866 |
2. | 6 Heart .0087578 |
3. | King Spade .014819 |
4. | 3 Spade .0955155 |
5. | King Diamond .1007262 |
. drop u
You might later want to reproduce the analysis, meaning you do not want to draw another random sample, but you want to draw the same random sample. Perhaps you informally distributed some preliminary
results and, of course, then discovered a mistake. You want to redistribute updated results and show that your mistake didn’t change results by much, and to drive home the point, you want to use the
same samples as you used previously.
Part of the solution is to set the random-number seed. You might type
. set seed 49983882
. use cards, clear
. generate double u = runiform()
. sort u
. keep in 1/5
See help set seed in Stata. As a quick review, when you set the random-number seed, you set Stata’s random-number generator into a fixed, reproducible state, which is to say, the sequence of random
numbers that runiform() produces is a function of the seed. Set the seed today to the same value as yesterday, and runiform() will produce the same sequence of random numbers today as it did
yesterday. Thus, after setting the seed, if you repeat today exactly what you did yesterday, you will obtain the same results.
So imagine that you set the random number seed today to the value you set it to yesterday and you repeat the above commands. Even so, you might not get the same results! You will not get the same
results if the observations in cards.dta are in a different order yesterday and today. Setting the seed merely ensures that if yesterday the smallest value of u was in observation 23, it will again
be in observation 23 today (and it will be the same value). If yesterday, however, observation 23 was the 6 of Clubs, and today it’s the 7 of Hearts, then today you will select the 7 of Hearts in
place of the 6 of Clubs.
So make sure the data are in the same order. One way to do that is put the dataset in a known order before generating the random values on which you will sort. For instance,
. set seed 49983882
. use cards, clear
. sort rank suit
. generate double u = runiform()
. sort u
. keep in 1/5
An even better solution would add the line
. by rank suit: assert _N==1
just before the generate. That line would check whether sorting on variables rank and suit uniquely orders the observations.
With cards.dta, you can argue that the assert is unnecessary, but not because you know each rank-suit combination occurs once. You have only my assurances about that. I recommend you never trust
anyone’s assurances about data. In this case, however, you can argue that the assert is unnecessary because we sorted on all the variables in the dataset and thus uniqueness is not required. Pretend
there are two Ace of Clubs in the deck. Would it matter that the first card was Ace of Clubs followed by Ace of Clubs as opposed to being the other way around? Of course it would not; the two states
are indistinguishable.
So let’s assume there is another variable in the dataset, say whether there was a grease spot on the back of the card. Yesterday, after sorting, the ordering might have been,
| rank suit grease u |
1. | Ace Club yes .6012949 |
2. | Ace Club no .1859054 |
and today,
| rank suit grease u |
1. | Ace Club no .6012949 |
2. | Ace Club yes .1859054 |
If yesterday you selected the Ace of Clubs without grease, today you would select the Ace of Clubs with grease.
My recommendation is (1) sort on whatever variables put the data into a unique order, and then verify that, or (2) sort on all the variables in the dataset and then don't worry whether the order is unique.
Ensuring a random ordering
Included in our reproducible solution but omitted from our base solution was setting the random-number seed,
. set seed 49983882
Setting the seed is important even if you don’t care about reproducibility. Each time you launch Stata, Stata sets the same random-number seed, namely 123456789, and that means that runiform()
generates the same sequence of random numbers, and that means that if you generated all your random samples right after launching Stata, you would always select the same observations, at least
holding N constant.
So set the seed, but don’t set it too often. You set the seed once per problem. If I wanted to draw 10,000 random samples from the same data, I could code:
use dataset, clear
set seed 1702213
sort variables_that_put_data_in_unique_order
preserve
forvalues i=1(1)10000 {
        generate double u = runiform()
        sort u
        keep in 1/n
        drop u
        save sample`i', replace
        restore, preserve
}
In the example I save each sample in a file. In real life, I seldom (never) save the samples; I perform whatever analysis on the samples I need and save the results, which I usually append into a
single dataset. I don’t need to save the individual samples because I can recreate them.
And the result still might not be reproducible …
runiform() draws random-numbers with replacement. It is thus possible that two or more observations could have the same random values associated with them. Well yes, you’re thinking, I see that it’s
possible, but surely it’s so unlikely that it just doesn’t happen. But it does happen:
. clear all
. set obs 100000
obs was 0, now 100000
. generate double u = runiform()
. by u, sort: assert _N==1
1 contradiction in 99999 by-groups
assertion is false
In the 100,000-observation dataset I just created, I got a duplicate! By the way, I didn’t have to look hard for such an example, I got it the first time I tried.
I have three things I want to tell you:
1. Duplicates happen more often than you might guess.
2. Do not panic about the duplicates. Because of how Stata is written, duplicates do not lower the quality of the sample selected. I’ll explain.
3. Duplicates do interfere with reproducibility, however, and there is an easy way around that problem.
Let’s start with the chances of observing duplicates. I mentioned in passing last time that runiform() is a 32-bit random-number generator. That means runiform() can return any of 2^32 values. Their
values are, in order,
0 = 0
1/2^32 = 2.32830643654e-10
2/2^32 = 4.65661287308e-10
3/2^32 = 6.98491930962e-10
...
(2^32-2)/2^32 = 0.9999999995343387
(2^32-1)/2^32 = 0.9999999997671694
So what are the chances that in N draws with replacement from an urn containing these 2^32 values, all N values are distinct? The probability p that all values are distinct is

        2^32 * (2^32 - 1) * ... * (2^32 - N + 1)
    p = ----------------------------------------
                       (2^32)^N
Here are some values for various values of N. p is the probability that all values are unique, and 1-p is the probability of observing one or more repeated values.
N p 1-p
50 0.999999715 0.000000285
500 0.999970955 0.000029045
1,000 0.999883707 0.000116293
5,000 0.997094436 0.002905564
10,000 0.988427154 0.011572846
50,000 0.747490440 0.252509560
100,000 0.312187988 0.687812012
200,000 0.009498117 0.990501883
300,000 0.000028161 0.999971839
400,000 0.000000008 0.999999992
500,000 0.000000000 1.000000000
In shuffling cards we generated N=52 random values. The probability of a repeated values is infinitesimal. In datasets of N=10,000, I expect to see repeated values 1% of the time. In datasets of N=
50,000, I expect to see repeated values 25% of the time. By N=100,000, I expect to see repeated values more often than not. By N=500,000, I expect to see repeated value in virtually all sequences.
Even so, I promised you that this problem does not affect the randomness of the ordering. It does not because of how Stata’s sort command is written. Remember the basic solution,
. use dataset, clear
. generate double u = runiform()
. sort u
. keep in 1/n
Did you know sort has its own, private random-number generator built into it? It does, and sort uses its random-number generator to determine the order of tied observations. In the manuals we at
StataCorp are fond of writing, “the ties will be ordered randomly” and a few sophisticated users probably took that to mean, “the ties will be ordered in a way that we at StataCorp do not know and
even though they might be ordered in a way that will cause a bias in the subsequent analysis, because we don’t know, we’ll ignore the possibility.” But we meant it when wrote that the ties will be
ordered randomly; we know that because we put a random number generator into sort to ensure the result. And that is why I can now write that repeated values of the runiform() function cause a
reproducibility issue, but not a statistical issue.
The solution to the reproducibility issue is to draw two random numbers and use the random-number pair to order the observations:
. use dataset, clear
. sort varnames
. set seed #
. generate double u1 = runiform()
. generate double u2 = runiform()
. sort u1 u2
. keep in 1/n
You might wonder if we would ever need three random numbers. It is very unlikely. p, the probability of no problem, equals 1 to at least 5 digits for N=500,000. Of course, the chances of duplication
are always nonzero. If you are concerned about this problem, you could add an assert to the code to verify that the two random numbers together do uniquely identify the observations:
. use dataset, clear
. sort varnames
. set seed #
. generate double u1 = runiform()
. generate double u2 = runiform()
. sort u1 u2
. by u1 u2: assert _N==1 // added line
. keep in 1/n
I do not believe that doing that is necessary.
Is using doubles necessary?
In the generation of random numbers in all of the above, note that I am storing them as doubles. For the reproducibility issue, that is important. As I mentioned in part 1, the 32-bit random numbers
that runiform() produces will be rounded if forced into 23-bit floats.
Above I gave you a table of probabilities p that, in creating
. generate double u = runiform()
the values of u would be distinct. Here is what would happen if you instead stored u as a float:
u stored as
-------- double ---------- ----------float ----------
N p 1-p p 1-p
50 0.999999715 0.000000285 0.999853979 0.000146021
500 0.999970955 0.000029045 0.985238383 0.014761617
1,000 0.999883707 0.000116293 0.942190868 0.057809132
5,000 0.997094436 0.002905564 0.225346930 0.774653070
10,000 0.988427154 0.011572846 0.002574145 0.997425855
50,000 0.747490440 0.252509560 0.000000000 1.000000000
100,000 0.312187988 0.687812012 0.000000000 1.000000000
200,000 0.009498117 0.990501883 0.000000000 1.000000000
300,000 0.000028161 0.999971839 0.000000000 1.000000000
400,000 0.000000008 0.999999992 0.000000000 1.000000000
500,000 0.000000000 1.000000000 0.000000000 1.000000000
Drawing without replacement P-percent random samples
We have discussed drawing without replacement n observations from N observations. The number of observations selected has been fixed. Say instead we wanted to draw a 10% random sample, meaning that
we independently allow each observation to have a 10% chance of appearing in our sample. In that case, the final number of observations is expected to be 0.1*N, but it may (and probably will) vary
from that. The basic solution for drawing a 10% random sample is
. keep if runiform() <= 0.10
and the basic solution for drawing a P% random sample is
. keep if runiform() <= P/100
It is unlikely to matter whether you code <= or < in the comparison. As you now know, runiform() produces values drawn from 2^32 possible values, and thus the chance of equality is 2^-32 or roughly
0.000000000232830644. If you want a P% sample, however, theory says you should code <=.
If you care about reproducibility, you should expand the basic solution to read,
. set seed #
. use data, clear
. sort variables_that_put_data_in_unique_order
. keep if runiform() <= P/100
Below I draw a 10% sample from the card.dta:
. set seed 838
. use cards, clear
. sort rank suit
. keep if runiform() <= 10/100
(46 observations deleted)
. list
| rank suit |
1. | 2 Diamond |
2. | 2 Heart |
3. | 3 Club |
4. | 5 Heart |
5. | Jack Diamond |
6. | Queen Spade |
We’re not done, but we’re done for today
In part 3 of this series I will discuss drawing random samples with replacement.
Great post!
Can you explain why sort does not use the same seed as the other random number generators? That would make sort also foolproof with respect to reproducibility.
We did not because we did not want reproducibility in this case.
Consider a piece of code that, run on the same data, produces different results in different runs. Assume the random behavior is not an intentional outcome of the procedure/formulas that the code
implements. Then the observed random behavior indicates a mistake in the procedure/formulas or a mistake in the code. I’ve seen both.
In my experience, indeterministic sorts are the second most likely cause of different-results-in-different-runs problems. The most likely cause is uninitialized variables. Besides the statistical
goals I mentioned in the posting, another of our goals in writing -sort- so that it randomly ordered ties was to highlight indeterministic sorts that lead to irreproducible results. At StataCorp, we
run certification scripts in a loop, over and over again, looking for just such problems.
Setting the random-number seed is a way of reproducing results from routines that are intended to produce different results in different runs. -sort- is not such a function; if it produces different
results in different runs, and that matters, that is a bug.
Instead of saving the generated samples separately with save sample`i', replace, is there a way to put all the generated samples in different variables and save them in one dataset?
Homework Help
Posted by Bobby on Monday, June 11, 2012 at 3:55pm.
1. Differentiate each function
a) Y=-3x^2+5x-4
b) F(x) = 6/x-3/x^2
c) F(x)=(3x^2-4x)(x^3+1)
2. Determine the equation of the tangent line to the curve y=2x^2-1 at the point where x=-2
3. Evaluate, rounding to two decimal places, If necessary
a) ln 5
b) ln e^2
c) (ln e)^2
4. Differentiate each function
a) Y=cos^3 x
b) Y=sin(x^3)
c) Y=Sin^2xCos3x
5. Determine the value of k so that u=[-3,7] and v= [16,k] are perpendicular
6. Use the derivative of y=Sinx and y=Cosx to develop the derivative of y=tanx
7. Determine the coordinates of two points on the plane with equation 5x+4y-3z=6
8. Determine the work done by force f=[1,4] in newtons for an object moving along the vector d =[6,3]
• Calc - Anonymous, Monday, June 11, 2012 at 4:25pm
apparently you haven't tried any of these. How far did you get? Where did you get stuck? What makes you think someone wants to do your whole homework assignment for you?
Here's something:
3a) 1.6094
3b) 2
3c) 1
6. d/dx(sin/cos) = (cos*cos + sin*sin)/cos^2 = 1/cos^2 = sec^2
whew - that's enough for me. . .
A motor scooter purchased for $1,000 depreciates at an annual rate of 15%. Write an exponential function, and graph the function. Use the graph to predict when the value will fall below $100.
Let x be the number of years since the purchase. f(x) = (initial value)*(rate of growth)^x. If it is depreciating at a rate of 15% in one year, then the value that remains is 85% of the original value, so the rate should be 0.85. So f(x) = 1000*(0.85)^x. Just to check: f(0) = 1000(0.85)^0 = 1000 ✓; f(1) = 1000(0.85)^1 = 850 ✓
v(t) = 1000(0.85)^t; the value will fall below $100 in about 14.2 yr. ??
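Both parts of that check out; a short sketch solving 1000(0.85)^t = 100 exactly:

```python
import math

def value(t):
    # 15% annual depreciation leaves 85% of the value each year
    return 1000 * 0.85 ** t

# Solve 1000 * 0.85**t = 100  =>  t = log(0.1) / log(0.85)
t = math.log(100 / 1000) / math.log(0.85)
print(round(t, 1))                          # 14.2
print(value(14) > 100, value(15) > 100)     # True False -- crosses $100 between years 14 and 15
```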
What is a Homotopy between $L_\infty$-algebra morphisms
An $L_\infty$-algebra can be defined in many different ways. One common way, which gives the 'right' kind of morphisms, is that an $L_\infty$-algebra is a graded cocommutative and coassociative
coalgebra, cofree in the category of locally nilpotent differential graded coalgebras and their morphisms are coalgebra morphisms that commute with the codifferential.
Breaking this compact definition down into something more concrete, the category of $L_\infty$-algebras can equally be defined in the following way:
An $L_\infty$-algebra is a $\mathbb{Z}$-graded vector space $V$, together with a sequence of graded anti-symmetric, $k$-linear maps
$D_k:V \times \cdots \times V \to V$,
homogeneous of degree $-1$, such that the 'weak' Jacobi identity
$ \sum_{p+q=n+1}\sum_{\sigma \in Sh(q,n-q)}\epsilon(\sigma;x_1,\ldots,x_n) D_p(D_q(x_{\sigma(1)},\ldots,x_{\sigma(q)}),x_{\sigma(q+1)},\ldots,x_{\sigma(n)})=0 $
is satisfied for any $n\in\mathbb{N}$, where $\epsilon$ is the Koszul sign and $Sh(p,q)$ is the set of shuffle permutations.
A morphism of $L_\infty$-algebras $(V,D_{k\in\mathbb{N}})$ and $(W,l_{k\in\mathbb{N}})$ is a sequence $f_{k\in\mathbb{N}}$ of graded-antisymmetric, $k$-linear maps
$ f_k : V\times \cdots \times V \to W $
homogeneous of degree zero, such that the equation
$ \sum_{p+q=n+1}\sum_{\sigma \in Sh(q,n-q)}\epsilon(\sigma;x_1,\ldots,x_n) f_p(D_q(x_{\sigma(1)},\ldots,x_{\sigma(q)}),x_{\sigma(q+1)},\ldots,x_{\sigma(n)})=\\ \sum_{k_1+\cdots+k_j=n}^{k_i\geq 1}\
sum_{\sigma \in Sh(k_1,\ldots,k_j)} \epsilon(\sigma;x_1,\ldots,x_n) l_j(f_{k_1}(x_{\sigma(1)},\ldots,x_{\sigma(k_1)}),\ldots, f_{k_j}(x_{\sigma(n-k_j+1)},\ldots,x_{\sigma(n)})) $
is satisfied, for any $n\in\mathbb{N}$.
This defines the category of $L_\infty$-algebras, sometimes called the category of $L_\infty$-algebras with weak morphisms.
Now after that long and tedious definition, the question is:
What is a reasonable definition of a homotopy between two (weak) morphisms $f:V\to W$ and $g:V\to W$ of $L_\infty$-algebras? (And why?)
Edit: A lot of information pointing towards a definition of such a homotopy (or 2-morphism in $(\infty,1)$-categorical language) is spread out in the net. Much on the $n$-category cafe, like in
https://golem.ph.utexas.edu/category/2007/02/higher_morphisms_of_lie_nalgeb.html and in the nLab. However, it looks like an explicit equation still isn't available.
I would do the tedious calculations myself, since I can get a lot of joy out of such huge and delicate computations, but I'm unable to find a calculable way to achieve that goal. (Such a way should
have the potential to apply to the higher homotopies too, hopefully leading towards an explicit description of the hom-space in this category)
P.S.: The tags are not very well suited, feel free to change them
homotopy-theory lie-algebras infinity-categories
2 Answers
One standard answer*, in which any reasonable (characteristic $0$ — I haven't thought about any other case) algebraic category can be given a simplicial structure, is the following.
Let $\mathbb Q[\Delta^k] = \mathbb Q[t_0,\dots,t_k,\partial t_0,\dots,\partial t_k] / \bigl\langle \sum t_i = 1,\ \sum\partial t_i = 0\bigr\rangle$ denote the differential graded
commutative algebra (dgca) of polynomial forms on the standard $k$-simplex. Here $t_i$ are in (co)homological degree $0$, and their derivatives $\partial t_i$ are in degree $\pm 1$
depending on whether you prefer homological or cohomological conventions. It is straightforward to check that $\mathbb Q[\Delta^k]$ has (co)homology only in degree $0$, where it is
$1$-dimensional. Moreover, there are natural face and degeneracy maps between different $\mathbb Q[\Delta^k]$, making $\mathbb Q[\Delta^\bullet]$ into a simplicial dgca.
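As a quick check of the cohomology claim, the $k=1$ case can be written out explicitly (my own sketch, setting $t = t_1$, so that the relations give $t_0 = 1-t$ and $\partial t_0 = -\partial t$):

```latex
% Polynomial forms on the interval:
\mathbb Q[\Delta^1] \;\cong\; \mathbb Q[t] \,\oplus\, \mathbb Q[t]\,\partial t,
\qquad
\partial\bigl(p(t)\bigr) = p'(t)\,\partial t,
\qquad
\partial\bigl(p(t)\,\partial t\bigr) = 0.
% Over \mathbb Q every polynomial has a polynomial antiderivative, so the
% image of \partial is all of \mathbb Q[t]\,\partial t, the kernel in degree 0
% is the constants, and the (co)homology is \mathbb Q concentrated in degree 0.
```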
Given two $L_\infty$ algebras $V,W$ (or, really, objects of any reasonable category of "algebras"), one then defines the space of maps $V \to W$ to be the simplicial set $$ \hom_\bullet
(V,W) = \hom(V,W[\Delta^\bullet]),$$ where $W[\Delta^\bullet] = W\otimes_{\mathbb Q} \mathbb Q[\Delta^\bullet]$ is the $L_\infty$ algebra $W$ base-changed to live over the $k$-simplex. It
is reasonably straightforward to prove that this simplicial set satisfies the Kan horn-filling condition, at least when $V$ is "quasifree" — in particular, in your situation of "nonlinear
$L_\infty$-algebra homomorphisms", the Kan condition is always satisfied.
Before I spell this out, I'm going to change your notation. What you called $f_k$ I will call $f^{(k)}$, since it plays the role of the "$k$th Taylor coefficient of $f$". That way, I can
ask "what is a homotopy between two morphisms $f_0,f_1 : V \to W$ of $L_\infty$-algebras?"
The answer is the following data: (1) a (nonlinear) homomorphism $f_t: V \to W$ that depends polynomially on a parameter $t$, with the correct evaluations $f_t|_{t=0} = f_0$ and $f_t|_{t=1} = f_1$; (2) maps $\phi^{(k)}_t : V \to W[1]$ (or maybe I mean $[-1]$), also depending polynomially on the parameter $t$. These data must satisfy a certain ODE of the form: $$ \frac{\mathrm d}{\mathrm d t} f_t = \operatorname{ad}_{f_t}(\phi_t) $$ Of course, this is really an infinite sequence of equations (which are equations to things that depend polynomially on $t$). The $k$th entry on the left hand side is $ \frac{\mathrm d}{\mathrm d t} f_t^{(k)}$. On the right hand side, the $k$th entry is computed as follows (up to a sign which I don't feel like working
$k$th entry on the left hand side is $ \frac{\mathrm d}{\mathrm d t} f_t^{(k)}$. On the right hand side, the $k$th entry is computed as follows (up to a sign which I don't feel like working
out). Consider the equations saying that $f_t$ is a homomorphism; one of these equations is an equation of things with $k$ inputs $x$. Sum over all ways to replace, in each summand in this
equation, one of the occurrences of an $f$ by a $\phi$. Such a sum is what I mean by the right-hand side. In short-hand, what I mean is: there is (a sequence of) equations $M(f)$, such that
$f$ is a homomorphism iff $M(f) = 0$. The right hand side is $\frac{\partial M}{\partial f} \cdot \phi$.
In good situations like yours, all the ODEs that occur when studying $\hom_\bullet(V,W)$ are pretty well behaved. In particular, their integral forms are contraction mappings in the
appropriate sense, so the initial and boundary value problems are pretty easy to analyze formally.
*Here is an important (elementary) exercise to work out if you want to understand this "standard answer." Consider just the category of chain complexes. Then, for $k \geq 0$, $\pi_k\bigl(\hom(V,W[\Delta^\bullet])\bigr)$ is the space of chain maps $V \to W[\pm k]$ modulo chain homotopies, i.e. it is $\mathrm{H}_k(\underline\hom(V,W))$, where $\underline\hom$ denotes the chain
complex of all linear maps $f: V \to W$ with differential $f \mapsto [\partial,f] = \partial_W\circ f -(-1)^{\deg f}f\circ \partial_V$. (Whether the shift should be $[k]$ or $[-k]$, and
whether I mean $\mathrm H_{\pm k}$ or $\mathrm H^{\pm k}$ or ..., depend on your conventions, so I didn't work them out.)
Thanks for that great answer Theo. This is called the Sullivan construction, right? I'll try some computations and come back to this later, when I have a better understanding of what
exactly is going on. – Mark.Neuhaus Aug 13 '13 at 12:13
Hi @Mark.Neuhaus: You know, I never learned a name for the construction --- it was something I picked up by osmosis / talking to other people. "Sullivan construction" sounds very
reasonable. I think the importance of the rings that I'm calling $\mathbb Q[\Delta^k]$ (and that maybe should instead be called $\Omega^\bullet(\Delta^k)$, perhaps with a subscript to
denote "polynomial") is due to Sullivan. – Theo Johnson-Freyd Aug 13 '13 at 14:51
In particular, note that the simplicial dg ring $\mathbb Q[\Delta^\bullet]$ appears also in Sullivan's rational homotopy theory. Given a set $X$ and a dg ring $R$, there is a dg ring $\hom(X,R) = R^{\times X}$. Therefore, given any simplicial set $X_\bullet$ and simplicial dg ring $R_\bullet$, there is a dg ring $\hom_{\text{simplicial}}(X_\bullet,R_\bullet)$. Using $R_\bullet = \mathbb Q[\Delta^\bullet]$ gives Sullivan's dg algebra of polynomial forms on $X$. – Theo Johnson-Freyd Aug 13 '13 at 14:54
Ok, finally I found the time to do some exercises on this construction. Is there any particular reason why one has to choose $\mathbb{Q}$ as the field in the graded simplicial polynomial
ring $\mathbb{Q}[\Delta^\bullet]$? Why not use $\mathbb{R}$? – Mark.Neuhaus Mar 24 at 4:52
@Mark.Neuhaus No, of course not. Things go funny if you're not over a (commutative) ring containing $\mathbb Q$, but that's really the only condition. That said, if $W$ is defined over
$R$ and $R \supseteq \mathbb Q$, then $W \otimes_R R[\dots] = W \otimes_{\mathbb Q} \mathbb Q[\dots]$. So there's also no gain. – Theo Johnson-Freyd Mar 25 at 15:44
There is a plethora of model structures for L-infinity algebras (going back to Quillen of course, but notably described and related in the great article by Jonathan Pridham arXiv:0705.0344).
Also structures of categories of fibrant objects. Each of these induces a model for homotopies of 1-morphisms of $L_\infty$-algebras, for instance a right homotopy given by mapping into a
path space object. What these are can be worked out for each of these model category/category of fibrant object structures (and all these notions will be suitably equivalent).
An explicit model of such path space objects for $L_\infty$-algebras is discussed by Dolgushev in section 5 of his article arXiv:0703113.
See on the nLab at model structure for L-infinity algebra -- Homotopies and derived hom spaces.
Ok. I understand that you gave me the big picture, or rather a whole bunch of big pictures, all equivalent in the $(\infty,1)$-sense. But still, from having a model structure, or a homotopy structure, it is a long way to actual equations. I never did the hammock process or things like that, but it really looks like a lot of work. In your n-cat post from 2006 you had the same desire for explicit $n$-morphism equations. So did you succeed or (if not) what was the reason to break on that? Maybe explicit equations are too involved and you decided that knowing their
existence is enough for your work? – Mark.Neuhaus Aug 13 '13 at 12:19
I'll consider the Sullivan construction Theo gave as pretty doable for getting actual equations for $n$-morphisms, at least for lower $n$. What would you say is, from a computational POV, another good way to proceed? – Mark.Neuhaus Aug 13 '13 at 12:26
I am just saying the equations that you need are those for a path space object in your preferred model. A fully explicit construction is in Dolgushev's note that I pointed to. Did you have
a look? This typically comes down to the kind of construction Theo mentions, but has the advantage that you actually know that it is the right thing. – Urs Schreiber Aug 13 '13 at 23:02
What I wrote back then was uninformed of homotopy theory. I wish somebody back then had pointed me to the homotopy theory. Most of it was already known. It's much more fun to play with
formulas if you know what you are doing and are not just guessing and fiddling around. :-) – Urs Schreiber Aug 13 '13 at 23:03
Errors in Moore & McCabe
Radford Neal, November 1999
The following errors and omissions occur in Introduction to the Practice of Statistics, 3rd edition, by D. S. Moore and G. P. McCabe.
Some of these have been corrected in the Second Printing of this edition, as noted.
Note: A 4th edition has now been published. Surprisingly, many of the errors reported here are still present in this new edition.
Page 18, Example 1.10: The ozone hole.
The hole in the ozone layer is not caused by burning fossil fuels. It is caused by CFCs, used in refrigerators and in other applications. Burning fossil fuels is a cause of global warming. The
only connection between these two issues is that CFCs also cause global warming, though they aren't the main cause.
Page 54-55, Example 1.19: Returns on stocks and treasury bills.
This example is seriously flawed. See my explanation of what's really going on.
Page 73: Notation N(a,b) for the normal distribution with mean a and standard deviation b.
This abbreviation is not standard. It is far more common for N(a,b) to mean the normal distribution with mean a and variance b.
Page 80: Procedure for producing a normal quantile plot.
This doesn't work. If you try this, you will decide in step 1 that the last observation is at the 100% point. In step 2, you will then need to find the corresponding z-score, which is infinite.
Two sensible procedures are to say that the percentile points for 20 observations are 2.5%, 7.5%, ..., 97.5%, given by (i-1/2)/20 for i from 1 to 20, or alternatively to say that observation i
out of n is at percentile i/(n+1).
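Both sensible rules are easy to write down; a small sketch (mine) showing that neither rule ever reaches the 100% point, so every z-score stays finite:

```python
n = 20

# Rule 1: percentile points (i - 1/2)/n  ->  2.5%, 7.5%, ..., 97.5%
rule1 = [(i - 0.5) / n for i in range(1, n + 1)]

# Rule 2: observation i out of n sits at percentile i/(n + 1)
rule2 = [i / (n + 1) for i in range(1, n + 1)]

print(rule1[0], rule1[-1])                 # 0.025 0.975
print(round(rule2[-1], 3))                 # 0.952
print(max(rule1) < 1 and max(rule2) < 1)   # True -- all z-scores finite
```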
Page 127, Figure 2.10.
The point of these two scatterplots is to demonstrate that adding whitespace around the data makes the correlation seem stronger. However, the relative scales of the x and y axes are also
different: the horizontal scale changes from being 80 wide to being 250 wide, a factor 3.125, while the vertical scale changes from being 120 high to being 250 high, a factor of 2.083. One
therefore cannot draw any conclusion from this figure about the effect of whitespace.
Page 153, Exercise 2.47: "How would the slope and intercept of this line change if we measured all heights in centimetres rather than inches?" The answer at back says "The conversion would have no effect."
Actually, the conversion would have no effect on the slope, but it would change the intercept.
Fixed in Second Printing
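The correction can be checked numerically; a sketch with a hypothetical height regression (my numbers), converting both variables from inches to centimetres:

```python
# y = a + b*x in inches; convert both variables to cm (factor 2.54):
# 2.54*y = 2.54*a + b*(2.54*x), so the slope is unchanged and the
# intercept is multiplied by 2.54.
c = 2.54
a, b = 30.0, 0.5                      # hypothetical line, in inches
x = [60.0, 65.0, 70.0]
y = [a + b * xi for xi in x]
x_cm = [c * xi for xi in x]
y_cm = [c * yi for yi in y]

# Refit by a two-point slope (exact for a perfect line):
slope_cm = (y_cm[-1] - y_cm[0]) / (x_cm[-1] - x_cm[0])
intercept_cm = y_cm[0] - slope_cm * x_cm[0]
print(round(slope_cm, 6))             # 0.5  (= b, unchanged)
print(round(intercept_cm, 2))         # 76.2 (= 2.54 * a)
```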
Page 173, Exercise 2.58: Data on the amount of beef consumed per person and the price of beef for the years 1970 to 1993.
This exercise (and answer in the Instructor's Guide) completely misses what appears to be really going on.
First, contrary to what is said in part (a), economic theory would not lead one to expect consumption to necessarily fall when the price rises. In a competitive market, the price and amount
comsumed are set by the intersection of the supply curve and the demand curve. It the price changes, one or both of these curves must have shifted, which will usually also cause the amount
consumed to change. Depending on which curve(s) shifted, and which way, any combination of an increase or decrease in price with an increase or decrease in consumption is possible.
Looking at the relationship of price and consumption in this data is pretty meaningless in any case, because of the important lurking variable of time. In this data, both price and consumption
are more strongly related to time than to each other, with both decreasing over time. A plausible explanation is that price decreased because of techological change, and at the same time
consumption declined because of changing consumer tastes. It may also be relevant that beef is highly substitutable with other foods at both the production and consumption end. The question may
therefore be rather meaningless, being analogous to asking about the relationship between the price of red cars and the number of red cars bought, ignoring production and sales of cars painted
colours other than red.
Page 174, Exercise 2.59: Data on price of various kinds of seafood in 1970 and 1980.
This is treated as problem regarding the relationship between the 1970 price and the 1980 price, using scatterplots, correlation, and regression. Actually, this is a good example of when one
should not use these tools. A pound of scallops is a very different thing than a pound of cod. Comparing their price per pound is not meaningful for most purposes.
Instead, for each kind of seafood, it would make sense to find the ratio of the price in 1980 to the price in 1970. The following is a stem plot of these ratios:
1 : 5
2 : 011
2 : 588
3 : 000112
3 :
4 :
4 : 7
The extreme points are 1.5, for haddock, and 4.7, for ocean perch. These are the seafoods that might merit a special look. The answer in the back of the book singles out scallops and lobsters as
being outliers, but their ratios of 3.0 and 2.0 are not exceptional. They appear as outliers when one inappropriately does a regression only because they are the two most expensive seafoods on a
per pound basis, making any variation in their price appear large in absolute terms, even when it is not large in percentage terms.
Pages 208-209, Examples 2.35 and 2.36, concerning causation.
The claim that there is a direct causal link between the height of mothers and the height of daughters is wrong. This is a case of both being influenced by a common cause, namely the genes of the
mother. If you starve a teenage mother (after the birth of her child) in order to reduce her adult height, this will not change the adult height of her child (assuming you feed the child a normal diet).
The example regarding the "strong positive correlation between how much money individuals add to mutual funds each month and how well the stock market does the same month" is incomprehensible,
because the exact timing of the events is unclear. If the data is aggregated over the month, then some of the money went to mutual funds before part of the stock market change, but part of it
followed part of the change. The aggregation makes this a confusing and complex example.
Pages 258-259: Explanation of stratified sampling.
A crucial part of the explanation of stratified sampling is omitted here. The whole procedure makes sense only when you have census data that allows you to combine the results from sampling the
various strata into a final estimate that takes account of how many people are in each stratum.
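The omitted combination step is just a size-weighted average of stratum means; a sketch with made-up census counts and sample means (all numbers hypothetical):

```python
# Known stratum sizes from census data (hypothetical numbers):
sizes = {"urban": 8000, "rural": 2000}
# Sample means from sampling within each stratum (also hypothetical):
sample_means = {"urban": 3.1, "rural": 4.5}

N = sum(sizes.values())
# Stratified estimate: weight each stratum mean by its population share.
estimate = sum(sizes[s] / N * sample_means[s] for s in sizes)
print(round(estimate, 2))   # 3.38
```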
Page 268, Example 3.20: "Are attitudes towards shopping changing?" plus Example 5.7 on page 381 (followed by examples 5.8 and 5.9).
The technical content of this example is correct, but the "real world" context is seriously garbled. See my explanation of what's really going on.
Page 303, Example 4.11: examples of independence and dependence.
The last paragraph attempts to illustrate independence and dependence with real examples. Both examples are actually concerned with random variables, not events. Independence of random variables
is discussed later, on page 337. These examples are wrong, or at least require quite strained interpretations if they are to be regarded as right. See my detailed explanation.
Page 312 and following material in Section 4.3, and Exercise 4.103:
In Section 4.3, the book gives the impression that when one talks about a random variable, the sample space consists of the possible numerical values of this random variable. This is generally
impossible when one wants to talk about more than one random variable at the same time. The correct view is that a random variable is a rule for assigning a number to each outcome. It's possible
that the outcomes are themselves numbers, and that the rule for random variable X is just that X is equal to the outcome itself, but this will not usually be the situation.
This inadequate view of random variables is reinforced by exercise 4.103, which asks "Give a sample space for the sum X of the spots", implying that the concept of a sample space is tied to a
particular random variable. The answer given in the back of the book is "S = { 3, 4, 5, ..., 18}". This is not a sample space that is sensible to use, since it is particular to X (even though the
question itself introduces other random variables, which can't be made sense of with this sample space), and since we have no direct way of assigning probabilities to the outcomes in this sample
space. A sensible answer is S = { (1,1,1), ..., (6,6,6) }, ie, all possible combinations of rolls for the three dice (with the dice being distinguished as first, second, and third).
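The sensible sample space and the distribution of X are easy to enumerate; a quick sketch (my addition):

```python
from collections import Counter
from itertools import product

# The sensible sample space: all 6^3 ordered triples, each equally likely.
S = list(product(range(1, 7), repeat=3))
print(len(S))   # 216

# A random variable is a rule assigning a number to each outcome;
# here X(outcome) = sum of the spots.
dist = Counter(sum(outcome) for outcome in S)
print(dist[3], dist[18], dist[10])   # 1 1 27
```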
Page 376: Notation B(n,p).
This notation is also used for the "beta" distribution. To avoid confusion, it is better to use the notation binomial(n,p).
Page 404, Figure 5.9, illustrating the central limit theorem, relating to Example 5.19 on the previous page.
These plots are grossly incorrect, and seriously misleading. Plot (a) seems to be about right, but (b) shows the density being positive at zero, which it isn't. It's (c) that's really bad,
though, since the mean of the distribution shown is clearly greater than 1, when it should be equal to 1.
Fixed in Second Printing
Page 405, Figure 5.10, again illustrating the central limit theorem.
The two density curves in this plot are also wrong. The match with reality is improved if one exchanges the two curves (ie, labels the solid as the exact distribution and the dashed as the normal
approximation), but they are still wrong in detail.
Page 409, Exercise 5.29(a): The answer given in the back is P(X<295)=0.8413.
The answer is actually 0.1587.
Fixed in Second Printing
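The two numbers are complementary standard-normal areas, Φ(1) = 0.8413 and Φ(-1) = 0.1587 — consistent with the exercise's z-score being -1 (an inference from the numbers, my assumption). A quick check:

```python
import math

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(phi(1.0), 4))                 # 0.8413  (the misprinted answer)
print(round(phi(-1.0), 4))                # 0.1587  (the corrected answer)
print(round(phi(1.0) + phi(-1.0), 10))    # 1.0 -- the two areas are complements
```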
Page 412, Exercise 5.39: "An experiment... divides 40 chicks at random into two groups of 20... inference... is based on the mean weight gain ... of the 20 chicks in the high-lysine group and the
mean weight gain ... of the 20 in the normal-corn group. Because of the randomization, the two sample means are independent."
The last sentence above is not correct. See my detailed explanation.
Page 509-510, Figures 7.2 and 7.3:
The tails shown here are too light. The horizontal axis should be labelled "t", not "Z".
Page 511, Example 7.5: "Here is one way to report the conclusion: the mean of Newcomb's passage times, x-bar = 27.75, is lower than the value calculated by modern methods, mu=33.02 (t=-8.29, df=63, P
This summary misses the whole point. Of course the mean of Newcomb's times, 27.75, is less than 33.02, the accepted modern value. You don't need statistics to tell you that. What the t-test tells
you is that we have good reason to think that Newcomb's experiment suffered from systematic error, since the difference is too large to be plausibly explained by random errors.
Page 541: "This approximation is appealing because it is conservative... P-values are a bit smaller, so we are a little less likely to reject H_0 when it is true."
"P-values are a bit smaller" should be "P-values are a bit bigger".
Fixed in Second Printing
Page 545, discussion of the study in Example 7.14.
Two big potential problems that aren't mentioned here are conscious or unconscious bias on the part of the researcher/teacher and the possibility that the students' improvement was due to a
placebo effect.
Page 546, Example 7.16: "Software uses the t(289) distribution and gives P=0.051".
This should be "gives P=0.51".
Fixed in Second Printing
Page 553, discussion after Example 7.20: "Sample size strongly influences the P-value of a test. An effect that fails to be significant at a specified level alpha in a small sample can be significant
in a larger sample. In the light of the rather small samples in Example 7.20, the evidence for some effect of calcium on blood pressure is rather good."
This reasoning is circular. Increasing the sample size will tend to result in a smaller P-value only if the null hypothesis is false, which is the point at issue. Here is my more detailed explanation.
Page 673, "...when the logarithm of the sum of the four skinfold thicknesses... is 0, a value that cannot occur."
There's no fundamental reason that the logarithm of the sum of the thicknesses can't be zero (ie, that the sum can't be one), though it is a bit far from the observed values.
Page 675, Figure 10.10.
This figure is wrong in highly misleading ways. It is supposed to show the same data as Figure 10.3, with the least-squares regression line and 95% confidence limits for the mean response. These
confidence limits wrongly appear to be given by two straight lines that intersect on the least-squares regression line, implying that the confidence interval has zero width at that point, which
is not true. The limits at a point are also not symmetrical about the least-squares regression line at that point, which they should be. Finally, the data points shown are neither the same as the
correct points from the CD, nor the same as the points in Figure 10.3 (which are also wrong).
Page 696, Exercise 10.3: The solution in the back gives t=4.61.
Minitab gives t=6.94.
Page 754, Example 12.6: Pretest scores for three groups of children.
The context for this example is not described sufficiently for one to understand why an ANOVA analysis is being done. Were the three groups assigned randomly? If so, the only sense of doing a
test of this would be if you suspected that the randomization wasn't really random. If one is worried that a true random assignment might have by chance produced unbalanced groups, it is not at
all clear that doing a test of a null hypothesis that you know for certain is true is a sensible approach to assessing whether this is a problem. On the other hand, if the groups were not
assigned at random, the ANOVA test might be sensible, but again one might wonder about the relevance of the P-value found to the question of whether or not the differences are large enough for
this to be a problem for some (unspecified) later analysis. | {"url":"http://www.cs.utoronto.ca/~radford/mm-errata/errata.html","timestamp":"2014-04-20T18:25:15Z","content_type":null,"content_length":"16362","record_id":"<urn:uuid:a22e38ce-dc96-4c31-b799-024da68187af>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Judge a Weatherman?
Each day a weatherman gives a probability p of rain for the next day and each day it either rains or it doesn't. How do we judge the quality of these forecasts? A first attempt uses linear scores, p
if it rains, 1-p if it doesn't. However, when you analyze this system, a weatherman maximizing his expected score should predict p=1 if his belief is greater than 1/2 and p=0 otherwise.
A better measure is the log loss. The weatherman gets penalized -log(p) if it rains and -log(1-p) if it doesn't. A weatherman now has the incentive to announce his belief. There are other scoring
functions with this property, but the log loss has some nice properties, such as that the best expected score a weatherman could hope to achieve is exactly the entropy of the distribution. The log loss and other measures are
often used to analyze prediction mechanisms such as information markets.
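The incentive claims are easy to verify numerically: under the linear score the expected reward is linear in the announced p, so it is maximized at an extreme, while the expected log loss is minimized exactly at the true belief. A sketch (mine, with an arbitrary belief q = 0.7):

```python
import math

def expected_linear_score(p, q):
    # reward p if it rains (belief q), reward 1 - p if it doesn't
    return q * p + (1 - q) * (1 - p)

def expected_log_loss(p, q):
    # penalty -log p if it rains, -log(1 - p) if it doesn't
    return -q * math.log(p) - (1 - q) * math.log(1 - p)

q = 0.7                                   # the weatherman's true belief
grid = [i / 100 for i in range(1, 100)]   # candidate announcements

best_linear = max(grid, key=lambda p: expected_linear_score(p, q))
best_log = min(grid, key=lambda p: expected_log_loss(p, q))
print(best_linear)   # 0.99 -- the linear score pushes to the extreme
print(best_log)      # 0.7  -- the log loss elicits the true belief
```

Note also that the minimum value, `expected_log_loss(q, q)`, is exactly the entropy of the rain/no-rain distribution, matching the entropy remark above.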
Dean Foster and Rakesh Vohra have a different take looking at a notion called calibration. Here you take all the days that the weatherman predicted 70% chance of rain and check that 70% of those days
it actually rained. A prediction algorithm calibrates a binary sequence if, for a finite set of allowed probabilities, each of the subsequences consisting of predictions of probability p has about a p fraction of ones. Foster and Vohra showed that some probabilistic calibration scheme will calibrate every sequence in the limit. In other words, you can be a great weatherman in the calibration sense just by looking at the history of rain and forgoing that pesky meteorological training.
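A calibration check is a short computation over the forecast history; a sketch with made-up forecasts and outcomes:

```python
from collections import defaultdict

def calibration(forecasts, outcomes):
    # For each announced probability p, return the empirical rain frequency
    # among the days on which p was announced.
    buckets = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        buckets[p].append(y)
    return {p: sum(ys) / len(ys) for p, ys in buckets.items()}

forecasts = [0.7] * 10 + [0.2] * 5
outcomes = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0] + [1, 0, 0, 0, 0]
print(calibration(forecasts, outcomes))   # {0.7: 0.7, 0.2: 0.2} -- well calibrated
```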
Dean Foster and Sham Kakade gave a couple of interesting talks at the Bounded Rationality workshop, presenting a deterministic scheme that achieves a weak form of calibration and using it to learn Nash equilibria in infinitely repeated games.
6 comments:
1. Back when the Usenet was popular, a recurring question was "what do the weatherman probabilities mean?". This was well over a decade ago but I seem to recall that the answer was: the weather
forecast service runs a few different computer models (usually 5 of them) and sees what is the outcome 24 hours into the future. If two models predict rain, then the probability is 40%, if 4 of
them predict rain then it is 80%.
At the risk of restarting a long dead usenet thread, can any one out there confirm this?
2. This is old, but may shed some light on the issue:
3. It seems that both are correct.
Some weather forecasters use statistical data to search for other times when weather conditions were the same as today. In this case the probability is historical: in 20% of the days that were
just like today it ended up raining.
Others run a set of computer models, or the same computer model with small variations (known as an ensemble), and report the percentage of such outcomes. It rained in 20% of our computer simulations.
Each outcome can be weighted to reflect the probability of a given variation.
For instance, if the chance of receiving above-median rainfall in a particular climate scenario is 60%, then 60% of past years when that scenario occurred had above median rainfall, and 40% had
below-median rainfall.
Because of weather's chaotic nature, errors or uncertainties in the starting point of a model can alter the results dramatically. One way to reduce the impact of such errors is through an
ensemble of forecasts. In this technique, one model is run several times, each with a slightly different, intentionally varied set of starting points.
4. I once thought about this same question, in the context of how to get students to reveal their probabilities on true-or-false tests. My solution was to give p^2 points for each question if the
answer is "true," or (1-p)^2 points if the answer is false. Lance, do you know what the properties of the log-loss function are that make it preferable?
5. Duhhh... I meant penalize by p^2 points if the answer is false, or by (1-p)^2 if the answer is true.
6. One nice thing about logs is that uncertainty becomes additive. For instance, if your students were completely ignorant and you replaced your 10 binary questions with 1 question having 1024 answers, they would still get the same number of points under the log p award scheme.
Using log(1/p) as the length of the codeword for a symbol with probability p is also guaranteed to produce the lowest expected codeword length when codewords must be prefix-free (instantaneously decodable). I wonder what sort of bounds would hold for codes without any such constraint?
Finally, using f(x)=log x as a way to award points would elicit correct internal probabilities from rational students, whereas f(x)=x^2 will not. This seemed like an interesting topic, hence my
first blog entry has a derivation of this :)
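The contrast the commenters are discussing can be checked numerically: rewarding p^2 for a correct "true" answer pushes a rational student to announce 0 or 1, while a logarithmic reward is maximized at the honest belief. (Here q = 0.7 is an illustrative belief, and the grid stops short of 0 and 1 to keep the logarithm finite.)

```python
import math

def expected_reward(q, p, f):
    # Student believes "true" with probability q and announces p;
    # reward f(p) if the answer is true, f(1 - p) if it is false.
    return q * f(p) + (1 - q) * f(1 - p)

q = 0.7
grid = [i / 100 for i in range(1, 100)]
best_sq  = max(grid, key=lambda p: expected_reward(q, p, lambda x: x * x))
best_log = max(grid, key=lambda p: expected_reward(q, p, math.log))
print(best_sq, best_log)  # 0.99 and 0.7: squaring rewards exaggeration, log rewards honesty
```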
Discrete Math Tutors
Auburn, WA 98002
Math and Science Tutor
...I have worked throughout the digital revolution of the past 30 years as an electronic designer and programmer, down to the discrete electronic components that are required to make CPUs, ADCs, DACs, memory, etc. The topics that are included in today's discrete math are tools...
Offering 10+ subjects including discrete math
Perspective Matrix implementation [Archive] - OpenGL Discussion and Help Forums
01-02-2009, 07:07 AM
I'm currently trying to implement a perspective view with a matrix, but it's not working. This is what I did:
1. I got a screen with width and height 800.
2. I implemented the drawing, transformation, culling, etc., and it works perfectly, but the z-translation did not work. That's fine, because I hadn't implemented the perspective view yet.
3. I implemented the perspective by multiplying with the matrix shown in the Red Book for glFrustum.
So here is my matrix:
Matrix4 m(-2.0f*n/(r-l), 0, (r+l)/(r-l), 0,
0, -2.0f*n/(t-b), (t+b)/(t-b), 0,
0, 0, (f+n)/(f-n), -2.0f*(f*n)/(f-n),
0, 0, -1, 0);
But it simply does not work. I called it with this line:
perspectiveView(-400, 400, -400, 400, -400, 400);
l, r, t, b, n, f respectively.
is there something wrong?
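For comparison, here is the glFrustum matrix as the Red Book gives it, sketched in Python rather than the poster's C++. Note the differences from the matrix above: the first two diagonal entries are +2n/(r-l) and +2n/(t-b), not negated; the third diagonal entry is -(f+n)/(f-n); and near/far must be positive distances, so calling the function with n = -400 is itself an error.

```python
def frustum(left, right, bottom, top, near, far):
    """Row-major perspective matrix per glFrustum (OpenGL Red Book)."""
    assert near > 0 and far > near, "near and far must be positive distances"
    l, r, b, t, n, f = left, right, bottom, top, near, far
    return [
        [2*n/(r-l), 0.0,       (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b), (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,         0.0],
    ]

m = frustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0)
print(m[0][0], m[2][2])  # 1.0 and roughly -1.0202
```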
Results 1 - 10 of 40
- Journal of Symbolic Computation , 1993
Cited by 31 (2 self)
this paper we show that it is sufficient to impose the constructor discipline for obtaining the modularity of completeness. This result is a simple consequence of a quite powerful divide and conquer
technique for establishing completeness of such constructor systems. Our approach is not limited to systems which are composed of disjoint parts. The importance of our method is that we may decompose
a given constructor system into parts which possibly share function symbols and rewrite rules in order to infer completeness. We obtain a similar technique for semi-completeness, i.e. the combination
of confluence and weak normalisation. 1. Introduction
- Theoretical Computer Science , 1993
Cited by 29 (3 self)
It is well-known that termination is not a modular property of term rewriting systems, i.e., it is not preserved under disjoint union. The objective of this paper is to provide a "uniform framework"
for sufficient conditions which ensure the modularity of termination. We will prove the following result. Whenever the disjoint union of two terminating term rewriting systems is non-terminating,
then one of the systems is not C E -terminating (i.e., it looses its termination property when extended with the rules Cons(x; y) ! x and Cons(x; y) ! y) and the other is collapsing. This result has
already been achieved by Gramlich [7] for finitely branching term rewriting systems. A more sophisticated approach is necessary, however, to prove it in full generality. Most of the known sufficient
criteria for the preservation of termination [24, 15, 13, 7] follow as corollaries from our result, and new criteria are derived. This paper particularly settles the open question whether simple
termination ...
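The non-modularity of termination that these abstracts refer to has a classic concrete witness, Toyama's counterexample (not taken from the papers above): R1 = {f(0,1,x) -> f(x,x,x)} and R2 = {g(x,y) -> x, g(x,y) -> y} are each terminating, yet their disjoint union admits a cycle. A small sketch, with terms encoded as nested tuples:

```python
def r1(t):
    """R1: f(0,1,x) -> f(x,x,x), applied at the root."""
    assert t[:3] == ('f', '0', '1')
    x = t[3]
    return ('f', x, x, x)

def g_proj(t, i):
    """R2: g(x,y) -> x (i = 1) or g(x,y) -> y (i = 2)."""
    assert t[0] == 'g'
    return t[i]

G  = ('g', '0', '1')
t0 = ('f', '0', '1', G)
t1 = r1(t0)                                   # f(G, G, G)
t2 = ('f', g_proj(t1[1], 1), t1[2], t1[3])    # rewrite arg 1: f(0, G, G)
t3 = ('f', t2[1], g_proj(t2[2], 2), t2[3])    # rewrite arg 2: f(0, 1, G)
assert t3 == t0   # back to the start: the combined system rewrites forever
print("cycle:", t0, "->", t1, "->", t2, "->", t3)
```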
, 1993
Cited by 27 (5 self)
We investigate restricted termination and confluence properties of term rewriting systems, in particular weak termination and innermost termination, and their interrelation. New criteria are provided
which are sufficient for the equivalence of innermost / weak termination and uniform termination of term rewriting systems. These criteria provide interesting possibilities to infer completeness,
i.e. termination plus confluence, from restricted termination and confluence properties. Using these basic results we are also able to prove some new results about modular termination of rewriting.
In particular, we show that termination is modular for some classes of innermost terminating and locally confluent term rewriting systems, namely for non-overlapping and even for overlay systems. As
an easy consequence this latter result also entails a simplified proof of the fact that completeness is a decomposable property of so-called constructor systems. Furthermore we show how to obtain
similar re...
- In Proc. of the 10th Int. Conf. on Rewriting Techniques and Applications, LNCS 1631 , 1999
Cited by 27 (10 self)
Abstract. In a previous work, we proved that an important part of the Calculus of Inductive Constructions (CIC), the basis of the Coq proof assistant, can be seen as a Calculus of Algebraic
Constructions (CAC), an extension of the Calculus of Constructions with functions and predicates defined by higher-order rewrite rules. In this paper, we prove that almost all CIC can be seen as a
CAC, and that it can be further extended with non-strictly positive types and inductive-recursive types together with non-free constructors and pattern-matching on defined symbols. 1.
- In Proceedings of the Fifth International Conference on Algebraic and Logic Programming (ALP'96), volume 1139 of LNCS , 1996
Cited by 26 (3 self)
Conditional rewriting is universally recognized as being much more complicated than unconditional rewriting. In this paper we study how much of conditional rewriting can be automatically inferred
from the simpler theory of unconditional rewriting. We introduce a new tool, called unraveling, to automatically translate a conditional term rewriting system (CTRS) into a term rewriting system
(TRS). An unraveling enables to infer properties of a CTRS by studying the corresponding ultra-property on the corresponding TRS. We show how to rediscover properties like decreasingness, and to give
easy proofs of some existing results on CTRSs. Moreover, we show how unravelings provide a valuable tool to study modularity of CTRSs, automatically giving a multitude of new results.
, 1994
Cited by 25 (7 self)
In this paper we present the algebraic λ-cube, an extension of Barendregt's λ-cube with first- and higher-order algebraic rewriting. We show that strong normalization is a modular property of all systems in the algebraic λ-cube, provided that the first-order rewrite rules are non-duplicating and the higher-order rules satisfy the general schema of Jouannaud and Okada. This result is proven for the algebraic extension of the Calculus of Constructions, which contains all the systems of the algebraic λ-cube. We also prove that local confluence is a modular property of all the systems in the algebraic λ-cube, provided that the higher-order rules do not introduce critical pairs. This property and the strong normalization result imply the modularity of confluence. 1 Introduction Many different computational models have been developed and studied by theoretical computer scientists. One of the main motivations for the development This research was partially supported by ESPRIT Basic Research Act...
, 1995
"... This paper is concerned with the impact of stepwise development methodologies on prototyping. ..."
- Journal of Automated Reasoning , 2004
Cited by 24 (5 self)
We propose a modular approach of term rewriting systems, making the best of their hierarchical structure. We define rewriting modules and then provide a new method to prove termination incrementally.
We obtain new and powerful termination criteria for standard rewriting. Our policy of restraining termination itself (thus relaxing constraints over hierarchies components) together with the
generality of the module approach are sufficient to express previous results and methods the premisses of which either include restrictions over unions or make a particular reduction strategy
, 1997
Cited by 23 (6 self)
A property P of term rewriting systems (TRSs, for short) is said to be persistent if for any many-sorted TRS R, R has the property P if and only if its underlying unsorted TRS (R) has the property P.
This notion was introduced by H. Zantema (1994). In this paper, it is shown that confluence is persistent.
- Information and Computation , 1992
Cited by 20 (0 self)
We investigate the system obtained by adding an algebraic rewriting system R to an untyped lambda calculus in which terms are formed using the function symbols from R as constants. On certain classes
of terms, called here "stable", we prove that the resulting calculus is confluent if R is confluent, and terminating if R is terminating. The termination result has the corresponding theorems for
several typed calculi as corollaries. The proof of the confluence result suggests a general method for proving confluence of typed β reduction plus rewriting; we sketch the application to the
polymorphic lambda calculus.
Summary: logic texts for computer scientists
Summary: logic texts for computer scientists
• To: bcpierce@cis.upenn.edu
• Subject: Summary: logic texts for computer scientists
• From: Mitchell Wand <wand@ccs.neu.edu>
• Date: Fri, 30 Apr 1999 11:21:51 -0400 (EDT)
• In-Reply-To: <199904300156.VAA14597@saul.cis.upenn.edu>
• References: <199904300156.VAA14597@saul.cis.upenn.edu>
Hmm, I notice that several of your correspondents think that Jean's paper has
about -10 pages, eg:
Gallier, J. Constructive Logics. Part I: A tutorial on proof systems
and typed lambda-calculi. Theoretical Computer Science, 110(2):
249-239 (1993).
I wonder what bibliography has this typo. It also goes to show how many of us
actually _read_ the bibliography information we get off the web...
[SOLVED] Newtons Second Law - Simple Mechanics
May 17th 2010, 05:20 AM
[SOLVED] Newtons Second Law - Simple Mechanics
Kindly have a look at the attached image for the related question.
I've solved (i) to get the following values...
a= 2.5 ms^-2
T= 3.75 N
I want to inquire about part (ii)
What I want to know is this: since the acceleration of A was 2.5, shouldn't the acceleration of B be -2.5?
I ask this because, for part (ii) I've got the correct answer of 'mass' by using
m=0.3 kg
But if I use -2.5 then the answer comes out to be m=0.5 kg.
What am I doing wrong?
May 17th 2010, 06:15 AM
The acceleration is the same for the whole system, which is +2.5. You only account for the negative sign in the gravity term.
May 17th 2010, 06:16 AM
In body B, if you use acceleration -2.5 $m/s^{2}$, this means you are considering the upwards direction to be negative, so you need to take T as negative too, because it points upwards, and the weight as positive, because it points downwards.

You got your answer by considering the opposite: downwards negative and upwards positive.

It also works, as long as vectors pointing in the same direction have the same sign.
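The two conventions can be checked numerically. Taking g = 10 m/s^2 (consistent with the round answers in the thread) and assuming B accelerates upward, Newton's second law for B reads T - mg = ma:

```python
T, a, g = 3.75, 2.5, 10.0

# Consistent convention (upwards positive everywhere): T - m*g = m*a
m_correct = T / (g + a)

# Mixed convention (flipping only the sign of a, not T and the weight):
m_wrong = T / (g - a)

print(m_correct, m_wrong)  # 0.3 and 0.5, matching the two answers in the question
```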
May 17th 2010, 07:44 AM
Thanks for the quick replies guys!
6.1 Algebraic Numbers
Next: 6.2 Algebraic Number Fields Up: 6 Algebraic Number Fields Previous: 6 Algebraic Number Fields
How concisely can we specify an algebraic number? Since every equation in one variable with complex coefficients can be solved completely with complex numbers as solutions (Gauss's Fundamental
theorem of algebra), one way to specify an algebraic number is to specify it as a complex number. However, a real (or complex) number is (in general) only specified by its entire decimal expansion
which cannot be stored in a finite space. On the other hand it is enough to specify an algorithm that produces, on sufficient iteration, an arbitrarily close approximation to the complex number that
represents it.
Thus, one way to specify an algebraic number α is by its minimal polynomial P(T), the non-zero polynomial (with rational or integer coefficients) of least degree such that P(α) = 0 (such a P divides any other Q for which Q(α) = 0), together with an initial approximation x_0 of the form r + s·√-1, with r and s rational, so that successive iterations of Newton's method

x_{k+1} = x_k - P(x_k)/P'(x_k)

(where P'(T) denotes the (entirely formal) derivative of P(T) with respect to T) converge to the complex number representing α. (Which root is i and which is -i, perhaps only a physicist can tell!)
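As a concrete sketch (not from the notes), here is Newton's iteration run over the complex numbers for the minimal polynomial P(T) = T^2 + 1 of i, starting from a rational point of the stated form r + s·√-1:

```python
def newton_root(P, dP, x0, steps=50):
    """Iterate x_{k+1} = x_k - P(x_k)/P'(x_k) over the complex numbers."""
    x = x0
    for _ in range(steps):
        x = x - P(x) / dP(x)
    return x

P  = lambda t: t * t + 1   # minimal polynomial of i
dP = lambda t: 2 * t       # its formal derivative
root = newton_root(P, dP, complex(0.5, 0.5))
print(root)  # converges to 1j; starting at 0.5 - 0.5j would converge to -1j instead
```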
Another way is to make use of Hensel's lemma. We will define the discriminant D_P of a polynomial P below. For now it suffices to know that if a prime p does not divide D_P, then for any n such that p divides P(n), we have that p does not divide P'(n). In other words, D_P is the least common multiple of gcd(P(n), P'(n)) as n varies over all integers. Now, for n sufficiently large, it is clear that there is such a prime p (i.e., one not dividing D_P) so that p divides P(n). We can now specify the root α by requiring that it be congruent to n modulo p. Because of Hensel's lemma (which is Newton's iteration done modulo powers of p!) we can then produce integers n_k so that P(n_k) is divisible by p^k for every k. In modern language, we are replacing the approximation by complex numbers given above by a p-adic approximation. We can actually find a suitable p so that this can be done for all roots of the polynomial P. (This is a particular case of Chebychev's density theorem.)
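Hensel lifting for the same polynomial, sketched in Python (this assumes the starting root is simple, i.e. P'(n) is invertible modulo p): with P(T) = T^2 + 1 and p = 5, the root n = 2 mod 5 lifts to a square root of -1 modulo any power of 5.

```python
def hensel_lift(P, dP, n, p, k):
    """Lift a simple root n of P mod p to a root mod p^k
    (Newton's iteration done modulo powers of p)."""
    pk = p
    for _ in range(k - 1):
        pk *= p
        inv = pow(dP(n), -1, pk)   # modular inverse of P'(n), exists for a simple root
        n = (n - P(n) * inv) % pk
    return n

P  = lambda t: t * t + 1
dP = lambda t: 2 * t
n4 = hensel_lift(P, dP, 2, 5, 4)   # 2^2 + 1 = 5, so 2 is a root mod 5
print(n4, P(n4) % 5**4)            # 182, and P(182) = 33125 is divisible by 5^4 = 625
```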
An entirely less obvious problem is how we can perform common arithmetic operations on algebraic numbers when they are represented in this fashion. For that reason, and for the reason mentioned at
the beginning of this section we now turn to the matrix representation of algebraic numbers.
Kapil Hari Paranjape 2002-10-20
CString Functions: Replace
Replacing a String Occurrence
The CString class provides a mechanism for deleting an occurrence of a certain string in another string. The function for performing this is called Replace. The CString::Replace() function allows you
to control the content of a string especially if you are getting the string from an unpredictable source.
Imagine the user is supposed to type a quadratic equation such as 3x^2+5. There are millions of ways the user can type it. The problem is that if you are planning on resolving the equation, before even getting to the solutions of the equation, you need to be able to "know" what the equation is made of; you cannot just try to retrieve a, b, and c. The user could type 3 x^2+ 5 or 3x ^2 +5 or 3 x ^ 2
+ 5 or 3x ^2+ 5. As you can see, the possibilities are as numerous as imaginable. Remember that a string can consist of an empty space. Therefore, one of the first things you should perform is to
remove any empty space in the equation. Eventually, you will use other functions to analyze the equation, find the parentheses if any, find the special characters such as ^ usually used to express
the power in computer languages; you might also want to know if the user typed the equation in the form of Ax^2 + B = C.
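The normalize-first idea is language-independent. A quick sketch outside MFC (Python used purely for illustration) shows that all the spellings listed above collapse to one canonical string once spaces are removed, which is exactly what CString::Replace(" ", "") does below:

```python
def normalize(equation):
    """Mirror CString::Replace(" ", ""): delete every space the user typed."""
    return equation.replace(" ", "")

variants = ["3 x^2+ 5", "3x ^2 +5", "3 x ^ 2 + 5", "3x^2+5"]
canonical = {normalize(v) for v in variants}
print(canonical)  # {'3x^2+5'} -- one spelling left to parse
```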
1. Create a dialog based application using MFC.
2. Add an Edit Box control to the dialog. Change its identifier to IDC_EQUATION
3. Add a Button control to the dialog. Change its identifier to IDC_BTN_REMOVESPC and its caption to &Remove Space. Resize the button to make sure the caption is completely visible.
4. Press Ctrl + W to access the ClassWizard.
5. On the MFC ClassWizard property sheet, click the Member Variables property page.
6. Double-click IDC_EQUATION.
7. Change the name of the variable to m_Equation and make sure its Category Value is CString then click OK.
8. Click the MessageMaps property page.
9. Create a function for the IDC_BTN_REMOVESPC button using the BN_CLICKED message. Rename the function OnRemoveSpace
10. Implement the function as follows:
Listing 1

void CExoDialog1Dlg::OnRemoveSpace()
{
    UpdateData(TRUE);             // copy the edit control's text into m_Equation
    m_Equation.Replace(" ", ""); // delete every space in the equation
    UpdateData(FALSE);            // write the cleaned string back to the control
}
11. To test the program, press Ctrl + F5
The Four Color Theorem
How many different colors are sufficient to color the countries on a map in such a way that no two adjacent countries have the same color? The figure below shows a typical arrangement of colored regions.
After examining a wide variety of different planar graphs, one discovers the apparent fact that every graph, regardless of size or complexity, can be colored with just four distinct colors. This
"four-color conjecture" was first noted by August Ferdinand Mobius in 1840. In 1852 a young man named Francis Guthrie wrote about the problem in a letter to his brother Frederick, then a student at
University College in London. Neither of the brothers was able to prove the conjecture, so Frederick asked one of his professors, Augustus DeMorgan (1806-1871). DeMorgan too was unable to prove the
conjecture, and after recognizing the difficulty of the problem, he wrote to Sir William Rowan Hamilton (1805-1865) to ask for help. It might seem that this problem would be irresistible to Hamilton,
since anything having to do with the number "four" immediately suggests a connection with his beloved quaternions. Also, Hamilton made contributions to graph theory (such as the idea of a Hamiltonian
circuit, i.e., a path along the edges of a graph that visits each vertex exactly once), a subject that was developed largely through efforts to prove the four color conjecture. Nevertheless, Hamilton
immediately wrote back that he did not believe he would solve DeMorgan's "quaternion of colors" any time soon.
The coloring of geographical maps is essentially a topological problem, in the sense that it depends only on the connectivities between the countries, not on their specific shapes, sizes, or
positions. We can just as well represent each country by a single point (vertex), and the adjacency between two bordering countries can be represented by a line (edge) connecting those two points.
It's understood that connecting lines cannot cross each other. A drawing of this kind is called a planar graph. A simple map (with just five "countries") and the corresponding graph are shown below.
A graph is said to be n-colorable if it's possible to assign one of n colors to each vertex in such a way that no two connected vertices have the same color. Obviously the above graph is not
3-colorable, but it is 4-colorable. The Four Color Theorem asserts that every planar graph - and therefore every "map" on the plane or sphere - no matter how large or complex, is 4-colorable. Despite
the seeming simplicity of this proposition, it was only proven in 1976, and then only with the aid of computers.
Notice that the above graph is "complete" in the sense that no more connections can be added (without crossing lines). The edges of a complete graph partition the graph plane into three-sided
regions, i.e., every region (including the infinite exterior) is bounded by three edges of the graph. Every graph can be constructed by first constructing a complete graph and then deleting some
connections (edges). Clearly the deletion of connections cannot cause an n-colorable graph to require any additional colors, so in order to prove the Four Color Theorem it would be sufficient to
consider only complete graphs.
Although DeMorgan was unable to prove the four-color conjecture, he did observe that no more than four regions on the plane can all be in mutual contact with each other. The graph of a set of three
mutually adjoining regions is simply a topological triangle, and if we add a fourth region, it is represented by a fourth vertex in the graph, which must be located either inside or outside the
triangle formed by the graph of the original three vertices. In either case, we must then connect this fourth vertex with each of the three original vertices so that all four of the regions are
mutually adjoining. Having done this, we can slide the vertices around (if necessary) to bring the graph into the form shown below.
This is the unique graph of four mutually adjoining plane regions (V = 4, E = 6), and also of the tetrahedron. It clearly divides the graph plane into four isolated regions, one region exterior to
the big triangle, and the three interior regions. A fifth vertex added to this graph will be able to have edges that reach only three of the four existing vertices, because each of the four regions
is completely bounded by three edges that block access to one of the existing vertices. Hence there does not exist a plane graph of five mutually connected vertices, so there does not exist a set of
five mutually adjoining regions on the plane.
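Both halves of DeMorgan's observation can be checked by brute force for the four-region graph (the complete graph on four vertices described above): it cannot be properly colored with three colors, but it can with four.

```python
from itertools import product

def colorable(vertices, edges, n):
    """Brute force: does some assignment of n colors leave no edge monochromatic?"""
    for assignment in product(range(n), repeat=len(vertices)):
        color = dict(zip(vertices, assignment))
        if all(color[u] != color[v] for u, v in edges):
            return True
    return False

V = ['a', 'b', 'c', 'd']                                  # four mutually adjoining regions
E = [(u, v) for i, u in enumerate(V) for v in V[i + 1:]]  # all six edges
print(colorable(V, E, 3), colorable(V, E, 4))             # False True
```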
Of course, the non-existence of a graph with five mutually adjoining vertices does not automatically imply the non-existence of a graph requiring five distinct colors. For example, there exist graphs
in which the largest subset of mutually adjoining vertices is two, and yet for which three colors are required. This is the case with any simple loop with an odd number of vertices, as illustrated in
the left-hand figure below. Similarly there are graphs containing no subset of four mutually adjoining vertices that nevertheless require four colors, such as the graph shown on the right.
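Both examples can be verified by exhaustive search. The exact edge lists of the figures aren't reproduced here, so the sketch below uses the five-cycle C5 for the left-hand case and, presumably matching the right-hand figure, an odd wheel (C5 plus a central hub, the "central yellow vertex" mentioned later): it contains no four mutually adjoining vertices, yet needs four colors.

```python
from itertools import product

def chromatic_number(vertices, edges):
    """Smallest n admitting a proper n-coloring (brute force)."""
    for n in range(1, len(vertices) + 1):
        for assignment in product(range(n), repeat=len(vertices)):
            color = dict(zip(vertices, assignment))
            if all(color[u] != color[v] for u, v in edges):
                return n
    return len(vertices)

c5 = [(i, (i + 1) % 5) for i in range(5)]   # odd cycle: largest clique is an edge
w5 = c5 + [(5, i) for i in range(5)]        # odd wheel: largest clique is a triangle
chi_c5 = chromatic_number(range(5), c5)
chi_w5 = chromatic_number(range(6), w5)
print(chi_c5, chi_w5)  # 3 4
```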
Therefore, we must consider the possibility that five colors might be required for some graph, even though it's not possible for a graph to contain any set of five mutually connected vertices.
On the other hand, it could be argued that the two example shown above can each be reduced, clarifying the causal links between different parts of the graphs. For example, in the left figure we have
an alternating sequence of red, green, red, green, and if these were the only two available colors, it's clear that the chain must continue to alternate, no matter how long - or how short. Thus we
could just as well replace any chain of the form rgrg...rg with the simple pair rg, in which case the left-hand figure above consists of three mutually connected vertices. Likewise if we replace the
circumferential chain rgrg around the central yellow vertex in the right hand figure with just the pair rg, we get a set of four mutually connected vertices.
In view of this, it's tempting to think that DeMorgan's observation somehow, perhaps indirectly, does ultimately account for the truth of the four-color theorem. Indeed if we construct a graph by
adding vertices, one at a time, and make the maximum number of connections at each stage, we will always find the graph plane divided into "triangular" regions, each of which has access to only three
vertices. Thus it follows that four colors suffice for any graph that can be constructed in this way. For example, consider the two graphs shown below.
These two graphs each have V = 6 vertices and E = 12 edges, and in fact the two graphs are topologically identical. The plane regions represented by the vertices "a" and "b" each have five adjoining
neighbors, whereas the vertices "e" and "f" each have four, and the vertices "c" and "d" each have three. These are complete graphs and, as noted above, a complete graph divides the entire graph
plane into "triangular" regions, i.e., regions bounded by three edges connecting three vertices. Nevertheless, depending on how we add the points, it is possible that when a vertex is added to a
graph it has more than three neighbors, so we cannot say automatically that four colors would suffice. For example, if vertex "a" is the last to be added, it would have five pre-existing neighbors,
and if four colors have already been used in those five vertices, the vertex "a" would require a fifth color.
However, we need not add vertex "a" last. Another topologically equivalent way of drawing the above graph is shown below
This shows that we could first assign three distinct colors to the vertices e,b,f, and then place the vertex "a" in this triangle, connect it to each of the three surrounding vertices, and give it a
fourth color. Then we can place vertex d inside the triangle abe and give it the same color as f. Then we can place vertex c inside the triangle abf and give it the color of e. Hence the graph is
4-colorable. Moreover, any graph, or portion of a graph bounded by a triangle such as ebf , and having this hierarchical pattern of nested triangles, is 4-colorable. This is the case when the graph
contains some vertices with only three edges, and when those vertices and edges are removed, the remaining graph has some vertices with only three edges, which can be removed, and so on, until
finally all that remains is a single triangle.
Unfortunately (for the prospect of a simple proof), not every graph is of this hierarchical form. For example, consider the complete graph shown below.
This graph has V = 16 vertices and E = 42 edges. It is not hierarchical, because after removing the two vertices having only three edges each, the remaining graph has four or more edges attached to
each vertex. (We can also infer that this graph is not hierarchical from the fact that no three mutually connected vertices are directly connected to a fourth vertex.) Nevertheless, this graph is
4-colorable, as shown in the figure.
The above graph contains a "flat" patch consisting of a uniform hexagonal grid. An infinite hexagonal grid is obviously 4-colorable by means of the alternating layered pattern shown below.
In a sense, two extreme types of graph structures are the purely hierarchical graphs and the perfectly "flat" graphs, both of which are easily seen to be 4-colorable.
Notice that we need not consider graphs containing any configurations of four mutually connected vertices, because, as discussed previously, the edges of such a configuration necessarily split the
graph plane into four separate regions, each of which has access to only three of the four vertices. Hence if we can 4-color the interior of such a triangle, we can 4-color the tetrahedral "frame" as
well. If we could reduce all the causal links in a graph to their bare minimums, it might be possible to reduce every n-colorable graph to a graph with just n mutually connected vertices, and hence n
could never exceed four. However, the reduction of graphs with more than two colors is not a trivial task.
If we denote the number of vertices, edges, and faces (i.e., the bounded regions) of a planar graph by V, E, and F respectively, then Euler's formula for a plane (or a sphere) is V - E + F = 2.
Furthermore, each face of a complete graph is bounded by three edges, and each edge is on the boundary of two faces, so we have F = 2E/3, and Euler's formula for a complete planar graph is simply E =
3V - 6. Now, each edge is connected to two vertices, so the total number of attachments (in a complete graph) is 2E = 6V - 12, and hence the average number of attachments per vertex is 6 - 12/V. For
any incomplete graph, the total number of attachments is less. Consequently the average number of attachments per vertex for any graph (with a finite number of vertices) is less than 6, which implies
that at least one vertex has only five or fewer attachments.
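This counting argument is easy to check computationally. The following sketch (the example graph and the helper function are my own illustrative choices, not from the text) confirms the bound E <= 3V - 6 and the existence of a vertex with five or fewer attachments:

```python
# Check two consequences of Euler's formula for a planar graph given as
# an edge list: E <= 3V - 6, hence some vertex has five or fewer edges.

def min_degree(edges):
    """Minimum vertex degree of the graph described by an edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return min(deg.values())

# The octahedron: V = 6, E = 12, every face triangular.
octahedron = [(0,1),(0,2),(0,3),(0,4),(1,2),(1,4),
              (1,5),(2,3),(2,5),(3,4),(3,5),(4,5)]
V = 6
assert len(octahedron) <= 3*V - 6      # Euler bound: 12 <= 12
assert min_degree(octahedron) <= 5     # guaranteed by the averaging argument
```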
If we have six available colors, a vertex with only five neighbors obviously imposes no constraint on the coloring of the other vertices, because, regardless of the colors of its five (or fewer)
neighbors, we can assign it a color without exceeding the six available colors. Therefore if we delete this vertex and all its connections from the graph, creating a graph with one fewer vertex,
it's clear that if the resulting graph is 6-colorable, then so was the original graph. Moreover, Euler's formula assures us that this reduced graph also contains at least one vertex with five or
fewer neighbors, so we can apply this procedure repeatedly, reducing the graph eventually to one with just 6 vertices, which is obviously 6-colorable. Hence the original graph is 6-colorable.
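The reduction just described is effectively an algorithm. Here is a minimal sketch (function and variable names are mine) that 6-colors a planar graph by repeatedly deleting a vertex of degree five or less and assigning a free color on re-insertion:

```python
def six_color(adj):
    """Color a planar graph with at most 6 colors.

    adj maps each vertex to the set of its neighbors. Works because a
    planar graph always contains a vertex of degree <= 5 (Euler's formula).
    """
    adj = {v: set(ns) for v, ns in adj.items()}   # private copy
    order = []
    # Repeatedly remove a vertex with five or fewer remaining neighbors.
    while adj:
        v = next(u for u in adj if len(adj[u]) <= 5)
        order.append((v, adj.pop(v)))
        for ns in adj.values():
            ns.discard(v)
    color = {}
    # Re-insert in reverse order; at most 5 colored neighbors leaves a
    # free color among the six available.
    for v, ns in reversed(order):
        used = {color[n] for n in ns}
        color[v] = next(c for c in range(6) if c not in used)
    return color

octa = {0: {1,2,3,4}, 1: {0,2,4,5}, 2: {0,1,3,5},
        3: {0,2,4,5}, 4: {0,1,3,5}, 5: {1,2,3,4}}
col = six_color(octa)
assert all(col[u] != col[v] for u in octa for v in octa[u])
```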
So, we've seen that Euler's formula immediately implies that no graph can require more than six colors. Furthermore, with just a little more work, we can also show that no graph can require more than
five colors. (Ultimately we will see that no graph can require more than four colors, but it's worthwhile to begin with the proof of the 5-colorability of every planar graph.) Obviously every graph
with five or fewer vertices is 5-colorable, so if there exists a finite graph that requires more than five colors, it must have more than five vertices. Let the positive integer V[6] denote the
smallest number of vertices on which there exists a graph that requires six (or more) colors. By minimality, every graph with fewer than V[6] vertices is 5-colorable.
Now, assuming the existence of a graph that requires more than five colors, we can consider one that has exactly V[6] vertices. By Euler's formula, this graph must contain at least one vertex with
five or fewer connections. However, it cannot contain any vertex with just four (or fewer) connections, because if it did, we could delete such a vertex and leave a graph with just V[6] - 1 vertices,
which is 5-colorable by definition. Re-inserting the deleted vertex would clearly have no effect on the 5-colorability of the graph, because the vertex has only four (or fewer) neighbors, so the
original graph must be 5-colorable, contradicting our assumption. Therefore, a graph with V[6] vertices that requires more than five colors cannot contain any vertex with just four or fewer connections.
Since Euler's formula implies that the graph contains at least one vertex with five or fewer connections, the only remaining possibility is that the graph contains a vertex with exactly five
connections. However, this too leads to a contradiction. To show this, it's helpful to introduce the notion of a k-cluster, which is specified by a set of k distinct colors and one particular vertex
that has one of those colors. The original vertex is included in the cluster, and, in addition, every vertex with one of the k specified colors that neighbors a vertex in the cluster is also in the
cluster. By definition the only vertices outside a cluster that are directly connected to vertices inside the cluster have colors that are not in the specified set of k colors. Therefore, we can
apply any permutation of the k colors to the vertices in a cluster without invalidating the coloration. In particular, a 2-cluster is a contiguous set of vertices, each with one of two specified
colors, and we can transpose these two colors without upsetting the coloration of a graph.
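The color transposition within a 2-cluster (often called a Kempe chain) can likewise be sketched in code; the names and the small example graph here are illustrative choices, not part of the article:

```python
def kempe_swap(color, adj, start, c1, c2):
    """Transpose colors c1 and c2 within the 2-cluster (Kempe chain)
    containing `start`: the maximal connected set of vertices whose
    colors are c1 or c2."""
    if color[start] not in (c1, c2):
        return
    cluster, stack = {start}, [start]
    while stack:                      # flood-fill the cluster
        v = stack.pop()
        for n in adj[v]:
            if n not in cluster and color[n] in (c1, c2):
                cluster.add(n)
                stack.append(n)
    # Every neighbor outside the cluster has a color other than c1, c2,
    # so transposing inside the cluster preserves a proper coloring.
    for v in cluster:
        color[v] = c2 if color[v] == c1 else c1

# A path 0-1-2 colored 0,1,0: swapping colors 0 and 1 from vertex 0
# flips the whole chain, and the coloring stays proper.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
color = {0: 0, 1: 1, 2: 0}
kempe_swap(color, adj, 0, 0, 1)
assert color == {0: 1, 1: 0, 2: 1}
assert all(color[u] != color[v] for u in adj for v in adj[u])
```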
Now consider a graph containing a vertex with exactly five immediate neighbors, of five distinct colors, as illustrated below.
Since the uncolored vertex in the center has neighbors of five distinct colors, it might seem that a sixth color is required. However, notice that we can transpose the blue and green colors in the
blue/green 2-cluster attached to the upper left vertex of the central pentagon, so that the upper left vertex is green instead of blue. Once we have done this, the uncolored vertex in the center has
neighbors of only four distinct colors. If we delete the central vertex, the overall graph has V[6] - 1 vertices, so it is 5-colorable, but re-inserting this vertex requires no sixth color (once we
have transposed the blue and green in the small 2-cluster as described), so the original graph is 5-colorable.
The only possible objection to the transposing of the colors in a cluster as a means of reducing the number of distinct colors around the perimeter of the pentagon is that we cannot necessarily
change the color of any vertex to the color of one of the opposite vertices of the pentagon, because two opposite vertices of a pentagon might be in the same 2-cluster. This is the case, for
instance, with the yellow and red vertices of the pentagon in the figure above. If we transpose the red and yellow colors in this 2-cluster, there would still be five distinct colors around the
perimeter of the pentagon. However, if such a cluster exists, connecting two opposite vertices of the pentagon, we are guaranteed that at least one other pair of vertices are cut off from each other,
i.e., they cannot be in the same 2-cluster. For example, the blue/green 2-cluster outlined in red in the above figure cannot be part of the blue/green 2-cluster attached to the upper right of the
pentagon. In general, for any vertex connected to exactly five other vertices, we can always (without invalidating the coloration) change the color of at least one of the five neighbors so that there
are only four distinct colors. It follows, using the reduction argument described in the preceding paragraph, that no graph can require more than five colors.
This still leaves open the question of whether five colors are ever actually required, or whether four colors will always suffice. If there exist planar graphs requiring five distinct colors, there
must be a positive integer V[5] that is the smallest number of vertices on which such a graph can be constructed. Let's call a graph with V[5] vertices that requires five colors a minimal 5-color
graph. By the reduction argument described previously, such a graph obviously cannot contain any vertex with three or fewer connections. Moreover, by considering 2-clusters again, we can prove that
such a graph cannot contain any vertex with just four connections. To see why, consider the portion of a graph illustrated below.
The uncolored vertex in the center has just four neighbors, which we've colored red, yellow, green, and blue. This might seem to require a fifth color for the central vertex, but in fact by permuting
the yellow and blue colors in the 2-cluster (outlined in red) above the central vertex we can transform this coloration so that the central uncolored vertex has neighbors of only three distinct
colors (red, green, and blue). Since the overall graph from which this configuration was taken is assumed to have exactly V[5] vertices, it follows that if we delete the central vertex, the resulting
graph has only V[5] - 1 vertices, so it is 4-colorable, but we can clearly add the central vertex back without requiring a fifth color, so the whole graph must be 4-colorable, contradicting our
original assumption. This shows that it's always possible to modify a graph containing a vertex with only four connections in such a way that the vertex is connected to only three distinct colors. At
most, only one of the two pairs of opposite vertices can be linked by a common 2-cluster, because if one pair is so linked, the other cannot be linked.
To summarize the argument up to this point, we've shown that no graph can require more than five colors, and we've also shown that if there exists a graph requiring five colors, there must exist a
minimal 5-color graph, and such a graph cannot contain any vertex with fewer than five connections. Also, by adding connections, we can "complete" any graph on V vertices so that all the faces are
triangular and there are a total of exactly 6V-12 attachments. This graph must still be a minimal 5-color graph because we haven't increased the number of vertices, and adding connections cannot
reduce the number of colors required, whereas no graph requires more than five colors. Therefore, letting a[5], a[6], a[7],... denote the number of vertices with precisely 5, 6, 7,... attachments
respectively, we must have a minimal 5-color graph such that

5a[5] + 6a[6] + 7a[7] + 8a[8] + 9a[9] + ... = 6V - 12

where V = a[5] + a[6] + a[7] + a[8] + a[9] + ... Substituting this expression for V into the above formula and re-arranging, we have the condition

a[5] - a[7] - 2a[8] - 3a[9] - ... = 12
This places fairly severe constraints on any putative complete minimal 5-color graph. For example, if we consider only such graphs with no more than six attachments at any single vertex, then this
formula implies a[5] = 12. In other words, a complete minimal 5-graph with no more than six attachments per vertex must have exactly 12 vertices with five attachments each. This suggests that these
12 vertices are arranged globally in the pattern of an icosahedron, and the remaining vertices with six attachments per vertex are in a regular hexagonal pattern, filling in the "faces" of the
icosahedron. (This pattern is the basis for "geodesic domes"). The fundamental graph of this type, with a[6] = 0, is shown below.
We can give explicit 4-colorings of every such pattern, so such graphs can be shown explicitly to require no more than four colors. Therefore, if a complete minimal 5-color graph exists, it must
contain at least one vertex with seven or more attachments.
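As a quick numerical check of these counting relations, the icosahedral graph itself (with its well-known values V = 12 and E = 30) satisfies them exactly:

```python
# Icosahedron: V = 12 vertices, E = 30 edges, and all twelve vertices
# have exactly five attachments (a5 = 12, a6 = 0).
V, E = 12, 30
a5, a6 = 12, 0
assert 5*a5 + 6*a6 == 2*E == 6*V - 12    # total attachments: 60
F = 2*E // 3                             # triangular faces: 20
assert V - E + F == 2                    # Euler's formula
```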
It was not until 1976 that, with the help of modern computers, the 4-color conjecture was finally proven to be true. The proof, developed by Appel and Haken based on ideas of several earlier people
(Kempe, Heawood, Birkhoff, etc) consisted of finding a set of subgraphs, at least one of which must be contained in any normal planar graph (i.e., the set is unavoidable), and such that if a graph
containing one of those sub-graphs is not 4-colorable, then neither is a graph with one fewer vertices. This leads immediately to a contradiction, because it implies that, if we hypothesize the
existence of a graph that is not 4-colorable, we can always construct another graph, with fewer vertices, that is also not 4-colorable, eventually arriving at a graph with only four vertices, but
this graph obviously is 4-colorable. Hence we have a contradiction, so we can conclude that the original hypothesis was false, i.e., there does not exist a graph that is not 4-colorable. The
unavoidable set found by Appel and Haken consisted of nearly 1500 subgraphs, and many of these required considerable analysis to prove that they were "reducible". The development of this unavoidable
set, and the proofs of reducibility for its members, was carried out largely on computers, so the proof is notable (and slightly controversial) as an early example of a mathematical proposition whose
proof can only be carried out on a computer, and is, in some sense, beyond human mental capabilities to verify.
It's interesting to consider the problem of determining a 4-coloring for any given graph from a purely algebraic standpoint (harking back to Hamilton's quaternions). We might stipulate that each
vertex is to be assigned one of four possible values, and the conditions imposed by the connections would be represented by algebraic equations, one equation for each connection. Part of the
difficulty of this approach is that the condition of adjacency is not an equality, it is an inequality, i.e., we require that two vertices have unequal values (from among the four possible values).
Also, there is necessarily a great deal of ambiguity in the solution, because any permutation of the colors leaves a solution intact. In addition, there is even more ambiguity, since in general there
can be more than one distinct 4-coloration of a graph.
Given the four allowable (distinct) values a,b,c,d, we could algebraically impose the requirement for every vertex to have one of these values by requiring that the value u assigned to any vertex
satisfies f(u) = 0, where the polynomial f is defined as

f(u) = (u - a)(u - b)(u - c)(u - d)

Then we could algebraically impose the required inequality on the values u,v assigned to two connected vertices by stipulating that g(uv) = 0, where the polynomial g is defined as the product whose roots are the pairwise products of distinct allowed values

g(w) = (w - ab)(w - ac)(w - ad)(w - bc)(w - bd)(w - cd)

The equation g(uv) = 0 is satisfied if and only if u and v have distinct values from the set {a,b,c,d}, provided the values are chosen so that no square a^2, ..., d^2 coincides with a cross product. For example, the set {-2,-1,1,2} gives

f(u) = (u^2 - 1)(u^2 - 4)        g(w) = (w - 2)^2 (w + 2)^2 (w + 1)(w + 4)
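One natural form for g takes the pairwise products of distinct allowed values as its roots; the following sketch (my own check, not from the article) verifies that this gives g(uv) = 0 exactly for distinct u, v drawn from the example set {-2, -1, 1, 2}:

```python
vals = [-2, -1, 1, 2]                  # the four allowed "color" values

def f(u):
    """f vanishes exactly on the four allowed values."""
    p = 1
    for a in vals:
        p *= (u - a)
    return p

# Roots of g: the pairwise products of *distinct* allowed values.
g_roots = {u*v for u in vals for v in vals if u != v}   # {2, -2, -4, -1}

def g(w):
    p = 1
    for r in g_roots:
        p *= (w - r)
    return p

for u in vals:
    assert f(u) == 0
    for v in vals:
        # g(uv) = 0 precisely when u and v are distinct
        assert (g(u*v) == 0) == (u != v)
```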
To illustrate, consider again the octahedral graph (which is 3-colorable):
The algebraic conditions for the individual "color" values assigned to the six variables A,B,C,D,E,F are

f(A) = f(B) = f(C) = f(D) = f(E) = f(F) = 0

and the algebraic conditions imposed by the twelve connections are

g(AB) = g(AC) = g(AD) = g(AE) = g(BC) = g(BD) = 0
g(BF) = g(CE) = g(CF) = g(DE) = g(DF) = g(EF) = 0

This amounts to 18 equations in the 6 variables. We can, however, without loss of generality stipulate the "colors" of three of the variables, say, A = 1, B = -1, C = 2, since these three must have mutually distinct values. This leaves us with the following twelve equations in the three unknowns D,E,F

f(D) = 0    g(D) = 0     g(-D) = 0
f(E) = 0    g(E) = 0     g(2E) = 0
f(F) = 0    g(-F) = 0    g(2F) = 0
g(DE) = 0   g(DF) = 0    g(EF) = 0
The three equations f(D) = 0, g(D) = 0, and g(-D) = 0 jointly have only two solutions, namely, D = 2 and D = -2. Therefore those three equations can be replaced with the single equation D^2 - 4 = 0.
Likewise the three equations involving only E can be reduced to E^2 + 3E + 2 = 0, and the three equations involving only F can be reduced to F^2 + F - 2 = 0. Thus we have six equations in the three unknowns D, E, and F.
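Rather than reducing the equations by hand, the admissible values of D, E, F can be found by brute force. The adjacency constraints used below (D adjacent to A and B, E to A and C, F to B and C, with D, E, F mutually adjacent) are inferred from the reductions described in the text; this is a sketch, not the article's own method:

```python
vals = [-2, -1, 1, 2]
products = {u*v for u in vals for v in vals if u != v}

def distinct(u, v):
    """Two adjacent vertices must take distinct values: g(uv) = 0."""
    return u*v in products

A, B, C = 1, -1, 2                  # fixed without loss of generality
solutions = [(D, E, F)
             for D in vals for E in vals for F in vals
             if distinct(A, D) and distinct(B, D)       # edges AD, BD
             and distinct(A, E) and distinct(C, E)      # edges AE, CE
             and distinct(B, F) and distinct(C, F)      # edges BF, CF
             and distinct(D, E) and distinct(D, F) and distinct(E, F)]
# Every surviving D satisfies D^2 - 4 = 0, as in the reduction above.
assert all(D in (-2, 2) for D, E, F in solutions)
```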
A different approach would be to assign a four-dimensional vector to each vertex, and then the connection between two given vertices would be represented by requiring the dot product of those two
vectors to vanish, i.e., requiring that the two vectors be mutually orthogonal. The three vectors assigned to the vertices of any triangle would then be an orthogonal triad with some orientation in
4-dimensional space. Likewise the four vectors assigned to the vertices of a tetrahedral graph would be a complete set of four mutually orthogonal vectors (a tetrad), but still with arbitrary
orientation. For example, the conditions on the six vectors A,B,C,D,E,F assigned to the vertices of the octahedral graph shown above would be simply

A·B = A·C = A·D = A·E = B·C = B·D = B·F = C·E = C·F = D·E = D·F = E·F = 0
In this context the four color theorem tells us that a space of four dimensions is sufficient to enable us to assign one of the four basis vectors to each vertex of a planar graph in such a way that
the vectors of every pair of adjacent vertices are orthogonal. We can assume each vector is of unit length, so it has three independent components. Therefore, we have 18 independent components in
all, constrained by 12 equations. Naturally the system is underspecified, because we can apply an arbitrary permutation to the vectors.
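A small sketch makes the vector formulation concrete. The octahedron edge list below is an assumed labeling (opposite, non-adjacent pairs A-F, B-E, C-D), consistent with the reductions discussed earlier but chosen by me for illustration:

```python
# Assign one of four orthogonal basis vectors to each vertex; every edge
# then corresponds to a vanishing dot product.
basis = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]

def dot(p, q):
    return sum(x*y for x, y in zip(p, q))

edges = [('A','B'),('A','C'),('A','D'),('A','E'),('B','C'),('B','D'),
         ('B','F'),('C','E'),('C','F'),('D','E'),('D','F'),('E','F')]
# Three colors suffice: opposite vertices may share a basis vector.
coloring = {'A': 0, 'B': 1, 'C': 2, 'D': 2, 'E': 1, 'F': 0}
vec = {v: basis[c] for v, c in coloring.items()}
assert all(dot(vec[u], vec[w]) == 0 for u, w in edges)
```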
Since the edges of a complete graph partition the graph plane into three-sided regions whose vertices are three mutually connected points, we can uniquely assign the fourth color to each of these
regions. The result is useful for visualizing the symmetries of the coloring. For example, the 4-colored icosahedral graph discussed above looks like this:
DEPARTMENT OF INFORMATION TECHNOLOGY
Sub Name : CS1202-Digital Principles and System Design
Branch/Sem : IT / III
CS1202 Digital Principles and System Design
Question Bank
Unit – I Boolean algebra and Logic Gates
Part A
1. Find the hexadecimal equivalent of the decimal number 256
2. Find the octal equivalent of the decimal number 64
3. What is meant by weighted and non-weighted coding?
4. Convert A3BH and 2F3H into binary and octal respectively
5. Find the decimal equivalent of (123)9
6. Find the octal equivalent of the hexadecimal number AB.CD
7. Encode the ten decimal digits in the 2 out of 5 code
8. Show that the Excess – 3 code is self –complementing
9. Find the hexadecimal equivalent of the octal number 153.4
10. Find the decimal equivalent of (346)7
11. A hexadecimal counter capable of counting up to at least (10,000)10 is to be constructed.
What is the minimum number of hexadecimal digits that the counter must have?
12. Convert the decimal number 214 to hexadecimal
13. Convert (231.3)4 to base 7
14. Give an example of a switching function that contains only cyclic prime implicants
15. Give an example of a switching function for which the MSP form is not unique.
16. Express x+yz as the sum of minterms
17. What is prime implicant?
18. Find the value of X = A B C (A+D) if A=0; B=1; C=1 and D=1
19. What are 'minterms' and 'maxterms'?
20. State and prove De Morgan's theorem
21. Find the complement of x+yz
22. Define the following: minterm and maxterm
23. State and prove Consensus theorem
24. What theorem is used when two terms in adjacent squares of K map are combined?
25. How will you use a 4 input NAND gate as a 2 input NAND gate?
26. How will you use a 4 input NOR gate as a 2 input NOR gate?
27. Show that the NAND connection is not associative
28. What happens when all the gates is a two level AND-OR gate network are replaced by
NOR gates?
29. What is meant by multilevel gates networks?
30. Show that the NAND gate is a universal building block
31. Show that a positive logic NAND gate is the same as a negative logic NOT gate
32. Distinguish between positive logic and negative logic
33. Implement AND gate and OR gate using NAND gate
34. What is the exact number of bytes in a system that contains (a) 32K byte, (b) 64M bytes,
and (c) 6.4G byte?
35. List the truth table of the function:
F = xy + xy' + y'z
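As a worked illustration of the last question, a few lines of code can tabulate F = xy + xy' + y'z (the helper function is mine, not part of the question bank):

```python
from itertools import product

def F(x, y, z):
    # F = xy + xy' + y'z   (' denotes complement)
    return int((x and y) or (x and not y) or (not y and z))

print(" x y z | F")
for x, y, z in product([0, 1], repeat=3):
    print(f" {x} {y} {z} | {F(x, y, z)}")
```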
Part B
1. (a) Explain how you will construct an (n+1) bit Gray code from an n bit
Gray code
(b) Show that the Excess – 3 code is self -complementing
2. (a) Prove that (x1+x2).(x1'.x3'+x3).(x2' + x1.x3) = x1'x2
(b) Simplify using K-map to obtain a minimum POS expression:
(A'+B'+C+D)(A+B'+C+D)(A+B+C+D')(A+B+C'+D')(A'+B+C'+D')
3. Reduce the following equation using the Quine-McCluskey method of
minimization: F(A,B,C,D) = Σm(0,1,3,4,5,7,10,13,14,15)
4. (a) State and Prove idempotent laws of Boolean algebra.
(b) Using a K-map, find the MSP form of F = Σ(0,4,8,12,3,7,11,15) + Σd(5)
5 (a) With the help of a suitable example, explain the meaning of a redundant prime implicant
(b) Using a K-map, find the MSP form of F = Σ(0-3, 12-15) + Σd(7, 11)
6 (a) Simplify the following using the Quine-McCluskey minimization technique
D = f(a,b,c,d) = Σ(0,1,2,3,6,7,8,9,14,15). Does Quine-McCluskey take care of don't
care conditions? In the above problem, will you consider any don't care conditions?
Justify your answer
(b) List also the prime implicants and essential prime implicants for the above case
7 (a) Determine the MSP and MPS forms of F = Σ(0, 2, 6, 8, 10, 12, 14, 15)
(b) State and prove De Morgan's theorem
8 Determine the MSP form of the switching function
F = Σ(0,1,4,5,6,11,14,15,16,17,20-22,30,32,33,36,37,48,49,52,53,56,63)
9. (a) Determine the MSP form of the switching function
F(a,b,c,d) = Σ(0,2,4,6,8) + Σd(10,11,12,13,14,15)
(b) Find the minterm expansion of f(a,b,c,d) = a'(b'+d) + acd'
10 Simplify the following Boolean function by using the Tabulation Method
F = Σ(0, 1, 2, 8, 10, 11, 14, 15)
11 State and Prove the postulates of Boolean algebra
12 (a) Find a Min SOP and Min POS for f = b'c'd + bcd + acd' + a'b'c + a'bc'd
13 Find an expression for the following function using the Quine-McCluskey method
F = Σ(0,2,3,5,7,9,11,13,14,16,18,24,26,28,30)
14 State and Prove the theorems of Boolean algebra with illustration
15 Find the MSP representation for
F(A,B,C,D,E) = Σm(1,4,6,10,20,22,24,26) + Σd(0,11,16,27) using the K-map method
Draw the circuit of the minimal expression using only NAND gates
16 (a) Show that if all the gates in a two – level AND-OR gate networks are replaced by
NAND gates the output function does not change
(b) Why does a good logic designer minimize the use of NOT gates?
17 Simplify the Boolean function F(A,B,C,D) = Σm(1,3,7,11,15) + Σd(0,2,5). If don't
care conditions are not taken care of, what is the simplified Boolean function? What are
your comments on it? Implement both circuits
18 (a) Show that if all the gate in a two – level OR-AND gate network are replaced by NOR
gate, the output function does not change.
(b) Implement Y = (A+C)(A+D')(A+B+C') using NOR gates only
19 (a) F3 = f(a,b,c,d) = Σ(2,4,5,6)
F2 = f(a,b,c,d) = Σ(2,3,6,7)
F1 = f(a,b,c,d) = Σ(2,5,6,7). Implement the above Boolean functions
(i) When each is treated separately and
(ii)When sharing common term
(b) Convert a NOR with an equivalent AND gate
20 Implement the Switching function whose octal designation is 274 using NAND gates only
21 Implement the Switching function whose octal designation is 274 using NOR gates only
22 (a) Show that the NAND operation is not distributive over the AND operation
(b) Find a network of AND and OR gates to realize f(a,b,c,d) = Σm(1,5,6,10,13,14)
23 What are the advantages of using the tabulation method? Determine the prime implicants of the
following function using the tabulation method
F(W,X,Y,Z) = Σ(1,4,6,7,8,9,10,11,15)
23 (a) Explain the common postulates used to formulate various algebraic structures
(b) Given the following Boolean function F = A'C + A'B + AB'C + BC
Express it in sum of minterms & find the minimal SOP expression
Unit – II Combinational Logic
Part A
1. How will you build a full adder using 2 half adders and an OR gate?
2. Implement the switching function Y = BC' + A'B + D
3. Draw 4 bit binary parallel adder
4. Write down the truth table of a full adder
5. Write down the truth table of a full subtractor
6. Write down the truth table of a half subtractor
7. Find the syntax errors in the following declarations (note that names for primitive gates
are optional):
module Exmp1-3(A, B, C, D, F)
inputs A,B,C,
and g1(A,B,D);
not (D,B,A);
OR (F,B,C);
endmodule ;
8. Draw the logic diagram of the digital circuit specified by
module circt (A,B,C,D,F);
input A,B,C,D;
output F;
wire w,x,y,z,a,d;
and (x,B,C,d);
and (y,a,C);
and (w,z,B);
or (z,y,A);
or (F,x,w);
not (a,A);
not (d,D);
endmodule
9. Define Combinational circuits
10. Define Half and Full adder
11. Give the four elementary operations for addition and subtraction
12. Design the combinational circuit with 3 inputs and 1 output. The output is 1 when the
binary value of the inputs is less than 3.The output is 0 otherwise
13. Define HDL
14. What do you mean by carry propagation delay?
15. What is code converter?
16. Give short notes on Logic simulation and Logic synthesis
17. What do you mean by functional and timing simulation?
18. What do you mean by test bench?
19. Give short notes on simulation versus synthesis
20. Define half subtractor and full subtractor
Part B
1 Design a 4 bit magnitude comparator to compare two 4 bit number
2 Construct a combinational circuit to convert given binary coded decimal number into an
Excess 3 code for example when the input to the gate is 0110 then the circuit should
generate output as 1001
3 Design a combinational logic circuit whose outputs are F1 = a'bc + ab'c and
F2 = a' + b'c + bc'
4 (a) Draw the logic diagram of a 4-bit 7483 adder
(b) Using a single 7483, draw the logic diagram of a 4 bit adder/subtractor
5 (a) Draw a diode ROM, which translates from BCD 8421 to Excess 3 code
(b) Distinguish between Boolean addition and Binary addition
6 Realize a BCD to Excess 3 code conversion circuit starting from its truth table
7 (a) Design a full subtractor
(b) How does it differ from a full adder?
8 Design a combinational circuit which accepts 3 bit binary number and converts its
equivalent excess 3 codes
9 Derive the simplest possible expression for driving segments 'a' through 'g' in an 8421
BCD to seven segment decoder for decimal digits 0 through 9. Output should be
active high (decimal 6 should be displayed as 6 and decimal 9 as 9)
10 Write the HDL description of the circuit specified by the following Boolean function
(i) Y = (A+B+C)(A'+B'+C')
(ii) F = (AB' + A'B)(CD' + C'D)
(iii) Z = ABC + AB' + A(D+B)
(iv) T = [(A+B)(B'+C'+D')]
11 Design a 16 bit adder using four 7483 ICs
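For reference while working the BCD to Excess-3 conversion questions above, the code mapping is simply "add three"; a quick sketch (the function name is my own) makes the table concrete:

```python
def excess3(d):
    """Excess-3 code of a decimal digit: the 4-bit binary of d + 3."""
    return format(d + 3, '04b')

for d in range(10):
    print(d, format(d, '04b'), '->', excess3(d))

# The example from question 2 above: BCD 0110 (six) yields 1001.
assert excess3(6) == '1001'
```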
Unit – III Design with MSI Devices
Part A
1. What is a decoder? Obtain the relation between the number of inputs 'n' and outputs
'm' of a decoder
2. Distinguish between a decoder and a demultiplexer
3. Using a single IC 7485, draw the logic diagram of a 4 bit comparator
4. What is a decoder?
5. What do you mean by encoder?
6. Write short notes on the priority encoder
7. What is a multiplexer? Draw the logic diagram of an 8 to 1 line multiplexer
8. What do you mean by comparator?
9. Write the HDL description of the circuit specified by the following Boolean function
10. How does ROM retain information?
11. Distinguish between PAL and PLA
12. Give the classification of memory
13. What is refreshing? How is it done?
14. What is Hamming code?
15. Write short notes on memory decoding
16. List the basic types of programmable logic devices
17. What is PAL? How does it differ from PROM and PLA?
18. Write short notes on PROM, EPROM and EEPROM
19. How many parity bits are required to form a Hamming code if the message bits are 6?
20. How to find the location of parity bits in the Hamming code?
21. Generate the even parity Hamming codes for the following binary data
1101, 1001
22. A seven bit Hamming code is received as 11111101. What is the correct code?
23. Compare static RAMs and dynamic RAMs
24. Define Priority encoder
25. Define PLDs
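The Hamming-code questions above can be checked with a short script. This sketch assumes the common even-parity (7,4) convention with parity bits at positions 1, 2 and 4; the course may number bits differently:

```python
def hamming74_encode(d):
    """Even-parity (7,4) Hamming code for data bits d = [d3, d5, d6, d7],
    with parity bits at positions 1, 2 and 4 (one common convention)."""
    d3, d5, d6, d7 = d
    p1 = d3 ^ d5 ^ d7
    p2 = d3 ^ d6 ^ d7
    p4 = d5 ^ d6 ^ d7
    return [p1, p2, d3, p4, d5, d6, d7]   # positions 1..7

def hamming74_correct(code):
    """Return the corrected codeword; the syndrome gives the error position."""
    c = list(code)
    s = 0
    for parity_pos in (1, 2, 4):
        covered = [i for i in range(1, 8) if i & parity_pos]
        if sum(c[i - 1] for i in covered) % 2:   # even parity violated
            s += parity_pos
    if s:
        c[s - 1] ^= 1                            # flip the erroneous bit
    return c

word = hamming74_encode([1, 1, 0, 1])
corrupted = list(word)
corrupted[2] ^= 1                 # introduce a single-bit error
assert hamming74_correct(corrupted) == word
```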
Part B
1. Implement the switching function F = Σ(0,1,3,4,7) using a 4 input MUX and explain
2. Explain how you will build a 64 input MUX using nine 8 input MUXs
3. State the advantages of complex MSI devices over SSI gates
4. Implement the switching function F(A,B,C) = Σ( ,2,4,5) using the DEMUX 74156
5. Implement the switching function F = Σ(0,1,3,4,12,14,15) using an 8 input MUX
6. Explain how you will build a 16 input MUX using only 4 input MUXs
7. Explain the operation of 4 to 10 line decoder with necessary logic diagram
8. Draw a neat sketch showing implementation of Z1 = ab'd'e + a'b'c'e' + bc + de,
Z2 = a'c'e, Z3 = bc + de + c'd'e' + bd and Z4 = a'c'e + ce using a 5*8*4 PLA
9. Implement the switching functions:
Z1 = ab'd'e + a'b'c'e' + bc + de,
Z2 = a'c'e,
Z3 = bc + de + c'd'e' + bd and
Z4 = a'c'e + ce using a 5*8*4 PLA
10 Design a switching circuit that converts a 4 bit binary code into a 4 bit Gray code using
ROM array
11. Design a combinational circuit using a ROM that accepts a 3-bit number and
generates an output binary number equal to the square of the given input number
Unit – IV Synchronous Sequential Logic
Part A
1. Derive the characteristic equation of a D flip flop
2. Distinguish between combinational and sequential logic circuits
3. What are the various types of triggering of flip-flops?
4. Derive the characteristic equation of a T flip flop
5. Derive the characteristic equation of a SR flip flop
6. What is the race around condition? How is it avoided?
7. List the functions of asynchronous inputs
8. Define Master slave flip flop
9. Draw the state diagrams of 'T' FF and 'D' FF
10. Define Counter
11. What is the primary disadvantage of an asynchronous counter?
12. How synchronous counters differ from asynchronous counters?
13. Write a short note on counter applications
14. Compare Moore and Mealy models
15. When is a counter said to suffer from lock out?
16. What is the minimum number of flip flops needed to build a counter of modulus 8?
17. State the relative merits of series and parallel counters
18. What are Mealy and Moore machines?
19. When is a counter said to suffer from lockout?
20. What is the difference between a Mealy machine and a Moore machine?
21. Distinguish between synchronous and asynchronous sequential logic circuits
22. Derive the characteristic equation of a JK flip flop
23. How will you convert a JK flip flop into a D flip flop
24. What is meant by the term 'edge triggered'?
25. What are the principle differences between synchronous and asynchronous networks
26. What is lockout? How is it avoided?
27. Why is the pulse mode operation of asynchronous sequential logic circuits not very popular?
28. What are the advantages of shift registers?
29. What are the applications of a shift register?
30. How many flip –flops are needed to build an 8 bit shift register?
31. A shift register comprises JK flip-flops. How will you complement the contents of the register?
32. List the basic types of shift registers in terms of data movement.
33. Write short notes on the PRBS generator
34. Give the HDL dataflow description for T flip - flop
35. Give the HDL dataflow description for JK flip – flop
Part B
1 Draw the state diagrams and characteristic equations of T FF, D FF and JK FF
2 (a) What is the race around condition? How is it avoided?
(b) Draw the schematic diagram of a master slave JK FF and its input and output
waveforms. Discuss how it prevents the race around condition
3 Explain the operation of JK and clocked JK flip-flops with suitable diagrams
4 Draw the state diagrams of a JK flip-flop and a D flip-flop
5 Design and explain the working of a synchronous mod – 3 counter
6 Design and explain the working of a synchronous mod – 7 counter
7 Design a synchronous counter with states 0,1, 2,3,0,1 …………. Using JK FF
8 Using SR flip flops, design a parallel counter which counts in the sequence
000,111,101,110,001,010,000 ………….
9 Using JK flip flops, design a parallel counter which counts in the sequence
000,111,101,110,001,010,000 ………….
10 (a) Discuss a decade counter and its working principle
(b) Draw an asynchronous 4 bit up-down counter and explain its working
11 (a) How is the design of combinational and sequential logic circuits possible with PLA?
(b) Mention the two models in a sequential circuit and distinguish between them
12 Design a modulo 5 synchronous counter using JK FF and implement it. Construct its
timing diagram
13 A sequential machine has one input line where 0's and 1's are being incident. The
machine has to produce an output of 1 only when exactly two 0's are followed by a '1'
or exactly two 1's are followed by a '0'. Using any state assignment and JK
flip-flops, synthesize the machine
14 Using D flip-flops, design a synchronous counter which counts in the sequence
000, 001, 010, 011, 100, 101, 110, 111, 000
15 Using JK flip-flops, design a synchronous sequential circuit having one input and one
output. The output of the circuit is a 1 whenever three consecutive 1's are
observed. Otherwise the output is zero
16 Design a binary counter using T flip-flops to count in the following sequences:
(i) 000, 001, 010, 011, 100, 101, 110, 111, 000
(ii) 000, 100, 111, 010, 011, 000
17 (a) Design a synchronous binary counter using T flip-flops
(b) Derive the state table of a serial binary adder
18 Design a 3 bit binary up-down counter
19 (i) Summarize the design procedure for synchronous sequential circuits
(ii) Reduce the following state diagram
Unit – V Asynchronous Sequential Logic
Part A
1. Distinguish between fundamental mode and pulse mode operation of asynchronous
sequential circuits
2. What is meant by Race?
3. What is meant by critical race?
4. What is meant by race condition in digital circuit?
5. Define the critical race and non-critical race
6. What are races and cycles?
7. What is the significance of state assignment?
8. What are the steps for the analysis of asynchronous sequential circuit?
9. What are the steps for the design of asynchronous sequential circuit?
10. Write short notes on (a) Shared row state assignment
(b) One hot state assignment
11. What are Hazards?
12. What is a static 1 hazard?
13. What is a static 0 hazard?
14. What is dynamic hazard?
15. Define static 1 hazard, static 0 hazard, and dynamic hazard
16. Describe how to detect and eliminate hazards from an asynchronous network?
17. What is static hazard?
18. List the types of hazards?
19. How do you eliminate hazards?
20. Draw the waveforms showing a static 1 hazard
Part B
1. What is the objective of state assignment in asynchronous circuits? Give a hazard-free
realization for the following Boolean function f(A,B,C,D) = Σm(0,2,6,7,8,10,12)
2. Summarize the design procedure for asynchronous sequential circuits
a. Discuss hazards and races
b. What do you know about hardware description languages?
3. Design an asynchronous sequential circuit with 2 inputs X and Y and with one output Z.
Whenever Y is 1, input X is transferred to Z. When Y is 0, the output does not change for
any change in X. Use an SR latch for implementation of the circuit
4. Develop the state diagram and primitive flow table for a logic system that has 2 inputs, x
and y, and an output z, and reduce the primitive flow table. The behavior of the circuit is stated
as follows. Initially x = y = 0. Whenever x = 1 and y = 0 then z = 1; whenever x = 0 and y = 1
then z = 0. When x = y = 0 or x = y = 1 there is no change in z; it remains in the previous state. The
logic system has edge triggered inputs without having a clock. The logic system changes
state on the rising edges of the 2 inputs. Static input values are not to have any effect in
changing the Z output
5. Design an asynchronous sequential circuit with two inputs X and Y and with one output Z.
Whenever Y is 1, input X is transferred to Z. When Y is 0, the output does not change for
any change in X.
6. Obtain the primitive flow table for an asynchronous circuit that has two inputs x, y and one
output Z. An output z = 1 is to occur only during the input state xy = 01, and then only if
the input state xy = 01 is preceded by the input sequence.
7. A pulse mode asynchronous machine has two inputs. It produces an output whenever two
consecutive pulses occur on one input line only. The output remains at '1' until a pulse has
occurred on the other input line. Draw the state table for the machine.
8. (a) How will you minimize the number of rows in the primitive state table of an incompletely
specified sequential machine?
(b) State the restrictions on the pulse width in a pulse mode asynchronous sequential circuit.
9. Construct the state diagram and primitive flow table for an asynchronous network that has
two inputs and one output. The input sequence X1X2 = 00, 01, 11 causes the output to
become 1. The next input change then causes the output to return to 0. No other inputs will
produce a 1 output
Bessel Potential Space inequality
The Bessel Potential Space is defined for $s\in\mathbb{R}$ as,
$H^s(\mathbb{R}^n) = \{f\in L_2(\mathbb{R}^n) : (1+|\cdot|)^{s/2}\hat{f}(\cdot)\in L_2(\mathbb{R}^n)\}. $
This defines a Hilbert space such that for any $f,g\in H^s(\mathbb{R}^n)$,
$ \langle f, g\rangle = \int_{\mathbb{R}^n} \hat{f}(\omega)\overline{\hat{g}(\omega)} (1+|\omega|)^{s}d\omega. $
For any open set $\Omega\subset\mathbb{R}^n$ we have $H^s(\Omega)$ being the set of restrictions with norm,
$ \left\|f\right\|_{H^s(\Omega)} = \inf_{g\in H^s(\mathbb{R}^n)}\{\left\|g\right\|_{H^s(\mathbb{R}^n)} : g|_\Omega=f \} $
Does this definition of the norm ensure we have the following: Given an open set $\Omega\subset \mathbb{R}^n$ and open sets $\Omega_1, \Omega_2\subset \mathbb{R}^n$ such that $\Omega = \Omega_1\cup \Omega_2$ and $f\in H^s(\Omega)$,
$ \left\|f\right\|^2_{H^s(\Omega)}\leq \left\|f\right\|^2_{H^s(\Omega_1)} + \left\|f\right\|^2_{H^s(\Omega_2)}. $
sobolev-spaces bessel-potential
How exactly do you define the last norm (the difference is, generally speaking, not an open set)? – fedja Jul 25 '11 at 13:15
@fedja: True. I have changed the question to actually what I require. I'm hoping all this is well-defined now. – alext87 Jul 25 '11 at 13:25
1 Answer
The answer is negative anyway. Take $\mathbb R=(-\infty,a)\cup(-a,+\infty)$ with small $a>0$. Take $f=e^{-|x|}$. Then $\widehat f(y)\approx \frac 1{1+y^2}$. Now take $s=3-\delta$. $\|f\|_{H^s(\mathbb R)}$ is huge if $\delta$ is small. On the other hand, we can expand $f$ from $(-\infty,0)$ to a Schwartz function $g$. To make it into an extension from $(-\infty,a)$, we can just create a small "triangular dip" in $g$ with base and height of order $a$. This dip tends to $0$ in $H^s(\mathbb R)$ as $a\to 0$ (its Fourier transform is bounded by $C\min(a^2,y^{-2})$). Thus, we can get an extension with norm dominated by the norm of $g$ in $H^3$ for each half.

Side note: most people I know will denote your $H^s$ by $H^{s/2}$ (having in mind that $s$ is the "number of derivatives"). :)
[Numpy-discussion] python's random.random() faster than numpy.random.rand() ???
Robert Kern robert.kern at gmail.com
Sat Jan 27 16:08:27 CST 2007
Mark P. Miller wrote:
> Greetings:
> I've recently been working with numpy's random number generators and
> noticed that python's core random number generator is faster than
> numpy's for the uniform distribution.
> In other words,
> for a in range(1000000):
> b = random.random() #core python code
> is substantially faster than
> for a in range(1000000):
> b = numpy.random.rand() #numpy code
> For other distributions that I've worked with, numpy's random number
> generators are much faster than the core python generators.
Yes, the standard library's non-uniform generators are implemented in Python.
I'm not entirely sure where we are losing time in numpy, though. Perhaps the
generated Pyrex code for the interface is suboptimal. Or perhaps we ought to
compile with greater optimization.
> Any thoughts? Just wanted to mention it to you. Or should I be using
> some other approach to call numpy's uniform generator?
If you need a lot of values, request a lot of values all at once.
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-January/025729.html","timestamp":"2014-04-19T06:10:07Z","content_type":null,"content_length":"4369","record_id":"<urn:uuid:c2e64eaf-2120-4eb9-a1e8-1d6b51c3a16f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00209-ip-10-147-4-33.ec2.internal.warc.gz"} |
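The closing advice in the thread (request a lot of values at once) is easy to demonstrate. The sketch below is mine, not part of the archived message; exact timings depend on the machine and NumPy version, but the single vectorized call is typically much faster than the Python-level loop:

```python
import timeit

import numpy as np

def one_at_a_time(n):
    # One Python-level call per value: the per-call overhead dominates.
    return [np.random.rand() for _ in range(n)]

def all_at_once(n):
    # A single call fills an ndarray of n values; the overhead is paid once.
    return np.random.rand(n)

n = 100_000
t_loop = timeit.timeit(lambda: one_at_a_time(n), number=1)
t_vec = timeit.timeit(lambda: all_at_once(n), number=1)
print("loop: %.4fs, vectorized: %.4fs" % (t_loop, t_vec))
```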
Math Forum Discussions
Topic: What's wrong with cumtrapz?
Replies: 2 Last Post: May 4, 2001 10:09 AM
Re: What's wrong with cumtrapz?
Posted: May 4, 2001 10:09 AM
"Leung, Randolph [COPE/HKG]" <RckLeung@Copeland-Corp.com> wrote in message
> I have some experimental time traces to integrate. CUMTRAPZ seems
> to be a simple and easy option for me. To test it, I tried to integrate a
> simple SIN(X) time trace as follows,
> x = [0:pi/100:4*pi];
> y = sin( 2*pi*x );
> inty = cumtrapz( x', y' );
> I would expect a COS(X) time trace after CUMTRAPZ. The integrated
> time trace gave a cos pattern of variation, correct amplitude BUT was
> wrong in phase and shifted upwards, i.e. it is greater than zero for all
> x and gives 0, rather than 1, at x = 0. I am very confused with the
> results.
It seems you may be forgetting a bit of your calculus. You are numerically
taking the INDEFINITE integral of your function. In that case, you must be
prepared to add a constant to your solution. [I think it is really only
shifted "upward", and not really wrong in phase.] You need to determine the
appropriate constant from other conditions of your problem.
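Vaughan's point about the integration constant can be checked numerically. The following Python sketch (my own illustration, not from the thread) implements a cumulative trapezoid rule by hand on the simpler integrand sin(x), and shows that it returns the antiderivative pinned to start at 0, i.e. -cos(x) + 1, which is exactly the "shifted upward" curve the original poster saw:

```python
import math

def cumtrapz(y, x):
    """Cumulative trapezoidal integral of samples y over points x, starting at 0."""
    out = [0.0]
    for i in range(1, len(y)):
        out.append(out[-1] + 0.5 * (y[i] + y[i - 1]) * (x[i] - x[i - 1]))
    return out

xs = [i * math.pi / 1000 for i in range(2001)]   # [0, 2*pi]
ys = [math.sin(x) for x in xs]
ints = cumtrapz(ys, xs)

# The exact antiderivative family is -cos(x) + C; cumtrapz picks C so the
# integral starts at 0, i.e. it returns -cos(x) + 1: shifted up by 1.
err = max(abs(v - (1.0 - math.cos(x))) for v, x in zip(ints, xs))
assert err < 1e-5
```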
Date Subject Author
5/3/01 What's wrong with cumtrapz? "Leung, Randolph [COPE/HKG]"
5/4/01 Re: What's wrong with cumtrapz? Nabeel Azar
5/4/01 Re: What's wrong with cumtrapz? Timothy E. Vaughan | {"url":"http://mathforum.org/kb/thread.jspa?threadID=268175&messageID=867034","timestamp":"2014-04-20T03:25:22Z","content_type":null,"content_length":"19348","record_id":"<urn:uuid:3bc34dc8-ede8-4fb3-bb80-f4bda9532f05>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00158-ip-10-147-4-33.ec2.internal.warc.gz"} |
Radioactive Dating: Take Your Programming Skills Nuclear *
Time Required: Short (2-5 days)
Prerequisites: Familiarity with a programming language and basic programming algorithms
Material Availability: Readily available
Safety: No issues
*Note: This is an abbreviated Project Idea, without notes to start your background research, a specific list of materials, or a procedure for how to do the experiment. You can identify abbreviated
Project Ideas by the asterisk at the end of the title. If you want a Project Idea with full instructions, please pick one without an asterisk.
Have you ever wondered how it's possible to so accurately date ancient artifacts? Geologists, paleontologists, archeologists, and anthropologists use a statistical process like
radioactive isotope decay
to date objects through a method called
radioactive dating
(also known as
radiometric dating
). To learn more about that method, check out the geology science project
How Old Is That Rock? Roll the Dice & Use Radiometric Dating to Find Out
. In that particular Project Idea, radioactive decay of isotopes is modeled by rolling dice. While that procedure is a great way to grasp the concept, it would certainly be a time-consuming and
tedious process in the real world, even with samples of only 100 dice, which could scarcely be called a "large number." Furthermore, how many trials of rolling up to 100 dice over and over
again—while accurately keeping track of the results—would you be willing to do by hand? The most motivated student will only get to a pretty small number of trials. Why not get the help of a computer
to do this repetitious work for you?
A computer program can help you create a simulation of what would happen in real life. In this Abbreviated Project Idea, you will simulate the decay of radioactive isotopes. To get started, do you
know how to program a random event like rolling a die or a decaying isotope? The tool you will use for this is called a pseudorandom number generator, which creates almost-random numbers on a
computer. This will set you on the path to generating a radioactive decay curve on your computer. At the end of this Abbreviated Project Idea, you'll find some suggestions on how you can incorporate
this information into a computer science project.
As you roll a standard six-sided die, you know each number from 1–6 has an equal chance, or probability, to land on top; but you cannot predict which exact number it will be on your roll. This type
of process is called a random process. So how can you generate a sample of random numbers by using a perfectly logical machine like a computer? Most programming languages have a random number
generator. This is a function that will provide you with an almost-random (or pseudorandom) number. Check out the references provided in the Bibliography section to get a feel for the difference
between true randomness and computer-generated randomness, as well as some information about how computers generate these numbers. To test whether or not these pseudorandom numbers can help generate
the outcome of rolling dice or predict how isotopes decay, start by looking up the random number generator function for your choice of programming language and read trough its specifications. As an
example, in Microsoft® Excel® (a spreadsheet program), RAND() generates a random real number between 0 and 1. The RAND() function in Excel clearly is not going to do the job. You might be lucky; the
programming language of your choice might be able to provide a random integer between 1 and 6—exactly what you would get from rolling one die. If not, do you know how to translate a random real
number between 0 and 1 to a random integer between 1 and 6?
The algorithm below will guide you to the formula.
1. First, transform the random real number between 0 and 1 to a random real number between 1 and 7. You can do this by multiplying the generated number by 6 and adding 1, as shown in Formula 1.
Formula 1:
= RAND() * 6 + 1
Can you verify this results in a random real number between 1 and 7?
2. Then, round this number down to its nearest integer. In Excel, the function INT will do this for you. You can program this functionality yourself, or look up the function for rounding in your
choice of programming language. Formula 2, below, lists the final formula for Excel.
Formula 2:
= INT(RAND() * 6 + 1)
Can you verify this results in a random integer between 1 and 6?
Now you only have to execute this formula 100 times, and store the numbers, to generate a sample of numbers you might get by rolling 100 dice.
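For readers working in a general-purpose language rather than Excel, the same multiply-and-truncate transformation can be written, for example, in Python (an illustration of the formula above, not part of the original project):

```python
import random

def roll_die(rng=random.random):
    """Formula 2 in code: map a uniform real in [0, 1) to a die face in 1..6."""
    return int(rng() * 6 + 1)

# A sample of 100 rolls, like rolling 100 dice at once.
rolls = [roll_die() for _ in range(100)]
assert all(1 <= r <= 6 for r in rolls)
```

In practice Python already provides `random.randint(1, 6)`; spelling out the steps simply mirrors the Excel formula.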
The geology science project How Old Is That Rock? Roll the Dice & Use Radiometric Dating to Find Out can help you translate the result of rolling 100 dice to the decay of 100 isotopes and explains
how to add a time component and how to generate a decay curve.
Start by generating a decay curve for an isotope that decays with a chance of 1/6 in 1 time unit.
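As one possible starting point (a sketch under the stated assumptions; the function and variable names are my own), the whole simulation fits in a few lines:

```python
import random

def decay_curve(n0, p_decay, rng=random.random):
    """Simulate decay of n0 isotopes, each with probability p_decay of
    decaying per time unit. Returns the surviving count after each step."""
    counts = [n0]
    remaining = n0
    while remaining > 0:
        # Each surviving isotope independently "rolls the die" this time unit.
        remaining = sum(1 for _ in range(remaining) if rng() >= p_decay)
        counts.append(remaining)
    return counts

curve = decay_curve(100, 1 / 6)   # on average roughly 5/6 survive each step
```

Plotting `counts` against the step index gives the decay curve; starting from a larger number of isotopes, or averaging many runs, smooths it toward the exponential predicted by theory.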
Here are some suggestions to expand your exploration:
• Investigate how the decay curve changes when you use a larger number of initial isotopes to generate your decay curve.
• Investigate how the decay curve changes as the decay probability of your isotope changes. What does it look like for a short-lived isotope (an isotope with a higher probability of decaying in 1
time unit) or a long-lived isotope (an isotope with a lower probability of decaying in 1 time unit)?
• Some isotopes have several decay paths; they decay with a specific probability to one daughter isotope, and with a different probability to a different daughter isotope. As an advanced challenge,
can you change the model to accommodate for these decay patterns? Can you graph a decay curve and make predictions using these decays?
• Some isotopes have a decay chain; they decay with a specific probability to a first daughter isotope, which in turn is unstable and decays with a specific probability into a final (stable)
isotope. As another advanced challenge, can you change the model to accommodate for this decay pattern? Can you graph a decay curve for these decays?
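For the last challenge, a two-stage chain A -> B -> C can be simulated by tracking two populations per step. Everything below (names and probabilities alike) is illustrative, not taken from the project:

```python
import random

def decay_chain_step(parents, daughters, p_parent, p_daughter, rng=random.random):
    """Advance a two-stage decay chain A -> B -> C by one time unit.

    Each parent (A) decays to a daughter (B) with probability p_parent; each
    daughter decays to the stable product (C) with probability p_daughter.
    Returns the new (parents, daughters) counts.
    """
    decayed_parents = sum(1 for _ in range(parents) if rng() < p_parent)
    decayed_daughters = sum(1 for _ in range(daughters) if rng() < p_daughter)
    return parents - decayed_parents, daughters + decayed_parents - decayed_daughters

parents, daughters, history = 1000, 0, []
for _ in range(100):
    parents, daughters = decay_chain_step(parents, daughters, 1 / 6, 1 / 20)
    history.append((parents, daughters))
# The daughter count first rises, then falls as the chain drains into C.
```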
Sabine De Brabandere, PhD, Science Buddies
Last edit date: 2014-03-19
• Random.org. (n.d.). Introduction to Randomness and Random Numbers. Retrieved April 22, 2013, from http://www.random.org/randomness/
• HowStuffWorks. (n.d.). How can a totally logical machine like a computer generate a random number? Retrieved April 21, 2013, from http://computer.howstuffworks.com/question697.htm
If you like this project, you might enjoy exploring these related careers:
Computer Programmer
Computers are essential tools in the modern world, handling everything from traffic control, car welding, movie animation, shipping, aircraft design, and social networking to book publishing,
business management, music mixing, health care, agriculture, and online shopping. Computer programmers are the people who write the instructions that tell computers what to do.
Read more
Just as a doctor uses tools and techniques, like X-rays and stethoscopes, to look inside the human body, geoscientists explore deep inside a much bigger patient—planet Earth. Geoscientists seek to
better understand our planet, and to discover natural resources, like water, minerals, and petroleum oil, which are used in everything from shoes, fabrics, roads, roofs, and lotions to fertilizers,
food packaging, ink, and CD's. The work of geoscientists affects everyone and everything.
Read more
Statisticians use the power of math and probability theory to answer questions that affect the lives of millions of people. They tell educators which teaching method works best, tell policy-makers
what levels of pesticides are acceptable in fresh fruit, tell doctors which treatment works best, and tell builders which type of paint is the most durable. They are employed in virtually every type
of industry imaginable, from engineering, manufacturing, and medicine to animal science, food production, transportation, and education. Everybody needs a statistician!
Read more
Thank you for your feedback! | {"url":"http://www.sciencebuddies.org/science-fair-projects/project_ideas/CompSci_p045.shtml","timestamp":"2014-04-18T05:40:32Z","content_type":null,"content_length":"48195","record_id":"<urn:uuid:7e151b14-bb71-4ed6-adf4-57e132fd2c5e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coexistence in the chemostat as a result of metabolic by-products
October 2006
Volume 53
Issue 4
pp 556-584
Coexistence in the chemostat as a result of metabolic by-products
Purchase on Springer.com
$39.95 / €34.95 / £29.95*
Rent the article at a discount
Rent now
* Final gross prices may vary according to local VAT.
Get Access
Classical chemostat models assume that competition is purely exploitative and mediated via a common, limiting and single resource. However, in laboratory experiments with pathogens related to the
genetic disease Cystic Fibrosis, species specific properties of production, inhibition and consumption of a metabolic by-product, acetate, were found. These assumptions were implemented into a
mathematical chemostat model which consists of four nonlinear ordinary differential equations describing two species competing for one limiting nutrient in an open system. We derive classical
chemostat results and find that our basic model supports the competitive exclusion principle, the bistability of the system as well as stable coexistence. The analytical results are illustrated by
numerical simulations performed with experimentally measured parameter values. As a variant of our basic model, mimicking testing of antibiotics for therapeutic treatments in mixed cultures instead
of pure ones, we consider the introduction of a lethal inhibitor, which cannot be eliminated by one of the species and is selective for the stronger competitor. We discuss our theoretical results in
relation to our experimental model system and find that simulations coincide with the qualitative behavior of the experimental result in the case where the metabolic by-product serves as a second
carbon source for one of the species, but not the producer.
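The four equations themselves are not reproduced on this page, so as a generic illustration only, here is a classical two-species, one-nutrient chemostat with Monod uptake (no metabolic by-product terms, so strictly simpler than the model the abstract describes; all parameter values are invented):

```python
def chemostat_step(S, x1, x2, dt, D=0.1, S_in=1.0,
                   mu=(0.5, 0.4), K=(0.2, 0.1), Y=(0.5, 0.5)):
    """One Euler step of a classical two-species chemostat.

    S is the nutrient concentration, x1 and x2 the species densities,
    D the dilution rate and S_in the inflow nutrient concentration.
    Growth follows Monod kinetics mu_i * S / (K_i + S).
    """
    g1 = mu[0] * S / (K[0] + S)
    g2 = mu[1] * S / (K[1] + S)
    dS = D * (S_in - S) - g1 * x1 / Y[0] - g2 * x2 / Y[1]
    return S + dt * dS, x1 + dt * (g1 - D) * x1, x2 + dt * (g2 - D) * x2

S, x1, x2 = 1.0, 0.1, 0.1
for _ in range(50_000):                  # integrate to t = 500
    S, x1, x2 = chemostat_step(S, x1, x2, dt=0.01)
# Competitive exclusion: the species with the lower break-even nutrient
# level (here species 2) persists and the other washes out.
```

The paper's point is precisely that adding production, inhibition and consumption of a by-product can break this exclusion outcome and permit stable coexistence.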
Julia Heßeler and Julia K. Schmidt contributed equally to this work
Keywords: Competition, Chemostat, Coexistence, Metabolite, Interspecific Competition, Inhibitor, Quantitative T-RFLP
Author Affiliations
1. Department of Mathematics and Physics, Albert-Ludwigs-University, Hermann-Herder-Str. 3, 79104, Freiburg, Germany
2. Department of Bioprocess Engineering, Otto-von-Guericke-University, Universitätsplatz 2, 39106, Magdeburg, Germany
3. Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106, Magdeburg, Germany
Here's the question you clicked on:
Factor completely: 3x² + y. Answer choices: x(3x + y), y(3x²), xy(3x + 1), Prime.
A genetic local search algorithm for solving the symmetric and asymmetric TSP
, 1996
Cited by 54 (11 self)
Abstract. In this paper, an approach is presented to incorporate problem specific knowledge into a genetic algorithm which is used to compute near-optimum solutions to traveling salesman problems
(TSP). The approach is based on using a tour construction heuristic for generating the initial population, a tour improvement heuristic for finding local optima in a given TSP search space, and new
genetic operators for effectively searching the space of local optima in order to find the global optimum. The quality and efficiency of solutions obtained for a set of TSP instances containing between
318 and 1400 cities are presented.
, 2000
"... A memetic algorithm for tackling multiobjective optimization problems is presented. The algorithm employs the proven local search strategy used in the Pareto archived evolution strategy (PAES)
and combines it with the use of a population and recombination. Verification of the new algorithm is carri ..."
Cited by 52 (5 self)
Add to MetaCart
A memetic algorithm for tackling multiobjective optimization problems is presented. The algorithm employs the proven local search strategy used in the Pareto archived evolution strategy (PAES) and
combines it with the use of a population and recombination. Verification of the new algorithm is carried out by testing it on a set of multiobjective 0/1 knapsack problems. On each problem instance,
comparison is made between the new memetic algorithm, the (1+1)-PAES local searcher, and the strength Pareto evolutionary algorithm (SPEA) of Zitzler and Thiele. 1 Introduction In recent years,
genetic algorithms (GAs) have been applied more and more to multiobjective problems. For a comprehensive overview, see [2]. Undoubtedly, as an extremely general metaheuristic, GAs are well qualified
to tackle problems of a great variety. This asset, coupled with the possession of a population, seems to make them particularly attractive for use in multiobjective problems, where a number of
solutions appro...
- Evolutionary Computation , 2000
"... The fitness landscape of the graph bipartitioning problem is investigated by performing a search space analysis for several types of graphs. The analysis shows that the structure of the search
space is significantly different for the types of instances studied. Moreover, with increasing epistasis ..."
Cited by 48 (13 self)
Add to MetaCart
The fitness landscape of the graph bipartitioning problem is investigated by performing a search space analysis for several types of graphs. The analysis shows that the structure of the search space
is significantly different for the types of instances studied. Moreover, with increasing epistasis, the amount of gene interactions in the representation of a solution in an evolutionary algorithm,
the number of local minima for one type of instance decreases and, thus, the search becomes easier. We suggest that other characteristics besides high epistasis might have greater influence on the
hardness of a problem. To understand these characteristics, the notion of a dependency graph describing gene interactions is introduced.
- Handbook of Metaheuristics , 2003
"... ..."
, 1996
"... Ant System is a general purpose heuristic algorithm inspired by the foraging behavior of real ant colonies. Here we introduce an improved version of Ant System, that we called MAX-MIN Ant
System. We describe the new features present in MAX-MIN Ant System, make a detailed experimental investigation ..."
Cited by 46 (7 self)
Add to MetaCart
Ant System is a general purpose heuristic algorithm inspired by the foraging behavior of real ant colonies. Here we introduce an improved version of Ant System, that we called MAX-MIN Ant System. We
describe the new features present in MAX-MIN Ant System, make a detailed experimental investigation on the contribution of the design choices to the improved performance and give computational
results for the application to symmetric and asymmetric Traveling Salesman Problems. The performance of MAX-MIN Ant System can be further improved by adding a local search phase in which some ants
are allowed to improve their solution.
- Periaux (eds), Evolutionary Algorithms in Engineering and Computer Science: Recent Advances in Genetic Algorithms, Evolution Strategies, Evolutionary Programming, Genetic Programming and Industrial
Applications , 1999
"... Ant algorithms [18, 14, 19] are a recently developed, population-based approach which has been successfully applied to several NP-hard combinatorial ..."
- in Proceedings of the 7th International Conference on Genetic Algorithms , 1997
"... Augmenting genetic algorithms with local search heuristics is a promising approach to the solution of combinatorial optimization problems. In this paper, a genetic local search approach to the
quadratic assignment problem (QAP) is presented. New genetic operators for realizing the approach are descr ..."
Cited by 38 (9 self)
Add to MetaCart
Augmenting genetic algorithms with local search heuristics is a promising approach to the solution of combinatorial optimization problems. In this paper, a genetic local search approach to the
quadratic assignment problem (QAP) is presented. New genetic operators for realizing the approach are described, and its performance is tested on various QAP instances containing between 30 and 256
facilities/locations. The results indicate that the proposed algorithm is able to arrive at high quality solutions in a relatively short time limit: for the largest publicly known problem instance, a
new best solution could be found. 1 INTRODUCTION In the quadratic assignment problem (QAP), n facilities have to be assigned to n locations at minimum cost. Given the set $\Pi(n)$ of all permutations of $\{1, 2, \ldots, n\}$ and two $n \times n$ matrices $A = (a_{ij})$ and $B = (b_{ij})$, the task is to minimize the quantity $C(\pi) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} b_{\pi(i)\pi(j)}$, $\pi \in \Pi(n)$. (1) Matrix A can be interpreted as a ...
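For concreteness, the QAP cost $C(\pi)$ from the abstract above can be evaluated directly. A minimal sketch (the 3x3 matrices and the brute-force search are illustrative, not taken from the paper):

```python
from itertools import permutations

# QAP cost: C(pi) = sum over i, j of a[i][j] * b[pi(i)][pi(j)]
def qap_cost(a, b, pi):
    n = len(pi)
    return sum(a[i][j] * b[pi[i]][pi[j]] for i in range(n) for j in range(n))

# Illustrative 3x3 "flow" and "distance" matrices (made up, not a benchmark).
A = [[0, 2, 1],
     [2, 0, 3],
     [1, 3, 0]]
B = [[0, 5, 4],
     [5, 0, 6],
     [4, 6, 0]]

# For tiny n the optimum can be found by brute force over all n! assignments;
# heuristics like genetic local search are for instances where this is hopeless.
best = min(permutations(range(3)), key=lambda p: qap_cost(A, B, p))
print(best, qap_cost(A, B, best))  # (1, 0, 2) 56
```

Already at around n = 15 the n! search space makes exhaustive enumeration infeasible, which is what motivates the hybrid heuristics discussed in these papers.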
, 2001
"... This paper presents the evolving objects library (EOlib), an object-oriented framework for evolutionary computation (EC) that aims to provide a flexible set of classes to build EC applications. EOlib design objective is to be able to evolve any object in which fitness makes sense. ..."
Cited by 36 (5 self)
Add to MetaCart
This paper presents the evolving objects library (EOlib), an object-oriented framework for evolutionary computation (EC) that aims to provide a flexible set of classes to build EC applications. EOlib design objective is to be able to evolve any object in which fitness makes sense.
- in Proceedings of the 5th International Conference on Parallel Problem Solving from Nature - PPSN , 1998
"... . In this paper, two types of fitness landscapes of the graph bipartitioning problem are analyzed, and a memetic algorithm -- a genetic algorithm incorporating local search -- that finds
near-optimum solutions efficiently is presented. A search space analysis reveals that the fitness landscapes of g ..."
Cited by 33 (6 self)
Add to MetaCart
. In this paper, two types of fitness landscapes of the graph bipartitioning problem are analyzed, and a memetic algorithm -- a genetic algorithm incorporating local search -- that finds near-optimum
solutions efficiently is presented. A search space analysis reveals that the fitness landscapes of geometric and non-geometric random graphs differ significantly, and within each type of graph there
are also differences with respect to the epistasis of the problem instances. As suggested by the analysis, the performance of the proposed memetic algorithm based on Kernighan-Lin local search is
better on problem instances with high epistasis than with low epistasis. Further analytical results indicate that a combination of a recently proposed greedy heuristic and Kernighan-Lin local search
is likely to perform well on geometric graphs. The experimental results obtained for non-geometric graphs show that the proposed memetic algorithm (MA) is superior to any other heuristic known to us.
For th... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=430513&sort=cite&start=10","timestamp":"2014-04-18T20:11:38Z","content_type":null,"content_length":"35801","record_id":"<urn:uuid:c003a844-4a8d-45f6-82d6-e1228176fca5>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
How could a baby fall out of a twenty-story building onto the ground and live? Hint: It does not matter what the baby lands on, and it has nothing to do with luck.
• one year ago
• one year ago
Best Response
You've already chosen the best response.
He fell from the ground floor lol...it's not written from where he fell ;p
Best Response
You've already chosen the best response.
This is an IQ question. Just apply ur brains.
Best Response
You've already chosen the best response.
It is just written that he fell from a twenty storey building.
Best Response
You've already chosen the best response.
@adityanaik28 , @sheena101 , @annas , @heena , @gurvinder , @apoorvk , @Ishaan94 , @Taufique , @him1618 , @Ruchi. , @Mani_Jha please help me.
Best Response
You've already chosen the best response.
Omg I just told the answer
Best Response
You've already chosen the best response.
It's written that he fell from a twenty storey building, but it is not written from which floor.
Best Response
You've already chosen the best response.
He fell from the ground floor ;p
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
@Aadarsh does that help?
Best Response
You've already chosen the best response.
U r rite, but let it be confirmed by others. Smart work ...........
Best Response
You've already chosen the best response.
hatts off to u parth :)
Best Response
You've already chosen the best response.
even i didnt catch this thing :P
Best Response
You've already chosen the best response.
well done @ParthKohli , and thanks @heena didi, for confirming it.
Best Response
You've already chosen the best response.
he fell from the ground floor
Best Response
You've already chosen the best response.
Thanks everyone..........
Best Response
You've already chosen the best response.
another one!!
Best Response
You've already chosen the best response.
@FoolForMath bhai, please try this.
Best Response
You've already chosen the best response.
its simple!! no need to even think!! :P
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
give me another!
Best Response
You've already chosen the best response.
Arrey I answered it where is the gum?
Best Response
You've already chosen the best response.
:O i want gum!
Best Response
You've already chosen the best response.
Wait wait.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
The building is twenty-storeyed. No problem. But the baby falls off its 2-inch thick mattress kept near the entrance to the building, on to the pavement. Now, LIVE BABY LIVE!! I win. :P
Best Response
You've already chosen the best response.
I solemnly swear that I did not look at any of the replies above before answering. -_- And now that I have gone through them, Parth's solution has a flaw, babies can die even if they fall from a
couple of feet. My solution is fulll-proof. xD
Best Response
You've already chosen the best response.
Bravo, bravo, Apoorv bhai, what an answer!!!!!!! "Sleepwell Mattress"
Best Response
You've already chosen the best response.
Well, I just gave them a free advertisement! xD
Best Response
You've already chosen the best response.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4fc7958ae4b022f1e12efd76","timestamp":"2014-04-25T07:07:13Z","content_type":null,"content_length":"99576","record_id":"<urn:uuid:f6ded7e5-03b4-4e32-bc70-75d64f42e62a>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
User François Brunault
bio website perso.ens-lyon.fr/…
location Lyon
age 34
visits member for 3 years, 10 months
seen Nov 10 '13 at 12:21
stats profile views 2,299
I am a number theorist working at the École normale supérieure de Lyon in France. My research interests are elliptic curves, L-functions and zeta functions, especially the study of their special
values (conjectures by Beilinson, Bloch, Kato, Zagier...).
18 awarded Notable Question
23 awarded Nice Answer
11 awarded Good Answer
Nov 10 - revised "calculate function from its divizor": Updated link
Oct 31 - comment on "calculate function from its divizor": Dear Hicham, I edited my answer to give the link to the Pari/GP script.
Oct 31 - revised "calculate function from its divizor": Added link to Pari/GP script.
Oct 25 - awarded: ag.algebraic-geometry
Oct 25 - awarded: nt.number-theory
Oct 25 - awarded: Revival
Oct 25 - awarded: Pundit
Oct 24 - answered: On Deligne's determinant of motives
Jun 19 - comment on "modularity of elliptic curves with cm": Note that if $\operatorname{Res}_{F/\mathbf{Q}} E$ appears inside $J_1(N)$ over $\mathbf{Q}$ then $E$ itself appears inside $J_1(N)$ over $F$. So in this case you can find a modular parametrization $X_1(N) \to E$ which is defined over $F$.
Jun 19 - comment on "modularity of elliptic curves with cm": What is true is that every elliptic curve over $\overline{\mathbf{Q}}$ with CM by $K$ is isomorphic over $\overline{\mathbf{Q}}$ to a $K$-curve (a curve which is isogenous over $\overline{\mathbf{Q}}$ to all its $\operatorname{Gal}(\overline{\mathbf{Q}}/K)$-conjugates), see Wortmann's article and the references.
Jun 19 - comment on "modularity of elliptic curves with cm": In the last sentence, I meant "isomorphic over $\overline{\mathbf{Q}}$"...
Jun 19 - comment on "modularity of elliptic curves with cm": There is a nice article where Shimura's result is generalized for a wider class of CM elliptic curves, see S. Wortmann, *Generalized Q-curves and factors of $J_1(N)$*, dx.doi.org/10.1007/BF02940901. These CM elliptic curves are sometimes called of Shimura type. I'm not sure but I think these are exactly the CM elliptic curves whose restriction of scalars appears inside $J_1(N)$ over $\mathbf{Q}$. Moreover, every CM elliptic curve is isomorphic over $\mathbf{Q}$ to a curve of Shimura type.
Jun 15 - comment on "Mersenne Prime Sequences": @Dietrich: Eugène Catalan's footnote is here: archive.org/stream/nouvellecorresp01mansgoog#page/n353/mode/2up. In fact, he states this as an "empirical theorem" which holds for all terms "up to a certain limit". To me this seems far from conjecturing that all terms are prime.
Jun 15 - comment on "Mersenne Prime Sequences": @Barakman: The point is that there is no obvious bias towards primality arising from belonging to $A_n$. The exponents of the numbers in $A_n$ are very large, and I see no reason why they should be more prime than the Mersenne numbers of comparable size. So their primality soon becomes unlikely.
Jun 15 - comment on "Mersenne Prime Sequences": The Wagstaff heuristics (primes.utm.edu/mersenne/heuristic.html) assert that for large prime $p$, the probability of $2^p-1$ being prime is about $(\log p)/p$ (up to some multiplicative constant). So it seems unlikely to me that $A_n$ contains only prime numbers. I would rather conjecture that any such sequence will contain a composite number.
Jun 15 - comment on "How to explain the picturesque patterns in François Brunault's matrix?": Thank you for spotting these nice patterns! Not a precise explanation, but it may not be surprising that the entries of the matrix have nice $p$-adic properties. Indeed, any generalized polynomial map (in the sense of the question you link to) extends to a map $\mathbf{Z}_p \to \mathbf{Z}_p$ which is continuous (and in fact, 1-Lipschitz).
Jun 15 - comment on "Elementary tools for proving congruences of modular forms": See E. Ghate, An introduction to congruences between modular forms, math.tifr.res.in/%7Eeghate/basics.dvi | {"url":"http://mathoverflow.net/users/6506/francois-brunault?tab=activity","timestamp":"2014-04-17T15:54:27Z","content_type":null,"content_length":"47493","record_id":"<urn:uuid:9fac59dd-8325-4710-9a0e-f315ae26c789>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
Contour Maps (Difficulty with an equation)
March 24th 2010, 07:46 AM #1
Mar 2010
Contour Maps (Difficulty with an equation)
Right, I'm having difficulty with a particular contour map because I can't seem to get an answer from the multi-variable equation. I'll show you what I mean.
The contour I am trying to sketch is:
[LaTeX ERROR: Convert failed]
Now I realise that, because there is no y variable in the equation, I'm looking at a contour map of vertical straight lines because I'm only going to get results for x. But when I try and find standard solutions (i.e. g(x,y) = 1, 2, 3...), I always seem to run into complex results, $\sqrt{-a}$ where a is greater than or equal to zero.
Am I using the wrong technique here (where I should really be using my understanding to justify the map) or am I making a very siiiiiiimple mistake and looking like a complete fool (which is what I think I am doing)?
Wait, don't worry, I realised the idiotic mistake I was making.
March 24th 2010, 08:29 AM #2
Mar 2010 | {"url":"http://mathhelpforum.com/calculus/135416-contour-maps-difficulty-equation.html","timestamp":"2014-04-16T13:57:18Z","content_type":null,"content_length":"32167","record_id":"<urn:uuid:361cde4e-c39f-422b-a076-8f03ffb12d4c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
ALEX Lesson Plans
Subject: Mathematics (9 - 12)
Title: Rational Exponents Rock!!
Description: During this lesson, students will be introduced to rational exponents. Rational exponents are fractional powers, or where a number is raised to a fraction.
Subject: Arts Education (7 - 12), or Mathematics (5 - 12), or Technology Education (9 - 12)
Title: Just the facts! Exploring Order of Operations and Properties of Real Numbers
Description: Students use their imagination while learning the importance of 'Order of Operations' and 'Properties of Real Numbers'. This lesson incorporates class discussions, wiki and/or online discussion threads (free at www.wikispace.com and/or quicktopic.com), art and puzzles. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.
Subject: Mathematics (7 - 12)
Title: Calendar Fun Operations
Description: This activity is designed to help students evaluate numerical expressions by using order of operations. The students will be provided a calendar for the current month of the year.
Students will then be provided with a worksheet that contains 30 expressions and a different symbol for each expression. The students will manually calculate each expression using order of
operations. Once the numerical value has been discovered for each expression, the symbol next to the expression will be drawn on the calendar for that date. This lesson plan was created as a result
of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: You Mean ANYTHING To The Zero Power Is One?
Description: This lesson is a technology-based project to reinforce concepts related to the Exponential Function. It can be used in conjunction with any textbook practice set. Construction of
computer models of several Exponential Functions will promote meaningful learning rather than memorization.
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Stacking Squares
Description: This Illuminations lesson prompts students to explore ways of arranging squares to represent equivalences involving square- and cube-roots. Students' explanations and representations (with their various ways of finding these roots) form the basis for further work with radicals.
Grade Span: 9,10,11,12
Web Resources
Introduction to Algebra Quiz
This is a quiz on evaluating algebraic expressions, learning how to use exponential notation, using properties of numbers in algebra, and learning how to write expressions and evaluate formulas.
Teacher Tools
Lesson plans,calculators, worksheets and everything else to help with teaching algebra. | {"url":"http://alex.state.al.us/all.php?std_id=54074","timestamp":"2014-04-18T08:22:55Z","content_type":null,"content_length":"50980","record_id":"<urn:uuid:983046c9-7442-4c9f-9db7-4741ae7f9869>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
Defining winding rules
Flash Player 10 and Adobe AIR 1.5 also introduce the concept of path “winding”: the direction for a path. The winding for a path is either positive (clockwise) or negative (counter-clockwise). The
order in which the renderer interprets the coordinates provided by the vector for the data parameter determines the winding.
Positive and negative winding
Arrows indicate drawing direction
Positively wound (clockwise)
Negatively wound (counter-clockwise)
Additionally, notice that the Graphics.drawPath() method has an optional third parameter called “winding”:
drawPath(commands:Vector.<int>, data:Vector.<Number>, winding:String = "evenOdd"):void
In this context, the third parameter is a string or a constant that specifies the winding or fill rule for intersecting paths. (The constant values are defined in the GraphicsPathWinding class as
GraphicsPathWinding.EVEN_ODD or GraphicsPathWinding.NON_ZERO.) The winding rule is important when paths intersect.
The even-odd rule is the standard winding rule and is the rule used by the legacy drawing API. The even-odd rule is also the default rule for the Graphics.drawPath() method. With the even-odd winding rule, any intersecting paths alternate between open and closed fills. If two squares drawn with the same fill intersect, the area in which the intersection occurs is not filled. Generally, adjacent areas are neither both filled nor both unfilled.
The non-zero rule, on the other hand, depends on winding (drawing direction) to determine whether areas defined by intersecting paths are filled. When paths of opposite winding intersect, the area
defined is unfilled, much like with even-odd. For paths of the same winding, the area that would be unfilled is filled:
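The two rules can be mimicked numerically with a standard ray-crossing test. The sketch below is an illustration of the idea, not Flash Player's actual renderer: a rightward ray is cast from the query point; even-odd checks the parity of the crossing count, while non-zero checks whether the signed crossings (+1 for upward edges, -1 for downward) sum to zero, so drawing direction matters only for the latter.

```python
def ray_crossings(point, contour):
    """Crossings of a rightward ray from `point` with one closed contour.

    Returns (count, signed): `count` is the plain number of crossings,
    `signed` adds +1 for upward-going edges and -1 for downward-going ones.
    """
    px, py = point
    count = signed = 0
    n = len(contour)
    for i in range(n):
        (x1, y1), (x2, y2) = contour[i], contour[(i + 1) % n]
        if (y1 <= py) != (y2 <= py):               # edge spans the ray's height
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:                       # crossing lies to the right
                count += 1
                signed += 1 if y2 > y1 else -1
    return count, signed

def is_filled(point, contours, rule="evenOdd"):
    totals = [ray_crossings(point, c) for c in contours]
    if rule == "evenOdd":
        return sum(c for c, _ in totals) % 2 == 1  # odd number of crossings
    return sum(s for _, s in totals) != 0          # non-zero winding number

# Two overlapping squares, both wound the same way (counter-clockwise).
sq1 = [(0, 0), (2, 0), (2, 2), (0, 2)]
sq2 = [(1, 1), (3, 1), (3, 3), (1, 3)]
overlap = (1.5, 1.5)                               # a point inside both

print(is_filled(overlap, [sq1, sq2], "evenOdd"))   # False: intersection unfilled
print(is_filled(overlap, [sq1, sq2], "nonZero"))   # True: same winding, filled
```

Reversing one square's vertex order (sq2[::-1]) flips its winding: even-odd is unaffected, but under non-zero the overlap becomes unfilled, matching the opposite-winding case described above.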
Winding rules for intersecting areas
Even-odd winding rule
Non-zero winding rule | {"url":"http://help.adobe.com/en_US/ActionScript/3.0_ProgrammingAS3/WS1EE3740D-F65C-43bf-9B12-74E34D7D1CBE.html","timestamp":"2014-04-16T10:29:31Z","content_type":null,"content_length":"25073","record_id":"<urn:uuid:3e5ac936-c91a-494e-ab32-091648e41224>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Things that amuse me
Abstracting on, suggested solutions
I guess I should be more constructive than just whining about how Haskell doesn't always do what I want. I do have some suggestions on how to fix things.
Explicit type applications
Let's look at a simple example again:
f :: forall a . a -> a
f = \ x -> x
b :: Bool
b = f True
The way I like to think of this (and what happens in ghc) is that this is shorthand for something more explicit, namely the F version of the same thing. In F all type abstraction and type application are explicit. Let's look at the explicit version (which is no longer Haskell).
f :: forall (a::*) . a -> a
f = /\ (a::*) -> \ (x::a) -> x
b :: Bool
b = f @Bool True
I'm using /\ for type abstraction and expr @type for type application. Furthermore each binder is annotated with its type. This is what ghc translates the code to internally; this process involves figuring out what all the type abstractions and applications should be.
Now something a little more complicated (from my previous post)
class C a b where
x :: a
y :: b
f :: (C a b) => a -> [a]
f z = [x, x, z]
The type of x is
x :: forall a b . (C a b) => a
So whenever x occurs, two type applications have to be inserted (there's also a dictionary to insert, but I'll ignore that). The decorated term for f is (ignoring the context)
f :: forall a b . (C a b) => a -> [a]
f = /\ (a::*) (b::*) -> \ (z::a) -> [ x @a @b1, x @a @b2, z]
The reason for the ambiguity in type checking is that the type checker cannot figure out that the b1 is in any way connected to the b2. Because it isn't. And there's currently no way we can connect them.
So I suggest that it should be possible to use explicit type application in Haskell when you want to. The code would look like this
f :: forall a b . (C a b) => a -> [a]
f z = [ x @a @b, x @a @b, z]
The order of the variables in the
determines the order in which the type abstractions come, and thus determines where to put the type applications.
Something like abstype
Back to my original problem with abstraction. What about if this was allowed:
class Ops t where
data XString t :: *
(+++) :: XString t -> XString t -> XString t
instance Ops Basic where
type XString Basic = String
(+++) = (++)
So the class declaration says I'm going to use data types (which was my final try and which works very nicely). But in the instance I provide a type synonym instead. This would be like using a newtype in the instance, but without having to use the newtype constructor everywhere. The fact that it's not a real data type is only visible inside the instance declaration. The compiler could in fact make and insert all the coercions. This is, of course, just a variation of the suggestion by Wehr and Chakravarty.
Labels: Haskell, Modules, overloading
2 Comments:
sof said...
This comment has been removed by the author.
Raoul Duke said...
get Liskell to have a good module system, and use that instead? :-) | {"url":"http://augustss.blogspot.com/2008/12/abstracting-on-suggested-solutions-i.html","timestamp":"2014-04-19T09:23:23Z","content_type":null,"content_length":"20619","record_id":"<urn:uuid:22f879c8-17e4-458a-8da4-f4564f0c2ba1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Creating cylinder [Archive] - OpenGL Discussion and Help Forums
03-28-2009, 02:38 PM
I'm absolutely new to OpenGL and have the following question:
I'd like to draw a cylinder and specify the normal vectors for the lighting. I want to do this without the gluCylinder() method.
Now I'm not sure how to start defining the coordinates for the cylinder.
Thanks for your help! | {"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-167115.html","timestamp":"2014-04-18T16:03:35Z","content_type":null,"content_length":"5765","record_id":"<urn:uuid:2a3e7a86-f099-4162-bc4a-69b88e941121>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multivariable Calculus: surfaces in R^2 and R^3
1. The problem statement, all variables and given/known data
In class, we studied functions ⃗r : I → R^3, where I ⊂ R is some interval.
Let us now consider a function
⃗r : U → R^3, U ⊂ R^2
That is, we have a function, ⃗r, which sends a point (u, v) in the plane to a
point (vector) in R3 . You may call it a “vector function of two arguments”
or a “vector function of a vector argument”. To be specific:
U = {(u, v), v > 0} ⊂ R^2
⃗r(u, v) = (√v/2 cosh u, √v sinh u, v)
or, equivalently,
x = √v/2 cosh u
y = √v sinh u
z = v
1. Verify that the points ⃗r(u, v) satisfy the equation z = 4x^2 − y^2
2. Identify the surface S given by the equation z = 4x^2 − y^2
Is it an ellipsoid, paraboloid, hyperboloid (and which one), cone, cylinder?
2. Relevant equations
3. The attempt at a solution
I know that the surface given by the equation is a hyperbolic paraboloid, but I have no idea how to approach showing that the points satisfy the equation. When I tried just plugging the values of x, y, and z into the equation I end up with something very messy, and I'm not quite sure what I am supposed to be solving for. If anyone could explain this to me it would be great!!
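For part 1, direct substitution plus the identity cosh^2 u − sinh^2 u = 1 is enough; a sketch of the check (reading √v/2 as (√v)/2, which is what makes the substitution work out):

```latex
4x^2 - y^2
  = 4\left(\frac{\sqrt{v}}{2}\cosh u\right)^{2} - \left(\sqrt{v}\,\sinh u\right)^{2}
  = v\cosh^2 u - v\sinh^2 u
  = v\left(\cosh^2 u - \sinh^2 u\right)
  = v = z .
```

Since v > 0 on U, every point ⃗r(u, v) lands on the surface z = 4x^2 − y^2.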
3. The attempt at a solution | {"url":"http://www.physicsforums.com/showthread.php?p=1913389","timestamp":"2014-04-20T00:57:01Z","content_type":null,"content_length":"21132","record_id":"<urn:uuid:95972c31-88d4-4a90-896a-dd8d3d378456>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
Los Altos Hills, CA Prealgebra Tutor
Find a Los Altos Hills, CA Prealgebra Tutor
...Trig has many applications in physics and engineering. Approached properly it is relatively easy to learn. I've been a tutor for six years and have had success in raising students' grades and
their focus and interest in the subject.
32 Subjects: including prealgebra, reading, English, calculus
...My passion is in chemistry but the quantitative nature of the natural sciences means that I am fluent in algebra through calculus. By nature of my coursework and extracurricular research, I
also have extensive experience in lab work and spent a fair amount of my tutoring time assisting with lab ...
24 Subjects: including prealgebra, reading, calculus, chemistry
...I also do piano arrangements. I hold three degrees, one from San Jose State University (B.A. Music, cum laude), one from CAL Berkeley (B.
37 Subjects: including prealgebra, reading, English, physics
...My favorite subjects to tutor are high school math, physics, and Spanish. I worked as the Children's Programming Coordinator at the Center for the Homeless by Notre Dame for three years,
where, among my many duties, I tutored students twice a week. I have also run an after school program for gr...
27 Subjects: including prealgebra, chemistry, English, reading
...My credential is in English, math and science, but I believe I am qualified to tutor any subject. I have a way of figuring out if a student is stuck and get him or her past that place. I have
a way of explaining a difficult subject on a simple level.
35 Subjects: including prealgebra, chemistry, reading, English
Related Los Altos Hills, CA Tutors
Los Altos Hills, CA Accounting Tutors
Los Altos Hills, CA ACT Tutors
Los Altos Hills, CA Algebra Tutors
Los Altos Hills, CA Algebra 2 Tutors
Los Altos Hills, CA Calculus Tutors
Los Altos Hills, CA Geometry Tutors
Los Altos Hills, CA Math Tutors
Los Altos Hills, CA Prealgebra Tutors
Los Altos Hills, CA Precalculus Tutors
Los Altos Hills, CA SAT Tutors
Los Altos Hills, CA SAT Math Tutors
Los Altos Hills, CA Science Tutors
Los Altos Hills, CA Statistics Tutors
Los Altos Hills, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Los_Altos_Hills_CA_Prealgebra_tutors.php","timestamp":"2014-04-18T05:37:33Z","content_type":null,"content_length":"24280","record_id":"<urn:uuid:9c0503a9-db54-4c67-927a-646e21b73207>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00318-ip-10-147-4-33.ec2.internal.warc.gz"} |
semiclassical approximation
To some extent, quantum mechanics and quantum field theory are a deformation of classical mechanics and classical field theory, with the deformation parameterized by Planck's constant $\hbar$. The
semiclassical approximation or quasiclassical approximation to quantization/quantum mechanics is the restriction of this deformation to just first order (or some finite order) in $\hbar$.
Applied to path integral quantization, the semiclassical approximation is meant to approximate the path integral $\int_{\phi \in \mathbf{Fields}} D\phi\; F(\phi) e^{iS(\phi)/\hbar}$ by an expansion
in $\hbar$ about the critical points of the action functional $S$ (hence the solutions of the Euler-Lagrange equations, hence to the classical trajectories of the system). As usual for the path
integral in physics, this often requires work to make precise, but at a heuristic level the idea is famous as the rotating phase approximation: in regions of field-space where $S$ varies fast as measured in units of Planck's constant, the complex phases of the integrand $\exp(i S / \hbar)$ tend to cancel each other in the integral, so that substantial contributions come only from the vicinity of critical points of $S$ (classical trajectories).
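As a hedged numerical illustration (a toy example added here, not part of the entry): for a one-variable "field space" with action $S(x) = x^2/2$, whose only critical point is $x = 0$, the stationary-phase estimate predicts $\int e^{i S(x)/\hbar}\,dx \approx \sqrt{2\pi\hbar}\, e^{i\pi/4}$ for small $\hbar$, and this is easy to check directly:

```python
import numpy as np

# Toy one-dimensional "path integral" with action S(x) = x^2 / 2,
# whose single critical point (classical trajectory) is x = 0.
hbar = 0.01
A = 10.0                                # truncation of the integration range
x = np.linspace(-A, A, 2_000_001)       # grid fine enough to resolve the phase
dx = x[1] - x[0]

numeric = np.sum(np.exp(1j * x**2 / (2 * hbar))) * dx

# Stationary-phase prediction: only the vicinity of x = 0 contributes,
# giving sqrt(2*pi*hbar) * exp(i*pi/4).
predicted = np.sqrt(2 * np.pi * hbar) * np.exp(1j * np.pi / 4)

rel_err = abs(numeric - predicted) / abs(predicted)
```

Away from $x = 0$ the phases oscillate rapidly and cancel, which is why the truncation at $|x| = A$ barely matters; with the parameters above the agreement is at the percent level or better.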
But semiclassical approximations can be applied to most other formulations of quantum physics, where they often lead to precise and powerful mathematical tools.
Notably in the Schrödinger picture of quantum evolution, solutions to the Schrödinger equation $i \hbar \frac{d}{d t} \psi = \hat H \psi$ (which characterizes quantum states given by wave functions $
\psi$ for Hamiltonian dynamics induced by a Hamilton operator $\hat H$) are usefully considered to first (or any finite) order in $\hbar$. This method, known after (some of) its inventors as the WKB
method or similar, amounts to expressing the wave function in the form $\psi = \exp(S)$ where $S$ is a slowly varying function and solving the resulting equation for $S$. Globally consistent solutions of this kind, to first order, lead to what are called Bohr-Sommerfeld quantization conditions. For the formalization of this method in symplectic geometry/geometric quantization see at semiclassical state.
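For concreteness, the standard one-dimensional stationary case of this expansion runs as follows (a textbook sketch added for illustration, not taken from the entry): substituting $\psi = e^{i S/\hbar}$ with $S = S_0 + \frac{\hbar}{i} S_1 + O(\hbar^2)$ into $-\frac{\hbar^2}{2m}\psi'' + V\psi = E\psi$ and collecting powers of $\hbar$ gives

```latex
\begin{aligned}
O(\hbar^0):\quad & \frac{(S_0')^2}{2m} + V(x) = E
  && \Longrightarrow \quad S_0(x) = \pm \int^{x} p(x')\,dx' ,
  \quad p = \sqrt{2m\,(E - V)} , \\
O(\hbar^1):\quad & S_1' = -\frac{S_0''}{2 S_0'}
  && \Longrightarrow \quad S_1 = -\tfrac{1}{2} \ln p ,
\end{aligned}
```

so that $\psi(x) \approx C_\pm\, p(x)^{-1/2} \exp\big(\pm \tfrac{i}{\hbar} \int^x p\,dx'\big)$; requiring such a solution to be globally consistent (single-valued) on a closed classical orbit yields the Bohr-Sommerfeld condition $\oint p\,dx = 2\pi\hbar\,(n + \tfrac{1}{2})$.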
This WKB method makes sense for a more general class of wave equations. For instance in wave optics this yields the short-wavelength limit of the geometrical optics approximation. Here $S$ is called
the eikonal.
Multidimensional generalizations of the WKB method appear to be rather nontrivial; they have been pioneered by Victor Maslov, who introduced a topological invariant, called the Maslov index, to remove ambiguities of the naive version of the method.
Equivariant localization
In some special cases (most often in the presence of supersymmetry) the main contribution (the first term of the expansion) already gives the exact result; at most the quantum corrections contribute an overall scalar factor. This is the case of so-called localization (related directly in some cases to equivariant localization in cohomology and Lefschetz-type fixed point formulas). Most well-known examples of integrable systems and TQFTs lead to localization.
Large $N$-limit in gauge theories
The large N limit of gauge theories, which is of importance in collective field theory and in the study of the relation between gauge and string theories, is formally very similar to the semiclassical expansion, with the role of the Planck constant played by $1/N^2$.
In radiation theory
In the theory of radiation there is a different meaning of semiclassical treatment: one considers particles in a surrounding electromagnetic field and the particles are treated as in
finite-dimensional quantum mechanics, with the electromagnetic field as an external classical field coupled to the particles via an interaction term.
• M.V. Fedoryuk, Semi-classical approximation, Springer Online Enc. of Math.
• Sean Bates, Alan Weinstein, Lectures on the geometry of quantization, pdf
• Victor Maslov, Stationary-phase method for Feynman’s continual integral, Theoret. and Math. Phys., 2:1 (1970), 21–25; Russian original: ТМФ, 2:1 (1970), 30–35 pdf.
• Victor Maslov, Theory of perturbations and asymptotic methods (Russian), Izdat. Moskov. Gos. Univ. 1965.
• Vladimir Arnold, Characteristic class entering in quantization conditions, Funct. Anal. its Appl. 1967, 1:1, 1–13, doi (В. И. Арнольд, “О характеристическом классе, входящем в условия
квантования”, Функц. анализ и его прил., 1:1 (1967), 1–14, pdf)
• Victor Guillemin, Shlomo Sternberg, Geometric asymptotics, AMS 1977, online; Semi-classical analysis, 499 pages, pdf
• A. S. Mishchenko, B. Yu. Sternin, V. E. Shatalov, Lagrangian manifolds and the canonical operator method, Nauka, Moscow, 1978. (in Russian). English transl.: Lagrangian manifolds and the Maslov
operator, Springer, Berlin, 1990.
• Richard Szabo, Equivariant cohomology and localization of path integrals, Lecture Notes in Physics, N.S. Monographs 63. Springer 2000. xii+315 pp. (early version: Equivariant localization of path
integrals, hep-th/9608068)
• Michael Atiyah, Circular symmetry and stationary phase approximation, Asterisque 131 (1985) 43–59
• Nicole Berline, Ezra Getzler, Michèle Vergne, Heat kernels and Dirac operators, Grundlehren 298, Springer 1992, “Text Edition” 2003.
• Albert Schwarz, Oleg Zaboronsky, Supersymmetry and localization, Comm. Math. Phys. 183, 2 (1997), 463-476, euclid
• Albert Schwarz, Semiclassical approximation in Batalin-Vilkovisky formalism, Comm. Math. Phys. 158 (1993), no. 2, 373–396, euclid.
• A. Laptev, I.M. Sigal, Global Fourier integral operators and semiclassical asymptotics, Review of Math. Physics, 12:5 (2000) 749–766 pdf
• Maurice A. de Gosson, Symplectic geometry, Wigner-Weyl-Moyal calculus, and quantum mechanics in phase space, 385 pp. pdf
• Semyon Dyatlov, Semiclassical Lagrangian distributions, pdf; Hoermander–Kashiwara and Maslov indices, pdf
Relation to quantum integrable systems is in a series of works of Vũ Ngọc, e.g.
• San Vũ Ngọc, Bohr-Sommerfeld conditions for integrable systems with critical manifolds of focus-focus type, Preprint Institut Fourier 433, 1998 15 pdf; Quantum monodromy in integrable systems,
Comm. Math. Phys. 203 (1999), no. 2, 465–479 doi
For large N-limit compared to semiclassical expansion see
• L. G. Yaffe, Large N limits as classical mechanics, Rev. Mod. Phys. 54, 407–435 (1982), pdf
For the semiclassical method in superstring theory see
• J. Maldacena, G. Moore, N. Seiberg, D. Shih, Exact vs. semiclassical target space of the minimal string, hep-th/0408039
• K. Hori, A. Iqbal, C. Vafa, D-Branes and mirror symmetry, hep-th/0005247 | {"url":"http://www.ncatlab.org/nlab/show/semiclassical+approximation","timestamp":"2014-04-20T08:51:14Z","content_type":null,"content_length":"86936","record_id":"<urn:uuid:fc0317d6-7a88-4173-8dff-5d655bb9e6e2>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inductive Game
Copyright © 1985, Peter Suber.
Introduction to the Inductive Game
Of Rubik's Cube
Peter Suber, Philosophy Department, Earlham College
The normal game of Rubik's cube is to return a randomized cube to its solved state by any number of twists of its planes. Successful cubists have algorithms that guarantee solution. Players are
judged superior only by virtue of the elegance of their algorithms or the speed with which they can apply them.
The inductive game makes one small change with many large consequences. A player of the inductive game starts with a cube in the solved state, gives it n twists, and then tries to return it to
the solved state in n or fewer twists. Typically the player is not the one to make the randomizing twists, or else does so behind her own back. In short:
1. Out of the sight of the player, the randomizer takes a solved cube and gives it a certain number, n, of twists.
2. The player attempts to return the cube to the solved state with the fewest number of twists (n or fewer), which usually means to retrace the path of the randomizer.
First we should notice that the normal and inductive games are both inductive in some sense. The normal game is inductive in the process undertaken by players to discover the algorithms
sufficient for solution. That process has been said to model the scientific method, complete with the formulation and testing of theories, negative results, and confirmation. Once the inductive
method produces algorithms, normal play is more deductive than inductive. That is where the inductive game differs. Instead of producing algorithms that may be applied infallibly by an idiot, the
inductive game produces 'soft rules' or probabilistic guides that must be applied in each case with judgment, mother wit, and the weight of one's inductive experience. The inductive process of
discovery does not come to an end in deductive methods of play. Both discovery and play are inductive.
The objective of the inductive game is to find the shortest path from a randomized state to the solved state. It helps play enormously to know how many twists one is from solution when one
starts. For this purpose some standard terms are introduced below. But notice that on the 3x3x3 cube (the only type discussed here), it has been proved that every state of randomization is at
most 23 twists from solution. This fact has many important consequences for the inductive game.
It means that the randomizer may give the player a randomized cube in which the shortest path to solution is shorter than the path that retraces the steps of the randomizer. For example, if the
randomizer gives a solved cube 25 twists, then in principle there is a shorter path home than retracing those 25 steps. However, there is a good reason why the inductive game will not use cubes
randomized by more than 23 twists. The reason is that players who can solve 23-twist randomizations in 23 steps have hit upon "God's Algorithm" and need no greater challenge.
We do not know whether the quadrillions of distinct paths from distinct randomized states to the solved state fall into patterns. That is, we do not know whether "God's Algorithm" is a family of
methods with some learnable structure, or whether it is a chaotic tangle of disconnected pathways. If it is an organized family, or insofar as it is an organized family, then the inductive game
is one way to learn it.
The difference between retracing the randomizer's path and finding a shorter one will not knowably affect play when there are fewer than 23 twists in the randomized cube. We do not know whether
any 8-twist randomization, for example, when all twists are distinct, can be solved in fewer than 8 twists. But in any case, the 'rules' of the inductive game will yield the shortest path, not
necessarily the retraced path of the randomizer.
Why Play the Inductive Game?
The inductive game has several advantages over the normal game. First, each randomization is a fresh challenge. In the normal game, a competent player has algorithms of sufficient generality that
any randomized cube may be solved, if one is willing to spend 60 to 200 twists to do so. This is not the case with the inductive game. If one can solve the cube in the normal game and is bored by
that challenge, then the inductive game offers endless variety and escalating difficulty.
Second, there is no equivalent to an algorithm in the inductive game. Hence, no matter how much one learns, nothing one knows can be applied automatically. Success cannot be reduced to a
certainty, only to greater and greater probabilities.
Third, one learns more and more each time one plays the inductive game. In the normal game, after a while, one may diddle or even solve the cube without learning anything new. In the inductive
game, all attempts at solution, even unsuccessful ones, add to one's bank of inductive experience. For mastery of the inductive game one must rely on inductive knowledge of the probability that
certain configurations could have survived x twists. That sort of knowledge increases with every spin through the cube, in subtle and barely articulable ways.
Fourth, the inductive game cannot become routine or boring, except to gods. When one can solve 3-twist randomizations nearly 100% of the time, then one may move on to 4-twist randomizations.
Difficulty increases exponentially. There is a foreseeable end to the series, of course. Players who patiently gather up their nuanced, ineffable knowledge of random patterns may reach the level
of 23-twist randomizations. Improvement does not merely approach the banal satisfaction of more frequent success; it approaches hard knowledge of "God's Algorithm".
For convenience I will use the following terms in these special senses.
Twist
One 90-degree turn of one plane. Note that under this definition 180-degree turns amount to two twists. This convention is not merely terminological; it affects play. If the randomizer is
told to produce a 6-twist randomization, then under this definition the player knows that the cube will be no more than six 90-degree turns from solution. When players know exactly how many
90-degree turns they are from solution at the beginning, then they can monitor their own success more effectively. For example, two turns into a 4-twist randomization should produce a cube
only two twists from solution, which should be evident to the eye even of beginning players; if it is not, the player may confidently infer error at an earlier stage.
Home
The solved state of the cube (in which each of the six faces displays tiles of only one color).
Shortest path home
The shortest series of twists needed to move from a randomized state to the solved state. As noted, the shortest path home may occasionally be shorter than the path that retraces the steps of
the randomizer.
Information
The adjacency of two or more tiles of the same color that need not (and ought not) to be separated on the shortest path home. This is not the same thing as the adjacency of any set of tiles
of the same color, nor is it the same as the adjacency of tiles that 'properly belong together' or that are adjacent in the solved state. Sometimes called actual information for emphasis; see below.
Accidental information
The adjacency of two or more tiles of the same color that must be separated on the shortest path home, regardless whether they will be reunited in the solved state.
Apparent information
The adjacency of two or more tiles of the same color when the player is uncertain whether they make actual or accidental information.
Of a plane on the cube, to be ineligible for twisting in the judgment of the player. A basic rule of strategy holds, for example, that a plane is blocked if twisting it would destroy actual
Above the tree line
A randomized state of the cube that offers no apparent information. It has been proved that 8 twists is the minimum number sufficient to put one above the tree line. In my experience
randomizations of any degree rarely put one above the tree line; one must be seeking such a state to produce one. In a weak sense one is above the tree line whenever one suspects that all
apparent information is accidental. In my experience this does not commonly occur at fewer than 10 or 12 twists from home. One is very far above the tree line if no single twist will produce
apparent information. When one is above the tree line, one cannot use 'information' as a guide home. One must discover different kinds of clues. Hence, inductive games of roughly eight twists
or less require the ability to discern actual from accidental information, while games of more than eight twists are quite different. The term is taken from mountaineering; the analogy is to
the diminishing amount of life and information, and the increasing difficulty, as one ascends to the pinnacle of "God's Algorithm".
Basic Strategy Tips
At any given state of the cube there are only 12 possible moves. There are six planes and each may be twisted 90-degrees in either of two directions. Hence, one need never despair. One is not
asked to pick like a god among quadrillions of pathways. One is asked merely to pick the best of 12 options. Each of the 12 is surveyable to human intelligence, if only human intelligence knew
what to look for.
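That option space is small enough to write down explicitly. A sketch in Python (an illustration added here, not part of the essay), naming each face by the colour of its centre tile and marking counter-clockwise quarter-turns with '*', as in the notation introduced later:

```python
# The six faces, named by centre-tile colour (the colour set assumed here is
# the common white/yellow/red/orange/blue/green scheme mentioned in the essay).
FACES = ["w", "y", "r", "o", "b", "g"]

# Each face admits a 90-degree twist in either direction, so from any state
# of the cube exactly 6 * 2 = 12 moves are available.
MOVES = [face + direction for face in FACES for direction in ("", "*")]
```

Surveying a position therefore means weighing just these twelve candidates, not the quadrillions of full paths.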
The two most basic 'rules' of the inductive game are:
1. Thou shalt not break up information.
2. Thou shalt endeavor to make more information.
A word on the subtlety of these basic rules is in order. The first is not comparative. It does not say to break up the least amount of information; it says to break up none. But in practice we
must reduce it to a comparative rule, for in practice all we usually get is apparent information. Modified for fallen human beings, the rule says to break up the least amount of apparent
information and none of what is most likely to be actual information.
The second rule is confessedly comparative. Sometimes no twist can make any even apparent information. Sometimes more than one possible twist will make the same amount of apparent information.
The rule says to try, and to try to make the most information that one can on each twist. But of course all we get again is apparent information. In human language it tells us to maximize what we
are willing to bet is actual information.
Both rules are qualified by a third rule:
3. The shortest path home may wind about. Do not expect information to grow on each twist.
This rule is a warning not to take the challenge of the inductive game as the mere inspection of 12 possible paths per turn. As much as one can, one must have an idea of the shape of the
different paths that diverge in the 12 directions from where one finds oneself. A solution in the inductive game is not a series of individual good guesses, each surmounting twelve to one odds
against success. It is a series of good guesses that form a coherent method of returning home. Masters of the normal game do not need to be told that the shortest path between two states of the
cube usually requires a detour through apparent chaos. If one never expects this and insists on building information on each step, then one will inevitably build accidental information and move
further from home. On the other hand, detours through chaos are much more frequent in the normal game than in the inductive game. (The perceptive reader should already know why.)
In other words, the best move will not always maximize new apparent information, although it will always preserve all actual information. If two possible twists give the same amount of new
apparent information, no other gives more, and neither destroys apparent information, then obviously one must choose between them on other grounds. These 'other grounds' are just the kind that
must be used above the tree line, and for the same reason. They will even have to be used below the tree line, occasionally, to distinguish between two moves when one makes more apparent
information than another, and even when the actuality of the information is not an issue.
The relatively small array of options available on each turn allows ample room for experimentation. If one likes one may try each of the 12 possible twists and see which of the 12 new states
makes the most interesting apparent information and destroys the least apparent information. However, one should experiment with care, and remember that not all apparent information is actual and
that not every correct move will increase information.
Players may be helped in their experimentation if they write down their moves. Then, if they confidently infer that they made a mistake earlier, they may retrace their steps and try again. A
simple notation for this is as follows. Use the first initial of the color of the center tile of the plane twisted; leave it unadorned if one twisted clockwise (while facing that plane); mark it
with an asterisk if one twisted counter-clockwise. For example, "w*" means the white face (the plane with the white tile in the center) was twisted 90 degrees counter-clockwise. The six colors on
most cubes are white, yellow, red, orange, blue, and green. Fortunately, these colors have no common initial letters.
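With this notation, the sequence that undoes a recorded line of play is mechanical to compute: reverse the order of the moves and flip each twist's direction. A small illustrative helper in Python (a sketch, not part of the essay):

```python
def invert(sequence):
    """Return the move sequence that exactly undoes `sequence`.

    Moves follow the notation above: a colour initial for a clockwise
    quarter-turn of that face, with '*' appended for counter-clockwise.
    Undoing a series of twists means reversing each twist and walking
    the sequence backwards.
    """
    undone = []
    for move in reversed(sequence.split()):
        undone.append(move[:-1] if move.endswith("*") else move + "*")
    return " ".join(undone)
```

For example, if the randomizer recorded "w b* r", then invert("w b* r") gives "r* b w*", the retraced path home.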
Self-monitoring and experimentation might be aided even more if one writes down, next to each move made, one's judgment of its probable correctness and the most probable alternative moves. Then
when one infers error and backs up, one may go back to the lowest probability number from the beginning and start from there.
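The suggestion above amounts to a tiny backtracking discipline, which can be sketched in Python (a hypothetical helper, not from the essay): keep the play log as (move, confidence) pairs, and on inferring an error resume from the earliest least-confident entry.

```python
def backtrack_point(log):
    """Given a play log of (move, confidence) pairs, return the index of the
    earliest minimum-confidence move -- the natural place to back up to and
    try an alternative once an error has been inferred downstream."""
    confidences = [confidence for _, confidence in log]
    return confidences.index(min(confidences))
```

With a log such as [("w", 0.9), ("r*", 0.4), ("b", 0.7), ("g*", 0.4)] this returns 1, the first of the two equally doubtful moves.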
If randomizers keep note of the steps they took to randomize the cube, then the same randomized state may be given as a challenge to different players.
Some Rules
The following probabilistic guides (or 'rules') go beyond the basics of the last section. They are based on my experience alone and can undoubtedly be supplemented by others. None should be
followed absolutely. When they should be applied, and when overruled, are themselves subject to probabilistic guides that the player must learn by induction.
I have found it extremely difficult to state all the 'rules' that I use in practice, and extremely interesting to speculate why. In any event, these are not all the rules that even a moderate
beginner knows 'by thumb'. Moreover, no matter how clearly they are stated, they all seem much more complex in writing than when thought, recognized, and applied. The task here is like trying to
tell someone entirely through explicit rules how to recognize the face of your grandmother.
1. Always know how far from home one is supposed to be (assuming that every twist has been right). This is important for distinguishing accidental from actual information. One should be able to
ask just how likely it is that any given configuration of apparent information would have been preserved after x twists.
2. A center tile and edge tile of the same color are very often accidental information. One should presume their adjacency is accidental unless one has corroboration for the opposite theory.
3. A 2-tile configuration of apparent information is much more likely to be actual information if it is adjacent to another 2-tile configuration such that the twist that would destroy the first
would destroy both.
4. A configuration of apparent information is almost certainly accidental if its perimeter is not convex at every point. Important examples are the following: rectangles with missing corners;
strings that turn in a single plane; a meeting of strings from different planes at a vertex; a 3-tile edge string plus the center.
5. Other things being equal, a configuration is more likely to be actual information if it spills over to another face of the cube where it also makes apparent information.
6. In choosing between a 2-tile and a 3-tile string, when one must be destroyed, do not automatically assume that the longer string is more (or less) likely to be the actual information. In my
experience the odds are about equal either way, at least when one is under 8 twists from home. Above 8 twists, the shorter string is more likely to be actual information.
7. In general, the further one is from home, the less any apparent information is likely to be actual information. Similarly, at any stage, the larger a configuration of apparent information,
the less likely it is to be (in every part) actual information, unless one is very close to home.
8. In a configuration of 3 or more tiles of apparent information, do not assume that one accidental adjacency makes the other(s) accidental. Similarly, do not assume that all the adjacencies of
large configurations have the same status.
9. If the next twist has been narrowed down to one plane, and neither twist (clockwise or counter-clockwise) makes any apparent information, or both twists make the same amount of equally likely
information, then one must choose the direction of the twist on other grounds.
● Make the plane to be twisted the top plane. Look at the bottom plane and determine which planes it blocks. If only one plane is not blocked by the bottom plane (e.g. the front plane),
then twist the top plane in whatever direction is needed to allow the front plane to twist (without blockage from the top plane) in the next move.
● If two or more 'side planes' could be twisted for a given top plane, then try each move hypothetically (at least in your head) and compare the increases of apparent information.
● If the next twist has been narrowed down to two or more planes, then apply the method above hypothetically to each eligible plane, and compare the increases in apparent information.
● This procedure is weakest (it gives the lowest probability of the right answer) when the bottom plane is not blocked and could itself move without destroying apparent information. It is stronger when the bottom plane is blocked. It is strongest when the bottom plane is blocked and the top plane is the only movable plane or the most likely plane to be moved.
Craig Rutledge has written an AS/400 program implementing these rules. He's put the source code online for anyone who might be interested. His code recently (March 2001) earned him the title
Games Master for the green screen masses from 400Times. | {"url":"http://dash.harvard.edu/bitstream/handle/1/4747486/suber_cube.htm?sequence=1","timestamp":"2014-04-20T13:26:12Z","content_type":null,"content_length":"20820","record_id":"<urn:uuid:05477fc4-1260-43e8-9de5-79f7ff293bb3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00080-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intermediate Algebra 5th Edition | 9780136007296 | eCampus.com
List Price: $178.67
Only three copies in stock at this price.
In Stock Usually Ships in 24 Hours.
Questions About This Book?
Why should I rent this book?
Renting is easy, fast, and cheap! Renting from eCampus.com can save you hundreds of dollars compared to the cost of new or used books each semester. At the end of the semester, simply ship the book
back to us with a free UPS shipping label! No need to worry about selling it back.
How do rental returns work?
Returning books is as easy as possible. As your rental due date approaches, we will email you several courtesy reminders. When you are ready to return, you can print a free UPS shipping label from
our website at any time. Then, just return the book to your UPS driver or any staffed UPS location. You can even use the same box we shipped it in!
What version or edition is this?
This is the 5th edition with a publication date of 1/23/2008.
What is included with this book?
• The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc.
• The Used copy of this book is not guaranteed to include any supplemental materials. Typically, only the book itself is included.
• The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself.
KEY MESSAGE: Elayn Martin-Gay's developmental math textbooks and video resources are motivated by her firm belief that every student can succeed. Martin-Gay's focus on the student shapes her clear, accessible writing, inspires her constant pedagogical innovations, and contributes to the popularity and effectiveness of her video resources. This revision of Martin-Gay's algebra series continues her focus on students and what they need to be successful.
KEY TOPICS: Real Numbers and Algebraic Expressions; Equations, Inequalities, and Problem Solving; Graphs and Functions; Systems of Linear Equations and Inequalities; Exponents, Polynomials, and Polynomial Functions; Rational Expressions; Rational Exponents, Radicals, and Complex Numbers; Quadratic Equations and Functions; Exponential and Logarithmic Functions; Conic Sections; Sequences, Series, and the Binomial Theorem
MARKET: For all readers interested in intermediate algebra, and for all readers learning or revisiting essential skills in intermediate algebra through the use of lively and up-to-date applications.
Author Biography
An award-winning instructor and best-selling author, Elayn Martin-Gay has taught mathematics at the University of New Orleans for more than 25 years. Her numerous teaching awards include the local
University Alumni Association’s Award for Excellence in Teaching, and Outstanding Developmental Educator at University of New Orleans, presented by the Louisiana Association of Developmental
Prior to writing textbooks, Elayn developed an acclaimed series of lecture videos to support developmental mathematics students in their quest for success. These highly successful videos originally
served as the foundation material for her texts. Today, the videos are specific to each book in the Martin-Gay series. Elayn also pioneered the Chapter Test Prep Video to help students as they
prepare for a test–their most “teachable moment!”
Elayn’s experience has made her aware of how busy instructors are and what a difference quality support makes. For this reason, she created the Instructor-to-Instructor video series. These videos
provide instructors with suggestions for presenting specific math topics and concepts in basic mathematics, Prealgebra, beginning algebra, and intermediate algebra. Seasoned instructors can use them
as a source for alternate approaches in the classroom. New or adjunct faculty may find the CDs useful for review.
Her textbooks and acclaimed video program support Elayn's passion of helping every student to succeed.
Table of Contents
Real Numbers and Algebraic Expressions
Tips for Success in Mathematics
Algebraic Expressions and Sets of Numbers
Operations on Real Numbers
Integrated Review - Algebraic Expressions and Operations on Whole Numbers
Properties of Real Numbers
Equations, Inequalities, and Problem Solving
Linear Equations in One Variable
An Introduction to Problem Solving
Formulas and Problem Solving
Linear Inequalities and Problem Solving
Integrated Review - Linear Equations and Inequalities
Compound Inequalities
Absolute Value Equations
Absolute Value Inequalities
Graphs and Functions
Graphing Equations
Introduction to Functions
Graphing Linear Functions
The Slope of a Line
Equations of Lines
Integrated Review - Linear Equations in Two Variables
Graphing Piecewise-Defined Functions and Shifting and Reflecting Graphs of Functions
Graphing Linear Inequalities
Systems of Linear Equations and Inequalities
Solving Systems of Linear Equations in Two Variables
Solving Systems of Linear Equations in Three Variables
Systems of Linear Equations and Problem Solving
Integrated Review - Systems of Linear Equations
Solving Systems of Equations by Matrices
Solving Systems of Linear Inequalities
Exponents, Polynomials, and Polynomial Functions
Exponents and Scientific Notation
More Work with Exponents and Scientific Notation
Polynomials and Polynomial Functions
Multiplying Polynomials
The Greatest Common Factor and Factoring by Grouping
Factoring Trinomials
Factoring by Special Products
Integrated Review - Operations on Polynomials and Factoring Strategies
Solving Equations by Factoring and Problem Solving
Rational Expressions
Rational Functions and Multiplying and Dividing Rational Expressions
Adding and Subtracting Rational Expressions
Simplifying Complex Fractions
Dividing Polynomials: Long Division and Synthetic Division
Solving Equations Containing Rational Expressions
Integrated Review - Expressions and Equations Containing Rational Expressions
Rational Equations and Problem Solving
Variation and Problem Solving
Rational Exponents, Radicals, and Complex Numbers
Radicals and Radical Functions
Rational Exponents
Simplifying Radical Expressions
Adding, Subtracting, and Multiplying Radical Expressions
Rationalizing Denominators and Numerators of Radical Expressions
Integrated Review - Radicals and Rational Exponents
Radical Equations and Problem Solving
Complex Numbers
Quadratic Equations and Functions
Solving Quadratic Equations by Completing the Square
Solving Quadratic Equations by the Quadratic Formula
Solving Equations by Using Quadratic Methods
Integrated Review - Summary on Solving Quadratic Equations
Nonlinear Inequalities in One Variable
Quadratic Functions and Their Graphs
Further Graphing of Quadratic Functions
Exponential and Logarithmic Functions
The Algebra of Functions; Composite Functions
Inverse Functions
Exponential Functions
Logarithmic Functions
Properties of Logarithms
Integrated Review - Functions and Properties of Logarithms
Common Logarithms, Natural Logarithms, and Change of Base
Exponential and Logarithmic Equations and Applications
Conic Sections
The Parabola and the Circle
Table of Contents provided by Publisher. All Rights Reserved. | {"url":"http://www.ecampus.com/intermediate-algebra-5th-martingay-elayn/bk/9780136007296","timestamp":"2014-04-19T15:44:01Z","content_type":null,"content_length":"70591","record_id":"<urn:uuid:b4754803-b0a8-43a7-be8c-67d735c7895f>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00414-ip-10-147-4-33.ec2.internal.warc.gz"} |
Standing waves on a transmission line
There is a large variety of terminations for rf lines. Each type of termination has a characteristic effect on the standing waves on the line. From the nature of the standing waves, you can determine
the type of termination that produces the waves.
TERMINATION IN Z[0]
Termination in Z[0] (characteristic impedance) will cause a constant reading on an ac meter when it is moved along the length of the line. As illustrated in figure 3-34, view A, the curve, provided
there are no losses in the line, will be a straight line. If there are losses in the line, the amplitude of the voltage and current will diminish as they move down the line (view B). The losses are
due to dc resistance in the line itself.
Figure 3-34. - Effects of various terminations on standing waves.
TERMINATION IN AN OPEN CIRCUIT

In an open-circuited rf line (figure 3-34, view C), the voltage is maximum at the end, but the current is minimum. The distance between two adjacent zero current points is 1/2λ, and the distance between alternate zero current points is 1λ. The voltage is zero at a distance of 1/4λ from the end of the line. This is true at any frequency. A voltage peak occurs at the end of the line, at 1/2λ from the end, and at each 1/2λ thereafter.
TERMINATION IN A SHORT CIRCUIT

On the line terminated in a short circuit, shown in figure 3-34, view D, the voltage is zero at the end and maximum at 1/4λ from the end. The current is maximum at the end, zero at 1/4λ from the end, and alternately maximum and zero every 1/4λ thereafter.
TERMINATION IN CAPACITANCE

When a line is terminated in capacitance, the capacitor does not absorb energy, but returns all of the energy to the circuit. This means there is 100 percent reflection. The current and voltage relationships are somewhat more involved than in previous types of termination. For this explanation, assume that the capacitive reactance is equal to the Z[0] of the line. Current and voltage are in phase when they arrive at the end of the line, but in flowing through the capacitor and the characteristic impedance (Z[0]) connected in series, they shift in phase relationship. Current and voltage arrive in phase and leave out of phase. This results in the standing-wave configuration shown in figure 3-34, view E. The standing wave of voltage is minimum at a distance of exactly 1/8λ from the end. If the capacitive reactance is greater than Z[0] (smaller capacitance), the termination looks more like an open circuit; the voltage minimum moves away from the end. If the capacitive reactance is smaller than Z[0], the minimum moves toward the end.
TERMINATION IN INDUCTANCE

When the line is terminated in an inductance, both the current and voltage shift in phase as they arrive at the end of the line. When X[L] is equal to Z[0], the resulting standing waves are as shown in figure 3-34, view F. The current minimum is located 1/8λ from the end of the line. When the inductive reactance is increased, the standing waves appear closer to the end. When the inductive reactance is decreased, the standing waves move away from the end of the line.
TERMINATION IN A RESISTANCE NOT EQUAL TO Z[0]

Whenever the termination is not equal to Z[0], reflections occur on the line. For example, if the terminating element contains resistance, it absorbs some energy, but if the resistive element does not equal the Z[0] of the line, some of the energy is reflected. The amount of voltage reflected may be found by using the equation:

E[R] = E[i] × (R[L] − Z[0]) / (R[L] + Z[0])

where:

E[R] = the reflected voltage
E[i] = the incident voltage
R[L] = the terminating resistance
Z[0] = the characteristic impedance of the line
If you try different values of R[L] in the preceding equation, you will find that the reflected voltage is equal to the incident voltage only when R[L] equals 0 or is infinitely large. When R[L]
equals Z[0], no reflected voltage occurs. When R[L]is greater than Z[0], E[R] is positive, but less than E[i]. As R[L] increases and approaches an infinite value, E[R] increases and approaches E[i]
in value. When R[L] is smaller than Z[0], E[R] has a negative value. This means that the reflected voltage is of opposite polarity to the incident wave at the termination of the line. As R[L]
approaches zero, E[R] approaches E[i] in value. The smaller the value of E[R], the smaller is the peak amplitude of the standing waves and the higher are the minimum values.
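A quick numerical check of these limiting cases can be sketched as follows, assuming the standard reflection relation E[R] = E[i](R[L] − Z[0])/(R[L] + Z[0]); the function name and sample values are illustrative, not part of the original text.

```python
# Reflected voltage from a resistive termination, assuming the standard
# relation E_R = E_i * (R_L - Z0) / (R_L + Z0).

def reflected_voltage(e_incident, r_load, z0):
    """Return the reflected voltage for a resistive load r_load on a
    line of characteristic impedance z0."""
    return e_incident * (r_load - z0) / (r_load + z0)

e_i, z0 = 100.0, 50.0

print(reflected_voltage(e_i, 50.0, z0))   # matched load: no reflection
print(reflected_voltage(e_i, 1e9, z0))    # near-open: approaches +E_i
print(reflected_voltage(e_i, 0.0, z0))    # short: -E_i, opposite polarity
print(reflected_voltage(e_i, 150.0, z0))  # R_L > Z0: positive, less than E_i
```

The four calls reproduce the behavior described above: zero reflection for a matched load, full reflection (with or without polarity reversal) for open and short terminations, and partial reflection otherwise.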
TERMINATION IN A RESISTANCE GREATER THAN Z[0]

When R[L] is greater than Z[0], the end of the line is somewhat like an open circuit; that is, standing waves appear on the line. The voltage maximum appears at the end of the line and also at
half-wave intervals back from the end. The current is minimum (not zero) at the end of the line and maximum at the odd quarter-wave points. Since part of the power in the incident wave is consumed by
the load resistance, the minimum voltage and current are less than for the standing waves on an open-ended line. Figure 3-34, view G, illustrates the standing waves for this condition.
TERMINATION IN A RESISTANCE LESS THAN Z[0]
When R[L] is less than Z[0], the termination appears as a short circuit. The standing waves are shown in figure 3-34, view H. Notice that the line terminates in a current LOOP (peak) and a voltage
NODE (minimum). The values of the maximum and minimum voltage and current approach those for a shorted line as the value of R[L] approaches zero.
A line does not have to be any particular length to produce standing waves; however, it cannot be an infinite line. Voltage and current must be reflected to produce standing waves. For reflection to
occur, a line must not be terminated in its characteristic impedance. Reflection occurs on lines terminated in opens, shorts, capacitances, and inductances, because no energy is absorbed by the load.
If the line is terminated in a resistance not equal to the characteristic impedance of the line, some energy will be absorbed and the rest will be reflected.
The voltage and current relationships for open-ended and shorted lines are opposite to each other, as shown in figure 3-34, views C and D. The points of maximum and minimum voltage and current are
determined from the output end of the line, because reflection always begins at that end.
Q.26 A nonresonant line is a line that has no standing waves of current and voltage on it and is considered to be flat. Why is this true?
Q.27 On an open line, the voltage and impedance are maximum at what points on the line?
The measurement of standing waves on a transmission line yields information about equipment operating conditions. Maximum power is absorbed by the load when Z[L] = Z[0]. If a line has no standing
waves, the termination for that line is correct and maximum power transfer takes place.
You have probably noticed that the variation of standing waves shows how near the rf line is to being terminated in Z[0]. A wide variation in voltage along the length means a termination far from Z
[0]. A small variation means termination near Z[0]. Therefore, the ratio of the maximum to the minimum is a measure of the perfection of the termination of a line. This ratio is called the STANDING-WAVE RATIO (swr) and is always a number equal to or greater than 1. For example, a ratio of 1:1 describes a line terminated in its characteristic impedance (Z[0]).
Voltage Standing-Wave Ratio
The ratio of maximum voltage to minimum voltage on a line is called the VOLTAGE STANDING-WAVE RATIO (vswr). Therefore:

vswr = |E[max]| / |E[min]|
The vertical lines in the formula indicate that the enclosed quantities are absolute and that the two values are taken without regard to polarity. Depending on the nature of the standing waves, the
numerical value of vswr ranges from a value of 1 (Z[L] = Z[0], no standing waves) to an infinite value for theoretically complete reflection. Since there is always a small loss on a line, the minimum
voltage is never zero and the vswr is always some finite value. However, if the vswr is to be a useful quantity, the power losses along the line must be small in comparison to the transmitted power.
Power Standing-Wave Ratio
The square of the voltage standing-wave ratio is called the POWER STANDING-WAVE RATIO (pswr). Therefore:

pswr = (vswr)² = E[max]² / E[min]²
This ratio is useful because the instruments used to detect standing waves react to the square of the voltage. Since power is proportional to the square of the voltage, the ratio of the square of the
maximum and minimum voltages is called the power standing-wave ratio. In a sense, the name is misleading because the power along a transmission line does not vary.
Current Standing-Wave Ratio
The ratio of maximum to minimum current along a transmission line is called the CURRENT STANDING-WAVE RATIO (iswr). Therefore:

iswr = I[max] / I[min]
This ratio is the same as that for voltages. It can be used where measurements are made with loops that sample the magnetic field along a line. It gives the same results as vswr measurements.
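Since all three ratios are governed by the size of the reflection, they can be computed together. The sketch below assumes the common relations Γ = (R[L] − Z[0])/(R[L] + Z[0]) and swr = (1 + |Γ|)/(1 − |Γ|) for a resistive load; the function name and sample values are illustrative.

```python
# vswr, pswr, and iswr for a resistive load, assuming the usual
# reflection-coefficient relations: gamma = (R_L - Z0)/(R_L + Z0),
# vswr = (1 + |gamma|)/(1 - |gamma|), pswr = vswr**2, iswr = vswr.

def swr_ratios(r_load, z0):
    gamma = (r_load - z0) / (r_load + z0)
    # A perfect open or short (|gamma| = 1) would give an infinite swr;
    # this sketch assumes a finite, nonzero resistive load.
    vswr = (1 + abs(gamma)) / (1 - abs(gamma))
    pswr = vswr ** 2
    iswr = vswr  # the current ratio equals the voltage ratio
    return vswr, pswr, iswr

vswr, pswr, iswr = swr_ratios(100.0, 50.0)
print(vswr, pswr, iswr)  # a 100-ohm load on a 50-ohm line: vswr of 2, pswr of 4
```

A matched load (R[L] = Z[0]) gives Γ = 0 and all three ratios equal to 1, consistent with the flat-line case described earlier.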
Q.28 At what point on an open-circuited rf line do voltage peaks occur?
Q.29 What is the square of the voltage standing-wave ratio called?
Q.30 What does vswr measure? | {"url":"http://www.tpub.com/neets/book10/41k.htm","timestamp":"2014-04-17T15:26:35Z","content_type":null,"content_length":"25029","record_id":"<urn:uuid:14d8d2cd-1f73-414d-97ed-d2edb8f43346>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
On Summability of Spectral Expansions Corresponding to the Sturm-Liouville Operator
International Journal of Mathematics and Mathematical Sciences
Volume 2012 (2012), Article ID 843562, 13 pages
Review Article
Moscow State University of Instrument Engineering and Computer Science, Stromynka 20, Moscow 107996, Russia
Received 26 March 2012; Revised 23 May 2012; Accepted 27 May 2012
Academic Editor: H. Srivastava
Copyright © 2012 Alexander S. Makin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
We study the completeness property and the basis property of the root function system of the Sturm-Liouville operator defined on the segment [0, 1]. All possible types of two-point boundary
conditions are considered.
1. Introduction
The spectral theory of two-point differential operators was begun by Birkhoff in his two papers [1, 2] of 1908 where he introduced regular boundary conditions for the first time. It was continued by
Tamarkin [3, 4] and Stone [5, 6]. Afterwards their investigations were developed in many directions. There is an enormous literature related to the spectral theory outlined above, and we refer to [7–
18] and their extensive reference lists for this activity.
The present communication is a brief survey of results in the spectral theory of the Sturm-Liouville operator: with two-point boundary conditions where the are linearly independent forms with
arbitrary complex-valued coefficients and is an arbitrary complex-valued function of class .
Our main focus is on the non-self-adjoint case. We will study the completeness property and the basis property of the root function system of operator (1.1), (1.2). The convergence of spectral
expansions is investigated only in classical sense; that is, the question about the summability of divergent series by a generalized method is not considered.
2. Preliminaries
Let us present briefly the main definitions and facts which will be used in what follows.
Let be a Banach space with the norm , and let be its dual with the norm .
A system of elements is said to be closed in if the linear span of this system is everywhere dense in ; that is, any element of the space can be approximated by a linear combination of elements of
this system with any accuracy in the norm of the space .
A system of elements is said to be minimal in if none of its elements belongs to the closure of the linear span of the other elements of this system.
Theorem 2.1 (see [19]). A system is minimal if and only if there exists a biorthogonal system dual to it, that is, a system of linear functionals from such that for all . Moreover, if the initial
system is simultaneously closed and minimal in , then the system biorthogonally dual to it is uniquely defined.
We say that a system is uniformly minimal in , if there exists such that for all , where is the closure of the linear span of all elements with serial numbers .
Theorem 2.2 (see [19]). A closed and minimal system is uniformly minimal in if and only if:
A system forms a basis of the space if, for any element , there exists a unique expansion of it in the elements of the system, that is, the series convergent to in the norm of the space . Any basis
is a closed and minimal system in , and, therefore, we can uniquely find its biorthogonal dual system , and hence the expansion of any element of with respect to the basis coincides with its
biorthogonal expansion, that is, for all .
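As a reminder, with generic symbols $\{f_n\}$ for a basis of $B$ and $\{g_n\} \subset B^{*}$ for its biorthogonal dual (these symbols are illustrative, since the article's own notation is not reproduced here), the relations just described read:

```latex
g_n(f_k) = \delta_{nk} \quad (n, k \ge 1), \qquad
f = \sum_{n=1}^{\infty} g_n(f)\, f_n \quad \text{for every } f \in B .
```

In particular, the coefficients of any element with respect to a basis are uniquely determined by the dual system.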
Any basis in is a uniformly minimal system, and, therefore, (2.2) holds. However, it is well known that a closed and uniformly minimal system may not form a basis in .
A system biorthogonally dual to a basis in a reflexive Banach space itself forms a basis in .
A basis in the space is said to be an unconditional basis, if it remains a basis for any permutation for its elements.
In a Hilbert space , along with the concept of an unconditional basis, we have the close concept of a Riesz basis. A system is called a Riesz basis of the space if there exists a bounded invertible
operator such that the system forms an orthonormal basis in .
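Equivalently (a standard characterization, stated here with generic symbols $e_n$, $f_n$, $T$): if $\{e_n\}$ is an orthonormal basis of a Hilbert space $H$ and $f_n = T e_n$ with $T$ bounded and invertible, then there exist constants $0 < c \le C$ such that

```latex
c \sum_{n} |a_n|^{2}
\;\le\; \Bigl\| \sum_{n} a_n f_n \Bigr\|_{H}^{2}
\;\le\; C \sum_{n} |a_n|^{2}
\qquad \text{for all finite sequences } (a_n).
```

This two-sided estimate may be taken as an equivalent definition of a Riesz basis.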
Theorem 2.3 (see [20]). A system forms a Riesz basis of the space if and only if it is an unconditional basis almost normalized in , that is,
A system is said to be complete in if the equality for all implies . In a Hilbert space, the properties of completeness and closeness of a system are equivalent.
We consider the operator as a linear operator on defined by (1.1) with the domain being absolutely continuous on , , .
By an eigenfunction of the operator corresponding to an eigenvalue , we mean any function which satisfies the equation: almost everywhere on .
By an associated function of the operator of order corresponding to the same eigenvalue and the eigenfunction , we mean any function which satisfies the equation: almost everywhere on . One can also
say that an eigenfunction is an associated function of zeroth order. The set of all eigen- and associated functions (or root functions) corresponding to the same eigenvalue together with the function
forms a root linear manifold. This manifold is called a root subspace if its dimension is finite.
Let the set of the eigenvalues of the operator be countable and all root linear manifolds root subspaces. Let us choose a basis in each root subspace. Any system obtained as the union of chosen bases
of all the root subspaces is called a system of eigen- and associated functions (or root function system) of the operator .
The main purpose of this paper is to study the basis property of the root function system of the operator . Before starting our investigation, we must verify completeness of the root function system
in .
It is convenient to write conditions (1.2) in the matrix form: and denote the matrix composed of the ith and jth columns of by ; we set .
Denote by the fundamental system of solutions to (1.1) with the initial conditions , . The eigenvalues of problem (1.1), (1.2) are the roots of the characteristic determinant: Simple calculations
show that It is easily seen that if then the characteristic determinant of the corresponding problem (1.1), (1.2) has the form:
Boundary conditions (1.2) are called nondegenerate if they satisfy one of the following relations:(1), (2), (3).
Evidently, boundary conditions (1.2) are nondegenerate if and only if .
Notice that for any nondegenerate boundary conditions an asymptotic representation for the characteristic determinant as one can find in [10].
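As a simple worked illustration of the characteristic determinant, adopt the common convention that $s(x,\lambda)$ denotes the solution of (1.1) with $s(0,\lambda)=0$, $s'(0,\lambda)=1$ (this convention is an assumption here, since the article's initial data for the fundamental system are not shown). For $q \equiv 0$ and the Dirichlet conditions $y(0)=y(1)=0$ one has

```latex
\Delta(\lambda) \;=\; s(1,\lambda) \;=\; \frac{\sin \mu}{\mu},
\qquad \mu = \sqrt{\lambda},
```

whose zeros are $\mu = \pi n$, that is, $\lambda_n = (\pi n)^2$, $n = 1, 2, \ldots$, recovering the familiar Dirichlet spectrum.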
Theorem 2.4 (see [10]). For any nondegenerate conditions, the spectrum of problem (1.1), (1.2) consists of a countable set of eigenvalues with only one limit point , and the dimensions of the
corresponding root subspaces are bounded by one constant. The system of eigen- and associated functions is complete and minimal in ; hence, it has a biorthogonal dual system .
For convenience, we introduce numbers , where is the square root of with nonnegative real part.
It is known that nondegenerate conditions can be divided into three classes:(1)strengthened regular conditions;(2)regular but not strengthened regular conditions;(3)irregular conditions.
The definitions are given in [8]. These three cases should be considered separately.
3. Strengthened Regular Conditions
Let boundary conditions (1.2) belong to class (1). According to [8], this is equivalent to the fulfillment of one of the following conditions: It is well known that all but finitely many eigenvalues are simple (in other words, they are asymptotically simple), and the number of associated functions is finite. Moreover, the spectrum is separated in the sense that there exists a constant such that for any sufficiently large different numbers and , we have
Theorem 3.1. The system of root functions forms a Riesz basis in .
This statement was proved in [21, 22] and [9, Chapter XIX].
Class (1) contains many types of boundary conditions, for example, the Dirichlet boundary conditions , the Neumann boundary conditions , the Dirichlet-Neumann boundary conditions and others.
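For instance, for the Dirichlet conditions with $q \equiv 0$ the system of root functions consists of eigenfunctions only and is even orthonormal in $L_2(0,1)$, so the Riesz basis property asserted in Theorem 3.1 can be seen directly:

```latex
y_n(x) = \sqrt{2}\, \sin \pi n x,
\qquad \lambda_n = (\pi n)^2,
\qquad n = 1, 2, \ldots .
```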
4. Regular but Not Strengthened Regular Conditions
Let boundary conditions belong to class (2). According to [8], this is equivalent to the fulfillment of the conditions where . It is well known [10] that the eigenvalues of problem (1.1), (1.2) form
two series: (if ) and (if ). Here, in both cases, and . We denote . It follows from [8] that asymptotic formulas (4.2) and (4.3) can be refined. Specifically, Obviously, ; that is, and become
infinitely close to each other as . If for all, except possibly a finite set, then the spectrum of problem (1.1), (1.2) is called asymptotically multiple. If the set of multiple eigenvalues is
finite, then the spectrum of problem (1.1), (1.2) is called asymptotically simple.
There exist numerous examples when the number of multiple eigenvalues is finite or infinite, and the total number of associated functions is finite or infinite also. We see that separation condition
(3.2) never holds. Depending on the particular form of the boundary conditions and the potential , the system of root functions may have or may not have the basis property [17, 22, 23], and even for
fixed boundary conditions, this property may appear or disappear under arbitrarily small variations of the coefficient in the corresponding metric [24]. Thus, the considered case is much more
complicated than the previous one, so we will study it in detail.
For any problem (1.1), (1.2) let denote the set of potentials from the class such that the system of root functions forms a Riesz basis in , .
To analyze this class of problems, it is reasonable [12] to divide conditions (1.2) satisfying (4.1) into four types:(I);(II);(III);(IV).
The eigenvalue problem for (1.1) with boundary conditions of type I, II, III, or IV is called the problem of type I, II, III, or IV, respectively.
At first we consider the problems of type I. It was shown in [12] that any boundary conditions of type I are equivalent to the boundary conditions specified by the matrix: that is, to periodic or
antiperiodic boundary conditions. These boundary conditions are self-adjoint.
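The delicacy of this class is already visible for $q \equiv 0$ with periodic conditions $y(0)=y(1)$, $y'(0)=y'(1)$: apart from the simple eigenvalue $\lambda = 0$, every eigenvalue is double,

```latex
\lambda_n = (2\pi n)^2,
\qquad y_{n,1}(x) = \cos 2\pi n x, \quad y_{n,2}(x) = \sin 2\pi n x,
\qquad n = 1, 2, \ldots,
```

so the two series of eigenvalues coincide and the separation condition (3.2) fails for every pair.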
We set .
Theorem 4.1 (see [25]). Suppose that , where , and for . If there exists an such that for all , then the system of functions is a Riesz basis in .
If there exists a sequence of such that , and then the system of functions is not a basis in .
It is easy to verify that if where , for , and for , then the system of functions is not a basis in .
The following theorem is an easy corollary to Theorem 4.1.
Theorem 4.2 (see [25]). The sets and are everywhere dense in .
Theorem 4.1 was generalized in [26]. Recently (see [27–29] and their extensive reference lists), a very nice theory of the problems of type I was built by a number of authors. In particular, in papers [28, 29] a criterion for the Riesz basis property was established. The criterion is formulated in terms of periodic (resp., antiperiodic) and Dirichlet eigenvalues. Also, in [30] the criterion for these boundary value problems to have the Riesz basis property was established in terms of the potential, provided that it is a special trigonometric polynomial. The latter criterion has an advantage since it is given in terms of the coefficients of the potential.
Let us consider the problems of type II. It was also established in [12] that any boundary conditions of type II are equivalent to the boundary conditions specified by the matrix: where in both
cases. If is a real number and is a real function, then the corresponding boundary value problem is self-adjoint.
Theorem 4.3 (see [31]). If and , then the system forms a Riesz basis in , and the spectrum is asymptotically simple.
Denote by the biorthogonal dual system. The key point in the proof of Theorem 4.3 is obtaining the estimate: which is valid for any number . It follows from (4.10) and [32] that the system forms a
Riesz basis in .
A comprehensive description of boundary conditions of types III and IV was given in [12]. In particular, it is known that all of them are non-self-adjoint.
At first we consider the problems of type III. According to [12], any boundary conditions of type III are equivalent to the boundary conditions determined by the matrix: where either , , and ; , ,
and ; The sign is always upper if and lower if .
Let us consider the problems of type IV. According to [12], any boundary conditions of type IV are equivalent to the boundary conditions determined by the matrix: where either , , , and ; , , , and ;
or The sign is always upper if and lower if .
Theorem 4.4 (see [31]). If , then the system of root functions of problem (1.1), (1.2) is a Riesz basis in if and only if the spectrum is asymptotically multiple.
Thus, we have established that for problems of types III and IV, the question about the basis property for the system of eigen- and associated functions is reduced to the question about asymptotic
multiplicity of the spectrum. The presence of this property depends essentially on the particular form of the boundary conditions and the function .
Theorem 4.5 (see [33, 34]). If , then, for any function and any , there exists a function such that and problem (1.1), (1.2) with the potential has an asymptotically multiple spectrum.
For and , a similar proposition was deduced in [35].
Theorems 4.2, 4.3, and 4.5 and the results of [36] imply that the whole class of regular but not strengthened regular boundary conditions splits into two subclasses (a) and (b). Subclass (a)
coincides with the second type of boundary conditions and is characterized by the fact that the system of root functions of problem (1.1), (1.2) with boundary conditions from this subclass forms a
Riesz basis in for any potential ; that is, , . We will see below that boundary conditions from the subclass (a) are the only boundary conditions (in addition to strengthened regular ones) that
ensure the Riesz basis property of the system of root functions for any potential .
Subclass (b) contains the remaining regular but not strengthened regular boundary conditions. An entirely different situation takes place in this case. For any problem with boundary conditions from
this subclass, the sets and are dense everywhere in .
For problems of types III and IV with an arbitrary potential the following theorem is valid.
Theorem 4.6 (see [34]). Each root subspace contains one eigenfunction and, possibly, associated functions.
By Theorem 4.5, for problems of types III and IV, the set of potentials that ensure an asymptotically multiple spectrum is dense everywhere in . Therefore, it follows from Theorem 4.3 that we have
discovered a new wide class of eigenvalue problems for the Sturm-Liouville operator that have an infinite number of associated functions.
5. Irregular Conditions
Let boundary conditions (1.2) belong to class (3). According to [8, 12], this is equivalent to the fulfillment of one of the following conditions:
According to [12], any boundary conditions of the considered class are equivalent to the boundary conditions determined by the matrix:
In case (3), as well as in case (1), all but finitely many eigenvalues are simple, the number of associated functions is finite, and separation condition (3.2) holds. However, the system never forms
even a usual basis in , because as . Here is the biorthogonal dual system. This case was investigated in [5, 6, 37].
6. Degenerate Conditions
Let boundary conditions (1.2) be degenerate. According to [10, 12], this is equivalent to the fulfillment of the following conditions:
According to [12], any boundary conditions of the considered class are equivalent to the boundary conditions determined by the matrix: If in the first case then for any potential , we have the
initial value problem (the Cauchy problem) which has no eigenvalues. The same situation takes place in the second case.
Further, we will consider the first case if . Then the boundary conditions can be written in a more explicit form: Simple calculations show that if and then any is an eigenvalue of infinite multiplicity.
This abnormal example illustrates the difficulty of investigation of problems with boundary conditions of the considered class.
If then [38, 39] the root function system is complete in . Let if and if . One can calculate that the characteristic determinant . This, together with [38, 39], implies that the root function system
is not complete in . We see that, depending on the potential, the system of root functions may or may not have the completeness property; moreover, this property may appear or disappear under arbitrarily small variations of the coefficient in the corresponding metric even for fixed boundary conditions.
Notice, that the most general results on completeness of the root function system of problem (1.1), (6.3) were obtained in [39]. The main result of the mentioned paper is:
Theorem 6.1 (see [39]). If for some and , then the system of root functions is complete in .
Recently, it was proved in [40] that the root function system never forms an unconditional basis in if multiplicities of the eigenvalues are uniformly bounded by some constant. Moreover, under the
condition mentioned above, it was established there that if the eigen- and associated function system of general ordinary differential operator with two-point boundary conditions forms an
unconditional basis then the boundary conditions are regular. Article [40] was published in 2006. At that time, it was unknown whether there exists a potential providing unbounded growth of
multiplicities of the eigenvalues. However, in 2010 an example of a potential for which the characteristic determinant has roots of arbitrarily high multiplicity was constructed in [41]. Hence, the corresponding root function system contains associated functions of arbitrarily high order.
Denote by the eigenvalues of operator (1.1) with boundary conditions (6.3). Let denote multiplicity of the corresponding eigenvalue.
Theorem 6.2 (see [42]). If then the system is not a basis in .
The following assertion is a trivial corollary of Theorem 6.2.
If the system is a basis in then
Clearly, since Theorem 6.2 contains supplementary condition (6.4), it does not give the definitive solution of the basis property problem. If this condition does not hold then the mentioned problem
has not been solved. Moreover, it is unknown whether there exists a potential such that
7. Conclusion
In this section, we present Table 1, which summarizes the spectral properties outlined above for operator (1.1), (1.2). The second column indicates the classification of the case depending on the type of boundary conditions (SR: strengthened regular; WR: weakly regular, that is, regular but not strengthened regular; IR: irregular; DEG: degenerate). YES/NO means that the indicated property may appear or disappear under variation of the coefficient ; ?/NO means that it has been proved that for a subset of potentials the property does not hold, while an example in which the property holds is unknown; thus, the definitive solution has not been obtained.
Acknowledgment

This work was supported by the Russian Foundation for Basic Research, Project no. 10-01-411.
References

1. G. D. Birkhoff, “On the asymptotic character of the solutions of certain linear differential equations containing a parameter,” Transactions of the American Mathematical Society, vol. 9, no. 2, pp. 219–231, 1908.
2. G. D. Birkhoff, “Boundary value and expansion problems of ordinary linear differential equations,” Transactions of the American Mathematical Society, vol. 9, no. 4, pp. 373–395, 1908.
3. J. Tamarkine, “Sur quelques points de la théorie des équations différentielles linéaires ordinaires et sur la généralisation de la série de Fourier,” Rendiconti del Circolo Matematico di Palermo, vol. 34, no. 1, pp. 345–382, 1912.
4. J. Tamarkin, “Some general problems of the theory of ordinary linear differential equations and expansion of an arbitrary function in series of fundamental functions,” Mathematische Zeitschrift, vol. 27, no. 1, pp. 1–54, 1927.
5. M. H. Stone, “A comparison of the series of Fourier and Birkhoff,” Transactions of the American Mathematical Society, vol. 29, no. 4, pp. 695–761, 1926.
6. M. H. Stone, “Irregular differential systems of order two and the related expansion problems,” Transactions of the American Mathematical Society, vol. 29, no. 1, pp. 23–53, 1927.
7. E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, NY, USA, 1955.
8. M. A. Naĭmark, Linear Differential Operators, Nauka, Moscow, Russia, 1969; English translation: Ungar, New York, NY, USA, 1967.
9. N. Dunford and J. T. Schwartz, Linear Operators, Part III, John Wiley & Sons, New York, NY, USA, 1971.
10. V. A. Marchenko, Sturm-Liouville Operators and Their Applications, Naukova Dumka, Kiev, Ukraine, 1977; English translation: Birkhäuser, Basel, Switzerland, 1986.
11. P. Lang and J. Locker, “Spectral theory of two-point differential operators determined by $-{D}^{2}$. I. Spectral properties,” Journal of Mathematical Analysis and Applications, vol. 141, no. 2, pp. 538–558, 1989.
12. P. Lang and J. Locker, “Spectral theory of two-point differential operators determined by $-{D}^{2}$. II. Analysis of cases,” Journal of Mathematical Analysis and Applications, vol. 146, no. 1, pp. 148–191, 1990.
13. J. Locker, “The spectral theory of second order two-point differential operators. I. A priori estimates for the eigenvalues and completeness,” Proceedings of the Royal Society of Edinburgh Section A, vol. 121, no. 3-4, pp. 279–301, 1992.
14. J. Locker, “The spectral theory of second order two-point differential operators. II. Asymptotic expansions and the characteristic determinant,” Journal of Differential Equations, vol. 114, no. 1, pp. 272–287, 1994.
15. J. Locker, “The spectral theory of second order two-point differential operators. III. The eigenvalues and their asymptotic formulas,” The Rocky Mountain Journal of Mathematics, vol. 26, no. 2, pp. 679–706, 1996.
16. J. Locker, “The spectral theory of second order two-point differential operators. IV. The associated projections and the subspace ${S}_{\infty }\left(L\right)$,” The Rocky Mountain Journal of Mathematics, vol. 26, no. 4, pp. 1473–1498, 1996.
17. V. A. Il'in and L. V. Kritskov, “Properties of spectral expansions corresponding to non-self-adjoint differential operators,” Journal of Mathematical Sciences, vol. 116, no. 5, pp. 3489–3550, 2003.
18. J. Locker, Spectral Theory of Non-Self-Adjoint Two-Point Differential Operators, vol. 192 of Mathematical Surveys and Monographs, North-Holland, Amsterdam, The Netherlands, 2003.
19. S. G. Kreĭn, Functional Analysis, Nauka, Moscow, Russia, 1972.
20. I. Ts. Gokhberg and M. G. Krein, Introduction to the Theory of Linear Not Self-Adjoint Operators, Nauka, Moscow, Russia, 1965.
21. V. P. Mihaĭlov, “On Riesz bases in ${ℒ}_{2}\left(0,1\right)$,” Doklady Akademii Nauk SSSR, vol. 144, no. 5, pp. 981–984, 1962 (Russian).
22. G. M. Kesel'man, “On the unconditional convergence of eigenfunction expansions of certain differential operators,” Izvestija Vysših Učebnyh Zavedeniĭ Matematika, vol. 39, no. 2, pp. 82–93, 1964.
23. P. W. Walker, “A nonspectral Birkhoff-regular differential operator,” Proceedings of the American Mathematical Society, vol. 66, no. 1, pp. 187–188, 1977.
24. V. A. Il'in, “On a connection between the form of the boundary conditions and the basis property and the property of equiconvergence with a trigonometric series of expansions in root functions of a nonselfadjoint differential operator,” Differentsial'nye Uravneniya, vol. 30, no. 9, pp. 1516–1529, 1994 (Russian).
25. A. S. Makin, “On the convergence of expansions in root functions of a periodic boundary value problem,” Doklady Mathematics, vol. 73, no. 1, pp. 71–76, 2006; translation from Doklady Akademii Nauk, vol. 406, no. 4, pp. 452–457, 2006 (Russian).
26. O. A. Veliev and A. A. Shkalikov, “On the Riesz basis property of eigen- and associated functions of periodic and anti-periodic Sturm-Liouville problems,” Mathematical Notes, vol. 85, no. 5-6,
pp. 671–686, 2009. View at Publisher · View at Google Scholar
27. F. Gesztesy and V. Tkachenko, “A criterion for Hill operators to be spectral operators of scalar type,” Journal d'Analyse Mathématique, vol. 107, pp. 287–353, 2009. View at Publisher · View at
Google Scholar · View at Zentralblatt MATH
28. P. Djakov and B. Mytyagin, “Criteria for existence of Riesz bases consisting of root functions of Hill and 1D Dirac operators,” http://arxiv.org/abs/1106.5774.
29. F. Gesztesy and V. Tkachenko, “A Schauder and Riesz basis criterion for non-self-adjoint Schrödinger operator with periodic and anti-periodic boundary conditions,” Journal of Differential
Equations, vol. 253, no. 2, pp. 400–437, 2012.
30. P. Djakov and B. Mityagin, “Convergence of spectral decompositions of Hill operators with trigonometric polynomial potentials,” Mathematische Annalen, vol. 351, no. 3, pp. 509–540, 2011. View at
Publisher · View at Google Scholar
31. A. S. Makin, “On the basis property of systems of root functions of regular boundary value problems for the Sturm-Liouville operator,” Differential Equations, vol. 42, no. 12, pp. 1717–1728, 2006
(Russian), translation from Differentsial'nye Uravneniya, vol. 42, no. 12, pp.1646–1656, 2006. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
32. V. A. Il'in, “Unconditional basis property on a closed interval of systems of eigen- and associated functions of a second-order differential operator,” Doklady Akademii Nauk SSSR, vol. 273, no.
5, pp. 1048–1053, 1983 (Russian).
33. A. S. Makin, “Inverse problems of spectral analysis for the Sturm-Liouville operator with regular boundary conditions. I,” Differential Equations, vol. 43, no. 10, pp. 1364–1375, 2007 (Russian),
translation from Differentsial'nye Uravneniya, vol. 43, no. 10, pp.1334–1345, 2007. View at Zentralblatt MATH
34. A. S. Makin, “Inverse problems of spectral analysis of the Sturm-Liouville operator with regular boundary conditions. II,” Differential Equations, vol. 43, no. 12, pp. 1668–1678, 2007 (Russian),
translation from Differentsial'nye Uravneniya, vol. 43, no. 12, pp. 1626–1636, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
35. V. A. Tkachenko, “Spectral analysis of the nonselfadjoint Hill operator,” Doklady Akademii Nauk SSSR, vol. 322, no. 2, pp. 248–252, 1992.
36. A. S. Makin, “On a class of boundary value problems for the Sturm-Liouville operator,” Differential Equations, vol. 35, no. 8, pp. 1067–1076, 1999 (Russian), translation from Differentsial'nye
Uravneniya, vol. 35, no. 8, pp. 1058–1066, 1999.
37. S. Homan, Second-order linear dierential operators defined by irregular boundary conditions [Ph.D. thesis], Yale University, 1957.
38. M. M. Malamud, “On the completeness of a system of root vectors of the Sturm-Liouville operator with general boundary conditions,” Doklady Akademii Nauk, vol. 419, no. 1, pp. 19–22, 2008
(Russian). View at Publisher · View at Google Scholar
39. M. M. Malamud, “On the completeness of a system of root vectors of the Sturm-Liouville operator with general boundary conditions,” Functional Analysis and Its Applications, vol. 42, no. 3, pp.
198–204, 2008. View at Publisher · View at Google Scholar
40. A. Minkin, “Resolvent growth and Birkhoff-regularity,” Journal of Mathematical Analysis and Applications, vol. 323, no. 1, pp. 387–402, 2006. View at Publisher · View at Google Scholar · View at
Zentralblatt MATH
41. A. S. Makin, “Characterization of the spectrum of the Sturm-Liouville operator with irregular boundary conditions,” Differential Equations, vol. 46, no. 10, pp. 1427–1437, 2010 (Russian),
translation from Differentsial'nye Uravneniya, vol. 46, no. 10, pp.1421–1432, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
42. A. S. Makin, “On divergence of spectral expansions corresponding to the Sturm-Liouville operator with degenerate boundary conditions,” Differential Equations. In press. | {"url":"http://www.hindawi.com/journals/ijmms/2012/843562/","timestamp":"2014-04-16T21:56:43Z","content_type":null,"content_length":"433387","record_id":"<urn:uuid:0be034da-eeb6-4091-840b-e5fca8d83e1d>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00653-ip-10-147-4-33.ec2.internal.warc.gz"} |