content
stringlengths
86
994k
meta
stringlengths
288
619
Derivative question April 24th 2010, 06:47 AM #1 Apr 2010 Derivative question Hello everyone, seeing as I'm not a natural English speaker, I'll try my best to express my problem in English terms, fingers crossed The image of the function: is a line 'l'. a) write an equation of the straight line that is tangent to 'l' in the abcissa point 2. b)determine the points of line 'l' in which the tangent is parallel to the straight line of the following equation: I hope this sounds ok'ish to you guys, I'm not very familiar with English maths terms. Anyway, a) was easy enough, I reached the conclusion that the answer is -3. b) is where I don't know where to turn, I'm still learning this basic maths stuff (I've had no high school maths education before reaching college), so there are still a few topics that I need to Would anyone be kind of enough to help me out here step by step? If it helps, I have the final answer for b): $(1, -3)$ and $(\frac53, -\frac{57}{27})$ Apologies if this is in the wrong section. "curve l" would be better-the word "line" implies straight. a) write an equation of the straight line that is tangent to 'l' in the abcissa point 2. b)determine the points of line 'l' in which the tangent is parallel to the straight line of the following equation: I hope this sounds ok'ish to you guys, I'm not very familiar with English maths terms. Anyway, a) was easy enough, I reached the conclusion that the answer is -3. The problem asks for an "equation of a the straight line". "-3" is not an equation! The equation of the tangent line will be y= f'(2)(x- 2)+ f(2). b) is where I don't know where to turn, I'm still learning this basic maths stuff (I've had no high school maths education before reaching college), so there are still a few topics that I need to Should one of those "1"s be a "y"? What you have written would be a vertical line- and this graph never has vertical tangent line. If it was y+ 4x+ 1= 0 (or 1+ 4x+ y= 0), then the straight line has slope -4. Then tangent to the curve will be parallel to that if f'(x)= -4. Would anyone be kind of enough to help me out here step by step? If it helps, I have the final answer for b): $(1, -3)$ and $(\frac53, -\frac{57}{27})$ Apologies if this is in the wrong section. By the way, your English is excellent- far better than my [Put the language of your choice here]! Hey HallsofIvy, you are totally right, the right expression is y+ 4x+ 1= 0 What I got from a) (following the derivative rules) was -3x-1, this answer was confirmed on the textbook answers, is this wrong then? Could you maybe explain to me b) in more detail please? Thank you for your time. Just giving this a lil bump, any help would be appreciated. April 24th 2010, 12:38 PM #2 MHF Contributor Apr 2005 April 24th 2010, 12:48 PM #3 Apr 2010 April 27th 2010, 07:59 AM #4 Apr 2010
{"url":"http://mathhelpforum.com/calculus/141069-derivative-question.html","timestamp":"2014-04-16T16:20:45Z","content_type":null,"content_length":"41682","record_id":"<urn:uuid:60a933ef-2a3d-4b40-9b8d-3c9e33558afc>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
R on the Raspberry Pi I’m interested in running R on the Raspberry Pi, and on Raspbian in particular. There are loads of Debian packages for R, and I’m hoping that many of these find there way into Raspbian eventually. Right now it is possible to install and run R from Raspbian, but relatively few packages are available. However, the package r-base can be installed, and that is enough to get up and running with a basic R installation. So, % sudo apt-get install r-base % R should be enough to get started. Indeed, here’s a little Raspbian session to illustrate R running on the Pi: pi@raspberrypi ~/src/r $ uname -a Linux raspberrypi 3.1.9+ #168 PREEMPT Sat Jul 14 18:56:31 BST 2012 armv6l GNU/Linux pi@raspberrypi ~/src/r $ R R version 2.15.1 (2012-06-22) -- "Roasted Marshmallows" Copyright (C) 2012 The R Foundation for Statistical Computing ISBN 3-900051-07-0Platform: arm-unknown-linux-gnueabihf (32-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. Natural language support but running in an English locale R is a collaborative project with many contributors. Type 'contributors()' for more information and'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > rnorm(5) [1] -1.8385888 -1.1114294 0.7943391 1.0070076 0.7702747> Nice! Base graphics such as scatter plots and histograms all work fine, and can be piped to a remote X server if needed. So even without all the add-on packages it is a perfectly reasonable platform for basic data analysis. To benchmark it, I used my standard Gibbs sampling script, gibbs.R for (i in 1:N) { for (j in 1:thin) { which I can run and time from the linux command line with % time Rscript gibbs.R > /dev/null Unfortunately, this takes over 400 minutes, which is around 3 times slower than the equivalent python benchmarking script that I have run on Raspbian. On Intel, R is around half the speed of python, so there’s a bit of a gap there, but actually python runs slower than it should on the Pi anyway. Comparing against R on Intel, on my fast i7 laptop, this R script takes around 7 minutes, and on my Atom based netbook, it takes around 57 minutes. This is consistent with my other findings – namely that the speed difference between C and higher level languages is greater on the Pi than on Intel. Nevertheless, for many basic data analysis tasks, speed isn’t that much of an issue, and it’s certainly going to be very convenient to have R on the Pi. 3 thoughts on “R on the Raspberry Pi” 1. Anonymous UPDATE: Updated post to reflect the fact that r-base now installs fine. Reply ↓ 2. robinattynemouthrobin great there seems to be many R packages now – I’m in the process of writting a introductory stats book using mainly R commander and that even works on the raspberry pi – also note that ggplots is available – be great for teach kids etc. Reply ↓
{"url":"http://darrenjw2.wordpress.com/2012/07/21/r-on-the-raspberry-pi/","timestamp":"2014-04-19T19:35:22Z","content_type":null,"content_length":"52596","record_id":"<urn:uuid:370b66d2-4520-496e-9ad3-000f058c8bda>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
trigonometry poems Author Message rs-bxe.cam Posted: Saturday 30th of Dec 18:47 Hi friends! Are there any online resources to learn about the concepts of trigonometry poems? I didn’t really get the chance to cover the entire syllabus as yet. This is probably why I face problems while solving questions. nxu Posted: Sunday 31st of Dec 09:04 Hi, I think that I can to help you out. Have you ever tried out a program to help you with your math assignments? a while ago I was also stuck on a similar issues like you, and then I came across Algebrator. It helped me a lot with trigonometry poems and other algebra problems, so since then I always rely on its help! My math grades got better because of From: Siberia, Russian Federation malhus_pitruh Posted: Monday 01st of Jan 12:45 Even I made use of Algebrator to understand the basic principles of College Algebra a month back. It is worth putting the money in for the the purchase of Algebrator since it offers qualitytutoring in Pre Algebra and is available at a nominal rate. From: Girona, Catalunya (Spain) Thannium Posted: Wednesday 03rd of Jan 09:26 Thanks for the help. Could you please direct me to the website where I can purchase the product ? SjberAliem Posted: Thursday 04th of Jan 10:09 Here http://www.algebrahomework.org/polynomials.html. Please let me know if this has been of any help to you. From: Macintosh HD Jrobhic Posted: Saturday 06th of Jan 09:21 I am a regular user of Algebrator. It not only helps me finish my homework faster, the detailed explanations provided makes understanding the concepts easier. I strongly advise using it to help improve problem solving skills. From: Chattanooga,
{"url":"http://www.algebrahomework.org/algebrahomework/adding-functions/trigonometry-poems.html","timestamp":"2014-04-17T03:55:04Z","content_type":null,"content_length":"49938","record_id":"<urn:uuid:2c169a1b-2236-4067-bc77-17c33ffca01e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: • one year ago • one year ago Best Response You've already chosen the best response. tell me question Best Response You've already chosen the best response. Derive an algebraic expression for the incremental output voltage vout in terms of parameters K, VT, and RS, the input bias voltage VIN, and the incremental input voltage vin. Hint: Remember that you computed the total output voltage vOUT in H5P2. Also remember that vout=vin⋅∂vOUT∂vIN|vIN=VIN. So, in the space provided below, write your algebraic expression for vout: Best Response You've already chosen the best response. Best Response You've already chosen the best response. if correct please click best response Best Response You've already chosen the best response. sure bro ... kindly help me in one more question part i send it to u ... H5P2 SOURCE FOLLOWER LARGE SIGNAL que: Write an algebraic expression for iDS in terms of K, vIN, vOUT, and VT. Remember, algebraic expressions are case sensitive. and que: Write an algebraic expression for vOUT in terms of iDS and RS. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. please click best response Best Response You've already chosen the best response. Best Response You've already chosen the best response. bro no problem u welcome u help me alot its me fully duty to say ur response is best. kindly just help me in last 1 que : H5P3:If we have an incremental input of vi=0.002V what is the incremental output vo (in Volts)? Best Response You've already chosen the best response. Best Response You've already chosen the best response. correct or not Best Response You've already chosen the best response. correct bro thnx a lot bro... really nice u help me alot ... :) Best Response You've already chosen the best response. my pleasure Best Response You've already chosen the best response. Given vi is .0009. Why do you need it for .002 Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/507a8e65e4b035d9313598f8","timestamp":"2014-04-18T20:44:32Z","content_type":null,"content_length":"61177","record_id":"<urn:uuid:f56f156b-1ed1-4d76-a7c3-386a330ed67c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
de Rham space Synthetic differential geometry Discrete and concrete objects In a context of synthetic differential geometry or D-geometry, the de Rham space $dR(X)$ of a space $X$ is the quotient of $X$ that identifies infinitesimally close points. It is the coreduced reflection of $X$. On $Rings^{op}$ Let CRing be the category of commutative rings. For $R \in CRing$, write $I \in R$ for the nilradical of $R$, the ideal consisting of the nilpotent elements. The canonical projection $R \to R/I$ to the quotient y the ideal corresponds in the opposite category $Ring^{op}$ to the inclusion $Spec (R/I) \to Spec R$ of the reduced part of $Spec R$. For $X \in PSh(Ring^{op})$ a presheaf on $Ring^{op}$ (for instance a scheme), its de Rham space $X_{dR}$ is the presheaf defined by $X_{dR} : Spec R \mapsto X\left(Spec \left(R/I\right)\right) \,.$ As a quotient If $X \in PSh(Ring^{op})$ is a smooth scheme then the canonical morphism $X \to X_{dR}$ is an epimorphism (hence an epimorphism over each $Spec R$) and therefore in this case $X_{dR}$ is the quotient of the relation “being infinitesimally close” between points of $X$: we have that $X_ {dR}$ is the coequalizer $X_{dR} = \lim_\to \left( X^{inf} \stackrel{\to}{\to} X \right) \,,$ of the two projections out of the formal neighbourhood of the diagonal. Relation to jet bundles For $E \to X$ a bundle over $X$, its direct image under base change along the projection map $X \longrightarrow \Pi_{inf} X$ yields its jet bundle. See there for more. Relation to formally étale morphism of schemes Crystalline site For $X : Ring \to Set$ a scheme, the big site $Ring^{op}/X_{dR}$ of $X_{dR}$, is the crystaline site of $X$. Grothendieck connection Morphisms $X_{dR} \to Mod$ encode flat higher connections: local systems. Accordingly, descent for deRham spaces – sometimes called deRham descent encodes flat 1-connections. This is described at Grothendieck connection, The category of D-modules on a space is equivalent to that of quasicoherent sheaves on the corresponding deRham space. Accordingly, quasicoherent $\infty$-stacks on the full $\Pi^{inf}(X)$ encode a higher categorical version of this, as discussed at ∞-vector bundle. Infinitesimal path $\infty$-groupoids The term de Rham space or de Rham stack apparently goes back to • Carlos Simpson, Homotopy over the complex numbers and generalized de Rham cohomology Moduli of VectorBundles, M. Maruyama, ed., Dekker (1996), 229-263. A review of the constructions is on the first two pages of • Jacob Lurie, Notes on crystals and algebraic $\mathcal{D}$-modules (pdf) The deRham space construction on spaces (schemes) is described in section 3, p. 7 which goes on to assert the existence of its derived functor on the homotopy category $Ho Sh_\infty(C)$ of ∞-stacks in proposition 3.3. on the same page. The characterization of formally smooth scheme as above is also on that page. See also online comments by David Ben-Zvi here and here on the $n$Café. and here on MO.
{"url":"http://ncatlab.org/nlab/show/de%20Rham%20space","timestamp":"2014-04-18T13:15:57Z","content_type":null,"content_length":"56299","record_id":"<urn:uuid:d3b9d49d-243b-4b77-8abd-29c9acf95553>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by cate Total # Posts: 25 Response Evaluation (Urgent! Please, help me!) I think it sounds pretty good, though I have never watched the movie and although your article is not necessarily convincing I'm not about to start nitpicking. If you are in tenth grade or under I'd say it sounds pretty good. social science formulate a 3 stage hypothesis from "How have Asia's achievments broadened our understanding of the influence of our neighbours ?" 1. The great wall od china is a famous tourist destination toAustralians as it helps us understand the Chinese culture and History A popular retail store knows that the purchase amounts by its customers is a random variable that follows a normal distribution with a mean of $30 and a standard deviation of $9. What is the probability that a randomly selected customer will spend $20 or more at this store? Mothers Against Drunk Driving (MADD) is a very visible group whose main focus is to educate the public about the harm caused by drunk drivers. A study was recently done that emphasized the problem we all face with drinking and driving. Five hundred accidents that occurred on a... An ice cream vendor sells three flavors: chocolate, strawberry, and vanilla. Forty five percent of the sales are chocolate, while 30% are strawberry, with the rest vanilla flavored. Sales are by the cone or the cup. The percentages of cones sales for chocolate, strawberry, and... The following data were obtained from a survey of college students. The variable X represents the number of non-assigned books read during the past six months. x 0 1 2 3 4 5 P (X=x) 0.20 0.25 0.20 0.15 0.10 0.10 Find P( X > 1) Suppose that 50 identical batteries are being tested. After 8 hours of continuous use, assume that a given battery is still operating with a probability of 0.70 and has failed with a probability of 0.30. What is the probability that fewer than 40 batteries will last at least 8... What is the pH at the equivalence point in the titration of 10.0 mL of 0.6 M HZ with 0.200 M NaOH? Ka = 7.0 × 10−6 for HZ. Dang it. I was (kinda) close. But thanks!!! Solve for the variable: 17d-16=-43d-4 I got d= 1/3 but I'm not sure.... probably wrong A $125,000 John Deere combine depreciates at 11% per year. How much will it be worth in 18 years? it cost $350 to carpet an 8ft. by 10 ft room. at this rate, how many dollars would it cost to put the same carpeting in a 24 ft. by 30 ft. room? What is the missing dimension in this volume statement? 24cm3(3 squared)=3cm x____ cm x 4cm I'm thinking 84%, because 20% of 25 is 5, 80% being 20 out of 25, and 4% being 1, 20 plus 1 equals 21, and 80 plus 4 equals 84% so close... I must have messed up somewhere. Thanks! I did that, and got 368.0, and then made that 3680 and divided that by 66 and got 57.57575757... thank you! i just couldn't figure it out, like, i was just sitting there and i kept trying to find a common denominator for both of them... don't know why... i think i had a brain fart moment. Cassie successfully harvested 7/12 of her crop, and Robert successfully harvested 58% of his crop. Who successfully harvested the larger portion of his or her crop? What can photosynthesis be affected by? I think like, how much water a plant receives, the concentration of carbon dioxide in the air, and teh weather (how much sunlight). Is this right? You and a friend are playing a game of squirt-gun tag in a maze. Suddenly you see your friend's image in a small planar mirror. You take a shot over the barrier in front of you and find that your friend is just at the end of the 7.0 m range of your squirt gun. If you are 4... Please graph the following function: y = tan (2(theta) + (pi)) + 1 12th grade what are the rhtorical devices used in Bernice Bobs Her Hair and what effects did they create? 7th grade How do I find the muscial notes for doe, rae, me, fa, sol...on a staff? In explanatory writing what are spatial words. (vs. time order words)? how do kenetic and potential energy effect a basic pendulum? How does gravity effect it? Since this is not my area of expertise, I searched Google under the key words "pendulum kinetic potential energy gravity" to get these possible sources: http://galileo.phys.virgi...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=cate","timestamp":"2014-04-19T17:10:58Z","content_type":null,"content_length":"11551","record_id":"<urn:uuid:c4655c67-ef4e-4a90-af93-01c80df52b7c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
Fourier transform of typical signals • Impulse As shown above, • Unit Step As shown above, • Constant As shown above, This is a useful formula. • Complex exponential The spectrum of a complex exponential can be found from the above due to the frequency shift property: • Sinusoids Similarly, we have • Exponential decay - right-sided • Exponential decay - left-sided Due to the time reversal property, we also have (for • Exponential decay - two-sided As the two-sided exponential decay is the sum of the right and left-sided exponential decays, its spectrum of • Comb function The comb function is defined as Its Fourier series coefficient is: and its spectrum is: We see that the spectrum of an impulse train with time interval Therefore we have this equation which can be compared with the equation in continuous case: • Square wave A square wave or rectangular function of width and due to linearity, its Fourier spectrum is the difference between the two corresponding spectra: • Sinc function The spectrum of an ideal low-pass filter is and its impulse response can be found by inverse Fourier transform: • Triangle function Alternatively, as the triangle function is the convolution of two square functions ( • Gaussian function The Fourier transform of a Gaussian or bell-shaped function Here we have used the identity We see that the Fourier transform of a bell-shaped function is also a bell-shaped function: Note that the area underneath either (Note that if and let
{"url":"http://fourier.eng.hmc.edu/e101/lectures/handout3/node3.html","timestamp":"2014-04-17T21:22:37Z","content_type":null,"content_length":"26629","record_id":"<urn:uuid:a6e72055-270f-4403-9cf7-19e0dc4754af>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Articles, Presentations, and Tutorials Have your own presentation, article or informative blog post on MathJax? Let us know and we can add it here. Accessible Pages with MathJax by Neil Soiffer Design Science, Inc. MathJax: a JavaScript-based engine for including TeX and MathML in HTML by Davide P. Cervone 2010 Joint Mathematics Meetings in San Francisco MathType, Math Markup, and the Goal of Cut and Paste by Robert Miner 2010 Joint Mathematics Meetings in San Francisco How to integrate MathJax into your web pages by Casey W. Stark
{"url":"http://www.mathjax.org/resources/articles-and-presentations/","timestamp":"2014-04-20T15:52:00Z","content_type":null,"content_length":"28439","record_id":"<urn:uuid:fe0f5a26-150e-4691-8c80-4573989b144a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Gail Englert: Learning Multiplication Tables Learning Multiplication Tables by Gail Englert A response to the question: Is there a tried and tested "best" way of learning multiplication? Back to Index of Elementary Thoughts I am not a teacher by profession, but having a child in grade four and trying to teach him ways to memorize his multiplication tables is proving difficult. It takes him longer than most of his peers to grasp most mathematical problems and memorizing times tables is very difficult for him. His teacher has them do timed multiplication graphs but he is just adding, not learning the answers - such as in the row where he has to go from 6 x 1 through to 6 x 12, where he is just adding 6 onto each answer, so when asked what is 6 x 6 for example he wouldn't have a clue. My question therefore, is: is there a tried and tested "best" way of learning multiplication? Dear Susan, I teach 4th graders, and I don't think there is any one "best" way for learning the facts. Instead, I use several approaches, and students pick up and use the ones that work best for them. We start by figuring out which ones are usually pretty easy for "us" to learn... like the doubles (2 x __ ). 1 x __ and 0 x __ are also fairly easy, and the 5's and 10's, since students have been skip counting with them for a while already. We do look for patterns in the multiples, and students make discoveries like "every other one is odd/even," or the ones digit is always a zero or a five." We try to figure out why so we can use the notion to predict the outcome for other problems. Then I use some visual aids to help my students "see" the factors combining into products. We make sets (4 sets of 6, 6 sets of 4) and "discover" that it doesn't matter which order we use. We construct rectangular arrays, and find out that each multiplication problem looks like a rectangle. We predict the shape of the rectangle (tall and skinny, for 6x1, short and fat for 1x6), and look for other ways to make the same product (2x3). We even make patterns by drawing sets of lines that cross each other: 3x6 is three vertical lines and 6 horizontal lines. Then we count the number of intersections. We note that "special" factors will form squares, like 4x4, and 9x9. This all takes time. Some students need to have those manipulatives right in their hands, and to draw the sets of dots, or crossing lines, for 6x7. The important thing is to help students gain a sense of the products, so they will see how much more quickly the products grow, compared to the sums. Eventually the "playing" around will pay off. -Gail, for the T2T service Join a discussion of this topic in T2T.
{"url":"http://mathforum.org/t2t/faq/gail/multiply.html","timestamp":"2014-04-19T20:28:18Z","content_type":null,"content_length":"6696","record_id":"<urn:uuid:1d30621d-3fe8-481c-afb0-65366405fcc7>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Department, Princeton University Massimo Taronna - Scuola Normale Superiore - Higher-Spin Interactions: three-point functions and beyond Taking String Theory as a ``theoretical laboratory'', I will present handy expressions for bosonic and fermionic (SUSY) higher-spin Noether currents. I will also describe a class of non-local higher-spin Lagrangian couplings that are generically required by the Noether procedure starting from four-points. The construction clarifies the origin of old problems for these systems and links String Theory to some aspects of Field Theory that go beyond its conventional low energy limit. I will finally discuss how the extension of these results to (A)dS brings about the emergence of minimal-like couplings from higher-derivative ones. Location: PCTS Seminar Room Date/Time: 11/01/11 at 1:00 pm - 11/01/11 at 2:30 pm Bring your own lunch at 12:30 p.m. Category: High Energy Theory Seminar Department: Physics
{"url":"http://www.princeton.edu/physics/events_archive/viewevent.xml?id=267","timestamp":"2014-04-16T18:20:56Z","content_type":null,"content_length":"10351","record_id":"<urn:uuid:ffaa8289-d2a2-4e55-9017-8d7637a3e2c2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Locking: ~ Restriction £[ Resolution Results 1 - 10 of 20 - Artificial Intelligence , 2000 "... In this paper we provide algorithms for reasoning with partitions of related logical axioms in propositional and first-order logic (FOL). We also provide a greedy algorithm that automatically decomposes a set of logical axioms into partitions. Our motivation is two-fold. First, we are concerned with ..." Cited by 52 (8 self) Add to MetaCart In this paper we provide algorithms for reasoning with partitions of related logical axioms in propositional and first-order logic (FOL). We also provide a greedy algorithm that automatically decomposes a set of logical axioms into partitions. Our motivation is two-fold. First, we are concerned with how to reason e#ectively with multiple knowledge bases that have overlap in content. Second, we are concerned with improving the e#ciency of reasoning over a set of logical axioms by partitioning the set with respect to some detectable structure, and reasoning over individual partitions. Many of the reasoning procedures we present are based on the idea of passing messages between partitions. We present algorithms for reasoning using forward message-passing and using backward message-passing with partitions of logical axioms. Associated with each partition is a reasoning procedure. We characterize a class of reasoning procedures that ensures completeness and soundness of our message-passing ... , 2001 "... Motivated by the problem of query answering over multiple structured commonsense theories, we exploit graph-based techniques to improve the efficiency of theorem proving for structured theories. Theories are organized into subtheories that are minimally connected by the literals they share. We prese ..." Cited by 26 (5 self) Add to MetaCart Motivated by the problem of query answering over multiple structured commonsense theories, we exploit graph-based techniques to improve the efficiency of theorem proving for structured theories. Theories are organized into subtheories that are minimally connected by the literals they share. We present message-passing algorithms that reason over these theories using consequence finding, specializing our algorithms for the case of first-order resolution, and for batch and concurrent theorem proving. We provide an algorithm that restricts the interaction between subtheories by exploiting the polarity of literals. We attempt to minimize the reasoning within each individual partition by exploiting existing algorithms for focused incremental and general consequence finding. Finally, we propose an algorithm that compiles each subtheory into one in a reduced sublanguage. We have proven the soundness and completeness of all of these algorithms. 1 , 1991 "... A general theory of deduction systems is presented. The theory is illustrated with deduction systems based on the resolution calculus, in particular with clause graphs. This theory distinguishes four constituents of a deduction system: ffl the logic, which establishes a notion of semantic entailmen ..." Cited by 19 (0 self) Add to MetaCart A general theory of deduction systems is presented. The theory is illustrated with deduction systems based on the resolution calculus, in particular with clause graphs. This theory distinguishes four constituents of a deduction system: ffl the logic, which establishes a notion of semantic entailment; ffl the calculus, whose rules of inference provide the syntactic counterpart of entailment; ffl the logical state transition system, which determines the representation of formulae or sets of formulae together with their interrelationships, and also may allow additional operations reducing the search space; ffl the control, which comprises the criteria used to choose the most promising from among all applicable inference steps. Much of the standard material on resolution is presented in this framework. For the last two levels many alternatives are discussed. Appropriately adjusted notions of soundness, completeness, confluence, and Noetherianness are introduced in order to - In Proc. CTRS-90 , 1991 "... A new positive-unit theorem-proving procedure for equational Horn clauses is presented. It uses a term ordering to restrict paxamodulation to potentially maximal sides of equations. Completeness is shown using proof orderings. 1. ..." Cited by 16 (0 self) Add to MetaCart A new positive-unit theorem-proving procedure for equational Horn clauses is presented. It uses a term ordering to restrict paxamodulation to potentially maximal sides of equations. Completeness is shown using proof orderings. 1. , 1991 "... Two new theorem-proving procedures for equational Horn clauses are presented. The largest literal is selected for paramodulation in both strategies, except that one method treats positive literals as larger than negative ones and results in a unit strategy. Both use term orderings to restrict paramo ..." Cited by 11 (3 self) Add to MetaCart Two new theorem-proving procedures for equational Horn clauses are presented. The largest literal is selected for paramodulation in both strategies, except that one method treats positive literals as larger than negative ones and results in a unit strategy. Both use term orderings to restrict paramodulation to potentially maximal sides of equations and to increase the amount of allowable simplification (demodulation). Completeness is shown using proof orderings. , 1989 "... This thesis explores the relative complexity of proofs produced by the automatic theorem proving procedures of analytic tableaux, linear resolution, the connection method, tree resolution and the Davis-Putnam procedure. It is shown that tree resolution simulates the improved tableau procedure and th ..." Cited by 9 (0 self) Add to MetaCart This thesis explores the relative complexity of proofs produced by the automatic theorem proving procedures of analytic tableaux, linear resolution, the connection method, tree resolution and the Davis-Putnam procedure. It is shown that tree resolution simulates the improved tableau procedure and that SL-resolution and the connection method are equivalent to restrictions of the improved tableau method. The theorem by Tseitin that the Davis-Putnam Procedure cannot be simulated by tree resolution is given an explicit and simplified proof. The hard examples for tree resolution are contradictions constructed from simple Tseitin graphs. - BioSystems , 1990 "... We describe here a prover PC that normally acts as an ordinary theorem prover, but which returns a "precondition" when it is unable to prove the given formula. If F is the formula attempted to be proved and PC returns the precondition Q, then (Q \Gamma! F ) is a theorem (that PC can prove). This pr ..." Cited by 8 (0 self) Add to MetaCart We describe here a prover PC that normally acts as an ordinary theorem prover, but which returns a "precondition" when it is unable to prove the given formula. If F is the formula attempted to be proved and PC returns the precondition Q, then (Q \Gamma! F ) is a theorem (that PC can prove). This prover, PC, uses a Proof-Plan. In its simplest mode, when there is no proof-plan, it acts like ordinary Abduction. We show here how this method can be used to derive certain proofs by analogy. To do this, it uses a proof-plan from a given guiding proof to help construct the proof of a similar theorem, by "debugging" (automatically) that proof-plan. We show here the analogy proofs of a few simple example theorems and one hard pair, Ex4 and Ex4L. The given proof-plan for Ex4 is used by the system to prove automatically Ex4; and that same proof-plan is then used to prove Ex4L, during which the proof-plan is "debugged" (automatically). These two examples are similar to two other, more difficult, t... - Journal of Symbolic Computation , 2003 "... The guarded fragment is a fragment of first-order logic that has been introduced for two main reasons: First, to explain the good computational and logical behavior of propositional modal logics. Second, to serve as a breeding ground for well-behaved process logics. In this paper we give resolution- ..." Cited by 4 (2 self) Add to MetaCart The guarded fragment is a fragment of first-order logic that has been introduced for two main reasons: First, to explain the good computational and logical behavior of propositional modal logics. Second, to serve as a breeding ground for well-behaved process logics. In this paper we give resolution-based decision procedures for the guarded fragment and for the loosely guarded fragment (sometimes also called pairwise guarded fragment). By constructing an implementable decision procedure for the guarded fragment and for the loosely guarded fragment, we obtain an effective procedure for deciding modal logics that can be embedded into these fragments. The procedures have been implemented in the theorem prover Bliksem. 1. , 2000 "... We are interested in developing a methodology for integrating mechanized reasoning systems such as Theorem Provers, Computer Algebra Systems, and Model Checkers. Our approach is to provide a framework for specifying mechanized reasoning systems and to use specifications as a starting point for integ ..." Cited by 3 (1 self) Add to MetaCart We are interested in developing a methodology for integrating mechanized reasoning systems such as Theorem Provers, Computer Algebra Systems, and Model Checkers. Our approach is to provide a framework for specifying mechanized reasoning systems and to use specifications as a starting point for integration. We build on top of the work presented in Giunchiglia et al. (1994) which introduces the notion of Open Mechanized Reasoning Systems (OMRS) as a specification framework for integrating reasoning systems. An OMRS specification consists of three components: the logic component, the control component, and the interaction component. In this paper we focus on the control level. We propose to specify the control component by first adding control knowledge to the data structures representing the logic by means of annotations and then by specifying proof strategies via tactics. To show the adequacy of the approach we present and discuss a structured specification of constraint contextual rewriting as a set of cooperating specialized reasoning modules.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=299457","timestamp":"2014-04-16T20:47:21Z","content_type":null,"content_length":"35946","record_id":"<urn:uuid:88abc5df-d96a-4e24-bedf-9486a97b783e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 57 Rochelle borrowed $850 for 6 years at a simple interest rate of 8%. How much will Rochelle have to repay after the 6 years ? international relations I need help on writing a thesis statement on why this century will remain unpredictable I need it in either a realism liberalism contructism referring to international politic relations what compromises did Macdonald make at the conferences? How do you think Canada would have been different if Macdonald had succeeded on forming the strong national government he desired instead of a federation with strong provincial governments? in what ways did the conferences reflect Victorian value and beliefs? Describe what a constitutional conference wold look like today, and who would be included. How do i find a fraction between each pair of fractions ex. 1/8 and 1/4 what would happen if there were no negative feedbsck control on growth and thyroid hormone Asworth college which of the sentences uses modifiers correctly? a. Kurt and his hockey teamm ates nearly cooked two hundred pancakes for the breakfast fundraising event. b. Rudolph enjoys eating curries from the indian delicatessen with a lot of spices. C. on the phone the telemarketer persu... AP Physics B If the system shown in Figure P8.29 is set in rotation about each of the axes mentioned in problem 29, find the torque that will produce an angular acceleration on 1.5 rad/s2 in each case. if it can be assumed that 39.5% of all telephone #'s are unlisted, and that all telephone #'s are independent,find the probability that at least 1 of 5 telephone numbers are listed. I think that the first one (car size) is a positive relationship -- although I am having troiuble understanding why. I think the temp outside is negative. Amt of pop and gpa -- no association? Fo each pair of variables, is the association likely to be positive or negative, or no association and why Car size in square feet and cost of car($) Temperature outside in the winter and propane consumed to heat house Amt of soda pop consumed and GPA Suppose that 65% of all California State students own a motorcycle. A random sample of 100 CS students is surveyed and 61% indicate that they own a motorcycle. a. What is the value of the population percent given to you in the problem? b. What is the value of the sample percen... Suppose that 65% of all California State students own a motorcycle. A random sample of 100 CS students is surveyed and 61% indicate that they own a motorcycle. a. What is the value of the population percent given to you in the problem? b. What is the value of the sample percen... Fawnest, meaning to be the most fawn-like. That's the best I could do. I hope that is the right answer. If you are using the word fawn as an adjective, it would be grammatically correct. The table shows the minimum wage for three different years. Year 1940, 1968,1997 wages for 1940 $.025, 1968 $1.60, 1997 $5.15 estimate the minimum wage in 1976 and compare it to the actual value of $2.30 and estimate when the minimum wages was $1.00. If the current trends cont... can you help me with my subtracting the answer is 7 algebra equation solving ((y+2)/(y^2-49))subtract ((y)/(y^2+6y-7)) subtract and simplify ,,,,,i know i need to find LCD which is y+7,and multiply both sides to get it that way i think answer is -6y-2/y+7....is this right??? algebras equations ((y+2)/(y^2-49))subtract ((y)/(y^2+6y-7)) The directions are simplify and subtract algebra equation solving (y+2 ) (y) --------- subtract ----------- (y^2-49) (y^2-6y+7) The directions are subtract and simplify.... Arrays and Expanded Algorithm This one is very hard Assume that an average SNMP response message is 100byte long. Assume that a manager sends 40 SNMP Get commands each second. What percentage of a 100 Mbps LAN link s capacity would the resulting response traffic represent? jerome is 73 inch talls what weight will keep his body mass index between 19 and 31? Your class decides to publish a calendar to raise money. The initial cost, regardless of the number of calendars printed, is $900. After the initial cost, each calendar costs $1.50 to produce. What is the minimum number of calendars your class must sell at $6 per calendar to m... Joe had eight sections of fence to paint. He painted 2/3 of each section in one hour. How many hours did it take him to paint all the section of the fence? It would be her and hers. She is not possesive. Accounting III What arguments can be advanced in favor of treating fixed manufacturing overhead costs as product costs? What arguments can be advanced in favor of treating fixed manufacturing overhead costs as period costs? Which arguments do you find to be the most valid? Explain. Assume that you start with $900 per month healthcare increase and add five percent(5%) per year for the two (2) remaining years of a three year contract. Show the cash value and roll-up value of the What is the connection between habit and moral character? What does it mean to be a person? Thank You Much! Have A Bless Day!! Thank you very much, this is what I came up with as well:Grouping is placing a group of people together by their race, origin, religion etc. stereotypes are usually based on assumption or generalization that someone makes on the characteristics of a particular group. Usually, ... What differentiates the act of grouping people from the act of stereotyping? Is a person more than a physical body? What is the mind? What is thought? What are your views on Free will and determinism? 1. The behavior of atoms is governed entirely by physical law. 2. Humans have free will. Do you accept both (1) and (2) Math 101 49 is 80% of Do you believe that the move from selling music on a physical product to selling it digitally represents a change that will affect other industries micro economics if a few large firms were broken down into a lot of smaller firms how would this effect the supply and demand substitutes would be like steak pork chops complements would be like ketchup or other things you eat with chicken like gravy a c.d. account or into stocks if a few large firms were broken down into a lot of smaller firms how would this effect the supply and demand in a graph? if you have 88 ft of fencing to enclose a rectangular plot but don't fence one side, find the length and width of the rectangle that will maximize the area How do you say, 1. all entrees served with soup 2. served with salad Thank you. social studies Would you please show me a website that shows an actual answer to a dbq? I would like to see a model answer. I am in eigth grade. Thank you. 5th grade WHAT IS MY RULE? 3 90 1 4 120 3,000 hca 210 measuring Quality Why do we have an electricity crisis? What will happen if we don't save electricity? 8th grade You have to do maths every single day. It's like listening to a song and before you know you know all the words of the song. The same with maths. The more you practice the better you will do social studies Which battle was the most important one of the revolutionary war, and why? social studies What are Obama's electoral votes and McCain's electoral votes right now? Thank youu. social studies Thank you. social studies How were kids educated in the Southern Colonies? Thank you. I am in seventh grade. I need to draw the structure of a lipid, protein, carbohydrate. Is there a simple way to draw these? Thank you. Show that the equation (1) divided by (x+1) - (x)divided by (x-2)=0 has no real roots Well, to begin, start with 1/(x+1-x). The x's cancel out because they are opposite signs, so now you have 1/1, or just 1. Then, you are dividing 1 by (x-2). In order for an expression to ... I have this question too! and i'm pretty sure that this is the answer: It binds to it and changes its shape, just as the interaction of an enzyme and its substrate.
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Rochelle","timestamp":"2014-04-18T14:04:02Z","content_type":null,"content_length":"17217","record_id":"<urn:uuid:cf18a062-1c92-49cf-9a44-a0b3f5e7d887>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: A grocery clerk sets up a display of oranges in the form of a triangle using 10 oranges at the base and 1 at the top (Only part of the display is shown.). How many oranges are there in the display? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f8ef99ee4b000310fad3fa9","timestamp":"2014-04-17T19:13:51Z","content_type":null,"content_length":"86457","record_id":"<urn:uuid:3cd02ebb-483b-42df-b52d-ba8280710274>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Interesting Math History - Origin of the Concept of Zero Interesting Math History Question : Where did the concept of Zero originate ? Answer : The concept of Zero is attributed to the Hindus. The Hindus were also the first to use zero in the way it is used today. Some symbol was required in positional number systems to mark the place of a power of the base not actually occurring. This was indicated by the Hindus by a small circle, which was called Shunya, the Sanskrit word for vacant. This was translated into the Arabic Sifr about 800 A.D. Subsequent changes have given us the word zero. In Babylone by middle of the 2nd millenium BC, the lack of a positional value (or zero) was indicated by a space between sexagesimal numerals. In 498 AD the Indian mathematician and astronomer Aryabhatta stated that Sthanam sthanam dasha gunam means place to place in ten times in value, which may be the origin of the modern decimal-based place value notation. Arabs spread the Hindu decimal zero and its new mathematics to Europe in the Middle Ages.
{"url":"http://www.calculatoredge.com/math/mathhistory/historyans4.htm","timestamp":"2014-04-18T18:57:34Z","content_type":null,"content_length":"9950","record_id":"<urn:uuid:78186092-44c2-4796-b418-535776ab1e26>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 914 If Betsy leaves Town A traveling due east for 6 miles to Town B, then turns due north and travels 9 miles to Town C, how far will her trip home be to Town A as she travels directly southwest? If Betsy leaves Town A traveling due east for 6 miles to Town B, then turns due north and travels 9 miles to Town C, how far will her trip home be to Town A as she travels directly southwest? its not right. i checked A rocket of mass 4.50 105 kg is in flight. its thrust is directed at an angle of 64.0° above the horizontal and has a magnitude of 7.50 106 N. Find the magnitude and direction of the rockets acceleration. Give the direction as an angle above the horizonal. A 1000 kilogram car traveling at 25 mph hits a 60 kg person. How much force is exerted on the person? A 1000 kilogram car traveling at 25 mph hits a 60 kg person. How much force is exerted on the person? Name 3 different ways in which these numbers can be classified. 11 4.5 50 8 16 3/5 100 20 9 8.6 14 1/2 Name some different ways in which these numbers can be classified. 11,4.5,50,8,16,3/5,100,20,9,8.6,141/2 Physics & Calc Derive the stationary target equations: v1 = [(m1-m2)/(m1+m2)]v1 v2 = [(2m1)/(m1 + m2)] v1 A woman stands on a scale in a moving elevator. Her mass is 71.0 kg, and the combined mass of the elevator and scale is an additional 815 kg. Starting from rest, the elevator accelerates upward. During the acceleration, the hoisting cable applies a force of 9440 N. What does t... The weight of an object is the same on two different planets. The mass of planet A is only fifteen percent that of planet B. Find rA/rB, which is the ratio of the radii of the planets. A rock of mass 48 kg accidentally breaks loose from the edge of a cliff and falls straight down. The magnitude of the air resistance that opposes its downward motion is 275 N. What is the magnitude of the acceleration of the rock? Water is moving with a speed of 5.0 m/s through a pipe with a cross-sectional area of 4.2 cm2. The water gradually descends 8.2 m as the pipe increases to 9.0 cm2. (a) What is the speed at the lower level? (b) If the pressure at the upper level is 1.8 × 105 Pa, what is t... physics please help I used the same equation as you did for average force, but when I divided change in momentum by time of impact I got 444. averageforce=changemomentum/timeimpact =(1.80)/(4.05*10^3) = 444.4 If n ÷ d gives a display of 16.8, does d\n? Why Physics - Standing Waves a) It's a straight line because it is a function os the form y=mx+b b) The slope of the line if m=0.5 c) The intercept of ln v is -0.5ln(LMD) and the intercept of the ln T axis is 0.5 ln (LMD) What is the molarity of NaOCL in a solution of bleach if 10.00mL requires 22.35mL of 0.0291 M Na2S2O4 to reach the endpoint. At a time when mining asteroids has become feasible, astronauts have connected a line between their 3640 kg space tug and a 6500 kg asteroid. Using their ship's engine, they pull on the asteroid with a force of 490 N. Initially the tug and the asteroid are at rest, 500 m a... An office window has dimensions 3.6 m by 2.5 m. As a result of the passage of a storm, the outside air pressure drops to 0.925 atm, but inside the pressure is held at 1.0 atm. What net force pushes out on the window? 5th grade math 3x=x with x= the quantity the brother has Apply T=T and use uk an us but u must have uk biger than the force How fast must a 144 g baseball travel in order to have a de Broglie wavelength that is equal to that of an x-ray photon with = 100. pm? Calculate the energy difference (E) for the transition from n = 1 to n = 6 energy levels of hydrogen per 1 mol of H atoms. (Report your answer to at least 3 significant figures.) Suppose the ΔGof, ΔHof, and ΔSo are available and valid at 298 K. Which equation(s) could be used to calculate the change in Gibbs Energy if all product and reactant concentrations (pressures) are 1 M (1 atm) and the temperature is 298 K? Choose all that apply. ... how quickly must a 54.7 g tennis ball travel i order to have a de Broglie wavelength that is equal to a photon of green light? Is this a simile or a metaphor The rain felt like small kisses on Rosemary's face, When 32.0 mL of 0.560 M H2SO4 is added to 32.0 mL of 1.12 M KOH in a coffee-cup calorimeter at 23.50°C, the temperature rises to 30.17°C. Calculate H of this reaction per mole of H2SO4 and KOH reacted. (Assume that the total volume is the sum of the individual volumes ... A 30.5 g sample of an alloy at 93.7°C is placed into 50.1 g water at 23.4°C in an insulated coffee cup. The heat capacity of the coffee cup (without the water) is 9.2 J/K. If the final temperature of the system is 31.1°C, what is the specific heat capacity of the a... The only force acting on a 3.0 kg body as it moves along the positive x axis has an x component Fx = -9x N, where x is in meters. The velocity of the body at x = 2.3 m is 9.7 m/s. (a) What is the velocity of the body at x = 4.4 m? (b) At what positive value of x will the body ... A 26.5 g sample of ethylene glycol, a car radiator coolant, loses 675 J of heat. What was the initial temperature of ethylene glycol if the final temperature is 32.5°C (c of ethylene glycol = 2.42 J/ gK)? Please help me, I don't even know where to begin!! What is the enthalpy change when 17.0 g of water is cooled from 36.0°C to 6.40°C? 3 M HCl solution was used to generate CO2 from the CaCO3 . What will be the effect on the value reported for the molar volume of CO2 if 6 M HCl was used instead (too high, too low, no difference)? Two cars, A and B, are traveling in the same direction, although car A is 174 m behind car B. The speed of A is 25.6 m/s, and the speed of B is 18.6 m/s. How much time does it take for A to catch B? A mixture of gases collected over water at 14C has a total pressure of 1.198 atm and occupies 72 mL. How many grams of water escaped into the vapor phase? Spanish-SraJMcGin-please check Posted by Megan on Wednesday, October 13, 2010 at 5:08pm. I just reposted this from the earlier post- I have two questions about translation. If asked this question, how do I answer it. Cual es tu numero de telefono? I know it is asking my phone number which is 447-7161 but I ... Spanish-SraJMcGin-Please check Hola, I know about the accent marks-I use spanish typeit and do my work in there when I have to type it-sorry about not doing it here One question, I can only use the words my teacher taught so far in Spanish and I didn't learn ir a nadar for the second one. Can I just put... No numero de telfono-doesn't that say no number of telephone is .... instead of my phone number is .... For what I like to do on Sat. can I say El sabado me gusta.... I have two questions about translation. If asked this question, how do I answer it. Cual es tu numero de telefono? I know it is asking my phone number which is 447-7161 but I don't know how to put it in sentence form. The next question is Que te gusta hacer el sabado? I wa... Suppose the water near the top of Niagara Falls has a horizontal speed of 2.7 m/s just before it cascades over the edge of the falls. At what vertical distance below the edge does the velocity vector of the water point downward at a 72° angle below the horizontal? How to do Excel Spreadsheet- Formulas? I have to do an excel spreadsheet for my computer class. Problem is my professor didn't at all explain how to do it. What formulas am I supposed to use! Please Help. The info he gave us are : Cash Sales: 56%, Rec. Last month: 28%, Rec... Algebra-Please help-I'm desperate to understand Posted by Megan on Tuesday, October 12, 2010 at 3:44pm. I have a question if someone could help, please explain it. Given b = -1 and h = -1, what is the equation of the graph if the parent function of the radical is y = square root (x) The x should be under a square root sign.... Algebra-drwls please check! Hi, no I just checked this-all they wanted was what a (-b) qunatity and a negative (h) quantity would do to the parent function- like how it would be written If you had the parent function y = square root of (x)-how a negative b and negative h would affect it Algebra-drwls please check! Posted by Megan on Tuesday, October 12, 2010 at 3:44pm. I have a question if someone could help, please explain it. Given b = -1 and h = -1, what is the equation of the graph if the parent function is y = sqrt (x) Answers: a.y = sqrt(x-1) b.y = sqrt (-x+1) c.y = - sqrt (-x-1) ... Algebra drwls please check Can you please check my problem Thank you I have a question if someone could help, please explain it. Given b = -1 and h = -1, what is the equation of the graph if the parent function is y = sqrt (x) Answers: a.y = sqrt(x-1) b.y = sqrt (-x+1) c.y = - sqrt (-x-1) d.y = sqrt(-x-1) I think it is "b" the measure of the supplement of an angle is 40 more than two times the measure of the complement of the angle find the measure of the angle 19. Burning coal and oil in a power plant produces pollutants such as sulfur dioxide, SO2. The sulfur-containing compound can be removed from other waste gases, however, by the following reaction: 2 SO2(g) + 2 CaCO3(s) + O2(g) 2 CaSO4(s) + 2 CO2(g) [Molar masses: 64.0... Chemistry :( Disulfur dichloride, S2Cl2, is used to vulcanize rubber. It can be made by treating molten sulfur with gaseous chlorine: S8(l) + 4 Cl2(g) --> 4 S2Cl2(l) [Molar masses: 256.6 70.91 135.0] Starting with a mixture of 32.0 g of sulfur and 71.0 g of Cl2, which is the limiting re... Calculate the pH, to one decimal place, of the solution made by mixing 21.0 mL of 0.33 M HNO3 with 121.0 mL of 0.32 M formic acid (HCOOH) Experiment 10: Which Fuels provide the most heat? There are many possible sources of error in this experiment. List three that you can think of. Would each error have a large effect, a medium effect, or a small effect on the calculated heat content of a fuel? Also indicate whe... Experiment on Which Fuels Provide the Most Heat? Comparison of the Energy Content of Fuels. Suppose you put 50 mL of water in the can instead of 100 mL and heated it 40 degrees instead of 20 degrees. In what ways, if any, would this affect the results of the experiment? What w... The arrow goes after the C6H5CH2COOH OH- + C6H5CH2COOH H2O + C6H5CH2COO- What is the K for the reaction above in terms of Ka's, Kb's, Kw etc.? H- + C6H5CH2COOH = H2O + C6H5CH2COO- What is the K for the reaction above in terms of Ka's, Kb's, Kw etc.? At -8 degrees celseis what state is bromime in? What average force is needed to accelerate a 6.00-gram pellet from rest to 110 over a distance of 0.800 along the barrel of a rifle? 3. constant angular velocity When a rigid body is being rotated, every particle within the body will rotate in the same direction, while retaining its original shape. The omega(angular velocity) will remain the same. yes, it really does help. How did you decided that it absorbed heat? I though exothermic reactions released it. I don't understand how you decide what is increasing or decreasing and how you tell. 2 NO2(g) N2(g) + 2 O2(g) The ΔH° for the reaction above is -66.4 kJ. The system is initially at equilibrium. What happens if N2 is added to the reaction mixture at constant temperature and volume? The reaction absorbs energy. The reaction releases energy. [NO2] increa... H2(g) + I2(g) 2 HI(g) The forward reaction above is exothermic. At equilibrium, what happens if I2 is removed from the reaction mixture at constant temperature and volume? The reaction absorbs energy. The reaction releases energy. [H2] increases. [H2] decreases. [H2] remains c... Find the volume in liters of 0.452 M manganese(II) sulfate that contains 54.5 g of solute? What parallels, or differences, can you identify between Athenian expansionism in the age of the Peloponnesian War and the contemporary career of the American republic? pre cal Use synthetic division to show that x is a solution of the third-degree polynomial equation, and use the result to factor the polynomial completely. List all the real zeros of the function. x^3 - 28x - 48 = 0 Value of x = -4 Please help!!Thank you If your car decelerates at a rat of 4.5 m/s2 how long will it take for your car to stop????? is INH(CH3)3 an acid, base or neutral? math 5 grade which x = 300 a 2x30 x6 b 10x6x20 c15x2x10 d 2x3x15 math 5 grade leon rents a video for $5. he returns the video game 3 days late. the late fee is $1 for each day late.simplify the expression 5+3.1 to find out how much it costs leon to rent the video game If you are making 180.0 mL of 3.00% (m/m) CuSO45 H2O, how many grams of the solute should you use? I can't figure out what to do with the percent. Suppose you are given a list of solutions (all with nonvolatile solutes) and their concentrations. You pick out the one with the lowest freezing point. It should also have (pick all that apply): the lowest boiling point the highest osmotic pressure the lowest solvent vapor pre... Why do the stem changing forms for -zer and -cer have special endings?? Why do the stem changing forms for -zer and -cer have special endings?? A reastion consumes 5.0g of A and 6.0g of B. How many grams of C and D should be obtained? 1A + 3B -> 2C + 4D MC answers: 23 11 1 10 Not enough information to answer the question A reastion consumes 5.0g of A and 6.0g of B. How many grams of C and D should be obtained? 1A + 3B -> 2C + 4D MC answers: 23 11 1 10 Not enough information to answer the question I need to find the present progressive teses of the words lie, lay,awake The sum of my ones and tens digit is 10. My tens digit is greater than my ones digit. I am a prime number. What number am I? I think it is 19 11th grade Chemistry 1 If your weight is 120 pounds and your mass is 54 kilograms, how would those values change if you were on the moon? The gravitational force on the moon is 1/6 the gravitational force on Earth. 8th grade Initial observation- is the beginning of your observation and your observation of the whole subject. Independent Variable- is the variable that you change in an experiment. Dependent Variable- is the variable that changes due to the reaction of the independent variable. Keep t... H.) A farmer planted a field of Bt 123 corn and wants to estimate the yield in terms of bushels per acre. He counts 22 ears in 1/1000 of an acre. He determines that each ear has about 700 kernels on average. He also knows that a bushel contains about 90 000 kernels on average.... I need help with the interior and exterior plan of Chartres. The textbook makes its seem so complicated. I can't answer the question. BTW, I didn't post the entire assignment just the two questions I had trouble with. 1. Pretend that you are entering Chartres cathedral through the central portal. You want to get to the apse. What parts of the plan might you traverse in order to get there? Describe what you will see on the way. Hint: There are six architectural features grouped into three pa... A 1 m solution of MgCl2 will cause the decrease in the freezing point of water to be approximately ___ than that caused by a 1 m solution of sucrose a. 3x kleiner/ 3x smaller b. 2x kleiner / 2x smaller c. dieselfde / the same d. 2x groter / 2x greater e. 3x groter / 3x greater... How many calories of heat are needed to raise 10 g water from 20 to 21 degree celcius english 2 where can i find critiques for the poems "Sometimes Words are So Close" and "Ironing Their Clothes" by julia alvarez? Human Biology For a portion of Junior year next year, we're supposed to build a human skeleton out of anything we can find. So most people build them out of styrofoam or clay. Do you think building a skeleton out of twigs would be creative and/or relatively easy? Someone did a really cr... Factor the following. You will earn 5 points for each polynomial that is factored correctly. Please label your individual answers with a - f. a.) 4x2 - 25 b.) 3x2 + 6y c.) x2 - 7x + 10 d.) 2x2 - 9x - 18 e.) 2ax + 6bx + ay + 3by f.) 6x2 + 12x - 48 at the veterinarian's office terri learned that her dot weighed 4 times as much as her cat. together the pets weighed 40 lbs. how much did the dog weigh We just learned equations & inequalities today and im so confused on how to do it!will someone help me??? -5 + 2 < -2 I NEED TO KNOW ASAP what a Keq is and what the formula of it would be. I really don't know how to approach this problem and I really need help. Using the average molarity of your initial acetic acid solutions, the initial volumes, and the volume of NaOH added to reach the equivalence point, calculate the [C2H3O2-] concentration at the equival... An unknown compound analyzes to be 42.85% C, 7.20% H, and 49.95% N. A 0.915 g sample of the gas occupies 250. mL at 760. mm Hg and 100.°C. Type your answers using the format CH4 for CH4. Enter the elements in the order given. I have calculated the emperical formula to be C... To what Celsius temperature must 44.5 mL of methane gas at 43.0°C be changed so the volume will be 63.0 mL? Assume the pressure and the amount of gas are held constant. I have already tried T1V1=T2V2 NH3(g) + O2(g) NO2(g) + H2O(g) Consider the above unbalanced equation. What volume of NH3 at 950. mm Hg and 33.5°C is needed to react with 190. mL of O2 at 485 mm Hg and 143°C? =? mL To what Celsius temperature must 44.5 mL of methane gas at 43.0°C be changed so the volume will be 63.0 mL? Assume the pressure and the amount of gas are held constant. Which of the following increase the melting point of an ionic compound? decreased charge on the ions increased size of the ions increased charge on the ions decreased size of the ions increased amount of the compound What effect does lowering the pressure on the surface of water have on the boiling point? It increases the boiling point It decreases the boiling point the boiling point remains the same Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Megan&page=6","timestamp":"2014-04-17T02:23:23Z","content_type":null,"content_length":"31115","record_id":"<urn:uuid:caabf4d8-b57d-4ebe-99a2-7c84382534ac>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Villa Rica Prealgebra Tutor Find a Villa Rica Prealgebra Tutor ...I am also available for teaching Spanish, as well as almost any subject for lower grades. I am fun and outgoing and I like to make learning fun! In high school, I took a course preparing for educational fields. 40 Subjects: including prealgebra, English, reading, Spanish ...I have had the distinct pleasure and privilege of teaching children, adolescents, and adults in various settings. For example, I have taught in the following capacities: volunteer chaplain/ counselor, mental health counselor, pastor, administrator, GED prep, literacy council, homeschool, tutor, c... 13 Subjects: including prealgebra, reading, writing, GED ...Although my degree is in Music, I have also been well educated in physical science, biology, human anatomy and physiology, literature, history, and mathematics. While at Auburn I purposefully took many courses that were not required for my music degree in order to be as well rounded as possible.... 26 Subjects: including prealgebra, reading, English, writing ...I then went on to win numerous awards and fellowships that fully supported my education through my masters and PhD at MIT. While in graduate school I mentored inner city high school students from rough neighborhoods, which is where I really cultivated my passion for helping people. I enjoy tuto... 12 Subjects: including prealgebra, calculus, geometry, GRE ...I truly believe that every student is capable of learning. However, some students require one-on-one tutoring to accomplish their goals. Thus, I feel that I am uniquely qualified to help your student achieve his or her academic goals.I have a Bachelors of Arts degree in Chemistry and Mathematic... 57 Subjects: including prealgebra, reading, chemistry, GRE
{"url":"http://www.purplemath.com/Villa_Rica_prealgebra_tutors.php","timestamp":"2014-04-16T13:07:21Z","content_type":null,"content_length":"24022","record_id":"<urn:uuid:1a114167-94a5-4663-8d6d-212200d4ada1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Villa Rica Prealgebra Tutor Find a Villa Rica Prealgebra Tutor ...I am also available for teaching Spanish, as well as almost any subject for lower grades. I am fun and outgoing and I like to make learning fun! In high school, I took a course preparing for educational fields. 40 Subjects: including prealgebra, English, reading, Spanish ...I have had the distinct pleasure and privilege of teaching children, adolescents, and adults in various settings. For example, I have taught in the following capacities: volunteer chaplain/ counselor, mental health counselor, pastor, administrator, GED prep, literacy council, homeschool, tutor, c... 13 Subjects: including prealgebra, reading, writing, GED ...Although my degree is in Music, I have also been well educated in physical science, biology, human anatomy and physiology, literature, history, and mathematics. While at Auburn I purposefully took many courses that were not required for my music degree in order to be as well rounded as possible.... 26 Subjects: including prealgebra, reading, English, writing ...I then went on to win numerous awards and fellowships that fully supported my education through my masters and PhD at MIT. While in graduate school I mentored inner city high school students from rough neighborhoods, which is where I really cultivated my passion for helping people. I enjoy tuto... 12 Subjects: including prealgebra, calculus, geometry, GRE ...I truly believe that every student is capable of learning. However, some students require one-on-one tutoring to accomplish their goals. Thus, I feel that I am uniquely qualified to help your student achieve his or her academic goals.I have a Bachelors of Arts degree in Chemistry and Mathematic... 57 Subjects: including prealgebra, reading, chemistry, GRE
{"url":"http://www.purplemath.com/Villa_Rica_prealgebra_tutors.php","timestamp":"2014-04-16T13:07:21Z","content_type":null,"content_length":"24022","record_id":"<urn:uuid:1a114167-94a5-4663-8d6d-212200d4ada1>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Level curves August 21st 2010, 02:25 PM Level curves I have to find some level curves for: $f(x,y)=1-|x|-|y|$ So, if we call $S$ at the surface given by the equation $z=f(x,y)$, then $z=1\Rightarrow{-|x|-|y|=0}\Rightarrow{x=y=0} \therefore P(0,0,1)\in{S}$ Now, that particular case its simple, cause it gives just a point, but if I go downwards I get: I'm not sure how to represent this. How does this look on the xy plane? I know that: $|y|=\begin{Bmatrix} 1-|x| & \mbox{ si }& y\geq{0}\\-1+|x| & \mbox{si}& y<0\end{matrix}$ $|x|=\begin{Bmatrix} x & \mbox{ si }& x\geq{0}\\-x & \mbox{si}& x<0\end{matrix}$ But it don't helps me to visualize the "curve". I know actually that it looks like a parallelogram, but thats because I've used mathematica to compute the surface :P I don't know how to deduce it August 21st 2010, 06:06 PM Write $y=\pm(1-|x|)$. Can you graph $y=1-|x|$? August 22nd 2010, 03:58 AM A level curve for f(x)= 1- |x|- |y| is given by 1- |x|- |y|= C, where C is a constant, or |x|+ |y|= 1- C. Now, it should be obvious that, since the left side of that is never negative, C cannot be larger than 1. If C= 1, the equation becomes |x|+ |y|= 0 which, since neither |x| nor |y| can be negative, is only true when x= y= 0. The level curve is the single point (0, 0). For C< 1, as always with absolute values, the simplest thing to do is to break the problem into cases. 1. If x and y are both positive, |x|+ |y|= x+ y= 1- C. That is a straight line but remember to only draw it in the first quadrant. 2. If x< 0 and y> 0 then |x|+ |y|= -x+ y= 1- C. Again, a portion of a straight line but now in the second quadrant. 3. If x< 0 and y< 0 then |x|+ |y|= -x- y= 1- C. A portion of a straight line in the third quadrant. 4. If x> 0 and y< 0 then |x|+ |y|= x- y= 1- C. A portion of a straight line in the fourth quadrant. If you draw those four segments for one value of C, say C= 0, it should be easy to see what the other level curves are.
{"url":"http://mathhelpforum.com/calculus/154150-level-curves-print.html","timestamp":"2014-04-17T10:01:01Z","content_type":null,"content_length":"7474","record_id":"<urn:uuid:c6055bc7-4831-4e35-b42e-3117e36d0b3e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Klein Four Group February 21st 2010, 10:50 AM Klein Four Group Let G be a Klein Four Group. How many distinct homomorphisms $\phi : G \to G$ are there? How many of these are isomorphisms? Stuck on how to start this. Any hints? February 21st 2010, 11:18 AM Write $G=\{1,a,b,ab\}=<a,b\;;ab=ba\,,\, a^2=b^2=1>$ . Now, any endomorphism of $G$ is uniquely determined by its values on the generators $a,b$...now start from here. February 22nd 2010, 01:04 AM I don't understand...What exactly do you mean by "is uniquely determined by its values on the generators http://www.mathhelpforum.com/math-he...09167892-1.gif, ..." ? February 22nd 2010, 01:57 AM The Klein four group has two generators, a and b. Any homomophism must map generators to generators- it must map {a, b} into itself- or map one or more generators onto the identity (I added this last part- I think we need to include this). Any isomorphism must map the set of generators one-to-one and onto itself. February 22nd 2010, 02:05 AM The Klein four group has two generators, a and b. Any homomophism must map generators to generators- it must map {a, b} into itself- or map one or more generators onto the identity (I added this last part- I think we need to include this). Any isomorphism must map the set of generators one-to-one and onto itself. I think there is some confusion here: any HOMOMORPHISM from G to itself (i.e., an endomorphism of G) is uniquely determined by its action on any set of generators of G, but it doesn't HAVE TO map generators to generators. It can map a generator to an element whose order is a divisor of the generator's one, for example. Now, an ISOMORPHISM from G to G (i.e., an automorphism of G) has to map generators to generators. So, for example, the map $f(a)=b,\, f(b)=1,\,f(ab)=b,\,f(1)=1$ is an endomorphism of G but not an automorphism (why?), whereas the map $g(a)=ab,\,g(b)=a,\,g(ab)=b,\,g(1)=1$ is an automorphism of February 22nd 2010, 03:34 AM Description of the endomorphisms of the Klein group I don't understand...What exactly do you mean by "is uniquely determined by its values on the generators http://www.mathhelpforum.com/math-he...09167892-1.gif, ..." ? Let us denote the Klein group by $G=\{e,a,b,c\}$, where e is the identity of G and a, b and c are the three elements of order 2 in G. Intuitively, we cannot "distinguish" between a, b and c in a group-theoretic manner; notice that $a^2=b^2=c^2=e$, that $ab=ba=c$, that $ac=ca=b$, and that $bc=cb=a$. In some sense, therefore, there is a "symmetry" between the elements a, b and c .Mathematically, we say that there is an isomorphism of G that carries a to b, an isomorphism that carries a to c, an isomorphism that carries b to c; succintly, every permutation of a, b and c corresponds to an isomorphism of G. Therefore, there are precisely 6 isomorphisms from G to G, since there are precisely 6 permutations on three letters. How do we determine the number of homomorphisms from G to G? Well, we can consider the kernel of each homomorphism in doing so (recall the first isomorphism theorem?). Since G is abelian, every subgroup of G is normal, and therefore a kernel for some homomorphism defined on G. However, if we deal exclusively with non-trivial homomorphisms that are not isomorphisms, the only possibilities for the kernel of the given homomorphism are $K_1=\{e,a\}$, $K_2=\{e,b\}$ and $K_3=\{e,c\}$. By the first isomorphism theorem, $|\mbox{Im}(f)|=\frac{|G|}{|\mbox{Ker}(f)|}$, and therefore whether the kernel of f is $K_1$, $K_2$ or $K_3$, the image of f must be a subgroup of order 2. Since there are 3 subgroups of G of order 2, and three posibilities for the kernel of f, there are precisely 9 homomorphisms from G to G that are non-trivial and not isomorphisms. Since we discovered that there are 6 isomorphisms of G earlier (and of course there is one trivial homomorphism of G mapping every element of G to the identity), there are in total 16 isomorphisms of G. Does this answer your questions? Regarding the idea that a homomorphism is "uniquely determined by its values on the generators of its domain", recall that if C is a cyclic group and f is a homomorphism on C, f is determined by its image of the generator of C. Why? If g generates C, and $f(g)=x$ (for some x in the range of f), $f(g^i)=[f(g)]^i=x^i$. Since every element of C is a power of g by the very definition of cyclic groups, the image of every element of C under f is determined by the image of the generator of C under f. Essentially, this idea generalizes to groups with more than one generator; it might be a good exercise to appreciate this thoroughly. February 25th 2010, 10:53 PM Thank you! Will go through your posts in more detail over the weekend to see if I fully understand it. March 31st 2011, 05:07 PM I think there is some confusion here: any HOMOMORPHISM from G to itself (i.e., an endomorphism of G) is uniquely determined by its action on any set of generators of G, but it doesn't HAVE TO map generators to generators. It can map a generator to an element whose order is a divisor of the generator's one, for example. Now, an ISOMORPHISM from G to G (i.e., an automorphism of G) has to map generators to generators. So, for example, the map $f(a)=b,\, f(b)=1,\,f(ab)=b,\,f(1)=1$ is an endomorphism of G but not an automorphism (why?), whereas the map $g(a)=ab,\,g(b)=a,\,g(ab)=b,\,g(1)=1$ is an automorphism of if an isomorphism maps generators to generators how could f(a)=ab allow f to be an isomorphism? So far I have this: Since an endomorphism is uniquely defined by the actions on its generators. And for the klein 4 group the generators are a and b then There are 3 choices for a and 2 choices for b so there are 6 endomorphisms? This does not seem right. Could you guys help me out? March 31st 2011, 07:22 PM a homomorphism φ is completely determined by its image on two of the elements of order 2 in G. let's call them a and b, its as good as any other names. so how many ways can we pick a 2-element set out of a 4-element set? 2^4 = 16 ways. 6 of these mappings are automorphisms (that take generators to distinct generators, one of which is the identity map on G). that map that takes a-->1 b-->1 results in everything going to 1, the trivial map. that leaves 9 other possible homomorphisms. suppose a-->1. since we have already counted the trivial map, we have 3 choices for the image of b. these map {1,a,b,ab} to {1,φ(b)} with kerφ = {1,a}. similarly b-->1 yields 3 homomorphisms with {1,a,b,ab} going to {1,φ(a)} with kerφ = {1,b}. so, what's left? well, the only possibilities are that a and b go to the SAME generator of G, (3 choices here) and we have kerφ = {1,ab}. that is, all 16 maps of {a,b} to G yield homomorphisms. and these must be the ONLY homomorphisms because φ(1) has to be 1, and φ(ab) has to be φ(a)φ(b). April 1st 2011, 04:37 AM if an isomorphism maps generators to generators how could f(a)=ab allow f to be an isomorphism? So far I have this: Since an endomorphism is uniquely defined by the actions on its generators. And for the klein 4 group the generators are a and b then There are 3 choices for a and 2 choices for b so there are 6 endomorphisms? This does not seem right. Could you guys help me out? Because $ab$ is a generator! $\langle a, ab\rangle = \langle ab, b\rangle = \langle a, b\rangle$. Indeed, HallsOfIvy's claims that the Klein 4-group has 2 generators is incorrect! It has 3! But it can be generated by 2. That is, you need to work out where a and b are sent, but they can also be sent to ab! However, I believe you have all the information you need to continue on your own. I mean, in the example Tonio gave you can notice/work out that the function is a homomorphism, and that it is an injection (trivial kernel) and a surjection, and so it is an automorphism!
{"url":"http://mathhelpforum.com/advanced-algebra/129957-klein-four-group-print.html","timestamp":"2014-04-21T10:45:44Z","content_type":null,"content_length":"20752","record_id":"<urn:uuid:7ff8c992-ac0b-42b2-bbd7-f34d032eaadc>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Natick Math Tutor Find a Natick Math Tutor ...It addresses several types of equations such as first Order Differential Equations such as Linear Equations, Separable Equations, Bernoulli Equations, Homogeneous Equations, Exact and Non-Exact Equations, Integrating Factor technique,Radioactive Decay, Population Dynamics, Existence and Uniquenes... 38 Subjects: including prealgebra, elementary (k-6th), grammar, reading ...I have tutored several students for the GRE. My background is ideal for this: I majored in math at Harvard and taught high school math for many years, so I have a good background for the Quantitative Reasoning section. I am a trained attorney who graduated from a top 20 law school, so I have a good background in Verbal Reasoning and Analytical Writing. 29 Subjects: including algebra 2, trigonometry, linear algebra, ACT Math ...I am enrolled in the Darden School of Business at the University of Virginia and will receive my MBA in 2015, so I have honed and can share the organizational and study skills that have made me successful in academia. Finally, my diverse business experience, particularly in management consulting... 46 Subjects: including precalculus, prealgebra, physics, business ...I have taken three years of undergraduate physics, and have been a Teacher's Assistant for the freshman physics course. I have also done several years of lab work. I have taken Classical Mechanics, Electricity and Magnetism, Statistical Mechanics, Quantum Mechanics, Astrophysics, and more. 23 Subjects: including algebra 1, algebra 2, American history, calculus ...Since 2008 I've tutored high school and college students from a wide range of Boston-area schools. I'm familiar with a wide range of curricula but always take care to review a teacher or professor's specific approach prior to tutorial sessions so as to make the most of our time. If interested i... 15 Subjects: including precalculus, nutrition, fitness, algebra 1
{"url":"http://www.purplemath.com/natick_math_tutors.php","timestamp":"2014-04-21T04:38:38Z","content_type":null,"content_length":"23604","record_id":"<urn:uuid:b10e040c-34f3-4808-8201-193eb04f2bed>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 1999 [00359] [Date Index] [Thread Index] [Author Index] Re: Q: Union and SameTest Option • To: mathgroup at smc.vnet.net • Subject: [mg16048] Re: [mg16016] Q: Union and SameTest Option • From: BobHanlon at aol.com • Date: Sun, 21 Feb 1999 00:15:21 -0500 • Sender: owner-wri-mathgroup at wolfram.com In a message dated 2/20/99 6:43:15 AM, btml01 at uni-bayreuth.de writes: >Can anybody explain to me what's going on here ? >Union[ {{a,b,1},{a,c,1},{x,y,2},{a,b,2}}, > SameTest->(#1[[2]]===#2[[2]]&) ] >is ok since the first and the last list element have the same second >component. But what is wrong here now ? >Union[ {{a,b,1},{a,c,1},{x,y,2},{a,b,2}}, > SameTest->(#1[[3]]===#2[[3]]&) ] test = {{a,b,1},{a,c,1},{x,y,2},{a,b,2}}; Union[ test, SameTest->(#1[[2]]===#2[[2]]&) ] {{a, b, 1}, {a, c, 1}, {x, y, 2}} Union[ test, SameTest->(#1[[3]]===#2[[3]]&) ] {{a, b, 1}, {a, b, 2}, {a, c, 1}, {x, y, 2}} The first case worked not because the second elements were equal, but because the first elements were also equal. Note that the first case does not work if {a, b, 2} is changed to {e, b, 2}. Union[ {{a,b,1},{a,c,1},{x,y,2},{e,b,2}}, SameTest->(#1[[2]]===#2[[2]]&) ] {{a, b, 1}, {a, c, 1}, {e, b, 2}, {x, y, 2}} Based on this behavior, I would guess that Mathematica first sorts the elements, then compares only adjacent elements for "sameness". Consequently, to implement the "Union" that you want, you need to modify the approach. myUnion[x_List, n_Integer?Positive] := Module[ {temp = RotateLeft[#, n-1]& /@ x}, temp = Union[temp, SameTest -> (#1[[1]] === #2[[1]]&)]; RotateRight[#, n-1]& /@ temp]; Table[myUnion[test, k], {k, 3}]//ColumnForm {{a, b, 1}, {x, y, 2}} {{a, b, 1}, {a, c, 1}, {x, y, 2}} {{a, b, 1}, {a, b, 2}} Bob Hanlon
{"url":"http://forums.wolfram.com/mathgroup/archive/1999/Feb/msg00359.html","timestamp":"2014-04-19T22:19:55Z","content_type":null,"content_length":"35862","record_id":"<urn:uuid:2da88435-9cab-41ab-b33d-7057b0dc71e7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
From Wikibooks, open books for an open world This category contains pages that are part of the Arithmetic book. Books or Pages The following 51 pages are in this category, out of 51 total. I cont. A R B S E T F W H Z
{"url":"http://en.wikibooks.org/wiki/Category:Arithmetic","timestamp":"2014-04-19T09:35:43Z","content_type":null,"content_length":"31436","record_id":"<urn:uuid:484d101a-ab76-434d-8d74-3e9778d7902f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Probabilities of Success - A Primer - Wyatt Investment Research I have received numerous questions on the probabilties of success. What is means, how it works. Hopefully the following post answers a few of those pending questions. Once you have found a highly-liquid ETF in an extreme overbought/oversold state you can begin to look for a high-probability trade. But before I get into the heavy stuff let me start out with some obligatory technical mumbo jumbo and then I will get to an example that should hopefully help to clear things up. Probability of expiring: The ‘probability of expiring’ reflects whether an underlying stock’s price is above or below a strike price at expiration. An underlying stock will either finish out-of-the-money or in-the-money so there are two possible scenarios for ‘probability of expiring’: probability of expiring in-the-money or probability of expiring out-of-the-money (Prob.OTM). Remember, we want to keep it simple so let’s focus on what matters – probability of expiring out-of-the-money. Probability of expiring out-of-the-money is the chance that a strike price will close at expiration below an underlying stock price for calls and above an underlying stock price for puts. My trading software (Thinkorswim) offers this helpful tool, but for those of you who do not have a platform that offers Prob.ITM you can just use delta of an option as it is roughly the same I will explain in a moment why it is so valuable to know the Prob.OTM. Again, before I get to the nitty gritty, let me explain ‘probability of touching’ (Prob.Touch). Probability of touching: considers the possibility of the stock hitting (touching) the strike price at any time between now and expiration. Again, I realize that some of you do not have access to trading software that gives you the probability of touching either, but any worthy trading software will provide you with the delta of any given option. And the Prob.Touch is simply double the delta. So, the real question is, how can we use Prob.OTM and Prob.Touch to our advantage? Look at the chart below. At the time I wrote up this example the price of SPDR S&P 500 ETF (SPY) was trading at $131.50 and in an overbought state. My assumption based on the current overbought state of SPY was the S&P 500 would move lower over the next 39 days (July expiration). This is where it gets interesting. Because I thought SPY would close below its price of $131.50 I wanted to choose a strike that had a Prob.OTM that is AT LEAST above 50% and in almost all cases higher. I prefer 85%. Look at the strikes below for SPY call options in July to see what qualifies – 132 and above. The strike immediately above the price $131.50 of SPY, 132, has a Prob.OTM of 50.87%. That’s not high enough for me. It is essentially a coin flip. Again, I prefer something that has a higher Prob.OTM – say the Jul12 139 strike, for instance. It has a Prob.OTM of 87.26%. That means that that if I sell a call vertical, otherwise known as a bear call spread, I might sell the 139 call strike and buy maybe the 141 strike. The trade would have a probability of success (also known as the Prob.OTM) of 87.27%. Extrapolate the 87% out 100 trades or 1000 trades and you begin to see the value of using options strategies with a high Prob.OTM. But what about Prob.Touch? How does that factor into all of this probability madness. Prob. Touch should be viewed as the potential stress level of a particular trade. In our case, if we sold the SPY Jul12 139/141 call spread, the underlying ETF or SPY would have a 26.28% chance of touching our short strike of 139. I like that percentage because there is still a low probability that SPY will ‘touch’ my short strike. This is invaluable information because it gives you a good idea of how stressful the trade will be. Just think if we decided to choose to short a strike with a lower Prob.OTM, which inherently has a higher Prob.Touch, at say the 135. Again, we want to use a bear call spread so we would sell the 135 /137. The 135 has a Prob.OTM or probability of success of over 70%, which is still fairly high, considering a stock trade only has a 50% chance of success. But if you notice the Prob.Touch you will discover that the probability is over 62%. That just means that while you still have a good chance of the trade going in your favor, you should expect to experience some stress with the trade. Most newbie traders don’t think about this important aspect. Always remember – you want to take emotions out of the equation. One way to do this is position-sizing, which should ALWAYS be considered with each and every trade. But the other way is to keep your Prob.Touch below 50% preferably below 30%. I know this is a lot to grasp, but again these are the strategies that are revolutionizing how self-directed investors (like you and me) think about investing. The movement has already begun – so don’t be left behind.
{"url":"http://www.wyattresearch.com/article/probabilities-of-success-a-primer/","timestamp":"2014-04-21T03:16:04Z","content_type":null,"content_length":"58494","record_id":"<urn:uuid:a6c85553-4900-4501-b557-7cc6cc2a07bd>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating Using Dates AD and BC Date: 11/25/2003 at 09:17:56 From: Theodore Subject: How do you work the BC & AD math problems How do you calculate how many years there are between dates where one is given in years B.C. and the other is in years A.D.? For example, how many years were there from 15 B.C. to 63 A.D.? Do you start with zero or not? I just start with zero and add the B.C. amount, then add the A. D. amount. Is that correct? Date: 11/25/2003 at 12:21:14 From: Doctor Peterson Subject: Re: How do you work the BC & AD math problems Hi Theodore. Thanks for writing to Dr. Math. As discussed here Year 0 there was no year 0; that forces us to make an adjustment when we compare A.D. and B.C. dates. If there were a year 0, the time line would look like this: / \ -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9 (I've put the numbers between the marks because a given year spans the time from Jan 1 one year to Jan 1 the next.) Then you could find the time from, say, June 1, 5 B.C. to June 1, 7 A.D. by subtracting a negative number: 7 - -5 = 7 + 5 = 12 years That amounts to what you do: there are 5 years before 0, and 7 years after, making a total of 12. But in fact there was no year zero, because the people who invented the B.C./A.D. system didn't know about the number zero yet. (That was, of course, long after the year zero, but still a long time ago!) So reality looks like this: / \ -9 -8 -7 -6 -5 -4 -3 -2 -1 1 2 3 4 5 6 7 8 9 To make that, I just deleted one year's worth from a copy of the first version--and that's all you have to do to find the difference between dates. After you subtract the first date from the second, you subtract one year if the non-existent year zero would have been between them. (7 - -5) - 1 = 12 - 1 = 11 years. If you have any further questions, feel free to write back. - Doctor Peterson, The Math Forum Date: 11/25/2003 at 20:29:40 From: W. Theodore Subject: Thank you (How do you work the BC &amp; AD math problems) Dr. Math, I would like to say thank you for your help. I am home- schooled and my text has nothing to say about that topic. Thanks again!
{"url":"http://mathforum.org/library/drmath/view/64396.html","timestamp":"2014-04-18T01:13:23Z","content_type":null,"content_length":"7757","record_id":"<urn:uuid:02798489-94d2-4545-8328-ba50087140eb>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
math--please check Number of results: 299,405 To sam! Was the five part unknown question for compounds A through E yours? If so, did you get it answered ok? no it was not mine! can you please check my math? can you please check my math? Monday, December 4, 2006 at 1:26pm by DrBob222 language art CHECK Will somebody please check this for me and see if i used the words correctly please? Having spent many years as political opponents, the two senators have developed a(n)MUTUAL respect for each other. Please check if it is correct. thank you Tuesday, January 17, 2012 at 8:51pm by saranghae12 Math (Check) The equation y = a^x is a decreasing function a = 1) 1 2) 2 3) -3 4) .25 I think that the correct answer is 3) -3...but I am not sure why? Can anyone please check if I am right and explain why? Please....Thank you for your help!!! Monday, March 9, 2009 at 7:11pm by jess Math 157 Can someone please help me. At a quality control checkpoint on a manafacturing assembly line, 10% of the items failed check A, 12% failed check B, and 3% failed both checks A and B. a. If a product failed check A, what is the probability that it also failed check B? b. If a ... Wednesday, December 8, 2010 at 9:51pm by Rena Can someone please help me. At a quality control checkpoint on a manafacturing assembly line, 10% of the items failed check A, 12% failed check B, and 3% failed both checks A and B. a. If a product failed check A, what is the probability that it also failed check B? b. If a ... Wednesday, December 8, 2010 at 8:37pm by Rena math (please check) Can you please check if I did these right? 1. 415x+301=1009x+129 2. 1596y-3080=900y I know I'm being annoying and stupid, but I'll stop posting. thanxxx! Monday, September 21, 2009 at 11:58am by Anonymous Math 116 algebra please check Solve using the multiplication principle. 10x= -90 The solution is x = -9 please check Wednesday, July 16, 2008 at 7:32pm by jay jay Math please check answer please check my answer :) The monthly payment of a $100,000 mortgage at a rate of 8 1/2 % for 20 years is 8678.23 Thursday, January 10, 2008 at 5:51pm by keleb Math 116 algebra please check Solve for the indicated letter. e=9f, for f The solution is f = 1/9e Please check this Wednesday, July 16, 2008 at 7:27pm by jay jay Math 116 algebra please check Simplify 6[-73-(-19-83)]=174 Please check Wednesday, July 16, 2008 at 7:34pm by jay jay Math[PLease Check] Problem: 25-x^2 6 ------ * --- 12 5-x What I got: 5-x --- 2 *PLease Check My Work. No. Factor the 25-x^2 into two factors. Then the 5-x in the denominator divides out. So then is it: 5+x --- 2 yes. Saturday, February 24, 2007 at 6:12pm by Margie English check please I've corrected it again 4.enhance 5.discern Please check. If its not correct can you give me anotther hint please. Wednesday, September 14, 2011 at 1:12pm by ME To Ms.Sue or any Math helpers Can you please help me with my math law of exponent quesions, please check my answers, please its due very soon. Sunday, September 8, 2013 at 8:08pm by Sophie Math - please double check me Only your second answer is correct. For the first one, you're asked to estimate. Hint: 4 * 3 = 12 Please try again, and we'll be glad to check them again. Friday, March 28, 2008 at 12:13pm by Ms. Sue 7th grade math check Ok I was wondering if you could check these 3 problems please. I am suppose to set these equations equal but I am not sure if did them right. I think I accidently solved these systems as substitutions. The answers are suppose to be in coordinate pairs. 1) x - y = 11 -3x + y... Thursday, April 12, 2012 at 9:11pm by saranghae12 Math 116 algebra please check Multiply 1/7×(-3/7)=-3/49 Please check Wednesday, July 16, 2008 at 7:36pm by jay jay math - please check question When we try to "solve" for the value of a variable, we need an equation. I do not see an equal sign or the equivalent. Please check the question to see if something is missing. Wednesday, August 17, 2011 at 6:19pm by MathMate computers please check my answer is there any 4th grade answers to math please help me find it please!!!!!!!! Wednesday, June 4, 2008 at 8:28am by lilesn math please help someone! anyone!!!! Lamont earns $5.70 per hour. last Friday Lamont received a pay check for $745.25. this pay check included an extra $200 bonus. about how many hours did Lamont work during the last pay period? ms sue please help, i would really appreciate it. Wednesday, December 11, 2013 at 5:32pm by math help MS.SUE Here are my answers can you pretty please check them please I'm in a hurry, sorry thank you so much :( !!!!! Wednesday, December 5, 2012 at 3:48pm by Sammy, !!!Please Answer!! Math(Please check) I have the same quotient, but the remainder is 8x+350. Can you check your calculations? Saturday, August 28, 2010 at 9:00pm by MathMate college math I'll be glad to check your answer. Please check your problem. Is the last term 9y or 9-y? Saturday, June 9, 2012 at 7:36pm by Ms. Sue Math: law of exponents check answers I like this last one: (c^2/3d^-1)^-2 = c^-4 / (3^-2 d^2)= 9/c^4d^2 check it please. Sunday, September 8, 2013 at 7:53pm by bobpursley Math(Please check) The (1/2) has been missed out, should read: ∫ du/(2u^2) =-1/(2(x^2-2)) After that you can plug in the limits of integration. I get 5/28 integrating from 2 to 3. The bottom limit is the start value, and the top value is the end value. It is usual to go from 2 to 3, and ... Saturday, April 30, 2011 at 6:06pm by MathMate Grammar PLease Check MY answers! Ms.Sue please its a grade please check my answers it would mean the world to me. Wednesday, April 3, 2013 at 8:38pm by Mandie Math-Algebra-Please check Thank you-I appreciate it- I posted another- could you just check it to make sure I got this concept. Wednesday, October 6, 2010 at 6:26pm by Tanya Math-Algebra-Please check Thank you-I appreciate it- I posted another- could you just check it to make sure I got this concept. Wednesday, October 6, 2010 at 6:26pm by Tanya English-CHECK PLEASE Ok, I figured this out (I think) but can you please check? Don't worry. I can carry the tent all by myself because it is so light. I do not even need a lantern it is so light outside. In these sentences, the word "light" is used as a: A:homophone B:homonyme C:synonym D:simile ... Friday, April 15, 2011 at 6:03pm by Catherine math-please check my answer 0.6x + 4 < 1.0x - 1 6x + 40 < 10x - 10 4x > 50 x > 12.5 can someone please check my work and tell me if I am right? Friday, July 30, 2010 at 10:22pm by Michelle Grammar--Please check! Can someone please check my above post about gerund phrases? Thanks! -MC Wednesday, February 18, 2009 at 8:35pm by mysterychicken SS Please check answers 1, 8, and 10 I believe are wrong. I don't know about 9. Please double check your book. Monday, April 8, 2013 at 5:37pm by Ms. Sue Spanish-Please check Please check-Thank you I come from the library. Yo vengo de biblioteca. Sunday, May 15, 2011 at 3:29pm by Emily Spanish 7th gradePlease check SraJMcGin-Could someone help me, please? Could you please check the above answer- Monday, May 23, 2011 at 2:40pm by Emma- Spanish-SRAJMcGin please check Could Sra JMcGin please check my answers to these questions if she has time? Thank you Tuesday, March 13, 2012 at 10:03am by Tiffany Social Studies 8R - Paragraph Check please check my my essay okayyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy please let me know if there are any wrong let me know Thursday, November 8, 2012 at 6:58pm by alexis Social Studies 8R - Paragraph Check please check my my essay okayyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy please let me know if there are any wrong let me know Thursday, November 8, 2012 at 6:58pm by alexis Math for Steve or someone good at math Please check my answers. 1. b 2. a 3. c 4. b 5. d If i got any wrong please tell me the right ones. Thursday, November 8, 2012 at 2:05pm by Jman Grammar--PLEASE CHECK! Please check if my answers in the 2 posts above are correct Thank you -MC Sunday, May 10, 2009 at 3:52pm by mysterychicken Public Speaking Please Check I don't know about 1 and 3, but 2 and 4 are definitely wrong. Please go back and check these definitions in your text materials. Sunday, January 10, 2010 at 10:59pm by Ms. Sue Algebra-Mathmate, Reiny or Dr. Bob-please double check-Did I even begin correctly? Please check the above problem-Thank you Monday, December 13, 2010 at 7:03pm by Matt math plase check What number would be used to represent a temperature of 7- degrees below zero. I think its is -.70 please check Wednesday, June 6, 2012 at 5:27pm by toni Physics-above question -please check it Just please check my calculations- Friday, October 8, 2010 at 3:59pm by Katherine Algebra Please check my answer It should have read please check- Monday, October 18, 2010 at 8:50pm by Annie drwl / bobpursley please check I had posted three questions at 1:48pm, 2:40pm and 2:41pm and I am having problems with them. Can you please check them for me. Thanks! Repost if you are not satisified with the responses. Tuesday, March 13, 2007 at 8:50pm by Mary please check(Social studies) 1, 2, and 4 are wrong. I know nothing about Daniella. Please note: I will only check your answers to these questions one more time. Thursday, March 13, 2014 at 7:56pm by Ms. Sue Math please check Find the unknown value in the percent proportion. 5/ = 4/100 My asnwear please check am I right? 5/x = 4/100 500=4x the unkown value is 125 So 5/125 = 4/100 Thursday, July 24, 2008 at 7:34pm by Bryan Math please check answer please check my answer thanks What would be the amount of compound interest on $ 8,000 invested for one year at 6 %, compounded quarterly ? (need to show all of your work ) ok this is what I got $8,000 x 0.6 = $ 480 $ 480 / 4 = $120 $120 x 0.6 / 4 = $ 121.80 $8,241.80 x 0.6 /4... Monday, May 19, 2008 at 5:14pm by Maria Physics check I just want somebody to check my answer on this problem please :-) There is a box which weighs 1000 kg. There is an applied force acting upon it with a force of 20000 N (horizontally). The mu is 1.2. What is the acceleration (again, horizontally) of the object? I got 8.24 m/s^... Monday, November 5, 2007 at 10:55pm by Emily Social Studies Please Help I know but please I'm the parent and my son tell me to check and i don't know 2 much please in your own words please :( :( :( :( please pretty please please please !!!! :( don't give give me links please in your own words please please :( !!!!! Identify two ... Saturday, April 6, 2013 at 9:49pm by Alejandra 5th grade Math Find the product. Estimate to check. 0.12 X 04 _______ O.108 How do I estimate to check? 0.07 X 9.2 ________ 0.644 How do I estimate to check? 0.848 X 3.2 _________ 2.7136 How do I estimate to check? Choose the better estimate 22 X 0.6 a.(12) b.(1.2) I chose a 2.3 X 4.8 a.(10... Sunday, October 4, 2009 at 12:10am by Please Help math- check this please I need to know what is 4.72 in expanded form please Monday, November 19, 2007 at 6:09pm by Melody Math(Please check) Please diregard, I figured out my mistake. Thursday, November 1, 2012 at 2:50pm by Hannah 7th grade math please check ASAP please! You are guessing. Wednesday, December 5, 2012 at 7:09pm by bobpursley math-PLEASE CHECK AND HELP find the surface area and volume of the right circular cylinder. The radius is 3 and height 6. This is what I put for volume: v=pir^2h pi(3cm)^2 6 cm= 54pi cm cubed can you please check this and show work on how to do the surface area? Thanks. Sunday, May 16, 2010 at 9:15am by crystal math (please check my work) i got y=x-4 then can u please proofread that for me Tuesday, February 23, 2010 at 4:09pm by jerson Math 8R - HW Qs. Check (can anyone help me please) can anyone help me please Sunday, September 9, 2012 at 2:04pm by Laruen Math please ASAP is it "D"? Wednesday, March 26, 2014 at 3:25pm by Ms. Sue Please check my answer PLease Check My Work 3. T^2-25/t^2+t-20= (t-5) (t+5)/ (t-10) (t+2) 4. 2x^2+6x+4/4x^2-12x-16= 2(x+2) (x+1)/4 (x-4) (x+1) 5. 6-y/y^2-2y-24= 1/(y+4) *Please check them.- 3. (t-5)(t+5) / (t+4)(t-5) 4. 2((x-1)(x+2)) / 4((x+4) (x-1)) 5. (6-y) / (y+6)(y-4) i dont really understand what this is... but lol ... Thursday, February 22, 2007 at 7:31pm by Margie drwls please check Can you please check my physics posts. Thanks! Tuesday, November 6, 2007 at 12:30am by Mary Algebra drwls please check Can you please check my problem Thank you Tuesday, October 12, 2010 at 3:44pm by Megan Spanish-Please check they're differnent Please check these are different Friday, November 25, 2011 at 1:23pm by Mia American Government- Please Check Could someone check this for me, please and thank you (: Wednesday, March 7, 2012 at 2:27pm by Abigail Math/Physics(Please check work) Please disregard. I figured out my mistake. Thursday, November 1, 2012 at 12:51pm by Hannah Help!Math...Again!!!PLease Help! can you just check i please. Wednesday, November 15, 2006 at 5:02pm by Anonymous Math(Please check, urgent) Please disregard. Thank you. Monday, September 17, 2012 at 10:53pm by Hannah Math! Please Check My Awnser! Please someone help me! Tuesday, January 14, 2014 at 3:20pm by Electro Math please check answer Please check my answer thanks :) Add the following and reduce answer to lowest terms. 1/3 +5/6 + and 3/8 ----------------------- 1/3 = 8/24 5/6 = 20/24 5/6 = 9/24 -------------- 37/24 = 1 13/24 Monday, January 7, 2008 at 6:14pm by keleb solve and check your answer. Please check this for me to see if I have the right answer. __2__ = __3__ 1-x 1+x x= 1/5 Thursday, April 4, 2013 at 11:41am by Michael please please someone check my work I don't understand it fully so I am asking if someone could please check my work thank you if it's too much I'm sorry. God Bless Wednesday, January 21, 2009 at 5:07pm by Hellokitty1993 Friday, December 7, 2012 at 2:58pm by Help!!!!!!!!!!!!!! Fast its urgent Please!!!!!!!!! Math! Please Check My Answers Please Help! Wednesday, August 14, 2013 at 2:21pm by Jack Algebra-Please check Please check- sqrt of 2x + 15 = x Answers: -5.3 -3.5 -3 5 I think according to my calculations it would be 5, correct? sqrt root of 7-x = 3-solve for x Answers: -2 2 4 -4 My choice would be 2 Is that correct- I squared both sides and came out with 7-x = 9 and then x = 2 Thank ... Tuesday, October 12, 2010 at 11:12am by Juliette Check this problem for me do I have negative sign right with the answer. -1(-1)+-5-5=-2/-10==-1/5 the one on top and the -1(-1) and on the bottom is -5-5. so check this answer for me. Please, Please Saturday, March 7, 2009 at 12:08am by Elaine Check this problem for me do I have negative sign right with the answer. -1(-1)+-5-5=-2/-10==-1/5 the one on top and the -1(-1) and on the bottom is -5-5. so check this answer for me. Please, Please Saturday, March 7, 2009 at 12:08am by Elaine Check for grmmatical errors and stuff ya know? Please and thankies sooo much please it would be greatly appreciated no one will check it, thankies :) Monday, February 3, 2014 at 7:12pm by ilikezackalot Health care please check answer Please check my answer thank you Verifying the accuracy of HCPCS codes is an important function in what ? I think that it is maintence of chargemasters Tuesday, March 11, 2008 at 2:49pm by Rebbeckha Algebra-Bob Pursley orMathmate or Reiny or Dr. Bob, please check Could one of you please check when you have time. Thank you Tuesday, December 14, 2010 at 11:49am by Anna Spanish-8th grade-SraJMcGin please check SraJMcGin please check-thank you very much in advance This is homework for Thursday Wednesday, February 9, 2011 at 12:31pm by Jessie Spanish-Please another to check Please also check these-I had to post twice to get them listed-had a problem numbering the answers Friday, December 30, 2011 at 4:54pm by Sophie Health please check my answer Please check my answer thanks True or Faslse The term used to describe why medical treatment is necessary is procedure I say False Tuesday, April 8, 2008 at 5:04pm by Graie Spanish 7th grade-Please check Please check. How would I say "I feel like drawing. Would it be: Yo tengo ganas de dibujar. Wednesday, April 6, 2011 at 12:19pm by Sally spanish/check please Please check the spelling of #4. You choose correctly but the spelling is: conozco. Otherwise, ¡perfecto! Sra Tuesday, May 8, 2012 at 8:30pm by SraJMcGin English please check my answer please check to see if I have the right answer thank you The patient had a radiogram performed. What doed the root word indicate I think it's b Thursday, March 20, 2008 at 5:59pm by Dakotah math(check answers please) PLEASE CHECK THESE ANSWERS compare the pair of numbers use <,>,= 3/4 _ 6/8 A.3/4=6/8(I PICKED THIS) B.3/4<6/8 C.3/4>6/8 Write the decimal as a fraction or mixed number in simplest form. 3.45 A.4 3/4 B.4 9/20 C.3 3/4 D.3 9/20(I PICKED THIS Wednesday, November 20, 2013 at 5:44pm by matt Here's the first one. x+5=12 x + 5 - 5 = 12 - 5 x = 7 I'll be glad to check your answers for the other problems. Please check the last problem to be sure you've typed it correctly. Monday, May 2, 2011 at 7:40pm by Ms. Sue I have no money at all no credit card or checks or anything! So can someone please tell me where to get a free background check or free record check?? I really need this please!! Monday, February 14, 2011 at 4:51am by NEEDS HELP!! This is a different question then the previous post-please check also Monday, May 28, 2012 at 2:33pm by Sammy-Please check this is a differnet question that can someone please check my answer for me? thanks in advance 9(x+4)=3(3x+2)+30 I got x=8 is this correct if not could you please tell me where I missed the step at? Thursday, September 24, 2009 at 8:30pm by Anonymous English 7 - Please Check My Journal Entry Please check my journal entry. This is a HUGE part of my grade for english. Is it good, any grammer mistakes. Please read it. Thursday, February 2, 2012 at 5:38pm by Laruen Math-PLEASE check PLEASE HELP!!!!! Tuesday, March 1, 2011 at 5:48pm by Annabel Health please check my answer Please check my answer thanks The radiologist is able to evaluate the movement of body organs by doing what procedure A. Ultrasound B. MRI C. Film survey D. Fluroscopic exam I picked A Thursday, March 20, 2008 at 5:53pm by Kaleigh-Anne Please Check(Math) d+7/d^2+49= / (d+7) (d+7)*I got the bottom but I don't know what to put at the top...please help!!! Please HElp Me!!Sorry,I am trying to e patient... The d+7 on top cancels with one of the d+7 on the bottom and you are left with 1/(d+7) Tuesday, February 20, 2007 at 2:22pm by steve the hippo Spanish-Check please! I need help Please check-I'm having a problem interpreting these questions and I have to answer them for homework. Could you please check my translation I'm not sure what these questions mean and I have to answer them for homework ¿Qué necesitas para comer en cereal?does this ask what I ... Wednesday, March 21, 2012 at 7:49pm by Ellen Thank you drwls. You have a little math error though the answer is 541. I understand everything you did. Can you PLEASE be the person to check my work PLEASE? Monday, April 7, 2008 at 12:07pm by Jon math please help None of the above answers is correct. Please check to make sure you copied the problem and answer choices correctly. Sunday, February 20, 2011 at 8:11pm by Ms. Sue At a quality control checkpoint on a manufacturing assembly line, 8% of the items failed check A, 10% failed check B, and 2% failed both checks A and B. a. If a product failed check A, what is the probability that it also failed check B? b. If a product failed check B, what is... Friday, November 6, 2009 at 9:00pm by Anna At a quality control checkpoint on a manufacturing assembly line, 8% of the items failed check A, 10% failed check B, and 2% failed both checks A and B. a. If a product failed check A, what is the probability that it also failed check B? b. If a product failed check B, what is... Friday, November 6, 2009 at 9:21pm by Anna At a quality control checkpoint on a manufacturing assembly line, 8% of the items failed check A, 10% failed check B, and 2% failed both checks A and B. a. If a product failed check A, what is the probability that it also failed check B? b. If a product failed check B, what is... Friday, November 6, 2009 at 9:25pm by Anna Math please check answer please check my answer thanks :) A Pet supply store recorded net sales of $423,400 for the year. The store's beginning inventory at retail was $105,850 and it's ending inventory at retial was $127,020. What would be the inventory turnover at retail, rounded to the nearset ... Thursday, November 29, 2007 at 7:00am by Michalea Health please check my answer Please check my answer thanks What does the abbrevation "AD" represent in a medical report ? My answer is both ears Thursday, March 20, 2008 at 5:40pm by Kaleigh-Anne Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=math--please+check","timestamp":"2014-04-16T05:15:32Z","content_type":null,"content_length":"34894","record_id":"<urn:uuid:b0f63c5c-a317-46f9-b154-defb7ae9e7fa>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
5 search hits Mechanisms of nanofractal structure formation and post-growth evolution (2011) Veronika V. Dick Nanotechnology is a rapidly developing branch of science, which is focused on the study of phenomena at the nanometer scale, in particular related to the possibilities of matter manipulation. One of the main goals of nanotechnology is the development of controlled, reproducible, and industrially transposable nanostructured materials. The conventional technique of thin-film growth by deposition of atoms, small atomic clusters and molecules on surfaces is the general method, which is often used in nanotechnology for production of new materials. Recent experiments show, that patterns with different morphology can be formed in the course of nanoparticles deposition process on a surface. In this context, predicting of the final architecture of the growing materials is a fundamental problem worth studying. Another factor, which plays an important role in industrial applications of new materials, is the question of post-growth stability of deposited structures. The understanding of the post-growth relaxation processes would give a possibility to estimate the lifetime of the deposited material depending on the conditions at which the material was fabricated. Controllable post-growth manipulations with the architecture of deposited structures opens new path for engineering of nanostructured materials. The task of this thesis is to advance understanding mechanisms of formation and post-growth evolution of nanostructured materials fabricated by atomic clusters deposition on a surface. In order to achieve this goal the following main problems were addressed: 1. The properties of isolated clusters can significantly differ from those of analogous clusters occurring on a solid surface. The difference is caused by the interaction between the cluster and the solid. Therefore, the understanding of structural and dynamical properties of an atomic cluster on a surface is a topic of intense interest from the scientific and technological point of view. In the thesis, stability, energy, and geometry of an atomic cluster on a solid surface were studied using a liquid drop approach which takes into account the cluster-solid interaction. Geometries of the deposited clusters are compared with those of isolated clusters and the differences are discussed. 2. The formation scenarios of patterns on a surface in the course of the process of cluster deposition depend strongly on the dynamics of deposited clusters. Therefore, an important step towards predicting pattern morphology is to study dynamics of a single cluster on a surface. The process of cluster diffusion on a surface was modeled with the use of classical molecular dynamics technique, and the diffusion coefficients for the silver nanoclusters were obtained from the analysis of trajectories of the clusters. The dependence of the diffusion coefficient on the system’s temperature and cluster-surface interaction was established. The results of the calculations are compared with the available experimental results for the diffusion coefficient of silver clusters on graphite surface. 3. The methods of classical molecular dynamics cannot be used for modeling the self-assembly processes of atomic clusters on a surface, because these processes occur on the minutes timescale, what would require an unachievable computer resource for the simulation. Based on the results of molecular dynamics simulations for a single cluster on a surface a Monte-Carlo based approach has been developed to describe the dynamics of the self-assembly of nanoparticles on a surface. This method accounts for the free particle diffusion on a surface, aggregation into islands and detachment from these islands. The developed method is allowed to study pattern formation of structures up to thousands nm, as well as the stability of these structures. Developed method was implemented in MBN Explorer computer package. 4. The process of the pattern formation on a surface was modeled for several different scenarios. Based on the analysis of results of simulations was suggested a criterion, which can be used to distinguish between different patterns formed on a surface, for example: between fractals or compact islands.This criteria can be used to predict the final morphology of a growing structure. 5. The post-growth evolution of patterns on a surface was also analyzed. In particular, attention in the thesis is payed to a systematical theoretical analysis of the post-growth processes occurring in nanofractals on a surface. The time evolution of fractal morphology in the course of the post-growth relaxation was analyzed, the results of these calculations were compared with experimental data available for the post-growth relaxation of silver cluster fractals on graphite substrate. All the aforementioned problems are discussed in details in the thesis. Einfluss schwerer hadronischer Zustände auf das QCD-Phasendiagramm und die Ausfrierbedingungen in einem hadronischen chiralen Modell (2006) Gebhard Zeeb In der vorliegenden Dissertation werden mit einem chiralen SU(3)-Modell die thermodynamischen Eigenschaften von stark wechselwirkender hadronischer Materie und die mikroskopischen Medium-Eigenschaften von Hadronen bei hohen Temperaturen und hohen Baryonen-Dichten untersucht. Das verwendete chirale Modell ist ein erweitertes sigma-omega-Modell in Mittlerer-Feld-Näherung (Mean-Field) mit baryonischen und mesonischen effektiven Freiheitsgraden; es basiert auf spontan gebrochener chiraler Symmetrie und Skaleninvarianz. Das Phasenübergangsverhalten des chiralen Modells wird systematisch untersucht und dabei gezeigt, dass es signifikant von den Kopplungen zusätzlicher schwererer hadronischer Freiheitsgrade ('Resonanzen') abhängt. Durch entsprechende Ankopplung des niedrigsten baryonischen Dekupletts kann ein Phasendiagramm in qualitativer Übereinstimmung mit aktuellen Vorhersagen der Gitter-QCD erreicht werden. Alternativ wird die Ankopplung einer schweren baryonischen Test-Resonanz untersucht, welche effektiv für das Spektrum der schweren hadronischen Zustände steht. Hier ergibt sich für einen bestimmten Bereich der Kopplungen sogar eine quantitative Übereinstimmung zu den Gitter-QCD-Vorhersagen bei gleichzeitig guter Beschreibung der Grundzustandseigenschaften von Kernmaterie. Für diese Zustandsgleichung werden Vorhersagen (innerhalb der Modellannahmen) zu geplanten Experimenten gemacht -- konkret wird gezeigt, dass der Phasenübergangsbereich für das CBM Experiment des geplanten Beschleunigerzentrums FAIR an der GSI Darmstadt experimentell zugänglich ist. Weiter wird das chirale Modell auf die Beschreibung von experimentellen Teilchenzahlverhältnissen (Yield-Ratios) aus Schwerionen-Kollisionen von AGS, SPS und RHIC angewendet. Studiert werden Parametersätze mit stark unterschiedlichen Phasendiagrammen aufgrund unterschiedlicher Ankopplung des baryonischen Dekupletts sowie ein ideales Hadronengas. Bei den niedrigen und mittleren Kollisionsenergien zeigt sich eine verbesserte Beschreibung durch die chiralen Parametersätze im Vergleich zum idealen Hadronengas, besonders deutlich für Parametersätze mit Phasendiagramm ähnlich der Vorhersage aus der Gitter-QCD. Die Wechselwirkung im chiralen Modell führt zu Medium-Modifikationen der chemischen Potentiale und der Hadronenmassen. Die resultierenden Ausfrierparameter mu und T sind deshalb gegenüber dem nichtwechselwirkenden Fall signifikant verändert. An den Ausfrierpunkten zeigen sich deutliche Abweichungen der effektiven Massen von den Vakuummassen (5 bis 15 %) und des effektiven baryo-chemischen Potentials vom ursprünglichen Wert (bis zu 20 %). Ferner werden universelle Kriterien für das Ausfrieren diskutiert und isentrope Expansion zu den Ausfrierpunkten untersucht, wo sich eine starke Abhängigkeit der Trajektorien von der Zustandsgleichung ergibt. Schließlich wird der Einfluss des Dilaton-Felds (Gluonkondensat) auf das Phasenübergangsverhalten bei mu=0 studiert, indem das Gluonkondensat an die Dekuplett-Baryonen gekoppelt wird. Es zeigt sich, dass dadurch eine Restauration der Skaleninvarianz im Modell möglich wird, die gleichzeitig auch eine vollständige Restauration der chiralen Symmetrie bewirkt. Die Restauration der Skaleninvarianz erfolgt erst bei Temperaturen, die oberhalb der chiralen Restauration (im nichtseltsamen Sektor) liegen. Diese Modellerweiterung ermöglicht es, zukünftig das Phasenübergangsverhalten -- Restauration von chiraler Symmetrie und Skaleninvarianz -- auch bei nichtverschwindenden Baryonendichten zu untersuchen. Die Resultate dieser Arbeit zeigen die Wichtigkeit der schweren hadronischen Zustände, der Resonanzen, für das QCD-Phasendiagramm. Für die Zukunft ist eine Ankopplung des gesamten hadronischen Massenspektrums an das Modell erstrebenswert, wie sich sowohl aus der Untersuchung der Modellerweiterung um eine Test-Resonanz als auch aus der Anwendung auf experimentelle Teilchenzahlverhältnisse ergibt. Structure of exotic nuclei and superheavy elements in meson field theory (2008) Khin Nyan Linn In this work the nuclear structure of exotic nuclei and superheavy nuclei is studied in a relativistic framework. In the relativistic mean-field (RMF) approximation, the nucleons interact with each other through the exchange of various effective mesons (scalar, vector, isovector-vector). Ground state properties of exotic nuclei and superheavy nuclei are studied in the RMF theory with the three different parameter sets (ChiM, NL3, NL-Z2). Axial deformation of nuclei within two drip lines are performed with the parameter set (ChiM). The position of drip lines are investigated with three different parameter sets (ChiM, NL3, NL-Z2) and compared with the experimental drip line nuclei. In addition, the structure of hypernuclei are studied and for a certain isotope, hyperon halo nucleus is predicted. Chiral symmetry restoration and deconfinement in neutron stars (2009) Veronica Antocheviz Dexheimer Neutron stars are very dense objects. One teaspoon of their material would have a mass of five billion tons. Their gravitational force is so strong that if an object were to fall from just one meter high it would hit the surface of the respective neutron star at two thousand kilometers per second. In such dense bodies, different particles from the ones present in atomic nuclei, the nucleons, can exist. These particles can be hyperons, that contain non-zero strangeness, or broader resonances. There can also be different states of matter inside neutron stars, such as meson condensates and if the density is height enough to deconfine the nucleons, quark matter. As new degrees of freedom appear in the system, different aspects of matter have to be taken into account. The most important of them being the restoration of the chiral symmetry. This symmetry is spontaneously broken, which is a fact related to the presence of a condensate of scalar quark-antiquark pairs, that for this reason is called chiral condensate. This condensate is present at low densities and even in vacuum. It is important to remember at this point that the modern concept of vacuum is far away from emptiness. It is full of virtual particles that are constantly created and annihilated, being their existence allowed by the uncertainty principle. At very high temperature/density, when the composite particles are dissolved into constituents, the chiral consensate vanishes and the chiral symmetry is restored. To explain how and when chiral symmetry is restored in neutron stars we use a model called non-linear sigma model. This is an effective quantum relativistic model that was developed in order to describe systems of hadrons interacting via meson exchange. The model was constructed from symmetry relations, which allow it to be chiral invariant. The first consequence of this invariance is that there are no bare mass terms in the lagrangian density, causing all, or most of the particles masses to come from the interactions with the medium. There are still other interesting features in neutron stars that cannot be found anywhere else in nature. One of them is the high isospin asymmetry. In a normal nucleus, the amount of protons and neutrons is more or less the same. In a neutron star the amount of neutrons is much higher than the protons. The resulting extra energy (called Fermi energy) increases the energy of the system, allowing the star to support more mass against gravitational collapse. As a consequence of that in early stages of the neutron star evolution, when there are still many trapped neutrinos, the proton fraction is higher than in later stages and consequently the maximum mass that the star can support against gravity is smaller. This, between many other features, shows how the microscopic phenomena of the star can reflect into the macroscopic properties. Another important property of neutron stars is charge neutrality. It is a required assumption for stability in neutron stars, but there are others. One example is chemical equilibrium. It means that the number of particles from each kind is not conserved, but they are created and annihilated through specific reactions that happen at the same rate in both directions. Although to calculate microscopic physics of neutron stars the space-time of special relativity, the Minkowski space, can be used, this is not true for the global properties of the star. In this case general relativity has to be used. The solution of Einstein's equations simplified to static, spherical and isotropic stars correspond to the configurations in which the star is in hydrostatic equilibrium. That means that the internal pressure, coming mainly from the Fermi energy of the neutrons, balances the gravity avoiding the collapse. When rotation is included the star becomes more stable, and consequently, can be more massive. The movement also makes it non-spherical, what requires the metric of the star to also be a function of the polar coordinate. Another important feature that has to be taken into account is the dragging of the local inertial frame. It generates centrifugal forces that are not originated in interactions with other bodies, but from the non-rotation of the frame of reference within which observations are made. These modifications are introduced through the Hartle's approximation that solves the problem by applying perturbation theory. In the mean field approximation, the couplings as well as the parameters of the non-linear sigma model are calibrated to reproduce massive neutron stars. The introduction of new degrees of freedom decreases the maximum mass allowed for the neutron star, as they soften the equation of state. In practice, the only baryons present in the star besides the nucleons are the Lambda and Sigma-, in the case in which the baryon octet is included, and Lambda and Delta-,0,+,++, in the case in which the baryon decuplet is included. The leptons are included to ensure charge neutrality. We choose to proceed our calculations including the baryon octet but not the decuplet, in order to avoid uncertainties in the couplings. The couplings of the hyperons were fitted to the depth of their potentials in nuclei. In this case the chiral symmetry restoration can be observed through the behavior of the related order parameter. The symmetry begins to be restored inside neutron stars and the transition is a smooth crossover. Different stages of the neutron star cooling are reproduced taking into account trapped neutrinos, finite temperature and entropy. Finite-temperature calculations include the heat bath of hadronic quasiparticles within the grand canonical potential of the system. Different schemes are considered, with constant temperature, metric dependent temperature and constant entropy. The neutrino chemical potential is introduced by fixing the lepton number in the system, that also controls the amount of electrons and protons (for charge neutrality). The balance between these two features is delicate and influenced mainly by the baryon number conservation. Isolated stars have a fixed number of baryons, which creates a link between different stages of the cooling. The maximum masses allowed in each stage of the cooling process, the one with high entropy and trapped neutrinos, the deleptonized one with high entropy, and the cold one in beta equilibrium. The cooling process is also influenced by constraints related to the rotation of the star. When rotation is included the star becomes more stable, and consequently, can be more massive. The movement also deforms it, requiring the metric of the star to include modifications that are introduced through the use of perturbation theory. The analysis of the first stages of the neutron star, when it is called proto-neutron star, gives certain constraints on the possible rotation frequencies in the colder stages. Instability windows are calculated in which the star can be stable during certain stages but collapses into black holes during the cooling process. In the last part of the work the hadronic SU(3) model is extended to include quark degrees of freedom. A new effective potential to the order parameter for deconfinement, the Polyakov loop, makes the connection between the physics at low chemical potential and hight temperature of the QCD phase diagram with the height chemical potential and low temperature part. This is done through the introduction of a chemical potential dependency on the already temperature dependent potential. Analyzing the effect of both order parameters, the chiral condensate and the Polyakov loop, we can drawn a phase diagram for symmetric as well as for star matter. The diagram contains a crossover region as well as a first order phase transition line. The new couplings and parameters of the model are chosen mainly to fit lattice QCD, including the position of the critical point. Finally, this matter containing different degrees of freedom (depending on which phase of the diagram we are) is used to calculate hybrid star properties. The O(N=2) model in polar coordinates at nonzero temperature (2009) Martin Grahl Chapter 1 contains the general background of our work. We briefly discuss important aspects of quantum chromodynamics (QCD) and introduce the concept of the chiral condensate as an order parameter for the chiral phase transition. Our focus is on the concept of universality and the arguments why the O(4) model should fall into the same universality class as the effective Lagrangian for the order parameter of (massless) two-flavor QCD. Chapter 2 pedagogically explains the CJT formalism and is concerned with the WKB method. In chapter 3 the CJT formalism is then applied to a simple Z(2) symmetric toy model featuring a one-minimum classical potential. As for all other models we are concerned with in this thesis, we study the behavior at nonzero temperature. This is done in 1+3 dimensions as well as in 1+0 dimensions. In the latter case we are able to compare the effective potential at its global minimum (which is minus the pressure) with our result from the WKB approximation. In chapter 4 this program is also carried out for the toy model with a double-well classical potential, which allows for spontaneous symmetry breaking and tunneling. Our major interest however is in the O(2) model with the fields treated as polar coordinates. This model can be regarded as the first step towards the O(4) model in four-dimensional polar coordinates. Although in principle independent, all subjects discussed in this thesis are directly related to questions arising from the investigation of this particular model. In chapter 5 we start from the generating functional in cartesian coordinates and carry out the transition to polar coordinates. Then we are concerned with the question under which circumstances it is allowed to use the same Feynman rules in polar coordinates as in cartesian coordinates. This question turns out to be non-trivial. On the basis of the common Feynman rules we apply the CJT formalism in chapter 6 to the polar O(2) model. The case of 1+0 dimensions was intended to be a toy model on the basis of which one could more easily explore the transition to polar coordinates. However, it turns out that we are faced with an additional complication in this case, the infrared divergence of thermal integrals. This problem requires special attention and motivates the explicit study of a massless field under topological constraints in chapter 8. In chapter 7 we investigate the cartesian O(2) model in 1+0 dimensions. We compare the effective potential at its global minimum calculated in the CJT formalism and via the WKB approximation. Appendix B reviews the derivation of standard thermal integrals in 1+0 and 1+3 dimensions and constitutes the basis for our CJT calculations and the discussion of infrared divergences. In chapter 9 we discuss the so-called path integral collapse and propose a solution of this problem. In chapter 10 we present our conclusions and an outlook. Since we were interested in organizing our work as pedagogical as possible within the narrow scope of a diploma thesis, we decided to make extensive use of appendices. Appendices A-H are intended for students who are not familiar with several important concepts we are concerned with. We will refer to them explicitly to establish the connection between our work and the general context in which it is settled.
{"url":"http://publikationen.stub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/referee/Stefan+Schramm","timestamp":"2014-04-19T07:31:14Z","content_type":null,"content_length":"48268","record_id":"<urn:uuid:925ee2c7-1d0a-4cae-b351-cd12ec8ae11c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Black Bear Hunt Areas and Area Descriptions Black Bear Hunting Area No. 1: That portion of Warren and Sussex counties lying within a continuous line beginning at the intersection of the Portland Bridge and the Delaware River at Columbia; then northward along the east bank of the Delaware River to the New York state line; then east along the New York state line to Rt. 519; then south along Rt. 519 to its intersection with Rt. 627; then south along Rt. 627 to its intersection with Rt. 626; then south along Rt. 626 to its intersection with Rt. 521; then southwest along Rt. 521 to its intersection with Rt. 94 in Blairstown; the southwest along Rt. 94 to the Portland Bridge, the point of beginning in Columbia. The islands of Labar, Tocks, Poxono, Depew, Namanock, Minisink and Mashipacong lying in the Delaware River are also included within this Hunting Area. Black Bear Hunting No. Area 2: That portion of Sussex, Warren and Morris counties lying within a continuous line beginning at Portland Bridge in Columbia; then northward along Rt. 94 to its intersection with Rt. 521 in Blairstown; then north along Rt. 521 to its intersection with Rt. 626; then north along Rt. 626 to its intersection with Rt. 627; then north along Rt. 627 to its intersection with Rt. 519 in Branchville; then north along Rt. 519 to the New York state line; then southeast along the New York state line to Rt. 517; then south along Rt. 517 to its intersection with Rt. 94; then south on Rt. 94 to its intersection with Rt. 23 in Hamburg Borough; then south along Rt. 23 to its intersection with Rt. 517 in Franklin; then south along Rt. 517 to its intersection with Rt. 15 in Sparta; then south along Rt. 15 to its intersection with Interstate 80 in Dover; then west along interstate 80 to its intersection with Rt. 94; then south along Rt. 94 to the intersection with the Portland Bridge and the Delaware River located in Columbia, the point of beginning. That portion of Sussex, Passaic, Morris and Bergen counties lying within a continuous line beginning at the intersection of Rt. 80 and Rt. 15 in Dover; then north along Rt. 15 to its intersection with Rt. 517 in Sparta; then north along Rt. 517 to its intersection with Rt. 23 Franklin; then north along Rt. 23 to its intersection with Rt 94 in Hamburg Borough; then north along Rt. 94 to its intersection with Rt 517; then north along Rt 517 to the New York state line; then east along the New York state line to its intersection with Rt. 287; then south along Rt. 287 to its intersection with Rt. 80; then west along Rt. 80 to its intersection with Rt. 15 the point of beginning in Dover. Black Bear Hunting Area No. 4: That portion of Sussex, Warren, Morris, Somerset and Hunterdon counties lying within a continuous line beginning at the intersection of Route 78 and the Delaware River; then north along the east bank of the Delaware River to the Portland Bridge at Columbia; then northeast along Rt. 94 to its intersection with Rt. 80; then east along Rt. 80 to its intersection with Rt. 287; then southwest along Rt. 287 to its intersection with Rt. 78; then west along Rt. 78 to the Delaware River the point of beginning.
{"url":"http://www.state.nj.us/dep/fgw/bearzonemap.htm","timestamp":"2014-04-20T09:20:48Z","content_type":null,"content_length":"18611","record_id":"<urn:uuid:82ccab12-e419-4708-b37c-d68707cb8d00>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
propagating flow Mapping space General abstract Differential topology Stable homotopy theory Smooth mapping spaces are very nice examples of infinite dimensional smooth manifolds. One reason for their nice behaviour is that it is often possible to use the structure of the target space (a finite dimensional manifold) to study the mapping space. This provides a route from finite dimensional differential topology to that of infinite dimensions. It is usually the case that one needs stronger structure on the target space to do this than would be needed just for the finite dimensional result, but as the target space is a finite dimensional manifold that stronger structure is usually there anyway. An example of this is the concept of a tubular neighbourhood of an embedding. As shown at A Not-So-Nice Submanifold, not all embeddings of infinite dimensional manifolds - even of mapping spaces - admit tubular neighbourhoods. However, if the embedding is of a coincidental nature then it is possible to use structure on the target manifold to find tubular neighbourhoods in infinite dimensions. By “coincidental nature”, we mean that the submanifold is defined by taking those smooth maps with some condition imposed that says that certain “coincidences” happen. That is, that the values of the map at certain points coincide, or are constrained to lie in some submanifold of the target space. Let us consider, for an example, the simplest case: the inclusion of based loops in smooth loops. So let $M$ be a smooth finite dimensional manifold, $L M$ its smooth free loop space and $\Omega M$ its smooth based loop space. Let $x_0$ be the basepoint of $M$, and $1$ the basepoint of the circle, $S^1$. To get a tubular neighbourhood of this, we need to start by finding a suitable neighbourhood. To do this, we choose a chart $\phi \colon \mathbb{R}^n \to U \subseteq M$ at $x_0$ (so that $\phi(0) = x_0$). We define the neighbourhood of $\Omega M$ to be those loops $\alpha \in L M$ such that $\alpha(1) \in U$. To make this a tubular neighbourhood, we need to find a way to project these loops on to $\Omega M$. That is, given a loop $\alpha \colon S^1 \to M$ with $\alpha(1) \ in U$ we need to produce a loop $\pi(\alpha) \colon S^1 \to M$ with $\pi(\alpha)(1) = x_0$. It is obvious how to move the basepoint: simply scale it to $x_0$ using the scalar multiplication coming from the chart. However, that just moves $\alpha(1)$. We need to move the whole of $\alpha$ with it - or at least, drag that part of it near $1$. We cannot assume that all of $\alpha$ lies in $U$. The solution is to define a diffeomorphism $\Psi$ of $M$ with the property that $\Psi(\alpha(1)) = x_0$. Then we can define $\pi(\alpha) = \Psi \circ \alpha$. As we vary $\alpha$, so we also vary $\ alpha(1)$, and thus must vary the diffeomorphism $\Psi$. The trick is to choose $\Psi$ so that it varies smoothly with $\alpha(1)$. In the above example, this is straightforward because everything happens in the chart, and thus effectively on $\mathbb{R}^n$. There are more complicated examples. One such is the basic construction in string topology. Within $L M \times L M$ we consider those pairs of loops $(\alpha,\beta)$ such that $\alpha(1) = \beta(1)$. Then for $(\alpha,\beta)$ with $\alpha(1)$near to $\beta(1)$ we want to project the pair $(\alpha,\beta)$ to a pair where these values coincide. Now although the points $\alpha(1)$ and $\beta(1)$ are near (in some vague not-yet-defined sense), they may, as a pair, roam all over the manifold. Thus local solutions are not applicable here. Nonetheless, the question still comes down to the ability to choose diffeomorphisms consistently according to some conditions. One way to choose these diffeomorphisms is to use the notion of a propagating flow. The term was coined by Veronique Godin in (Godin, 07) and is based on an idea due to Andrew Stacey in (Stacey, 05), though there are probably antecedents. The basic idea is contained in the following definition. Let $\pi \colon E \to M$ be a vector bundle over a smooth manifold. Everything here is assumed to be finite dimensional. We consider the space $Diff_{fc}(E)$ of diffeomorphisms of $E$ which preserve the fibres of $E$ and have compact support. A propagating flow is a smooth map $\phi \colon E \to Diff_{fc}(E)$ with the property that $\phi(v) : v \mapsto \pi(v)$ where we identify $\pi(v)$ with the zero vector in the corresponding fibre of $E$. The original definition of a propagating flow in Stacey, 05 and Godin, 07 used the exponentiation map from vector fields to diffeomorphisms to get the actual diffeomorphism. The idea of that was that it is much easier to build a vector field with the required properties than a diffeomorphism because vector fields are more easily manipulated. But the diffeomorphisms used here are simple enough that they can be constructed directly. This makes it easier to generalise to situations where the exponentiation map cannot be assumed to exist. Let us consider the linear situation first. We start with a vector space, $V$, and a vector $v \in V$. We want to define a diffeomorphism $\phi_v \colon V \to V$ with the property that $\phi_v(v) = 0$. This is simple enough: $\phi_1(w) = w - v.$ The problem with this is that we do not want just any diffeomorphism. We want one that is the identity “near infinity”. Let us start by fixing the diffeomorphism in the $v$-direction. Choose a smooth function $\sigma \colon \mathbb{R} \to [-1,0]$ with the following properties: 1. $\sigma(t) = \begin{cases} 0 & t \le -2 \\ - 1 & 0 \le t \le 1 \\ 0 & 2 \le t \end{cases}$ 2. $\sigma'(t) \gt -1$ (note that this is possible as we have given ourselves an interval of length $2$ to get from $0$ at $t = -2$ to $-1$ at $t = 0$. We are actually interested in the function $t \mapsto t + h \sigma(t)$ for $|h| \le 1$. The first property of $\sigma$ tells us that this agrees with the identity outside $[-2,2]$ and that it is $t \ mapsto t - h$ on $[0,1]$. The second property tells us that its derivative is strictly bigger than $1 - h$ and so, for $h \le 1$, is a diffeomorphism. On $V$, we choose a “dual functional” to $v$. That is, we choose some continuous linear functional $f \colon V \to \mathbb{R}$ with $f(v) = 1$ (we need to assume that $v e 0$ for this part, we shall correct for that later). Then we define a diffeomorphism on $V$ by: $\phi_2(w) = w + \sigma(f(w))v = (w - f(w)v) + (f(w) + \sigma(f(w)))v.$ The second expression shows that what we have done is used $f$ to identify $V$ with $\ker f \oplus \mathbb{R}$ and then applied $t \mapsto t + \sigma(t)$ on the $\mathbb{R}$-factor. This fixes our diffeomorphism in the $v$-direction. To fix it in the other direction, we choose a smooth function $\widehat{\tau} \colon \ker f \to [0,1]$ which is $0$ “near infinity” and $1$ at $0$. Then we mix this in to the above as follows: $\phi_3(w) = w + \widehat{\tau}(w - f(w)v) \sigma(f(w))v = (w - f(w)v) + (f(w) + \widehat{\tau}(w - f(w)v) \sigma(f(w)) v.$ As before, the second expression makes it clear that this is a diffeomorphism. When $\widehat{\tau}(w - f(w)v) = 0$ then it is the identity. This will do, but there are a few too many choices in the above. To simplify these, we assume that $V$ admits a smooth inner product, $g$. Let us write $q$ for the square of the associated norm. Then we can choose $f$ to be evaluation of the inner product at $v$ and $\tau$ to be composition of the inner product with a suitable bump function on $\mathbb{R}$. We shall write $\tau$ for that bump function. To make the final formula cleaner, we assume that $\tau(t) = 1$ for $|t| \le 2$. This leads us to: \begin{aligned} \phi_v(w) &= w + \tau\left(\frac{q(w)}{1 + q(v)}\right) \sigma\left(\frac{g(w,v)}{q(v)}\right) v &= w - \frac{g(w,v)}{q(v)}v + \left(\frac{g(w,v)}{q(v)} + \tau\left(\frac{q(w)}{q(v)}\ right) \sigma\left(\frac{g(w,v)}{q(v)}\right) \right)v. \end{aligned} As written, this makes sense only for $v e 0$. But it extends to the identity at $v = 0$. To see that this extension is smooth, we need merely point out that as $v \to 0$, so $q(v) \to 0$ and thus $\ sigma\left(\frac{g(w,v)}{q(v)}\right) \to 0$. The second expression again shows that, for $v e 0$, this is a diffeomorphism. This, then, is our required linear diffeomorphism. For $w$ with $q(w) \ge 2(1 + q(v))$ it is the identity, and thus will extend “at infinity”, whilst $\phi_v(v) = 0$. The next step is to extend this to a bundle over a manifold. So let $\pi \colon E \to M$ be a smooth vector bundle over a smooth manifold. We wish to extend the above formula so that it is valid for $E$. That is, $v$ is an arbitrary point in $E$ and we wish to define $\phi_v \colon E \to E$ such that $\phi_v(v) = 0_{\pi(v)}$. So also we must take $w$ to be an arbitrary point in $E$. And thereby lies the problem: in the formula $v$ and $w$ interact but they may be in different fibres. The solution is to extend one of them to a vector field. Since $v$ is static in the formula for $\phi_v$, that is the obvious choice. We should also note that the explicit formula requires the existence of a smooth orthogonal structure on $E$. Thus we want to define a smooth function $X \colon E \to \Gamma(E)$ with the property that $X_v(\pi(v)) = v$. This is easy if $E$ is trivial, and the condition is convex, so a standard partition of unity argument will suffice. Specifically, let $\{\rho_\lambda : \lambda \in \Lambda\}$ be a partition of unity on $M$ with the property that for each $\lambda \in \lambda$ there is an open set $U_\ lambda$ containing the support of $\rho_\lambda$ over which $E$ is trivial. Let $E_\lambda$ denote the restriction of $E$ to $U_\lambda$ and let $\psi_\lambda \colon E_\lambda \to U_\lambda \times V$ be a trivialisation. Let $p_\lambda \colon E_\lambda \to V$ be the composition of this trivialisation with the projection on to the $V$-factor. We define $X_\lambda \colon E_\lambda \to \Gamma(E_\ lambda)$ by $X_{\lambda,v}(p) = \psi_\lambda^{-1}(p,p_\lambda(v)).$ Now we define $X \colon E \to \Gamma(E)$ by $X_v(p) = \sum_\lambda \rho_\lambda(p) X_{\lambda,v}(p).$ Notice that $X_v(p) = 0$ if $p$ is “sufficiently far” from $\pi(v)$. Thus our diffeomorphism $\phi_v$ at $w$, with $p = \pi(w)$, is: $\phi_v(w) = \begin{cases} w + \tau\left(\frac{q(w)}{1 + q(X_v(p))}\right) \sigma \left(\frac{g(w,X_v(p))}{q(X_v(p))}\right) X_v(p) & X_v(p) e 0 \\ w & X_v(p) = 0 \end{cases}.$ The idea goes back to The term propagating flow was coined in and used for the construction of umkehr maps to construct string topology operations.
{"url":"http://ncatlab.org/nlab/show/propagating+flow","timestamp":"2014-04-18T18:13:55Z","content_type":null,"content_length":"71123","record_id":"<urn:uuid:9c0a9387-1b94-45d5-a652-0216fcba98af>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Opinion dynamics and learning in social networks,” Dyn , 2011 "... To be submitted on November 2011 A monopolist offers a product to a market of consumers with heterogeneous quality preferences. Although initially uninformed about the product quality, they learn by observing past purchase decisions and reviews of other consumers. Our goal is to analyze the social l ..." Cited by 2 (0 self) Add to MetaCart To be submitted on November 2011 A monopolist offers a product to a market of consumers with heterogeneous quality preferences. Although initially uninformed about the product quality, they learn by observing past purchase decisions and reviews of other consumers. Our goal is to analyze the social learning mechanism and its effect on the seller’s pricing decision. This analysis borrows from the literature on social learning and on pricing and revenue management. Consumers follow a naive decision rule and, under some conditions, eventually learn the product’s quality. Using mean-field approximation, the dynamics of this learning process are characterized for markets with high demand intensity. The relationship between the price and the speed of learning depends on the heterogeneity of quality preferences. Two pricing strategies are studied: a static price and a single price change. Properties of the optimal prices are derived. Numerical experiments suggest that pricing strategies that account for social learning may increase revenues considerably relative to strategies that do not. "... Abstract. Game theory studies situations in which strategic players can modify the state of a given system, due to the absence of a central authority. Solution concepts, such as Nash equilibrium, are defined to predict the outcome of such situations. In the spirit of the field, we study the computat ..." Add to MetaCart Abstract. Game theory studies situations in which strategic players can modify the state of a given system, due to the absence of a central authority. Solution concepts, such as Nash equilibrium, are defined to predict the outcome of such situations. In the spirit of the field, we study the computation of solution concepts by means of decentralized dynamics. These are algorithms in which players move in turns to improve their own utility and the hope is that the system reaches an “equilibrium” quickly. We study these dynamics for the class of opinion games, recently introduced by [1]. These are games, important in economics and sociology, that model the formation of an opinion in a social network. We study best-response dynamics and show that the convergence to Nash equilibria is polynomial in the number of players. We also study a noisy version of best-response dynamics, called logit dynamics, and prove a host of results about its convergence rate as the noise in the system varies. To get these results, we use a variety of techniques developed to bound the mixing time of Markov chains, including coupling, spectral characterizations and bottleneck ratio. 1 "... Abstract: The formation of opinions in a large population is governed by endogenous (human interactions) and exogenous (media influence) factors. In the analysis of opinion evolution in a large population, decision making rules can be approximated with non-Bayesian ”rule of thumb” methods. This pape ..." Add to MetaCart Abstract: The formation of opinions in a large population is governed by endogenous (human interactions) and exogenous (media influence) factors. In the analysis of opinion evolution in a large population, decision making rules can be approximated with non-Bayesian ”rule of thumb” methods. This paper focuses on an Eulerian bounded-confidence model of opinion dynamics with a potential time-varying input. First, we prove some properties of this system’s dynamics with time-varying input. Second, we derive a simple sufficient condition for opinion consensus, and prove the convergence of the population’s distribution with no input to a sum of Dirac Delta functions. Finally, we define an input’s attraction range, and for a normally distributed input and uniformly distributed initial population, we conjecture that the length of attraction range is an increasing affine function of population’s confidence bound and input’s variance. , 2013 "... We investigate the role of manipulation in a model of opinion formation where agents have opinions about some common question of interest. Agents repeatedly communicate with their neighbors in the social network, can exert some effort to manipulate the trust of others, and update their opinions taki ..." Add to MetaCart We investigate the role of manipulation in a model of opinion formation where agents have opinions about some common question of interest. Agents repeatedly communicate with their neighbors in the social network, can exert some effort to manipulate the trust of others, and update their opinions taking weighted averages of neighbors ’ opinions. The incentives to manipulate are given by the agents’ preferences. We show that manipulation can modify the trust structure and lead to a connected society, and thus, make the society reaching a consensus. Manipulation fosters opinion leadership, but the manipulated agent may even gain influence on the long-run opinions. In sufficiently homophilic societies, manipulation accelerates (slows down) convergence if it decreases (increases) homophily. Finally, we investigate the tension between information aggregation and spread of misinformation. We find that if the ability of the manipulating agent is weak and the agents underselling (overselling) their information gain (lose) overall influence, then manipulation reduces misinformation and agents "... Abstract—In this letter, by introducing the strategic decision making into the Chinese restaurant process, we propose a new game, called Chinese Restaurant Game, as a new general framework for analyzing the individual decision problem in a network with negative network externality. Our analysis show ..." Add to MetaCart Abstract—In this letter, by introducing the strategic decision making into the Chinese restaurant process, we propose a new game, called Chinese Restaurant Game, as a new general framework for analyzing the individual decision problem in a network with negative network externality. Our analysis shows that a balance in utilities among the customers in the game will eventually be achieved under the strategic decision making process. The equilibrium grouping is defined to describe the predicted outcome of the proposed game, which can be found by a simple algorithm. The simulation results confirm that the rational customers in Chinese restaurant game automatically achieve a balance in loading in order to reduce the impact from the negative network externality. Index Terms—Chinese restaurant game, game theory, Nash equilibrium, network externality. I.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=13188298","timestamp":"2014-04-18T02:09:58Z","content_type":null,"content_length":"23933","record_id":"<urn:uuid:29cbd87b-8e91-496a-8cfa-479723c7e2ae>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Unformatted Document Excerpt Tsinghua University Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support. Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support. Introduction Contents 1 1.1 Machine Perception . . . . . . . . . . . . . 1.2 An Example . . . . . . . . . . . . . . . . . . 1.2.1 Related fields . . . . . . . . . . . . . 1.3 The Sub-problems of Pattern Classification 1.3.1 Feature Extraction . . . . . . . . . . 1.3.2 Noise . . . . . . . . . . . . . . . . . 1.3.3 Overfitting . . . . . . . . . . . . . . 1.3.4 Model Selection . . . . . . . . . . . . 1.3.5 Prior Knowledge . . . . . . . . . . . 1.3.6 Missing Features . . . . . . . . . . . 1.3.7 Mereology . . . . . . . . . . . . . . . 1.3.8 Segmentation . . . . . . . . . . . . . 1.3.9 Context . . . . . . . . . . . . . . . . 1.3.10 Invariances . . . . . . . . . . . . . . 1.3.11 Evidence Pooling . . . . . . . . . . . 1.3.12 Costs and Risks . . . . . . . . . . . 1.3.13 Computational Complexity . . . . . 1.4 Learning and Adaptation . . . . . . . . . . 1.4.1 Supervised Learning . . . . . . . . . 1.4.2 Unsupervised Learning . . . . . . . . 1.4.3 Reinforcement Learning . . . . . . . 1.5 Conclusion . . . . . . . . . . . . . . . . . . Summary by Chapters . . . . . . . . . . . . . . . Bibliographical and Historical Remarks . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . Index . . . . . . . . . . . . . . . . . . . . . . . . 3 3 3 11 11 11 12 12 12 12 13 13 13 14 14 15 15 16 16 16 17 17 17 17 19 19 22 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 2 CONTENTS Chapter 1 Introduction face, read T he ease with which we recognize akeys inunderstand spoken words, decidehandwritten characters, identify our car our pocket by feel, and whether an apple is ripe by its smell belies the astoundingly complex processes that underlie these acts of pattern recognition. Pattern recognition -- the act of taking in raw data and taking an action based on the "category" of the pattern -- has been crucial for our survival, and over the past tens of millions of years we have evolved highly sophisticated neural and cognitive systems for such tasks. 1.1 Machine Perception It is natural that we should seek to design and build machines that can recognize patterns. From automated speech recognition, fingerprint identification, optical character recognition, DNA sequence identification and much more, it is clear that reliable, accurate pattern recognition by machine would be immensely useful. Moreover, in solving the myriad problems required to build such systems, we gain deeper understanding and appreciation for pattern recognition systems in the natural world -- most particularly in humans. For some applications, such as speech and visual recognition, our design efforts may in fact be influenced by knowledge of how these are solved in nature, both in the algorithms we employ and the design of special purpose hardware. 1.2 An Example To illustrate the complexity of some of the types of problems involved, let us consider the following imaginary and somewhat fanciful example. Suppose that a fish packing plant wants to automate the process of sorting incoming fish on a conveyor belt according to species. As a pilot project it is decided to try to separate sea bass from salmon using optical sensing. We set up a camera, take some sample images and begin to note some physical differences between the two types of fish -- length, lightness, width, number and shape of fins, position of the mouth, and so on -- and these suggest features to explore for use in our classifier. We also notice noise or variations in the 3 4 CHAPTER 1. INTRODUCTION images -- variations in lighting, position of the fish on the conveyor, even "static" due to the electronics of the camera itself. Given that there truly are differences between the population of sea bass and that model of salmon, we view them as having different models -- different descriptions, which are typically mathematical in form. The overarching goal and approach in pattern classification is to hypothesize the class of these models, process the sensed data to eliminate noise (not due to the models), and for any sensed pattern choose the model that corresponds best. Any techniques that further this aim should be in the conceptual toolbox of the designer of pattern recognition systems. Our prototype system to perform this very specific task might well have the form shown in Fig. 1.1. First the camera captures an image of the fish. Next, the camera's presignals are preprocessed to simplify subsequent operations without loosing relevant processing information. In particular, we might use a segmentation operation in which the images of different fish are somehow isolated from one another and from the background. The segmentation information from a single fish is then sent to a feature extractor, whose purpose is to reduce the data by measuring certain "features" or "properties." These features feature extraction (or, more precisely, the values of these features) are then passed to a classifier that evaluates the evidence presented and makes a final decision as to the species. The preprocessor might automatically adjust for average light level, or threshold the image to remove the background of the conveyor belt, and so forth. For the moment let us pass over how the images of the fish might be segmented and consider how the feature extractor and classifier might be designed. Suppose somebody at the fish plant tells us that a sea bass is generally longer than a salmon. These, then, give us our tentative models for the fish: sea bass have some typical length, and this is greater than that for salmon. Then length becomes an obvious feature, and we might attempt to classify the fish merely by seeing whether or not the length l of a fish exceeds some critical value l . To choose l we could obtain some design or training training samples of the different types of fish, (somehow) make length measurements, samples and inspect the results. Suppose that we do this, and obtain the histograms shown in Fig. 1.2. These disappointing histograms bear out the statement that sea bass are somewhat longer than salmon, on average, but it is clear that this single criterion is quite poor; no matter how we choose l , we cannot reliably separate sea bass from salmon by length alone. Discouraged, but undeterred by these unpromising results, we try another feature -- the average lightness of the fish scales. Now we are very careful to eliminate variations in illumination, since they can only obscure the models and corrupt our new classifier. The resulting histograms, shown in Fig. 1.3, are much more satisfactory -- the classes are much better separated. So far we have tacitly assumed that the consequences of our actions are equally costly: deciding the fish was a sea bass when in fact it was a salmon was just as cost undesirable as the converse. Such a symmetry in the cost is often, but not invariably the case. For instance, as a fish packing company we may know that our customers easily accept occasional pieces of tasty salmon in their cans labeled "sea bass," but they object vigorously if a piece of sea bass appears in their cans labeled "salmon." If we want to stay in business, we should adjust our decision boundary to avoid antagonizing our customers, even if it means that more salmon makes its way into the cans of sea bass. In this case, then, we should move our decision boundary x to smaller values of lightness, thereby reducing the number of sea bass that are classified as salmon (Fig. 1.3). The more our customers object to getting sea bass with their 1.2. AN EXAMPLE 5 Figure 1.1: The objects to be classified are first sensed by a transducer (camera), whose signals are preprocessed, then the features extracted and finally the classification emitted (here either "salmon" or "sea bass"). Although the information flow is often chosen to be from the source to the classifier ("bottom-up"), some systems employ "top-down" flow as well, in which earlier levels of processing can be altered based on the tentative or preliminary response in later levels (gray arrows). Yet others combine two or more stages into a unified step, such as simultaneous segmentation and feature extraction. salmon -- i.e., the more costly this type of error -- the lower we should set the decision threshold x in Fig. 1.3. Such considerations suggest that there is an overall single cost associated with our decision, and our true task is to make a decision rule (i.e., set a decision boundary) so as to minimize such a cost. This is the central task of decision theory of which pattern classification is perhaps the most important subfield. Even if we know the costs associated with our decisions and choose the optimal decision boundary x , we may be dissatisfied with the resulting performance. Our first impulse might be to seek yet a different feature on which to separate the fish. Let us assume, though, that no other single visual feature yields better performance than that based on lightness. To improve recognition, then, we must resort to the use decision theory 6 CHAPTER 1. INTRODUCTION salmon Count 22 20 18 16 12 10 8 6 4 2 0 sea bass Length 5 10 15 l* 20 25 Figure 1.2: Histograms for the length feature for the two categories. No single threshold value l (decision boundary) will serve to unambiguously discriminate between the two categories; using length alone, we will have some errors. The value l marked will lead to the smallest number of errors, on average. Count 14 12 10 8 6 4 2 0 2 4 x* 6 Lightness 8 10 salmon sea bass Figure 1.3: Histograms for the lightness feature for the two categories. No single threshold value x (decision boundary) will serve to unambiguously discriminate between the two categories; using lightness alone, we will have some errors. The value x marked will lead to the smallest number of errors, on average. 1.2. AN EXAMPLE 7 Width 22 21 20 19 18 17 16 15 14 salmon sea bass Lightness 2 4 6 8 10 Figure 1.4: The two features of lightness and width for sea bass and salmon. The dark line might serve as a decision boundary of our classifier. Overall classification error on the data shown is lower than if we use only one feature as in Fig. 1.3, but there will still be some errors. of more than one feature at a time. In our search for other features, we might try to capitalize on the observation that sea bass are typically wider than salmon. Now we have two features for classifying fish -- the lightness x1 and the width x2 . If we ignore how these features might be measured in practice, we realize that the feature extractor has thus reduced the image of each fish to a point or feature vector x in a two-dimensional feature space, where x1 x2 x= . Our problem now is to partition the feature space into two regions, where for all patterns in one region we will call the fish a sea bass, and all points in the other we call it a salmon. Suppose that we measure the feature vectors for our samples and obtain the scattering of points shown in Fig. 1.4. This plot suggests the following rule for separating the fish: Classify the fish as sea bass if its feature vector falls above the decision boundary shown, and as salmon otherwise. This rule appears to do a good job of separating our samples and suggests that perhaps incorporating yet more features would be desirable. Besides the lightness and width of the fish, we might include some shape parameter, such as the vertex angle of the dorsal fin, or the placement of the eyes (as expressed as a proportion of the mouth-to-tail distance), and so on. How do we know beforehand which of these features will work best? Some features might be redundant: for instance if the eye color of all fish correlated perfectly with width, then classification performance need not be improved if we also include eye color as a feature. Even if the difficulty or computational cost in attaining more features is of no concern, might we ever have too many features? Suppose that other features are too expensive or expensive to measure, or provide little improvement (or possibly even degrade the performance) in the approach described above, and that we are forced to make our decision based on the two features in Fig. 1.4. If our models were extremely complicated, our classifier would have a decision boundary more complex than the simple straight line. In that case all the decision boundary 8 CHAPTER 1. INTRODUCTION Width 22 21 20 19 18 17 16 15 14 salmon sea bass ? Lightness 2 4 6 8 10 Figure 1.5: Overly complex models for the fish will lead to decision boundaries that are complicated. While such a decision may lead to perfect classification of our training samples, it would lead to poor performance on future patterns. The novel test point marked ? is evidently most likely a salmon, whereas the complex decision boundary shown leads it to be misclassified as a sea bass. training patterns would be separated perfectly, as shown in Fig. 1.5. With such a "solution," though, our satisfaction would be premature because the central aim of designing a classifier is to suggest actions when presented with novel patterns, i.e., fish not yet seen. This is the issue of generalization. It is unlikely that the complex decision boundary in Fig. 1.5 would provide good generalization, since it seems to be "tuned" to the particular training samples, rather than some underlying characteristics or true model of all the sea bass and salmon that will have to be separated. Naturally, one approach would be to get more training samples for obtaining a better estimate of the true underlying characteristics, for instance the probability distributions of the categories. In most pattern recognition problems, however, the amount of such data we can obtain easily is often quite limited. Even with a vast amount of training data in a continuous feature space though, if we followed the approach in Fig. 1.5 our classifier would give a horrendously complicated decision boundary -- one that would be unlikely to do well on novel patterns. Rather, then, we might seek to "simplify" the recognizer, motivated by a belief that the underlying models will not require a decision boundary that is as complex as that in Fig. 1.5. Indeed, we might be satisfied with the slightly poorer performance on the training samples if it means that our classifier will have better performance on novel patterns. But if designing a very complex recognizer is unlikely to give good generalization, precisely how should we quantify and favor simpler classifiers? How would our system automatically determine that the simple curve in Fig. 1.6 is preferable to the manifestly simpler straight line in Fig. 1.4 or the complicated boundary in Fig. 1.5? Assuming that we somehow manage to optimize this tradeoff, can we then predict how well our system will generalize to new patterns? These are some of the central problems in statistical pattern recognition. For the same incoming patterns, we might need to use a drastically different cost generalization The philosophical underpinnings of this approach derive from William of Occam (1284-1347?), who advocated favoring simpler explanations over those that are needlessly complicated -- Entia non sunt multiplicanda praeter necessitatem ("Entities are not to be multiplied without necessity"). Decisions based on overly complex models often lead to lower accuracy of the classifier. 1.2. AN EXAMPLE 9 Width 22 21 20 19 18 17 16 15 14 salmon sea bass Lightness 2 4 6 8 10 Figure 1.6: The decision boundary shown might represent the optimal tradeoff between performance on the training set and simplicity of classifier. function, and this will lead to different actions altogether. We might, for instance, wish instead to separate the fish based on their sex -- all females (of either species) from all males if we wish to sell roe. Alternatively, we might wish to cull the damaged fish (to prepare separately for cat food), and so on. Different decision tasks may require features and yield boundaries quite different from those useful for our original categorization problem. This makes it quite clear that our decisions are fundamentally task or cost specific, and that creating a single general purpose artificial pattern recognition device -- i.e., one capable of acting accurately based on a wide variety of tasks -- is a profoundly difficult challenge. This, too, should give us added appreciation of the ability of humans to switch rapidly and fluidly between pattern recognition tasks. Since classification is, at base, the task of recovering the model that generated the patterns, different classification techniques are useful depending on the type of candidate models themselves. In statistical pattern recognition we focus on the statistical properties of the patterns (generally expressed in probability densities), and this will command most of our attention in this book. Here the model for a pattern may be a single specific set of features, though the actual pattern sensed has been corrupted by some form of random noise. Occasionally it is claimed that neural pattern recognition (or neural network pattern classification) should be considered its own discipline, but despite its somewhat different intellectual pedigree, we will consider it a close descendant of statistical pattern recognition, for reasons that will become clear. If instead the model consists of some set of crisp logical rules, then we employ the methods of syntactic pattern recognition, where rules or grammars describe our decision. For example we might wish to classify an English sentence as grammatical or not, and here statistical descriptions (word frequencies, word correlations, etc.) are inapapropriate. It was necessary in our fish example to choose our features carefully, and hence achieve a representation (as in Fig. 1.6) that enabled reasonably successful pattern classification. A central aspect in virtually every pattern recognition problem is that of achieving such a "good" representation, one in which the structural relationships among the components is simply and naturally revealed, and one in which the true (unknown) model of the patterns can be expressed. In some cases patterns should be represented as vectors of real-valued numbers, in others ordered lists of attributes, in yet others descriptions of parts and their relations, and so forth. We seek a represen- 10 CHAPTER 1. INTRODUCTION tation in which the patterns that lead to the same action are somehow "close" to one another, yet "far" from those that demand a different action. The extent to which we create or learn a proper representation and how we quantify near and far apart will determine the success of our pattern classifier. A number of additional characteristics are desirable for the representation. We might wish to favor a small number of features, which might lead to simpler decision regions, and a classifier easier to train. We might also wish to have features that are robust, i.e., relatively insensitive to noise or other errors. In practical applications we may need the classifier to act quickly, or use few electronic components, memory or processing steps. A central technique, when we have insufficient training data, is to incorporate knowledge of the problem domain. Indeed the less the training data the more important is such knowledge, for instance how the patterns themselves were produced. One method that takes this notion to its logical extreme is that of analysis by synthesis, where in the ideal case one has a model of how each pattern is generated. Consider speech recognition. Amidst the manifest acoustic variability among the possible "dee"s that might be uttered by different people, one thing they have in common is that they were all produced by lowering the jaw slightly, opening the mouth, placing the tongue tip against the roof of the mouth after a certain delay, and so on. We might assume that "all" the acoustic variation is due to the happenstance of whether the talker is male or female, old or young, with different overall pitches, and so forth. At some deep level, such a "physiological" model (or so-called "motor" model) for production of the utterances is appropriate, and different (say) from that for "doo" and indeed all other utterances. If this underlying model of production can be determined from the sound (and that is a very big if ), then we can classify the utterance by how it was produced. That is to say, the production representation may be the "best" representation for classification. Our pattern recognition systems should then analyze (and hence classify) the input pattern based on how one would have to synthesize that pattern. The trick is, of course, to recover the generating parameters from the sensed pattern. Consider the difficulty in making a recognizer of all types of chairs -- standard office chair, contemporary living room chair, beanbag chair, and so forth -- based on an image. Given the astounding variety in the number of legs, material, shape, and so on, we might despair of ever finding a representation that reveals the unity within the class of chair. Perhaps the only such unifying aspect of chairs is functional: a chair is a stable artifact that supports a human sitter, including back support. Thus we might try to deduce such functional properties from the image, and the property "can support a human sitter" is very indirectly related to the orientation of the larger surfaces, and would need to be answered in the affirmative even for a beanbag chair. Of course, this requires some reasoning about the properties and naturally touches upon computer vision rather than pattern recognition proper. Without going to such extremes, many real world pattern recognition systems seek to incorporate at least some knowledge about the method of production of the patterns or their functional use in order to insure a good representation, though of course the goal of the representation is classification, not reproduction. For instance, in optical character recognition (OCR) one might confidently assume that handwritten characters are written as a sequence of strokes, and first try to recover a stroke representation from the sensed image, and then deduce the character from the identified strokes. analysis by synthesis 1.3. THE SUB-PROBLEMS OF PATTERN CLASSIFICATION 11 1.2.1 Related fields Pattern classification differs from classical statistical hypothesis testing, wherein the sensed data are used to decide whether or not to reject a null hypothesis in favor of some alternative hypothesis. Roughly speaking, if the probability of obtaining the data given some null hypothesis falls below a "significance" threshold, we reject the null hypothesis in favor of the alternative. For typical values of this criterion, there is a strong bias or predilection in favor of the null hypothesis; even though the alternate hypothesis may be more probable, we might not be able to reject the null hypothesis. Hypothesis testing is often used to determine whether a drug is effective, where the null hypothesis is that it has no effect. Hypothesis testing might be used to determine whether the fish on the conveyor belt belong to a single class (the null hypothesis) or from two classes (the alternative). In contrast, given some data, pattern classification seeks to find the most probable hypothesis from a set of hypotheses -- "this fish is probably a salmon." Pattern classification differs, too, from image processing. In image processing, the input is an image and the output is an image. Image processing steps often include rotation, contrast enhancement, and other transformations which preserve all the original information. Feature extraction, such as finding the peaks and valleys of the intensity, lose information (but hopefully preserve everything relevant to the task at hand.) As just described, feature extraction takes in a pattern and produces feature values. The number of features is virtually always chosen to be fewer than the total necessary to describe the complete target of interest, and this leads to a loss in information. In acts of associative memory, the system takes in a pattern and emits another pattern which is representative of a general group of patterns. It thus reduces the information somewhat, but rarely to the extent that pattern classification does. In short, because of the crucial role of a decision in pattern recognition information, it is fundamentally an information reduction process. The classification step represents an even more radical loss of information, reducing the original several thousand bits representing all the color of each of several thousand pixels down to just a few bits representing the chosen category (a single bit in our fish example.) image processing associative memory 1.3 The Sub-problems of Pattern Classification We have alluded to some of the issues in pattern classification and we now turn to a more explicit list of them. In practice, these typically require the bulk of the research and development effort. Many are domain or problem specific, and their solution will depend upon the knowledge and insights of the designer. Nevertheless, a few are of sufficient generality, difficulty, and interest that they warrant explicit consideration. 1.3.1 Feature Extraction The conceptual boundary between feature extraction and classification proper is somewhat arbitrary: an ideal feature extractor would yield a representation that makes the job of the classifier trivial; conversely, an omnipotent classifier would not need the help of a sophisticated feature extractor. The distinction is forced upon us for practical, rather than theoretical reasons. Generally speaking, the task of feature extraction is much more problem and domain dependent than is classification proper, and thus requires knowledge of the domain. A good feature extractor for sorting fish would 12 CHAPTER 1. INTRODUCTION surely be of little use for identifying fingerprints, or classifying photomicrographs of blood cells. How do we know which features are most promising? Are there ways to automatically learn which features are best for the classifier? How many shall we use? 1.3.2 Noise The lighting of the fish may vary, there could be shadows cast by neighboring equipment, the conveyor belt might shake -- all reducing the reliability of the feature values actually measured. We define noise very general terms: any property of the sensed pattern due not to the true underlying model but instead to randomness in the world or the sensors. All non-trivial decision and pattern recognition problems involve noise in some form. In some cases it is due to the transduction in the signal and we may consign to our preprocessor the role of cleaning up the signal, as for instance visual noise in our video camera viewing the fish. An important problem is knowing somehow whether the variation in some signal is noise or instead to complex underlying models of the fish. How then can we use this information to improve our classifier? 1.3.3 Overfitting In going from Fig 1.4 to Fig. 1.5 in our fish classification problem, we were, implicitly, using a more complex model of sea bass and of salmon. That is, we were adjusting the complexity of our classifier. While an overly complex model may allow perfect classification of the training samples, it is unlikely to give good classification of novel patterns -- a situation known as overfitting. One of the most important areas of research in statistical pattern classification is determining how to adjust the complexity of the model -- not so simple that it cannot explain the differences between the categories, yet not so complex as to give poor classification on novel patterns. Are there principled methods for finding the best (intermediate) complexity for a classifier? 1.3.4 Model Selection We might have been unsatisfied with the performance of our fish classifier in Figs. 1.4 & 1.5, and thus jumped to an entirely different class of model, for instance one based on some function of the number and position of the fins, the color of the eyes, the weight, shape of the mouth, and so on. How do we know when a hypothesized model differs significantly from the true model underlying our patterns, and thus a new model is needed? In short, how are we to know to reject a class of models and try another one? Are we as designers reduced to random and tedious trial and error in model selection, never really knowing whether we can expect improved performance? Or might there be principled methods for knowing when to jettison one class of models and invoke another? Can we automate the process? 1.3.5 Prior Knowledge In one limited sense, we have already seen how prior knowledge -- about the lightness of the different fish categories helped in the design of a classifier by suggesting a promising feature. Incorporating prior knowledge can be far more subtle and difficult. In some applications the knowledge ultimately derives from information about the production of the patterns, as we saw in analysis-by-synthesis. In others the knowledge may be about the form of the underlying categories, or specific attributes of the patterns, such as the fact that a face has two eyes, one nose, and so on. 1.3. THE SUB-PROBLEMS OF PATTERN CLASSIFICATION 13 1.3.6 Missing Features occlusion Suppose that during classification, the value of one of the features cannot be determined, for example the width of the fish because of occlusion by another fish (i.e., the other fish is in the way). How should the categorizer compensate? Since our two-feature recognizer never had a single-variable threshold value x determined in anticipation of the possible absence of a feature (cf., Fig. 1.3), how shall it make the best decision using only the feature present? The naive method, of merely assuming that the value of the missing feature is zero or the average of the values for the training patterns, is provably non-optimal. Likewise we occasionally have missing features during the creation or learning in our recognizer. How should we train a classifier or use one when some features are missing? 1.3.7 Mereology We effortlessly read a simple word such as BEATS. But consider this: Why didn't we read instead other words that are perfectly good subsets of the full pattern, such as BE, BEAT, EAT, AT, and EATS? Why don't they enter our minds, unless explicitly brought to our attention? Or when we saw the B why didn't we read a P or an I, which are "there" within the B? Conversely, how is it that we can read the two unsegmented words in POLOPONY -- without placing the entire input into a single word category? This is the problem of subsets and supersets -- formally part of mereology, the study of part/whole relationships. It is closely related to that of prior knowledge and segmentation. In short, how do we recognize or group together the "proper" number of elements -- neither too few nor too many? It appears as though the best classifiers try to incorporate as much of the input into the categorization as "makes sense," but not too much. How can this be done? 1.3.8 Segmentation In our fish example, we have tacitly assumed that the fish were isolated, separate on the conveyor belt. In practice, they would often be abutting or overlapping, and our system would have to determine where one fish ends and the next begins -- the individual patterns have to be segmented. If we have already recognized the fish then it would be easier to segment them. But how can we segment the images before they have been categorized or categorize them before they have been segmented? It seems we need a way to know when we have switched from one model to another, or to know when we just have background or "no category." How can this be done? Segmentation is one of the deepest problems in automated speech recognition. We might seek to recognize the individual sounds (e.g., phonemes, such as "ss," "k," ...), and then put them together to determine the word. But consider two nonsense words, "sklee" and "skloo." Speak them aloud and notice that for "skloo" you push your lips forward (so-called "rounding" in anticipation of the upcoming "oo") before you utter the "ss." Such rounding influences the sound of the "ss," lowering the frequency spectrum compared to the "ss" sound in "sklee" -- a phenomenon known as anticipatory coarticulation. Thus, the "oo" phoneme reveals its presence in the "ss" earlier than the "k" and "l" which nominally occur before the "oo" itself! How do we segment the "oo" phoneme from the others when they are so manifestly intermingled? Or should we even try? Perhaps we are focusing on groupings of the wrong size, and that the most useful unit for recognition is somewhat larger, as we saw in subsets and 14 CHAPTER 1. INTRODUCTION supersets (Sect. 1.3.7). A related problem occurs in connected cursive handwritten character recognition: How do we know where one character "ends" and the next one "begins"? 1.3.9 Context We might be able to use context -- input-dependent information other than from the target pattern itself -- to improve our recognizer. For instance, it might be known for our fish packing plant that if we are getting a sequence of salmon, that it is highly likely that the next fish will be a salmon (since it probably comes from a boat that just returned from a fishing area rich in salmon). Thus, if after a long series of salmon our recognizer detects an ambiguous pattern (i.e., one very close to the nominal decision boundary), it may nevertheless be best to categorize it too as a salmon. We shall see how such a simple correlation among patterns -- the most elementary form of context -- might be used to improve recognition. But how, precisely, should we incorporate such information? Context can be highly complex and abstract. The utterance "jeetyet?" may seem nonsensical, unless you hear it spoken by a friend in the context of the cafeteria at lunchtime -- "did you eat yet?" How can such a visual and temporal context influence your speech recognition? 1.3.10 Invariances In seeking to achieve an optimal representation for a particular pattern classification task, we confront the problem of invariances. In our fish example, the absolute position on the conveyor belt is irrelevant to the category and thus our representation should also be insensitive to absolute position of the fish. Here we seek a representation that is invariant to the transformation of translation (in either horizontal or vertical directions). Likewise, in a speech recognition problem, it might be required only that we be able to distinguish between utterances regardless of the particular moment they were uttered; here the "translation" invariance we must ensure is in time. The "model parameters" describing the orientation of our fish on the conveyor belt are horrendously complicated -- due as they are to the sloshing of water, the bumping of neighboring fish, the shape of the fish net, etc. -- and thus we give up hope of ever trying to use them. These parameters are irrelevant to the model parameters that interest us anyway, i.e., the ones associated with the differences between the fish categories. Thus here we try to build a classifier that is invariant to transformations such as rotation. orientation The orientation of the fish on the conveyor belt is irrelevant to its category. Here the transformation of concern is a two-dimensional rotation about the camera's line of sight. A more general invariance would be for rotations about an arbitrary line in three dimensions. The image of even such a "simple" object as a coffee cup undergoes radical variation as the cup is rotated to an arbitrary angle -- the handle may become hidden, the bottom of the inside volume come into view, the circular lip appear oval or a straight line or even obscured, and so forth. How might we insure that our pattern recognizer is invariant to such complex changes? size The overall size of an image may be irrelevant for categorization. Such differences might be due to variation in the range to the object; alternatively we may be genuinely unconcerned with differences between sizes -- a young, small salmon is still a salmon. 1.3. THE SUB-PROBLEMS OF PATTERN CLASSIFICATION 15 For patterns that have inherent temporal variation, we may want our recognizer to be insensitive to the rate at which the pattern evolves. Thus a slow hand wave and a fast hand wave may be considered as equivalent. Rate variation is a deep problem in speech recognition, of course; not only do different individuals talk at different rates, but even a single talker may vary in rate, causing the speech signal to change in complex ways. Likewise, cursive handwriting varies in complex ways as the writer speeds up -- the placement of dots on the i's, and cross bars on the t's and f's, are the first casualties of rate increase, while the appearance of l's and e's are relatively inviolate. How can we make a recognizer that changes its representations for some categories differently from that for others under such rate variation? A large number of highly complex transformations arise in pattern recognition, and many are domain specific. We might wish to make our handwritten optical character recognizer insensitive to the overall thickness of the pen line, for instance. Far more severe are transformations such as non-rigid deformations that arise in threedimensional object recognition, such as the radical variation in the image of your hand as you grasp an object or snap your fingers. Similarly, variations in illumination or the complex effects of cast shadows may need to be taken into account. The symmetries just described are continuous -- the pattern can be translated, rotated, sped up, or deformed by an arbitrary amount. In some pattern recognition applications other -- discrete -- symmetries are relevant, such as flips left-to-right, or top-to-bottom. In all of these invariances the problem arises: How do we determine whether an invariance is present? How do we efficiently incorporate such knowledge into our recognizer? rate deformation discrete symmetry 1.3.11 Evidence Pooling In our fish example we saw how using multiple features could lead to improved recognition. We might imagine that we could do better if we had several component classifiers. If these categorizers agree on a particular pattern, there is no difficulty. But suppose they disagree. How should a "super" classifier pool the evidence from the component recognizers to achieve the best decision? Imagine calling in ten experts for determining if a particular fish is diseased or not. While nine agree that the fish is healthy, one expert does not. Who is right? It may be that the lone dissenter is the only one familiar with the particular very rare symptoms in the fish, and is in fact correct. How would the "super" categorizer know when to base a decision on a minority opinion, even from an expert in one small domain who is not well qualified to judge throughout a broad range of problems? 1.3.12 Costs and Risks We should realize that a classifier rarely exists in a vacuum. Instead, it is generally to be used to recommend actions (put this fish in this bucket, put that fish in that bucket), each action having an associated cost or risk. Conceptually, the simplest such risk is the classification error: what percentage of new patterns are called the wrong category. However the notion of risk is far more general, as we shall see. We often design our classifier to recommend actions that minimize some total expected cost or risk. Thus, in some sense, the notion of category itself derives from the cost or task. How do we incorporate knowledge about such risks and how will they affect our classification decision? 16 CHAPTER 1. INTRODUCTION Finally, can we estimate the total risk and thus tell whether our classifier is acceptable even before we field it? Can we estimate the lowest possible risk of any classifier, to see how close ours meets this ideal, or whether the problem is simply too hard overall?ory label to improve the classifier. For instance, in optical character recognition, the input might be an image of a character, the actual output of the classifier the category label "R," and the desired output a "B." In reinforcement learning or learning with a critic, no desired category signal is given; instead, the only teaching feedback is that the tentative category is right or wrong. This is analogous to a critic who merely states that something is right or wrong, but does not say specifically how it is wrong. (Thus only binary feedback is given to the classifier; reinforcement learning also describes the case where a single scalar signal, say some number between 0 and 1, is given by the teacher.) In pattern classification, it is most common that such reinforcement is binary -- either the tentative decision is correct or it is not. (Of course, if our problem involves just two categories and equal costs for errors, then learning with a critic is equivalent to standard supervised learning.) How can the system learn which are important from such non-specific feedback? critic 1.5 Conclusion At this point the reader may be overwhelmed by the number, complexity and magnitude of these sub-problems. Further, these sub-problems are rarely addressed in isolation and they are invariably interrelated. Thus for instance in seeking to reduce the complexity of our classifier, we might affect its ability to deal with invariance. We point out, though, that the good news is at least three-fold: 1) there is an "existence proof" that many of these problems can indeed be solved -- as demonstrated by humans and other biological systems, 2) mathematical theories solving some of these problems have in fact been discovered, and finally 3) there remain many fascinating unsolved problems providing opportunities for progress. Summary by Chapters The overall organization of this book is to address first those cases where a great deal of information about the models is known (such as the probability densities, category labels, ...) and to move, chapter by chapter, toward problems where the form of the 18 CHAPTER 1. INTRODUCTION distributions are unknown and even the category membership of training patterns is unknown. We begin in Chap. ?? (Bayes decision theory) by considering the ideal case in which the probability structure underlying the categories is known perfectly. While this sort of situation rarely occurs in practice, it permits us to determine the optimal (Bayes) classifier against which we can compare all other methods. Moreover in some problems it enables us to predict the error we will get when we generalize to novel patterns. In Chap. ?? (Maximum Likelihood and Bayesian Parameter Estimation) we address the case when the full probability structure underlying the categories is not known, but the general forms of their distributions are -- i.e., the models. Thus the uncertainty about a probability distribution is represented by the values of some unknown parameters, and we seek to determine these parameters to attain the best categorization. In Chap. ?? (Nonparametric techniques) we move yet further from the Bayesian ideal, and assume that we have no prior parameterized knowledge about the underlying probability structure; in essence our classification will be based on information provided by training samples alone. Classic techniques such as the nearest-neighbor algorithm and potential functions play an important role here. We then in Chap. ?? (Linear Discriminant Functions) return somewhat toward the general approach of parameter estimation. We shall assume that the so-called "discriminant functions" are of a very particular form -- viz., linear -- in order to derive a class of incremental training rules. Next, in Chap. ?? (Nonlinear Discriminants and Neural Networks) we see how some of the ideas from such linear discriminants can be extended to a class of very powerful algorithms such as backpropagation and others for multilayer neural networks; these neural techniques have a range of useful properties that have made them a mainstay in contemporary pattern recognition research. In Chap. ?? (Stochastic Methods) we discuss simulated annealing by the Boltzmann learning algorithm and other stochastic methods. We explore the behavior of such algorithms with regard to the matter of local minima that can plague other neural methods. Chapter ?? (Non-metric Methods) moves beyond models that are statistical in nature to ones that can be best described by (logical) rules. Here we discuss tree-based algorithms such as CART (which can also be applied to statistical data) and syntactic based methods, such as grammar based, which are based on crisp rules. Chapter ?? (Theory of Learning) is both the most important chapter and the most difficult one in the book. Some of the results described there, such as the notion of capacity, degrees of freedom, the relationship between expected error and training set size, and computational complexity are subtle but nevertheless crucial both theoretically and practically. In some sense, the other chapters can only be fully understood (or used) in light of the results presented here; you cannot expect to solve important pattern classification problems without using the material from this chapter. We conclude in Chap. ?? (Unsupervised Learning and Clustering), by addressing the case when input training patterns are not labeled, and that our recognizer must determine the cluster structure. We also treat a related problem, that of learning with a critic, in which the teacher provides only a single bit of information during the presentation of a training pattern -- "yes," that the classification provided by the recognizer is correct, or "no," it isn't. Here algorithms for reinforcement learning will be presented. 1.5. BIBLIOGRAPHICAL AND HISTORICAL REMARKS 19 Bibliographical and Historical Remarks Classification is among the first crucial steps in making sense of the blooming buzzing confusion of sensory data that intelligent systems confront. In the western world, the foundations of pattern recognition can be traced to Plato [2], later extended by Aristotle [1], who distinguished between an "essential property" (which would be shared by all members in a class or "natural kind" as he put it) from an "accidental property" (which could differ among members in the class). Pattern recognition can be cast as the problem of finding such essential properties of a category. It has been a central theme in the discipline of philosophical epistemology, the study of the nature of knowledge. A more modern treatment of some philosophical problems of pattern recognition, relating to the technical matter in the current book can be found in [22, 4, 18]. In the eastern world, the first Zen patriarch, Bodhidharma, would point at things and demand students to answer "What is that?" as a way of confronting the deepest issues in mind, the identity of objects, and the nature of classification and decision. A delightful and particularly insightful book on the foundations of artificial intelligence, including pattern recognition, is [9]. Early technical treatments by Minsky [14] and Rosenfeld [16] are still valuable, as are a number of overviews and reference books [5]. The modern literature on decision theory and pattern recognition is now overwhelming, and comprises dozens of journals, thousands of books and conference proceedings and innumerable articles; it continues to grow rapidly. While some disciplines such as statistics [7], machine learning [17] and neural networks [8], expand the foundations of pattern recognition, others, such as computer vision [6, 19] and speech recognition [15] rely on it heavily. Perceptual Psychology, Cognitive Science [12], Psychobiology [21] and Neuroscience [10] analyze how pattern recognition is achieved in humans and other animals. The extreme view that everything in human cognition -- including rule-following and logic -- can be reduced to pattern recognition is presented in [13]. Pattern recognition techniques have been applied in virtually every scientific and technical discipline. 20 CHAPTER 1. i |j )P (j |x). (11) risk decision rule In decision-theoretic terminology, an expected loss is called a risk, and R(i |x) is called the conditional risk. Whenever we encounter a particular observation x, we can minimize our expected loss by selecting the action that minimizes the conditional risk. We shall now show that this Bayes decision procedure actually provides the optimal performance on an overall risk. Stated formally, our problem is to find a decision rule against P (j ) that minimizes the overall risk. A general decision rule is a function (x) that tells us which action to take for every possible observation. To be more specific, for every x the decision function (x) assumes one of the a values 1 , ..., a . The overall risk R is the expected loss associated with a given decision rule. Since R(i |x) is the conditional risk associated with action i , and since the decision rule specifies the action, the overall risk is given by R= R((x)|x)p(x) dx, (12) where dx is our notation for a d-space volume element, and where the integral extends over the entire feature space. Clearly, if (x) is chosen so that R(i (x)) is as small as possible for every x, then the overall risk will be minimized. This justifies the following statement of the Bayes decision rule: To minimize the overall risk, compute the conditional risk c R(i |x) = j=1 (i |j )P (j |x) (13) Bayes risk for i = 1,...,a and select the action i for which R(i |x) is minimum. The resulting minimum overall risk is called the Bayes risk, denoted R , and is the best performance that can be achieved. 2.2.1 Two-Category Classification Let us consider these results when applied to the special case of two-category classification problems. Here action 1 corresponds to deciding that the true state of nature is 1 , and action 2 corresponds to deciding that it is 2 . For notational simplicity, let ij = (i |j ) be the loss incurred for deciding i when the true state of nature is j . If we write out the conditional risk given by Eq. 13, we obtain R(1 |x) R(2 |x) = 11 P (1 |x) + 12 P (2 |x) = 21 P (1 |x) + 22 P (2 |x). and (14) There are a variety of ways of expressing the minimum-risk decision rule, each having its own minor advantages. The fundamental rule is to decide 1 if R(1 |x) < R(2 |x). In terms of the posterior probabilities, we decide 1 if (21 - 11 )P (1 |x) > (12 - 22 )P (2 |x). (15) Note that if more than one action minimizes R(|x), it does not matter which of these actions is taken, and any convenient tie-breaking rule can be used. 2.3. MINIMUM-ERROR-RATE CLASSIFICATION 9 Ordinarily, the loss incurred for making an error is greater than the loss incurred for being correct, and both of the factors 21 - 11 and 12 - 22 are positive. Thus in practice, our decision is generally determined by the more likely state of nature, although we must scale the posterior probabilities by the loss differences. By employing Bayes' formula, we can replace the posterior probabilities by the prior probabilities and the conditional densities. This results in the equivalent rule, to decide 1 if (21 - 11 )p(x|1 )P (1 ) > (12 - 22 )p(x|2 )P (2 ), (16) and 2 otherwise. Another alternative, which follows at once under the reasonable assumption that 21 > 11 , is to decide 1 if p(x|1 ) 12 - 22 P (2 ) > . p(x|2 ) 21 - 11 P (1 ) (17) This form of the decision rule focuses on the x-dependence of the probability densities. We can consider p(x|j ) a function of j (i.e., the likelihood function), and then form the likelihood ratio p(x|1 )/p(x| 2 ). Thus the Bayes decision rule can be interpreted as calling for deciding 1 if the likelihood ratio exceeds a threshold value that is independent of the observation x. likelihood ratio 2.3 Minimum-Error-Rate Classification In classification problems, each state of nature is usually associated with a different one of the c classes, and the action i is usually interpreted as the decision that the true state of nature is i . If action i is taken and the true state of nature is j , then the decision is correct if i = j, and in error if i = j. If errors are to be avoided, it is natural to seek a decision rule that minimizes the probability of error, i.e., the error rate. The loss function of interest for this case is hence the so-called symmetrical or zero-one loss function, (i |j ) = 0 1 i=j i=j i, j = 1, ..., c. (18) zero-one loss This loss function assigns no loss to a correct decision, and assigns a unit loss to any error; thus, all errors are equally costly. The risk corresponding to this loss function is precisely the average probability of error, since the conditional risk is c R(i |x) = j=1 (i |j )P (j |x) P (j |x) j=i = = 1 - P (i |x) (19) We note that other loss functions, such as quadratic and linear-difference, find greater use in regression tasks, where there is a natural ordering on the predictions and we can meaningfully penalize predictions that are "more wrong" than others. 10 CHAPTER 2. BAYESIAN DECISION THEORY and P (i |x) is the conditional probability that action i is correct. The Bayes decision rule to minimize risk calls for selecting the action that minimizes the conditional risk. Thus, to minimize the average probability of error, we should select the i that maximizes the posterior probability P (i |x). In other words, for minimum error rate: Decide i if P (i |x) > P (j |x) for all j = i. (20) This is the same rule as in Eq. 6. We saw in Fig. 2.2 some class-conditional probability densities and the posterior probabilities; Fig. 2.3 shows the likelihood ratio p(x|1 )/p(x|2 ) for the same case. In general, this ratio can range between zero and infinity. The threshold value a marked is from the same prior probabilities but with zero-one loss function. Notice that this leads to the same decision boundaries as in Fig. 2.2, as it must. If we penalize mistakes in classifying 1 patterns as 2 more than the converse (i.e., 21 > 12 ), then Eq. 17 leads to the threshold b marked. Note that the range of x values for which we classify a pattern as 1 gets smaller, as it should. p(x|1) p(x|2) b a x R2 R1 R2 R1 Figure 2.3: The likelihood ratio p(x|1 )/p(x|2 ) for the distributions shown in Fig. 2.1. If we employ a zero-one or classification loss, our decision boundaries are determined by the threshold a . If our loss function penalizes miscategorizing 2 as 1 patterns more than the converse, (i.e., 12 > 21 ), we get the larger threshold b , and hence R1 becomes smaller. 2.3.1 *Minimax Criterion Sometimes we must design our classifier to perform well over a range of prior probabilities. For instance, in our fish categorization problem we can imagine that whereas the physical properties of lightness and width of each type of fish remain constant, the prior probabilities might vary widely and in an unpredictable way, or alternatively we want to use the classifier in a different plant where we do not know the prior probabilities. A reasonable approach is then to design our classifier so that the worst overall risk for any value of the priors is as small as possible -- that is, minimize the maximum possible overall risk. 2.3. MINIMUM-ERROR-RATE CLASSIFICATION 11 In order to understand this, we let R1 denote that (as yet unknown) region in feature space where the classifier decides 1 and likewise for R2 and 2 , and then write our overall risk Eq. 12 in terms of conditional risks: R = R1 [11 P (1 ) p(x|1 ) + 12 P (2 ) p(x|2 )] dx [21 P (1 ) p(x|1 ) + 22 P (2 ) p(x|2 )] dx. R2 + (21) We use the fact that P (2 ) = 1 - P (1 ) and that to rewrite the risk as: R1 p(x|1 ) dx = 1 - p(x|1 ) dx R2 = Rmm , minimax risk R(P (1 )) = 22 + (12 - 22 ) R1 p(x|2 ) dx (22) + P (1 ) (11 - 22 ) - (21 - 11 ) p(x|1 ) dx - (12 - 22 ) p(x|2 ) dx . R2 = R1 0 for minimax solution This equation shows that once the decision boundary is set (i.e., R1 and R2 determined), the overall risk is linear in P (1 ). If we can find a boundary such that the constant of proportionality is 0, then the risk is independent of priors. This is the minimax solution, and the minimax risk, Rmm , can be read from Eq. 22: minimax risk Rmm = 22 + (12 - 22 ) R1 p(x|2 ) dx p(x|1 ) dx. (23) = 11 + (21 - 11 ) R2 Figure 2.4 illustrates the approach. Briefly stated, we search for the prior for which the Bayes risk is maximum, the corresponding decision boundary gives the minimax solution. The value of the minimax risk, Rmm , is hence equal to the worst Bayes risk. In practice, finding the decision boundary for minimax risk may be difficult, particularly when distributions are complicated. Nevertheless, in some cases the boundary can be determined analytically (Problem 3). The minimax criterion finds greater use in game theory then it does in traditional pattern recognition. In game theory, you have a hostile opponent who can be expected to take an action maximally detrimental to you. Thus it makes great sense for you to take an action (e.g., make a classification) where your costs -- due to your opponent's subsequent actions -- are minimized. 12 CHAPTER 2. BAYESIAN DECISION THEORY P (error) .4 .4 .3 .3 .2 .2 .1 .1 P(1) 0 0.2 0.4 0.6 0.8 1 Figure 2.4: The curve at the bottom shows the minimum (Bayes) error as a function of prior probability P (1 ) in a two-category classification problem of fixed distributions. For each value of the priors (e.g., P (1 ) = 0.25) there is a corresponding optimal decision boundary and associated Bayes error rate. For any (fixed) such boundary, if the priors are then changed, the probability of error will change as a linear function of P (1 ) (shown by the dashed line). The maximum such error will occur at an extreme value of the prior, here at P (1 ) = 1. To minimize the maximum of such error, we should design our decision boundary for the maximum Bayes error (here P (1 ) = 0.6), and thus the error will not change as a function of prior, as shown by the solid red horizontal line. 2.3.2 *Neyman-Pearson Criterion In some problems, we may wish to minimize the overall risk subject to a constraint; for instance, we might wish to minimize the total risk subject to the constraint R(i |x) dx < constant for some particular i. Such a constraint might arise when there is a fixed resource that accompanies one particular action i , or when we must not misclassify pattern from a particular state of nature i at more than some limited frequency. For instance, in our fish example, there might be some government regulation that we must not misclassify more than 1% of salmon as sea bass. We might then seek a decision that minimizes the chance of classifying a sea bass as a salmon subject to this condition. We generally satisfy such a Neyman-Pearson criterion by adjusting decision boundaries numerically. However, for Gaussian and some other distributions, NeymanPearson solutions can be found analytically (Problems 5 & 6). We shall have cause to mention Neyman-Pearson criteria again in Sect. 2.8.3 on operating characteristics. 2.4. CLASSIFIERS, DISCRIMINANTS AND DECISION SURFACES 13 2.4 2.4.1 Classifiers, Discriminant Functions and Decision Surfaces The Multi-Category Case There are many different ways to represent pattern classifiers. One of the most useful is in terms of a set of discriminant functions gi (x), i = 1, ..., c. The classifier is said to assign a feature vector x to class i if gi (x) > gj (x) for all j = i. (24) Thus, the classifier is viewed as a network or machine that computes c discriminant functions and selects the category corresponding to the largest discriminant. A network representation of a classifier is illustrated in Fig. 2.5. Action (e.g., classification) Costs Discriminant functions g1(x) g2(x) ... gc(x) Input x1 x2 x3 ... xd Figure 2.5: The functional structure of a general statistical pattern classifier which includes d inputs and c discriminant functions gi (x). A subsequent step determines which of the discriminant values is the maximum, and categorizes the input pattern accordingly. The arrows show the direction of the flow of information, though frequently the arrows are omitted when the direction of flow is self-evident. A Bayes classifier is easily and naturally represented in this way. For the generion of the discriminability (from an arbitrary x ) allows us to calculate the Bayes error rate -- the most important property of any 2.8. *ERROR BOUNDS FOR NORMAL DENSITIES 35 classifier. If the actual error rate differs from the Bayes rate inferred in this way, we should alter the threshold x accordingly. It is a simple matter to generalize the above discussion and apply it to two categories having arbitrary multidimensional distributions, Gaussian or not. Suppose we have two distributions p(x|1 ) and p(x|2 ) which overlap, and thus have non-zero Bayes classification error. Just as we saw above, any pattern actually from 2 could be properly classified as 2 (a "hit") or misclassified as 1 (a "false alarm"). Unlike the one-dimensional case above, however, there may be many decision boundaries that give a particular hit rate, each with a different false alarm rate. Clearly here we cannot determine a fundamental measure of discriminability without knowing more about the underlying decision rule than just the hit and false alarm rates. In a rarely attainable ideal, we can imagine that our measured hit and false alarm rates are optimal, for example that of all the decision rules giving the measured hit rate, the rule that is actually used is the one having the minimum false alarm rate. If we constructed a multidimensional classifier -- regardless of the distributions used -- we might try to characterize the problem in this way, though it would probably require great computational resources to search for such optimal hit and false alarm rates. In practice, instead we eschew optimality, and simply vary a single parameter controlling the decision rule and plot the resulting hit and false alarm rates -- a curve called merely an operating characteristic. Such a control parameter might be the bias or nonlinearity in a discriminant function. It is traditional to choose a control parameter that can yield, at extreme values, either a vanishing false alarm or a vanishing hit rate, just as can be achieved with a very large or a very small x in an ROC curve. We should note that since the distributions can be arbitrary, the operating characteristic need not be symmetric (Fig. 2.21); in rare cases it need not even be concave down at all points. hit operating characteristic p(x|i) 1 2 p(x < x* | x 1 1) false alarm x p(x < x* | x 2) 1 Figure 2.21: In a general operating characteristic curve, the abscissa is the probability of false alarm, P (x R2 |x 1 ), and the ordinate the probability of hit, P (x R2 |x 2 ). As illustrated here, operating characteristic curves are generally not symmetric, as shown at the right. Classifier operating curves are of value for problems where the loss matrix ij might be changed. If the operating characteristic has been determined as a function of the control parameter ahead of time, it is a simple matter, when faced with a new loss function, to deduce the control parameter setting that will minimize the expected risk (Problem 38). 36 CHAPTER 2. BAYESIAN DECISION THEORY 2.9 Bayes Decision Theory -- Discrete Features Until now we have assumed that the feature vector x could be any point in a ddimensional Euclidean space, Rd . However, in many practical applications the components of x are binary-, ternary-, or higher integer valued, so that x can assume only one of m discrete values v1 , ..., vm . In such cases, the probability density function p(x|j ) becomes singular; integrals of the form p(x|j ) dx must then be replaced by corresponding sums, such as P (x|j ), x (77) (78) where we understand that the summation is over all values of x in the discrete distribution. Bayes' formula then involves probabilities, rather than probability densities: P (j |x) = where c P (x|j )P (j ) , P (x) (79) P (x) = j=1 P (x|j )P (j ). (80) The definition of the conditional risk R(|x) is unchanged, and the fundamental Bayes decision rule remains the same: To minimize the overall risk, select the action i for which R(i |x) is minimum, or stated formally, = arg max R(i |x). i (81) The basic rule tat we must integrate (marginalize) the posterior probability over the bad features. Finally we use the Bayes decision rule on the resulting posterior probabilities, i.e., choose i if P (i |xg ) > P (j |xg ) for all i and j. We shall consider the Expectation-Maximization (EM) algorithm in Chap. ??, which addresses a related problem involving missing features. 2.10.2 Noisy Features It is a simple matter to generalize the results of Eq. 91 to the case where a particular feature has been corrupted by statistically independent noise. For instance, in our fish classification example, we might have a reliable measurement of the length, while variability of the light source might degrade the measurement of the lightness. We assume we have uncorrupted (good) features xg , as before, and a noise model, expressed as p(xb |xt ). Here we let xt denote the true value of the observed xb features, i.e., without the noise present; that is, the xb are observed instead of the true xt . We assume that if xt were known, xb would be independent of i and xg . From such an assumption we get: P (i |xg , xb ) = p(i , xg , xb , xt ) dxt . p(xg , xb ) (92) Of course, to tell the classifier that a feature value is missing, the feature extractor must be designed to provide more than just a numerical value for each feature. 2.11. *COMPOUND BAYES DECISION THEORY AND CONTEXT 41 Now p(i , xg , xb , xt ) = P (i |xg , xb , xt )p(xg , xb , xt ), but by our independence assumption, if we know xt , then xb does not provide any additional information about i . Thus we have P (i |xg , xb , xt ) = P (i |xg , xt ). Similarly, we have p(xg , xb , xt ) = p(xb |xg , xt )p(xg , xt ), and p(xb |xg , xt ) = p(xb |xt ). We put these together and thereby obtain P (i |xg , xt )p(xg , xt )p(xb |xt ) dxt p(xg , xt )p(xb |xt ) dxt gi (x)p(x)p(xb |xt ) dxt , p(x)p(xb |xt ) dxt (93) P (i |xg , xb ) = = which we use as discriminant functions for classification in the manner dictated by Bayes. Equation 93 differs from Eq. 91 solely by the fact that the integral is weighted by the noise model. In the extreme case where p(xb |xt ) is uniform over the entire space (and hence provides no predictive information for categorization), the equation reduces to the case of missing features -- a satisfying result. 2.11 Compound Bayesian Decision Theory and Context Let us reconsider our introductory example of designing a classifier to sort two types of fish. Our original assumption was that the sequence of types of fish was so unpredictable that the state of nature looked like a random variable. Without abandoning this attitude, let us consider the possibility that the consecutive states of nature might not be statistically independent. We should be able to exploit such statistical dependence to gain improved performance. This is one example of the use of context to aid decision making. The way in which we exploit such context information is somewhat different when we can wait for n fish to emerge and then make all n decisions jointly than when we must decide as each fish emerges. The first problem is a compound decision problem, and the second is a sequential compound decision problem. The former case is conceptually simpler, and is the one we shall examine here. To state the general problem, let = ((1), ..., (n))t be a vector denoting the n states of nature, with (i) taking on one of the c values 1 , ..., c . Let P () be the prior probability for the n states of nature. Let X = (x1 , ..., xn ) be a matrix giving the n observed feature vectors, with xi being the feature vector obtained when the state of nature was (i). Finally, let p(X|) be the conditional probability density function for X given the true set of states of nature . Using this notation we see that the posterior probability of is given by P (|X) = p(X|)P () = p(X) p(X|)P () . p(X|)P () (94) In general, one can define a loss matrix for the compound decision problem and seek a decision rule that minimizes the compound risk. The development of this theory parallels our discussion for the simple decision problem, and concludes that the optimal procedure is to minimize the compound conditional risk. In particular, if there is no loss for being correct, and if all errors are equally costly, then the procedure 42 CHAPTER 2. BAYESIAN DECISION THEORY reduces to computing P (|X) for all and selecting the for which this posterior probability is maximum. While this provides the theoretical solution, in practice the computation of P (|X) can easily prove to be an enormous task. If each component (i) can have one of c values, there are cn possible values of to consider. Some simplification can be obtained if the distribution of the feature vector xi depends only on the corresponding state of nature (i), not on the values of the other feature vectors or the other states of nature. In this case the joint density p(X|) is merely the product of the component densities p(xi |(i)): n p(X|) = i=1 p(xi |(i)). (95) While this simplifies the problem of computing p(X|), there is still the problem of computing the prior probabilities P (). This joint probability is central to the compound Bayes decision problem, since it reflects the interdependence of the states of nature. Thus it is unacceptable to simplify the problem of calculating P () by assuming that the states of nature are independent. In addition, practical applications usually require some method of avoiding the computation of P (|X) for all cn possible values of . We shall find some solutions to this problem in Chap. ??. Summary The basic ideas underlying Bayes decision theory are very simple. To minimize the overall risk, one should always choose the action that minimizes the conditional risk R(|x). In particular, to minimize the probability of error in a classification problem, one should always choose the state of nature that maximizes the posterior probability P (j |x). Bayes' formula allows us to calculate such probabilities from the prior probabilities P (j ) and the conditional densities p(x|j ). If there are different penalties for misclassifying patterns from i as if from j , the posteriors must be first weighted according to such penalties before taking action. If the underlying distributions are multivariate Gaussian, the decision boundaries will be hyperquadrics, whose form and position depends upon the prior probabilities, means and covariances of the distributions in question. The true expected error can be bounded above by the Chernoff and computationally simpler Bhattacharyya bounds. If an input (test) pattern has missing or corrupted features, we should form the marginal distributions by integrating over such features, and then using Bayes decision procedure on the resulting distributions. Receiver operating characteristic curves describe the inherent and unchangeable properties of a classifier and can be used, for example, to determine the Bayes rate. For many pattern classification applications, the chief problem in applying these results is that the conditional densities p(x|j ) are not known. In some cases we may know the form these densities assume, but may not know characterizing parameter values. The classic case occurs when the densities are known to be, or can assumed to be multivariate normal, but the values of the mean vectors and the covariance matrices are not known. More commonly even less is known about the conditional densities, and procedures that are less sensitive to specific assumptions about the densities must be used. Most of the remainder of this book will be devoted to various procedures that have been developed to attack such problems. 2.11. BIBLIOGRAPHICAL AND HISTORICAL REMARKS 43 Bibliographical and Historical Remarks The power, coherence and elegance of Bayesian theory in pattern recognition make it among the most beautiful formalisms in science. Its foundations go back to Bayes himself, of course [3], but he stated his theorem (Eq. 1) for the case of uniform priors. It was Laplace [25] who first stated it for the more general (but discrete) case. There are several modern and clear descriptions of the ideas -- in pattern recognition and general decision theory -- that can be recommended [7, 6, 26, 15, 13, 20, 27]. Since Bayesian theory rests on an axiomatic foundation, it is guaranteed to have quantitative coherence; some other classification methods do not. Wald presents a non-Bayesian perspective on these topics that can be highly recommended [36], and the philosophical foundations of Bayesian and non-Bayesian methods are explored in [16]. Neyman and Pearson provided some of the most important pioneering work in hypothesis testing, and used the probability of error as the criterion [28]; Wald extended this work by introducing the notions of loss and risk [35]. Certain conceptual problems have always attended the use of loss functions and prior probabilities. In fact, the Bayesian approach is avoided by many statisticians, partly because there are problems for which a decision is made only once, and partly because there may be no reasonable way to determine the prior probabilities. Neither of these difficulties seems to present a serious drawback in typical pattern recognition applications: for nearly all critical pattern recognition problems we will have training data; we will use our recognizer more than once. For these reasons, the Bayesian approach will continue to be of great use in pattern recognition. The single most important drawback of the Bayesian approach is its assumption that the true probability distributions for the problem can be represented by the classifier, for instance the true distributions are Gaussian, and all that is unknown are parameters describing these Gaussians. This is a strong assumption that is not always fulfilled and we shall later consider other approaches that do not have this requirement. Chow[10] was among the earliest to use Bayesian decision theory for pattern recognition, and he later established fundamental relations between error and reject rate [11]. Error rates for Gaussians have been explored by [18], and the Chernoff and Bhattacharyya bounds were first presented in [9, 8], respectively and are explored in a number of statistics texts, such as [17]. Computational approximations for bounding integrals for Bayesian probability of error (the source for one of the homework problems) appears in [2]. Neyman and Pearson also worked on classification given constraints [28], and the analysis of minimax estimators for multivariate normals is presented in [5, 4, 14]. Signal detection theory and receiver operating characteristics are fully explored in [21]; a brief overview, targetting experimental psychologists, is [34]. Our discussion of the missing feature problem follows closely the work of [1] while the definitive book on missing features, including a great deal beyond our discussion here, can be found in [30]. Entropy was the central concept in the foundation of information theory [31] and the relation of Gaussians to entropy is explored in [33]. Readers requiring a review of information theory [12], linear algebra [24, 23], calculus and continuous mathematics, [38, 32] probability [29] calculus of variations and Lagrange multipliers [19] should consult these texts and those listed in our Appendix. 44 CHAPTER 2. BAYESIAN DECISION THEORY Problems Section 2.1 1. In the two-category case, under the Bayes' decision rule the conditional error is given by Eq. 7. Even if the posterior densities are continuous, this form of the conditional error virtually always leads to a discontinuous integrand when calculating the full error by Eq. 5. (a) Show that for arbitrary densities, we can replace Eq. 7 by P (error|x) = 2P (1 |x)P (2 |x) in the integral and get an upper bound on the full error. (b) Show that if we use P (error|x) = P (1 |x)P (2 |x) for < 2, then we are not guaranteed that the integral gives an upper bound on the error. (c) Analogously, show that we can use instead P (error|x) = P (1 |x)P (2 |x) and get a lower bound on the full error. (d) Show that if we use P (error|x) = P (1 |x)P (2 |x) for > 1, then we are not guaranteed that the integral gives an lower bound on the error. Section 2.2 2erive is the best, even among our model set. We shall return to the problem of choosing among candidate models in Chap. ??. sample covariance absolutely unbiased asymptotically unbiased 3.3 Bayesian estimation We now consider the Bayesian estimation or Bayesian learning approach to pattern classification problems. Although the answers we get by this method will generally be nearly identical to those obtained by maximum likelihood, there is a conceptual difference: whereas in maximum likelihood methods we view the true parameter vector we seek, , to be fixed, in Bayesian learning we consider to be a random variable, and training data allows us to convert a distribution on this variable into a posterior probability density. 10 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION 3.3.1 The Class-Conditional Densities The computation of the posterior probabilities P (i |x) lies at the heart of Bayesian classification. Bayes' formula allows us to compute these probabilities from the prior probabilities P (i ) and the class-conditional densities p(x|i ), but how can we proceed when these quantities are unknown? The general answer to this question is that the best we can do is to compute P (i |x) using all of the information at our disposal. Part of this information might be prior knowledge, such as knowledge of the functional forms for unknown densities and ranges for the values of unknown parameters. Part of this information might reside in a set of training samples. If we again let D denote the set of samples, then we can emphasize the role of the samples by saying that our goal is to compute the posterior probabilities P (i |x, D). From these probabilities we can obtain the Bayes classifier. Given the sample D, Bayes' formula then becomes P (i |x, D) = p(x|i , D)P (i |D) c j=1 . (23) p(x|j , D)P (j |D) As this equation suggests, we can use the information provided by the training samples to help determine both the class-conditional densities and the a priori probabilities. Although we could maintain this generality, we shall henceforth assume that the true values of the a priori probabilities are known or obtainable from a trivial calculation; thus we substitute P (i ) = P (i |D). Furthermore, since we are treating the supervised case, we can separate the training samples by class into c subsets D1 , ..., Dc , with the samples in Di belonging to i . As we mentioned when addressing maximum likelihood methods, in most cases of interest (and in all of the cases we shall consider), the samples in Di have no influence on p(x|j , D) if i = j. This has two simplifying consequences. First, it allows us to work with each class separately, using only the samples in Di to determine p(x|i , D). Used in conjunction with our assumption that the prior probabilities are known, this allows us to write Eq. 23 as P (i |x, D) = p(x|i , Di )P (i ) c j=1 . (24) p(x|j , Dj )P (j ) Second, because each class can be treated independently, we can dispense with needless class distinctions and simplify our notation. In essence, we have c separate problems of the following form: use a set D of samples drawn independently according to the fixed but unknown probability distribution p(x) to determine p(x|D). This is the central problem of Bayesian learning. 3.3.2 The Parameter Distribution Although the desired probability density p(x) is unknown, we assume that it has a known parametric form. The only thing assumed unknown is the value of a parameter vector . We shall express the fact that p(x) is unknown but has known parametric form by saying that the function p(x|) is completely known. Any information we might have about prior to observing the samples is assumed to be contained in a known prior density p(). Observation of the samples converts this to a posterior density p(|D), which, we hope, is sharply peaked about the true value of . 3.4. BAYESIAN PARAMETER ESTIMATION: GAUSSIAN CASE 11 Note that we are changing our supervised learning problem into an unsupervised density estimation problem. To this end, our basic goal is t category, we shall write Dn = {x1 , ..., xn }. Then from Eq. 52, if n > 1 p(Dn |) = p(xn |)p(Dn-1 |). (53) Substituting this in Eq. 51 and using Bayes' formula, we see that the posterior density satisfies the recursion relation 3.5. BAYESIAN PARAMETER ESTIMATION: GENERAL THEORY 17 p(|Dn ) = p(xn |)p(|Dn-1 ) . p(xn |)p(|Dn-1 ) d (54) With the understanding that p(|D0 ) = p(), repeated use of this equation produces the sequence of densities p(), p(|x1 ), p(|x1 , x2 ), and so forth. (It should be obvious from Eq. 54 that p(|Dn ) depends only on the points in Dn , not the sequence in which they were selected.) This is called the recursive Bayes approach to parameter estimation. This is, too, our first example of an incremental or on-line learning method, where learning goes on as the data is collected. When this sequence of densities converges to a Dirac delta function centered about the true parameter value -- Bayesian learning (Example 1). We shall come across many other, non-incremental learning schemes, where all the training data must be present before learning can take place. In principle, Eq. 54 requires that we preserve all the training points in Dn-1 in order to calculate p(|Dn ) but for some distributions, just a few parameters associated with p(|Dn-1 ) contain all the information needed. Such parameters are the sufficient statistics of those distributions, as we shall see in Sect. 3.6. Some authors reserve the term recursive learning to apply to only those cases where the sufficient statistics are retained -- not the training data -- when incorporating the information from a new training point. We could call this more restrictive usage true recursive Bayes learning. Example 1: Recursive Bayes learning Suppose we believe our one-dimensional samples come from a uniform distribution 1/ 0 0x otherwise, recursive Bayes incremental learning p(x|) U (0, ) = but initially we know only that our parameter is bounded. In particular we assume 0 < 10 (a non-informative or "flat prior" we shall discuss in Sect. 3.5.2). We will use recursive Bayes methods to estimate and the underlying densities from the data D = {4, 7, 2, 8}, which were selected randomly from the underlying distribution. Before any data arrive, then, we have p(|D0 ) = p() = U (0, 10). When our first data point x1 = 4 arrives, we use Eq. 54 to get an improved estimate: 1/ 0 for 4 10 otherwise, p(|D1 ) p(x|)p(|D0 ) = where throughout we will ignore the normalization. When the next data point x2 = 7 arrives, we have 1/2 0 for 7 10 otherwise, p(|D2 ) p(x|)p(|D1 ) = and similarly for the remaining sample points. It should be clear that since each successive step introduces a factor of 1/ into p (x|), and the distribution is nonzero only for x values above the largest data point sampled, the general form of our solution is p(|Dn ) 1/n for max[Dn ] 10, as shown in the figure. Given our full data x 18 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION p(|Dn) 4 0.6 3 2 0.4 1 0.2 0 2 4 6 8 10 The posterior p(|Dn ) for the model and n points in the data set in this Example. The posterior begins p() U (0, 10), and as more points are incorporated it becomes increasingly peaked at the value of the highest data point. ^ set, the maximum likelihood solution here is clearly = 8, and this implies a uniform p(x|D) U (0, 8). According to our Bayesian methodology, which requires the integration in Eq. 50, the density is uniform up to x = 8, but has a tail at higher values -- an indication that the influence of our prior p() has not yet been swamped by the information in the training data. p(x|D) 0.2 ML 0.1 Bayes x 0 2 4 6 8 10 Given the full set of four points, the distribution based on the maximum likelihood ^ solution is p(x|) U (0, 8), whereas the distribution derived from Bayesian methods has a small tail above x = 8, reflecting the prior information that values of x near 10 are possible. Whereas the maximum likelihood approach estimates a point in space, the Bayesian approach instead estimates a distribution. Technically speaking, then, we cannot directly compare these estimates. It is only when the second stage of inference is done -- that is, we compute the distributions p(x|D), as shown in the above figure -- that the comparison is fair. For most of the typically encountered probability densities p(x|), the sequence of posterior densities does indeed converge to a delta function. Roughly speaking, this implies that with a large number of samples there is only one value for that causes p(x|) to fit the data, i.e., that can be determined uniquely from p(x|). When this is the case, p(x|) is said to be identifiable. A rigorous proof of convergence under these conditions requires a precise statement of the properties required of p(x|) and p() and considerable care, but presents no serious difficulties (Problem 21). There are occasions, however, when more than one value of may yield the same value for p(x|). In such cases, cannot be determined uniquely from p(x|), and p(x|Dn ) will peak near all of the values of that explain the data. Fortunately, this ambiguity is erased by the integration in Eq. 26, since p(x|) is the same for all of identifiability 3.5. BAYESIAN PARAMETER ESTIMATION: GENERAL THEORY 19 these values of . Thus, p(x|Dn ) will typically converge to p(x) whether or not p(x|) is identifiable. While this might make the problem of identifiabilty appear to be moot, we shall see in Chap. ?? that identifiability presents a genuine problem in the case of unsupervised learning. 3.5.1 When do Maximum Likelihood and Bayes methods differ? In virtually every case, maximum likelihood and Bayes solutions are equivalent in the asymptotic limit of infinite training data. However since practical pattern recognition problems invariably have a limited set of training data, it is natural to ask when maximum likelihood and Bayes solutions may be expected to differ, and then which we should prefer. There are several criteria that will influence our choice. One is computational complexity (Sec. 3.7.2), and here maximum likelhood methods are often to be pref^ ered since they require merely differential calculus techniques or gradient search for , rather than a possibly complex multidimensional integration needed in Bayesian estimation. This leads to another consideration: interpretability. In many cases the maximum likelihood solution will be easier to interpret and understand since it returns the single best model from the set the designer provided (and presumably understands). In contrast Bayesian methods give a weighted average of models (parameters), often leading to solutions more complicated and harder to understand than those provided by the designer. The Bayesian approach reflects the remaining uncertainty in the possible models. Another consideration is our confidence in the prior information, such as in the ^ form of the underlying distribution p(x|). A maximum likelihood solution p(x|) must of course be of the assumed parametric form; not so for the Bayesian solution. We saw this difference in Example 1, where the Bayes solution was not of the parametric form originally assumed, i.e., a uniform p(x|D). In general, through their use of the full p(|D) distribution Bayesian methods use more of the information brought to the problem than do maximum likelihood methods. (For instance, in Example 1 the addition of the third training point did not change the maximum likelihood solution, but did refine the Bayesian estimate.) If such information is reliable, Bayes methods can be expected to give better results. Further, general Bayesian methods with a "flat" or uniform prior (i.e., where no prior information is explicitly imposed) are equivalent to maximum likelihood methods. If there is much data, leading to a strongly peaked p(|D), and the prior p() is uniform or flat, then the MAP estimate is essentially the same as the maximum likelihood estimate. ^ When p(|D) is broad, or asymmetric around , the methods are quite likely to yield p(x|D) distributions that differ from one another. Such a strong asymmetry (when not due to rare statisticspect to that invariance. It is tempting to assert that the use of non-informative priors is somehow "objective" and lets the data speak for themselves, but such a view is a bit naive. For 3.6. *SUFFICIENT STATISTICS 21 example, we may seek a non-informative prior when estimating the standard deviation of a Gaussian. But this requirement might not lead to the non-informative prior for estimating the variance, 2 . Which should we use? In fact, the greatest benefit of this approach is that it forces the designer to acknowledge and be clear about the assumed invariance -- the choice of which generally lies outside our methodology. It may be more difficult to accommodate such arbitrary transformations in a maximum a posteriori (MAP) estimator (Sec. 3.2.1), and hence considerations of invariance are of greatest use in Bayesian estimation, or when the posterior is very strongly peaked and the mode not influenced by transformations of the density (Problem 19). 3.6 *Sufficient Statistics From a practical viewpoint, the formal solution provided by Eqs. 26, 51 & 52 is not computationally attractive. In pattern recognition applications it is not unusual to have dozens or hundreds of parameters and thousands of training samples, which makes the direct computation and tabulation of p(D|) or p(|D) quite out of the question. We shall see in Chap. ?? how neural network methods avoid many of the difficulties of setting such a large number of parameters in a classifier, but for now we note that the only hope for an analytic, computationally feasible maximum likelihood solution lies in being able to find a parametric form for p(x |) that on the one hand matches the characteristics of the problem and on the other hand allows a reasonably tractable solution. Consider the simplification that occurred in the problem of learning the parameters of a multivariate Gaussian density. The basic data processing required was merely the computation of the sample mean and sample covariance. This easily computed and easily updated statistic contained all the information in the samples relevant to estimating the unknown population mean and covariance. One might suspect that this simplicity is just one more happy property of the normal distribution, and that such good fortune is not likely to occur in other cases. While this is largely true, there are distributions for which computationally feasible solutions can be obtained, and the key to their simplicity lies in the notion of a sufficient statistic. To begin with, any function of the samples is a statistic. Roughly speaking, a sufficient statistic is a (possibly vector-valued) function s of the samples D that contains all of the information relevant to estimating some parameter . Intuitively, one might expect the definition of a sufficient statistic to involve the requirement that p(|s, D) = p(|s). However, this would require treating as a random variable, limiting the definition to a Bayesian domain. To avoid such a limitation, the conventional definition is as follows: A statistic s is said to be sufficient for if p(D|s,) is independent of . If we think of as a random variable, we can write p(|s, D) = p(D|s, )p(|s) , p(D|s) (56) whereupon it becomes evident that p(|s, D) = p(|s) if s is sufficient for . Conversely, if s is a statistic for which p(|s, D) = p(|s), and if p(|s) = 0, it is easy to show that p(D|s, ) is independent of (Problem 27). Thus, the intuitive and the conventional definitions are basically equivalent. As one might expect, for a Gaussian distribution the sample mean and covariance, taken together, represent a sufficient statistic for the true mean and covariance; if these are known, all other statistics 22 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION such as the mode, range, higher-order moments, number of data points, etc., are superfluous when estimating the true mean and covariance. A fundamental theorem concerning sufficient statistics is the Factorization Theorem, which states that s is sufficient for if and only if p(D|) can be factored into the produe ones for which the difference between the means is large relative to the standard deviations. However no feature is useless if its means for the two classes differ. An obvious way to reduce the error rate further is to introduce new, independent features. Each new feature need not add much, but if r can be increased without limit, the probability of error can be made arbitrarily small. In general, if the performance obtained with a given set of features is inadequate, it is natural to consider adding new features, particularly ones that will help separate the class pairs most frequently confused. Although increasing the number of features increases the cost and complexity of both the feature extractor and the classifier, it is often reasonable to believe that the performance will improve. After all, if the probabilistic structure of the problem were completely known, the Bayes risk could not possibly be increased by adding new features. At worst, the Bayes classifer would ignore the new features, but if the new features provide any additional information, the performance must improve (Fig. 3.3). Unfortunately, it has frequently been observed in practice that, beyond a certain point, the inclusion of additional features leads to worse rather than better performance. This apparent paradox presents a genuine and serious problem for classifier 28 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION x3 x2 x1 Figure 3.3: Two three-dimensional distributions have nonoverlapping densities, and thus in three dimensions the Bayes error vanishes. When projected to a subspace -- here, the two-dimensional x1 - x2 subspace or a one-dimensional x1 subspace -- there can be greater overlap of the projected distributions, and hence greater Bayes errors. design. The basic source of the difficulty can always be traced to the fact that we have the wrong model -- e.g., the Gaussian assumption or conditional assumption are wrong -- or the number of design or training samples is finite and thus the distributions are not estimated accurately. However, analysis of the problem is both challenging and subtle. Simple cases do not exhibit the experimentally observed phenomena, and more realistic cases are difficult to analyze. In an attempt to provide some rigor, we shall return to topics related to problems of dimensionality and sample size in Chap. ??. 3.7.2 Computational Complexity order big oh We have mentioned that one consideration affecting our design methodology is that of the computational difficulty, and here the technical notion of computational complexity can be useful. First, we will will need to understand the notion of the order of a function f (x): we say that the f (x) is "of the order of h(x)" -- written f (x) = O(h(x)) and generally read "big oh of h(x)" -- if there exist constants c0 and x0 such that |f (x)| c0 |h(x)| for all x > x0 . This means simply that for sufficiently large x, an upper bound on the function grows no worse than h(x). For instance, suppose f (x) = a0 + a1 x + a2 x2 ; in that case we have f (x) = O (x2 ) because for sufficiently large x, the constant, linear and quadratic terms can be "overcome" by proper choice of c0 and x0 . The generalization to functions of two or more variables is straightforward. It should be clear that by the definition above, the big oh order of a function is not unique. For instance, we can describe our particular f (x) as being O(x2 ), O(x3 ), O(x4 ), O (x2 ln x). Because of the non-uniqueness of the big oh notation, we occasionally need to be 3.7. PROBLEMS OF DIMENSIONALITY 29 more precise in describing the order of a function. We say that f (x) = (h(x)) "big theta of h(x)" if there are constants x0 , c1 and c2 such that for x > x0 , f (x) always lies between c1 h(x) and c2 h(x). Thus our simple quadratic function above would obey f (x) = (x2 ), but would not obey f (x) = (x3 ). (A fuller explanation is provided in the Appendix.) In describing the computational complexity of an algorithm we are generally interested in the number of basic mathematical operations, such lly necessary for us to determine these constants to find which of several implemementations is the simplest. Nevertheless, big oh and big theta analyses, as just described, are generally the best way to describe the computational complexity of an algorithm. Sometimes we stress space and time complexities, which are particularly relevant when contemplating parallel implementations. For instance, the sample mean of a category could be calculated with d separate processors, each adding n sample values. Thus we can describe this implementation as O(d) in space (i.e., the amount of memory or possibly the number of processors) and O(n) in time (i.e., number of sequential steps). Of course for any particular algorithm there may be a number of time-space tradeoffs, for instance using a single processor many times, or using many processors in parallel for a shorter time. Such tradeoffs are important considerations can be important in neural network implementations, as we shall see in Chap. ??. A common qualitative distinction is made between polynomially complex and exponentially complex algorithms -- O(ak ) for some constant a and aspect or variable k of the problem. Exponential algorithms are generally so complex that for reasonable size cases we avoid them altogether, and resign ourselves to approximate solutions that can be found by polynomially complex algorithms. 3.7.3 Overfitting It frequently happens that the number of available samples is inadequate, and the question of how to proceed arises. One possibility is to reduce the dimensionality, either by redesigning the feature extractor, by selecting an appropriate subset of the existing features, or by combining the existing features in some way (Chap ??). Another possibility is to assume that all c classes share the same covariance matrix, and to pool the available data. Yet another alternative is to look for a better estimate for . If any reasonable a priori estimate 0 is available, a Bayesian or pseudo-Bayesian estimate of the form 0 + (1 - ) might be employed. If 0 is diagonal, this diminishes the troublesome effects of "accidental" correlations. Alternatively, one can remove chance correlations heuristically by thresholding the sample covariance matrix. For example, one might assume that all covariances for which the magnitude of the correlation coefficient is not near unity are actually zero. An extreme of this approach is to assume statistical independence, thereby making all the off-diagonal elements be zero, regardless of empirical evidence to the contrary -- an O(nd) calculation. Even though such assumptions are almost surely incorrect, the resulting heuristic estimates sometimes provide better performance than the maximum likelihood estimate of the full parameter space. 3.7. PROBLEMS OF DIMENSIONALITY 31 Here we have another apparent paradox. The classifier that results from assuming independence is almost certainly suboptimal. It is understandable that it will perform better if it happens that the features actually are independent, but how can it provide better performance when this assumption is untrue? The answer again involves the problem of insufficient data, and some insight into its nature can be gained from considering an analogous problem in curve fitting. Figure 3.4 shows a set of ten data points and two candidate curves for fitting them. The data points were obtained by adding zero-mean, independent noise to a parabola. Thus, of all the possible polynomials, presumably a parabola would provide the best fit, assuming that we are interested in fitting data obtained in the future as well as the points at hand. Even a straight line could fit the training data fairly well. The parabola provides a better fit, but one might wonder whether the data are adequate to fix the curve. The best parabola for a larger data set might be quite different, and over the interval shown the straight line could easily be superior. The tenth-degree polynomial fits the given data perfectly. However, we do not expect that a tenth-degree polynomial is required here. In ge to an improved estimate, labelled by the iteration number i; here, after three iterations the algorithm has converged. We must be careful and note that the EM algorithm leads to the greatest loglikelihood of the good data, with the bad data marginalized. There may be particular values of the bad data that give a different solution and an even greater log-likelihood. For instance, in this Example if the missing feature had value x41 = 2, so that x4 = 2 , we would have a solution 4 1.0 2.0 = 0.5 2.0 and a log-likelihood for the full data (good plus bad) that is greater than for the good alone. Such an optimization, however, is not the goal of the canonical EM algorithm. Note too that if no data is missing, the calculation of Q(; i ) is simple since no integrals are involved. Generalized Expectation-Maximization or GEM algorithms are a bit more lax than the EM algorithm, and require merely that an improved i+1 be set in the M step (line 5) of the algorithm -- not necessarily the optimal. Naturally, convergence will not be as rapid as for a proper EM algorithm, but GEM algorithms afford greater freedom to choose computationally simpler steps. One version of GEM is to find the maximum likelihood value of unknown features at each iteration step, then recalculate in light of these new values -- if indeed they lead to a greater likelihood. generalized ExpectationMaximization 36 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION In practice, the term Expectation-Maximization has come to mean loosely any iterative scheme in which the likelihood of some data increases with each step, even if such methods are not, technically speaking, the true EM algorithm as presented here. 3.9 Bayesian Belief Networks The methods we have described up to now are fairly general -- all that we assumed, at base, was that we could parameterize the distributions by a feature vector . If we had prior information about the distribution of , this too could be used. Sometimes our knowledge about a distribution is not directly of this type, but instead about the statistical dependencies (or independencies) among the component features. Recall that for some multidimensional distribution p(x), if for two features we have p(xi , xj ) = p(xi )p(xj ), we say those variables are statistically independent (Fig. 3.6). x3 x1 x2 Figure 3.6: A three-dimensional distribution which obeys p(x1 , x3 ) = p(x1 )p(x3 ); thus here x1 and x3 are statistically independent but the other feature pairs are not. There are many cases where we know or can safely assume which variables are or are not independent, even without sampled data. Suppose for instance we are describing the state of an automobile -- temperature of the engine, pressures of the fluids and in the tires, voltages in the wires, and so on. Our basic knowledge of cars includes the fact that the oil pressure in the engine and the air pressure in a tire are functionally unrelated, and hence can be safely assumed to be statistically independent. However the oil temperature and engine temperature are not independent (but could be conditionally independent). Furthermore we may know several variables that might influence another: the coolant temperature is affected by the engine temperature, the speed of the radiator fan (which blows air over the coolant-filled radiator), and so on. We will represent these dependencies graphically, by means of Bayesian belief nets, also called causal networks, or simply belief nets. They take the topological form of a directed acyclic graph (DAG), where each link is directional, and there are no loops. (More general networks permit such loops, however.) While such nets can represent continuous multidimensional distributions, they have enjoyed greatest application and 3.9. *BAYESIAN BELIEF NETWORKS 37 success for discrete variables. For this reason, and because the formal properties are simpler, we shall concentrate on the discrete case. P(a) A P(c |a) C P(c|d) P(e|c) E P(f|e) F P(g|f) P(g|e) G D P(b) B P(d|b) Figure 3.7: A belief network consists of nodes (labelur entire belief net consisted of X, its parents and children, and we needed to update only the values on X. In the more general case, where the network is large, there may be many nodes whose values are unknown. In that case we may have to visit nodes randomly and update the probabilites until the entire configuration of probabilities is stable. It can be shown that under weak conditions, this process will converge to consistent values of the variables throughout the entire network (Problem 44). Belief nets have found increasing use in complicated problems such as medical diagnosis. Here the upper-most nodes (ones without their own parents) represent a fundamental biological agent such as the presence of a virus or bacteria. Intermediate nodes then describe diseases, such as flu or emphysema, and the lower-most nodes the symptoms, such as high temperature or coughing. A physician enters measured values into the net and finds the most likely disease or cause. Such networks can be used in a somewhat more sophisticated way, automatically computing which unknown variable (node) should be measured to best reveal the identity of the disease. We will return in Chap. ?? to address the problem of learning in such belief net models. 3.10 Hidden Markov Models While belief nets are a powerful method for representing the dependencies and independencies among variables, we turn now to the problem of representing a particular but extremely important dependencies. In problems that have an inherent temporality -- that is, consist of a process that unfolds in time -- we may have states at time t that are influenced directly by a state at t - 1. Hidden Markov models (HMMs) have found greatest use in such problems, for instance speech recognition or gesture recognition. While the notation and description is aunavoidably more complicated than the simpler models considered up to this point, we stress that the same underlying ideas are exploited. Hidden Markov models have a number of parameters, whose values are set so as to best explain training patterns for the known category. Later, a test pattern is classified by the model that has the highest posterior probability, i.e., that best "explains" the test pattern. 3.10.1 First-order Markov models We consider a sequence of states at successive times; the state at any time t is denoted (t). A particular sequence of length T is denoted by T = {(1), (2), ..., (T )} as 3.10. *HIDDEN MARKOV MODELS 43 for instance we might have 6 = {1 , 4 , 2 , 2 , 1 , 4 }. Note that the system can revisit a state at different steps, and not every state need be visited. Our model for the production of any sequence is described by transition probabilities P (j (t + 1)|i (t)) = aij -- the time-independent probability of having state j at step t + 1 given that the state at time t was i . There is no requirement that the transition probabilities be symmetric (aij = aji , in general) and a particular state may be visited in succession (aii = 0, in general), as illustrated in Fig. 3.9. transition probability a22 2 a21 a11 1 a12 a31 a13 Figure 3.9: The discrete states, i , in a basic Markov model are represented by nodes, and the transition probabilities, aij , by links. In a first-order discrete time Markov model, at any step t the full system is in a particular state (t). The state at step t + 1 is a random function that depends solely on the state at step t and the transition probabilities. Suppose we are given a particular model -- that is, the full set of aij -- as well as a particular sequence T . In order to calculate the probability that the model generated the particular sequence we simply multiply the successive probabilities. For instance, to find the probability that a particular model generated the sequence described above, we would have P ( T |) = a14 a42 a22 a21 a14 . If there is a prior probability on the first state P ((1) = i ), we could include such a factor as well; for simplicity, we will ignore that detail for now. Up to here we have been discussing a Markov model, or technically speaking, a first-order discrete time Markov model, since the probability at t + 1 depends only on the states at t. For instance, in a Markov model for the production of spoken words, we might have states representing phonemes, and a Markov model for the production of a spoken work might have states representing phonemes. Such a Markov model for the word "cat" would have states for /k/, /a/ and /t/, with transitions from /k/ to /a/; transitions from /a/ to /t/; and transitions from /t/ to a final silent state. Note however that in speech recognition the perceiver does not have access to the states (t). Instead, we measure some properties of the emitted sound. Thus we will have to augment our Markov model to allow for visible states -- which are directly accessible to external measurement -- as separate from the states, which are not. a23 a32 a33 3 3.10.2 First-order hidden Markov models We continue to assume that at every time step t the system is in a state (t) but now we also assume that it emits some (visible) symbol v(t). While sophisticated Markov models allow for the emission of continuous functions (e.g., spectra), we will restrict ourselves to the case where a discrete symbol is emitted. As with the states, we define 44 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION a particular sequence of such visible states as VT = {v(1), v(2), ..., v(T )} and thus we might have V6 = {v5 , v1 , v1 , v5 , v2 , v3 }. Our model is then that in any state (t) we have a probability of emitting a particular visible state vk (t). We denote this probability P (vk (t)|j (t)) = bjk . Because we have access only to the visible states, while the i are unobservable, such a full model is called a hidden Markov model (Fig. 3.10) v1 v2 v3 v 4 b21 a22 b22 b23 b24 2 a21 a11 1 a12 a31 a13 b11 b 12 v1 v2 b b b13 14 31 b32 v1 v2 v3 v4 b b33 34 v3 v4 a23 a32 a33 3 Figure 3.10: Three hidden units in an HMM and the transitions between them are shown in black while the visible states and the emission probabilities of visible states are shown in red. This model shows all transitions as being possible; in other HMMs, some such candidate transitions are not allowed. 3.10.3 Hidden Markov Model Computation absorbing state Now we define some new terms and clarify our notation. In general networks such as those in Fig. 3.10 are finite-state machines, and when they have associated transition probabilities, they are called Markov networks. They are strictly causal -- the probabilities depend only upon previous states. A Markov model is called ergodic if every one of the states has a non-zero probability of occuring given some starting state. A final or absorbing state 0 is one which, if entered, is never left (i.e., a00 = 1). As mentioned, we denote the transition probabilities aij among hidden states and for the probability bjk of the emission of a visible state: aij bjk = P (j (t + 1)|i (t)) = P (vk (t)|j (t)). (86) We demand that some transition occur from step t t + 1 (even if it is to the same state), and that some visible symbol be emitted after every step. Thus we have the normalization conditions: aij j = 1 for all i and 3.10. *HIDDEN MARKOV MODELS bjk k 45 1 for all j, (87) = where the limits on the summations are over all hidden states and all visible symbols, respectively. With these preliminaries behind us, we can now focus on the three central issues in hidden Markov models: The Evaluation problem. Suppose we have an HMM, complete with transition probabilites aij and bjk . Determine the probability that a particular sequence of visible states VT was generated by that model. The Decoding problem. Suppose we have an HMM as well as a set of observations VT . Determine the most likely sequence of hidden states T that led to those observations. The Learning problem. Suppose we are given the coarse structure of a model (the number of states and the number of visible states) but not the probabilities aij and bjk . Given a set of training observations of visible symbols, determine these parameters. We consider each of these problems in turn. 3.10.4 Evaluation rmax The probability that the model produces a sequence VT of visible states is: P (VT ) = r=1 P (VT | T )P ( T ), r r (88) where each r indexes a particular sequence T = {(1), (2), ..., (T )} of T hidden r states. In the general case of c hidden states, there will be rmax = cT possible terms in the sum of Eq. 88, corresponding to all possible sequences of length T . Thus, according to Eq. 88, in order to compute the probability that the model generated the particular sequence of T visible states VT , we should take each conceivable sequence of hidden states, calculate the probability they produce VT , and then add up these probabilities. The probability of a particular visible sequence is merely the product of the corresponding (hidden) transition probabilities aij and the (visible) output probabilities bjk of each step. Because we are dealing here with a first-order Markov process, the second factor in Eq. 88, which describes the transition probability for the hidden states, can be rewritten as: T P ( T ) = r t=1 P ((t)|(t - 1)) (89) that is, a product of the aij 's according to the hidden sequence in question. In Eq. 89, (T ) = 0 is some final absorbing state, which uniquely emits the visible state v0 . In speech recognition applications, 0 typically represents a null state or lack of utterance, and v0 is some symbol representing silence. Because of our assumption that the output probabilities depend only upon the hidden state, we can write the first factor in Eq. 88 as 46 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION T P (VT | T ) = r t=1 P (v(t)|(t)), (90) that is, a product of bjk 's according to the hidden state and the corresponding visible state. We can now use Eqs. 89 & 90 to express Eq. 88 as rmax T P (VT ) = r=1 t=1 P (v(t)|(t))P ((t)|(t - 1)). (91) Despite its formal complexity, Eq. 91 has a straightforward interpretation. The probability that we observe the particular sequence of T visible states VT is equal to the sum over all rmax possible sequences of hidden states of the conditional probability that the system has made a particular transition multiplied by the probability that it then emitted the visible symbol in our target sequence. All these are captured in our paramters aij and bkj , and thus Eq. 91 can be evaluated directly. Alas, this is an O(cT T ) calculation, which is quite prohibitive in practice. For instance, if c = 10 and T = 20, we must perform on the order of 1021 calculations. A computationaly simpler algorithm for the same goal is as follows. We can calculate P (VT ) recursively, since each term P (v(t)|(t))P ((t)|(t - 1)) involves only v(t), (t) and (t - 1). We do this by defining t = 0 and i = initial state 0 1 t = 0 and i = initial state i (t) = (92) (t - 1)aij bjk v(t) otherwise, j where the notation bjk v(t) means the transition probability bjk selected by the visible state emitted at time t. thus the only non-zero contribution to the sum is for the index k which matches the visible state v(t). Thus i (t) represents the probability that our HMM is in hidden state i at step t having generated the first t elements of VT . This calculation is implemented in the Forward algorithm in the following way: Algorithm 2 (HMM Forward) 1 2 3 4 5 6 initialize (1), t = 0, aij , bjk , visible sequence VT , (0) = 1 for t t + 1 j (t) c i (t - 1)aij bjk i=1 until t = T return P (VT ) 0 (T ) end where in line 5, 0 denotes the probability of the associated sequence ending to the known final state. The Forward algorithm has, thus, a computational complexity of O(c2 T ) -- far more efficient than the complexity associated with exhaustive enumeration of paths of Eq. 91 (Fig. 3.11). For the illustration of c = 10, T = 20 above, we would need only on the order of 2000 calculations -- more than 17 orders of magnitude faster than that to examine each path individually. We shall have cause to use the Backward algorithm, which is the time-reversed version of the Forward algorithm. Algorithm 3 (HMM Backward) 3.10. *HIDDEN MARKOV MOents the phoneme /v/, 2 represents /i/, ..., and 0 a final silent state. Such a left-to-right model is more restrictive than the general HMM in Fig. 3.10, and precludes transitions "back" in time. The Forward algorithm gives us P (V T |). The prior probability of the model, P (), is given by some external source, such as a language model in the case of speech. This prior probability might depend upon the semantic context, or the previous words, or yet other information. In the absence of such information, it is traditional to assume a uniform density on P (), and hence ignore it in any classification problem. (This is an example of a "non-informative" prior.) 3.10.5 Decoding Given a sequence of visible states VT , the decoding problem is to find the most probable sequence of hidden states. While we might consider enumerating every possible path and calculating the probability of the visible sequence observed, this is an O(cT T ) calculation and prohibitive. Instead, we use perhaps the simplest decoding algorithm: Algorithm 4 (HMM decoding) 1 2 4 5 7 8 10 11 12 13 14 begin initialize Path = {}, t = 0 for t t + 1 k = 0, 0 = 0 for k k + 1 k (t) bjk v(t) until k = c j arg max j (t) j c i (t - 1)aij i=1 AppendT o Path j until t = T return Path end A closely related algorithm uses logarithms of the probabilities and calculates total probabilities by addition of such logarithms; this method has complexity O(c2 T ) (Problem 48). 50 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION max(T) 0 max(1) 1 1 1 max(3) 2 2 max(2) 3 3 3 3 3 3 2 2 1 1 max(T-1) 2 2 1 0 0 0 0 0 c c c c c c t= 1 2 3 4 T-1 T Figure 3.13: The decoding algorithm finds at each time step t the state that has the highest probability of having come from the previous step and generated the observed visible state vk . The full path is the sequence of such states. Because this is a local optimization (dependent only upon the single previous time step, not the full sequence), the algorithm does not guarantee that the path is indeed allowable. For instance, it might be possible that the maximum at t = 5 is 1 and at t = 6 is 2 , and thus these would appear in the path. This can even occur if a12 = P (2 (t+1)|1 (t)) = 0, precluding that transition. The red line in Fig. 3.13 corresponds to Path, and connects the hidden states with the highest value of i at each step t. There is a difficulty, however. Note that there is no guarantee that the path is in fact a valid one -- it might not be consistent with the underlying models. For instance, it is possible that the path actually implies a transition that is forbidden by the model, as illustrated in Example 5. Example 5: HMM decoding We find the path for the data of Example 4 for the sequence {1 , 3 , 2 , 1 , 0 }. Note especially that the transition from 3 to 2 is not allowed according to the transition probabilities aij given in Example 4. The path locally optimizes the probability through the trellis. 3.10. *HIDDEN MARKOV MODELS 51 V3 V1 0 V3 0 V2 0 V0 .0011 0 0 1 1 .09 .0052 .0024 0 2 3 t= 0 .01 .0077 .0002 0 0 1 .2 2 .0057 3 .0007 4 0 5 The locally optimal path through the HMM trellis of Example 4. HMMs address the problem of rate invariance in the following two ways. The first is that the transition probabilities themselves incorporate probabilistic structure of the durations. Moreover, using postprocessing, we can delete repeated states and just get the sequence somewhat independent of variations in rate. Thus in post-processing we can convert the sequence {1 , 1 , 3 , 2 , 2 , 2 } to {1 , 3 , 2 }, which would be appropriate for speech recognition, where the fundamental phonetic units are not repeated in natural speech. 3.10.6 Learning The goal in HMM learning is to determine model parameters -- the transition probabilities aij and bjk -- from an ensemble of training samples. There is no known method for obtaining the optimal or most likely set of parameters from the data, but we can nearly always determine a good solution by a straightforward technique. The Forward-backward Algorithm The Forward-backward algorithm is an instance of a generalized Expectation-Maximization algorithm. The general approach will be to iteratively update the weights in order to better explain the observed training sequences. Above, we defined i (t) as the probability that the model is in state i (t) and has generated the target sequence up to step t. We can analogously define i (t) to be the probability that the model is in state i (t) and will generate the remainder of the given target sequence, i.e., from t + 1 T . We express i (t) as: i (t) = sequence's final state and t = T 0 1 i (t) = sequence's final state and t = T i (t) = aij bjk v(t + 1)j (t + 1) otherwise, j (94) To understand Eq. 94, imagine we knew i (t) up to step T - 1, and we wanted to calculate the probability that the model would generate the remaining single visible 52 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION symbol. This probability, i (T ), is just the probability we make a transition to state i (T ) multiplied by the probability that this hidden state emitted the correct final visible symbol. By the definition of i (T ) in Eq. 94, this will be either 0 (if i (T ) is not the final hidden state) or 1 (if it is). Thus it is clear that i (T - 1) = j aij bij v(T )i (T ). Now that we have determined i (T - 1), we can repeat the process, to determine i (T - 2), and so on, backward through the trellis of Fig. ??. But the i (t) and i (t) we determined are merely estimates of their true values, since we don't know the actual value of the transition probabilities aij and bij in Eq. 94. We can calculate an improved value by first defining ij (t) -- the probability of transition between i (t-1) and j (t), given the model generated the entire training sequence VT by any path. We do this by defining ij (t), as follows: ij (t) = i (t - 1)aij bij i (t) , P (V T |) (95) where P (VT |) is the probability that the model generated sequence VT by any path. Thus ij (t) is the probability of a transition from state i (t - 1) to j (t) given that the model generated the complete visible sequence V T . We can now calculate an improved estimate for aij . The expected number of transitions between state i (t - 1) and j (t) at any time in the sequence is simply T T ^ k ik (t). Thus aij (the estimate of the t=1 ij (t), whereas at step t it is t=1 probability of a transition from i (t - 1) to j (t)) can be found by taking the ratio between the expected number of transitions from i to j and the total expected number of any transitions from i . That is: T ij (t) aij = ^ t=1 T . ik (t) (96) t=1 k In the same way, we can obtain an improved estimate ^ij by calculating the ratio b between the frequency that any particular symbol vk is emitted and that for any symbol. Thus we have ^jk = b jk (t) T . (97) jk (t) t=1 In short, then, we start with rough or arbitrary estimates of aij and bjk , calculate improved estimates by Eqs. 96 & 97, and repeat until some convergence criterion is met (e.g., sufficiently small change in the estimated values of the parameters on subsequent iterations). This is the Baum-Welch or Forward-backward algorithm -- an example of a Generalized Expectation-Maximumization algorithm (Sec. 3.8): Algorithm 5 (Forward-backward) 1 2 3 4 5 6 begin initialize aij , bjk , training sequence V T , convergence criterion do z z + 1 Compute a(z) from a(z - 1) and b(z - 1) by Eq. 96 ^ Compute ^ b(z) from a(z - 1) and b(z - 1) by Eq. 97 aij (z) aij (z - 1) ^ bjk (z) ^jk (z - 1) b 3.10. SUMMARY 7 8 9 53 until max[aij (z) - aij (z - 1), bjk (z) - bjk (z - 1)] < ; convergence achievedln : F orBackstop i,j,k return aij aij (z); bjk bjk (z) end The stopping or convergence criterion in line ?? halts learning when no estimated transition probability changes more than a predetermined amount, . In typical speech recognition applications, convergence requires several presentations of each training sequence (fewer than five is common). Other popular stopping criteria are based on overall probability that the learned model could have generated the full training data. Summary If we know a parametric form of the class-conditional probability densities, we can reduce our learning task from one of finding the distribution itself, to that of finding the parameters (represented by a vector i for each category i ), and use the resulting distributions for classification. The maximum likelihood method seeks to find the parameter value that is best supported by the training data, i.e., maximizes the probability of obtaining the samples actually observed. (In practice, for computational simplicity one typically uses log-likelihood.) In Bayesian estimation the parameters are considered random variables having a known a priori density; the training data convert this to an a posteriori density. The recursive Bayes method updates the Bayesian parameter estimate incrementally, i.e., as each training point is sampled. While Bayesian estimation is, in principle, to be preferred, maximum likelihood methods are generally easier to implement and in the limit of large training sets give classifiers nearly as accurate. A sufficient statistic s for is a function of the samples that contains all information needed to determine . Once we know the sufficient statistic for models of a given form (e.g., exponential family), we need only estimate their value from data to create our classifier -- no other functions of the data are relevant. Expectation-Maximization is an iterative scheme to maximize model parameters, even when some data are missing. Each iteration employs two steps: the expectation or E step which requires marginalizing over the missing variables given the current model, and the maximization or M step, in which the optimum parameters of a new model are chosen. Generalized Expectation-Maximization algorithms demand merely that parameters be improved -- not optimized -- on each iteration and have been applied to the training of a large range of models. Bayesian belief nets allow the designer to specify, by means of connection topology, the functional dependences and independencies among model variables. When any subset of variables is clamped to some known values, each node comes to a probability of its value through a Bayesian inference calculation. Parameters representing conditional dependences can be set by an expert. Hidden Markov models consist of nodes representing hidden states, interconnected by links describing the conditional probabilities of a transition between the states. Each hidden state also has an associated set of probabilities of emiting a particular visible states. HMMs can be useful in modelling sequences, particularly context dependent ones, such as phonemes in speech. All the transition probabilities can be learned (estimated) iteratively from sample sequences by means of the Forward-backward or Baum-Welch algorithm, an example of a generalized EM algorithm. Classification 54 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION proceeds by finding the single model among candidates that is most likely to have produced a given observed sequence. Bibliographical and Historical Remarks Maximum likelihood and Bayes estimation have a long history. The Bayesian approach to learning in pattern recognition began by the suggestion that the proper way to use samples when the conditional densities are unknown is the calculation of P (i |x, D), [6]. Bayes himself appreciated the role of non-informative priors. An analysis of different priors from statistics appears in [21, 15] and [4] has an extensive list of references. The origins of Bayesian belief nets traced back to [33], and a thorough literature review can be found in [8]; excellent modern books such as [24, 16] and tutorials [7] can be recommended. An important dissertation on the theory of belief nets, with an application to medical diagnosis is [14], and a summary of work on diagnosis of machine faults is [13]. While we have focussed on directed acyclic graphs, belief nets are of broader use, and even allow loops or arbitrary topologies -- a topic that would lead us far afield here, but which is treated in [16]. The Expectation-Maximization algorithm is due to Dempster et al.[11] and a thorough overview and history appears in [23]. On-line or incremental versions of EM are described in [17, 31]. The definitive compendium of work on missing data, including much beyond our discussion here, is [27]. Markov developed what later became called the Markov framework [22] in order to analyze the the text of his fellow Russian Pushkin's masterpiece Eugene Onegin. Hidden Markov models were introduced by Baum and collaborators [2, 3], and have had their greatest applications in the speech recognition [25, 26], and to a lesser extent statistical language learning [9], and sequence identification, such as in DNA sequences [20, 1]. Hidden Markov methods have been extended to two-dimensions and applied to recognizing characters in optical document images [19]. The decoding algorithm is related to pioneering work of Viterbi and followers [32, 12]. The relationship between hidden Markov models and graphical models such as Bayesian belief nets is explored in [29]. Knuth's classic [18] was the earliest compendium of the central results on computational complexity, the majority due to himself. The standard books [10], which inspired several homework problems below, are a bit more accessible for those without deep backgrounds in computer science. Finally, several other pattern recognition textbooks, such as [28, 5, 30] which take a somewhat different approach to the field can be recommended. Problems Section 3.2 1. Let x have an exponential density p(x|) = e-x 0 x0 otherwise. (a) Plot p(x|) versus x for = 1. Plot p(x|) versus , (0 5), for x = 2. 3.10. PROBLEMS 55 (b) Suppose that n samples x1 , ..., xn are drawn independently according to p(x|). Show that the maximum likelihood estimate for is given by ^ = 1 n 1 n . xk k=1 (c) On your graph generated with = 1 in part (a), mark the maximum likelihood ^ estimate for large n. 2. Let x have a uniform density p(x|) U (0, ) = 1/ 0 0x otherwise. (a) Suppose that n samples D = {x1 , ..., xn } are drawn independently according to p(x|). Show that the maximum likelihood estimate for is max[D], i.e., the value of the maximum element in D. (b) Suppose that n = 5 points are drawn from the distribution and the maximum value of which happens to be max xk = 0.6. Plot the likelihood p(D|) in the range 0 1. Explain in words why you do not need to know the values of the other four points. 3. Maximum likelihood methods apply to estimates of prior probabilities as well. Let samples be drawn by successive, independent selections of a state of nature i with unknown probability P (i ). Let zik = 1 if the state of nature for the kth sample is i and zik = 0 otherwise. (a) Show that n k P (zi1 , . . . , zin |P (i )) = k=1 P (i )zik (1 - P (i ))1-zik . (b) Show that the maximum likelihood estimate for P (i ) is 1 ^ P (i ) = n Interpret your result in words. 4. Let x be a d-dimensional binary (0 or 1) vector with a multivariate Bernoulli distribution d n zik . k=1 P (x|) = i=1 x i i (1 - i )1-xi , where = (1 , ..., d )t is an unknown parameter vector, i being the probability that xi = 1. Show that the maximum likelihood estimate for is 1 ^ = n n xk . k=1 56 CHAPTER 3. MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION 5. Let each component xi of x be binary valued (0 or 1) in a two-category problem with P (1 ) = P (2 ) = 0.5. Suppose that the probability of obtaining a 1 in any component is pi1 pi2 = p = 1 - p, and we assume for definiteness p > 1/2. The probability of error is known to approach zero as the dimensionality d approaches infinity. This problem asks you to explore the behavior as we increase the number of features in a single sample -- a complementary situation. (a) Suppose that a single sample x = (x1 , ..., xd )t is drawn from category 1 . Show that the maximum likelihood estimate for p is given by p= ^ 1 d d xi . i=1 (b) Describe the behavior of p as d approaches infinity. Indicate why such behavior ^ means that by letting the number of featntinuous at x. The second condition, which only makes sense if p(x) = 0, assures us that the frequency ratio will converge (in probability) to the probability P . The third condition is clearly necessary if pn (x) given by Eq. 7 is to converge at all. It also says that although a huge number of samples will eventually fall within the small region Rn , they will form a negligibly small fraction of the total number of samples. There are two common ways of obtaining sequences of regions that satisfy these conditions (Fig. 4.2). One is to shrink initial region by specifying the volume Vn an as some function of n, such as Vn = 1/ n. It then must be shown that the random variables kn and kn /n behave properly, or more to the point, that pn (x) converges to 6 CHAPTER 4. NONPARAMETRIC TECHNIQUES p(x). This is basically the Parzen-window method that will be examined in Sect. 4.3. The second method is to specify kn as some function of n, such as kn = n. Here the volume Vn is grown until it encloses kn neighbors of x. This is the kn -nearestneighbor estimation method. Both of these methods do in fact converge, although it is difficult to make meaningful statements about their finite-sample behavior. n=1 2 3 10 Figure 4.2: Two methods for estimating the density at a point x (at the center of each square) are to xxx. 4.3 Parzen Windows The Parzen-window approach to estimating densities can be introduced by temporarily assuming that the region Rn is a d-dimensional hypercube. If hn is the length of an edge of that hypercube, then its volume is given by V n = hd . n window function (8) We can obtain an analytic expression for kn , the number of samples falling in the hypercube, by defining the following window function: (u) = 1 0 |uj | 1/2 otherwise. j = 1, ..., d (9) Thus, (u) defines a unit hypercube centered at the origin. It follows that ((x - xi )/hn ) is equal to unity if xi falls within the hypercube of volume Vn centered at x, and is zero otherwise. The number of samples in this hypercube is therefore given by n kn = i=1 x - xi hn , (10) and when we substitute this into Eq. 7 we obtain the estimate pn (x) = 1 n n i=1 1 Vn x - xi hn . (11) This equation suggests a more general approach to estimating density functions. Rather than limiting ourselves to the hypercube window function of Eq. 9, suppose we allow a more general class of window functions. In such a case, Eq. 11 expresses our estimate for p(x) as an average of functions of x and the samples xi . In essence, 4.3. PARZEN WINDOWS 7 the window function is being used for interpolation -- each sample contributing to the estimate in accordance with its distance from x. It is natural to ask that the estimate pn (x) be a legitimate density function, i.e., that it be nonnegative and integrate to one. This can be assured by requiring the window function itself be a density function. To be more precise, if we require that (x) 0 and (u) du = 1, (13) (12) and if we maintain the relation Vn = hd , then it follows at once that pn (x) also n satisfies these conditions. Let us examine the effect that the window width hn has on pn (x). If we define the function n (x) by n (x) = then we can write pn (x) as the average pn (x) = 1 n n 1 Vn x hn , (14) n (x - xi ). i=1 (15) Since Vn = hd , hn clearly affects both the amplitude and the width of n (x) (Fig. 4.3). n If hn is very large, the amplitude of n is small, and x must be far from xi before n (x - xi ) changes much from n (0). In this case, pn (x) is the superposition of n broad, slowly changing functions and is a very smooth "out-of-focus" estimate of p(x). On the other hand, if hn is very small, the peak value of n (x - xi ) is large and occurs near x = xi . In this case p(x) is the superposition of n sharp pulses centered at the samples -- an erratic, "noisy" estimate (Fig. 4.4). For any value of hn , the distribution is normalized, i.e., n (x - xi ) dx = 1 Vn x - xi hn dx = (u) du = 1. (16) Thus, as hn approaches zero, n (x - xi ) approaches a Dirac delta function centered at xi , and pn (x) h1 = .5 1 h1 = .2 n=1 0 1 1 2 3 4 0 1 1 2 3 4 0 1 1 2 3 4 n = 16 0 1 n = 256 1 2 3 4 0 1 1 2 3 4 0 1 1 2 3 4 0 1 n= 1 2 3 4 0 1 1 2 3 4 0 1 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 Figure 4.7: Parzen-window estimates of a bimodal distribution using different window widths and numbers of samples. Note particularly that the n = estimates are the same (and match the true generating distribution), regardless of window width h. 4.3.4 Classification example In classifiers based on Parzen-window estimation, we estimate the densities for each category and classify a test point by the label corresponding to the maximum posterior. If there are multiple categories with unequal priors we can easily include these too (Problem 4). The decision regions for a Parzen-window classifier depend upon the choice of window function, of course, as illustrated in Fig. 4.8. In general, the training error -- the empirical error on the training points themselves -- can be made arbitrarily low by making the window width sufficiently small. However, the goal of creating a classifier is to classify novel patterns, and alas a low training error does not guarantee a small test error, as we shall explore in Chap. ??. Although a generic Gaussian window shape can be justified by considerations of noise, statistical independence and uncertainty, in the absense of other information about the underlying distributions there is little theoretical justification of one window width over another. These density estimation and classification examples illustrate some of the power and some of the limitations of nonparametric methods. Their power resides in their generality. Exactly the same procedure was used for the unimodal normal case and the bimodal mixture case and we did not need to make any assumptions about the We ignore cases in which the same feature vector has been assigned to multiple categories. 4.3. PARZEN WINDOWS 15 1 1 0.8 0.8 0.6 0.6 0.4 0.4 0.2 0.2 0 0 0.2 0.4 0.6 0.8 1 0 0 0.2 0.4 0.6 0.8 1 Figure 4.8: The decision boundaries in a two-dimensional Parzen-window dichotomizer depend on the window width h. At the left a small h leads to boundaries that are more complicated than for large h on same data set, shown at the right. Apparently, for this data a small h would be appropriate for the upper region, while a large h for the lower region; no single window width is ideal overall. distributions ahead of time. With enough samples, we are essentially assured of convergence to an arbitrarily complicated target density. On the other hand, the number of samples needed may be very large indeed -- much greater than would be required if we knew the form of the unknown density. Little or nothing in the way of data reduction is provided, which leads to severe requirements for computation time and storage. Moreover, the demand for a large number of samples grows exponentially with the dimensionality of the feature space. This limitation is related to the "curse of dimensionality," and severely restricts the practical application of such nonparametric procedures (Problem 11). The fundamental reason for the curse of dimensionality is that high-dimensional functions have the potential to be much more complicated than low-dimensional ones, and that those complications are harder to discern. The only way to beat the curse is to incorporate knowledge about the data that is correct. 4.3.5 Probabilistic Neural Networks (PNNs) A hardware implementation of the Parzen windows approach is found in Probabilistic Neural Networks (Fig. 4.9). Suppose we wish to form a Parzen estimate based on n patterns, each of which is d-dimensional, randomly sampled from c classes. The PNN for this case consists of d input units comprising the input layer, each unit is connect to each of the n pattern units; each pattern unit is, in turn, connected to one and only one of the c category units. The connections from the input to pattern units represent modifiable weights, which will be trained. (While these wen one dimension and the k-nearest-neighbor density estimates, for k = 3 and 5. Note especially that the discontinuities in the slopes in the estimates generally occur away fom the positions of the points themselves. 3 2 1 0 Figure 4.11: The k-nearest-neighbor estimate of a two-dimensional density for k = 5. Notice how such a finite n estimate can be quite "jagged," and that discontinuities in the slopes generally occur along lines away from the positions of the points themselves. 20 1 CHAPTER 4. NONPARAMETRIC TECHNIQUES 1 n=1 kn = 1 0 1 n = 16 kn = 4 1 2 3 4 0 1 1 2 3 4 0 1 n = 256 kn = 16 1 2 3 4 0 1 1 2 3 4 0 1 1 2 3 4 0 1 1 2 3 4 n= kn = 0 1 2 3 4 0 1 2 3 4 Figure 4.12: Several k-nearest-neighbor estimates of two unidimensional densities: a Gaussian and a bimodal distribution. Notice how the finite n estimates can be quite "spiky." and kn = n = 1, the estimate becomes pn (x) = 1 . 2|x - x1 | (32) This is clearly a poor estimate of p(x), with its integral embarrassing us by diverging to infinity. As shown in Fig. 4.12, the estimate becomes considerably better as n gets larger, even though the integral of the estimate remains infinite. This unfortunate fact is compensated by the fact that pn (x) never plunges to zero just because no samples fall within some arbitrary cell or window. While this might seem to be a meager compensation, it can be of considerable value in higher-dimensional spaces. As with the Parzen-window approach, we could obtain a family of estimates by taking kn = k1 n and choosing different values for k1 . However, in the absense of any additional information, one choice is as good as another, and we can be confident only that the results will be correct in the infinite data case. For classification, one popular method is to adjust the window width until the classifier has the lowest error on a separate set of samples, also drawn from the target distributions, a technique we shall explore in Chap. ??. 4.5. THE NEAREST-NEIGHBOR RULE 21 4.4.1 Estimation of a posteriori probabilities The techniques discussed in the previous sections can be used to estimate the a posteriori probabilities P (i |x) from a set of n labelled samples by using the samples to estimate the densities involved. Suppose that we place a cell of volume V around x and capture k samples, ki of which turn out to be labelled i . Then the obvious estimate for the joint probability p(x, i ) is pn (x, i ) = and thus a reasonable estimate for P (i |x) is Pn (i |x) = pn (x, i ) c ki /n , V ki . k (33) = (34) pn (x, j ) j=1 That is, the estimate of the a posteriori probability that i is the state of nature is merely the fraction of the samples within the cell that are labelled i . Consequently, for minimum error rate we select the category most frequently represented within the cell. If there are enough samples and if the cell is sufficiently small, it can be shown that this will yield performance approaching the best possible. When it comes to choosing the size of the cell, it is clear that we can use either the Parzen-window approach or the kn -nearest-neighbor approach. In the first case, Vn would be some specified function of n, such as Vn = 1/ n. In the second case, Vn would be expanded until some specified number of samples were captured, such as k = n. In either case, as n goes to infinity an infinite number of samples will fall within the infinitely small cell. The fact that the cell volume could become arbitrarily small and yet contain an arbitrarily large number of samples would allow us to learn the unknown probabilities with virtual certainty and thus eventually obtain optimum performance. Interestingly enough, we shall now see that we can obtain comparable performance if we base our decison solely on the label of the single nearest neighbor of x. 4.5 The Nearest-Neighbor Rule While the k-nearest-neighbor algorithm was first proposed for arbitrary k, the crucial matter of determining the error bound was first solved for k = 1. This nearestneighbor algorithm has conceptual and computational simplicity. We begin by letting Dn = {x1 , ..., xn } denote a set of n labelled prototypes, and x Dn be the prototype nearest to a test point x. Then the nearest-neighbor rule for classifying x is to assign it the label associated with x . The nearest-neighbor rule is a sub-optimal procedure; its use will usually lead to an error rate greater than the minimum possible, the Bayes rate. We shall see, however, that with an unlimited number of prototypes the error rate is never worse than twice the Bayes rate. Before we get immersed in details, let us try to gain a heuristic understanding of why the nearest-neighbor rule should work so well. To begin with, note that the label associated with the nearest neighbor is a random variable, and the probability that = i is merely the a posteriori probability P (i |x ). When the number of samples is very large, it is reasonable to assume that x is sufficiently close to x that P (|x ) P (i |x). Since this is exactly the probability that nature will be in state i , the nearest-neighbor rule is effectively matching probabilities with nature. 22 If we define m (x) by CHAPTER 4. NONPARAMETRIC TECHNIQUES P (m |x) = max P (i |x), i (35) Voronoi tesselation then the Bayes decision rule always selects m . This rule allows us to partition the feature space into cells consisting of all points closer to a given training point x than to any other training points. All points in such a cell are thus labelled by the category of the training point -- a so-called Voronoi tesselation of the space (Fig. 4.13). Figure 4.13: In two dimensions, the nearest-neighbor algorithm leads to a partitioning of the input space into Voronoi cells, each labelled by the category of the training point it contains. In three dimensions, the cells are three-dimensional, and the decision boundary resembles the surface of a crystal. When P (m |x) is close to unity, the nearest-neighbor selection is almost always the same as the Bayes selection. That is, when the minimum probability of error is small, the nearest-neighbor probability of error is also small. When P (m |x) is close to 1/c, so that all classes are essentially equally likely, the selections made by the nearest-neighbor rule and the Bayes decision rule are rarely the same, but the probability of error is approximately 1 - 1/c for both. While more careful analysis is clearly necessary, these observations should make the good performance of the nearest-neighbor rule less surprising. Our analysis of the behavior of the nearest-neighbor rule will be directed at obtaining the infinite-sample conditional average probability of error P (e|x), where the averaging is with respect to the training samples. The unconditional average probability of error will then be found by averaging P (e|x) over all x: P (e) = P (e|x)p(x) dx. (36) In passing we should recall that the Bayes decision rule minimizes P (e) by minimizing P (e|x) for every x. Recall from Chap. ?? that if we let P (e|x) be the minimum possible value of P (e|x), and P be the minimum possible value of P (e), then P (e|x) = 1 - P (m |x) (37) 4.5. THE NEAREST-NEIGHBOR RULE and P = P (e|x)p(x) dx. 23 (38) 4.5.1 Convergence of the Nearest Neighbor We now wish to evaluate the average probability of error for the nearest-neighbor rule. In particular, if Pn (e) is the n-sample error rate, and if P = lim Pn (e), n (39) then we want to show that P P P 2 - c P . c-1 (40) We begin by observing that when the nearest-neighbor rule is used with a particular set of n samples, the resulting error rate will depend on the accidental characteristics of the samples. In particular, if different sets of n samples are used to classify x, different vectors x will be obtained for the nearest-neighbor of x. Since the decision rule depends on this nearest-neighbor, we have a conditional probability of error P (e|x, x ) that depends on both x and x . By averaging over x , we obtain P (e|x) = P (e|x, x )p (x |x) dx . (41) where we understand that there is an implicit dependence upon the ncalculation is O(d), and thus this search 4.5. THE NEAREST-NEIGHBOR RULE 29 B C 1 A D E 1 OR AND AND AND ... ... 12 d 12 ... d 12 ... d 12 ... d 12 ... d ... ... 12 d 12 ... d 12 ... d ... ... 12 d Figure 4.17: A parallel nearest-neighbor circuit can perform search in constant -- i.e., O(1) -- time. The d-dimensional test pattern x is presented to each box, which calculates which side of a cell's face x lies on. If it is on the "close" side of every face of a cell, it lies in the Voronoi cell of the stored pattern, and receives its label. is O(dn2 ). An alternative but straightforward parallel implementation is shown in Fig. 4.17, which is O(1) in time and O(n) in space. There are three general algorithmic techniques for reducing the computational burden in nearest-neighbor searches: computing partial distances, prestructuring, and editing the stored prototypes. In partial distance, we calculate the distance using some subset r of the full d dimensions, and if this partial distance is too great we do not compute further. The partial distance based on r selected dimensions is r 1/2 partial distance Dr (a, b) = k=1 (ak - bk ) 2 (56) where r < d. Intuitively speaking, partial distance methods assume that what we know about the distance in a subspace is indicative of the full space. Of course, the partial distance is strictly non-decreasing as we add the contributions from more and more dimensions. Consequently, we can confidently terminate a distance calculation to any prototype once its partial distance is greater than the full r = d Euclidean distance to the current closest prototype. In presturcturing we create some form of search tree in which prototypes are selectively linked. During classification, we compute the distance of the test point to one or a few stored "entry" or "root" prototypes and then consider only the prototypes linked to it. Of these, we find the one that is closest to the test point, and recursively consider only subsequent linked prototypes. If the tree is properly structured, we will reduce the total number of prototypes that need to be searched. Consider a trivial illustration of prestructuring in which we store a large number of prototypes that happen to be distributed uniformly in the unit square, i.e., p(x) search tree 30 CHAPTER 4. NONPARAMETRIC TECHNIQUES editing U 0 , 1 . Imagine we prestructure this set using four entry or root prototypes -- 0 1 3/4 at 1/4 , 1/4 , 3/4 and 3/4 -- each fully linked only to points in its corresponding 1/4 3/4 1/4 quadrant. When a test pattern x is presented, the closest of these four prototypes is determined, and then the search is limited to the prototypes in the corresponding quadrant. In this way, 3/4 of the prototypes need never be queried. Note that in this method we are no longer guaranteed to find the closest prototype. For instance, suppose the test point is near a boundary of the quadrants, e.g., x = 0.499 0.499 . In this particular case only prototypes in the first quadrant will be searched. Note however that the closest prototype might actually be in one of the other three quadrants, somewhere near 0.5 . This illustrates a very general property in pattern 0.5 recognition: the tradeoff of search complexity against accuracy. More sophisticated search trees will have each stored prototype linked to a small number of others, and a full analysis of these methods would take us far afield. Nevertheless, here too, so long as we do not query all training prototypes, we are not guaranteed that the nearest prototype will be found. The third method for reducing the complexity of nearest-neighbor search is to eliminate "useless" prototypes during training, a technique known variously as editing, pruning or condensing. A simple method to reduce the O(n) space complexity is to eliminate prototypes that are surrounded by training points of the same category label. This leaves the decision boundaries -- and hence the error -- unchanged, while reducing recall times. A simple editing algorithm is as follows. Algorithm 3 (Nearest-neighbor editnd a2 . The small red number in each image is the Euclidean distance between the tangent approximation and the image generated by the unapproximated transformations. Of course, this Euclidean distance is 0 for the prototype and for the cases a1 = 1, a2 = 0 and a1 = 0, a2 = 1. (The patterns generated with a1 + a2 > 1 have a gray background because of automatic grayscale conversion of images with negative pixel values.) optimal a can also be found by standard matrix methods, but these generally have higher computational complexities, as is explored in Problems 21 & 22. We note that the methods for editing and prestructuring data sets described in Sec. 4.5.5 can be applied to tangent distance classifers too. Nearest-neighbor classifiers using tangent distance have been shown to be highly accurate, but they require the designer to know which invariances and to be able to perform them on each prototype. Some of the insights from tangent approach can also be used for learning which invariances underly the training data -- a topic we shall revisit in Chap. ??. 36 CHAPTER 4. NONPARAMETRIC TECHNIQUES x3 x1 x2 ) D tan(x', x2 Ta tange TV1 nt sp ace TV2 x2 x' x1 Figure 4.22: A stored prototype x , if transformed by combinations of two basic transformations, would fall somewhere on a complicated curved surface in the full d-dimensional space (gray). The tangent space at x is an r-dimensional Euclidean space, spanned by the tangent vectors (here TV1 and TV2 ). The tangent distance Dtan (x , x) is the smallest Euclidean distance from x to the tangent space of x , shown in the solid red lines for two points, x1 and x2 . Thus although the Euclidean distance from x to x1 is less than to x2 , for the tangent distance the situation is reversed. The Euclidean distance from x2 to the tangent space of x is a quadratic function of the parameter vector a, as shown by the pink paraboloid. Thus simple gradient descent methods can find the optimal vector a and hence the tangent distance Dtan (x , x2 ). 4.7 Fuzzy Classification Occassionally we may have informal knowledge about a problem domain where we seek to build a classifier. For instance, we might feel, generally speaking, that an adult salmon is oblong and light in color, while a sea bass is stouter and dark. The approach taken in fuzzy classification is to create so-called "fuzzy category memberships functions," which convert an objectively measurable parameter into a subjective "category memberships," which are then used for classification. We must stress immediately that the term "categories" used by fuzzy practitioners refers not to the final class as we have been discussing, but instead just overlapping ranges of feature values. For instance, if we consider the feature value of lightness, fuzzy practitioners might split this into five "categories" -- dark, medium-dark, medium, medium-light and light. In order to avoid misunderstandings, we shall use quotations when discussing such "categories." For example we might have the lightness and shape of a fish be judged as in Fig. 4.23. Next we need a way to convert an objective measurement in several features into a category decision about the fish, and for this we need a merging or conjunction conjunction rule -- a way to take the "category memberships" (e.g., lightness and shape) and rule yield a number to be used for making the final decision. Here fuzzy practitioners have 4.7. *FUZZY CLASSIFICATION 37 1 x Figure 4.23: "Category membership" functions, derived from the designer's prior knowledge, together with a lead to discriminants. In this figure x might represent an objectively measureable value such as the reflectivity of a fish's skin. The designer believes there are four relevant ranges, which might be called dark, medium-dark, medium-light and light. Note, the memberships are not in true categories we wish to classify, but instead merely ranges of feature values. at their disposal a large number of possible functions. Indeed, most functions can be used and there are few principled criteria to preference one overial way of obtaining polynomial discriminant functions. discriminant Before becoming too enthusiastic, however, we should note one of the problems with this approach. A key property of a useful window function is its tendency to peak at the origin and fade away elsewhere. Thus ((x - xi )/hn ) should peak sharply at x = xi , and contribute little to the approximation of pn (x) for x far from xi . Unfortunately, polynomials have the annoying property of becoming unbounded. Thus, in a polynomial expansion we might find the terms associated with an xi far from x contributing most (rather than least) to the expansion. It is quite important, therefore, to be sure that the expansion of each windown function is in fact accurate in the region of interest, and this may well require a large number of terms. There are many types of series expansions one might consider. Readers familiar with integral equations will naturally interpret Eq. 66 as an expansion of the kernel 4.9. *APPROXIMATIONS BY SERIES EXPANSIONS eigenfunction 43 (x, xi ) in a series of eigenfunctions. (In analogy with eigenvectors and eigenvalues, eigenfunctions are solutions to certain differential equations with fixed real-number coefficients.) Rather than computing eigenfunctions, one might choose any reasonable set of functions orthogonal over the region of interest and obtain a least-squares fit to the window function. We shall take an even more straightforward approach and expand the window function in a Taylor series. For simplicity, we confine our attention to a one-dimensional example using a Gaussian window function: (u) = e-u 2 m-1 (-1)j j=0 u2j . j! This expansion is most accurate near u = 0, and is in error by less than u2m /m!. If we substitute u = (x - xi )/h, we obtain a polynomial of degree 2(m - 1) in x and xi . For example, if m = 2 the window function can be approximated as x - xi h = and thus pn (x) = 1 nh n 1- x - xi 2 h 2 1 1 1 + 2 x xi - 2 x2 - 2 x2 , h h h i i=1 x - xi h b0 + b1 x + b2 x2 , (70) where the coefficients are 1 1 1 - h h3 n 2 1 h3 n 1 . h3 n n b0 b1 b2 = = x2 i i=1 xi i=1 = - This simple expansion condenses the information in n samples into the values, b0 , b1 , and b2 . It is accurate if the largest value of |x - xi | is not greater than h. Unfortunately, this restricts us to a very wide window that is not capable of much resolution. By taking more terms we can use a narrower window. If we let r be the largest value of |x - xi | and use the fact that the error is the m-term expansion of ((x - xi )/h) is less than (r/h)2m m!, then using Stirling's approximation for m! we find that the error in approximating pn (x) is less than 1 r/h h m! 2m h 2m 1 e m r h 2 m . (71) Thus, the error becomes small only when m > e(r/h)2 . This implies the need for many terms if the window size h is small relative to the distance r from x to the most 44 CHAPTER 4. NONPARAMETRIC TECHNIQUES distant sample. Although this example is rudimentary, similar considerations arise in the multidimensional case even when more sophisticated expansions are used, and the procedure is most attractive when the window size is relatively large. 4.10 Fisher Linear Discriminant One of the recurring problems encountered in applying statistical techniques to pattern recognition problems has been called the "curse of dimensionality." Procedures that are analytically or computationally manageable in low-dimensional spaces can become completely impractical in a space of 50 or 100 dimensions. Pure fuzzy methods are particularly ill-suited to such high-dimensional problems since it is implausible that the designer's linguistic intuition extends to such spaces. Thus, various techniques have been developed for reducing the dimensionality of the feature space in the hope of obtaining a more manageable problem. We can reduce the dimensionality from d dimensions to one dimension if we merely project the d-dimensional data onto a line. Of course, even if the samples formed well-separated, compact clusters in d-space, projection onto unchanged. If we have very little data, we would tend to project to a subspace of low dimension, while if there is more data, we can use a higher dimension, as we shall explore in Chap. ??. Once we have projected the distributions onto the optimal subspace (defined as above), we can use the methods of Chapt. ?? to create our full classifier. As in the two-class case, multiple discriminant analysis primarily provides a reasonable way of reducing the dimensionality of the problem. Parametric or nonparametric techniques that might not have been feasible in the original space may work well in the lower-dimensional space. In particular, it may be possible to estimate separate covariance matrices for each class and use the general multivariate normal assumption after the transformation where this could not be done with the original data. In general, if the transformation causes some unnecessary overlapping of the data and increases the theoretically achievable error rate, then the problem of classifying the data still remains. However, there are other ways to reduce the dimensionality of 4.11. SUMMARY 51 data, and we shall encounter this subject again in Chap. ??. We note that there are also alternate methods of discriminant analysis -- such as the selection of features based on statistical sigificance -- some of which are given in the references for this chapter. Of these, Fisher's method remains a fundamental and widely used technique. Summary There are two overarching approaches to non-parametric estimation for pattern classification: in one the densities are estimated (and then used for classification), in the other the category is chosen directly. The former approach is exemplified by Parzen windows and their hardware implementation, Probabilistic neural networks. The latter is exemplified by k-nearest-neighbor and several forms of relaxation networks. In the limit of infinite training data, the nearest-neighbor error rate is bounded from above by twice the Bayes error rate. The extemely high space complexity of the nominal nearest-neighbor method can be reduced by editing (e.g., removing those prototypes that are surrounded by prototypes of the same category), prestructuring the data set for efficient search, or partial distance calculations. Novel distance measures, such as the tangent distance, can be used in the nearest-neighbor algorithm for incorporating known tranformation invariances. Fuzzy classification methods employ heuristic choices of "category membership" and heuristic conjunction rules to obtain discriminant functions. Any benefit of such techniques is limited to cases where there is very little (or no) training data, small numbers of features, and when the knowledge can be gleaned from the designer's prior knowledge. Relaxation methods such as potential functions create "basins of attraction" surrounding training prototypes; when a test pattern lies in such a basin, the corresponding prototype can be easily identified along with its category label. Reduced coloumb energy networks are one in the class of such relaxation networks, the basins are adjusted to be as large as possible yet not include prototypes from other categories. The Fisher linear discriminant finds a good subspace in which categories are best separated; other techniques can then be applied in the subspace. Fisher's method can be extended to cases with multiple categories projected onto subspaces of higher dimension than a line. Bibliographical and Historical Remarks Parzen introduced his window method for estimating density functions [32], and its use in regression was pioneered by Ndaraya and Watson [?, ?]. Its natural application to classification problems stems from the work of Specht [39], including its PNN hardware implementation [40]. Nearest-neighbor methods were first introduced by [16, 17], but it was over fifteen years later that computer power had increased, thereby making it practical and renewing interest in its theoretical foundations. Cover and Hart's foundational work on asymptotic bounds [10] were expanded somewhat s are all large and roughly equal in Rd , and that neighborhoods that have even just a few points must have large radii. (c) Find ld (p), the length of a hypercube edge in d dimensions that contains the fraction p of points (0 p 1). To better appreciate the implications of your result, calculate: l5 (0.01), l5 (0.1), l20 (0.01), and l20 (0.1). (d) Show that nearly all points are close to an edge of the full space (e.g., the unit hypercube in d dimensions). Do this by calculating the L distance from one point to the closest other point. This shows that nearly all points are closer to an edge than to another training point. (Argue that L is more favorable than L2 distance, even though it is easier to calculate here.) The result shows that most points are on or near the convex hull of training samples and that nearly every point is an "outlier" with respects to all the others. 12. Show how the "curse of dimensionality" (Problem 11) can be "overcome" by choosing or assuming that your model is of a particular sort. Suppose that we are estimating a function of the form y = f (x) + N (0, 2 ). n (a) Suppose the true function is linear, f (x) = j=1 aj xj , and that the approximation ^ is f (x) = n aj xj . Of course, the fit coefficients are: ^ j=1 n yi - d 2 aj xij , aj = arg min ^ aj i=1 j=1 ^ for j = 1, . . . , d. Prove that E[f (x) - f (x)]2 = d 2 /n, i.e., that it increases linearly with d, and not exponentially as the curse of dimensionality might otherwise suggest. (b) Generalize your result from part (a) to the case where a function is expressed n in a different basis set, i.e., f (x) = i=1 ai Bi (x) for some well-behaved basis set Bi (x), and hence that the result does not depend on the fact that we have used a linear basis. 13. Consider classifiers based on samples from the distributions 2x for 0 x 1 0 otherwise, 2 - 2x for 0 x 1 0 otherwise. p(x|1 ) and p(x|2 ) = = (a) What is the Bayes decision rule and the Bayes classification error? 56 CHAPTER 4. NONPARAMETRIC TECHNIQUES (b) Suppose we randomly select a single point from 1 and a single point from 2 , and create a nearest-neighbor classifier. Suppose too we select a test point from one of the categories (1 for definiteness). Integrate to find the expected error rate P1 (e). (c) Repeat with two training samples from each category and a single test point in order to find P2 (e). (d) Generalize to find the arbitrary Pn (e). (e) Compare lim Pn (e) with the Bayes error. n 14. Repeat Problem 13 but with 3/2 0 3/2 0 for 0 x 2/3 otherwise, for 1/3 x 1 otherwise. p(x|1 ) and p(x|2 ) = = 15. Expand in greater detail Algorithm 3 and add a conditional branch that will speed it. Assuming the data points come from c categories and there are, on average, k Voronoi neighbors of any point x, on average how much faster will your improved algorithm be? 16. Consider the simple nearest-neighbor editing algorithm (Algorithm 3). (a) Show by counterexample that this algorithm does not yield the minimum set of points. (Hint: consider a problem where the points from each of two-categories are constrained to be on the intersections of a two-dimensional Cartesian grid.) (b) Create a sequential editing algorithm, in which each point is considered in turn, and retained or rejected before the next point is considered. Prove that your algorithm does or does not depend upon the sequence the points are considered. 17. Consider classification problem where each of the c categories possesses the same distribution as well as prior P (i ) = 1/c. Prove that the upper bound in Eq. 53, i.e., P P 2 - c P , c-1 is achieved in this "zero-information" case. 18. Derive Eq. 55. Section 4.6 19. Consider the Euclidean metric in d dimensions: d D(a, b) = k=1 (ak - bk )2 . 4.11. PROBLEMS 57 Suppose we rescale each axis by a fixed factor, i.e., let xk = k xk for real, non-zero constants k , k = 1, 2, ..., d. Prove that the resulting space is a metric space. Discuss the import of this fact for standard nearest-neighbor classification methods. 20. Prove that the Minkowski me of the properties of density estimation in the following way. sampled from x3 0.14 -0.38 0.69 1.31 0.87 1.35 0.92 0.97 0.99 0.88 4.11. COMPUTER EXERCISES 63 (a) Write a program to generate points according to a uniform distribution in a unit cube, -1/2 xi 1/2 for i = 1, 2, 3. Generate 104 such points. (b) Write a program to estimate the density at the origin based on your 104 points as a function of the size of a cubical window function of size h. Plot your estimate as a function of h, for 0 < h 1. (c) Evaluate the density at the origin using n of your points and the volume of a cube window which just encloses n points. Plot your estimate as a function of n = 1, ..., 104 . (d) Write a program to generate 104 points from a spherical Gaussian density (with = I) centered on the origin. Repeat (b) & (c) with your Gaussian data. (e) Discuss any qualitative differences between the functional dependencies of your estimation results for the uniform and Gaussian densities. Section 4.3 2. Consider Parzen-window estimates and classifiers for points in the table above. Let your window function be a spherical Gaussian, i.e., ((x - xi )/h) Exp[-(x - xi )t (x - xi )/(2h2 )]. (a) Write a program to classify an arbitrary test point x based on the Parzen window estimates. Train your classifier using the three-dimensional data from your three categories in the table above. Set h = 1 and classify the following three points: (0.50, 1.0, 0.0)t , (0.31, 1.51, -0.50)t and (-0.3, 0.44, -0.1)t . (b) Repeat with h = 0.1. Section 4.4 3. Consider k-nearest-neighbor density estimations in different numbers of dimensions (a) Write a program to find the k-nearest-neighbor density for n (unordered) points in one dimension. Use your program to plot such a density estimate for the x1 values in category 3 in the table above for k = 1, 3 and 5. (b) Write a program to find the k-nearest-neighbor density estimate for n points in two dimensions. Use your program to plot such a density estimate for the x1 - x2 values in 2 for k = 1, 3 and 5. (c) Write a program to form a k-nearest-neighbor classifier for the three-dimensional data from the three categories in the table above. Use your program with k = 1, 3 and 5 to estimate the relative densities at the following points: (-0.41, 0.82, 0.88)t , (0.14, 0.72, 4.1)t and (-0.81, 0.61, -0.38)t . Section 4.5 4. Write a program to create a Voronoi tesselation in two dimensions as follows. 64 CHAPTER 4. NONPARAMETRIC TECHNIQUES (a) First derive analytically the equation of a line separating two arbitrary points. (b) Given the full data set D of prototypes and a particular point x D, write a program to create a list of line segments comprising the Voronoi cell of x. (c) Use your program to form the Voronoi tesselation of the x1 - x2 features from the data of 1 and 3 in the table above. Plot your Voronoi diagram. (d) Write a program to find the category decision boundary based on this full set D. (e) Implement a version of the pruning method described in Algorithm 3. Prune your data set from (c) to form a condensed set. (f) Apply your programs from (c) & (d) to form the Voronoi tesselation and boundary for your condensed data set. Compare the decision boundaries you found for the full and the condensed sets. 5. Explore the tradeoff between computational complexity (as it relates to partial distance calculations) and search accuracy in nearest-neighbor classifiers in the following exercise. (a) Write a program to generate n prototypes from a uniform distributions in a 6-dimensional hypercube centered on the origin. Use your program to generate 106 points for category 1 , 106 different points for category 2 , and likewise for 3 and 4 . Denote this full set D. (b) Use your program to generate a test set Dt of n = 100 points, also uniformly distributed in the 6-dimensional hypercube. (c) Write a program to implement the nearest-neighbor neighbor algorithm. Use this program to label each of your points in Dt by the category of its nearest neighbor in D. From now on we will assume that the labels you find are gorithm, 18 training Algorithm, 17 probability posterior nonparametric estimation, 5 subjective, 38 prototype, 19 Rayleigh quotient, 47 RCE, see reduced coulomb energy classification Algorithm, 41 training Algorithm, 40 reduced coulomb energy, 40 scatter between-class, 47 74 within-class, 46, 47 scatter matrix, see matrix, scatter search tree, 30 subjective probability, see probability, subjective tangent vector, 34 training data limited, 7 triangle inequality, see metric, triangle inequality variance Parzen estimate convergence, 11 vector mean total, see mean vector, total Voronoi cell, 23 tesselation, 23 window function Gaussian, 43 within-class scatter, see scatter, withinclass zero-information distribution, 27 INDEX Contents 5 Linear Discriminant Functions 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Linear Discriminant Functions and Decision Surfaces . . . . . . . . . 5.2.1 The Two-Category Case . . . . . . . . . . . . . . . . . . . . . 5.2.2 The Multicategory Case . . . . . . . . . . . . . . . . . . . . . 5.3 Generalized Linear Discriminant Functions . . . . . . . . . . . . . . 5.4 The Two-Category Linearly-Separable Case . . . . . . . . . . . . . . 5.4.1 Geometry and Terminology . . . . . . . . . . . . . . . . . . . 5.4.2 Gradient Descent Procedures . . . . . . . . . . . . . . . . . . Algorithm 1: Gradient descent . . . . . . . . . . . . . . . . . . . . . . Algorithm 2: Newton descent . . . . . . . . . . . . . . . . . . . . . . 5.5 Minimizing the Perceptron Criterion Function . . . . . . . . . . . . . 5.5.1 The Perceptron Criterion Function . . . . . . . . . . . . . . . Algorithm 3: Batch Perceptron . . . . . . . . . . . . . . . . . . . . . 5.5.2 Convergence Proof for Single-Sample Correction . . . . . . . Algorithm 4: Fixed increment descent . . . . . . . . . . . . . . . . . 5.5.3 Some Direct Generalizations . . . . . . . . . . . . . . . . . . Algorithm 5: Fixed increment descent . . . . . . . . . . . . . . . . . Algorithm 6: Batch variable increment Perceptron . . . . . . . . . . Algorithm 7: Balanced Winnow algorithm . . . . . . . . . . . . . . . 5.6 Relaxation Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . 5.6.1 The Descent Algorithm . . . . . . . . . . . . . . . . . . . . . Algorithm 8: Relaxation training with margin . . . . . . . . . . . . . Algorithm 9: Relaxation rule . . . . . . . . . . . . . . . . . . . . . . 5.6.2 Convergence Proof . . . . . . . . . . . . . . . . . . . . . . . . 5.7 Nonseparable Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . 5.8 Minimum Squared Error Procedures . . . . . . . . . . . . . . . . . . 5.8.1 Minimum Squared Error and the Pseudoinverse . . . . . . . . Example 1: Constructing a linear classifier by matrix pseudoinverse 5.8.2 Relation to Fisher's Linear Discriminant . . . . . . . . . . . . 5.8.3 Asymptotic Approximation to an Optimal Discriminant . . . 5.8.4 The Widrow-Hoff Procedure . . . . . . . . . . . . . . . . . . . Algorithm 10: LMS algorithm . . . . . . . . . . . . . . . . . . . . . . 5.8.5 Stochastic Approximation Methods . . . . . . . . . . . . . . . 5.9 *The Ho-Kashyap Procedures . . . . . . . . . . . . . . . . . . . . . . 5.9.1 The Descent Procedure . . . . . . . . . . . . . . . . . . . . . Algorithm 11: Ho-Kashyap . . . . . . . . . . . . . . . . . . . . . . . 5.9.2 Convergence Proof . . . . . . . . . . . . . . . . . . . . . . . . 1 3 3 4 4 6 6 11 11 12 13 14 14 14 15 17 18 21 21 22 23 23 23 24 25 25 27 28 28 29 30 32 34 34 35 37 37 39 39 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 5.9.3 Nonseparable Behavior . . . . . . . . . . . . . . 5.9.4 Some Related Procedures . . . . . . . . . . . . Algorithm 12: Modified Ho-Kashyap . . . . . . . . . . 5.10 *Linear Programming Algorithms . . . . . . . . . . . . 5.10.1 Linear Programming . . . . . . . . . . . . . . . 5.10.2 The Linearly Separable Case . . . . . . . . . . 5.10.3 Minimizing the Perceptron Criterion Function . 5.11 *Support Vector Machines . . . . . . . . . . . . . . . . 5.11.1 SVM training . . . . . . . . . . . . . . . . . . . Example 2: SVM for the XOR problem . . . . . . . . . 5.12 Multicategory Generalizations . . . . . . . . . . . . . . 5.12.1 Kesler's Construction . . . . . . . . . . . . . . 5.12.2 Convergence of the Fixed-Increment Rule . . . 5.12.3 Generalizations for MSE Procedures . . . . . . Bibliographical and Historical Remarks . . . . . . . . . . . Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . Computer exercises . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . CONTENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 42 42 44 44 45 46 49 50 51 52 52 53 55 57 57 65 67 70 Chapter 5 Linear Discriminant Functions 5.1 Introduction I n Chap. ?? we assumed that the forms for the underlying probability densities were known, and used the training samples to estimate the values of their parameters. In this chapter we shall instead assume we know the proper forms for the discriminant functions, and use the samples to estimate the values of parameters of the classifier. We shall examine various procedures for determining discriminant functions, some of which are statistical and some of which are not. None of them, however, requires knowledge of the forms of underlying probability distributions, and in this limited sense they can be said to be nonparametric. Throughout this chapter we shall be concerned with discriminant functions that are either linear in the components of x, or linear in some given set of functions of x. Linear discriminant functions have a variety of pleasant analytical properties. As we have seen in Chap. ??, they can be optimal if the underlying distributions are cooperative, such as Gaussians having equal covariance, as might be obtained through an intelligent choice of feature detectors. Even when they are not optimal, we might be willing to sacrifice some performance in order to gain the advantage of their simplicity. Linear discriminant functions are relatively easy to compute and in the absense of information suggesting otherwise, linear classifiers are an attractive candidates for initial, trial classifiers. They also illustrate a number of very important principles which will be used more fully in neural networks (Chap. ??). The problem of finding a linear discriminant function will be formulated as a problem of minimizing a criterion function. The obvious criterion function for classification purposes is the sample risk, or training error -- the average loss incurred in classifying the set of training samples. We must emphasize right away, however, that despite the attractiveness of this criterion, it is fraught with problems. While our goal will be to classify novel test patterns, a small training error does not guarantee a small test error -- a fascinating and subtle problem that will command our attention in Chap. ??. As we shall see here, it is difficult to derive the minimum-risk linear discriminant anyway, and for that reason we investigate several related criterion functions that are analytically more tractable. Much of our attention will be devoted to studying the convergence properties 3 training error 4 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS and computational complexities of various gradient descent procedures for minimizing criterion functions. The similarities between many of the procedures sometimes makes it difficult to keep the differences between them clear and for this reason we have included a summary of the principal results in Table 5.1 at the end of Sect. 5.10. 5.2 5.2.1 Linear Discriminant Functions and Decision Surfaces The Two-Category Case A discriminant function that is a linear combination of the components of x can be written as g(x) = wt x + w0 , threshold weight (1) where w is the weight vector and w0 the bias or threshold weight. A two-category linear classifier implements the following decision rule: Decide 1 if g(x) > 0 and 2 if g(x) < 0. Thus, x is assigned to 1 if the inner product wt x exceeds the threshold -w0 and 2 otherwise. If g(x) = 0, x can ordinarily be assigned to either class, but in this chapter we shall leave the assignment undefined. Figure 5.1 shows a typical implementation, a clear example of the general structure of a pattern recognition system we saw in Chap. ??. g(x) w0 wd w1 w2 x0 = 1 x1 x2 ... ... xd Figure 5.1: A simple linear classifier having d input units, each corresponding to the values of the components of an input vector. Each input feature value xi is multiplied by its corresponding weight wi ; the output unit sums all these products and emits a +1 if wt x + w0 > 0 or a -1 otherwise. The equation g(x) = 0 defines the decision surface that separates points assigned to 1 from points assigned to 2 . When g(x) is linear, this decision surface is a hyperplane. If x1 and x2 are both on the decision surface, then wt x1 + w0 = wt x2 + w0 or wt (x1 - x2 ) = 0, 5.2. LINEAR DISCRIMINANT FUNCTIONS AND DECISION SURFACES 5 and this shows that w is normal to any vector lying in the hyperplane. In general, the hyperplane H divides the feature space into two halfspaces, decision region R1 for 1 and region R2 for 2 . Since g (x) > 0 if x is in R1 , it follows that the normal vector w points into R1 . It is sometimes said that any x in R1 is on the positive side of H, and any x in R2 is on the negative side. The discriminant function g(x) gives an algebraic measure of the distance from x to the hyperplane. Perhaps the easiest way to see this is to express x as x = xp + r w , w where xp is the normal projection of x onto H, and r is the desired algebraic distance -- positive if x is on the positive side and negative if x is on the negative side. Then, since g(xp ) = 0, g(x) = wt x + w0 = r w , or g(x) . w r= In particular, the distance from the origin to H is given by w0 / w . If w0 > 0 the origin is on the positive side of H, and if w0 < 0 it is on the negative side. If w0 = 0, then g(x) has the homogeneous form wt x, and the hyperplane passes through the origin. A geometric illustration of these algebraic results is given in Fig. 5.2. x3 r x R1 xp R2 w || /||w g(x)=0 x2 H x1 w0 Figure 5.2: The linear decision boundary H, where g(x) = wt x + w0 = 0, separates the feature space into two half-spaces R1 (where g(x) > 0) and R2 (where g(x) < 0). To summarize, a linear discriminant function divides the feature space by a hyperplane decision surface. The orientation of the surface is determined by the normal vector w, and the location of the surface is determined by the bias w0 . The discriminant function g(x) is proportional to the signed distance from x to the hyperplane, with g(x) > 0 when x is on the positive side, and g(x) < 0 when x is on the negative side. 6 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS 5.2.2 The Multicategory Case There is more than one way to devise multicategory classifiers employing linear discriminant functions. For example, we might reduce the problem to c - 1 two-class problems, where the ith problem is solved by a linear discriminant function that separates points assigned to i from those not assigned to i . A more extravagant approach would be to use c(c - 1)/2 linear discriminants, one for every pair of classes. As illustrated in Fig. 5.3, both of these approaches can lead to regions in which the classification is undefined. We shall avoid this problem by adopting the approach taken in Chap. ??, defining c linear discriminant functions gi (x) = wt xi + wi0 linear machine i = 1, ..., c, (2) and assigning x to i if gi (x) > gj (x) for all j = i; in case of ties, the classification is left undefined. The resulting classifier is called a linear machine. A linear machine divides the feature space into c decision regions, with gi (x) being the largest discriminant if xo y reduces the problem to one of finding a homogeneous linear discriminant function. Some of the advantages and disadvantages of this approach can be clarified by considering a simple example. Let the quadratic discriminant function be g(x) = a1 + a2 x + a3 x2 , so that the three-dimensional vector y is given by 1 y = x . x2 (7) (8) The mapping from x to y is illustrated in Fig. 5.5. The data remain inherently onedimensional, since varying x causes y to trace out a curve in three dimensions. Thus, one thing to notice immediately is that if x is governed by a probability law p(x), the induced density p(y) will be degenerate, being zero everywhere except on the curve, ^ ^ where it is infinite. This is a common problem whenever d > d, and the mapping takes points from a lower-dimensional space to a higher-dimensional space. ^ ^ The plane H defined by at y = 0 divides the y-space into two decision regions R1 ^ 2 . Figure ?? shows the separating plane corresponding to a = (-1, 1, 2)t , the and R ^ ^ decision regions R1 and R2 , and their corresponding decision regions R1 and R2 in the original x-space. The quadratic discriminant function g(x) = -1 + x + 2x2 is 5.3. GENERALIZED LINEAR DISCRIMINANT FUNCTIONS 9 4 y= () 1 x x2 2 R1 y3 x -2 -1 0 1 2 0 R2 1 0 0.5 1 0 2 R1 R2 R1 y2 y1 1.5 -1 2 2.5 Figure 5.5: The mapping y = (1, x, x2 )t takes a line and transforms it to a parabola in three dimensions. A plane splits the resulting y space into regions corresponding to two categories, and this in turn gives a non-simply connected decision region in the one-dimensional x space. positive if x < -1 or if x > 0.5, and thus R1 is multiply connected. Thus although the decision regions in y-space are convex, this is by no means the case in x-space. More generally speaking, even with relatively simple functions yi (x), decision surfaces induced in an x-space can be fairly complex (Fig. 5.6). Unfortunately, the curse of dimensionality often makes it hard to capitalize on ^ this flexibility in practice. A complete quadratic discriminant function involves d = (d + 1)(d + 2)/2 terms. If d is modestly large, say d = 50, this requires the com^ putation of a great many terms; inclusion of cubic and higher orders leads to O(d3 ) ^ components of the weight vector a must be determined terms. Furthermore, the d ^ from training samples. If we think of d as specifying the number of degrees of freedom for the discriminant function, it is natural to require that the number of samples be not less than the number of degrees of freedom (cf., Chap. ??). Clearly, a general series expansion of g(x) can easily lead to completely unrealistic requirements for computation and data. We shall see in Sect. ?? that this drawback can be accommodated by imposing a constraint of large margins, or bands between the training patterns, however. In this case, we are not technically speaking fitting all the free parameters; instead, we are relying on the assumption that the mapping to a highdimensional space does not impose any spurious structure or relationships among the training points. Alternatively, multilayer neural networks approach this problem by employing multiple copies of a single nonlinear function of the input features, as we shall see in Chap. ??. While it may be hard to realize the potential benefits of a generalized linear discriminant function, we can at least exploit the convenience of being able to write g(x) in the homogeneous form at y. In the particular case of the linear discriminant function 10 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS y3 y2 w x x 2 1x R1 ^ 2 ) 1 x y = x2 ^ H R2 x2 ( ^ y1 x1 R1 R2 R1 x1 Figure 5.6: The two-dimensional input space x is mapped through a polynomial function f to y. Here the mapping is y1 = x1 , y2 = x2 and y3 x1 x2 . A linear discriminant in this transformed space is a hyperplane, which cuts the surface. Points ^ to the positive side of the hyperplane H correspond to category 1 , and those beneath it 2 . Here, in terms of the x space, R1 is a not simply connected. d d g(x) = w0 + i=1 wi xi = i=0 wi xi (9) where we set x0 = 1. Thus we can write 1 x1 y= . . . xd augmented vector , = x 1 (10) and y is sometimes called an augmented feature vector. Likewise, an augmented weight vector can be written as: w0 w0 w1 . a= . = (11) . w . wd This mapping from d-dimensional x-space to (d+1)-dimensional y-space is mathematically trivial but nonetheless quite convenient. The addition of a constant component to x preserves all distance relationships among samples. The resulting y vectors all lie in a d-dimensional subspace, which is the x-space itself. The hyperplane deci^ sion surface H defined by at y = 0 passes through the origin in y-space, even though the corresponding hyperplane H can be in any position in x-space. The distance from ^ y to H is given by |at y|/ a , or |g(x)|/ a . Since a > w , this distance is less 5.4. THE TWO-CATEGORY LINEARLY-SEPARABLE CASE 11 than, or at most equal to the distance from x to H. By using this mapping we reduce the problem of finding a weight vector w and a threshold weight w0 to the problem of finding a single weight vector a (Fig. 5.7). R1 y0 y0 = 1 y2 a R2 y0 = 0 y1 Figure 5.7: A three-dimensional augmented feature space y and augmented weight vector a (at the origin). The set of points for which at y = 0 is a plane (or more generally, a hyperplane) perpendicular to a and passing through the origin of yspace, as indicated by the red disk. Such a plane need not pass through the origin of the two-dimensional x-space at the top, of course, as shown by the dashed line. Thus there exists an augmented weight vector a that will lead to any straight decision line in x-space. 5.4 5.4.1 The Two-Category Linearly-Separable Case Geometry and Terminology Suppose now that we have a set of n samples y1 , ..., yn , some labelled 1 and some labelled 2 . We want to use these samples to determine the weights a in a linear discriminant function g(x) = at y. Suppose we have reason to believe that there exists a solution for which the probability of error is very low. Then a reasonable approach is to look for a weight vector that classifies all of the samples correctly. If such a weight vector exists, the samples are said to be linearly separable. A sample yi is classified correctly if at yi > 0 and yi is labelled 1 or if at yi < 0 and yi is labelled 2 . This suggests a "normalization" that simplifies the treatment of the two-category case, viz., the replacement of all samples labelled 2 by their negatives. With this "normalization" we can forget the labels and look for a weight vector a such that at yi > 0 for all of the samples. Such a weight vector is called a separating vector or more generally a solution vector. The weight vector a can be thought of as specifying a point in weight space. Each sample yi places a constraint on the possible location of a solution vector. The equation at yi = 0 defines a hyperplane through the origin of weight space having yi as a normal vector. The solution vector -- if it exists -- must be on the positive side linearly separable separating vector 12 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS solution region of every hyperplane. Thus, a solution vector must lie in the intersection of n halfspaces; indeed any vector in this region is a solution vector. The corresponding region is called the solution region, and should not be confused with the decision region in feature space corresponding to any particular category. A two-dimensional example illustrating the solution region for both the normalized and the unnormalized case is shown in Fig. 5.8. solution region y2 a solution region y2 a separ atin ne g pla y1 "sepa ra pla ting" ne y1 Figure 5.8: Four training samples (black for 1 , red for 2 ) and the solution region in feature space. The figure on the left shows the raw data; the solution vectors leads to a plane that separates the patterns from the two categories. In the figure on the right, the red points have been "normalized" -- i.e., changed in sign. Now the solution vector leads to afier. batch training Thus, the batch Perceptron algorithm for finding a solution vector can be stated very simply: the next weight vector is obtained by adding some multiple of the sum of the misclassified samples to the present weight vector. We use the term "batch" to refer to the fact that (in general) a large group of samples is used when computing each weight update. (We shall soon see alternate methods based on single samples.) Figure 5.12 shows how this algorithm yields a solution vector for a simple two-dimensional example with a(1) = 0, and (k) = 1. We shall now show that it will yield a solution for any linearly separable problem. 5.5. MINIMIZING THE PERCEPTRON CRITERION FUNCTION 17 10 5 Jp 0 y3 y1 -2 0 a2 2 y2 y3 y1 solution y3 region 4 4 2 a1 0 -2 Figure 5.12: The Perceptron criterion, Jp is plotted as a function of the weights a1 and a2 for a three-pattern problem. The weight vector begins at 0, and the algorithm sequentially adds to it vectors equal to the "normalized" misclassified patterns themselves. In the example shown, this sequence is y2 , y3 , y1 , y3 , at which time the vector lies in the solution region and iteration terminates. Note that the second update (by y3 ) takes the candidate vector farther from the solution region than after the first update (cf. Theorem 5.1. (In an alternate, batch method, all the misclassified points are added at each iteration step leading to a smoother trajectory in weight space.) 5.5.2 Convergence Proof for Single-Sample Correction We shall begin our examination of convergence properties of the Perceptron algorithm with a variant that is easier to analyze. Rather than testing a(k) on all of the samples and basing our correction of the set Yk of misclassified training samples, we shall consider the samples in a sequence and shall modify the weight vector whenever it misclassifies a single sample. For the purposes of the convergence proof, the detailed nature of the sequence is unimportant as long as every sample appears in the sequence infinitely often. The simplest way to assure this is to repeat the samples cyclically, though from a practical point of view random selection is often to be preferred (Sec. 5.8.5). Clearly neither the batch nor this single-sample version of the Perceptron algorithm are on-line since we must store and potentially revisit all of the training patterns. Two further simplifications help to clarify the exposition. First, we shall temporarily restrict our attention to the case in which (k) is constant -- the so-called fixed-increment case. It is clear from Eq. 18 that if (t) is constant it merely serves to scale the samples; thus, in the fixed-increment case we can take (t) = 1 with no loss in generality. The second simplification merely involves notation. When the samples fixed increment 18 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS are considered sequentially, some will be misclassified. Since we shall only change the weight vector when there is an error, we really need only pay attention to the misclassified samples. Thus we shall denote the sequence of samples using superscripts, i.e., by y1 , y2 , ..., yk , ..., where each yk is one of the n samples y1 , ..., yn , and where each yk is misclassified. For example, if the samples y1 , y2 , and y3 are considered cyclically, and if the marked samples y1 , y2 , y3 , y1 , y2 , y3 , y1 , y2 , ... fixedincrement rule (19) are misclassified, then the sequence y1 , y2 , y3 , y4 , y5 , ... denotes the sequence y1 , y3 , y1 , y2 , y2 , ... With this understanding, the fixed-increment rule for generating a sequence of weight vectors can be written as a(1) a(k + 1) = a(k) + yk arbitrary k1 (20) where at (k)yk 0 for all k. If we let n denote the total number of patterns, the algorithm is: Algorithm 4 (Fixed-increment single-sample Perceptron) 1 2 3 4 5 6 begin initialize a, k = 0 do k (k + 1)modn if yk is misclassified by a then a a - yk until all patterns properly classified return a end The fixed-increment Perceptron rule is the simplest of many algorithms that have been proposed for solving systems of linear inequalities. Geometrically, its interpretation in weight space is particularly clear. Since a(k) misclassifies yk , a(k) is not on the positive side of the yk hyperplane at yk = 0. The addition of yk to a(k) moves the weight vector directly toward and perhaps across this hyperplane. Whether the hyperplane is crossed or not, the new inner product at (k + 1)yk is larger than the old inner product at (k)yk by the amount yk 2 , and the correction is clearly moving the weight vector in a good direction (Fig. 5.13). 5.5. MINIMIZING THE PERCEPTRON CRITERION FUNCTION 1 2 3 19 4 5 6 7 8 9 Figure 5.13: Samples from two categories, 1 (black) and 2 (red) are shown in augmented feature space, along with an augmented weight vector a. At each step in a fixed-increment rule, one of the misclassified patterns, yk , is shown by the large dot. A correction a (proportional to the pattern vector yk ) is added to the weight vector -- towards an 1 point or away from an 2 point. This changes the decision boundary from the dashed position (from the previous update) to the solid position. The sequence of resulting a vectors is shown, where later values are shown darker. In this example, by step 9 a solution vector has been found and the categories successfully separated by the decision boundary shown. Clearly this algorithm can only terminate if the samples are linearly separable; we now prove that indeed it terminates so long as the samples are linearly separable. Theorem 5.1 (Perceptron Convergence) If training samples are linearly separable then the sequence of weight vectors given by Algorithm 4 will terminate at a solution vector. Proof: In seeking a proof, it is natural to try to show that each correction brings the weight ^ vector closer to the solution region. That is, one might try to show that if a is any ^ ^ solution vector, then a(k + 1) - a is smaller than a(k) - a . While this turns out 20 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS not to be true in general (cf. steps 6 & 7 in Fig. 5.13), we shall see that it is true for solution vectors that are sufficiently long. ^ ^ Let a be any solution vector, so that at yi is strictly positive for all i, and let be a positive scale factor. From Eq. 20, a(k + 1) - ^ = (a(k) - ^) + yk a a and hence a(k + 1) - ^ a 2 = a(k) - ^ a 2 + 2(a(k) - ^)t yk + yk a 2 . Since yk was misclassified, at (k)yk 0, and thus a(k + 1) - ^ a 2 a(k) - ^ a 2 - 2^t yk + yk a 2 . ^ Because at yk is strictly positive, the second term will dominate the third if is sufficiently large. In particular, if we let be the maximum length of a pattern vector, 2 = max yi 2 , i (21) and be the smallest inner product of the solution vector with any pattern vector, i.e., ^ = min at yi > 0, i (22) then we have the inequality a(k + 1) - ^ a If we choose = we obtain a(k + 1) - ^ a 2 2 a(k) - ^ a 2 - 2 + 2 . 2 , (23) a(k) - ^ a 2 - 2. Thus, the squared distance from a(k) to ^ is reduced by at least 2 at each correction, a and after k corrections a(k + 1) - ^ a 2 a(k) - ^ a 2 - k 2 . (24) Since the squared distance cannot become negative, it follows that the sequence of corrections must terminate after no more than k0 corrections, where k0 = a(1) - ^ a 2 2 . (25) 5.5. MINIMIZING THE PERCEPTRON CRITERION FUNCTION 21 Since a correction occurs whenever a sample is misclassified, and since each sample appears infinitely often in the sequence, it follows that when corrections cease the resulting weight vector must classify all of the samples correctly. The number k0 gives us a bound on the number of corrections. If a(1) = 0, we get the following particularly simple expression for k0 : ^ 2 a k0 = 2 2 ^ 2 2 a = 2 2 max yi = i i 2 ^ a 2 t^ min[yi a]2 . (26) The denominator in Eq. 26 shows that the difficulty of the problem is essentially determined by the samples most nearly orthogonal to the solution vector. Unfortunately, it provides no help when we face an unsolved problem, since the bound is expressed in terms of a solution vector whid be finite. It is not outside either, since each correction causes the weight vector to move times its distance from the boundary plane, thereby preventing the vector from being bounded away from the boundary forever. Hence the limit point must be on the boundary. 5.7 Nonseparable Behavior The Perceptron and relaxation procedures give us a number of simple methods for finding a separating vector when the samples are linearly separable. All of these methods are called error-correcting procedures, because they call for a modification of the weight vector when and only when an error is encountered. Their success on separable problems is largely due to this relentless search for an error-free solution. In practice, one would only consider the use of these methods if there was reason to believe that the error rate for the optimal linear discriminant function is low. Of course, even if a separating vector is found for the training samples, it does not follow that the resulting classifier will perform well on independent test data. ^ A moment's reflection will show that any set of fewer than 2d samples is likely to be linearly separable -- a matter we shall return to in Chap. ??. Thus, one should use several times that many design samples to overdetermine the classifier, thereby ensuring that the performance on training and test data will be similar. Unfortunately, sufficiently large design sets are almost certainly not linearly separable. This makes it important to know how the error-correction procedures will behave when the samples are nonseparable. Since no weight vector can correctly classify every sample in a nonseparable set (by definition), it is clear that the corrections in an error-correction procedure can never cease. Each algorithm produces an infinite sequence of weight vectors, any member of which may or may not yield a useful "solution." The exact nonseparable behavior of these rules has been studied thoroughly in a few special cases. It is known, for example, that the length of the weight vectors produced by the fixed-increment rule are bounded. Empirical rules for terminating the correction procedure are often based on this tendency for the length of the weight vector to fluctuate near some limiting value. From a theoretical viewpoint, if the components of the samples are integervalued, the fixed-increment procedure yields a finite-state process. If the correction process is terminated at some arbitrary point, the weight vector may or may not be in a good state. By averaging the weight vectors produced by the correction rule, one can reduce the risk of obtaining a bad solution by accidentally choosing an unfortunate termination time. A number of similar heuristic modifications to the error-correction rules have been suggested and studied empirically. The goal of these modifications is to obtain acceptable performance on nonseparable problems while preserving the ability to find a separating vector on separable problems. A common suggestion is the use of a variable increment (k), with (k) approaching zero as k approaches infinity. The rate at which (k) approaches zero is quite important. If it is too slow, the results will still be sensitive to those training samples that render the set nonseparable. If it is too fast, the weight vector may converge prematurely with less than optimal results. One way to choose (k) is to make it a function of recent performance, decreasing it as performance improves. Another way is to program (k) by a choice such as (k) = (1)/k. When we examine stochastic approximation techniques, we shall see that this latter choice is the theoretical solution to an analogous problem. Before we errorcorrecting procedure 28 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS take up this topic, however, we shall consider an approach that sacrifices the ability to obtain a separating vector for good compromise performance on both separable and nonseparable problems. 5.8 5.8.1 Minimum Squared Error Procedures Minimum Squared Error and the Pseudoinverse The criterion functions we have considered es that the sequence of weight vectors tends to converge to the desired solution. Instead of pursuing this topic further, we shall turn to a very similar rule that arises from a stochastic descent procedure. We note, however, that the solution need not give a separating vector, even if one exists, as shown in Fig. 5.17 (Computer exercise 10). y2 se r pa ati ng r pe hy e an pl LMS solut ion y1 Figure 5.17: The LMS algorithm need not converge to a separating hyperplane, even if one exists. Since the LMS solution minimizes the sum of the squares of the distances of the training points to the hyperplane, for this exmple the plane is rotated clockwise compared to a separating hyperplane. 5.8.5 Stochastic Approximation Methods All of the iterative descent procedures we have considered thus far have been described in deterministic terms. We are given a particular set of samples, and we generate a particular sequence of weight vectors. In this section we digress briefly to consider an MSE procedure in which the samples are drawn randomly, resulting in a random sequence of weight vectors. We will return in Chap. ?? to the theory of stochastic approximation though here some of the main ideas will be presented without proof. Suppose that samples are drawn independently by selecting a state of nature with probability P (i ) and then selecting an x according to the probability law p(x|i ). For each x we let be its label, with = +1 if x is labelled 1 and = -1 if x is labelled 2 . Then the data consist of an infinite sequence of independent pairs (x, 1 ), (x2 , 2 ), ..., (xk , k ), .... Even though the label variable is binary-valued it can be thought of as a noisy version of the Bayes discriminant function g0 (x). This follows from the observation that P ( = 1|x) = P (1 |x), and P ( = -1|x) = P (2 |x), so that the conditional mean of is given by E|x [] = P (|x) = P (1 |x) - P (2 |x) = g0 (x). (62) 36 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS Suppose that we wish to approximate g0 (x) by the finite series expansion ^ d g(x) = at y = i=1 ai yi (x), ^ where both the basis functions yi (x) and the number of terms d are known. Then we ^ can seek a weight vector a that minimizes the mean-squared approximation error 2 2 = E[(at y - g0 (x))2 ]. (63) Minimization of would appear to require knowledge of Bayes discriminant g0 (x). However, as one might have guessed from the analogous situation in Sect. 5.8.3, it ^ can be shown that the weight vector a that minimizes 2 also minimizes the criterion function Jm (a) = E[(at y - )2 ]. (64) This should also be plausible from the fact that is essentially a noisy version of g0 (x) (Fig. ??). Since the gradient is Jm = 2E[(at y - )y], we can obtain the closed-form solution ^ a = E [yyt ]-1 E[y]. (66) (65) Thus, one way to use the samples is to estimate E[yyt ] and E[y], and use Eq. 66 to obtain the MSE optimal linear discriminant. An alternative is to minimize Jm (a) by a gradient descent procedure. Suppose that in place of the true gradient we substitute the noisy version 2(at yk - k )yk . This leads to the update rule a(k + 1) = a(k) + (k - at (k)yk )yk , (67) which is basically just the Widrow-Hoff rule. It can be shown (Problem ??) that if E[yyt ] is nonsingular and if the coefficients (k) satisfy m m lim (k) = + k=1 (68) and m m lim 2 (k) < k=1 (69) ^ then a (k) converges to a in mean square: k ^ lim E[ a(k) - a 2 ] = 0. (70) The reasons we need these conditions on (k) are simple. The first condition keeps the weight vector from converging so fast that a systematic error will remain forever uncorrected. The second condition ensures that random fluctuations are eventually suppressed. Both conditions are satisfied by the conventional choice (k) = 1/k. 5.9. *THE HO-KASHYAP PROCEDURES 37 Unfortunately, this kind of programmed decrease of (k), independent of the problem at hand, often leads to very slow convergence. Of course, this is neither the only nor the best descent algorithm for minimizing Jm . For example, if we note that the matrix of second partial derivatives for Jm is given by D = 2E[yyt ], we see that Newton's rule for minimizing Jm (Eq. 15) is a(k + 1) = a(k) + E[yyt ]-1 E[( - at y)y]. A stochastic analog of this rule is a(k + 1) = a(k) + Rk+1 (k - at (k)yk )yk . with t R-1 = R-1 + yk yk , k+1 k (71) (72) or, equivalently, Rk+1 = Rk - Rk yk (Rk yk )t . t 1 + yk Rk yk (73) This rule also produces a sequence of weight vectors that converges to the optimal solution in mean square. Its convergence is faster, but it requires more computation per step (Computer exercise 8). These gradient procedures can be viewed as methods for minimizing a criterion function, or finding the zero of its gradient, in the presence of noise. In the statistical literature, functions such as Jm and Jm that have the form E[f (a, x)] are called regression functions, and the iterative algorithms are called stochastic approximation procedures. Two well known ones are the Kiefer-Wolfowitz procedure for minimizing a regression function, and the Robbins-Monro procedure for finding a root of a regression function. Often the easiest way to obtain a convergence proof for a particular descent or approximation procedure is to show that it satisfies the convergence conditions for these more general procedures. Unfortunately, an exposition of these methods in their full generality would lead us rather far afield, and we must close this digression by referring the interested reader to the literature. regression function stochastic approximation 5.9 5.9.1 The Ho-Kashyap Procedures The Descent Procedure The procedures we have considered thus far differ in several ways. The Perceptron and relaxation procedures find separating vectors if the samples are linearly separable, but do not converge on nonseparable problems. The MSE procedures yield a weight vector whether the samples are linearly separable or not, but there is no guarantee This recursive formula for computing Rk , which is roughly (1/k)E[yyt ]-1 , cannot be used if Rk is singular. The equivalence of Eq. 72 and Eq. 73 follows from Problem ?? of Chap. ??. 38 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS that this vector is a separating vector in the separable case (Fig. 5.17). If the margin vector b is chosen arbitrarily, all we can say is that the MSE procedures minimize Ya - b 2 . Now if the training samples happen to be linearly separable, then there ^ ^ exists an a and a b such that Y^ = b > 0, a ^ ^ ^ where by b > 0, we mean that every component of b is positive. Clearly, were we ^ to take b = b and apply the MSE procedure, we would obtain a separating vector. ^ Of course, we usually do not know b beforehand. However, we shall now see how the MSE procedure can be modified to obtain both a separating vector a and a margin vector b. The underlying idea comes from the observation that if the samples are separable, and if both a and b in the criterion function Js (a, b) = Ya - b 2 (74) are allowed to vary (subject to the constraint b > 0), then the minimum value of Js is zero, and the a that achieves that minimum is a separating vector. To minimize Js , we shall use a modified gradient descent procedure. The gradient of Js with respect to a is given by a Js = 2Yt (Ya - b), and the gradient of Js with respect to b is given by b Js = -2(Ya - b). For any value of b, we can always take a = Y b, (77) (76) (75) thereby obtaining a Js = 0 and minimizing Js with respect to a in one step. We are not so free to modify b, however, since we must respect the constraint b > 0, and we must avoid a descent procedure that converges to b = 0. One way to prevent b from converging to zero is to start with b > 0 and to refuse to reduce any of its components. We can do this and still try to follow the negative gradient if we first set all positive components of b Js to zero. Thus, if we let |v| denote the vector whose components are the magnitudes of the corresponding components of v, we are led to consider an update rule for the margin of the form 1 b(k + 1) = b(k) - [b Js - |b Js |]. 2 (78) Using Eqs. 76 & 77, and being a bit more specific, we obtain the the second term becomes et (k)(YY - I)e+ (k) = -et (k)e+t (k) = - e+ (k) 2 , 5.9. *THE HO-KASHYAP PROCEDURES 41 the nonzero components of e+ (k) being the positive components of e(k). Since YY is symmetric and is equal to (YY )t (YY ), the third term simplifies to (YY - I)e+ (k) 2 = 2 e+t (k)(YY - I)t (YY - I)e+ (k) = 2 e+ (k) 2 - 2 e+ (k)YY e+ (k), and thus we have 1 ( e(k) 4 2 - e(k + 1) 2 ) = (1 - ) e+ (k) 2 + 2 e+t (k)YY e+ (k). (85) Since e+ (k) is nonzero by assumption, and since YY is positive semidefinite, e(k) 2 > e(k + 1) 2 if 0 < < 1. Thus the sequence e(1) 2 , e(2) 2 , ... is monotonically decreasing and must converge to some limiting value e 2 . But for convergence to take place, e+ (k) must converge to zero, so that all the positive com^ ponents of e(k) must converge to zero. Since et (k)b = 0 for all k, it follows that all of the components of e(k) must converge to zero. Thus, if 0 < < 1 and if the samples are linearly separable, a(k) will converge to a solution vector as k goes to infinity. If we test the signs of the components of Ya(k) at each step and terminate the algorithm when they are all positive, we will in fact obtain a separating vector in a finite number of steps. This follows from the fact that Ya(k) = b(k) + e(k), and that the components of b(k) never decrease. Thus, if bmin is the smallest component of b(1) and if e(k) converges to zero, then e(k) must enter the hypersphere e(k) = bmin after a finite number of steps, at which point Ya(k) > 0. Although we ignored terminating conditions to simplify the proof, such a terminating condition would always be used in practice. 5.9.3 Nonseparable Behavior If the convergence proof just given is examined to see how the assumption of separability was employed, it will be seen that it was needed twice. First, the fact that ^ et (k)b = 0 was used to show that either e(k) = 0 for some finite k, or e+ (k) is never zero and corrections go on forever. Second, this same constraint was used to show that if e+ (k) converges to zero, e(k) must also converge to zero. If the samples are not linearly separable, it no longer follows that if e+ (k) is zero then e(k) must be zero. Indeed, on a nonseparable problem one may well obtain a nonzero error vector having no positive components. If this occurs, the algorithm automatically terminates and we have proof that the samples are not separable. What happens if the patterns are not separable, but e+ (k) is never zero? In this case it still follows that e(k + 1) = e(k) + 2(YY - I)e+ (k) and 1 ( e(k) 4 2 (86) - e(k + 1) 2 ) = (1 - ) e+ (k) 2 + 2 e+t (k)YY e+ (k). (87) Thus, the sequence e(1) 2 , e(2) 2 , ... must still converge, though the limiting value e 2 cannot be zero. Since convergence requires that e+ (k) = 0 for some finite k, 42 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS or e+ (k) converges to zero while e(k) is bounded away from zero. Thus, the HoKashyap algorithm provides us with a separating vector in the separable case, and with evidence of nonseparability in the nonseparable case. However, there is no bound on the number of steps needed to disclose nonseparability. 5.9.4 Some Related Procedures If we write Y = (Yt Y)-1 Yt and make use of the fact that Yt e(k) = 0, we can modify the Ho-Kashyap rule as follows b(1) > 0 but otherwise arbitrary a(1) = Y b(1) (88) b(k + 1) = b(k) + (e(k) + |e(k)|) a(k + 1) = a(k) + Y |e(k)|, where, as usual, e(k) = Ya(k) - b(k). This then gives the algorithm for fixed learning rate: Algorithm 12 (Modified Ho-Kashyap) 1 2 3 4 5 6 7 8 9 10 (89) begin initialize a, b, < 1, criterion bmin , kmax do k k + 1 e Ya - b e+ 1/2(e + Abs[e]) b b + 2(k)(e + Abs[e]) a Y b if Abs[e] bmin then return a, b and exit until k = kmax print NO SOLUTION FOUND end This algorithm differs from the Perceptron and relaxation algorithms for solving linear inequalities in at least three ways: (1) it varies both the weight vector a and the margin vector b, (2) it provides evidence of nonseparability, but (3) it requires the computation of the pseudoinverse of Y. Even though this last computation need be done only once, it can be time consuming, and it requires special treatment if Yt Y is singular. An interesting alternative algorithm that resembles Eq. 88 but avoids the need for computing Y is b(1) > 0 but otherwise arbitrary a(1) = arbitrary , (90) b(k + 1) = b(k) + (e(k) + |e(k)|) a(k + 1) = a(k) + RYt |e(k)| ^ ^ where R is an arbitrary, constant, postive-definite d-by-d matrix. We shall show that if is properly chosen, this algorithm also yields a solution vector in a finite number of steps, provided that a solution exists. Furthermore, if no solution exists, the vector Yt |e(k)| either vanishes, exposing the nonseparability, or converges to zero. The proof is fairly straightforward. Whether the samples are linearly separable or not, Eqs. 89 & 90 show that 5.9. *THE HO-KASHYAP PROCEDURES 43 e(k + 1) = Ya(k + 1) - b(k + 1) = (YRYt - I)|e(k)|. We can find, then, that the squared magnitude is 2 e(k + 1) and furthermore = |e(k)|t ( 2 YRYt YRY - 2YRYt + I)|e(k)|, e where 2 - e (k + 1) 2 = (Yt |e(k)|)t A(Yt |e(k)|), (91) A = 2R - 2 RYt R. (92) Clearly, if is positive but sufficiently small, A will be approximately 2R and hence positive definite. Thus, if Yt |e(k)| = 0 we will have e(k) 2 > e(k + 1) 2 . At this point we must distinguish between the separable and the nonseparable ^ ^ case. In the separable case there exists an a and a b > 0 satisfying Y^ = b. Thus, if a ^ |e(k)| = 0, ^ |e(k)|t Y^ = |e(k)|t b > 0, a so that Yt |e(k)| can not be zero unless e(k) is zero. Thus, the sequence e(1) 2 , e(2) 2 , ... is monotonically decreasing and must converge to some limiting value e 2 . But for convergence to take place, Yt |e(k)| must converge to zero, which implies that |e(k)| and hence e(k) must converge to zero. Since e(k) starts out positive and never decreases, it follows that a(k) must converge to a separating vector. Moreover, by the same argument used before, a solution must actually be obtained after a finite number of steps. In the nonseparable case, e(k) can neither be zero nor converge to zero. It may happen that Yt |e(k)| = 0 at some step, which would provide proof of nonseparability. However, it is also possible for the sequence of corrections to go on forever. In this case, it again follows that the sequence e(1) 2 , e(2) 2 , ... must converge to a limiting value e 2 = 0, and that Yt |e(k)| must converge to zero. Thus, we again obtain evidence of nonseparability in the nonseparable case. Before closing this discussion, let us look briefly at the question of choosing and R. The simplest choice for R is the identity matrix, in which case A = 2I - 2 Yt Y. This matrix will be positive definite, thereby assuring convergence, if 0 < < 2/max , where max is the largest eigenvalue of Yt Y. Since the trace of Yt Y is both the sum of the eigenvalues of Yt Y and the sum of the squares of the elements of Y, one can ^ yi 2 in selecting a value for . use the pessimistic bound dmax i A more interesting approach is to change at each step, selecting that value that maximizes e(k) 2 - e(k + 1) 2 . Equations 91 & 92 give e(k) 2 - e(k + 1) 2 = |e(k)|t Y(2R - 2 RYt YR)Yt |e(k)|. (93) By differentiating with respect to , we obtain the optimal value 44 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS (k) = which, for R = I, simplifies to |e(k)|t YRYt |e(k)| , |e(k)|t YRYt YRYt |e(k)| (94) (k) = Yt |e(k)| YYt |e (k)| 2 2 . (95) This same approach can also be used to select the matrix R. By replacing R in Eq. 93 by the symmetric matrix R + R and neglecting second-order terms, we obtain 2 ( e(k) - e(k + 1) 2 ) = |e(k)|Y[Rt (I - Yt YR) + (I - RYt Y)R]Yt |e(k)|. Thus, the decrease in the squared error vector is maximized by the choice R= 1 t -1 (Y Y) (96) and since RYt = Y , the descent algorithm becomes virtually identical with the original Ho-Kashyap algorithm. 5.10 5.10.1 Linear Programming Algorithms Linear Programming objective function The Perceptron, relaxation and Ho-Kashyap procedures are basically gradient descent procedures for solving simultaneous linear inequalities. Linear programming techniques are procedures for maximizing or minimizing linear functions subject to linear equality or inequality constraints. This at once suggests that one might be able to solve linear inequalities by using them as the constraints in a suitable linear programming problem. In this section we shall consider two of several ways that this can be done. The reader need have no knowledge of linear programming to understand these formulations, though such knowledge would certainly be useful in applying the techniques. A classical linear programming problem can be stated as follows: Find a vector u = (u1 , ..., um )t that minimizes the linear (scalar) objective function z = t u subject to the constraint Au , (98) (97) simplex algorithm where is an m-by-1 cost vector, is an l-by-1 vector, and A is an l-by-m matrix. The simplex algorithm is the classical iterative procedure for solving this problem (Fig. 5.18). For technical reasons, it requires the imposition of one more constraint, viz., u 0. If we think of u as being the weight vector a, this constraint is unacceptable, since in most cases the solution vector will have both positive and negative components. However, suppose that we write 5.10. *LINEAR PROGRAMMING ALGORITHMS u3 45 u2 u1 Figure 5.18: Surfaces of constant z = t u are shown in gray, while constraints of the form Au. are shown in red. The simplex algorithm finds an extremum of z given the constraints, i.e., where the gray plan intersects the red at a single point. a a + - a- , where a+ and 1 (|a| + a) 2 (99) (100) 1 (|a| - a). (101) 2 Then both a+ and a- are nonnegative, and by identifying the components of u with the components of a+ and a- , for example, we can accept the constraint u 0. a- 5.10.2 The Linearly Separable Case Suppose that we have a set of n samples y1 , ..., yn and we want a weight vector a that satisfies at yi bi > 0 for all i. How can we formulate this as a linear programming problem? One approach is to introduce what is called an artificial variable 0 by writing at yi + bi . If is sufficiently large, there is no problem in satisfying these constraints; for example, they are satisfied if a = 0 and = maxi bi . However, this hardly solves our original problem. What we want is a solution with = 0, which is the smallest value can have and still satisfy 0. Thus, we are led to consider the following problem: Minimize over all values of and a that satisfy the conditions at yi bi and 0. In the terminology of linear programming, any solution satisfying the constraints is called a feasible solution. A feasible solution for which the number of nonzero variables does not exceed the number of constraints (not counting the simplex requirement for nonnegative variables) is called a basic feasible solution. Thus, the solution a = 0 and = maxi bi is a basic feasible solution. Possession of such a solution simplifies the application of the simplex algorithm. 46 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS If the answer is zero, the samples are linearly separable, and we have a solution. If the answer is positive, there is no separating vector, but we have proof that the samples are nonseparable. Formally, our problem is to find a vector u that minimizes the objective function z = t u subject to the constraints Au and u 0, where A= t y1 t y2 . . . t yn t -y1 t -y2 . . . t -yn 1 1 . . . 1 a+ 0 a- , = 0 , , u = 1 = b1 b2 . . . bn . ^ Thus, the linear programming problem involves m = 2d + 1 variables and l = n constraints, plus the simplex algorithm constraints u 0. The simplex algorithm will find the minimum value of the objective function z = t u = in a finite number ^ of steps, and will exhibit a vector u yielding that value. If the samples are linearly ^ separable, the minimum value of will be zero, and a solution vector a can be obtained ^ from u. If the samples are not separable, the minimum value of will be positive. ^ The resulting u is usually not very useful as an approximateturn to this equation in Chap. ??, but for now we can understand this informally by means of the leave one out bound. Suppose we have n points in the training set, and train a Support Vector Machine on n - 1 of them, and test on the single remaining point. If that remaining point happens to be a support vector for the full n sample case, then there will be an error; otherwise, there will not. Note that if we can find a En [error rate] support vector leave-oneout bound 50 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS transformation () that well separates the data -- so the expected number of support vectors is small -- then Eq. 107 shows that the expected error rate will be lower. y2 R R 2 1 ma xim um ma rgi ma nb xim um ma rgi nb op tim al hy per pla ne y1 Figure 5.19: Training a Support Vector Machine consists of finding the optimal hyperplane, i.e., the one with the maximum distance from the nearest training patterns. The support vectors are those (nearest) patterns, a distance b from the hyperplane. The three support vectors are shown in solid dots. 5.11.1 SVM training We now turn to the problem of training an SVM. The first step is, of course, to choose the nonlinear -functions that map the input to a higher dimensional space. Often this choice will be informed by the designer's knowledge of the problem domain. In the absense of such information, one might choose to use polynomials, Gaussians or yet other basis functions. The dimensionality of the mapped space can be arbitrarily high (though in practice it may be limited by computational resources). We begin by recasting the problem of minimizing the magnitude of the weight vector constrained by the separation into an unconstrained problem by the method of Lagrange undetermined multipliers. Thus from Eq. 106 and our goal of minimizing ||a||, we construct the functional L(a, ) = 1 ||a||2 - 2 n k [zk at yk - 1]. k=1 (108) and seek to minimize L() with respect to the weight vector a, and maximize it with respect to the undetermined multipliers k 0. The last term in Eq. 108 expresses the goal of classifying the points correctly. It can be shown using the so-called KuhnTucker construction (Problem 30) (also associated with Karush whose 1939 thesis addressed the same problem) that this optimization can be reformulated as maximizing n L() = k=1 i - 1 2 n t k j zk zj yj yk , k,j (109) subject to the constraints 5.11. *SUPPORT VECTOR MACHINES 51 n zk k = 0 k=1 k 0, k = 1, ..., n, (110) given the training data. While these equations can be solved using quadratic programming, a number of alternate schemes have been devised (cf. Bibliography). Example 2: SVM for the XOR problem The exclusive-OR is the simplest problem that cannot be solved using a linear discriminant operating directly on the features. The points k = 1, 3 at x = (1, 1)t and (-1, -1)t are in category 1 (red in the figure), while k = 2, 4 at x = (1, -1)t and (-1, 1)t are in 2 (black in the figure). Following the approach of Support Vector Machines, we preprocess the features to map them to a higher dimension space where they can be linearly separated. While many -functions could be used, here we use the simplest expansion up to second order: 1, 2x1 , 2x2 , 2x1 x2 , x2 and x2 , where 1 2 the 2 is convenient for normalization. We seek to maximize Eq. 109, 4 k - k=1 1 2 n t k j zk zj yj yk k,j subject to the constraints (Eq. 110) 1 - 2 + 3 - 4 = 0 k = 1, 2, 3, 4. 0 k It is clear from the symmetry of the problem that 1 = 3 and that 2 = 4 at the solution. While we could use iterative gradient descent as described in Sect. 5.9, for this small problem we can use analytic techniques instead. The solution is a = 1/8, k for k = 1, 2, 3, 4, and from the last term in Eq. 108 this implies that all four training patterns are support vectors -- an unusual case due to the highly symmetric nature of the XOR problem. The final discriminant function is g(x) = g(x1 , x2 ) = x1 x2 , and the decision hyperplane is defined by g = 0, which properly classifies all training patterns. The margth the classic paper by Ronald A. Fisher [4]. The application of linear discriminant function to pattern classification was well described in [7], which posed the problem of optimal (minimum-risk) linear discriminant, and proposed plausible gradient descient procedures to determine a solution from samples. Unfortunately, little can be said about such procedures without knowing the underlying distributions, and even then the situation is analytically complex. The design of multicategory classifiers using two-category procedures stems from [12]. Minsky and Papert's Perceptrons [11] was influential in pointing out the weaknesses of linear classifiers -- weaknesses that were overcome by the methods we shall study in Chap. ??. The Winnow algorithms [8] in the error-free case and [9, 6] and subsequent work in the general case have been useful in the computational learning community, as they allow one to derive convergence bounds. While this work was statistically oriented, many of the pattern recognition papers that appeared in the late 1950s and early 1960s adopted other viewpoints. One viewpoint was that of neural networks, in which individual neurons were modelled as threshold elements, two-category linear machines -- work that had its origins in the famous paper by McCulloch and Pitts [10]. As linear machines have been applied to larger and larger data sets in higher and higher dimensions, the computational burden of linear programming [2] has made this approach less popular. Stochastic approximations, e.g, [15], An early paper on the key ideas in Support Vector Machines is [1]. A more extensive treatment, including complexity control, can be found in [14] -- material we shall visit in Chap. ??. A readable presentation of the method is [3], which provided the inspiration behind our Example 2. The Kuhn-Tucker construction, used in the SVM training method described in the text and explored in Problem 30, is from [5] and used in [13]. The fundamental result is that exactly one of the following three cases holds. 1) The original (primal) conditions have an optimal solution; in that case the dual cases do too, and their objective values are equal, or 2) the primal conditions are infeasible; in that case the dual is either unbounded or itself infeasible, or 3) the primal conditions are unbounded; in that case the dual is infeasible. Problems Section 5.2 58 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS 1. Consider a linear machine with discriminant functions gi (x) = wt x + wi0 , i = 1, ..., c. Show that the decision regions are convex by showing that if x1 Ri and x2 Ri then x1 + (1 - )x2 Ri if 0 1. 2. Figure 5.3 illustrates the two most popular methods for designing a c-category c classifier from linear boundary segments. Another method is to save the full 2 linear i /j boundaries, and classify any point by taking a vote based on all these boundaries. Prove whether the resulting decision regions must be convex. If they need not be convex, construct a non-pathological example yielding at least one non-convex decision region. 3. Consider the hyperplane used for discriminant functions. (a) Show that the distance from the hyperplane g(x) = wt x + w0 = 0 to the point xa is |g(xa )|/ w by minimizing x - xa 2 subject to the constraint g(x) = 0. (b) Show that the projection of xa onto the hyperplane is given by xp = xa - g(xa ) w. w 2 4. Consider the three-category linear machine with discriminant functions gi (x) = t wi x + wi0 , i = 1, 2, 3. (a) For the special case where x is two-dimensional and the threshold weights wi0 are zero, sketch the weight vectors with their tails at the origin, the three lines joining their heads, and the decision boundaries. (b) How does this sketch change when a constant vector c is added to each of the three weight vectors? 5. In the multicategory case, a set of samples is said to be linearly separable if there exists a linear machine that can classify them all correctly. If any samples labelled i can be separated from all others by a single hyperplane, we shall say the samples total are totally lce? Section 5.5 2. Write a program to implement the Perceptron algorithm. (a) Starting with a = 0, apply your program to the training data from 1 and 2 . Note the number of iterations required for convergence. (b) Apply your program to 3 and 2 . Again, note the number of iterations required for convergence. (c) Explain the difference between the iterations required in the two cases. 3. The Pocket algorithm uses the criterion of longest sequence of correctly classified points, and can be used in conjunction a number of basic learning algorithms. For instance, one use the Pocket algorithm in conjunction with the Perceptron algorithm in a sort of ratchet scheme as follows. There are two sets of weights, one for the normal Pocket algorithm the following table. 3 4 x1 x2 x1 x2 -3.0 -2.9 -2.0 -8.4 0.5 8.7 -8.9 0.2 2.9 2.1 -4.2 -7.7 -0.1 5.2 -8.5 -3.2 -4.0 2.2 -6.7 -4.0 -1.3 3.7 -0.5 -9.2 -3.4 6.2 -5.3 -6.7 -4.1 3.4 -8.7 -6.4 -5.1 1.6 -7.1 -9.7 1.9 5.1 -8.0 -6.3 66 CHAPTER 5. LINEAR DISCRIMINANT FUNCTIONS Perceptron algorithm, and a separate one (not directly used for training) which is kept "in your pocket." Both are randomly chosen at the start. The "pocket" weights are tested on the full data set to find the longest run of patterns properly classified. (At the beginning, this run will be short.) The Perceptron weights are trained as usual, but after every weight update (or after some finite number of such weight updates), the Perceptron weight is tested on data points, randomly selected, to determine the longest run of properly classified points. If this length is greater than the pocket weights, the Perceptron weights replace the pocket weights, and perceptron training continues. In this way, the poscket weights continually improve, classifying longer and longer runs of randomly selected points. (a) Write a pocket algorithm to be employed with Perceptron algorithm. (b) Apply it to the data from 1 and 3 . How often are the pocket weights updated? 4. Start with a randomly chosen a, Calculate 2 (Eq. 21 At the end of training calculate (Eq. 22). Verify k0 (Eq. 25). 5. Show that the first xx points of categories x and x xx. Construct by hand a nonlinear mapping of the feature space to make them linearly separable. Train a Perceptron classifier on them. 6. Consider a version of the Balanced Winnow training algorithm (Algorithm 7). Classification of test data is given by line 2. Compare the converge rate of Balanced Winnow with the fixed-increment, single-sample Perceptron (Algorithm 4) on a problem with large number of redundant features, as follows. (a) Generate a training set of 2000 100-dimensional patterns (1000 from each of two categories) in which only the first ten features are informative, in the following way. For patterns in category 1 , each of the first ten features are chosen randomly and uniformly from the range +1 xi 2, for i = 1, ..., 10. Conversely, for patterns in 2 , each of the first ten features are chosen randomly and uniformly from the range -2 xi -1. All other features from both categories are chosen from the range -2 xi +2. (b) Construct by hand the obvious separating hyperplane. (c) Adjust the learning rates so that your two algorithms have roughly the same convergence rate on the full training set when only the first ten features are considered. That is, assume each of the 2000 training patterns consists of just the first ten features. (d) Now apply your two algorithms to 2000 50-dimensional patterns, in which the first ten features are informative and the remaining 40 are not. Plot the total number of errors versus iteration. (e) Now apply your two algorithms to the full training set of 2000 100-dimensional patterns. (f) Summarize your answers to parts (c) - (e). Section 5.6 7. Consider relaxation methods. 5.12. COMPUTER EXERCISES 67 (a) Implement batch relaxation with margin (Algorithm 8), set b = 0.1 and a(1) = 0 and apply it to the data in 1 and 3 . Plot the criterion function as a function of the number of passes through the training set. (b) Repeat for b = 0.5 and a(1) = 0. Explainning? . . . . . . . 6.8.14 Stopped training . . . . . . . . . . . . . . . . . . . 1 3 3 4 8 8 10 11 15 15 16 16 17 17 19 19 20 21 23 24 25 26 28 29 29 30 31 32 32 32 33 34 34 36 36 37 37 38 39 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 6.8.15 How many hidden layers? . . . . . . . 6.8.16 Criterion function . . . . . . . . . . . 6.9 *Second-order methods . . . . . . . . . . . . 6.9.1 Hessian matrix . . . . . . . . . . . . . 6.9.2 Newton's method . . . . . . . . . . . . 6.9.3 Quickprop . . . . . . . . . . . . . . . . 6.9.4 Conjugate gradient descent . . . . . . Example 1: Conjugate gradient descent . . . . 6.10 *Additional networks and training methods . 6.10.1 Radial basis function networks (RBF) 6.10.2 Special bases . . . . . . . . . . . . . . 6.10.3 Time delay neural networks (TDNN) . 6.10.4 Recurrent networks . . . . . . . . . . . 6.10.5 Counterpropagation . . . . . . . . . . 6.10.6 Cascade-Correlation . . . . . . . . . . Algorithm 4: Cascade-correlation . . . . . . . 6.10.7 Neocognitron . . . . . . . . . . . . . . 6.11 Regularization and complexity adjustment . . 6.11.1 Complexity measurement . . . . . . . 6.11.2 Wald statistics . . . . . . . . . . . . . Summary . . . . . . . . . . . . . . . . . . . . . . . Bibliographical and Historical Remarks . . . . . . Problems . . . . . . . . . . . . . . . . . . . . . . . Computer exercises . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . CONTENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 40 41 41 42 42 43 44 46 46 47 47 48 49 50 50 51 51 53 53 54 56 58 64 67 74 Chapter 6 Multilayer Neural Networks 6.1 Introduction modifiable weights to output units. I sisting ofininput units connected abypowerful gradient descent method forThe LMS algorithm, particular, provided reducing the error, even when the patterns are not linearly separable. Unfortunately, the class of solutions that can be obtained from such networks -- hyperplane discriminants -- while surprisingly good on a range or real-world problems, is simply not general enough in demanding applications: there are many problems for which linear discriminants are insufficient for minimum error. With a clever choice of nonlinear functions, however, we can obtain arbitrary decisions, in particular the one leading to minimum error. The central difficulty is, naturally, choosing the appropriate nonlinear functions. One brute force approach might be to choose a complete basis set (all polynomials, say) but this will not work; such a classifier would have too many free parameters to be determined from a limited number of training patterns (Chap. ??). Alternatively, we may have prior knowledge relevant to the classification problem and this might guide our choice of nonlinearity. In the absence of such information, up to now we have seen no principled or automatic method for finding the nonlinearities. What we seek, then, is a way to learn the nonlinearity at the same time as the linear discriminant. This is the approach of multilayer neural networks (also called multilayer Perceptrons): the parameters governing the nonlinear mapping are learned at the same time as those governing the linear discriminant. We shall revisit the limitations of the two-layer networks of the previous chapter, and see how three-layer (and four-layer...) nets overcome those drawbacks -- indeed how such multilayer networks can, at least in principle, provide the optimal solution to an arbitrary classification problem. There is nothing particularly magical about multilayer neural networks; at base they implement linear discriminants, but in a space where the inputs have been mapped nonlinearly. The key power provided by such networks is that they admit fairly simple algorithms where the form of the nonlinearity n the previous chapter we saw a number of methods for training classifiers con- Some authors describe such networks as single layer networks because they have only one layer of modifiable weights, but we shall instead refer to them based on the number of layers of units. 3 4 CHAPTER 6. MULTILAYER NEURAL NETWORKS can be learned from training data. The models are thus extremely powerful, have nice theoretical properties, and apply well to a vast array of real-world applications. One of the most popular methods for training such multilayer networks is based backpropagation on gradient descent in error -- the backpropagation algorithm (or generalized delta rule), a natural extension of the LMS algorithm. We shall study backpropagation in depth, first of all because it is powerful, useful and relatively easy to understand, but also because many other training methods can be seen as modifications of it. The backpropagation training method is simple even for complex models (networks) having hundreds or thousands of parameters. In part because of the intuitive graphical representation and the simplicity of design of these models, practitioners can test different models quickly and easily; neural networks are thus a sort of "poor person's" technique for doing statistical pattern recognition with complicated models. The conceptual and algorithmic simplicity of backpropagation, along with its manifest success on many real-world problems, help to explain why it is a mainstay in adaptive pattern recognition. While the basic theory of backpropagation is simple, a number of tricks -- some a bit subtle -- are often used to improve performance and increase training speed. Choices involving the scaling of input values and initial weights, desired output values, and more can be made based on an analysis of networks and their function. We shall also discuss alternate training schemes, for instance ones that are faster, or adjust their complexity automatically in response to training data. Network architecture or topology plays an important role for neural net classification, and the optimal topology will depend upon the problem at hand. It is here that another great benefit of networks becomes apparent: often knowledge of the problem domain which might be of an informal or heuristic nature can be easily incorporated into network architectures through choices in the number of hidden layers, units, feedback connections, and so on. Thus setting the topology of the network is heuristic model selection. The practical ease in selecting models (network topologies) and estimating parameters (training via backpropagation) enable classifier designers to try out alternate models fairly simply. regularA deep problem in the use of neural network techniques involves regularization, ization complexity adjustment, or model selection, that is, selecting (or adjusting) the complexity of the network. Whereas the number of inputs and outputs is given by the feature space and number of categories, the total number of weights or parameters in the network is not -- or at least not directly. If too many free parameters are used, generalization will be poor; conversely if too few parameters are used, the training data cannot be learned adequately. How shall we adjust the complexity to achieve the best generalization? We shall explore a number of methods for complexity adjustment, and return in Chap. ?? to their theoretical foundations. It is crucial to remember that neural networks do not exempt designers from intimate knowledge of the data and problem domain. Networks provide a powerful and speedy tool for building classifiers, and as with any tool or technique one gains intuition and expertise through analysis and repeated experimentation over a broad range of problems. 6.2 hidden layer Feedforward operation and classification Figure 6.1 shows a simple three-layer neural network. This one consists of an input layer (having two input units), a hidden layer with (two hidden units) and an output 6.2. FEEDFORWARD OPERATION AND CLASSIFICATION 5 layer (a single unit), interconnected by modifiable weights, represented by links between layers. There is, furthermore, a single bias unit that is connected to each unit other than the input units. The function of units is loosely based on properties of biological neurons, and hence they are sometimes called "neurons." We are interested in the use of such networks for pattern recognition, where the input units represent the components of a feature vector (to be learned or to be classified) and signals emitted by output units will be discriminant functions used for classification. bias neuron We call any units that are neither input nor output units "hidden" because their activations are not directly "seen" by the external environment, i.e., the input or output. 6 recall CHAPTER 6. MULTILAYER NEURAL NETWORKS net activation We can clarify our notation and describe the feedforward (or classification or recall) operation of such a network on what is perhaps the simplest nonlinear problem: the exclusive-OR (XOR) problem (Fig. 6.1); a three-layer network can indeed solve this problem whereas a linear machine operating directly on the features cannot. Each two-dimensional input vector is presented to the input layer, and the output of each input unit equals the corresponding component in the vector. Each hidden unit performs the weighted sum of its inputs to form its (scalar) net activation or simply net. That is, the net activation is the inner product of the inputs with the weights at the hidden unit. For simplicity, we augment both the input vector (i.e., append a feature value x0 = 1) and the weight vector (i.e., append a value w0 ), and can then write d d netj = i=1 xi wji + wj0 = i=0 t xi wji wj x, (1) synapse where the subscript i indexes units on the input layer, j for the hidden; wji denotes the input-to-hidden layer weights at the hidden unit j. In analogy with neurobiology, such weights or connections are sometimes called "synapses" and the value of the connection the "synaptic weights." Each hidden unit emits an output that is a nonlinear function of its activation, f (net), i.e., yj = f (netj ). The example shows a simple threshold or sign (read "signum") function, f (net) = Sgn (net) 1 -1 if net 0 if net < 0, (3) (2) transfer function but as we shall see, other functions have more desirable properties and are hence more commonly used. This f () is sometimes called the transfer function or merely "nonlinearity" of a unit, and serves as a function discussed in Chap. ??. We have assumed the same nonlinearity is used at the various hidden and output units, though this is not crucial. Each output unit similarly computes its net activation based on the hidden unit signals as nH nH netk = j=1 yj wkj + wk0 = j=0 t yj wkj = wk y, (4) where the subscript k indexes units in the output layer (one, in the figure) and nH denotes the number of hidden units (two, in the figure). We have mathematically treated the bias unit as equivalent to one of the hidden units whose output is always y0 = 1. Each output unit then computes the nonlinear function of its net, emitting zk = f (netk ). (5) where in the figure we assume that this nonlinearity is also a sign function. It is these final output signals that represent the different discriminant functions. We would typically have c such output units and the classification decision is to label the input pattern with the label corresponding to the maximum yk = gk (x). In a two-category case such as XOR, it is traditional to use a single output unit and label a pattern by the sign of the output z. 6.2. FEEDFORWARD OPERATION AND CLASSIFICATION 7 z 1 0 -1 0 -1 0 1 -1 1 zk output k y1 1 0 -1 0 -1 0 1 -1 1 y2 -1 .7 -1.5 -.4 wkj hidden j 1 1 wji input i 1 0 -1 0 -1 0 1 -1 1 bias .5 1 1 x1 x2 x2 z= -1 R2 z=1 R1 x1 R2 z= -1 Figure 6.1: The two-bit parity or exclusive-OR problem can be solved by a three-layer network. At the bottom is the two-dimensional feature space x1 - x2 , and the four patterns to be classified. The three-layer network is shown in the middle. The input units are linear and merely distribute their (feature) values through multiplicative weights to the hidden units. The hidden and output units here are linear threshold units, each of which forms the linear sum of its inputs times their associated weight, and emits a +1 if this sum is greater than or equal to 0, and -1 otherwise, as shown by the graphs. Positive ("excitatory") weights are denoted by solid lines, negative ("inhibitory") weights by dashed lines; the weight magnitude is indicated by the relative thickness, and is labeled. The single output unit sums the weighted signals from the hidden units (and bias) and emits a +1 if that sum is greater than or equal to 0 and a -1 otherwise. Within each unit we show a graph of its input-output or transfer function -- f (net) vs. net. This function is linear for the input units, a constant for the bias, and a step or sign function elsewhere. We say that this network has a 2-2-1 fully connected topology, describing the number of units (other than the bias) in successive layers. 8 CHAPTER 6. MULTILAYER NEURAL NETWORKS It is easy to verify that the three-layer network with the weight values listed indeed solves the XOR problem. The hidden unit computing y1 acts like a Perceptron, and computes the boundary x1 + x2 + 0.5 = 0; input vectors for which x1 + x2 + 0.5 0 lead to y1 = 1, all other inputs lead to y1 = -1. Likewise the other hidden unit computes the boundary x1 + x2 - 1.5 = 0. The final output unit emits z1 = +1 if and only if both y1 and y2 have value +1. This gives to the appropriate nonlinear decision region shown in the figure -- the XOR problem is solved. 6.2.1 General feedforward operation expressive power From the above example, it should be clear that nonlinear multilayer networks (i.e., ones with input units, hidden units and output units) have greater computational or expressive power than similar networks that otherwise lack hidden units; that is, they can implement more functions. Indeed, we shall see in Sect. 6.2.2 that given sufficient number of hidden units of a general type any function can be so represented. Clearly, we can generalize the above discussion to more inputs, other nonlinearities, and arbitrary number of output units. For classification, we will have c output units, one for each of the categories, and the signal from each output unit is the discriminant function gk (x). We gather the results from Eqs. 1, 2, 4, & 5, to express such discriminant functions as: gk (x) zk = f nH d wkj f j=1 i=1 wji xi + wj0 + wk0 . (6) This, then, is the class of functions that can be implemented by a three-layer neural network. An even broader generalization would allow transfer functions at the output layer to differ from those in the hidden layer, or indeed even different functions at each individual unit. We will have cause to use such networks later, but the attendant notational complexities would cloud our presentation of the key ideas in learning in networks. 6.2.2 Expressive power of multilayer networks It is natural to ask if every decision can be implemented by such a three-layer network (Eq. 6). The answer, due ulpractical because for most problems we know ahead of time neither the number of hidden units required, nor the proper weight values. Even if there were a constructive proof, it would be of little use in pattern recognition since we do not know the desired function anyway -- it is related to the training patterns in a very complicated way. All in all, then, these results on the expressive power of networks give us confidence we are on the right track, but shed little practical light on the problems of designing and training neural networks -- their main benefit for pattern recognition (Fig. 6.3). x2 Two layer fl R1 R2 x1 x2 x2 x1 Three layer R1 R2 ... R2 R1 x1 x2 x1 Figure 6.3: Whereas a two-layer network classifier can only implement a linear decision boundary, given an adequate number of hidden units, three-, four- and higher-layer networks can implement arbitrary decision boundaries. The decision regions need not be convex, nor simply connected. 6.3 Backpropagation algorithm We have just seen that any function from input to output can be implemented as a three-layer neural network. We now turn to the crucial problem of setting the weights based on training patterns and desired output. 6.3. BACKPROPAGATION ALGORITHM 11 Backpropagation is one of the simplest and most general methods for supervised training of multilayer neural networks -- it is the natural extension of the LMS algorithm for linear systems we saw in Chap. ??. Other methods may be faster or have other desirable properties, but few are more instructive. The LMS algorithm worked for two-layer systems because we had an error (proportional to the square of the difference between the actual output and the desired output) evaluated at the output unit. Similarly, in a three-layer net it is a straightforward matter to find how the output (and thus error) depends on the hidden-to-output layer weights. In fact this dependency is the same as in the analogous two-layer case, and thus the learning rule is the same. But how should the input-to-hidden weights be learned, the ones governing the nonlinear transformation of the input vectors? If the "proper" outputs for a hidden unit were known for any pattern, the input-to-hidden weights could be adjusted to approximate it. However, there is no explicit teacher to state what the hidden unit's output should be. This is called the credit assignment problem. The power of backpropagation is that it allows us to calculate an effective error for each hidden unit, and thus derive a learning rule for the input-to-hidden weights. Networks have two primary modes of operation: feedforward and learning. Feedforward operation, such as illustrated in our XOR example above, consists of presenting a pattern to the input units and passing the signals through the network in order to yield outputs from the output units. Supervised learning consists of presenting an input pattern as well as a desired, teaching or target pattern to the output layer and changing the network parameters (e.g., weights) in order to make the actual output more similar to the target one. Figure 6.4 shows a three-layer network and the notation we shall use. credit assignment target pattern 6.3.1 Network learning The basic approach in learning is to start with an untrained network, present an input training pattern and determine the output. The error or criterion function is some scalar function of the weights that is minimized when the network outputs match the desired outputs. The weights are adjusted to reduce this measure of error. Here we present the learning rule on a per pattern basis, and return to other protocols later. We consider the training error on a pattern to be the sum over output units of the squared difference between the desired output tk (given by a teacher) and the actual output zk , much as we had in the LMS algorithm for two-layer nets: c training error J(w) 1/2 k=1 (tk - zk )2 = 1/2(t - z)2 , (8) where t and z are the target and the network output vectors of length c; w represents all the weights in the network. cognition accuracy (Computer exercise ??). We describe the overall amount of pattern presentations by epoch -- the number of presentations of the full training set. For other variables being constant, the number of epochs is an indication of the relative amount of learning. The basic stochastic and batch protocols of backpropagation for n patterns are shown in the procedures below. Algorithm 1 (Stochastic backpropagation) 1 2 3 4 5 6 7 epoch begin initialize network topology (# hidden units), w, criterion , , m 0 do m m + 1 xm randomly chosen pattern wij wij + j xi ; wjk wjk + k yj until J(w) < return w end In the on-line version of backpropagation, line 3 of Algorithm 1 is replaced by sequential selection of training patterns (Problem 9). Line 5 makes the algorithm end when the change in the criterion function J(w) is smaller than some pre-set value . While this is perhaps the simplest meaningful stopping criterion, others generally lead to Some on-line training algorithms are considered models of biological learning, where the organism is exposed to the environment and cannot store all input patterns for multiple "presentations." The notion of epoch does not apply to on-line training, where instead the number of pattern presentations is a more appropriate measure. stopping criterion 16 CHAPTER 6. MULTILAYER NEURAL NETWORKS better performance, as we shall discuss in Sect. 6.8.14. In the batch version, all the training patterns are presented first and their corresponding weight updates summed; only then are the actual weights in the network updated. This process is iterated until some stopping criterion is met. So far we have considered the error on a single pattern, but in fact we want to consider an error defined over the entirety of patterns in the training set. With minor infelicities in notation we can write this total training error as the sum over the errors on n individual patterns: n J= p=1 Jp . (21) In stochastic training, a weight update may reduce the error on the single pattern being presented, yet increase the error on the full training set. Given a large number of such individual updates, however, the total error as given in Eq. 21 decreases. Algorithm 2 (Batch backpropagation) 1 2 3 4 5 6 7 8 9 10 11 begin initialize network topology (# hidden units), w, criterion , , r 0 do r r + 1 (increment epoch) m 0; wij 0; wjk 0 do m m + 1 xm select pattern wij wij + j xi ; wjk wjk + k yj until m = n wij wij + wij ; wjk wjk + wjk until J(w) < return w end In batch backpropagation, we need not select pattern randomly, since the weights are updated only after all patterns have been presented once. We shall consider the merits and drawbacks of each protocol in Sect. 6.8. 6.3.3 Learning curves validation error Because the weights are initialized with random values, error on the training set is large; through learning the error becomes lower, as shown in a learning curve (Fig. 6.6). The (per pattern) training error ultimately reaches an asymptotic value which depends upon the Bayes error, the amount of training data and the expressive power (e.g., the number of weights) in the network -- the higher the Bayes error and the fewer the number of such weights, the higher this asymptotic value is likely to be (Chap. ??). Since batch backpropagation performs gradient descent in the criterion function, these training error decreases monotonically. The average error on an independent test set is virtually always higher than on the training set, and while it generally decreases, it can increase or oscillate. Figure 6.6 also shows the average error on a validation set -- patterns not used directly for gradient descent training, and thus indirectly representative of novel patterns yet to be classified. The validation set can be used in a stopping criterion in both batch and stochastic protocols; gradient descent training on the training set is stopped when a minimum is reached in the validation error (e.g., near epoch 5 in 6.4. ERROR SURFACES J/n 17 validation tra ini test ng epochs 1 2 3 4 5 6 7 8 9 10 11 Figure 6.6: A learning curve shows the criterion function as a function of the amount of training, typically indicated by the number of epochs or presentations of the full n training set. We plot the average error per pattern, i.e., 1/n p=1 Jp . The validation error and the test (or generalization) error per pattern are virtually always higher than the training error. In some protocols, training is stopped at the minimum of the validation set. the figure). We shall return in Chap. ?? to understand in greater depth why this version of cross validation stopping criterion often leads to networks having improved recognition accuracy. cross validation 6.4 Error surfaces Since backpropagation is based on gradient descent in a criterion function, we can gain understanding and intuition about the algorithm by studying error surfaces themselves -- the function J(w). Of course, such an error surface depends upon the training and classification task; nevertheless there are some general properties of error surfaces that seem to hold over a broad range of real-world pattern recognition problems. One of the issues that concerns us are local minima; if many local minima plague the error landscape, then it is unlikely that the network will find the global minimum. Does this necessarily lead to poor performance? Another issue is the presence of plateaus -- regions where the error varies only slightly as a function of weights. If such plateaus are plentiful, we can expect training according to Algorithms 1 & 2 to be slow. Since training typically begins with small weights, the error surface in the neighborhood of w 0 will determine the general direction of descent. What can we say about the error in this region? Most interesting real-world problems are of high dimensionality. Are there any general properties of high dimensional error functions? We now explore these issues in some illustrative systems. 6.4.1 Some small networks Consider the simplest three-layer nonlinear network, here solving a two-category problem in one dimension; this 1-1-1 sigmoidal network (and bias) is shown in Fig. 6.7. The data shown are linearly separable, and the optimal decision boundary (a point somewhat below x1 = 0) separates the two categories. During learning, the weights descends to the global minimum, and the problem is solved. 18 CHAPTER 6. MULTILAYER NEURAL NETWORKS z1 J(w) w2 y1 w0 x0,y0 1 0.75 0.5 w1 x1 0.25 0 0 -100 -20 40 20 w1 w0 0 -40 100 x1 -4 -3 -2 -1 0 x* 1 2 3 4 R1 R2 Figure 6.7: Six one-dimensional patterns (three in each of two classes) are to be learned by a 1-1-1 network with sigmoidal hidden and output units (and bias). The error surface as a function of w1 and w2 is also shown (for the case where the bias weights have their final values). The network starts with random weights, and through (stochastic) training descends to the global minimum in error, as shown by the trajectory. Note especially that a low error solution exists, which in fact leads to a decision boundary separating the training points into their two categories. Here the error surface has a single (global) minimum, which yields the decision point separating the patterns of the two categories. Different plateaus in the surface correspond roughly to different numbers of patterns properly classified; the maximum number of such misclassified patterns is three in this example. The plateau regions, where weight change does not lead to a change in error, here correspond to sets of weights that lead to roughly the same decision point in the input space. Thus as w1 increases and w2 becomes more negative, the surface shows that the error does not change, a result that can be informally confirmed by looking at the network itself. Now consider the same network applied to another, harder, one-dimensional problem -- one that is not linearly separable (Fig. 6.8). First, note that overall the error surface is slightly higher than in Fig. 6.7 because even the best solution attainable with this network leads to one pattern being misclassified. As before, the different plateaus in error correspond to different numbers of training patterns properly learned. However, one must not confuse the (squared) error measure with classification error (cf. Chap. ??, Fig. ??). For instance here there are two general ways to misclassify exactly two patterns, but these have different errors. Incidentally, a 1-3-1 network (but not a 1-2-1 network) can solve this problem (Computer exercise 3). From these very simple examples, where the correspondences among weight values, decision boundary and error are manifest, we can see how the error of the global minimum is lower when the problem can be solved and that there are plateaus corresponding to sets of weights that lead to nearly the same decision boundary. Furthermore, the surface near w 0 (the traditional region for starting learning) has high 6.4. ERROR SURFACES z1 J(w) 1 19 w2 y1 0.75 0.5 w0 x0,y0 0.25 40 20 w1 x1 0 0 -100 -20 w1 w0 x1 0 -40 100 -4 -3 -2 -1 0 x* 1 2 3 4 R1 R2 Figure 6.8: As in Fig. 6.7, except here the patterns are not linearly separable; the error surface is slightly higher than in that figure. error and happens in this case to have a large slope; if the starting point had differed somewhat, the network would descend to the same final weight values. 6.4.2 XOR A somewhat more complicated problem is the XOR problem we have already considered. Figure ?? shows several two-dimensional slices through the nine-dimensional weight space of the 2-2-1 sigmoidal network (with bias). The slices shown include a global minimum in the error. Notice first that the error varies a bit more gradually as a function of a single weight than does the error in the networks solving the problems in Figs. 6.7 & 6.8. This is because in a large network any single weight has on average a smaller relative contribution to the output. Ridges, valleys and a variety of other shapes can all be seen in the surface. Several local minima in the high-dimensional weight space exist, which here correspond to solutions that classify three (but not four) patterns. Although it is hard to show it graphically, the error surface is invariant with respect to certain discrete permutations. For instance, if the labels on the two hidden units are exchanged (and the weight values changed appropriately), the shape of the error surface is unaffected (Problem ??). 6.4.3 Larger networks Alas, the intuition we gain from considering error surfaces for small networks gives only hints of what is going on in large networks, and at times can be quite misleading. Figure 6.10 shows a network with many weights solving a complicated high-dimensional two-category pattern classification problem. Here, the error varies quite gradually as a single weight is changed though we can get troughs, valleys, canyons, and a host of 20 CHAPTER 6. MULTILAYER NEURAL NETWORKS w'1 w'2 w'0 w02 w01 w11 w12 w21 w22 J 4 -4 -3 -2 1 0 -4 -3 -1 -1 0 0 -2 3 2 -3 -2 -4 1 2 J 4 3 0 -4 -3 -1 -2 -1 0 0 w'2 w'1 w'0 w'1 J 4 -4 -3 -2 1 0 0 1 -1 3 0 4 2 3 2 0 1 2 1 2 J 4 3 0 -4 -3 3 -1 4 0 -2 w21 w11 w22 w12 Figure 6.9: Two-dimensional slices through the nine-dimensional error surface after extensive training for a 2-2-1 network solving the XOR problem. shapes. Whereas in low dimensional spaces local minima can be plentiful, in high dimension, the problem of local minima is different: the high-dimensional space may afford more ways (dimensions) for the system to "get around" a barrier or local maximum during learning. In networks with many superfluous weights (i.e., more than are needed to learn the training set), one is less likely to get into local minima. However, networks with an unnecessarily large number of weights are undesirable because of the dangers of overfitting, as we shall see in Sect. 6.11. 6.4.4 How important are multiple minima? The possibility of the presence of multiple local minima is one reason that we resort to iterative gradient descent -- analytic methods are highly unlikely to find a single global minimum, especially in high-dimensional weight spaces. In computational practice, we do not want our network to be caught in a local minimum having high training error since this usually indicates that key features of the problem have not been learned by the network. In such cases it is traditional to re-initialize the weights and train again, possibly also altering other parameters in the net (Sect. 6.8). In many problems, convergence to a non-global minimum is acceptable, if the error is nevertheless fairly low. Furthermore, common stopping criteria demand that training terminate even before the minimum is reached and thus it is not essential that the network be converging toward the global minimum for acceptable performance (Sect. 6.8.14). 6.5. BACKPROPAGATION AS FEATURE MAPPING 21 Figure 6.10: A network with xxx weights trained on data from a complicated pattern recognition problem xxx. 6.5 Backpropagation as feature mapping Since the hidden-to-output layer leads to a linear discriminant, the novel computational power provided by multilayer neural nets can be attributed to the nonlinear warping of the input to the representation at the hidden units. Let us consider this transformation, again with the help of the XOR problem. Figure 6.11 shows a three-layer net addressing the XOR problem. For any input pattern in the x1 - x2 space, we can show the corresponding output of the two hidden units in the y1 - y2 space. With small initial weights, the net activation of each hidden unit is small, and thus the linear portion of their transfer function is used. Such a linear transformation from x to y leaves the patterns linearly inseparable (Problem 1). However, as learning progresses and the input-to-hidden weights increase in magnitude, the nonlinearities of the hidden units warp and distort the mapping from input to the hidden unit space. The linear decision boundary at the end of learning found by the hidden-to-output weights is shown by the straight dashed line; the nonlinearly separable problem at the inputs is transformed into a linearly separable at the hidden units. We can illustrate such distortion in the three-bit parity problem, where the output = +1 if the number of 1s in the input is odd, and -1 otherwise -- a generalization of the XOR or two-bit parity problem (Fig. 6.12). As before, early in learning the hidden units operate in their linear range and thus the representation after the hidden units remains linearly inseparable -- the patterns from the two categories lie at alternating vertexes of a cube. After learning and the weights have become larger, the nonlinearities of the hidden units are expressed and patterns have been moved and can be linearly separable, as shown. Figure 6.13 shows a two-dimensional two-category problem and the pattern representations in a 2-2-1 and in a 2-3-1 network of sigmoidal hidden units. Note that 22 CHAPTER 6. MULTILAYER NEURAL NETWORKS x1 1 y1 y2 -1 1 x2 -1 y2 2 bias x1 x2 1.5 1 ry da 0 fin -0.5 -1 al de ci sio n bo un 0.5 Epoch 1 15 30 45 60 -1.5 y1 -1.5 -1 -0.5 0 0.5 1 1.5 J 2 1.5 1 0.5 Epoch 10 20 30 40 50 60 Figure 6.11: A 2-2-1 backpropagation network (with bias) and the four patterns of the XOR problem are shown at the top. The middle figure shows the outputs of the hidden units for each of the four patterns; these outputs move across the y1 - y2 space as the full network learns. In this space, early in training (epoch 1) the two categories are not linearly separable. As the input-to-hidden weights learn, the categories become linearly separable. Also shown (by the dashed line) is the linear decision boundary determined by the hidden-to-output weights at the end of learning -- indeed the patterns of the two classes are separated by this boundary. The bottom graph shows the learning curves -- the error on individual patterns and the total error as a function of epoch. While the error on each individual pattern does not decrease monotonically, the total training error does decrease monotonically. 6.5. BACKPROPAGATION AS FEATURE MAPPING 2 23 Error 4 1 0 3 2 -1 1 -2 Epoch 25 50 75 100 125 150 -1 0 1 2 2 y2 0 -1 Error 4 1 3 y3 0 -1 -2 1 -2 -1 Epoch 10 20 30 40 50 60 70 y1 0 1 2 -2 2 1 2 Figure 6.12: A 3-3-1 backpropagation network (plus bias) can indeed solve the threebit parity problem. The representation of the eight patterns at the hidden units (y1 - y2 - y3 space) as the system learns and the (planar) decision boundary found by the hidden-to-output weights at the end of learning. The patterns of the two classes are separated by this plane. The learning curve shows the error on individual patterns and the total error as a function of epoch. in the two-hidden unit net, the categories are separated somewhat, but not enough for error-free classification; the expressive power of the net is not sufficiently high. In contrast, the three-hidden unit net can separate the patterns. In general, given sufficiently many hidden units in a sigmoidal network, any set of different patterns can be learned in this way. 6.5.1 Representations at the hidden layer -- weights In addition to focusing on the transformation of patterns, we can also consider the representation of learned weights themselves. Since the hidden-to-output weights merely leads to a linear discriminant, it is instead the input-to-hidden weights that are most instructive. In particular, such weights at a single hidden unit describe the input pattern that leads to maximum activation of that hidden unit, analogous to a "matched filter." Because the hidden unit transfer functions are nonlinear, the correspondence with classical methods such as matched filters (and principal components, Sect. ??) is not exact; nevertheless it is often convenient to think of the hidden units as finding feature groupings useful for the linear classifier implemented by the hidden-to-output layer weights. Figure 6.14 shows the input-to-hidden weights (displayed as patterns) for a simple task of character recognition. Note that one hidden unit seems "tuned" for a pair of horizontal bars while the other to a single lower bar. Both of these feature groupings are useful building blocks for the patterns presented. In complex, high-dimensional problems, however, the pattern of learned weights may not appear to be simply related to the features we suspect are appropriate for the task. This could be because we may be mistaken about which are the true, relevant feature groupings; nonlinear matched filter 24 y2 CHAPTER 6. MULTILAYER NEURAL NETWORKS y3 y2 y1 -1 0 -1 1 1 0 2 1.5 1 0.5 0 y1 2-2-1 x2 5 2-3-1 4 input 3 2 1 1 2 3 4 5 x1 Figure 6.13: Seven patterns from a two-dimesional two-category nonlinearly separable classification problem are shown at the bottom. The figure at the top left shows the hidden unit representations of the patterns in a 2-2-1 sigmoidal network (with bias) fully trained to the global error minimum; the linear boundary implemented by the hidden-to-output weights is also shown. Note that the categories are almost linearly separable in this y1 - y2 space, but one training point is misclassified. At the top right is the analogous hidden unit representation for a fully trained 2-3-1 network (with bias). Because of the higher dimension of the hidden layer representation, the categories are now linearly separable; indeed the learned hidden-to-output weights implement a plane that separates the categories. interactions between features may be significant in a problem (and such interactions are not manifest in the patterns of weights at a single hidden unit); or the network may have too many weights (degrees of freedom), and thus the feature selectivity is low. It is generally much harder to represent the hidden-to-output layer weights in terms of input features. Not only do the hidden units themselves already encode a somewhat abstract pattern, there is moreover no natural ordering of the hidden units. Together with the fact that the output of hidden units are nonlinearly related to the inputs, this makes analyzing hidden-to-output weights somewhat problematic. Often the best we can do is list the patterns of input wed in a single learning step. Thus, for rapid and uniform learning, we should calculate the second derivative of the criterion function with respect to each weight and set the optimal learning rate separately for each weight. We shall return in Sect. ?? to calculate second derivatives in networks, and to alternate descent and training methods such as Quickprop that give fast, uniform learning. For typical problems addressed with sigmoidal networks and parameters discussed throughout this section, it is found that a learning rate 36 CHAPTER 6. MULTILAYER NEURAL NETWORKS of 0.1 is often adequate as a first choice, and lowered if the criterion function diverges, or raised if learning seems unduly slow. 6.8.10 Momentum Error surfaces often have plateaus -- regions in which the slope dJ(w)/dw is very small -- for instance because of "too many" weights. Momentum -- loosely based on the notion from physics that moving objects tend to keep moving unless acted upon by outside forces -- allows the network to learn more quickly when plateaus in the error surface exist. The approach is to alter the learning rule in stochastic backpropagation to include some fraction of the previous weight update: w(m + 1) = w(m) + w(m) + w(m - 1) gradient descent momentum (36) Of course, must be less than 1.0 for stability; typical values are 0.9. It must be stressed that momentum rarely changes the final solution, but merely allows it to be found more rapidly. Momentum provides another benefit: effectively "averaging out" stochastic variations in weight updates during stochastic learning and thereby speeding learning, even far from error plateaus (Fig. 6.20). J(w) c sto w. ha m sti c en tu m om w1 Figure 6.20: The incorporation of momentum into stochastic gradient descent by Eq. 36 (white arrows) reduces the variation in overall gradient directions and speeds learning, especially over plateaus in the error surface. Algorithm 3 shows one way to incorporate momentum into gradient descent. Algorithm 3 (Stochastic backpropagation with momentum) 1 2 3 4 5 6 begin initialize topology (# hidden units), w, criterion, (< 1), , , m 0, bji 0, bkj 0 do m m + 1 xm randomly chosen pattern bji j xi + bji ; bkj k yj + bkj wji wji + bji ; wkj wkj + bkj until J(w) < 6.8. PRACTICAL TECHNIQUES FOR BACKPROPAGATION 7 8 37 return w end 6.8.11 Weight decay One method of simplifying a network and avoiding overfitting is to impose a heuristic that the weights should be small. There is no principled reason why such a method of "weight decay" should always lead to improved network performance (indeed there are occasional cases where it leads to degraded performance) but it is found in most cases that it helps. The basic approach is to start with a network with "too many" weights (or hidden units) and "decay" all weights during training. Small weights favor models that are more nearly linear (Problems 1 & 41). One of the reasons weight decay is so popular is its simplicity. After each weight update every weight is simply "decayed" or shrunk according to: wnew = wold (1 - ), (37) where 0 < < 1. In this way, weights that are not needed for reducing the criterion function become smaller and smaller, possibly to such a small value that they can be eliminated altogether. Those weights that are needed to solve the problem cannot decay indefinitely. In weight decay, then, the system achieves a balance between pattern error (Eq. 60) and some measure of overall weight. It can be shown (Problem 43) that the weight decay is equivalent to gradient descent in a new effective error or criterion function: Jef = J(w) + 2 t w w. (38) The second term on the right hand side of Eq. 38 preferentially penalizes a single large weight. Another version of weight decay includes a decay parameter that depends upon the value of the weight itself, and this tends to distribute the penalty throughout the network: mr = /2 2 (1 + wmr ) 2. (39) We shall discuss principled methods for setting , and see how weight decay is an instance of a more general regularization procedure in Chap. ??. 6.8.12 Hints Often we have insufficient training data for adequate classification accuracy and we would like to add information or constraints to improve the network. The approach of learning with hints is to add output units for addressing an ancillary problem, one related to the classification problem at hand. The expanded network is trained on the classification problem of interest and the ancillary one, possibly simultaneously. For instance, suppose we seek to train a network to classify c phonemes based on some acoustic input. In a standard neural network we would have c output units. In learning with hints, we might add two ancillary output units, one which represents vowels and the other consonants. During training, the target vector must be lengthened to include components for the hint outputs. During classification the hint units are not used; they and their hidden-to-output weights can be discarded (Fig. 6.21). 38 CHAPTER 6. MULTILAYER NEURAL NETWORKS categories 1 2 3 c h1 hints h2 output ... hidden input Figure 6.21: In learning with hints, the output layer of a standard network having c units (discriminant functions) is augmented with hint units. During training, the target vectors are also augmented with signals for the hint units. In this way the input-to-hidden weights learn improved feature groupings. During classification the hint units are not used, and thus they and their hidden-to-output weights are removed from the trained network. The benefit provided by hints is in improved feature selection. So long as the hints are related to the classification problem at hand, the feature groupings useful for the hint task are likely to aid category learning. For instance, the feature groupings useful for distinguishing vowel sounds from consonants in general are likely to be useful for distinguishing the /b/ from /oo/ or the /g/ from / ii/ categories in particular. Alternatively, one can train just the hint units in order to develop improved hidden unit representations (Computer exercise 16). Learning with hints illustrates another benefit of neural networks: hints are more easily incorporated into neural networks than into classifiers based on other algorithms, such as the nearest-neighbor or MARS. 6.8.13 On-line, stochastic or batch training? Each of the three leading training protocols described in Sect. 6.3.2 has strengths and drawbacks. On-line learning is to be used when the amount of training data is so large, or that memory costs are so high, that storing the data is prohibitive. Most practical neural network classification problems are addressed instead with batch or stochastic protocols. Batch learning is typically slower than stochastic learning. To see this, imagine a training set of 50 patterns that consists of 10 copies each of five patterns (x1 , x2 , ..., x5 ). In batch learning, the presentations of the duplicates of x1 provide as much information as a single presentation of x1 in the stochastic case. For example, suppose in the batch case the learning rate is set optimally. The same weight change can be achieved with just a single presentation of each of the five different patterns in the batch case (with learning rate correspondingly greater). Of course, true problems do not have exact duplicates of individual patterns; nevertheless, true data sets are generally highly redundant, and the above analysis holds. For most applications -- especially ones employing large redundant training sets -- stochastic training is hence to be preferred. Batch training admits some second- 6.8. PRACTICAL TECHNIQUES FOR BACKPROPAGATION 39 order techniques that cannot be easily incorporated into stochastic learning protocols and in some problems should be preferred, as we shall see in Sect. ??. 6.8.14 Stopped training In three-layer networks having many weights, excessive training can lead to poor generalization, as the net implements a complex decision boundary "tuned" to the specific training data rather than the general properties of the underlying distributions. In training the two-layer networks of Chap. ??, we could train as long as we like without fear that it would degrade final recognition accuracy because the complexity of the decision boundary is not changed -- it is always simply a hyperplane. This example shows that the general phenomenon should be called "overfitting," and not "overtraining." Because the network weights are initialized with small values, the units operate in their linear range and the full network implements linear discriminants. As training progresses, the nonlinearities of the units are expressed and the decision boundary warps. Qualitatively speaking, stopping the training before gradient descent is complete can help avoid overfitting. In practice, the elementary criterion of stopping when the error function decreases less than some preset value (e.g., line ?? in Algorithm ??), does not lead reliably to accurate classifiers as it is hard to know beforehand what an appropriate threshold should be set. A far more effective method is to stop training when the error on a separate validation set reaches a minimum (Fig. ??). We shall explore the theory underlying this version of cross validation in Chap. ??. We note in passing that weight decay is equivalent to a form of stopped training (Fig. 6.22). w2 learning stopped initial weights w1 Figure 6.22: When weights are initialized with small magnitudes, stopped training is equivalent to a form of weight decay since the final weights are smaller than they would be after extensive training. 6.8.15 How many hidden layers? The backpropagation algorithm applies equally well to networks with three, four, or more layers, so long as the units in such layers have differentiable transfer functions. Since, as we have seen, three layers suffice to implement any arbitrary function, we 40 CHAPTER 6. MULTILAYER NEURAL NETWORKS would need special problem conditions or requirements recommend the use of more than three layers. One possible such requirement is translation, rotation or other distortion invariances. If the input layer represents the pixel image in an optical character recognition problem, we generally want such a recognizer to be invariant with respect to such transformations. It is easier for a three-layer net to accept small translations than to accept large ones. In practice, then, networks with several hidden layers distribute the invariance task throughout the net. Naturally, the weight initialization, learning rate, data preprocessing arguments apply to these networks too. The Neocognitron network architecture (Sec. 6.10.7) has many layers for just this reason (though it is trained by a method somewhat different than backpropagation). It has been found empirically that networks with multiple hidden layers are more prone to getting caught in undesirable local minima. In the absence of a problem-specific reason for multiple hidden layers, then, it is simplest to proceed using just a single hidden layer. 6.8.16 Criterion function The squared error criterion of Eq. 8 is the most common training criterion because it is simple to compute, non-negative, and simplifies the proofs of some theorems. Nevertheless, other training criteria occasionally have benefits. One popular alternate is the cross entropy which for n patterns is of the form: n c J(w)ce = m=1 k=1 tmk ln(tmk /zmk ), (40) Minkowski error where tmk and zmk are the target and the actual output of unit k for pattern m. Of course, this criterion function requires both the teaching and the output values in the range (0, 1). Regularization and overfitting avoidance is generally achieved by penalizing complexity of models or networks (Chap. ??). In regularization, the training error and the complexity penalty should be of related functional forms. Thus if the pattern error is the sum of squares, then a reasonable network penalty would be squared length of the total weight vector (Eq. 38). Likewise, if the model penalty is some description length (measured in bits), then a pattern error based on cross entropy would be appropriate (Eq. 40). e rule: w(m + 1) = dJ dw m dJ dw m-1 - dJ dw m w(m). (51) If the third- and higher-order terms in the error are non-negligible, or if the assumption of weight independence does not hold, then the computed error minimum will not equal the true minimum, and further weight updates will be needed. When a number of obvious heuristics are imposed -- to reduce the effects of estimation error when the surface is nearly flat, or the step actually increases the error -- the method can be significantly faster than standard backpropagation. Another benefit is that each weight has, in effect, its own learning rate, and thus weights tend to converge at roughly the same time, thereby reducing problems due to nonuniform learning. 6.9. *SECOND-ORDER METHODS J(w) dJ dw 43 w(m) m-1 w(m+1) dJ dw m w w* Figure 6.23: The quickprop weight update takes the error derivatives at two points separated by a known amount, and by Eq. 51 makes its next weight value. If the error can be fully expressed as a second-order function, then the weight update leads to the weight (w ) leading to minimum error. 6.9.4 Conjugate gradient descent Another fast learning method is conjugate gradient descent, which employs a series of line searches in weight or parameter space. One picks the first descent direction (for instance, determined by the gradient) and moves along that direction until the minimum in error is reached. The second descent direction is then computed: this direction -- the "conjugate direction" -- is the one along which the gradient does not change its direction, but merely its magnitude during the next descent. Descent along this direction will not "spoil" the contribution from the previous descent iterations (Fig. ??). (1 ) re ct io n w2 HI w(2) ce de s nt w n fl w2 ectio h dir aries) searc tion v line c poor ient dire d (gra H=I w di (1 w (2 ) ) st w(1) con j (gra ugate d doe dient d irectio s no n i t va rection w(2 ) ry) fl fir w1 w1 Figure 6.24: Conjugate gradient descent in weight space employs a sequence of line searches. If w(1) is the first descent direction, the second direction obeys wt (1)Hw(2) = 0. Note especially that along this second descent, the gradient changes only in magnitude, not direction; as such the second descent does not "spoil" the contribution due to the previous line search. In the case where the Hessian is diagonal (right), the directions of the line searches are orthogonal. More specifically, if we let w(m - 1) represent the direction of a line search on 44 CHAPTER 6. MULTILAYER NEURAL NETWORKS step m - 1. (Note especially that this is not an overall magnitude of change, which is determined by the line search). We demand that the subsequent direction, w(m), obey wt (m - 1)Hw(m) = 0, (52) where H is the Hessian matrix. Pairs of descent directions that obey Eq. 52 are called "conjugate." If the Hessian is proportional to the identity matrix, then such directions are orthogonal in weight space. Conjugate gradient requires batch training, since the Hessian matrix is defined over the full training set. The descent direction on iteration m is in the direction of the gradient plus a component along the previous descent direction: w(m) = -J(w (m)) + m w(m - 1), (53) and the relative proportions of these contributions is governed by . This proportion can be derived by insuring that the descent direction on iteration m does not spoil that from direction m - 1, and indeed all earlier directions. It is generally calculated in one of two ways. The first formula (Fletcher-Reeves) is m = [J(w(m))]t J(w(m)) [J(w(m - 1))]t J(w(m - 1)) (54) A slightly preferable formula (Polak-Ribiere) is more robust in non-quadratic error functions is: m = [J(w(m))]t [J(w(m)) - J(w(m - 1))] . [J(w(m - 1))]t J(w(m - 1)) (55) Equations 53 & 36 show that conjugate gradient descent algorithm is analogous to calculating a "smart" momentum, where plays the role of a momentum. If the error function is quadratic, then the convergence of conjugate gradient descent is guaranteed when the number of iterations equals the total number of weights. Example 1: Conjugate gradient descent Consider finding the miminimum of a simple quadratic criterion function centered 2 2 on the origin of weight space, J(w) = 1/2(.2w1 + w2 ) = wt Hw, where by simple .2 0 differentiation the Hessian is found to be H = 0 1 . We start descent descent at a randomly selected position, which happens to be w(0) = -8 , as shown in the figure. -4 The first descent direction is determined by a simple gradient, which is easily found to 1 (0) be -J(w(0)) = - .4w2 (0) = 3.2 . In typical complex problems in high dimensions, 8 2w the minimum along this direction is found using a line search, in this simple case the minimum can be found be calculus. We let s represent the distance along the first descent direction, and find its value for the minimum of J(w) according to: -8 3.2 +s -4 8 t d ds .2 0 01 -8 3.2 +s -4 8 =0 which has solution s = 0.562. Therefore the minimum along this direction is 6.9. *SECOND-ORDER METHODS 45 w(1) = w(0) + 0.562(-J(w(0))) -8 3.2 -6.202 = + 0.562 = . -4 8 0.496 Now we turn to the use of conjugate gradients for the next descent. The simple gradient evaluated at w(1) is -J(w(1)) = - .4w1 (1) 2w2 (1) 2.48 . -0.99 = (It is easy to verify that this direction, shown as a black arrow in the figure, does not point toward the global minimum at w = 0 .) We use the Fletcher-Reeves formula 0 (Eq. 54) to construct the conjugate gradient direction: (-2.48 .99) -2.48 7.13 [J(w(1))]t J(w(1)) .99 = = 0.096. = -3.2 [J(w(0))]t J(w(0)) 74 (-3.2 8) 8 1 = Incidentally, for this quadratic error surface, the Polak-Ribiere formula (Eq. 55) would give the same value. Thus the conjugate descent direction is w(1) = -J(w(1)) + 1 3.2 8 2.788 . -.223 = 7.5 5 2.5 )) w(0 w2 w(1) 0 -J (w( -2.5 -5 -7.5 -J ( 1)) w(0) -7.5 -5 -2.5 0 2.5 5 7.5 w1 Conjugate gradient descent in a quadratic error landscape, shown in contour plot, starts at a random point w(0) and descends by a sequence of line searches. The first direction is given by the standard gradient and terminates at a minimum of the error -- the point w(1). Standard gradient descent from w(1) would be along the black vector, "spoiling" some of the gains made by the first descent; it would, furthermore, miss the global minimum. Instead, the conjugate gradient (red vector) does not spoil the gains from the first descent, and properly passes through the global error minimum at w = 0 . 0 46 CHAPTER 6. MULTILAYER NEURAL NETWORKS As above, rather than perform a traditional line search, we use calculus to find the error minimum along this second descent direction: d t [w(1) + sw(1)] H [w(1) + sw(1)] ds t .2 0 -6.202 2.788 -6.202 2.788 +s +s 01 0.496 -.223 0.496 -.223 = = 0 d ds which has solution s = 2.231. This yields the next minimum to be -6.202 2.788 + 2.231 0.496 -.223 0 . 0 w(2) = w(1) + sw(1) = = Indeed, the conjugate gradient search finds the global minimum in this quadratic error function in two search steps -- the number of dimensions of the space. 6.10 *Additional networks and training methods The elementary method of gradient descent used by backpropagation can be slow, even with straightforward improvements. We now consider some alternate networks and training methods. 6.10.1 Radial basis function networks (RBF) We have already considered several classifiers, such as Parzen windows, that employ densities estimated by localized basis functions such as Gaussians. In light of our discussion of gradient descent and backpropagation in particular, we now turn to a different method for training such networks. A radial basis function network with linear output unit implements nH zk (x) = j=0 wkj j (x). (56) where we have included a j = 0 bias unit. If we define a vector whose components are the hidden unit outputs, and a matrix W whose entries are the hidden-to-output weights, then Eq. 56 can be rewritten as: z(x) = W. Minimizing the criterion function J(w) = 1 (y(xm ; w) - tm )2 2 m=1 n (57) is formally equivalent to the linear problem we saw in Chap. ??. Weous translation constraint is also imposed between the hidden and output layer units. 6.10.4 Recurrent networks Up to now we have considered only networks which use feedforward flow of information during classification; the only feedback flow was of error signals during training. Now we turn to feedback or recurrent networks. In their most general form, these have found greatest use in time series prediction, but we consider here just one specific type of recurrent net that has had some success in static classification tasks. Figure 6.26 illustrates such an architecture, one in which the output unit values are fed back and duplicated as auxiliary inputs, augmenting the traditional feature values. During classification, a static pattern x is presented to the input units, the feedforward flow computed, and the outputs fed back as auxiliary inputs. This, in turn, leads to a different set of hidden unit activations, new output activations, and so on. Ultimately, the activations stabilize, and the final output values are used for classification. As such, this recurrent architecture, if "unfolded" in time, is equivalent to the static network shown at the right of the figure, where it must be emphasized that many sets of weights are constrained to be the same (weight sharing), as indicated. This unfolded representation shows that recurrent networks can be trained via standard backpropagation, but with the weight sharing constraint imposed, as in TDNNs. hid de n ou tpu t 6.10. *ADDITIONAL NETWORKS AND TRAINING METHODS 49 z(3) wkj y(3) wji z(2) wkj y(2) wji z(1) y(1) x x x wkj wji z y x Figure 6.26: The form of recurrent network most useful for static classification has the architecture shown at the bottom, with the recurrent connections in red. It is functionally equivalent to a static network with many hidden layers and extensive weight sharing, as shown above. Note that the input is replicated. 6.10.5 Counterpropagation Occasionally, one wants a rapid prototype of a network, yet one that has expressive power greater than a mere two-layer network. Figure 6.27 shows a three-layer net, which consists of familiar input, hidden and output layers. When one is learning the weights for a pattern in category i , In this way, the hidden units create a Voronoi tesselation (cf. Chap. ??), and the hidden-to-output weights pool information from such centers of Voronoi cells. The processing at the hidden units is competitive learning (Chap. ??). The speedup in counterpropagation is that only the weights from the single most active hidden unit are adjusted during a pattern presentation. While this can yield suboptimal recognition accuracy, counterpropagation can be orders of magnitude faster than full backpropagation. As such, it can be useful during preliminary data exploration. Finally, the learned weights often provide an excellent starting point for refinement by subsequent full training via backpropagation. It is called "counterpropagation" for an earlier implementation that employed five layers with signals that passed bottom-up as well as top-down. 50 CHAPTER 6. MULTILAYER NEURAL NETWORKS Figure 6.27: The simplest version of a counterpropagation network consists of three layers. During training, an input is presented and the most active hidden unit is determined. The only weights that are modified are the input-to-hidden weights leading to this most active hidden unit and the single hidden-to-output weight leading to the proper category. Weights can be trained using an LMS criterion. 6.10.6 Cascade-Correlation The central notion underlying the training of networks by cascade-correlation is quite simple. We begin with a two-layer network and train to minimum of an LMS error. If the resulting training error is low enough, training is stopped. In the more common case in which the error is not low enough, we fix the weights but add a single hidden unit, fully connected from inputs and to output units. Then these new weights are trained using an LMS criterion. If the resulting error is not sufficiently low, yet anoe-correlation and counterpropagation are generally faster than backpropagation. Complexity adjustment: weight decay, Wald statistic, which for networks is optimal brain damage and optimal brain surgeon, which use the second-order approximation to the true saliency as a pruning criterion. 56 CHAPTER 6. MULTILAYER NEURAL NETWORKS Bibliographical and Historical Remarks McCulloch and Pitts provided the first principled mathematical and logical treatment of the behavior of networks of simple neurons [49]. This pioneering work addressed non-recurrent as well as recurrent nets (those possessing "circles," in their terminology), but not learning. Its concentration on all-or-none or threshold function of neurons indirectly delayed the consideration of continuous valued neurons that would later dominate the field. These authors later wrote an extremely important paper on featural mapping (cf. Chap. ??), invariances, and learning in nervous systems and thereby advanced the conceptual development of pattern recognition significantly [56]. Rosenblatt's work on the (two-layer) Perceptron (cf. Chap. ??) [61, 62] was some of the earliest to address learning, and was the first to include rigorous proofs about convergence. A number of stochastic methods, including Pandemonium [66, 67], were developed for training networks with several layers of processors, though in keeping with the preoccupation with threshold functions, such processors generally computed logical functions (AND or OR), rather than some continuous functions favored in later neural network research. The limitations of networks implementing linear discriminants -- linear machines -- were well known in the 1950s and 1960s and discussed by both their promoters [62, cf., Chapter xx, "Summary of Three-Layer Series-Coupled Systems: Capabilities and Deficiencies"] and their detractors [51, cf., Chapter 5, "CON N ECT ED : A Geometric Property with Unbounded Order"]. A popular early method was to design by hand three-layer networks with fixed input-to-hidden weights, and then train the hidden-to-output weight [80, for a review]. Much of the difficulty in finding learning algorithms for all layers in a multilayer neural network came from the prevalent use of linear threshold units. Since these do not have useful derivatives throughout their entire range, the current approach of applying the chain rule for derivatives and the resulting "backpropagation of errors" did not gain more adherents earlier. The development of backpropagation was gradual, with several steps, not all of which were appreciated or used at the time. The earliest application of adaptive methods that would ultimately become backpropagation came from the field of control. Kalman filtering from electrical engineering [38, 28] used an analog error (difference between predicted and measured output) for adjusting gain parameters in predictors. Bryson, Denham and Dreyfus showed how Lagrangian methods could train multilayer networks for control, as described in [6]. We saw in the last chapter the work of Widrow, Hoff and their colleagues [81, 82] in using analog signals and the LMS training criterion applied to pattern recognition in two-layer networks. Werbos [77][78, Chapter 2], too, discussed a method for calculating the derivatives of a function based on a sequence of samples (as in a time series), which, if interpreted carefully carried the key ideas of backpropagation. Parker's early "Learning logic" [53, 54], developed independently, showed how layers of linear units could be learned by a sufficient number of input-output pairs. This work lacked simulations on representative or challenging problems (such as XOR) and was not appreciated adequately. Le Cun independently developed a learning algorithm for three-layer networks [9, in French] in which target values are propagated, rather than derivatives; the resulting learning algorithm is equivalent to standard backpropagation, as pointed out shortly thereafter [10]. Without question, the paper by Rumelhart, Hinton and Williams [64], later expanded into a full and readable chapter [65], brought the backpropagation method to the attention of the widest audience. These authors clearly appreciated the power of 6.11. BIBLIOGRAPHICAL AND HISTORICAL REMARKS 57 the method, demonstrated it on key tasks (such as the exclusive OR), and applied it to pattern recognition more generally. An enormous number of papers and books of applications -- from speech production and perception, optical character recognition, data mining, finance, game playing and much more -- continues unabated. One novel class of for such networks includes generalization for production [20, 21]. One view of the history of backpropagation is [78]; two collections of key papers in the history of neural processing more generally, including many in pattern recognition, are [3, 2]. Clear elementary papers on neural networks can be found in [46, 36], and several good textbooks, which differ from the current one in their emphasis on neural networks over other pattern recognition techniques, can be recommended [4, 60, 29, 27]. An extensive treatment of the mathematical aspects of networks, much of which is beyond that needed for mastering the use of networks for pattern classification, can be found in [19]. There is continued exploration of the strong links between networks and more standard statistical methods; White presents and overview [79], and books such as [8, 68] explore a number of close relationships. The important relation of multilayer Perceptrons to Bayesian methods and probability estimation can be found in [23, 59, 43, 5, 13, 63, 52].posterior probability!and backpropagation Original papers on projection pursuit and MARS, can be found in [15] and [34], respectively, and a good overview in [60]. Shortly after its wide dissemination, the backpropagation algorithm was criticized for its lack of biological plausibility; in particular, Grossberg [22] discussed the non-local nature of the algorithm, i.e., that synaptic weight values were transported without physical means. Somewhat later Stork devised a local implementation of backpropagation was [71, 45], and pointed out that it was nevertheless highly implausible as a biological model. The discussions and debates over the relevance of Kolmogorov's Theorem [39] to neural networks, e.g. [18, 40, 41, 33, 37, 12, 42], have centered on the expressive power. The proof of the univerasal expressive power of three-layer nets based on bumps and Fourier ideas appears in [31]. The expressive power of networks having non-traditional transfer functions was explored in [72, 73] and elsewhere. The fact that three-layer networks can have local minima in the criterion function was explored in [50] and some of the properties of error surfaces illustrated in [35]. The Levenberg-Marquardt approximation and deeper analysis of second-order methods can be found in [44, 48, 58, 24]. Three-layer networks trained via cascadecorrelation have been shown to perform well compared to standard three-layer nets trained via backpropagation [14]. Our presentation of counterpropagation networks focussed on just three of the five layers in a full such network [30]. Although there was little from a learning theory new presented in Fukushima's Neocognitron [16, 17], its use of many layers and mixture of hand-crafted feature detectors and learning groupings showed how networks could address shift, rotation and scale invariance. Simple method of weight decay was introduced in [32], and gained greater acceptance due to the work of Weigend and others [76]. The method of hints was introduced in [1]. While the Wald test [74, 75] has been used in traditional statistical research [69], its application to multilayer network pruning began with the work of Le Cun et al's Optimal Brain Damage method [11], later extended to include non-diagonal Hessian matrices [24, 25, 26], including some speedup methods [70]. A good review of the computation and use of second order derivatives in networks can be found in [7] and of pruning algorithms in [58]. 58 CHAPTER 6. MULTILAYER NEURAL NETWORKS Problems Section 6.2 1. Show that if the transfer function of the hidden units is linear, a three-layer network is equivalent to a two-layer one. Explain why, therefore, that a three-layer network with linear hidden units cannot solve a non-linearly separable problem such as XOR or n-bit parity. 2. Fourier's Theorem can be used to show that a three-layer neural net with sigmoidal hidden units can approximate to arbitrary accuracy any posterior function. Consider two-dimensional input and a single output, z(x1 , x2 ). Recall that Fourier's Theorem states that, given weak restrictions, any such functions can be written as a possibly infinite sum of cosine functions, as z(x1 , x2 ) f1 f2 Af1 f2 cos(f1 x1 ) cos(f2 x2 ), with coefficients Af1 f2 . (a) Use the trigonometric identity cos cos = 1 1 cos( + ) + cos( - ) 2 2 to write z(x1 , x2 ) as a linear combination of terms cos(f1 x1 + f2 x2 ) and cos(f1 x1 - f2 x2 ). (b) Show that cos(x) or indeed any continuous function f (x) can be approximated to any accuracy by a linear combination of sign functions as: N f (x) f (x0 ) + i=0 [f (xi+1 ) - f (xi )] 1 + Sgn(x - xi ) , 2 where the xi are sequential values of x; the smaller xi+1 - xi , the better the approximation. (c) Put your results together to show that z(x1 , x2 ) can be expressed as a linear combination of step functions or sign functions whose arguments are themselves linear combinations of the input variables x1 and x2 . Explain, in turn, why this implies that a three-layer network with sigmoidal hidden units and a linear output unit can implement any function that can be expressed by a Fourier series. (d) Does your construction guarantee that the derivative df (x)/dx can be well approximated too? Section 6.3 3. Consider an d - nH - c network trained with n patterns for me epochs. (a) What is the space complexity in this problem? (Consider both the storage of network parameters as well as the storage of patterns, but not the program itself.) 6.11. PROBLEMS 59 (b) Suppose the network is trained in stochastic mode. What is the time complexity? Since this is dominated by the number of multiply-accumulations, use this as a measure of the time complexity. (c) Suppose the network is trained in batch mode. What is the time complexity? 4. Prove that the formula for the sensitivity for a hidden unit in a three-layer net (Eq. 20) generalizes to a hidden unit in a four- (or higher-) layer network, where the sensitivity is the weighted sum of sensitivities of units in the next higher layer. 5. Explain in words why the backpropagation rule for training input-to-hidden weights makes intuitive sense by considering the dependency upon each of the terms in Eq. 20. 6. One might reason that the the dependence of the backpropagation learning rules (Eq. ??) should be roughly inversely related to f (net); i.e., that weight change should be large where the output does not vary. In fact, of course, the learning rule is linear in f (net). What, therefore, is wrong with the above view? 7. Show that the learning rule described in Eqs. 16 & 20 works for bias, where x0 = y0 = 1 is treated as another input and hidden unit. 8. Consider a standard three-layer backpropagation net with d input units, nH hidden units, c output units, and bias. (a) How many weights are in the net? (b) Consider the symmetry in the value of the weights. In particular, show that if the sign if flipped on every weight, the network function is unaltered. (c) Consider now the hidden unit exchange symmetry. There are no labels on the hidden units, and thus they can be exchanged (along with corresponding weights) and leave network function unaffected. Prove that the number of such equivalent labellings -- the exchange symmetry factor -- is thus nH 2nH . Evaluate this factor for the case nH = 10. 9. Using the style of procedure, write the procedure for on-line version of backpropagation training, being careful to distinguish it from stochastic and batch procedures. 10. Express the derivative of a sigmoid in terms of the sigmoid itself in the following two cases (for positive constants a and b. . . . . 7.6 *Genetic Programming . . . . . . . . . . . . . . . . . . Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliographical and Historical Remarks . . . . . . . . . . . Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . Computer exercises . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 3 4 5 5 8 9 11 12 13 17 20 20 20 23 25 27 27 31 31 32 33 35 36 41 42 47 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 2 CONTENTS Chapter 7 Stochastic Methods 7.1 Introduction the general approach is to specify model having parameters L seen, estimate their values from training adata. When the one or morefairly simple and then models are and of low dimension, we can use analytic methods such as computing derivatives and performing gradient descent to find optimal model parameters. If the models are somewhat more complicated, we may calculate local derivatives and use gradient methods, as in neural networks and some maximum-likelihood problems. In most high-dimensional and complicated models, there are multiple maxima and we must use a variety of tricks -- such as performing the search multiple times from different starting conditions -- to have any confidence that an acceptable local maximum has been found. These methods become increasingly unsatisfactory as the models become more complex. A naive approach -- exhaustive search through solution space -- rapidly gets out of hand and is completely impractical for real-world problems. The more complicated the model, the less the prior knowledge, and the less the training data, the more we must rely on sophisticated search for finding acceptable model parameters. In this chapter we consider stochastic methods for finding parameters, where randomness plays a crucial role in search and learning. The general approach is to bias the search toward regions where we expect the solution to be and allow randomness -- somehow -- to help find good parameters, even in very complicated models. We shall consider two general classes of such methods. The first, exemplified by Boltzmann learning, is based on concepts and techniques from physics, specifically statistical mechanics. The second, exemplified by genetic algorithms, is based on concepts from biology, specifically the mathematical theory of evolution. The former class has a highly developed and rigorous theory and many successes in pattern recognition; hence it will command most of our effort. The latter class is more heuristic yet affords flexibility and can be attractive when adequate computational resources are available. We shall generally illustrate these techniques in cases that are simple, and which might also be addressed with standard gradient procedures; nevertheless we emphasize that these stochastic methods may be preferable in complex problems. 3 earning plays a central role in the construction of pattern classifiers. As we have 4 CHAPTER 7. STOCHASTIC METHODS The methods have high computational burden, and would be of little use without computers. 7.2 Stochastic search energy We begin by discussing an important and general quadratic optimization problem. Analytic approaches do not scale well to large problems, however, and thus we focus here on methods of search through different candidate solutions. We then consider a form of stochastic search that finds use in learning for pattern recognition. Suppose we have a large number of variables si , i = 1, . . . , N where each variable can take one of two discrete values, for simplirgetically unfavorable and the full system explores configurations that have high energy. Annealing proceeds by gradually lowering the temperature of the system -- ultimately toward zero and thus no randomness -- so as to allow the system to relax into a low-energy configuration. Such annealing is effective because even at moderately high temperatures, the system slightly favors regions in the configuration space that are overall lower in energy, and hence are more likely to contain the global minimum. the As temperature is lowered, the system has increased probability of finding the optimum configuration. This method is successful in a wide range of energy functions or energy "landscapes," though there are pathological cases such as the "golf course" landscape in Fig. 7.2 where it is unlikely to succeed. Fortunately, the problems in learning we shall consider rarely involve such pathological functions. 7.2.2 The Boltzmann factor The statistical properties of large number of interacting physical components at a temperature T , such as molecules in a gas or magnetic atoms in a solid, have been thoroughly analyzed. A key result, which relies on a few very natural assumptions, is visible -1 6 E CHAPTER 7. STOCHASTIC METHODS E x2 x1 x2 x1 Figure 7.2: The energy function or energy "landscape" on the left is meant to suggest the types of optimization problems addressed by simulated annealing. The method uses randomness, governed by a control parameter or "temperature" T to avoid getting stuck in local energy minima and thus to find the global minimum, like a small ball rolling in the landscape as it is shaken. The pathological "golf course" landscape at the right is, generally speaking, not amenable to solution via simulated annealing because the region of lowest energy is so small and is surrounded by energetically unfavorable configurations. The configuration space of the problems we shall address are discrete and thus the continuous x1 - x2 space shown here is a bit misleading. the following: the probability the system is in a (discrete) configuration indexed by having energy E is given by P () = Boltzmann factor partition function e-E /T , Z(T ) (2) where Z is a normalization constant. The numerator is the Boltzmann factor and the denominator the partition function, the sum over all possible configurations Z(T ) = e-E /T , (3) which guarantees Eq. 2 represents a true probability. The number of configurations is very high, 2N , and in physical systems Z can be calculated only in simple cases. Fortunately, we need not calculate the partition function, as we shall see. Because of the fundamental importance of the Boltzmann factor in our discussions, it pays to take a slight detour to understand it, at least in an informal way. Consider a different, but nontheless related system: one consisting of a large number of non-interacting magnets, that is, without interconnecting weights, in a uniform external magnetic field. If a magnet is pointing up, si = +1 (in the same direction as the field), it contributes a small positive energy to the total system; if the magnet is pointing down, a small negative energy. The total energy of the collection is thus proportional to the total number of magnets pointing up. The probability the system has a particular total energy is related to the number of configurations that have that energy. Consider the highest energy configuration, with all magnets pointing up. There is only N = 1 configuration that has this energy. The next to highest N In the Boltzmann factor for physical systems there is a "Boltzmann constant" which converts a temperature into an energy; we can ignore this factor by scaling the temperature in our simulations. 7.2. STOCHASTIC SEARCH 7 energy comes with just a single magnet pointing down; there are N = N such con1 figurations. The next lower energy configurations have two magnets pointing down; there are N = N (N - 1)/2 of these configurations, and so on. The number of states 2 declines exponentially with increasing energy. Because of the statistical independence of the magnets, for large N the probability of finding the state in energy E also decays exponentially (Problem 7). In sum, then, the exponential form of the Boltzmann factor in Eq. 2 is due to the exponential decrease in the number of accessible configurations with increasing energy. Further, at high temperature there is, roughly speaking, more energy available and thus an increased probability of higher-energy states. This describes qualitatively the dependence of the probability upon T in the Boltzmann factor -- at high T , the probability is distributed roughly evenly among all configurations while at low T , it is concentrated at the lowest-energy configurations. If we move from the collection of independent magnets to the case of magnets interconnected by weights, the situation is a bit more complicated. Now the energy associated with a magnet pointing up or down depends upon the state of others. Nonetheless, in the case of large N , the number of configurations decays exponentially with the energy of the configuration, as described by the Boltzmann factor of Eq. 2. Simulated annealing algorithm The above discussion and the physical analogy suggest the following simulated annealing method for finding the optimum configuration to our general optimization problem. Start with randomized states throughout the network, si (1), and select a high initial "temperature" T (1). (Of course in the simulation T is merely a control parameter which will control the randomness; it is not a true physical temperature.) Next, choose a node i randomly. Suppose its state is si = + 1. Calculate the system energy in this configuration, Ea ; next recalculate the energy, Eb , for a candidate new state si = - 1. If this candidate state has a lower energy, accept this change in state. If however the energy is higher, accept this change with a probability equal to e-Eab /T , (4) where Eab = Eb -Ea . This occasional acceptance of a state that is energetically less favorable is crucial to the success of simulated annealing, and is in marked distinction to naive gradient descent and the greedy approach mentioned above. The key benefit is that it allows the system to jump out of unacceptable local energy minima. For example, at very high temperatures, every configuration has a Boltzmann factor e-E/T e0 roughly equal. After normalization by the partition function, then, every configuration is roughly equally likely. This implies every node is equally likely to be in either of its two states (Problem 6). The algorithm continues polling (selecting and testing) the nodes randomly several times and setting their states in this way. Next lower the temperature and repeat the polling. Now, according to Eq. 4, there will be a slightly smaller probability that a candidate higher energy state will be accepted. Next the algorithm polls all the nodes until each node has been visited several times. Then the temperature is lowered further, the polling repeated, and so forth. At very low temperatures, the probability that an energetically less favorable state will be accepted is small, and thus the search becomes more like a greedy algorithm. Simulated annealing terminates when the temperature is very low (near zero). If this cooling has been sufficiently slow, the system then has a high probability of being in a low energy state -- hopefully the global energy minimum. polling 8 CHAPTER 7. STOCHASTIC METHODS Because it is the difference in energies between the two states that determines the acceptance probabilities, we need only consider nodes connected to the one being polled -- all the units not connected to the polled unit are in the same state and contribute the same total amount to the full energy. We let Ni denote the set of nodes connected with non-zero weights to node i; in a fully connected net would include the complete set of N - 1 remaining nodes. Further, we let Rand[0, 1) denote a randomly selected positive real number less than 1. With this notation, then, the randomized or stochastic simulated annealing algorithm is: Algorithm 1 (Stochastic simulated annealing) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 begin initialize T (k), kmax , si (1), wij for i, j = 1, . . . , N k0 do k k + 1 do select node i randomly; suppose its state is si Ea -1/2 Ni j wij si sj Eb -Ea if Eb < Ea then si -si else if e-(Eb -Ea )/T (k) > Rand[0, 1) then si -si until all nodes polled several times until k = kmax or stopping criterion met return E, si , for i = 1, . . . , N end annealing schedule Because units are polled one at a time, the algorithm is occasionally called sequential simulated annealing. Note that in line 5, we define Ea based only on those units connected to the polled one -- a slightly different convention than in Eq. 1. Changing the usage in this way has no effect, since in line 9 it is the difference in energies that determines transition probabilities. There are several aspects of the algorithm that must be considered carefully, in particular the starting temperature, ending temperature, the rate at which the temperature is decreased and the stopping criterion. This function is called the cooling schedule or more frequently the annealing schedule, T (k), where k is an iteration index. We demand T (1) to be sufficiently high that all configurations have roughly equal probability. This demands the temperature be larger than the maximum difference in energy between any configurations. Such a high temperature allows the system to move to any configuration which may be needed, since the random initial configuration may be far from the optimal. The decrease in temperature must be both gradual and slow enough that the system can move to any part of the state space before being trapped in an unacceptable local minimum, points we shall consider below. At the very least, annealing must allow N/2 transitions, since a global optimum never differs from any configuration by more than this number of steps. (In practice, annealing can require polling several orders of magnitude more times than this number.) The final temperature must be low enough (or equivalently kmax must be large enough or a stopping criterion must be good enough) that there is a negligible probability that if the system is in a global minimum it will move out. Figure 7.3 shows that early in the annealling process when the temperature is high, the system explores a wide range of configurations. Later, as the temperature 7.2. STOCHASTIC SEARCH 9 is lowered, only states "close" to the global minimum are tested. Throughout the process, each transition corresponds to the change in state of a single unit. A typical choice of annealing schedule is T (k + 1) = cT (k) with 0 < c < 1. If computational resources are of no concern, a high initial temperature, large c < 1, and large kmax are most desirable. Values in the range 0.8 < c < 0.99 have been found to work well in many real-world problems. In practice the algorithm is slow, requiring many iterations and many passes through all the nodes, though for all but the smallest problems it is still faster than exhaustive search (Problem 5). We shall revisit the issue of parameter setting in the context of learning in Sect. 7.3.4. While Fig. 7.3 displayed a single trajectory through the configuration space, a more relevant property is the probability of being in a configuration as the system is annealed gradually. Figure 7.4 shows such probability distributions at four temperatures. Note especially that at the final, low temperature the probability is concentrated at the global minima, as desired. While this figure shows that for positive temperature all states have a non-zero probability of being visited, we must recognize that only a small fraction of configurations are in fact visited in any anneal. In short, in the vast majority of large problems, annealing does not require that all configurations be explored, and hence it is more efficient than exhaustive search. 7.2.3 Deterministic simulated annealing Stochastic simulated annealing is slow, in part because of the discrete nature of the search through the space of all configurations, i.e., an Non, and it is worthwhile explaining it carefully. Figure 7.9 illustrates in greater detail the learning of the single training pattern in Fig. 7.8. Because s1 and s2 are clamped throughout, EQ [s1 s2 ]i o clamped = 1 = E[s1 s2 ]i clamped , and thus the weight w12 is not changed, as indeed given by Eq. 14. Consider a more general case, involving s1 and s7 . During the learning phase both units are clamped at +1 and thus the correlation is EQ [s1 s7 ] = +1. During the unlearning phase, the output s7 is free to vary and the correlation is lower; in fact it happens to be negative. Thus, the learning rule seeks to increase the magnitude of w17 so that the input s1 = +1 leads to s7 = +1, as can be seen in the matrix on the right. Because hidden units are only weakly correlated (or anticorrelated), the weights linking hidden units are changed only slightly. In learning a training set of many patterns, each pattern is presented in turn, and the weights updated as just described. Learning ends when the actual output matches the desired output for all patterns (cf. Sect. 7.3.4). 7.3. BOLTZMANN LEARNING 17 before training E 1 output 2 s6 s3 s1 - + s7 s5 s2 s7 s6 s5 s4 s3 s2 s1 after training s4 -----++ ----+++ ---++++ ---+-++ --++-++ --+++++ --+-+++ --+--++ -++--++ -++-+++ -++++++ -+++-++ -+-+-++ -+-++++ -+--+++ -+---++ ++---++ ++--+++ ++-++++ ++-+-++ ++++-++ +++++++ +++-+++ +++--++ +-+--++ +-+-+++ +-+++++ +-++-++ +--+-++ +--++++ +---+++ +----++ +-...++ + input + training pattern these configurations become more probable after training Figure 7.8: The fully connected seven-unit network at the left is being trained via the Boltzmann learning algorithm with the input pattern s1 = +1, s2 = +1, and the output values s6 = -1 and s7 = +1, representing categories 1 and 2 , respectively. All 25 = 32 configurations with s1 = +1, s2 = +1 are shown at the right, along with their energy (Eq. 1). The black curve shows the energy before training; the red curve shows the energy after training. Note particularly that after training all configurations that represent the full training pattern have been lowered in energy, i.e., have become more probable. Consequently, patterns that do not represent the training pattern become less probable after training. Thus, after training, if the input pattern s1 = +1, s2 = +1 is presented and the remaining network annealed, there is an increased chance of yielding s6 = -1, s7 = +1, as desired. 7.3.2 Missing features and category constraints A key benefit of Boltzmann training (including its preferred implementation, described in Sect. 7.3.3, below) is its ability to deal with missing features, both during training and during classification. If a deficient binary pattern is used for training, input units corresponding to missing features are allowed to vary -- they are temporarily treated as (unclamped) hidden units rather than clamped input units. As a result, during annealing such units assume values most consistent with the rest of the input pattern and the current state of the network (Problem 14). Likewise, when a deficient pattern is to be classified, any units corresponding to missing input features are not clamped, and are allowed to assume any value. Some subsidiary knowledge or constraints can be incorporated into a Boltzmann network during classification. Suppose in a five-category problem it is somehow known that a test pattern is neither in category 1 nor 4 . (Such constraints could come from context or stages subsequent to the classifier itself.) During classification, then, the output units corresponding to 1 and 4 are clamped at si = -1 during the anneal, and the final category read as usual. Of course in this example the possible categories are then limited to the unclamped output units, for 2 , 3 and 5 . Such constraint imposition may lead to an improved classification rate (Problem 15). 18 CHAPTER 7. STOCHASTIC METHODS -1 -0.5 EQ[s1 s2]i o clamped s1 s 2 s 3 s4 s5 s6 s7 s1 s2 s3 s4 s5 s6 s7 s1 s2 s3 s4 s5 s6 s7 0 +0.5 +1 w = EQ[s1 s2]i o clamped - E[s1 s2]i clamped s 1 s2 s3 s4 s5 s6 s7 s1 s2 s3 s4 s5 s6 s7 E[s1 s2]i clamped s1 s2 s3 s4 s5 s6 s7 learning unlearning weight update Figure 7.9: Boltzmann learning of a single pattern is illustrated for the seven-node network of Fig. 7.8. The (symmetric) matrix on the left shows the correlation of units for the learning component, where the input units are clamped to s1 = +1, s2 = +1, and the outputs to s6 = -1, s7 = +1. The middle matrix shows the unlearning component, where the inputs are clamped but outputs are free to vary. The difference between those matrices is shown on the right, and is proportional to the weight update (Eq. 14). Notice, for instance, that because the correlation between s1 and s2 is large in both the learning and unlearning components (because those variables are clamped), there is no associated weight change, i.e., w12 = 0. However, strong correlations between s1 and s7 in the learning but not in the unlearning component implies that the weight w17 should be increased, as can be seen in the weight update matrix. Pattern completion The problem of pattern completion is to estimate the full pattern given just a part of that pattern; as such, it is related to the problem of classification with missing features. Pattern completion is naturally addressed in Boltzmann networks. A fully interconnected network, with or without hidden units, is trained with a set of representative patterns; as before, the visible units correspond to the feature components. When a deficient pattern is presented, a subset of the visible units are clamped to the components of a partial pattern, and the network annealed. The estimate of the unknown features appears on the remaining visible units, as illustrated in Fig. 7.10 (Computer exercise 3). Such pattern completion in Boltzmann networks can be more accurate when known category information is imposed at the output units. Boltzmann networks without hidden or category units are related to so-called Hopfield networks or Hopfield auto-association networks (Problem 12). Such networks store patterns but not their category labels. The learning rule for such networks does not require the full Boltzmann learning of Eq. 14. Instead, weights are set to be proportional to the correlation of the feature vectors, averaged over the training set, Hopfield network 7.3. BOLTZMANN LEARNING 19 learned patterns s1 s2 s4 s6 s3 s1 s8 s9 hidden s10 s11 s12 deficient pattern presented pattern completed by network s5 s 7 s2 s3 s4 s5 s6 s7 - visible + Figure 7.10: A Boltzmann network can be used for pattern completion, i.e., filling in unknown features of a deficient pattern. Here, a twelve-unit network with five hidden units has been trained with the 10 numeral patterns of a seven-segment digital display. The diagram at the lower left shows the correspondence between the display segments and nodes of the network; a black segment is represented by a +1 and a light gray segment as a -1. Consider the deficient pattern consisting of s2 = -1, s5 = +1. If these units are clamped and the full network annealed, the remaining five visible units will assume values most probable given the clamped ones, as shown at the right. wij EQ [si sj ], (15) with wii = 0; further, there is no need to consider temperature. Such learning is of course much faster than true Boltzmann learning using annealing. If a network fully trained by Eq. 15 is nevertheless annealed, as in full Boltzmann learning, there is no guarantee that the equilibrium correlations in the learning and unlearning phases are equal, i.e., that wij = 0 (Problem 13). The successes of such Hopfield networks in true pattern recognition have been modest, partly because the basic Hopfield network does not have as natural an output representation for categorization problems. Occassionally, though they can be used in simple low-dimensional pattern completion or auto-association problems. One of their primary drawbacks is their limited capacity, analogous to the fact that a two-layer network cannot implement arbitrary decision boundaries as can a thnegative to all other output units. The resulting internal representation is closely related to that in the probabilistic neural network implementation of Parzen windows (Chap. ??). Naturally, this representation is undesirable as the number of weights grows exponentially with the number of patterns. Training becomes slow; furthermore generalization tends to be poor. Since the states of the hidden units are binary valued, and since it takes log2 n bits to specify n different items, there must be at least log2 n hidden units if there is to be a distinct hidden configuration for each of the n patterns. Thus a lower bound on the number of hidden units is log2 n , which is necessary for a distinct hidden configuration for each pattern. Nevertheless, this bound need not be tight, as there may be no set of weights insuring a unique representation (Problem 16). Aside from these bounds, it is hard to make firm statements about the number of hidden units needed -- this number depends upon the inherent difficulty of the classification problem. It is traditional, then, to start with a somewhat large net and use weight decay. Much as we saw in backpropagation (Chap. ??), a Boltzmann network with "too many" hidden units and weights can be improved by means of weight decay. During training, a small increment is added to wij when si and sj are both positive or both negative during learning phase, but subtracted in the unlearning phase. It is traditional to decrease throughout training. Such a version of weight decay tends to reduce the effects on the weights due to spurious random correlations in units and to eliminate unneeded weights, thereby improving generalization. One of the benefits of Boltzmann networks over backpropagation networks is that "too many" hidden units in a backpropagation network tend to degrade performance more than "too many" in a Boltzmann network. This is because during learning, there is stochastic averaging over states in a Boltzmann network which tends to smooth decision boundaries; backpropagation networks have no such equivalent averaging. Of course, this averaging comes at a higher computational burden for Boltzmann networks. The next matter to consider is weight initialization. Initializing all weights to zero is acceptable, but leads to unnecessarily slow learning. In the absence of information otherwise, we can expect that roughly half the weights will be positive and half negative. In a network with fully interconnected hidden units there is nothing to differentiate the individual hidden units; thus we can arbitrarily initialize roughly half of the weights to have positive values and the rest negative. Learning speed is increased if weights are initialized with random values within a proper range. Assume a fully interconnected network having N units (and thus N - 1 N connections to each unit). Assume further that at any instant each unit has an equal chance of being in state si = +1 or si = -1. We seek initial weights that will make the net force on each unit a random variable with variance 1.0, roughly the useful range shown in Fig. 7.5. This implies weights should be initialized randomly throughout the range - 3/N < wij < +1 3/N (Problem 17). As mentioned, annealing schedules of the form T (k + 1) = cT (k) for 0 < c < 1 are generally used, with 0.8 < c < 0.99. If a very large number of iterations -- several thousand -- are needed, even c = 0.99 may be too small. In that case we can write c = e-1/k0 , and thus 22 CHAPTER 7. STOCHASTIC METHODS T (k) = T (1)e-k/k0 , and k0 can be interpreted as a decay constant. The initial temperature T (1) should be set high enough that virtually all candidate state transitions are accepted. While this condition can be insured by choosing T (1) extremely high, in order to reduce training time we seek the lowest adequate value of T (1). A lower bound on the acceptable initial temperature depends upon the problem, but can be set empirically by monitoring state transitions in short simulations at candidate temperatures. Let m1 be the number of energy-decreasing traemporal scales. Correlations are learned by the weights linking the hidden units, here labeled E. It is somewhat more difficult to train linked Hidden Markov Models to learn structure at different time scales. mentation on massively parallel computers. In broad overview, such methods proceed as follows. First, we create several classifiers -- a population -- each varying somewhat from the other. Next, we judge or score each classifier on a representative version of the classification task, such as accuracy on a set of labeled examples. In keeping with the analogy with biological evolution, the resulting (scalar) score is sometimes called the fitness. Then we rank these classifiers according to their score and retain the best classifiers, some portion of the total population. Again, in keeping with biological terminology, this is called survival of the fittest. We now stochastically alter the classifiers to produce the next generation -- the children or offspring. Some offspring classifiers will have higher scores than their parents in the previous generation, some will have lower scores. The overall process is then repeated for subsequent generation: the classifiers are scored, the best ones retained, randomly altered to give yet another generation, and so on. In part because of the ranking, each generation has, on average, a slightly higher score than the previous one. The process is halted when the single best classifier in a generation has a score that exceeds a desired criterion value. The method employs stochastic variations, and these in turn depend upon the fundamental representation of each classifier. There are two primary representations we shall consider: a string of binary bits (in basic genetic algorithms), and snippets of computer code (in genetic programming). In both cases, a key property is that population score fitness survival of the fittest offspring parent .. . . . .. .. . . . C .. . . . 7.5. *EVOLUTIONARY METHODS 27 occasionally very large changes in classifier are introduced. The presence of such large changes and random variations implies that evolutionary methods can find good classifiers even in extremely complex discontinuous spaces or "fitness landscapes" that are hard to address by techniques such as gradient descent. 7.5.1 Genetic Algorithms chromosome In basic genetic algorithms, the fundamental representation of each classifier is a binary string, called a chromosome. The mapping from the chromosome to the features and other aspects of the classifier depends upon the problem domain, and the designer has great latitude in specifying this mapping. In pattern classification, the score is usually chosen to be some monotonic function of the accuracy on a data set, possibly with penalty term to avoid overfitting. We use a desired fitness, , as the stopping criterion. Before we discuss these points in more depth, we first consider more specifically the structure of the basic genetic algorithm, and then turn to the key notion of genetic operators, used in the algorithm. Algorithm 4 (Basic Genetic algorithm) 1 2 3 4 5 6 7 8 9 10 11 begin initialize , Pco , Pmut , L N -bit chromosomes do Determine fitness of each chromosome, fi , i = 1, . . . , L Rank the chromosomes do Select two chromosomes with highest score if Rand[0, 1) < Pco then crossover the pair at a randomly chosen bit else change each bit with probability Pmut Remove the parent chromosomes until N offspring have been created until Any chromosome's score f exceeds return Highest fitness chromosome (best classifier) end Figure 7.13 shows schematically the evolution of a population of classifiers given by Algorithm 4. Genetic operators There are three primary genetic operators that govern reproduction, i.e., producing offspring in the next generation described in lines 5 & 6 of Algorithm 4. The last two of these introduce variation into the chromosomes (Fig. 7.14): Replication: A chromosome is merely reproduced, unchanged. Crossover: Crossover involves the mixing -- "mating" -- of two chromosomes. A split point is chosen randomly along the length of either chromosome. The first part of chromosome A is spliced to the last part of chromosome B, and vice versa, thereby yielding two new chromosomes. The probability a given pair of chromosomes will undergo crossover is given by Pco in Algorithm 4. Mutation: Each bit in a single chromosome is given a small chance, Pmut , of being changed from a 1 to a 0 or vice versa. mating 28 CHAPTER 7. STOCHASTIC METHODS if f > then end N 11010100101000 01001110111011 11100010110100 00001110010100 11001010101010 00101100100100 11110100101011 10001001010001 11010110101000 11110101101001 score fi 15 11 29 36 54 73 22 92 84 27 rank survival of the fittest and reproduction 10001001010001 11010110101000 00101100100100 11001010101010 00001110010100 11100010110100 11110101101001 11110100101011 11010100101000 01001110111011 10001001010001 11010110101000 00101100100100 11001010101010 00001110010100 11100010110100 11110101101001 11110100101011 11010100101000 01001110111011 92 84 73 54 36 29 27 22 15 11 generation k parents generation k+1 offspring Figure 7.13: A basic genetic algorithm is a stochastic iterative search method. Each of the L classifiers in the population in generation k is represented by a string of bits of length N , called a chromosome (on the left). Each classifier is judged or scored according its performance on a classification task, giving L scalar values fi . The chromosomes are then ranked according to these scores. The chromosomes are considered in descending order of score, and operated upon by the genetic operators of replication, crossover and mutation to form the next generation of chromosomes -- the offspring. The cycle repeats until a classifier exceeds the criterion score . Other genetic operators may be employed, for instance inversion -- where the chromosome is reversed front to back. This operator is used only rarely since inverting a chromosome with a high score nearly always leads to one with very low score. Below we shall briefly consider another operator, insertions. Representation When designing a classifier by means of genetic algorithms we must specify the mapping from a chromosome to properties of the classifier itself. Such mapping will depend upon the form of classifier and problem domain, of course. One of the earliest and simplest approaches is to let the bits specify features (such as pixels in a character recognition problem) in a two-layer Perceptron with fixed weights (Chap. ??). The primary benefit of this particular mapping is that different segments of the chromosome, which generally remain undisturbed under the crossover operator, may evolve to recognize different portions of the input space such as the descender (lower) or the ascender (upper) portions of typed characters. As a result, occasionally the crossover operation will append a good segment for the ascender region in one chromosome to a good segment for the descender region in another, thereby yielding an excellent overall classifier. Another mapping is to let different segments of the chromosome represent the weights in a multilayer neural net with a fixed topology. Likewise, a chromosome could represent a network topology itself, the presence of an individual bit implying two particular neurons are interconnected. One of the most natural representations is for the bits to specify properties of a decision tree classifier (Chap. ??), as shown in Fig. 7.15. 7.5. *EVOLUTIONARY METHODS 29 parents (generation k) A 11010100101001010101111010100011111010010 B 00101100001010001010100001010110101001110 11010100101001010101111010100011111010010 11011100100001110101111110110011101010010 replication crossover mutation offspring (generation k+1) Figure 7.14: Three basic genetic operations are used to transform a population of chromosomes at one generation to form a new generation. In replication, the chromosome is unchanged. Crosi) = efi /T , E[efi /T ] (24) where the expectation is over the current generation and T is a control parameter loosely referred to as a temperature. Early in the evolution the temperature is set high, giving all chromosomes roughly equal probability of being selected. Late in the evolution the temperature is set lower so as to find the chromosomes in the region of the optimal classifier. We can express such search by analogy to biology: early in the search the population remains diverse and explores the fitness landscape in search of promising areas; later the population exploits the specific fitness opportunities in a small region of the space of possible classifiers. 7.5.2 Further heuristics There are many additional heuristics that can occasionally be of use. One concerns the adaptation of the crossover and mutation rates, Pco and Pmut . If these rates are too low, the average improvement from one generation to the next will be small, and the search unacceptably long. Conversely, if these rates are too high, the evolution is undirected and similar to a highly inefficient random search. We can monitor the average improvement in fitness of each generation and the mutation and crossover rates as long as such improvement is rapid. In practice, this is done by encoding the rates in the chromosomes themselves and allowing the genetic algorithm to select the proper values. Another heuristic is to use a ternary, or n-ary chromosomes rather than the traditional binary ones. These representations provide little or no benefit at the algorithmic level, but may make the mapping to the classifier itself more natural and easier to compute. For instance, a ternary chromosome might be most appropriate if the classifier is a decision tree with three-way splits. Occasionally the mapping to the classifier will work for chromosomes of different length. For example, if the bits in the chromosome specify weights in a neural network, then longer chromosomes would describe networks with a larger number of hidden units. In such a case we allow the insertion operator, which with a small probability inserts bits into the chromosome at a randomly chosen position. This so-called "messy" genetic algorithm method has a more appropriate counterpart in genetic programming, as we shall see in Sect. 7.6. insertion 7.5.3 Why do they work? Because there are many heuristics to choose as well as parameters to set, it is hard to make firm theoretical statements about building classifiers by means of evolutionary methods. The performance and search time depend upon the number of bits, the size of a population, the mutation and crossover rates, choice of features and mapping from chromosomes to the classifier itself, the inherent difficulty of the problem and possibly parameters associated with other heuristics. A genetic algorithm restricted to mere replication and mutation is, at base, a version of stochastic random search. The incorporation of the crossover operator, which mates two chromosomes, provides a qualitatively different search, one that has no counterpart in stochastic grammars (Chap. ??). Crossover works by finding, rewarding and recombining "good" segments of chromosomes, and the more faithfully the segments of the chromosomes represent such functional building blocks, the better 32 CHAPTER 7. STOCHASTIC METHODS we can expect genetic algorithms to perform. The only way to insure this is with prior knowledge of the problem domain and the desired form of classifier. 7.6 *Genetic Programming Genetic programming shares the same algorithmic structure of basic genetic algorithms, but differs in the representation of each classifier. Instead of chromosomes consisting of strings of bits, genetic programming uses snippets of computer programs made up of mathematical operators and variables. As a result, the genetic operators are somewhat different; moreover a new operator plays a significant role in genetic programming. The four principal operators in genetic programming are (Fig. 7.16): Replication: A snippet is merely reproduced, unchanged. mating Crossover: Crossover involves the mixing -- "mating" -- of two snippets. A split point is chosen from allowable locations in snippet A as well as from snippet B. The first part of snippet A is spliced to the back part of chromosome B, and vice versa, thereby yielding two new snippets. Mutation: Each bit in a single snippet is given a small chance of being changed to a different value. Such a change must be compatible with the syntax of the total snippet. For instance, a number can be replaced by another number; a mathematical operator that takes a single argument can be replaced by another such operator, and so forth. insertion Insertion: Insertion consists in replacing a single element in the snippet with another (short) snippet randomly chosen from a set. In the c-category problem, it is simplest to form c dichotomizers just as in genetic algorithms. If the output of the classifier is positive, the test pattern belongs to category i , if negative, then it is NOT in i . Representation A program must be expressed in some language, and the choice affects the complexity of the procedure. Syntactically rich languages such as C or C++ are complex and somwhat difficult to work with. Here the syntactic simplicity of a language such asLisp is advantageous. Many Lisp expressions can be written in the form (<operator> <operand> <operand>), where an <operand> can be a constant, a variable or another parenthesized expression. For example, (+ X 2) and (* 3 (+ Y 5)) are valid Lisp expressions for the arithmetic expressions x + 2 and 3(y + 5), respectively. These expressions are easily represented by a binary tree, with the operator being specified at the node and the operands appearing as the children (Fig. 7.17). Whatever language is used, genetic programming operators used for mutation should replace variables and constants with variables and constants, and operators with functionally compatible operators. They should aslo be required to produce syntactically valid results. Nevertheless, occassionally an ungrammatical code snippet may be produced. For that reason, it is traditional to employ a wrapper -- a routine that decides whether the classifier is meaningful, and eliminates them if not. wrapper 7.6. SUMMARY parents (generation k) A (OR (AND (NOT X0)(NOT X1))(AND X0 X1)) B (OR (AND (X2)(NOT X0))(AND X2 X0)) (OR (AND (NOT X0)(NOT X1))(AND X2 X0)) (OR (AND (X2)(NOT X0))(AND X0 X1)) (NOT X2) (OR (AND (NOT X0)(NOT X1))(AND X0 X1)) (OR (OR (NOT X1)(NOT X1))(AND X2 X1)) 33 (OR (AND (NOT X0)(NOT X1))(AND X0 X1)) (OR (AND (NOT X0)(NOT X1))(AND X0 X1)) (OR (AND (NOT X0)(NOT X1))(AND X0 X1)) (OR (AND (NOT X0)(NOT X1))(AND (NOT X2) X1)) replication crossover mutation insertion offspring (generation k+1) Figure 7.16: Four basic genetic operations are used to transform a population of snippets of code at one generation to form a new generation. In replication, the snippet is unchanged. Crossover involves the mixing or "mating" of two snippets to yield two new snippets. A position along the snippet A is randomly chosen from the allowable locations (red vertical line); likewise one is chosen for snippet B. Then the front portion of A is spliced to the back portion of B and vice versa. In mutation, each element is given a small chance of being changed. There are several different types of elements, and replacements must be of the same type. For instance, only a number can replace another number; only a numerical operator that takes a single argument can replace a similar operator, and so on. In insertion, a randomly selected element is replaced by a compatible snippet, keeping the entire snippet grammatically well formed and meaningful. It is nearly impossible to make sound theoretical statements about genetic programming and even the rules of thumb learned from simulations in one domain, such as control or function optimization are of little value in another domain, such as classification problems. Of course, the method works best in problems that are matched by the classifier representation, as simple operations such as multiplication, division, square roots, logical NOT, and so on. Nevertheless, we can state that as computation continues to decrease in cost, more of the burden of solving classification problems will be assumed by computation rather than careful analysis, and here techniques such as evolutionary ones will be of use in classification research. Summary When a pattern recognition problem involves a model that is discrete or of such high complexity that analytic or gradient descent methods are unlikely to work, we may employ stochastic techniques -- ones that at some level rely on randomness to find model parameters. Simulated annealing, based on physical annealing of metals, consists in randomly perturbing the system, and gradually decreasing the randomness to a low final level, in order to find an optimal solution. Boltzmann learning trains the weights in a network so that the probability of a desired final output is increased. Such learning is based on gradient descent in the Kullback-Liebler divergence between two distributions of visible states at the output units: one distribution describes these units when clamped at the known category information, and the other when they are free to assume values based on the activations throughout the network. Some 34 CHAPTER 7. STOCHASTIC METHODS + + * / + * parents (generation k) / * * / X0 X1 X2 X4 X3 X0 X2 X2 X0 X2 X4 X3 X4 X1 (- (+ (+ X0 X1) (* X2 (- X4 X0)))(+ (/ X3 X2)(* X2 X0))) (/ (* X2 X4) (* X3 (/ X4 X1))) + + * X4 X0 X1 X2 X4 X1 X2 X4 / * / * + * X3 / X0 X3 X2 X2 X0 (- (+ (+ X0 X1) (* X2 (- X4 X0)))(/ X4 X1)) (/ (* X2 X4) (* X3 (+ (/ X3 X2)(* X2 X0)))) offspring (generation k+1) Figure 7.17: Unlike the decision trees of Fig. 7.15 and Chap. ??, the trees shown here are merely a representation using the syntax of Lisp that implements a single function. x2 For instance, the upper-right (parent) tree implements x3 (x4x4 1 ) . Such functions are /x used with an implied threshold or sign function when used for classification. Thus the function will operate on the features of a test pattern and emit category i if the function is positive, and NOT i otherwise. graphical models, such as hidden Markov models and Bayes belief networks, have counterparts in structured Boltzmann networks, and this leads to new applications of Boltzmann learning. Search methods based on evolution -- genetic algorithms and genetic programming -- perform highly parallel stochastic searches in a space set by the designer. The fundamental representation used in genetic algorithms is a string of bits, or chromosome; the representation in genetic programming is a snippet of computer code. Variation is introduced by means of crossover, mutation and insertion. As with all classification methods, the better the features, the better the solution. There are many heuristics that can be employed and parameters that must be set. As the cost of computation contiues to decline, computationally intensive methods, such as Boltzmann networks and evolutionary methods, should become increasingly popular. 7.6. BIBLIOGRAPHICAL AND HISTORICAL REMARKS 35 Bibliographical and Historical Remarks The general problem of search is of central interest in computer science and artificial intelligence, and is far to expansive to treat here. Nevertheless, techniques such as depth first, breadth first, branch-and-bound, A* [19], occassionally find use in fields touching upon pattern recognition, and practitioners should have at least a passing knowledge of them. Good overviews can be found in [33] and a number of textbooks on artificial intelligence, such as [46, 67, 55]. For rigor and completeness, Knuth's book on the subject is without peer [32]. The infinite monkey theorem, attributed to Sir Arthur Eddington, states that if there is a sufficiently large number of monkeys typing at typewriters, eventually one will bang out the script to Hamlet. It reflects one extreme of the tradeoff between prior knowledge about the location of a solution on the one hand and the effort of search required to fit relatively little on pattern recognition. There are several collections of papers on evolutionary techniques in pattern recognition, including [48]. An intriguing effect due to the interaction of learning and evolution is the Baldwin effect, where learning can influence the rate of evolution [22]; it has been shown that too much learning (as well as too little learning) leads to slower evolution [28]. Evolutionary methods can lead to "non-optimal" or inelegant solutions, and there is computational evidence that this occurs in nature [61, 62]. Problems Section 7.1 1. One version of the infinite monkey theorem states that a single (immortal) monkey typing randomly will ultimately reproduce the script of Hamlet. Estimate the time needed for this, assuming the monkey can type two characters per second, that the play has 50 pages, each containing roughly 80 lines, and 40 characters per line. Assume there are 30 possible characters (a through z), space, period, exclamation point and carriage return. Compare this time to the estimated age of the universe, 1010 years. Section 7.2 2. Prove that for any optimization problem of the form of Eq. 1 having a nonsymmetric connection matrix, there is an equivalent optimization problem in which the matrix is replaced by its symmetric part. 3. The complicated energy landscape in the left of Fig. 7.2 is misleading for a number of reasons. (a) Discuss the difference between the continuous space shown in that figure with the discrete space for the true optimization problem. (b) The figure shows a local minimum near the middle of the space. Given the nature of the discrete space, are any states closer to any "middle"? (c) Suppose the axes referred to continuous variables si (as in mean-field annealing). If each si obeyed a sigmoid (Fig. 7.5), could the energy landscape be nonmonotonic, as is shown in Fig. 7.2? 4. Consider exhaustive search for the minimum of the energy given in Eq. 1 for binary units and arbitrary connections wij . Suppose that on a uniprocessor it takes 7.6. PROBLEMS 37 10-8 seconds to calculate the energy for each configuration. How long will it take to exhaustively search the space for N = 100 units? How long for N = 1000 units? 5. Suppose it takes a uniprocessor 10-10 seconds to perform a single multiplyaccumulate, wij si sj , in the calculation of the energy E = -1/2 wij si sj given in ij Eq. 1. (a) Make some simplifying assumptions and write a formula for the total time required to search exhaustively for the minimum energy in a fully connected network of N nodes. (b) Plot your function using a log-log scale for N = 1, . . . , 105 . (c) What size network, N , could be searched exhaustively in a day? A year? A century? 6. Make and justify any necessary mathematical assumptions and show analytically that at high temperature, every configuration in a network of N units interconnected by weights is equally likely (cf. Fig. 7.1). 7. Derive the exponential form of the Boltzmann factor in the following way. Consider an isolated set of M + N independent magnets, each of which can be in an si = +1 or si = -1 state. There is a uniform magnetic field applied and this means that the energy of the si = +1 state has some positive energy, which we can arbitrarily set to 1; the si = -1 state has energy -1. The total energy of the system is therefore the sum of the number pointing up, ku , minus the number pointing down, kd ; that is, ET = ku - kd . (Of course, ku + kd = M + N regardless of the total energy.) The fundamental statistical assumptions describing this system are that the magnets are independent, and that the probability a subsystem (viz., the N magnets), has a particular energy is proportional to the number of configurations that have this energy. (a) Consider the subsystem of N magnets, which has energy EN . Write an expression for the number of configurations K(N, EN ) that have energy EN . (b) As in part (a), write a general expression for the number of configurations in the subsystem M magnets at energy EM , i.e., K(M, EM ). (c) Since the two subsystems consist of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . CONTENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 42 44 44 46 56 60 66 Chapter 8 Non-metric Methods 8.1 Introduction e have considered pattern recognition based on feature vectors of real-valued numbers, and in all there has been a natural measure W and discrete-valued vectors. For instancecasesthe nearest-neighbor classifier the of distance between such in notion figures conspicuously -- indeed it is the core of the technique -- while for neural networks the notion of similarity appears when two input vectors sufficiently "close" lead to similar outputs. Most practical pattern recognition methods address problems of this sort, where feature vectors are real-valued and there exists some notion of metric. But suppose a classification problem involves nominal data -- for instance descriptions that are discrete and without any natural notion of similarity or even ordering. Consider the use of information about teeth in the classification of fish and sea mammals. Some teeth are small and fine (as in baleen whales) for straining tiny prey from the sea. Others (as in sharks) coming in multiple rows. Some sea creatures, such as walruses, have tusks. Yet others, such as squid, lack teeth altogether. There is no clear notion of similarity (or metric) for this information about teeth: it is meaningless to consider the teeth of a baleen whale any more similar to or different from the tusks of a walrus, than it is the distinctive rows of teeth in a shark from their absence in a squid, for example. Thus in this chapter our attention turns away from describing patterns by vectors of real numbers and towardusing lists of attributes. A common approach is to specify the values of a fixed number of properties by a property d-tuple For example, consider describing a piece of fruit by the four properties of color, texture, taste and smell. Then a particular piece of fruit might be described by the 4-tuple {red, shiny, sweet, small}, which is a shorthand for color = red, texture = shiny, taste = sweet and size = small. Another common approach is to describe the pattern by a variable length string of nominal attributes, such as a sequence of base pairs in a segment of DNA, e.g., "AGCTTCAGATTCCA." Such lists or strings might be themselves the output of other component classifiers of the type we have seen elsewhere. For instance, we might train a neural network to recognize different component brush nominal data property d-tuple string We often put strings between quotation marks, particularly if this will help to avoid ambiguities. 3 4 CHAPTER 8. NON-METRIC METHODS strokes used in Chinese and Japanese characters (roughly a dozen basic forms); a classifier would then accept as inputs a list of these nominal attributes and make the final, full character classification. How can we best use such nominal data for classification? Most importantly, how can we efficiently learn categories using such non-metric data? If there is structure in strings, how can it be represented? In considering such problems, we move beyond the notion of continuous probability distributions and metrics toward discrete problems that are addressed by rule-based or syntactic pattern recognition methods. 8.2 Decision trees root node link branch leaf descendent sub-tree It is natural and intuitive to classify a pattern through a sequence of questions, in which the next question asked depends on the answer to the current question. This "20-questions" approach is particularly useful for non-metric data, since all of the questions can be asked in a "yes/no" or "true/false"or "value(property) set of values" style that does not require any notion of metric. Such a sequence of questions is displayed in a directed decision tree or simply tree, where by convention the first or root node is displayed at the top, connected by successive (directional) links or branches to other nodes. These are similarly connected until we reach terminal or leaf nodes, which have no further links (Fig. 8.1). Sections 8.3 & 8.4 describe some generic methods for creating such trees, but let us first understand how they are used for classification. The classification of a particular pattern begins at the root node, which asks for the value of a particular property of the pattern. The different links from the root node corresopnd to the different possible values. Based on the answer we follow the appropriate link to a subsequent or descendent node. In the trees we shall discuss, the links must be mutually distinct and exhaustive, i.e., one and only one link will be followed. The next step is to make the decision at the appropriate subsequent node, which can be considered the root of a sub-tree. We continue this way until we reach a leaf node, which has no further question. Each leaf node bears a category label and the test pattern is assigned the category of the leaf node reached. The simple decision tree in Fig. 8.1 illustrates one benefit of trees over many other classifiers such as neural networks: interpretability. It is a straightforward matter to render the information in such a tree as logical expressions. Such interpretability has two manifestations. First, we can easily interpret the decision for any particular test pattern as the conjunction of decisions along the path to its corresponding leaf node. Thus if the properties are {taste, color, shape, size}, the pattern x = {sweet, yellow, thin, medium} is classified as Banana because it is (color = yellow) AND (shape = thin). Second, we can occasionally get clear interpretations of the categories themselves, by creating logical descriptions using conjunctions and disjunctions (Problem 8). For instance the tree shows Apple = (green AND medium) OR (red AND medium). Rules derived from trees -- especially large trees -- are often quite complicated and must be reduced to aid interpretation. For our example, one simple rule describes Apple = (medium AND NOT yellow). Another benefit of trees is that they lead to We retain our convention of representing patterns in boldface even though they need not be true vectors, i.e., they might contain nominal data that cannot be added or multiplied the way vector components can. For this reason we use the terms "attribute" to represent both nominal data and real-valued data, and reserve "feature" for real-valued data. 8.3. CART 5 root Color? green yellow red level 0 Size? medium Shape? small round thin Size? medium small level 1 big Watermelon Apple Grape big Size? Banana Apple Taste? sour Grape level 2 small sweet Cherry Grapefruit Lemon level 3 Figure 8.1: Classification in a basic decision tree proceeds from top to bottom. The questions asked at each node concern a particular property of the pattern, and the downward links correspond to the possible values. Successive nodes are visited until a terminal or leaf node is reached, where the category label is read. Note that the same question, Size?, appears in different places in the tree, and that different questions can have different numbers of branches. Moreover, different leaf nodes, shown in pink, can be labeled by the same category (e.g., Apple). rapid classification, employing a sequence of typically simple queries. Finally, we note that trees provide a natural way to incorporate prior knowledge from human experts. In practice, though, such expert knowledge if of greatest use when the classification problem is fairly simple and the training set is small. 8.3 CART Now we turn to the matter of using training data to create or "grow" a decision tree. We assume that we have a set D of labeled training data and we have decided on a set of properties that can be used to discriminate patterns, but do not know how to organize the tests into a tree. Clearly, any decision tree will progressively split the set of training examples into smaller and smaller subsets. It would be ideal if all the samples in each subset had the same category label. In that case, we would say that each subset was pure, and could terminate that portion of the tree. Usually, however, there is a mixture of labels in each subset, and thus for each branch we will have to decide either to stop splitting and accept an imperfect decision, or instead select another property and grow the tree further. This suggests an obvious recursive tree-growing process: given the data represented at a node, either declare that node to be a leaf (and state what category to assign to it), or find another property to use to split the data into subsets. However, this is only one example of a more generic tree-growing methodology know as CART (Classification and Regression Trees). CART provides a general framework that can be instatiated in various ways to produce different decision trees. In the CART approach, six general kinds of questions arise: 1. Should the properties be restricted to binary-valued or allowed to be multi- 6 split CHAPTER 8. NON-METRIC METHODS valued? That is, how many decision outcomes or splits will there be at a node? 2. Which property should be tested at a node? 3. When should a node be declared a leaf? 4. If the tree becomes "too large," how can it be made smaller and simpler, i.e., pruned? 5. If a leaf node is impure, how should the category label be assigned? 6. How should missing data be handled? We consider each of these questions in turn. 8.3.1 Number of splits branching factor Each decision outcome at a node is called a split, since it corresponds to splitting a subset of the training data. The root node splits the full training set; each successive decision splits a proper subset of the data. The number of splits at a node is closely related to question 2, specifying which particular split will be made at a node. In general, the number of splits is set by the designer, and could vary throughout the tree, as we saw in Fig. 8.1. The number of links descending from a node is sometimes called the node's branching factor or branching ratio, denoted B. However, every decision (and hence every tree) can be represented using just binary decisions (Problem 2). Thus the root node querying fruit color (B = 3) in our example could be replaced by two nodes: the first would ask fruit = green?, and at the end of its "no" branch, another node would ask fruit = yellow?. Because of the universal expressive power of binary trees and the comparative simplicity in training, we shall concentrate on such trees (Fig. 8.2). 8.3.2 Test selection and node impurity purity Much of the work in designing trees focuses on deciding which property test or query should be performed at each node. With non-numeric data, there is no geometrical interpretation of how the test at a node splits the data. However, for numerical data, there is a simple way to visualize the decision boundaries that are produced by decision trees. For example, suppose that the test at each node has the form "is xi xis ?" This leads to hyperplane decision boundaries that are perpendicular to the coordinate axes, and to decision regions of the form illustrated in Fig. 8.3. The fundamental principle underlying tree creation is that of simplicity: we prefer decisions that lead to a simple, compact tree with few nodes. This is a version of Occam's razor, that the simplest model that explains data is the one to be preferred (Chap. ??). To this end, we seek a property test T at each node N that makes the data reaching the immediate descendent nodes as "pure" as possible. In formalizing this notion, it turns out to be more conveninet to define the impurity, rather than The problem is further complicated by the fact that there is no reason why the test at a node has to involve only one property. One might well consider logical combinations of properties, such as using (size = medium) AND (NOT (color = yellow))? as a test. Trees in which each test is based on a single property are called monothetic; if the query at any of the nodes involves two or more properties, the tree is called polythetic. For simplicity, we generally restrict our treatment to monothetic trees. In all cases, the key requirement is that the decision at a node be well-defined and unambiguous so that the response leads down one and only one branch. 8.3. CART color = Green? yes no 7 size = big? yes no yes color = yellow? no Watermelon size = medium? yes no shape = round? yes no size = small? yes no Apple Grape size = big? yes no Banana taste = sweet? yes no Apple Grapefruit Lemon Cherry Grape Figure 8.2: A tree with arbitrary branching factor at different nodes can always be represented by a functionally equivalent binary tree, i.e., one having branching factor B = 2 throughout. By convention the "yes" branch is on the left, the "no" branch on the right. This binary tree contains the same information and implements the same classification as that in Fig. 8.1. the purity of a node. Several different mathematical measures of impurity have been proposed, all of which have basically the same behavior. Let i(N ) denote the impurity of a node N . In all cases, we want i(N ) to be 0 if all of the patterns that reach the node bear the same category label, and to be large if the categories are equally represented. The most popular measure is the entropy impurity (or occasionally information impurity): i(N ) = - j entropy impurity P (j ) log2 P (j ), (1) where P (j ) is the fraction of patterns at node N that are in category j . By the well-known properties of entropy, if all the patterns are of the same category, the impurity is 0; otherwise it is positive, with the greatest value occuring when the different classes are equally likely. Another definition of impurity is particularly useful in the two-category case. Given the desire to have zero impurity when the node represents only patterns of a single category, the simplest polynomial form is: i(N ) = P (1 )P (2 ). (2) variance impurity This can be interpreted as a variance impurity since under reasonable assumptions it ^ Here we are a bit sloppy with notation, since we normally reserve P for probability and P for ^ frequency ratios. We could be even more precise by writing P (x j |N ) -- i.e., the fraction of training patterns x at node N that are in category j , given that they have survived all the previous decisions that led to the node N -- but for the sake of simplicity we sill avoid such notational overhead. 8 CHAPTER 8. NON-METRIC METHODS x2 R1 R2 x3 R1 R2 R1 R2 R1 x2 R1 R2 R1 R2 x1 x1 Figure 8.3: Monothetic decision trees create decision boundaries with portions perpendicular to the feature axes. The decision regions are marked R1 and R2 in these two-dimensional and three-dimensional two-category examples. With a sufficiently large tree, any decision boundary can be approximated arbitrarily well. is related to the variance of a distribution associated with the two categories (Problem 10). A generalization of the variance impurity, applicable to two or more categories, is the Gini impurity: i(N ) = i=j Gini impurity P (i )P (j ) = 1 - j P 2 (j ). (3) misclassification impurity This is just the expected error rate at node N if the category label is selected randomly from the class distribution present at N . This criterion is more strongly peaked at equal probabilities than is the entropy impurity (Fig. 8.4). The misclassification impurity can be written as i(N ) = 1 - max P (j ), j (4) and measures the minimum probability that a training pattern would be misclassified at N . Of the impurity measures typically considered, this measure is the most strongly peaked at equal probabilities. It has a discontinuous derivative, though, and this can present problems when searching for an optimal decision over a continuous parameter space. Figure 8.4 shows these impurity functions for a two-category case, as a function of the probability of one of the categories. We now come to the key question -- given a partial tree down to node N , what value s should we choose for the property test T ? An obvious heuristic is to choose the test that decreases the impurity as much as possible. The drop in impurity is defined by i(N ) = i(N ) - PL i(NL ) -hen the misclassification remains at 0.1 for all splits. Now consider a split which sends 70 1 patterns to the right along with 0 2 patterns, and sends 20 1 and 10 2 to the left. This is an attractive split but the misclassification impurity is still 0.1. On the other hand, the Gini impurity for this split is less than the Gini for the parent node. In short, the Gini impurity shows that this as a good split while the misclassification rate does not. In multiclass binary tree creation, the twoing criterion may be useful. The overall goal is to find the split that best splits groups of the c categories, i.e., a candidate "supercategory" C1 consisting of all patterns in some subset of the categories, and candidate "supercategory" C2 as all remaining patterns. Let the class of categories be C = {1 , 2 , . . . , c }. At each node, the decision splits the categories into C1 = {i1 , i2 , . . . , ik } and C2 = C - C1 . For every candidate split s, we compute a change in impurity i(s, C1 ) as though it corresponded to a standard two-class problem. That is, we find the split s (C1 ) that maximizes the change in impurity. Finally, we find the supercategory C1 which maximizes i(s (C1 ), C1 ). The benefit of this impurity is that it is strategic -- it may learn the largest scale structure of the overall problem (Problem 4). It may be surprising, but the particular choice of an impurity function rarely seems to affect the final classifier and its accuracy. An entropy impurity is frequently used because of its computational simplicity and basis in information theory, though the Gini impurity has received significant attention as well. In practice, the stopping criterion and the pruning method -- when to stop splitting nodes, and how to merge leaf nodes -- are more important than the impurity function itself in determining final classifier accuracy, as we shall see. Multi-way splits Although we shall concentrate on binary trees, we briefly mention the matter of allowing the branching ratio at each node to be set during training, a technique will return to in a discussion of the ID3 algorithm (Sect. 8.4.1). In such a case, it is tempting to use a multi-branch generalization of Eq. 5 of the form B i(s) = i(N ) - k=1 Pk i(Nk ), (6) where Pk is the fraction of training patterns sent down the link to node Nk , and B Pk = 1. However, the drawback with Eq. 6 is that decisions with large B are k=1 inherently favored over those with small B whether or not the large B splits in fact represent meaningful structure in the data. For instance, even in random data, a The twoing criterion is not a true impurity measure. 8.3. CART 11 high-B split will reduce the impurity more than will a low-B split. To avoid this drawback, the candidate change in impurity of Eq. 6 must be scaled, according to iB (s) = - i(s) B . (7) Pk log2 Pk k=1 a method based on the gain ratio impurity (Problem 17). Just as before, the optimal split is the one maximizing iB (s). gain ratio impurity 8.3.3 When to stop splitting Consider now the problem of deciding when to stop splitting during the training of a binary tree. If we continue to grow the tree fully until each leaf node corresponds to the lowest impurity, then the data has typically been overfit (Chap. ??). In the extreme but rare case, each leaf corresponds to a single training point and the full tree is merely a convenient implementation of a lookup table; it thus cannot be expected to generalize well in (noisy) problems having high Bayes error. Conversely, if splitting is stopped too early, then the error on the training data is not sufficiently low and hence performance may suffer. How shall we decide when to stop splitting? One traditional approach is to use techniques of Chap. ??, in particular cross-validation. That is, the tree is trained using a subset of the data (for instance 90%), with the remaining (10%) kept as a validation set. We continue splitting nodes in successive layers until the error on the validation data is minimized. Another method is to set a (small) threshold value in the ry or confidence level, such as .01 or .05. The critical values of the confidence depend upon the number of degrees of freedom, which in the case just described is 1, since for a given probability P the single value n1L specifies all other values (n1R , n2L and n2R ). If the "most significant" split at a node does not yield a 2 exceeding the chosen confidence level threshold, splitting is stopped. 8.3.4 horizon effect Pruning Occassionally, stopped splitting suffers from the lack of sufficient look ahead, a phenomenon called the horizon effect. The determination of the optimal split at a node N is not influenced by decisions at N 's descendent nodes, i.e., those at subsequent levels. In stopped splitting, node N might be declared a leaf, cutting off the possibility of beneficial splits in subsequent nodes; as such, a stopping condition may be met "too early" for overall optimal recognition accuracy. Informally speaking, the stopped splitting biases the learning algorithm toward trees in which the greatest impurity reduction is near the root node. The principal alternative approach to stopped splitting is pruning. In pruning, a tree is grown fully, that is, until leaf nodes have minimum impurity -- beyond any 8.3. CART 13 putative "horizon." Then, all pairs of neighboring leaf nodes (i.e., ones linked to a common antecedent node, one level above) are considered for elimination. Any pair whose elimination yields a satisfactory (small) increase in impurity is eliminated, and the common antecedent node declared a leaf. (This antecedent, in turn, could itself be pruned.) Clearly, such merging or joining of the two leaf nodes is the inverse of splitting. It is not unusual that after such pruning, the leaf nodes lie in a wide range of levels and the tree is unbalanced. Although it is most common to prune starting at the leaf nodes, this is not necessary: cost-complexity pruning can replace a complex subtree with a leaf directly. Further, C4.5 (Sect. 8.4.2) can eliminate an arbitrary test node, thereby replacing a subtree by one of its branches. The benefits of pruning are that it avoids the horizon effect; further, since there is no training data held out for cross-validation, it directly uses all information in the training set. Naturally, this comes at a greater computational expense than stopped splitting, and for problems with large training sets, the expense can be prohibitive (Computer exercise ??). For small problems, though, these computational costs are low and pruning is generally to be preferred over stopped splitting. Incidentally, what we have been calling stopped training and pruning are sometimes called pre-pruning and post-pruning, respectively. A conceptually different pruning method is based on rules. Each leaf has an associated rule -- the conjunction of the individual decisions from the root node, through the tree, to the particular leaf. Thus the full tree can be described by a large list of rules, one for each leaf. Occasionally, some of these rules can be simplified if a series of decisions is redundant. Eliminating the irrelevant precondition rules simplifies the description, but has no influence on the classifier function, including its generalization ability. The predominant reason to prune, however, is to improve generalization. In this case we therefore eliminate rules so as to improve accuracy on a validation set (Computer exercise 6). This technique may even allow the elimination of a rule corresponding to a node near the root. One of the benefits of rule pruning is that it allows us to distinguish between the contexts in which any particular node N is used. For instance, for some test pattern x1 the decision rule at node N is necessary; for another test pattern x2 that rule is irrelevant and thus N could be pruned. In traditional node pruning, we must either keep N or prune it away. In rule pruning, however, we can eliminate it where it is not necessary (i.e., for patterns such as x1 ) and retain it for others (such as x2 ). A final benefit is that the reduced rule set may give improved interpretability. Although rule pruning was not part of the original CART approach, such pruning can be easily applied to CART trees. We shall consider an example of rule pruning in Sect. 8.4.2. merging 8.3.5 Assignment of leaf node labels Assigning category labels to the leaf nodes is the simplest step in tree construction. If successive nodes are split as far as possible, and each leaf node corresponds to patterns in a single category (zero impurity), then of course this category label is assigned to the leaf. In the more typical case, where either stopped splitting or pruning is used and the leaf nodes have positive impurity, each leaf should be labeled by the category that has most points represented. An extremely small impurity is not necessarily desirable, since it may be an indication that the tree is overfitting the training data. Example 1 illustrates some of these steps. 14 CHAPTER 8. NON-METRIC METHODS Example 1: A simple tree classifier Consider the following n = 16 points in two dimensions for training a binary CART tree (B = 2) using the entropy impurity (Eq. 1). 1 (black) x1 x2 .15 .83 .09 .55 .29 .35 .38 .70 .52 .48 .57 .73 .73 .75 .47 .06 1 x2 x1 < 0.6 .8 x1 .10 .08 .23 .70 .62 .91 .65 .75 2 (red) x2 .29 .15 .16 .19 .47 .27 .90 .36* (.32 ) 1.0 R1 .88 .6 x2 < 0.32 x2 < 0.61 .65 R1 .4 R2 * R2 R1 x1 .2 .4 .6 .8 1 x2 .81 x1 < 0.35 1 2 x1 < 0.69 1.0 .2 2 1 2 1 0 1 x2 < 0.33 .8 1.0 .76 .92 R1 .6 R2 .59 R1 x2 < 0.09 x1 < 0.6 1 2 1 R2 R1 x1 < 0.69 .4 2 1 .2 0 .2 .4 .6 .8 1 x1 Training data and associated (unpruned) tree are shown at the top. The entropy impurity at non-terminal nodes is shown in red and the impurity at each leaf is 0. If the single training point marked * were instead slightly lower (marked ), the resulting tree and decision regions would differ significantly, as shown at the bottom. 8.3. CART The impurity of the root node is 2 15 i(Nroot ) = - i=1 P (i )log2 P (i ) = -[.5log2 .5 + .5log2 .5] = 1.0. For simplicity we consider candidate splits parallel to the feature axes, i.e., of the form "is xi < xis ?". By exhaustive search of the n - 1 positions for the x1 feature and n - 1 positions for the x2 feature we find by Eq. 5 that the greatest reduction in the impurity occurs near x1s = 0.6, and hence this becomes the decision criterion at the root node. We continue for each sub-tree until each final node represents a single category (and thus has the lowest impurity, 0), as shown in the figure. If pruning were invoked, the pair of leaf nodes at the left would be the first to be deleted (gray shading) since there the impurity is increased the least. In this example, stopped splitting with the proper threshold would also give the same final network. In general, however, with large trees and many pruning steps, pruning and stopped splitting need not lead to the same final tree. This particular training set shows how trees can be sensitive to details of the training points. If the 2 point marked * in the top figure is moved slightly (marked ), the tree and decision regions differ significantly, as shown at the bottom. Such instability is due in large part to the discrete nature of decisions early in the tree learning. Example 1 illustrates the informal notion of instability or sensitivity to training points. Of course, if we train any common classifier with a slightly different training set the final classification decisions will differ somewhat. If we train a CART classifier, however, the alteration of even a single training point can lead to radically different decisions overall. This is a consequence of the discrete and inherently greedy nature of such tree creation. Instability often indicates that incremental and off-line versions of the method will yield significantly different classifiers, even when trained on the same data. stability 8.3.6 Computational complexity Suppose we have n training patterns in d dimensions in a two-category problem, and wish to construct a binary tree based on splits parallel to the feature axes using an entropy impurity. What are the time and the space complexities? At the root node (level 0) we must first sort the training data, O(nlogn) for each of the d features or dimensions. The entropy calculation is O(n) + (n - 1)O(d) since we examine n - 1 possible splitting points. Thus for the root node the time complexity is O(dnlogn). Consider an average case, where roughly half the training points are sent to each of the two branches. The above analysis implies that splitting each node in level 1 has complexity O(d n/2 log(n/2)); since there are two such nodes at that level, the total complexity is O(dnlog(n/2)). Similarly, for the level 2 we have O(dnlog(n/4)), and so on. The total number of levels is O(log n). We sum the terms for the levels and find that the total average time complexity is O(dn (log n)2 ). The time complexity for recall is just the depth of the tree, i.e., the total number of levels, is O(log n). The space complexity is simply the number of nodes, which, given some simplifying assumptions (such as a single training point per leaf node), is 1 + 2 + 4 + ... + n/2 n, that is, O(n) (Problem 9). 16 CHAPTER 8. NON-METRIC METHODS We stress that these assumptions (for instance equal splits at each node) rarely hold exactly; moreover, heuristics can be used to speed the search for splits during training. Nevertheless, the result that for fixed dimension d the training is O(dn2 log n) and classification O(log n) is a good rule of thumb; it illustrates how training is far more computationally expensive than is classification, and that on average this discrepancy grows as the problem gets larger. There are several techniques for reducing the complexity during the training of trees based on real-valued data. One of the simplest heuristics is to begin the search for splits xis at the "middle" of the range of the training set, moving alternately to progressively higher and lower values. Optimal splits always occur for decision thresholds between adjacent points from different categories and thus one should test only such ranges. These and related techniques generally provide only moderate reductions in computation (Computer exercise ??). When the patterns consist of nominal data, candidate splits could be over every subset of attributes, or just a single entry, and the computational burden is best lowered using insight into features (Problem 3). 8.3.7 Feature choice As with most pattern recognition techniques, CART and other tree-based methods work best if the "proper" features are used (Fig. 8.5). For real-valued vector data, most standard preprocessing techniques can be used before creating a tree. Preprocessing by principal components (Chap. ??) can be effective, since it finds the "important" axes, and this generally leads to simple decisions at the nodes. If however the principal axes in one region differ significantly from those in another region, then no single choice of axes overall will suffice. In that case we may need to employ the techniques of Sect. 8.3.8, for instance allowing splits to be at arbitrary orientation, often giving smaller and more compact trees. 8.3.8 Multivariate decision trees If the "natural" splits of real-valued data do not fall parallel to the feature axes or the full training data set differs significantly from simple or accommodating distributions, then the above methods may be rather inefficient and lead to poor generalization (Fig. 8.6); even pruning may be insufficient to give a good classifier. The simplest solution is to allow splits that are not parallel to the feature axes, such as a general linear classifier trained via gradient descent on a classification or sum-squared-error criterion (Chap. ??). While such training may be slow for the nodes near the root if the training set is large, training will be faster at nodes closer to the leafs since less training data is used. Recall can remain quite fast since the linear functions at each node can be computed rapidly. 8.3.9 Priors and costs Up to now we have tacitly assumed that a category i is represented with the same frequency in both the training and the test data. If this is not the case, we need a method for controlling tree creation so as to have lower error on the actual final classification task when the frequencies are different. The most direct method is to "weight" samples to correct for the prior frequencies (Problem 16). Furthermore, we may seek to minimize a general cost, rather than a strict misclassification or 0-1 8.3. CART 17 x2 x1 < 0.27 1 R1 x2 < 0.32 x2 < 0.6 .8 x1 < 0.07 .6 1 2 1 x1 < 0.55 .4 R2 1 2 x2 < 0.86 .2 2 x1 .2 x2 .4 .6 .8 1 x1 < 0.81 0 1 - 1.2 x1 + x2 < 0.1 2 1 R1 .8 2 R2 1 .6 .4 .2 0 .2 .4 .6 .8 1 x1 Figure 8.5: If the class of node decisions does not match the form of the training data, a very complicated decision tree will result, as shown at the top. Here decisions are parallel to the axes while in fact the data is better split by boundaries along another direction. If however "proper" decision forms are used (here, linear combinations of the features), the tree can be quite simple, as shown at the bottom. cost. As in Chap. ??, we represent such information in a cost matrix ij -- the cost of classifying a pattern as i when it is actually j . Cost information is easily incorporated into a Gini impurity, giving the following weighted Gini impurity, weighted Gini impurity i(N ) = ij ij P (i )P (j ), (10) which should be used during training. Costs can be incorporated into other impurity measures as well (Problem 11). 18 x2 CHAPTER 8. NON-METRIC METHODS 1 x2 < 0.5 0.8 R1 0.6 x1 < 0.95 x2 < 0.56 R2 R1 R2 0.4 2 1 1 x2 < 0.54 1 2 0.2 0 0.2 0.4 0.6 0.8 1 x1 x2 0.04 x1 + 0.16 x2 < 0.11 1 0.8 0.27 x1 - 0.44 x2 < -0.02 1 R1 0.6 0.96 x1 - 1.77x2 < -0.45 2 0.4 R2 0.2 5.43 x1 - 13.33 x2 < -6.03 2 0 0.2 0.4 0.6 0.8 1 x1 1 2 Figure 8.6: One form of multivariate tree employs general linear decisions at each node, giving splits along arbitrary directions in the feature space. In virtually all interesting cases the training data is not linearly separable, and thus the LMS algorithm is more useful than methods that require the data to be linearly separable, even though the LMS need not yield a minimum in classification error (Chap. ??). The tree at the bottom can be simplified by methods outlined in Sect. 8.4.2. 8.3.10 Missing attributes deficient pattern Classification problems might have missing attributes during training, during classification, or both. Consider first training a tree classifier despite the fact that some training patterns are missing attributes. A naive approach would be to delete from consideration any such deficient patterns; however, this is quite wasteful and should be employed only if there are many complete patterns. A better technique is to proceed as otherwise described above (Sec. 8.3.2), but instead calculate impurities at a node N using only the attribute information present. Suppose there are n training points at N and that each has three attributes, except one pattern that is missing attribute x3 . To find the best split at N , we calculate possible splits using all n points using attribute x1 , then all n points for attribute x2 , then the n - 1 non-deficient points for attribute x3 . Each such split has an associated reduction in impurity, calculated as before, though here with different numbers of patterns. As always, the desired split is the one which gives the greatest decrease in impurity. The generalization of this procedure to more features, to multiple patterns with missing attributes, and even to 8.3. CART 19 patterns with several missing attributes is straightforward, as is its use in classifying non-deficient patterns (Problem 14). Now consider how to create and use trees that can classify a deficient pattern. The trees described above cannot directly handle test patterns lacking attributes (but see Sect. 8.4.2), and thus if we suspect that such deficient test patterns will occur, we must modify the training procedure discussed in Sect. 8.3.2. The basic approach during classification is to use the traditional ("primary") decision at a node whenever possible (i.e., when the queries involves a feature that is present in the deficient test pattern) but to use alternate queries whenever the test pattern is missing that feature. During training then, in addition to the primary split, each non-terminal node N is given an ordered set of surrogate splits, consisting of an attribute label and a rule. The first such surrogate split maximizes the "predictive association" with the primary split. A simple measure of the predictive association of two splits s1 and s2 is merely the numerical count of patterns that are sent to the "left" by both s1 and s2 plus the count of the patterns sent to the "right" by both the splits. The second surrogate split is defined similarly, being the one which uses another feature and best approximates the primary split in this way. Of course, during classification of a deficient test pattern, we use the first surrogate split that does not involve the test pattern's missing attributes. This missing value strategy corresponds to a linear model replacing the pattern's missing value by the value of the non-missing attribute most strongly correlated with it (Problem ??). This strategy uses to maximum advantage the (local) associations among the attributes to decide the split when attribute values are missing. A method closely related to surrogate splits is that of virtual values, in which the missing attribute is assigned its most likely value. Example 2: Surrogate splits and missing attributes Consider the creation of a monothetic tree using an entropy impurity and the following ten training points. Since the tree will be used to classify test patterns with missing features, we will give each node surrogate splits. x1 x2 x3 x4 x5 0 1 2 4 5 7 , 8 , 9 , 1 , 2 8 9 0 1 2 y1 y2 y3 y4 y5 3 6 7 8 9 3 , 0 , 4 , 5 , 6 . 3 4 5 6 7 surrogate split predictive association virtual value 1 : 2 : Through exhaustive search along all three features, we find the primary split at the root node should be "x1 < 5.5?", which sends {x1 , x2 , x3 , x4 , x5 , y1 } to the left and {y2 , y3 , y4 , y5 } to the right, as shown in the figure. We now seek the first surrogate split at the root node; such a split must be based on either the x2 or the x3 feature. Through exhaustive search we find that the split "x3 < 3.5?" has the highest predictive association with the primary split -- a value of 8, since 8 patterns are sent to matching directions by each rule, as shown in the figure. The second surrogate split must be along the only remaining feature, x2 . We find that for this feature the rule "x2 < 3.5?" has the highest predictive association 20 CHAPTER 8. NON-METRIC METHODS with the primary split, a value of 6. (This, incidentally, is not the optimal x2 split for impurity reduction -- we use it because it best approximates the preferred, primary split.) While the above describes the training of the root node, training of other nodes is conceptually the same, though computationally less complex because fewer points need be considered. primary split x1<5.5? first surrogate split x3<3.5? second surrogate split x2<3.5? x1, x2, x3, x4, x5, y1 y2, y3, y4, y5 x3, x4, x5, y1 y2, y3, y4, y5, x1, x2 predictive association with primary split = 8 x4, x5, y1, y3, y4, y5, y2 x1, x2, x3 predictive association with primary split = 6 Of all possible splits based on a single feature, the primary split, "x1 < 5.5?", minimizes the entropy impurity of the full training set. The first surrogate split at the root node must use a feature other than x1 ; its threshold is set in order to best approximate the action of the primary split. In this case "x3 < 3.5?" is the first surrogate split. Likewise, here the second surrogate split must use the x2 feature; its threshold is chosen to best approximate the action of the primary split. In this case "x2 < 3.5?" is the second surrogate split. The pink shaded band marks those patterns sent to the matching direction as the primary split. The number of patterns in the shading is thus the predictive association with the primary split. During classification, any test pattern containing feature x1 would be queried using the primary split, "x1 5.5?" Consider though the deficient test pattern (, 2, 4)t , where * is the missing x1 feature. Since the primary split cannot be used, we turn instead to the first surrogate split, "x3 3.5?", which sends this point to the right. Likewise, the test pattern (, 2, )t would be queried by the second surrogate split, "x2 3.5?", and sent to the left. Sometimes the fact that an attribute is missing can be informative. For instance, in medical diagnosis, the fact that an attribute (such as blood sugar level) is missing might imply that the physician had some reason not to measure it. As such, a missing attribute could be represented as a new feature, and used in classification. 8.4 Other tree methods Virtually all tree-based classification techniques can incorporate the fundamental techniques described above. In fact that discussion expanded beyond the core ideas in the earliest presentations of CART. While most tree-growing algorithms use an entropy impurity, there are many choices for stopping rules, for pruning methods and for the treatment of missing attributes. Here we discuss just two other popular tree algorithms. 8.4.1 ID3 ID3 received its name because it was the third in a series of identification or "ID" procedures. It is intended for use with nominal (unordered) inputs only. If the problem 8.4. OTHER TREE METHODS 21 involves real-valued variables, they are first binned into intervals, each interval being treated as an unordered nominal attribute. Every split has a branching factor Bj , where Bj is the number of discrete attribute bins of the variable j chosen for splitting. In practice these are seldom binary and thus a gain ratio impurity should be used (Sect. 8.3.2). Such trees have their number of levels equal to the number of input variables. The algorithm continues until all nodes are pure or there are no more variables to split on. While there is thus no pruning in standard presentations of the ID3 algorithm, it is straightforward to incorporate pruning along the ideas presented above (Computer exercise 4). 8.4.2 C4.5 The C4.5 algorithm, the successor and refinement of ID3, is the most popular in a series of "classification" tree methods. In it, real-valued variables are treated the same as in CART. Multi-way (B > 2) splits are used with nominal data, as in ID3 with a gain ratio impurity based on Eq. 7. The algorithm uses heuristics for pruning derived based on the statistical significance of splits. A clear difference between C4.5 and CART involves classifying patterns with missing features. During training there are no special accommodations for subsequent classification of deficient patterns in C4.5; in particular, there are no surrogate splits precomputed. Instead, if node N with branching factor B queries the missing feature in a deficient test pattern, C4.5 follows all B possible answers to the descendent nodes and ultimately B leaf nodes. The final classification is based on the labels of the B leaf nodes, weighted by the decision probabilities at N . (These probabilities are simply those of decisions at N on the training data.) Each of N 's immediate descendent nodes can be considered the root of a sub-tree implementing part of the full classification model. This missing-attribute scheme corresponds to weighting these sub-models by the probability any training pattern at N would go to the corresponding outcome of the decision. This method does not exploit statistical correlations between different features of the training points, whereas the method of surrogate splits in CART does. Since C4.5 does not compute surrogate splits and hence does not need to store them, this algorithm may be preferred over CART if space complexity (storage) is a major concern. The C4.5 algorithm has the provision for pruning based on the rules derived from the learned tree. Each leaf node has an associated rule -- the conjunction of the decisions leading from the root node, through the tree, to that leaf. A technique called C4.5Rules deletes redundant antecedents in such rules. To understand this, consider the left-most leaf in the tree at the bottom of Fig. 8.6, which corresponds to the rule IF AN D AN D AN D T HEN (0.40x1 + 0.16x2 (0.27x1 - 0.44x2 (0.96x1 - 1.77x2 (5.43x1 - 13.33x2 x 1 . < 0.11) < -0.02) < -0.45) < -6.03) C4.5Rules This rule can be simplified to give IF ( 0.40x1 + 0.16x2 < 0.11) 22 CHAPTER 8. NON-METRIC METHODS AN D (5.43x1 - 13.33x2 T HEN < -6.03) x 1 , as should be evident in that figure. Note especially that information corresponding to nodes near the root can be pruned by C4.5Rules. This is more general than impurity based pruning methods, which instead merge leaf nodes. 8.4.3 Which tree classifier is best? In Chap. ?? we shall consider the problem of comparing different classifiers, including trees. Here, rather than directly comparing typical implementations of CART, ID3, C4.5 and other numerous tree methods, it is more instructive to consider variations within the different component steps. After all, with care one can generate a tree using any reasonable feature processing, impurity measure, stopping criterion or pruning method. Many of the basic principles applicable throughout pattern classification guide us here. Of course, if the designer has insight into feature preprocessing, this should be exploited. The binning of real-valued features used in early versions of ID3 does not take full advantage of order information, and thus ID3 should be applied to such data only if computational costs are otherwise too high. It has been found that an entropy impurity works acceptably in most cases, and is a natural default. In general, pruning is to be preferred over stopped training and cross-validation, since it takes advantage of more of the information in the training set; however, pruning large training sets can be computationally expensive. The pruning of rules is less useful for problems that have high noise and are at base statistical in nature, but such pruning can often simplify classifiers for problems where the data were generated by rules themselves. Likewise, decision trees are poor at inferring simple concepts, for instance whether more than half of the binary (discrete) attributes have value +1. As with most classification methods, one gains expertise and insight through experimentation on a wide range of problems. No single tree algorithm dominates or is dominated by others. It has been found that trees yield classifiers with accuracy comparable to other methods we have discussed, such as neural networks and nearest-neighbor classifiers, especially when specific prior information about the appropriate form of classifier is lacking. Tree-based classifiers are particularly useful with non-metric data and as such they are an important tool in pattern recognition research. 8.5 *Recognition with strings character word Suppose the patterns are represented as ordered sequences or strings of discrete items, as in a sequence of letters in an English word or in DNA bases in a gene sequence, such as "AGCTTCGAATC." (The letters A, G, C and T stand for the nucleic acids adenine, guanine, cytosine and thymine.) Pattern classification based on such strings of discrete symbols differs in a number of ways from the more commonly used techniques we have addressed up to here. Because the string elements -- called characters, letters or symbols -- are nominal, there is no obvious notion of distance between strings. There is a further difficulty arising from the fact that strings need not be of the same length. While such strings are surely not vectors, we nevertheless broaden our familiar boldface notation to now apply to strings as well, e.g., x = "AGCTTC," though we will often refer to them as patterns, strings, templates or general words. (Of course, 8.5. *RECOGNITION WITH STRINGS 23 there is no requirement that these be meaningful words in a natural language such as English or French.) A particularly long string is denoted text. Any contiguous string that is part of x is called a substring, segment, or more frequently a factor of x. For example, "GCT" is a factor of "AGCTTC." There is a large number of problems in computations on strings. The ones that are of greatest importance in pattern recognition are: String matching: Given x and text, test whether x is a factor of text, and if so, where it appears. Edit distance: Given two strings x and y, compute the minimum number of basic operations -- character insertions, deletions and exchanges -- needed to transform x into y. String matching with errors: Given x and text, find the locations in text where the "cost" or "distance" of x to any factor of text is minimal. String matching with the "don't care" symbol: This is the same as basic string matching, but with a special symbol, / , the don't care symbol, which can match any other symbol. We should begin by understanding the several ways in which these string operations are used in pattern classification. Basic string matching can be viewed as an extreme case of template matching, as in finding a particular English word within a large electronic corpus such as a novel or digital repository. Alternatively, suppose we have a large text such as Herman Melville's Moby Dick, and we want to classify it as either most relevant to the topic of fish or to the topic of hunting. Test strings or keywords for the fish topic might include "salmon," "whale," "fishing," "ocean," while those for hunting might include "gun," "bullet," "shoot," and so on. String matching would determine the number of occurrences of such keywords in the text. A simple count of the keyword occurrences could then be used to classify the text according to topic. (Other, more sophisticated methods for this latter stage would generally be preferable.) The problem of string matching with the don't care symbol is closely related to standard string matching, even though the best algorithms for the two types of problems differ, as we shall see. Suppose, for instance, that in DNA sequence analysis we have a segment of DNA, such as x = "AGCCG / / / / / GACTG," where the first and last sections (called motifs) are important for coding a protein while the middle section, which consists of five characters, is nevertheless known to be inert and to have no function. If we are given an extremely long DNA sequence (the text), string matching with the don't care symbol using the pattern x containing / symbols would determine if text is in the class of sequences that could yield the particular protein. The string operation that finds greatest use in pattern classification is based on edit distance, and is best understood in terms of the nearest-neighbor algorithm (Chap. ??). Recall that in that algorithm each training pattern or prototype is stored along with its category label; an unknown test pattern is then classified by its nearest prototype. Suppose now that the prototypes are strings and we seek to classify a novel test string by its "nearest" stored string. For instance an acoustic speech recognizer might label every 10-ms interval with the most likely phoneme present in an utterance, giving a string of discrete phoneme labels such as "tttoooonn." Edit distance would then be used to find the "nearest" stored training pattern, so that its category label can be read. text factor don't care symbol keyword 24 CHAPTER 8. NON-METRIC METHODS The difficulty in this approach is that there is no obvious notion of metric or distance between strings. In order to proceed, then, we must introduce some measure of distance between the strings. The resulting edit distance is the minimum number of fundamental operations needed to transform the test string into a prototype string, as we shall see. The string-matching-with-errors problem contains aspects of both the basic string matching and the edit distance problems. The goal is to find all locations in text where x is "close" to the substring or factor of text. This measure of closeness is chosen to be an edit distance. Thus the strn be safely increased without missing a valid shift; the larger of these proposed shifts is selected and s is increased accordingly. The bad-character heuristic utilizes the rightmost character in text that does not match the aligned character in x. Because character comparisons proceed right-toleft, this "bad character" is found as efficiently as possible. Since the current shift s is invalid, no more character comparisons are needed and a shift increment can be made. The bad-character heuristic proposes incrementing the shift by an amount to align the rightmost occurrence of the bad character in x with the bad character identified in text. This guarantees that no valid shifts have been skipped (Fig. 8.8). badcharacter heuristic 26 bad character p r o b a b i l i t i e CHAPTER 8. NON-METRIC METHODS good suffix s _ f o r _ e s t i m a t e s s e s t i m a t e s p r o b a b i l i t i e s _ f o r _ e s t i m a t e s s+3 proposed by bad-character heuristic e s t i m a t e s p r o b a b i l i t i e s _ f o r _ e s t i m a t e s s+7 proposed by good-suffix heuristic e s t i m a t e s Figure 8.8: String matching by the Boyer-Moore algorithm takes advantage of information obtained at one shift s to propose the next shift; the algorithm is generally much less computationally expensive than naive string matching, which always increments shifts by a single character. The top figure shows the alignment of text and pattern x for an invalid shift s. Character comparisons proceed right to left, and the first two such comparisons are a match -- the good suffix is "es." The first (right-most) mismatched character in text, here "i," is called the bad character. The bad-character heuristic proposes incrementing the shift to align the right-most "i" in x with the bad character "i" in text -- a shift increment of 3, as shown in the middle figure. The bottom figure shows the effect of the good-suffix heuristic, which proposes incrementing the shift the least amount that will align the good suffix, "es" in x, with that in text -- here an increment of 7. Lines 11 & 12 of the Boyer-Moore algorithm select the larger of the two proposed shift increments, i.e., 7 in this case. Although not shown in this figure, after the mismatch is detected at shift s + 7, both the bad-character and the good-suffix heuristics propose an increment of yet another 7 characters, thereby finding a valid shift. suffix prefix good suffix Now consider the good-suffix heuristic, which operates in parallel with the badcharacter heuristic, and also proposes a safe shift increment. A general suffix of x is a factor or substring of x that contains the final character in x. (Likewise, a prefix contains the initial character in x.) At shift s the rightmost contiguous characters in text that match those in x are called the good suffix, or "matching suffix." As before, because character comparisons are made right-to-left, the good suffix is found with the minimum number of comparisons. Once a character mismatch has been found, the good-suffix heuristic proposes to increment the shift so as to align the next occurrence of the good suffix in x with that identified in text. This insures that no valid shift has been skipped. Given the two shift increments proposed by the two heuristics, line 12 goodsuffix heuristic 8.5. *RECOGNITION WITH STRINGS 27 of the Boyer-Moore algorithm chooses the larger. These heuristics rely on the functions F and G. The last-occurrence function, F(x), is merely a table containing every letter in the alphabet and the position of its rightmost occurrence in x. For the pattern in Fig. 8.8, the table would contain: a, 6; e, 8; i, 4; m, 5; s, 9; and t, 8. All 20 other letters in the English alphabet are assigned a value 0, signifying that they do not appear in x. The construction of this table is simple (Problem 22) and need be done just once; it does not significanly affect the computational cost of the Boyer-Moore algorithm. The good-suffix function, G(x), creates a ta 11 is the location. Two minor heuristics for reducing computational effort are relevant to the stringmatching-with-errors problem. The first is that except in highly unusual cases, the length of the candidate factors of text that need be considered are roughly equal to length[x]. Second, for each candidate shift, the edit-distance calculation can be terminated if it already exceeds the current minimum. In practice, this latter heuristic can reduce the computational burden significantly. Otherwise, the algorithm for string matching with errors is virtually the same as that for edit distance (Computer exercise 10). 8.5.5 String matching with the "don't-care" symbol String matching with the "don't-care" symbol, / , is formally the same as basic string matching, but the / in either x or text is said to match any character (Fig. 8.11). text x r c h _ p a / t t e r / / s _ i n _ l o n g / s t r / n g s p a t / r s pattern match Figure 8.11: String matching with don't care symbol is the same as basic string matching except the / symbol -- in either text or x -- matches any character. The figure shows the only valid shift. An obvious approach to string matching with the don't care symbol is to modify the naive string-matching algorithm to include a condition for matching the don't care symbol. Such an approach, however, retains the computational inefficiencies of naive string matching (Problem 29). Further, extending the Boyer-Moore algorithm to include / is somewhat difficult and inefficient. The most effective methods are based on fundamental methods in computer arithmetic and, while fascinating, would take us away from our central concerns of pattern recognition (cf. Bibliography). The use of this technique in pattern recognition is the same as string matching, with a particular type of "tolerance." While learning is a general and fundamental technique throughout pattern recog- 8.6. GRAMMATICAL METHODS 31 nition, it has found limited use in recognition with basic string matching. This is because the designer typically knows precisely which strings are being sought -- they do not need to be learned. Learning can, of course, be based on the outputs of a string-matching algorithm, as part of a larger pattern recognition system. 8.6 Grammatical methods Up to here, we have not been concerned with any detailed models that might underly the generation of the sequence of characters in a string. We now turn to the case where rules of a particular sort were used to generate the strings and thus where their structure is fundamental. Often this structure is hierarchical, where at the highest or most abstract level a sequence is very simple, but at subsequent levels there is greater and greater complexity. For instance, at its most abstract level, the string "The history book clearly describes several wars" is merely a sentence. At a somewhat more detailed level it can be described as a noun phrase followed by a verb phrase. The noun phrase can be expanded at yet a subsequent level, as can the verb phrase. The expansion ends when we reach the words "The," "history," and so forth -- items that are considered the "characters," atomic and without further structure. Consider too strings representing valid telephone numbers -- local, national and international. Such numbers conform to a strict structure: either a country code is present or it is not; if not, then the domestic national code may or may not be present; if a country code is present, then there is a set of permissible city codes and for each city there is a set of permissible area codes and individual local numbers, and so on. As we shall see, such structure is easily specified in a grammar, and when such structure is present the use of a grammar for recognition can improve accuracy. For instance, grammatical methods can be used to provide constraints for a full system that uses a statistical recognizer as a component. Consider an optical character recognition system that recognizes and interprets mathematical equations based on a scanned pixel image. The mathematical symbols often have specific "slots" that can be filled with certain other symbols; this can be specified by a grammar. Thus an integral sign has two slots, for upper and lower limits, and these can be filled by only a limited set of symbols. (Indeed, a grammar is used in many mathematical typesetting programs in order to prevent authors from creating meaningless "equations.") A full system that recognizes the integral sign could use a grammar to limit the number of candidate categories for a particular slot, and this increases the accuracy of the full system. Similarly, consider the problem of recognizing phone numbers within acoustic speech in an automatic dialing application. A statistical or Hidden-Markov-Model acoustic recognizer might perform word spotting and pick out number words such as "eight" and "hundred." A subsequent stage based on a formal grammar would then exploit the fact that telephone numbers are highly constrained, as mentioned. We shall study the case when crisp rules specify how the representation at one level leads to a more expanded and complicated representation at the next level. We sometimes call a string generated by a set of rules a sentence; the rules are specified by a grammar, denoted G. (Naturally, there is no requirement that these be related in any way to sentences in natural language such as English.) In pattern recognition, we are given a sentence and a grammar, and seek to determine whether the sentence was generated by G. sentence 32 CHAPTER 8. NON-METRIC METHODS 8.6.1 Grammars The notion of a grammar is very general and powerful. Formally, a grammar G consists of four components: symbols: Every sentence consists of a string of characters (which are also called primitive symbols, terminal symbols or letters), taken from an alphabet A. For bookkeeping, it is also convenient to include the null or empty string denoted , which has length zero; if is appended to any string x, the result is again x. variables: These are also called non-terminal symbols, intermediate symbols or occasionally internal symbols, and are taken from a set I. root symbol root symbol: The root symbol or starting symbol is a special internal symbol, the source from which all sequences are derived. The root symbol is taken from a set S. productions: The set of production rules, rewrite rules, or simply rules, denoted P, specify how to transform a set of variables and symbols into other variables and symbols. These rules determine the core structures that can be produced by the grammar. For instance if A is an internal symbol and c a terminal symbol, the rewrite rule cA cc means that any time the segment cA appears in a string, it can be replaced by cc. Thus we denote a general grammar by its alphabet, its variables, its particular root symbol, and the rewrite rules: G = (A, I, S, P). The language generated by grammar, denoted L(G), is the set of all strings (possibly infinite in number) that can be generated by G. Consider two examples; the is quite simple and abstract. Let A = {a, b, c}, first p2 : AB BA p1 : S aSBA OR aBA p4 : bA bc p3 : bB bb S = S, I = {A, B, C}, and P = . p5 : cA cc p6 : aB ab (In order to make the list of rewrite rules more compact, we shall condense rules having the same left hand side by means of the OR on the right hand side. Thus rule p1 is a condensation of the two rules S aSBA and S aBA.) If we start with S and apply the rewrite rules in the following orders, we have the following two cases: root p1 p6 p4 S aBA abA abc root p1 p1 p6 p2 p3 p4 p5 S aSBA aaBABA aabABA aabBAA aabbAA aabbcA aabbcc null string production rule language production After the rewrite rules have been applied in these sequences, no more symbols match the left-hand side of any rewrite rule, and the process is complete. Such a transformation from the root symbol to a final string is called a production. These two productions show that abc and aabbcc are in the language generated by G. In fact, it can be shown (Problem 38) that this grammar generates the language L(G) = {an bn cn |n 1}. 8.6. GRAMMATICAL METHODS 33 A much more complicated grammar underlies the English language, of course. The alphabet consists of all English words, A = {the, history, book, sold, over, 1000, copies, . . . }, and the intermediate symbols are the parts of speech: I = { noun , verb , noun phrase , verb phrase , adjective , adverb , adverbial phrase }. The root symbol here is S = sentence . A restricted set of the production rules in English includes: sentence noun phrase verb phrase noun phrase adjective noun phrase verb phrase verb phrase adverbial phrase P= noun book OR theorem OR . . . verb describes OR buys OR holds OR . . . adverb over OR . . . This subset of the rules of English grammar does not prevent the generation of meaningless sentences, of course. For instance, the nonsense sentence "Squishy green dreams hop heuristically" can be derived in this subset of English grammar. Figure 8.12 shows the steps of a production in a derivation tree, where the root symbol is displayed at the top and the terminal symbols at the bottom. <sentence> derivation tree <noun phrase> <verb phrase> <adjective> <noun phrase> <verb> <adverbial phrase> The <adjective> history <noun> book <noun phrase> sold <preposition> <noun phrase> over <adjective> 1000 <noun> copies <noun phrase> Figure 8.12: This derivation tree illustrates how a portion of English grammar can transform the root symbol, here sentence , into a particular sentence or string of elements, here English words, which are read from left to right. 8.6.2 Types of string grammars There are four main types of grammar, arising from different types of structure in the productions. As we have seen, a rewrite rule is of the form , where and are strings made up of intermediate and terminal symbols. Type 0: Free or unrestricted Free grammars have no restrictions on the rewrite rules and thus they provide no constraints or structure on the strings they can 34 CHAPTER 8. NON-METRIC METHODS produce. While in principle they can express an arbitrary set of rules, this generality comes at the tremendous expense of possibly unbounded learning time. Knowing that a string is derived from a type 0 grammar provides no information and as such, type 0 grammars in general have but little use in pattern recognition. Type 1: Context-sensitive A grammar is called context-sensitive if every rewrite rule is of the form I x where and are any strings made up of intermediate and terminal symbols, I is an intermediate symbol and x is an intermediate or terminal symbol (other than ). We say that "I can be rewritten as x in the context of on the left and on the right." Type 2: Context-free A grammar is called context free if every production is of the form Ix where I is an intermediate symbol and x an intermediate or terminal symbol (other than ). Clearly, unlike a type 1 grammar, here there is no need for a "context" for the rewriting of I by x. Type 3: Finite State or Regular A grammar is called regular if every rewrite rule is of the form z OR z where and are made up of intermediate symbols and z is a terminal symbol (other than ). Such grammars are also called finite state because they can be generated by a finite state machine, which we shall see in Fig. 8.16. A language generated by a grammar of type i is called a type i language. It can be shown that the class of grammars of type i includes all grammars of type i + 1; thus there is a strict hierarchy in grammars. Any context-free grammar can be converted into one in Chomsky normal form (CNF). Such a grammar has all rules of the form A BC and Az Chomsky normal form where A, B and C are intermediate symbols (that is, they are in I) and z is a terminal symbol. For every context-free grammar G, there is another G in Chomsky normal form such that L(G) = L(G ) (Problem 36). Example 3: A grammar for pronouncing numbers In order to understand these issues better, consider a grammar that yields pronunciation of any number between 1 and 999,999. The alphabet has 29 basic terminal symbols, i.e., the spoken words A = {one, two, . . . , ten, eleven, . . . , twenty, thirty, . . . , ninety, hundred, thousand}. 8.6. GRAMMATICAL METHODS 35 There are six non-terminal symbols, corresponding to general six-digit, three-digit, and two-digit numbers, the numbers between ten and nineteen, and so forth, as will be clear below: I = {digits6, digits3, digits2, digit1, teens, tys}. The root node corresponds to a general number up to six digits in length: S = digits6. The set of rewrite rules is based on a knowledge of English: digits6 digits3 thousand digits3 digits6 digits3 thousand OR digits3 digits3 digit1 hundred digits2 digits3 digit1 hundred OR digits2 P= digits2 teens OR tys OR tys digit1 OR digit1 digit1 one OR two OR . . . nine teens ten OR eleven OR . . . nineteen tys twenty OR thirty OR . . . OR ninety The grammar takes digit6 and applies the productions until the elements in the final alphabet are produced, as shown in the figure. Because it contains rewrite rules such as digits6 digits3 thousand, this grammar cannot be type 3. It is easy to confirm that it is type 2. digit6 digits3 digit1 six hundred thousand digits2 tys thirty digit1 nine 639,014 digits3 digits2 teens fourteen 2,953 digits3 digit1 two digit6 thousand digit1 nine digits3 hundred digits2 tys fifty digit1 three These two derivation trees show how the grammar G yields the pronunciation of 639,014 and 2,953. The final string of terminal symbols is read from left to right. 8.6.3 Recognition using grammars Recognition using grammars is formally very similar to the general approaches used throughout pattern recognition. Suppose we suspect that a test sentence was generated by one of c different grammars, G1 , G2 , . . . , Gc , which can be considered as different models or classes. A test sentence x is classified according to which grammar could have produced it, or equivalently, the language L(Gi ) of which x is a member. Up to now we have worked forward -- forming a derivation from a root node to a final sentence. For recognition, though, we must employ the inverse process: that is, given a particular x, find a derivation in G that leads to x. This process, called parsing, is virtually always much more difficult than forming a derivation. We now parsing 36 CHAPTER 8. NON-METRIC METHODS discuss one general approach to parsing, and briefly mention two others. Bottom-up parsing Bottom-up parsing starts with the test sentence x, and seeks to simplify it, so as to represent it as the root symbol. The basic approach is to use candidate productions from P "backwards," i.e., find rewrite rules whose right hand side matches part of the current string, and replace that part with a segment that could have produced it. This is the general method in the Cocke-Younger-Kasami algorithm, which fills a parse table from the "bottom up." The grammar must be expressed in Chomsky normal form and thus the productions P must all be of the form A BC, a broad but not all inclusive category of grammars. Entries in the table are candidate strings in a portion of a valid derivation. If the table contains the source symbol S, then indeed we can work forward from S and derive the test sentence, and hence x L(G). In the following, xi (for i = 1, . . . n) represents the individual terminal characters in the string to be parsed. Algorithm 4 (Bottom-up parsing) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 parse table begin initialize G = (A, I, S, P), x = x1 x2 . . . xn i0 do i i + 1 Vi1 {A | A xi } until i = n j1 do j j + 1 i 0 do i i + 1 Vij k0 do k k + 1 Vij Vij {A | A BC P, B Vik and C Vi+k,j-k } until k = j - 1 until i = n - j + 1 until j = n if S V1n then print "parse of" x "successful in G" return end Consider the operation of Algorithm 4 in the following simple abstract example. Let the grammar G have two terminal and three intermediate symbols: A = {a, b}, and I = {A, B, C}. The root symbol is S, and there are just four production rules: S AB OR BC on, as illustrated by the pink lines in Fig. 8.13. S A B b A a A a B C C B a b Figure 8.15: This valid derivation of "babaa" in G can be read from the pink lines in the parse table of Fig. 8.13 generated by the bottom-up parse algorithm. The bottom-up and top-down parsers just described are quite general and there are a number of parsing algorithms which differ in space and time complexities. Many parsing methods depend upon the model underlying the grammar. One popular such model is finite state machines. Such a machine consists of nodes and transition links; each node can emit a symbol, as shown in Fig. 8.16. finite state machine 8.7 Grammatical inference In many applications, the grammar is designed by hand. Nevertheless, learning plays an extremely important role in pattern recognition research and it is natural that we attempt to learn a grammar from example sentences it generates. When seeking to follow that general approach we are immediately struck by differences between the areas addressed by grammatical methods and those that can be described as statistical. First, for most languages there are many -- often an infinite number of -- grammars that can produce it. If two grammars G1 and G2 generate the same language (and no other sentences), then this ambiguity is of no consequence; recognition will be the same. However, since training is always based on a finite set of samples, the problem is underspecified. There are an infinite number of grammars consistent with the training data, and thus we cannot recover the source grammar uniquely. 8.7. GRAMMATICAL INFERENCE 39 mouse S the A cow B was C found D seen by E under the F barn G farmer Figure 8.16: One type of finite state machine consists of nodes that can emit terminal symbols ("the," "mouse," etc.) and transition to another node. Such operation can be described by a grammar. For instance, the rewrite rules for this finite state machine include S theA, A mouseB OR cowB, and so on. Clearly these rules imply this finite state machine implements a type 3 grammar. The final internal node (shaded) would lead to the null symbol . There are two main techniques used to make the problem of inferring a grammar from instances tractable. The first is to use both positive and negative instances. That is, we use a set D+ of sentences known to be derivable in the grammar; we also use a set D- that are known to be not derivable in the grammar. In a multicategory case, it is common to take the positive instances in Gi and use them for negative examples in Gj for j = i. Even with both positive and negative instances, a finite training set rarely specifies the grammar uniquely. Thus our second technique is to impose conditions and constraints. A trivial illustration is that we demand that the alphabet of the candidate grammar contain only those symbols that appear in the training sentences. Moreover, we demand that every production rule in the grammar be used. We seek the "simplest" grammar that explains the training instances where "simple" generally refers to the total number of rewrite rules, or the sum of their lengths, or other natural criterion. These are versions of Occam's razor, that the simplest explanation of the data is to be preferred (Chap ??). In broad overview, learning proceeds as follows. An initial grammar G0 is guessed. Often it is useful to specify the type of grammar (1, 2 or 3), and thus place constraints on the forms of the candidate rewrite rules. In the absence of other prior information, it is traditional to make G0 as simple as possible and gradually expand the set of productions as needed. Positive training sentences x+ are selected from D+ one by i one. If x+ cannot be parsed by the grammar, then new rewrite rules are proposed i for P. A new rule is accepted if and only if it is used for a successful parse of x+ and i does not allow any negative samples to be parsed. In greater detail, then, an algorithm for inferring the grammar is: Algorithm 5 (Grammatical inference (overview)) 1 2 3 4 5 6 7 8 9 10 11 12 begin initialize D+ , D- , set of rules that "cover" the training data. After such training it is traditional to simplify the resulting logical rule by means of standard logical methods. The designer must specify the predicates and functions, based on a prior knowledge of the problem domain. The algorithm begins by considering the most general rules using these predicates and functions, and finds the "best" simple rule. Here, "best" means that the rule describes the largest number of training examples. Then, the algorithm searches among all refinements of the best rule, choosing the refinement that too is "best." This process is iterated until no more refinements can be added, or when the number of items described is maximum. In this way a single, though possibly complex, if-then rule has been learned (Fig. 8.18). The sequential covering algorithm iterates this process and returns a set of rules. Because of its greedy nature, the algorithm need not learn the smallest set of rules. IF THEN Fish(x)=T sequential covering IF HasHair(x) IF (Width(x)>2m) IF Swims(x) IF Runs(x) IF HasEyes(x) THEN Fish(x)=F THEN Fish(x)=F THEN Fish(x)=T THEN Fish(x)=F THEN Fish(x)=T IF Swims(x) IF Swims(x) LaysEggs(x) Runs(x) THEN Fish(x)=T THEN Fish(x)=F IF Swims(x) IF Swims(x) IF Swims(x) (Weight(x)>9kg) HasScales(x) HasHair(x) THEN Fish(x)=F THEN Fish(x)=T THEN Fish(x)=F IF Swims(x) HasScales(x) HasGills(x) THEN Fish (x)=T IF Swims(x) IF Swims(x) HasScales(x) HasScales(x) (Length(x)>5m) HasEyes(x) THEN Fish(x)=T THEN Fish(x)=F Figure 8.18: In sequential covering, candidate rules are searched through successive refinements. First, the "best" rule having a single conditional predicate is found, i.e., the one explaining most training data. Next, other candidate predicates are added, the best compound rule selected, and so forth. A general approach is to search first through all rules having a single attribute. 44 CHAPTER 8. NON-METRIC METHODS Next, consider the rule having a single conjunction of two predicates, then these conjunctions, and so on. Note that this greedy algorithm need not be optimal -- that is, it need not yield the most compact rule. Summary Non-metric data consists of lists of nominal attributes; such lists might be unordered or ordered (strings). Tree-based methods such as CART, ID3 and C4.5 rely on answers to a series of questions (typically binary) for classification. The designer selects the form of question and the tree is grown, starting at the root node, by finding splits of data that make the representation more "pure." There are several acceptable impurity measures, such as misclassification, variance and Gini; the entropy impurity, however, has found greatest use. To avoid overfitting and to improve generalization, one must either employ stopped splitting (declaring a node with non-zero impurity to be a leaf), or instead prune a tree trained to minimum impurity leafs. Tree classifiers are very flexible, and can be used in a wide range of applications, including those with data that is metric, non-metric or in combination. When comparing patterns that consist of strings of non-numeric symbols, we use edit distance -- a measure of the number of fundamental operations (insertions, deletions, exchanges) that must be performed to transform one string into another. While the general edit distance is not a true metric, edit distance can nevertheless be used for nearest-neighbor classification. String matching is finding whether a test string appears in a longer text. The requirement of a perfect match in basic string matching can be relaxed, as in string matching with errors, or with the don't care symbol. These basic string and pattern recognition ideas are simple and straightforward, addressing them in large problems requires algorithms that are computationally efficient. Grammatical methods assume the strings are generated from certain classes of rules, which can be described by an underlying grammar. A grammar G consists of an alphabet, intermediate symbols, a starting or root symbol and most importantly a set of rewrite rules, or productions. The four different types of grammars -- free, context-sensitive, context-free, and regular -- make different assumptions about the nature of the transformations. Parsing is the task of accepting a string x and determining whether it is a member of the language generated by G, and if so, finding a derivation. Grammatical methods find greatest use in highly structured environments, particularly where structure lies at many levels. Grammatical inference generally uses positive and negative example strings (i.e., ones in the language generated by G and ones not in that language), to infer a set of productions. Rule-based systems use either propositional logic (variable-free) or first-order logic to describe a category. In broad overview, rules can be learned by sequentially "covering" elements in the training set by successively more complex compound rules. Bibliographical and Historical Remarks Most work on decision trees addresses problems in continuous features, though a key property of the method is that they apply to nominal data too. Some of the foundations of tree-based classifiers stem from the Concept Learning System described in [42], but the important book on CART [10] provided a strong statistics foundation and revived interest in the approach. Quinlan has been a leading exponent of tree classifiers, introducing ID3 [66], C4.5 [69], as well as the application of minimum 8.8. BIBLIOGRAPHICAL AND HISTORICAL REMARKS 45 description length for pruning [71, 56]. A good overview is [61], and a comparison of multivariate decision tree methods is given in [11]. Splitting and pruning criteria based on probabilities are explored in [53], and the use of an interesting information metric for this end is described in [52]. The Gini index was first used in analysis of variance in categorical data [47]. Incremental or on-line learning in decision trees is explored in [85]. The missing variable problem in trees is addressed in [10, 67], which describe methods more general than those presented here. An unusual parallel "neural" search through trees was presented in [78]. The use of edit distance began in the 1970s [64]; a key paper by Wagner and Fischer proposed the fundamental Algorithm 3 and showed that it was optimal [88]. The explosion of digital information, especially natural language text, has motivated work on string matching and related operations. An excellent survey is [5] and two thorough books are [23, 82]. The computational complexity of string algorithms is presented in [21, Chapter 34]. The fast string matching method of Algorithm 2 was introduced in [9]; its complexity and speedups and improvements were discussed in [18, 35, 24, 4, 40, 83]. String edit distance that permits block-level transpositions is discussed in [48]. Some sophisticated string operations -- two-dimensional string matching, longest common subsequence and graph matching -- have found only occasionally use in pattern recognition. Statistical methods applied to strings are discussed in [26]. Finite-state automata have been applied to several problems in string matching [23, Chapter 7], as well as time series prediction and switching, for instance converting from an alphanumeric representation to a binary representation [43]. String matching has been applied to the recognition DNA sequences and text, and is essential in most pattern recognition and template matching involving large databases of text [14]. There is a growing literature on special purpose hardware for string operations, of which the Splash-2 system [12] is a leading example. The foundations of a formal study of grammar, including the classification of grammars, began with the landmark book by Chomsky [16]. An early exposition of grammatical inference [39, Chapter 6] was the source for much of the discussion here. Recognition based on parsing (Latin pars orationis or "part of speech") has been fundamental in automatic language recognition. Some of the earliest work on three-dimensional object recognition relied on complex grammars which described the relationships of corners and edges, in block structures such arches and towers. It was found that such systems were very brittle; they failed whenever there were errors in feature extraction, due to occlusion and even minor misspecifications of the model. For the most part, then, grammatical methods have been abandoned for object recognition and scene analysis [60, 25]. Grammatical methods have been applied to the recognition of some simple, highly structured diagrams, such as electrical circuits, simple maps and even Chinese/Japanese characters. For useful surveys of the basic ideas in syntactic pattern recognition see [33, 34, 32, 13, 62, 14], for parsing see [28, 3], for grammatical inference see [59]. The complexity of parsing type 3 is linear in the length of the string, type 2 is low-order polynomial, type 1 is exponential; pointers to the relevant literature appear in [76]. There has been a great deal of work on parsing natural language and speech, and a good textbook on artificial intelligence addressing this topic and much more is [75]. There is much work on inferring grammars from instances, such as Crespi-Reghizzi algorithm (context free) [22]. If queries can be presented interactively, the learning of a grammar can be speeded [81]. The methods described in this chapter have been expanded to allow for stochastic grammars, where there are probabilities associated with rules [20]. A grammar can be considered a specification of a prior probability for a class; for instance, a uniform 46 CHAPTER 8. NON-METRIC METHODS prior over all (legal) strings in the language L. Error-correcting parsers have been used when random variations arise in an underlying stochastic grammar [50, 84]. One can also apply probability measures to languages [8]. Rule-based methods have formed the foundation of expert systems, and have been applied extensively through many branches of artificial intelligence such as planning, navigation and prediction; their use in pattern recognition has been modest, however. Early influential systems include DENDRAL, for inferring chemical structure from mass spectra [29], PROSPECTOR, for finding mineral deposits [38], and MYCIN, for medical diagnosis [79]. Early use of rule induction for pattern recognition include that of Michalski [57, 58]. Figure 8.17 was inspired by Winston's influential work on learning simple geometrical structures and relationships [91]. Learning rules can be called inductive logic programming; Clark and Niblett have made a number of contributions to the field, particularly their CN2 induction algorithm [17]. Quinlan, who has contributed much to the theory and application of tree-based classifiers, describes his FOIL algorithm, which uses a minimum description length criterion to stop the learning of first-order rules [68]. Texts on inductive logic include [46, 63] and general machine learning, including inferencing [44, 61]. Problems Section 8.2 1. When a test pattern is classified by a decision tree, that pattern is subjected to a sequence of queries, corresponding to the nodes along a path from root to leaf. Prove that for any decision tree, there is a functionally equivalent tree in which every such path consists of distinct queries. That is, given an arbitrary tree prove that it is always possible to construct an equivalent tree in which no test pattern is ever subjected to the same query twice. Section 8.3 2. Consider classification trees that are non-binary. (a) Prove that for any arbitrary tree, with possibly unequal branching ratios throughout, there exists a binary tree that implements the same classification function. (b) Consider a tree with just two levels -- a root node connected to B leaf nodes (B 2). What are the upper and the lower limits on the number of levels in a functionally equivalent binary tree, as a function of B? (c) As in part (b), what are the upper and lower limits on the number of nodes in a functionally equivalent binary tree? 3. Compare the computational complexities of a monothetic and a polythetic tree classifier trained on the same data as follows. Suppose there are n/2 training patterns in each of two categories. Every pattern has d attributes, each of which can take on k discrete values. Assume that the best split evenly divides the set of patterns. (a) How many levels will there be in the monothetic tree? The polythetic tree? (b) In terms of the variables given, what is the complexity of finding the optimal split at the root of a monothetic tree? A polythetic tree? 8.8. PROBLEMS (c) Compare the total complexities for training the two trees fully. 47 4. The task here is to find the computational complexity of training a tree classifier using the twoing impurity where candidate splits are based on a single feature. Suppose there are c classes, 1 , 2 , ..., c , each with n/c patterns that are d-dimensional. Proceed as follows: (a) How many possible non-trivial divisions into two supercategories are there at the root node? (b) For any one of these candidate supercategory divisions, what is the computational complexity of finding the split that minimizes the entropy impurity? (c) Use your results from parts (a) & (b) to find the computational complexity of finding the split at the root node. (d) Suppose for simplicity that each split divides the patterns into equal subsets and furthermore that each leaf node corresponds to a single pattern. In terms of the variables given, what will be the expected number of levels of the tree? (e) Naturally, the number of classes represented at any particular node will depend upon the level in the tree; at the root all c categories must be considered, while at the level just above the leaves, only 2 categories must be considered. (The pairs of particular classes represented will depend upon the particular node.) State some natural simplifying assumptions, and determine the number of candidate classes at any node as a function of level. (You may need to use the floor or ceiling notation, x or x , in your answer, as described in the Appendix.) (f) Use your results from part (e) and the number of patterns to find the computational complexity at an arbitrary level L. (g) Use all your results to find the computational complexity of training the full tree classifier. (h) Suppose there n = 210 patterns, each of which is d = 6 dimensional, evenly divided among c = 16 categories. Suppose that on a uniprocessor a fundamental computation requires roughly 10-10 seconds. Roughly how long will it take to train your classifier using the twoing criterion? How long will it take to classify a single test pattern? 5. Consider training a binary tree using the entropy impurity, and refer to Eqs. 1 & 5. (a) Prove that the decrease in entropy impurity provided by a single yes/no query can never be greater than one bit. (b) For the two trees in Example 1, verify that each split reduces the impurity and that this reduction is never greater than 1 bit. Explain nevertheless why the impurity at a node can be lower than at its descendent, as occurs in that Example. (c) Generalize your result from part (a) to the case with arbitrary branching ratio B 2. 48 CHAPTER 8. NON-METRIC METHODS 6. Let P (1 ), . . . , P (c ) denote the probabilities of c classes at node N of a binary c classification tree, and j=1 P (j ) = 1. Suppose the impurity i(P (1 ), . . . , P (c )) at N is a strictly concave function of the probabilities. That is, for any probabilities = i(P a (1 ), . . . , P a (c )) = i(P b (1 ), . . . , P b (c )) and = i(1 P a (1 ) + (1 - 1 )P b (1 ), . . . , c P a (c ) + (1 - c )P b (c )), c ia ib i () then for 0 j 1 and j = 1, we have j=1 ia i ib . (a) Prove that for any split, we have i(s, t) 0, with equality if and only if P (j |TL ) = P (j |TR ) = P (j |T ), for j = 1, . . . , c. In other words, for a concave impurity function, splitting never increases the impurity. (b) Prove that entropy impurity (Eq. 1) is a concave function. (c) Prove that Gini impurity (Eq. 3) is a concave function. 7. Show that the surrogate split method described in the text corresponds to the assumption that the missing feature (attribute) is the one most informative. 8. Con05 level. 16. Consider the following patterns, each having four binary-valued attributes: 1 1100 0000 1010 0011 2 1100 1111 1110 0111 Note especially that the first patterns in the two categories are the same. (a) Create by hand a binary classification tree for this data. Train your tree so that the leaf nodes have the lowest impurity possible. (b) Suppose it is known that during testing the prior probabilities of the two categories will not be equal, but instead P (1 ) = 2P (2 ). Modify your training method and use the above data to form a tree for this case. Section 8.4 17. Consider training a binary decision tree to classify two-component patterns from two categories. The first component is binary, 0 or 1, while the second component has six possible values, A through F: 1 1A 0E 0B 1B 1F 0D 2 0A 0C 1C 0F 0B 1D 8.8. PROBLEMS 51 Compare splitting the root node based on the first feature with splitting it on the second feature in the following way. (a) Use an entropy impurity with a two-way split (i.e., B = 2) on the first feature and a six-way split on the second feature. (b) Repeat (a) but using a gain ratio impurity. (c) In light of your above answers discuss the value of gain ratio impurity in cases where splits have different branching ratios. Section 8.5 18. Consider strings x and text, of length m and n, respectively, from an alphabet A consisting of d characters. Assume that the naive string-matching algorithm (Algorithm 1) exits the implied loop in line 4 as soon as a mismatch occurs. Prove that the number of character-to-character comparisons made on average for random strings is 1 - d-m 2(n - m + 1). 1 - d-1 (n - m + 1) 19. Consider string matching using the Boyer-Moore algorithm (Algorithm 2) based on the trinary alphabet A = {a, b, c}. Apply the good-suffix function G and the last-occurrence function F to each of the following strings: (a) "acaccacbac" (b) "abababcbcbaaabcbaa" (c) "cccaaababaccc" (d) "abbabbabbcbbabbcbba" 20. Consider the string-matching problem illustrated in the top of Fig. 8.8. Assume text began at the first character of "probabilities." (a) How many basic character comparisons are required by the naive string-matching algorithm (Algorithm 1) to find a valid shift? (b) How many basic character comparisons are required by the Boyer-Moore string matching algorithm (Algorithm 2)? 21. For each of the texts below, determine the number of fundamental character comparisons needed to find all valid shifts for the test string x = "abcca" using the naive string-matching algorithm (Algorithm 1) and the Boyer-Moore algorithm (Algorithm 2). (a) "abcccdabacabbca" (b) "dadadadadadadad" (c) "abcbcabcabcabc" (d) "accabcababacca" 52 (e) "bbccacbccabbcca" CHAPTER 8. NON-METRIC METHODS 22. Write pseudocode for an efficient construction of the last-occurrence function F used in the Boyer-Moore algorithm (Algorithm 2). Let d be the number of elements in the alphabet A, and m the length of string x. (a) What is the time complexity of your algorithm in the worst case? (b) What is the space complexity of your algorithm in the worst case? (c) How many fundamental operations are required to compute F for the 26letter English alphabet for x = "bonbon"? For x = "marmalade"? For x = "abcdabdabcaabcda"? 23. Consider the training data from the trinary alphabet A = {a, b, c} in the table 1 aabbc ababcc babbcc 2 bccba bbbca cbbaaaa 3 caaaa cbcaab baaca Use the simple edit distance to classify each of the below strings. If there are ambiguities in the classification, state which two (or all three) categories are candidates. (a) "abacc" (b) "abca" (c) "ccbba" (d) "bbaaac" 24. Repeat Problem 23 using its training data but the following test data: (a) "ccab" (b) "abdca" (c) "abc" (d) "bacaca" 25. Repeat Problem 23 but assume that the cost of different string transformations are not equal. In particular, assume that an interchange is twice as costly as an insertion or a deletion. 26. Consider edit distance with positive but otherwise arbitrary costs associated with each of the fundamental operations of insertion, d (logical), 41 variable-free logic, see logic, propositional 69 variance impurity, see impurity, variance virtual value, 19 War and Peace, 24 word, 22 Contents 9 Algorithm-independent machine learning 9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2 Lack of inherent superiority of any classifier . . . . . . . . . . . . . . 9.2.1 No Free Lunch Theorem . . . . . . . . . . . . . . . . . . . . . Example 1: No Free Lunch for binary data . . . . . . . . . . . . . . . 9.2.2 *Ugly Duckling Theorem . . . . . . . . . . . . . . . . . . . . 9.2.3 Minimum description length (MDL) . . . . . . . . . . . . . . 9.2.4 Minimum description length principle . . . . . . . . . . . . . 9.2.5 Overfitting avoidance and Occam's razor . . . . . . . . . . . . 9.3 Bias and variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.1 Bias and variance for regression . . . . . . . . . . . . . . . . . 9.3.2 Bias and variance for classification . . . . . . . . . . . . . . . 9.4 *Resampling for estimating statistics . . . . . . . . . . . . . . . . . . 9.4.1 Jackknife . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Example 2: Jackknife estimate of bias and variance of the mode . . . 9.4.2 Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.5 Resampling for classifier design . . . . . . . . . . . . . . . . . . . . . 9.5.1 Bagging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.5.2 Boosting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Algorithm 1: AdaBoost . . . . . . . . . . . . . . . . . . . . . . . . . . 9.5.3 Learning with queries . . . . . . . . . . . . . . . . . . . . . . 9.5.4 Arcing, learning with queries, bias and variance . . . . . . . . 9.6 Estimating and comparing classifiers . . . . . . . . . . . . . . . . . . 9.6.1 Parametric models . . . . . . . . . . . . . . . . . . . . . . . . 9.6.2 Cross validation . . . . . . . . . . . . . . . . . . . . . . . . . 9.6.3 Jackknife and bootstrap estimation of classification accuracy 9.6.4 Maximum-likelihood model comparison . . . . . . . . . . . . 9.6.5 Bayesian model comparison . . . . . . . . . . . . . . . . . . . 9.6.6 The problem-average error rate . . . . . . . . . . . . . . . . . 9.6.7 Predicting final performance from learning curves . . . . . . . 9.6.8 The capacity of a separating plane . . . . . . . . . . . . . . . 9.7 Combining classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.7.1 Component classifiers with discriminant functions . . . . . . 9.7.2 Component classifiers without discriminant functions . . . . . Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliographical and Historical Remarks . . . . . . . . . . . . . . . . . . . Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Computer exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 3 3 4 4 6 8 11 13 14 15 15 18 20 22 23 24 25 26 26 28 29 31 32 33 33 35 36 37 39 42 44 45 46 48 49 50 51 58 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 CONTENTS Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 70 Chapter 9 Algorithm-independent machine learning 9.1 Introduction such range I pattern recognition. When confronting one isa"best." of algorithms, every reader has wondered at one time or another which Of course, some algorithms may be preferred because of their lower computational complexity; others may be preferred because they take into account some prior knowledge of the form of the data (e.g., discrete, continuous, unordered list, string, ...). Nevertheless there are classification problems for which such issues are of little or no concern, or we wish to compare algorithms that are equivalent in regard to them. In these cases we are left with the question: Are there any reasons to favor one algorithm over another? For instance, given two classifiers that perform equally well on the training set, it is frequently asserted that the simpler classifier can be expected to perform better on a test set. But is this version of Occam's razor really so evident? Likewise, we frequently prefer or impose smoothness on a classifier's decision functions. Do simpler or "smoother" classifiers generalize better, and if so, why? In this chapter we address these and related questions concerning the foundations and philosophical underpinnings of statistical pattern classification. Now that the reader has intuition and experience with individual algorithms, these issues in the theory of learning may be better understood. In some fields there are strict conservation laws and constraint laws -- such as the conservation of energy, charge and momentum in physics, or the second law of thermodynamics, which states that the entropy of an isolated system can never decrease. These hold regardless of the number and configuration of the forces at play. Given the usefulness of such laws, we naturally ask: are there analogous results in pattern recognition, ones that do not depend upon the particular choice of classifier or learning method? Are there any fundamental results that hold regardless of the cleverness of the designer, the number and distribution of the patterns, and the nature of the classification task? Of course it is very valuable to know that there exists a constraint on classifier 3 n the previous chapters we have seen many learning algorithms and techniques for Occam's razor 4 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING accuracy, the Bayes limit, and it is sometimes useful to compare performance to this theoretical limit. Alas in practice we rarely if ever know the Bayes error rate. Even if we did know this error rate, it would not help us much in designing a classifier; thus the Bayes error is generally of theoretical interest. What other fundamental principles and properties might be of greater use in designing classifiers? Before we address such problems, we should clarify the meaning of the title of this chapter. "Algorithm-independent" here refers, first, to those mathematical foundations that do not depend upon the particular classifier or learning algorithm used. Our upcoming discussion of bias and variance is just as valid for methods based on neural networks as for the nearest-neighbor or for model-dependent maximum likelihood. Second, we mean techniques that can be used in conjunction with different learning algorithms, or provide guidance in their use. For example, cross validation and resampling techniques can be used with any of a large number of training methods. Of course by the very general notion of an algorithm these too are algorithms, technically speaking, but we discuss them in this chapter because of their breadth of applicability and independence from the details of the learning techniques encountered up to here. In this chapter we shall see, first, that no pattern classification method is inherently superior to any other, or even to random guessing; it is the type of problem, prior distribution and other information that determine which form of classifier should provide the best performance. We shall then explore several ways to quantify and adjust the "match" between a learning algorithm and the problem it addresses. In any particular problem there are differences between classifiers, of course, and thus we show that with certain assumptions we can estimate their accuracy (even for instance before the candidate classifier is fully trained) and compare different classifiers. Finally, we shall see methods for integrating component or "expert" classifiers, which themselves might implement any of a number of algorithms. We shall present the results that are most important for pattern recognition practitioners, occasionally skipping over mathematical details that can be found in the original research referenced in the Bibliographical and Historical Remarks section. 9.2 Lack of inherent superiority of any classifier We now turn to the central question posed above: If we are interesIf we wish to compare the algorithms overall, we therefore must average over all such possible target functions consistent with the training data. Part 2 of Theorem 9.1 states that averaged over all possible target functions, there is no difference in off-training set errors between the two algorithms. For each of the 25 distinct target functions consistent with the n = 3 patterns in D, there is exactly one other target function whose output is inverted for each of the patterns outside the training set, and this ensures that the performances of algorithms 1 and 2 will also be inverted, so that the contributions to the formula in Part 2 cancel. Thus indeed Part 2 of the Theorem as well as Eq. 4 are obeyed. Figure 9.1 illustrates a result derivable from Part 1 of Theorem 9.1. Each of the six squares represents the set of all possible classification problems; note that this is not the standard feature space. If a learning system performs well -- higher than average generalization accuracy -- over some set of problems, then it must perform worse than average elsewhere, as shown in a). No system can perform well throughout the full set of functions, d); to do so would violate the No Free Lunch Theorem. In sum, all statements of the form "learning/recognition algorithm 1 is better than algorithm 2" are ultimately statements about the relevant target functions. There is, hence, a "conservation theorem" in generalization: for every possible learning algorithm for binary classification the sum of performance over all possible target functions is exactly zero. Thus we cannot achieve positive performance on some problems without getting an equal and opposite amount of negative performance on other problems. While we may hope that we never have to apply any particular algorithm to certain problems, all we can do is trade performance on problems we do not expect to encounter with those that we do expect to encounter. This, and the other results from the No Free Lunch Theorem, stress that it is the assumptions about the learning domains that are relevant. Another practical import of the Theorem is that even popular and theoretically grounded algorithms will perform poorly on some problems, ones in which the learning algorithm and the posterior happen not to be "matched," as governed by Eq. 1. Practitioners must be aware of this possibility, which arises in real-world applications. Expertise limited to a small range of methods, even powerful ones such as neural networks, will not suffice for all classification problems. 8 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING a) b) c) possible learning systems - - - - 0 0 - - 0 f) d) e) impossible learning systems + + + + 0 0 0 + 0 0 0 0 0 0 0 0 0 - + - 0 - 0 0 0 - Figure 9.1: The No Free Lunch Theorem shows the generalization performance on the off-training set data that can be achieved (top row), and the performance that cannot be achieved (bottom row). Each square represents all possible classification problems consistent with the training data -- this is not the familiar feature space. A + indicates that the classification algorithm has generalization higher than average, a - indicates lower than average, and a 0 indicates average performance. The size of a symbol indicates the amount by which the performance differs from the average. For instance, a) shows that it is possible for an algorithm to have high accuracy on a small set of problems so long as it has mildly poor performance on all other problems. Likewise, b) shows that it is possible to have excellent performance throughout a large range of problem but this will be balanced by very poor performance on a large range of other problems. It is impossible, however, to have good performance throughout the full range of problems, shown in d). It is also impossible to have higher than average performance on some problems, and average performance everywhere else, shown in e). Experience with a broad range of techniques is the best insurance for solving arbitrary new classification problems. 9.2.2 *Ugly Duckling Theorem While the No Free Lunch Theorem shows that in the absence of assumptions we should not prefer any learning or classification algorithm over another, an analogous theorem addresses features and patterns. Roughly speaking, the Ugly Duckling Theorem states that in the absence of assumptions there is no privileged or "best" feature representation, and that even the notion of similarity between patterns depends implicitly on assumptions which may or may not be correct. Since we are using discrete representations, we can use logical expressions or "predicates" to describe a pattern, much as in Chap. ??. If we denote a binary feature attribute by fi , then a particular pattern might be described by the predicate "f1 AN D f2 ," another pattern might be described as "N OT f2 ," and so on. Likewise we could have a predicate involving the patterns themselves, such as x1 OR x2 . Figure 9.2 shows how patterns can be represented in a Venn diagram. Below we shall need to count predicates, and for clarity it helps to consider a particular Venn diagram, such as that in Fig. 9.3. This is the most general Venn diagram based on two features, since for every configuration of f1 and f2 there is problem space (not feature space) + - - ++ + - + - + 0 0 - 9.2. LACK OF INHERENT SUPERIORITY OF ANY CLASSIFIER a) f1 x1 x2 x3 x4 x5 f3 f2 x1 x4 f3 b) f1 f1 x1 f2 x2 x3 f3 x3 c) 9 f2 x2 x6 x4 x5 Figure 9.2: Patterns xi , represented as d-tuples of binary features fi , can be placed in Venn diagram (here d = 3); the diagram itself depends upon the classification problem and its constraints. For instance, suppose f1 is the binary feature attribute has legs, f2 is has right arm and f3 the attribute has right hand. Thus in part a) pattern x1 denotes a person who has legs but neither arm nor hand; x2 a person who has legs and an arm, but no hand; and so on. Notice that the Venn diagram expresses the biological constraints associated with real people: it is impossible for someone to have a right hand but no right arm. Part c) expresses different constraints, such as the biological constraint of mutually exclusive eye colors. Thus attributes f1 , f2 and f3 might denote brown, green and blue respectively and a pattern xi describes a real person, whom we can assume cannot have eyes that differ in color. indeed a pattern. Here predicates can be as simple as "x1 ," or more complicated, such as "x1 OR x2 OR x4 ," and so on. f1 x1 x2 f2 x3 x4 Figure 9.3: The Venn for a problem with no constraints on two features. Thus all four binary attribute vectors can occur. The rank r of a predicate is the number of the simplest or indivisible elements it contains. The tables below show the predicates of rank 1, 2 and 3 associated with the Venn diagram of Fig. 9.3. Not shown is the fact that there is but one predicate of rank r = 4, the disjunction of the x1 , . . . , x4 , which has the logical value True. If we let n be the total number of regions in the Venn diagram (i.e., the number of distinct possible patterns), then there are n predicates of rank r, as shown at the bottom of r the table. rank Technically speaking, we should use set operations rather than logical operations when discussing the Venn diagram, writing x1 x2 instead of x1 OR x2 . Nevertheless we use logical operations here for consistency with the rest of the text. 10 rank r = 1 x1 f1 AN D x2 f1 AN D x3 f2 AN D x4 N OT (f1 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING rank OR x2 OR x3 OR x4 OR x3 OR x4 OR x4 r=2 f1 f1 XOR f2 NOT f2 f2 NOT(f1 AND f2 ) NOT f1 4 2 N OT f2 f2 N OT f1 OR f2 ) x1 x1 x1 x2 x2 x3 x1 x1 x1 x2 OR OR OR OR x2 x2 x3 x3 rank OR x3 OR x4 OR x3 OR x4 r=3 f1 OR f2 f1 OR NOT f2 NOT(f1 AND f2 ) f2 OR NOTf1 4 1 =4 =6 4 3 =4 The total number of predicates in the absence of constraints is n r=0 n r = (1 + 1)n = 2n , (5) and thus for the d = 4 case of Fig. 9.3, there are 24 = 16 possible predicates (Problem 9). Note that Eq. 5 applies only to the case where there are no constraints; for Venn diagrams that do incorporate constraints, such as those in Fig. 9.2, the formula does not hold (Problem 10). Now we turn to our central question: In the absence of prior information, is there a principled reason to judge any two distinct patterns as more or less similar than two other distinct patterns? A natural and familiar measure of similarity is the number of features or attributes shared by two patterns, but even such an obvious measure presents conceptual difficulties. To appreciate such difficulties, consider first a simple example. Suppose attributes f1 and f2 represent blind in right eye and blind in left eye, respectively. If we base similarity on shared features, person x1 = {1, 0} (blind only in the right eye) is maximally different from person x2 = {0, 1} (blind only in the left eye). In particular, in this scheme x1 is more similar to a totally blind person and to a normally sighted person than he is to x2 . But this result may prove unsatisfactory; we can easily envision many circumstances where we would consider a person blind in just the right eye to be "similar" to one blind in just the left eye. Such people might be permitted to drive automobiles, for instance. Further, a person blind in just one eye would differ significantly from totally blind person who would not be able to drive. A second, related point is that there are always multiple ways to represent vectors (or tuples) of attributes. For instance, in the above example, we might use alternative features f1 and f2 to represent blind in right eye and same in both eyes, respectively, and then the four types of people would be represented as shown in the tables. f1 0 0 1 1 f2 0 1 0 1 f1 0 0 1 1 f2 1 0 0 1 x1 x2 x3 x4 Of course there are other representations, each more or less appropriate to the particular problem at hand. In the absence of prior information, though, there is no principled reason to prefer one of these representations over another. 9.2. LACK OF INHERENT SUPERIORITY OF ANY CLASSIFIER 11 We must then still confront the problem of finding a principled measure the similarity between two patterns, given some representation. The only plausible candidate measure in this circumstance would be the number of predicates (rather than the number of features) the patterns share. Consider two distinct patterns (in some representation) xi and xj , where i = j. Regardless of the constraints in the problem (i.e., the Venn diagram), there are, of course, no predicates of rank r = 1 that are shared by the two patterns. There is but one predicate of rank r = 2, i.e., xi OR xj . A predicate of rank r = 3 must contain three patterns, two of which are xi and xj . Since there are d patterns total, there are then d-2 = d - 2 predicates of rank 3 that 1 are shared by xi and xj . Likewise, for an arbitrary rank r, there are d-2 predicates r-2 shared by the two patterns, where 2 r d. The total number of predicates shared by the two patterns is thus the sum d r-2 d-2 r-2 = (1 + 1)d-2 = 2d-2 . (6) Note the key result: Eq. 6 is independent of the choice of xi and xj (so long as they are distinct). Thus we conclude that the number of predicates shared by two distinct patterns is constant, and independent of the patterns themselves (Problem 11). We conclude that if we judge similarity based on the number of predicates that patterns share, then any two distinct patterns are "equally similar." This is stated formally as: Theorem 9.2 (Ugly Duckling) Given that we use a finite set of predicates that enables us to distinguish any two patterns under consideration, the number of predicates shared by any two such patterns is constant and independent of the choice of those patterns. Furthermore, if pattern similarity is based on the total number of predicates shared by two patterns, then any two patterns are "equally similar." In summary, then, the Ugly Duckling Theorem states something quite simple yet important: there is no problem-independent or privileged or "best" set of features or feature attributes. Moreover, while the above was derived using d-tuples of binary values, it also applies to a continuous feature spaces too, if such as space is discretized (at any resolution). The Theorem forces us to acknowledge that even the apparently simple notion of similarity between patterns is fundamentally based on implicit assumptions about the problem domain (Problem 12). 9.2.3 Minimum description length (MDL) It is sometimes claimed that the minimum description length principle provides justification for preferring one type of classifier over another -- specifically "simpler" classifiers over "complex" ones. Briefly stated, the approach purports to find some irreducible, smallest representation of all members of a category (much like a "signal"); all variation among the individual patterns is then "noise." The principle argues that by simplifying recognizers appropriately, the signal can be retained while the noise is ignored. Because the principle is so often invoked, it is important to understand what properly derives from it, what does not, and how it relates to the No Free Lunch The Theorem gets its fanciful name from the following counter-intuitive statement: Assuming similarity is based on the number of shared predicates, an ugly duckling A is as similar to beautiful swan B as beautiful swan C is to B, given that these items differ from one another. 12 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING Theorem. To do so, however, we must first understand the notion of algorithmic complexity. Algorithmic complexity Algorithmic complexity -- also known as Kolmogorov complexity, Kolmogorov-Chaitin complexity, descriptional complexity, shortest program length or algorithmic entropy -- seeks to quantify an inherent complexity of a binary string. (We shall assume both classifiers and patterns are described by such strings.) Algorithmic complexity can be explained by analogy to communication, the earliest application of information theory (App. ??). If the sender and receiver agree upon a specification method L, such as an encoding or compression technique, then message x can then be transmitted as y, denoted L(y) = x or y : L(y) = x. The cost of transmission of x is the length of the transmitted message y, that is, |y|. The least such cost is hence the minimum length of such a message, denoted min : L(y) = x; this minimal length is the entropy of x |y| abstract computer under the specification or transmission method L. Algorithmic complexity is defined by analogy to entropy, where instead of a specification method L, we consider programs running on an abstract computer, i.e., one whose functions (memory, processing, etc.) are described operationally and without regard to hardware implementation. Consider an abstract computer that takes as a program a binary string y and outputs a string x and halts. In such a case we say that y is an abstract encoding or description of x. A universal description should be independent of the specification (up to some additive constant), so that we can compare the complexities of different binary strings. Such a method would provide a measure of the inherent information content, the amount of data which must be transmitted in the absence of any other prior knowledge. The Kolmogorov complexity of a binary string x, denoted K(x), is defined as the size of the shortest program y, measured in bits, that without additional data computes the string x and halts. Formally, we write K(x) = min[U (y) = x], |y| (7) Turing machine where U represents an abstract universal Turing machine or Turing computer. For our purposes it suffices to state that a Turing machine is "universal" in that it can implement any algorithm and compute any computable function. Kolmogorov complexity is a measure of the incompressibility of x, and is analogous to minimal sufficient statistics, the optimally compressed representation of certain properties of a distribution (Chap. ??). Consider the following examples. Suppose x consists solely of n 1s. This string is actually quite "simple." If we use some fixed number of bits k to specify a general program containing a loop for printing a string of 1s, we need merely log2 n more bits to specify the iteration number n, the condition for halting. Thus the Kolmogorov complexity of a string of n 1s is K(x) = O(log2 n). Next consider the transcendental number , whose infinite sequence of seemingly random binary digits, 11.00100100001111110110101010001 . . .2 , actually contains only a few bits of information: the size of the shortest program that can produce any arbitrarily large number of consecutive digits of . Informally we say the algorithmic complexity of is a constant; formally we write K() = O(1), which means K() does not grow with increasing number of desired bits. Another example is a "truly" random binary string, which cannot be expressed as a shorter string; its algorithmic complexity is within a 9.2. LACK OF INHERENT SUPERIORITY OF ANY CLASSIFIER 13 constant factor of its length. For such a string we write K(x) = O(|x|), which means that K(x) grows as fast as the length of x (Problem 13). 9.2.4 Minimum description length principle We now turn to a simple, "naive" version of the minimum description length principle and its application to pattern recognition. Given that all members of a category share some properties, yet differ in others, the recognizer should seek to learn the common or essential characteristics while ignoring the accidental or random ones. Kolmogorov complexity seeks to provide an objective measure of simplicity, and thus the description of the "essential" characteristics. Suppose we seek to design a classifier using a training set D. The minimum description length (MDL) principle states that we should minimize the sum of the model's algorithmic complexity and the description of the training data with respect to that model, i.e., K(h, D) = K(h) + K(D using h). h (8) Thus we seek the model h that obeys h = arg min K(h, D) (Problem 14). (Variations on the naive minimum description length principle use a weighted sum of the terms in Eq. 8.) In practice, determining the algorithmic complexity of a classifier depends upon a chosen class of abstract computers, and this means the complexity can be specified only up to an additive constant. A particularly clear application of the minimum description length principle is in the design of decision tree classifiers (Chap. ??). In this case, a model h specifies the tree and the decisions at the nodes; thus the algorithmic complexity of the model is proportional to the number of nodes. The complexity of the data given the model could be expressed in terms of the entropy (in bits) of the data D, the weighted sum of the entropies of the data at the leaf nodes. Thus if the tree is pruned based on an entropy criterion, there is an implicit global cost criterion that is equivalent to minimizing a measure of the general form in Eq. 8 (Computer exercise 1). It can be shown theoretically that classifiers designed with a minimum description length principle are guaranteed to converge to the ideal or true model in the limit of more and more data. This is surely a very desirable property. However, such derivations cannot prove that the principle leads to superior performance in the finite data case; to do so would violate the No Free Lunch Theorems. Moreover, in practice it is often difficult to compute the minimum description length, since we may not be clever enough to find the "best" representation (Problem 17). Assume there is some correspondence between a particular classifier and an abstract computer; in such a case it may be quite simple to determine the length of the string y necessary to create the classifier. But since finding the algorithmic complexity demands we find the shortest such string, we must perform a very difficult search through possible programs that could generate the classifier. The minimum description length principle can be viewed from a Bayesian perspective. Using our current terminology, Bayes formula states P (h|D) = P (h)P (D|h) P (D) (9) for discrete hypotheses and data. The optimal hypothesis h is the one yielding the highest posterior probability, i.e., 14 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING h = arg max[P (h)P (D|h)] h h = arg max[log2 P (h) + log2 P (D|h)], (10) much as we saw in Chap. ??. We note that a string x can be communicated or represented at a cost bounded below by -log2 P (x), as stated in Shannon's optimal coding theorem. Shannon's theorem thus provides a link between the minimum description length (Eq. 8) and the Bayesian approaches (Eq. 10). The minimum description length principle states that simple models (small K(h)) are to be preferred, and thus amounts to a bias toward "simplicity." It is often easier in practice to specify such a prior in terms of a description length than it is using functions of distributions (Problem 16). We shall revisit the issue of the tradeoff between simplifying the model and fitting the data in the bias-variance dilemma in Sec. 9.3. It is found empirically that classifiers designed using the minimum description length principle work well in many problems. As mentioned, the principle is effectively a method for biasing priors over models toward "simple" models. The reasons for the many empirical success of the principle are not trivial, as we shall see in Sect. 9.2.5. One of the greatest benefits of the principle is that it provides a computationally clear approach to balancing model complexity and the fit of the data. In somewhat more heuristic methods, such as pruning neural networks, it is difficult to compare the algorithmic complexity of the network (e.g., number of units or weights) with the entropy of the data with respect to that model. 9.2.5 Overfitting avoidance and Occam's razor Throughout our discussions of pattern classifiers, we have mentioned the need to avoid overfitting by means of regularization, pruning, inclusion of penalty terms, minimizing a description length, and so on. The No Free Lunch results throw such techniques into question. If there are no problem-independent reasons to prefer one algorithm over another, why is overfitting avoidance nearly universally advocated? For a given training error, why do we generally advocate simple classifiers with fewer features and parameters? In fact, techniques for avoiding overfitting or minimizing description length are not inherently beneficial; instead, such techniques amount to a preference, or "bias," over the forms or parameters of classifiers. They are only beneficial if they happen to address problems for which they work. It is the match of the learning algorithm to the problem -- not the imposition of overfitting avoidance -- that determines the empirical success. There are problems for which overfitting avoidance actually leads to worse performance. The effects of overfitting avoidance depend upon the choice of representation too; if the feature space is mapped to a new, formally equivalent one, overfitting avoidance has different effects (Computer exercise ??). In light of the negative results from the No Free Lunch theorems, we might probe more deeply into the frequent empirical "successes" of the minimum description length principle and the more general philosophical principle of Occam's razor. In its original form, Occam's razor stated merely that "entities" (or explanations) should not be multiplied beyond necessity, but it has come to be interpreted in pattern recognition as counselling that one should not use classifiers that are more complicated than are necessary, where "necessary" is determined by the quality of fit to the training data. Given the respective requisite assumptions, the No Free Lunch theorem proves that 9.3. BIAS AND VARIANCE 15 there is no benefit in "simple" classifiers (or "complex" ones, for that matter) -- simple classifiers claim neither unique nor universal validity. The frequent empirical "successes" of Occam's razor imply that the classes of problems addressed so far have certain properties. What might be the reason we explore problems that tend to favor simpler classifiers? A reasonable hypothesis is that through evolution, we have had strong selection pressure on our pattern recognition apparatuses to be computationally simple -- require fewer neurons, less time, and so forth -- and in general such classifiers tend to be "simple." We are more likely to ignore problems for which Occam's razor does not hold. Analogously, researchers naturally develop simple algorithms before more complex ones, as for instance in the progression from the Perceptron, to multilayer neural networks, to networks with pruning, to networks with topology learning, to hybrid neural net/rule-based methods, and so on -- each more complex than its predecessor. Each method is found to work on some problems, but not ones that are "too complex." For instance the basic Perceptron is inadequate for optical character recognition; a simple three-layer neural network is inadequate for speaker-independent speech recognition. Hence our design methodology itself imposes a bias toward "simple" classifiers; we generally stop searching for a design when the classifier is "good enough." This principle of satisficing -- creating an adequate though possibly non-optimal solution -- underlies much of practical pattern recognition as well as human cognition. Another "justification" for Occam's razor derives from a property we might strongly desire or expect in a learning algorithm. If we assume that adding more training data does not, on average, degrade the generalization accuracy of a classifier, then a version of Occam's razor can in fact be derived. Note, however, that such a desired property amounts to a non-uniform prior over learning algorithms -- while this property is surely desirable, it is a premise and cannot be "proven." Finally, the No Free Lunch theorem implies that we cannot use training data to create a scheme by which we can with some assurance distinguish new problems for which the classifier will generalize well from new problems for which the classifier will generalize poorly (Problem 8). satisficing 9.3 Bias and variance Given that there is no general best classifier unless the probability over the class of problems is restricted, practitioners must be prepared to explore a number of methods or models when solving any given classification problem. Below we will define two ways to measure the "match" or "alignment" of the learning algorithm to the classification problem: the bias and the variance. The bias measures the accuracy or quality of the match: high bias implies a poor match. The variance measures the precision or specificity of the match: a high variance implies a weak match. Designers can adjust the bias and variance of classifiers, but the important bias-variance relation shows that the two terms are not independent; in fact, for a given mean-square error, they obey a form of "conservation law." Naturally, though, with prior information or even mere luck, classifiers can be created that have a different mean-square error. 9.3.1 Bias and variance for regression Bias and variance are most easily understood in the context of regression or curve fitting. Suppose there is a true (but unknown) function F (x) with continuous valued output with noise, and we seek to estimate it based on n samples in a set D generated 16 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING by F (x). The regression function estimated is denoted g(x; D) and we are interested in the dependence of this approximation on the training set D. Due to random variations in data selection, for some data sets of finite size this approximation will be excellent while for other data sets of the same size the approximation will be poor. The natural measure of the effectiveness of the estimator can be expressed as its mean-square deviation from the desired optimal. Thus we average over all training sets D of fixed size n and find (Problem 18) ED (g(x; D) - F (x))2 = (ED [g(x; D) - F (x)])2 + ED (g(x; D) - ED [g(x; D)])2 . bias2 variance (11) bias variance biasvariance dilemma The first term on the right hand side is the bias (squared) -- the difference between the expected value and the true (but generally unknown) value -- while the second term is the variance. Thus a low bias means on average we accurately estimate F from D. Further, a low variance means the estimate of F does not change much as the training set varies. Even if an estimator is unbiased (i.e., the bias = 0 and its expected value is equal to the true value), there can nevertheless be a large mean-square error arising from a large variance term. Equation 11 shows that the mean-square error can be expressed as the sum of a bias and a variance term. The bias-variance dilemma or bias-variance trade-off is a general phenomenon: procedures with increased flexibility to adapt to the training data (e.g., have more free parameters) tend to have lower bias but higher variance. Different classes of regression functions g(x; D) -- linear, quadratic, sum of Gaussians, etc. -- will have different overall errors; nevertheless, Eq. 11 will be obeyed. Suppose for example that the true, target function F (x) is a cubic polynomial of one variable, with noise, as illustrated in Fig. 9.4. We seek to estimate this function based on a sampled training set D. Column a) at the left, shows a very poor "estimate" g(x) -- a fixed linear function, independent of the training data. For different training sets sampled from F (x) with noise, g(x) is unchanged. The histogram of this meansquare error of Eq. 11, shown at the bottom, reveals a spike at a fairly high error; because this estimate is so poor, it has a high bias. Further, the variance of the constant model g(x) is zero. The model in column b) is also fixed, but happens to be a better estimate of F (x). It too has zero variance, but a lower bias than the poor model in a). Presumably the designer imposed some prior knowledge about F (x) in order to get this improved estimate. The model in column c) is a cubic with trainable coefficients; it would learn F (x) exactly if D contained infinitely many training points. Notice the fit found for every training set is quite good. Thus the bias is low, as shown in the histogram at the bottom. The model in d) is linear in x, but its slope and intercept are determined from the training data. As such, the model in d) has a lower bias than the models in a) and b). In sum, for a given target function F (x), if a model has many parameters (generally low bias), it will fit the data well but yield high variance. Conversely, if the model has few parameters (generally high bias), it may not fit the data particularly well, but this fit will not change much as for different data sets (low variance). The best way to get low bias and low variance is the have prior information about the target function. We can virtually never get zero bias and zero variance; to do so would mean there is only one learning problem to be solved, in which case the answer is already 9.3. BIAS AND VARIANCE 17 a) g(x) = fixed b) g(x) = fixed c) y g(x) = a0 + a1x + a0x2 +a3x3 learned d) y g(x) = a0 + a1x learned y g(x) y g(x) F(x) g(x) F(x) F(x) g(x) F(x) D1 x x x x y g(x) y y g(x) y D2 x F(x) g(x) F(x) F(x) g(x) F(x) x x x y g(x) y y g(x) y F(x) D3 x F(x) g(x) F(x) F(x) g(x) x x x p bias p bias p bias p bias E E E E variance variance variance Figure 9.4: The bias-variance dilemma can be illustrated in the domain of regression. Each column represents a different model, each row a different set of n = 6 training points, Di , randomly sampled from the true function F (x) with noise. Histograms of the mean-square error of E ED [(g(x) - F (x))2 ] of Eq. 11 are shown at the bottom. Column a) shows a very poor model: a linear g(x) whose parameters are held fixed, independent of the training data. This model has high bias and zero variance. Column b) shows a somewhat better model, though it too is held fixed, independent of the training data. It has a lower bias than in a) and the same zero variance. Column c) shows a cubic model, where the parameters are trained to best fit the training samples in a mean-square error sense. This model has low bias, and a moderate variance. Column d) shows a linear model that is adjusted to fit each training set; this model has intermn the training data lead to significantly different classifiers and relatively "large" changes in accuracy. As we saw in Chap. ??, decision tree classifiers trained by a greedy algorithm can be unstable -- a slight change in the position of a single training point can lead to a radically different tree. In general, bagging improves recognition for unstable classifiers since it effectively averages over such discontinuities. There are no convincing theoretical derivations or simulation studies showing that bagging will help all stable classifiers, however. Bagging is our first encounter with multiclassifier systems, where a final overall classifier is based on the outputs of a number of component classifiers. The global decision rule in bagging -- a simple vote among the component classifiers -- is the most elementary method of pooling or integrating the outputs of the component classifiers. We shall consider multiclassifier systems again in Sect. 9.7, with particular attention to forming a single decision rule from the outputs of the component classifiers. 9.5.2 Boosting weak learner The goal of boosting is to improve the accuracy of any given learning algorithm. In boosting we first create a classifier with accuracy on the training set greater than average, and then add new component classifiers to form an ensemble whose joint decision rule has arbitrarily high accuracy on the training set. In such a case we say the classification performance has been "boosted." In overview, the technique trains successive component classifiers with a subset of the training data that is "most informative" given the current set of component classifiers. Classification of a test point x is based on the outputs of the component classifiers, as we shall see. For definiteness, consider creating three component classifiers for a two-category problem through boosting. First we randomly select a set of n1 < n patterns from the full training set D (without replacement); call this set D1 . Then we train the first classifier, C1 , with D1 . Classifier C1 need only be a weak learner, i.e., have accuracy only slightly better than chance. (Of course, this is the minimum requirement; a weak learner could have high accuracy on the training set. In that case the benefit In Sect. 9.7 we shall come across other names for component classifiers. For the present purposes we simply note that these are not classifiers of component features, but are instead members in an ensemble of classifiers whose outputs are pooled so as to implement a single classification rule. 9.5. RESAMPLING FOR CLASSIFIER DESIGN 27 of boosting will be small.) Now we seek a second training set, D2 , that is the "most informative" given component classifier C1 . Specifically, half of the patterns in D2 should be correctly classified by C1 , half incorrectly classified by C1 (Problem 29). Such an informative set D2 is created as follows: we flip a fair coin. If the coin is heads, we select remaining samples from D and present them, one by one to C1 until C1 misclassifies a pattern. We add this misclassified pattern to D2 . Next we flip the coin again. If heads, we continue through D to find another pattern misclassified by C1 and add it to D2 as just described; if tails we find a pattern which C1 classifies correctly. We continue until no more patterns can be added in this manner. Thus half of the patterns in D2 are correctly classified by C1 , half are not. As such D2 provides information complementary to that represented in C1 . Now we train a second component classifier C2 with D2 . Next we seek a third data set, D3 , which is not well classified by the combined system C1 and C2 . We randomly select a training pattern from those remaining in D, and classify that pattern with C1 and with C2 . If C1 and C2 disagree, we add this pattern to the third training set D3 ; otherwise we ignore the pattern. We continue adding informative patterns to D3 in this way; thus D3 contains those not well represented by the combined decisions of C1 and C2 . Finally, we train the last component classifier, C3 , with the patterns in D3 . Now consider the use of the ensemble of three trained component classifiers for classifying a test pattern x. Classification is based on the votes of the component classifiers. Specifically, if C1 and C2 agree on the category label of x, we use that label; if they disagree, then we use the label given by C3 (Fig. 9.6). We skipped over a practical detail in the boosting algorithm: how to choose the number of patterns n1 to train the first component classifier. We would like the final system to be trained with all patterns in D of course; moreover, because the final decision is a simple vote among the component classifiers, we would like to have roughly equal number of patterns in each (i.e., n1 n2 n3 n/3). A reasonable first guess is to set n1 n/3 and create the three component classifiers. If the classification problem is very simple, however, component classifier C1 will explain most of the data and thus n2 (and n3 ) will be much less than n1 , and not all of the patterns in the training set D will be used. Conversely, if the problem is extremely difficult, then C1 will explain but little of the data, and nearly all the patterns will be informative with respect to C1 ; thus n2 will be unacceptably large. Thus in practice we may need to run the overall boosting procedure a few times, adjusting n1 in order to use the full training set and, if possible, get roughly equal partitions of the training set. A number of simple heuristics can be used to improve the partitioning of the training set as well (Computer exercise ??). The above boosting procedure can be applied recursively to the component classifiers themselves, giving a 9-component or even 27-component full classifier. In this way, a very low training error can be achieved, even a vanishing training error if the problem is separable. AdaBoost There are a number of variations on basic boosting. The most popular, AdaBoost -- from "adaptive" boosting -- allows the designer to continue adding weak learners until some desired low training error has been achieved. In AdaBoost each training pattern receives a weight which determines its probability of being selected for a training set for an individual component classifier. If a training pattern is accurately 28 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING x2 x1 n = 27 x2 component classifiers R 2 C1 x2 R 2 R 1 R 1 n1 = 15 final classification by voting x1 x2 R 2 n2 = 8 x1 n3 = 4 R 2 x1 C2 x2 R 1 C3 R 1 x1 Figure 9.6: A two-dimensional two-category classification task is shown at the top. The middle row shows three component (linear) classifiers Ck trained by LMS algorithm (Chap. ??), where their training patterns were chosen through the basic boosting procedure. The final classification is given by the voting of the three component classifiers, and yields a nonlinear decision boundary, as shown at the bottom. Given that the component classifiers are weak learners (i.e., each can learn a training set better than chance), then the ensemble classifier will have a lower training error on the full training set D than does any single component classifier. classified, then its chance of being used again in a subsequent component classifier is reduced; conversely, if the pattern is not accurately classified, then its chance of being used again is raised. In this way, AdaBoost "focuses in" on the informative or "difficult" patterns. Specifically, we initialize these weights across the training set to to be uniform. On each iteration k, we draw a training set at random according to these weights, and train component classifier Ck on the patterns selected. Next we increase weights of training patterns misclassified by Ck and decrease weights of the patterns correctly classified by Ck . Patterns chosen according to this new distribution are used to train the next classifier, Ck+1 , and the process is iterated. We let the patterns and their labels in D be denoted xi and yi , respectively and let Wk (i) be the kth (discrete) distribution over all these training samples. The AdaBoost phe training error will decrease, as given by Eq. 37. It is often found that the test error decreases in boosted systems as well, as shown in red. some applications, however, the patterns are unlabeled. We shall return in Chap. ?? to the problem of learning when no labels are available but here we assume there exists some (possibly costly) way of labeling any pattern. Our current challenge is thus to determine which unlabeled patterns would be most informative (i.e., improve the classifier the most) if they were labeled and used as training patterns. These query are the patterns we will present as a query to an oracle -- a teacher who can label, without error, any pattern. This approach is called variously learning with queries, oracle active learning or interactive learning and is a special case of a resampling technique. Learning with queries might be appropriate, for example, when we want to design a classifier for handwritten numerals using unlabeled pixel images scanned from documents from a corpus too large for us to label every pattern. We could start by randomly selecting some patterns, presenting them to an oracle, and then training the classifier with the returned labels. We then use learning with queries to select unlabeled patterns from our set to present to a human (the oracle) for labeling. Informally, we would expect the most valuable patterns would be near the decision boundaries. More generally we begin with a preliminary, weak classifier that has been developed with a small set of labeled samples. There are two related methods for then selecting an informative pattern, i.e., a pattern for which the current classifier is least certain. confidence In confidence based query selection the classifier computes discriminant functions gi (x) based query for the c categories, i = 1, . . . , c. An informative pattern x is one for which the two selection largest discriminant functions have nearly the same value; such patterns lie near the current decision boundaries. Several search heuristics can be used to find such points efficiently (Problem 30). voting The second method, voting based or committee based query selection, is similar to based query the previous method but is applicable to multiclassifier systems, that is, ones comprisselection ing several component classifiers (Sect. 9.7). Each unlabeled pattern is presented to 9.5. RESAMPLING FOR CLASSIFIER DESIGN 31 each of the k component classifiers; the pattern that yields the greatest disagreement among the k resulting category labels is considered the most informative pattern, and is thus presented as a query to the oracle. Voting based query selection can be used even if the component classifiers do not provide analog discriminant functions, for instance decision trees, rule-based classifiers or simple k-nearest neighbor classifiers. In both confidence based and voting based methods, the pattern labeled by the oracle is then used for training the classifier in the traditional way. (We shall return in Sect. 9.7 to training an ensemble of classifiers.) Clearly such learning with queries does not directly exploit information about the prior distribution of the patterns. In particular, in most problems the distributions of query patterns will be large near the final decision boundaries (where patterns are informative) rather than at the region of highest prior probability (where they are typically less informative), as illustrated in Fig. 9.8. One benefit of learning with queries is that we need not guess the form of the underlying distribution, but can instead use non-parametric techniques, such as nearest-neighbor classification, that allow the decision boundary to be found directly. If there is not a large set of unlabeled samples available for queries, we can nevertheless exploit learning with queries if there is a way to generate query patterns. Suppose we have a only small set of labeled handwritten characters. Suppose too we have image processing algorithms for altering these images to generate new, surrogate patterns for queries to an oracle. For instance the pixel images might be rotated, scaled, sheared, be subject to random pixel noise, or have their lines thinned. Further, we might be able to generate new patterns "in between" two labeled patterns by interpolating or somehow mixing them in a domain-specific way. With such generated query patterns the classifier can explore regions of the feature space about which it is least confident (Fig. 9.8). 9.5.4 Arcing, learning with queries, bias and variance In Chap. ?? and many other places, we have stressed the need for training a classifier on samples drawn from the distribution on which it will be tested. Resampling in general, and learning with queries in particular, seem to violate this recommendation. Why can a classifier trained on a strongly weighted distribution of data be expected to do well -- or better! -- than one trained on the i.i.d. sample? Why doesn't resampling lead to worse performance, to the extent that the resampled distribution differs from the i.i.d. one? Indeed, if we were to take a model of the true distribution and train it with a highly skewed distribution obtained by learning with queries, the final classifier accuracy might be unacceptably low. Consider, however, two interrelated points about resampling methods and altered distributions. The first is that resampling methods are generally used with techniques that do not attempt to model or fit the full category distributions. Thus even if we suspect the prior distributions for two categories are Gaussian, we might use a non-parametric method such as nearest neighbor, radial basis function, or RCE classifiers when using learning with queries. Thus in learning with queries we are not fitting parameters in a model, as described in Chap. ??, but instead are seeking decision boundaries more directly. The second point is that as the number of component classifiers is increased, techniques such as general boosting and AdaBoost effectively broaden that class of implementable functions, as illustrated in Fig. 9.6. While the final classifier might indeed be characterized as parametric, it is in an expanded space of parameters, one 32 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING x2 x2 i.i.d. samples EB = 0.02275 x2 x1 active learning E = 0.05001 x1 E = 0.02422 x1 Figure 9.8: Active learning can be used to create classifiers that are more accurate than ones using i.i.d. sampling. The figure at the top shows a two-dimensional problem with two equal circular Gaussian priors; the Bayes decision boundary is a straight line and the Bayes error EB = 0.02275. The bottom figure on the left shows a nearestneighbor classifier trained with n = 30 labeled points sampled i.i.d. from the true distributions. Note that most of these points are far from the decision boundary. The figure at the right illustrates active learning. The first four points were sampled near the extremes of the feature space. Subsequent query points were chosen midway between two points already used by the classifier, one randomly selected from each of the two categories. In this way, successive queries to the oracle "focused in" on the true decision boundary. The final generalization error of this classifier (0.02422) is lower than the one trained using i.i.d. samples (0.05001). larger than that of the first component classifier. In broad overview, resampling, boosting and related procedures are heuristic methods for adjusting the class of implementable decision functions. As such they allow the designer to try to "match" the final classifier to the problem by indirectly adjusting the bias and variance. The power of these methods is that they can be used with an arbitrary classification technique such as the Perceptron, which would otherwise prove extremely difficult to adjust to the complexity of an arbitrary problem. 9.6 Estimating and comparing classifiers There are at least two reasons for wanting to know the generalization rate of a classifier on a given problem. One is to see if the classifier performs well enough to be useful; another is to compare its performance with that of a competing design. Estimating the final generalization performance invariably requires making assumptions about the classifier or the problem or both, and can fail if the assumptions are not valid. We should stress, then, that all the following methods are heuristic. Indeed, if there were a foolproof method for choosing which of two classifiers would generalize better on an arbitrary new problem, we could incorporate such a method into the learning and violate the No Free Lunch Theorem. Occasionally our assumptions are explicit (as in parametric models), but more often than not they are implicit and difficult to identify or relate to the final estimation (as in empirical methods). 9.6. ESTIMATING AND COMPARING CLASSIFIERS 33 9.6.1 Parametric models One approach to estimating the generalization rate is to compute it from the assumed parametric model. For example, in the two-class multivariate normal case, we might estimate the probability of error using the Bhattacharyya or Chernoff bounds (Chap ??), substituting estimates of the means and the covariance matrix for the unknown parameters. However, there are three problems with this approach. First, such an error estimate is often overly optimistic; characteristics that make the training samples peculiar or unrepresentative will not be revealed. Second, we should always suspect the validity of an assumed parametric model; a performance evaluation based on the same model cannot be believed unless the evaluation is unfavorable. Finally, in more general situations where the distributions are not simple it is very difficult to compute the error rate exactly, even if the probabilistic structure is known completely. 9.6.2 Cross validation In cross validation we randomly split the set of labeled training samples D into two parts: one is used as the traditional training set for adjusting model parameters in the classifier. The other set -- the validation set -- is used to estimate the generalization error. Since our ultimate goal is low generalization error, we train the classifier until we reach a minimum of this validation error, as sketched in Fig. 9.9. It is essential that the validation (or the test) set not include points used for training the parameters in the classifier -- a methodological error known as "testing on the training set." E validation set va a lid tio n trainin g Figure 9.9: In cross validation, the data set D is split into two parts. The first (e.g., 90% of the patterns) is used as a standard training set for setting free parameters in the classifier model; the other (e.g., 10%) is the validation set and is meant to represent the full generalization task. For most problems, the training error decreases monotonically during training, as shown in black. Typically, the error on the validation set decreases, but then increases, an indication that the classifier may be overfitting the training data. In cross validation, training or parameter adjustment is stopped at the first minimum of the validation error. Cross validation can be applied to virtually every classification method, where the specific form of learning or parameter adjustment depends upon the general training A related but less obvious problem arises when a classifier undergoes a long series of refinements guided by the results of repeated testing on the same test data. This form of "training on the test data" often escapes attention until new test samples are obtained. stop training here amount of training, parameter adjustment 34 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING m-fold cross validation anti-cross validation method. For example, in neural networks of a fixed topology (Chap. ??), the amount of training is the number of epochs or presentations of the training set. Alternatively, the number of hidden units can be set via cross validation. Likewise, the width of the Gaussian window in Parzen windows (Chap. ??), and an optimal value of k in the k-nearest neighbor classifier (Chap. ??) can be set by cross validation. Cross validation is heuristic and need not (indeed cannot) give improved classifiers in every case. Nevertheless, it is extremely simple and for many real-world problems is found to improve generalization accuracy. There are several heuristics for choosing the portion of D to be used as a validation set (0 < < 1). Nearly always, a smaller portion of the data should be used as validation set ( < 0.5) because the validation set is used merely to set a single global property of the classifier (i.e., when to stop adjusting parameters) rather than the large number of classifier parameters learned using the training set. If a classifier has a large number of free parameters or degrees of freedom, then a larger portion of D should be used as a training set, i.e., should be reduced. A traditional default is to split the data with = 0.1, which has proven effective in many applications. Finally, when the number of degrees of freedom in the classifier is small compared to the number of training points, the predicted generalization error is relatively insensitive to the choice of . A simple generalization of the above method is m-fold cross validation. Here the training set is randomly divided into m disjoint sets of equal size n/m, where n is again the total number of patterns in D. The classifier is trained m times, each time with a different set held out as a validation set. The estimated performance is the mean of these m errors. In the limit where m = n, the method is in effect the leave-one-out approach to be discussed in Sect. 9.6.3. We emphasize that cross validation is a heuristic and need not work on every problem. Indeed, there are problems for which anti-cross validation is effective -- halting the adjustment of parameters when the validation error is the first local maximum. As such, in any particular problem designers must be prepared to explore different values of , and possibly abandon the use of cross validation altogether if performance cannot be improved (Computer exercise 5). Cross validation is, at base, an empirical approach that tests the classifier experimentally. Once we train a classifier using cross validation, the validation error gives an estimate of the accuracy of the final classifier on the unknown test set. If the true but unknown error rate of the classifier is p, and if k of the n independent, randomly drawn test samples are misclassified, then k has the binomial distribution P (k) = n pk (1 - p)n -k . k (38) Thus, the fraction of test samples misclassified is exactly the maximum likelihood estimate for p (Problem 39): k . (39) n The properties of this estimate for the parameter p of a binomial distribution are well known. In particular, Fig. 9.10 shows 95% confidence intervals as a function of p and n . For a given value of p, the probability is 0.95 that the true value of p lies ^ ^ in the interval between the lower and upper curves marked by the number n of test samples (Problem 36). These curves show that unless n is fairly large, the maximum likelihood estimate must be interpreted with caution. For example, if no errors are made on 50 test samples, with probability 0.95 the true error rate is between zero and p= ^ 9.6. ESTIMATING AND COMPARING CLASSIFIERS 35 8%. The classifier would have to make no errors on more than 250 test samples to be reasonably sure that the true error rate is below 2%. p 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 10 150 2 0 3 0 5 00 1 50 2 000 1 00 10250 0 10 0 5 0 3 0 2 5 1 10 ^ p 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Figure 9.10: The 95% confidence intervals for a given estimated error probability p ^ can be derived from a binomial distribution of Eq. 38. For each value of p, the true ^ probability has a 95% chance of lying between the curves marked by the number of test samples n . The larger the number of test samples, the more precise the estimate of the true probability and hence the smaller the 95% confidence interval. 9.6.3 Jackknife and bootstrap estimation of classification accuracy A method for comparing classifiers closely related to cross validation is to use the jackknife or bootstrap estimation procedures (Sects. 9.4.1 & 9.4.2). The application of the jackknife approach to classification is straightforward. We estimate the accuracy of a given algorithm by training the classifier n separate times, each time using the training set D from which a different single training point has been deleted. This is merely the m = n limit of m-fold cross validation. Each resulting classifier is tested on the single deleted point and the jackknife estimate of the accuracy is then simply the mean of these leave-one-out accuracies. Here the computational complexity may be very high, especially for large n (Problem 28). The jackknife, in particular, generally gives good estimates, since each of the the n classifiers is quite similar to the classifier being tested (differing solely due to a single training point). Likewise, the jackknife estimate of the variance of this estimate is given by a simple generalization of Eq. 32. A particular benefit of the jackknife approach is that it can provide measures of confidence or statistical significance in the comparison between two classifier designs. Suppose trained classifier C1 has an accuracy of 80% while C2 has accuracy of 85%, as estimated by the jackknife procedure. Is C2 really better than C1 ? To answer this, we calculate the jackknife estimate of the variance of the classification accuracies and use traditional hypothesis testing to see if C1 's apparent superiority is statistically significant (Fig. 9.11). There are several ways to generalize the bootstrap method to the problem of es- 36 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING probability a leave-one-out replication has a particular accuracy C1 C2 accuracy (%) 70 75 80 85 90 95 100 Figure 9.11: Jackknife estimation can be used to compare the accuracies of classifiers. The jackknife estimate of classifiers C1 and C2 are 80% and 85%, and full widths (twice the square root of the jackknife estimate of the variances) are 12% and 15%, as shown by the bars at the bottom. In this case, traditional hypothesis testing could show that the difference is not statistically significant at some confidence level. timating the accuracy of a classifier. One of the simplest approaches is to train B classifiers, each with a different bootstrap data set, and test on other bootstrap data sets. The bootstrap estimate of the classifier accuracy is simply the mean of these bootstrap accuracies. In practice, the high computational complexity of bootstrap estimation of classifier accuracy is rarely worth possible improvements in that estimate. In Sect. 9.5.1 we shall discuss bagging, a useful modification of bootstrap estimation. 9.6.4 Maximum-likelihood model comparison ML-II Recall first the maximum-likelihood parameter estimation methods discussed in Chap. ??. ^ Given a model with unknown parameter vector , we find the value which maxi^ Maximum-likelihood model mizes the probability of the training data, i.e., p(D |). comparison or maximum-likelihood model selection -- sometimes called ML-II -- is a direct generalization of those techniques. The goal here is to choose the model that best explains the training data, in a way that will become clear below. We again let hi H represent a candidate hypothesis or model (assumed discrete for simplicity), and D the training data. The posterior probability of any given model is given by Bayes' rule: P (D|hi )P (hi ) P (D|hi )P (hi ), p(D) P (hi |D) = (40) evidence where we will rarely need the normalizing factor p(D). The data-dependent term, P (D|hi ), is the evidence for hi ; the second term, P (hi ), is our subjective prior over the space of hypotheses -- it rates our confidence in different models even before the data arrive. In practice, the data-dependent term dominates in Eq. 40, and hence the priors P (hi ) are often neglected in the computation. In maximum-likelihood model comparison, we find the maximum likelihood parameters for each of the candidate models, calculate the resulting likelihoods, and select the model with the largest such likelihood in Eq. 40 (Fig. 9.12). 9.6. ESTIMATING AND COMPARING CLASSIFIERS evidence P(D| h3) 37 P(D|h2) P(D|h1) D D0 Figure 9.12: The evidence (i.e., probability of generating different data sets given a model) is shown for three models of different expressive power or complexity. Model h1 is the most expressive, since with different values of its parameters the model can fit a wide range of data sets. Model h3 is the most restrictive of the three. If the actual data observed is D0 , then maximum-likelihood model selection states that we should choose h2 , which has the highest evidence. Model h2 "matches" this particular data set better than do the other two models, and should be selected. 9.6.5 Bayesian model comparison Bayesian model comparison uses the full information over priors when computing posterior probabilities in Eq. 40. In particular, the evidence for a particular hypothesis is an integral, P (D|hi ) = p(D|, hi )p(|D, hi )d, (41) where as before describes the parameters in the candidate model. It is common for ^ the posterior P (|D, hi ) to be peaked at , and thus the evidence integral can often be approximated as: P (D|hi ) ^ ^ P (D|, hi ) p(|hi ) . best fit Occam factor likelihood Before the data arrive, model hi has some broad range of model parameters, denoted by 0 and shown in Fig. 9.13. After the data arrive, a smaller range is commensurate or compatible with D, denoted . The Occam factor in Eq. 42, ^ p(|hi ) = 0 param. vol. commensurate with D , param. vol. commensurate with any data (42) Occam factor Occam factor = = (43) is the ratio of two volumes in parameter space: 1) the volume that can account for data D and 2) the prior volume, accessible to the model without regard to D. The Occam factor has magnitude less than 1.0; it is simply the factor by which the hypothesis space collapses by the presence of data. The more the training data, the smaller the range of parameters that are commensurate with it, and thus the greater this collapse in the parameter space and the larger the Occam factor (Fig. 9.13). 38 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING p(|D,hi) p(|hi) ^ 0 Figure 9.13: In the absence of training data, a particular model h has available a large range of possible values of its parameters, denoted 0 . In the presence of a particular training set D, a smaller range is available. The Occam factor, /0 , measures the fractional decrease in the volume of the model's parameter space due to the presence of training data D. In practice, the Occam factor can be calculated fairly easily if the evidence is approximated as a k-dimensional Gaussian, centered on ^ the maximum-likelihood value . Naturally, once the posteriors for different models have been calculated by Eq. 42 & 40, we select the single one having the highest such posterior. (Ironically, the Bayesian model selection procedure is itself not truly Bayesian, since a Bayesian procedure would average over all possible models when making a decision.) The evidence for hi , i.e., P (D|hi ), was ignored in a maximum-likelihood setting ^ of parameters ; nevertheless it is the central term in our comparison of models. As mentioned, in practice the evidence term in Eq. 40 dominates the prior term, and it is traditional to ignore such priors, which are often highly subjective or problematic anyway (Problem 38, Computer exercise 7). This procedure represents an inherent bias towards simple models (small ); models that are overly complex (large ) are automatically self-penalizing where "overly complex" is a data-dependent concept. In the general case, the full integral of Eq. 41 is too difficult to calculate analytically or even numerically. Nevertheless, if is k-dimensional and the posterior can be assumed to be a Gaussian, then the Occam factor can be calculated directly (Problem 37), yielding: P (D|hi ) ^ ^ P (D|, hi ) p(|hi )(2)k/2 |H|-1/2 . best fit likelihood where 2 lnp(|D, hi ) (45) 2 is a Hessian matrix -- a matrix of second-order derivatives -- and measures how ^ "peaked" the posterior is around the value . Note that this Gaussian approximation does not rely on the fact that the underlying model of the distribution of the data in feature space is or is not Gaussian. Rather, it is based on the assumption that the evidence distribution arises from a large number of independent uncorrelated processes and is governed by the Law of Large Numbers. The integration inherent H= Occam factor (44) 9.6. ESTIMATING AND COMPARING CLASSIFIERS 39 in Bayesian methods is simplified using this Gaussian approximation to the evidence. Since calculating the needed Hessian via differentiation is nearly always simpler than a high-dimensional numerical integration, the Bayesian method of model selection is not at a severe computational disadvantage relative to its maximum likelihood counterpart. There may be a problem due to degeneracies in a model -- several parameters could be relabeled and leave the classification rule (and hence the likelihood) unchanged. The resulting degeneracy leads, in essence, to an "overcounting" which alters the effective volume in parameter space. Degeneracies are especially common in neural network models where the parameterization comprises many equivalent weights (Chap. ??). For such cases, we must multiply the right hand side of Eq. 42 by the ^ degeneracy of in order to scale the Occam factor, and thereby obtain the proper estimate of the evidence (Problem 42). Bayesian model selection and the No Free Lunch Theorem There seems to be a fundamental contradiction between two of the deepest ideas in the foundation of statistical pattern recognition. On the one hand, the No Free Lunch Theorem states that in the absence of prior information about the problem, there is no reason to prefer one classification algorithm over another. On the other hand, Bayesian model selection is theoretically well founded and seems to show how to reliably choose the better of two algorithms. Consider two "composite" algorithms -- algorithm A and algorithm B -- each of which employs two others (algorithm 1 and algorithm 2). For any problem, algorithm A uses Bayesian model selection and applies the "better" of algorithm 1 and algorithm 2. Algorithm B uses anti-Bayesian model selection and applies the "worse" of algorithm 1 and algorithm 2. It appears that algorithm A will reliably outperform algorithm B throughout the full class of problems -- in contradiction with Part 1 of the No Free Lunch Theorem. What is the resolution of this apparent contradiction? In Bayesian model selection we ignore the prior over the space of models, H, effectively assuming it is uniform. This assumption therefore does not take into account how those models correspond to underlying target functions, i.e., mappings from input to category labels. Accordingly, Bayesian model selection usually corresponds to a non-uniform prior over target functions. Moreover, depending on the arbitrary choice of model, the precise non-uniform prior will vary. In fact, this arbitrariness is very well-known in statistics, and good practitioners rarely apply the principle of indifference, assuming a uniform prior over models, as Bayesian model selection requires. Indeed, there are many "paradoxes" described in the statistics literature that arise from not being careful to have the prior over models be tailored to the choice of models (Problem 38). The No Free Lunch Theorem allows that for some particular non-uniform prior there may be a learning algorithm that gives better than chance -- or even optimal -- results. Apparently Bayesian model selection corresponds to non-uniform priors that seem to match many important real-world problems. principle of indifference 9.6.6 The problem-average error rate The examples we have given thus far suggest that the problem with having only a small number of samples is that the resulting classifier will not perform well on new data -- it will not generalize well. Thus, we expect the error rate to be a function of 40 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING the number n of training samples, typically decreasing to some minimum value as n apne the most promising model quickly and efficiently, we need then only train this model fully. One method is to use a classifier's performance on a relatively small training set to predict its performance on the ultimate large training set. Such performance is revealed in a type of learning curve in which the test error is plotted versus the size of the training set. Figure 9.15 shows the error rate on an independent test set after the classifier has been fully trained on n n points in the training set. (Note that in this form of learning curve the training error decreases monotonically and does not show "overtraining" evident in curves such as Fig. 9.9.) For many real-world problems, such learning curves decay monotonically and can be adequately described by a power-law function of the form Etest = a + b/n (49) 9.6. ESTIMATING AND COMPARING CLASSIFIERS Etest 1 43 0.8 0.6 0.4 0.2 2000 4000 6000 8000 10000 n' Figure 9.15: The test error for three classifiers, each fully trained on the given number n of training patterns, decreases in a typical monotonic power-law function. Notice that the rank order of the classifiers trained on n = 500 points differs from that for n = 10000 points and the asymptotic case. where a, b and 1 depend upon the task and the classifier. In the limit of very large n , the training error equals the test error, since both the training and test sets represent the full problem space. Thus we also model the training error as a power-law function, having the same asymptotic error, Etrain = a - c/n . (50) If the classifier is sufficiently powerful, this asymptotic error, a, is equal to the Bayes error. Furthermore, such a powerful classifier can learn perfectly the small training sets and thus the training error (measured on the n points) will vanish at small n , as shown in Fig. 9.16. E 0.8 0.6 0.4 0.2 Etest Etrain 2000 4000 6000 8000 10000 a n' Figure 9.16: Test and training error of a classifier fully trained on data subsets of different size n selected randomly from the full set D. At low n , the classifier can learn the category labels of the points perfectly, and thus the training error vanishes there. In the limit n , both training and test errors approach the same asymptotic value, a. If the classifier is sufficiently powerful and the training data is sampled i.i.d., then a is the Bayes error rate, EB . Now we seek to estimate the asymptotic error, a, from the training and test errors on small and intermediate size training sets. From Eqs. 49 & 50 we find: b c - n n b c + . n n Etest + Etrain Etest - Etrain = = 2a + (51) 44 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING If we make the assumption of = and b = c, then Eq. 51 reduces to Etest + Etrain Etest - Etrain = = 2a 2b . n (52) Given this assumption, it is a simple matter to measure the training and test errors for small and intermediate values of n , plot them on a log-log scale and estimate a, as shown in Fig. 9.17. Even if the approximations = and b = c do not hold in practice, the difference Etest - Etrain nevertheless still forms a straight line on a loglog plot and the sum, s = b + c, can be found from the height of the log[Etest + Etrain ] curve. The weighted sum cEtest + bEtrain will be a straight line for some empirically set values of b and c, constrained to obey b + c = s, enabling a to be estimated (Problem 41). Once a has been estimated for each in the set of candidate classifiers, the one with the lowest a is chosen and must be trained on the full training set D. log[E] log[b+c]log[2b] log[Etest +Etrain]log[2a] 1 log[Etest -Etrain] log[n'] Figure 9.17: If the test and training errors versus training set size obey the power-law functions of Eqs. 49 & 50, then the log of the sum and log of the difference of these errors are straight lines on a log-log plot. The estimate of the asymptotic error rate a is then simply related to the height of the log [Etest + Etrain ] line, as shown. 9.6.8 The capacity of a separating plane Consider the partitioning of a d-dimensional feature snctions. For instance, we might have four component classifiers -- a k-nearest-neighbor classifier, a decision tree, a neural network, and a rule-based system -- all addressing the same problem. While a neural network would provide analog values for each of the c categories, the rule-based system would give only a single category label (i.e., a oneof-c representation) and the k-nearest neighbor classifier would give only rank order of the categories. In order to integrate the information from the component classifiers we must convert the their outputs into discriminant values obeying the constraint of Eq. 55 so we can use the framework of Fig. 9.19. The simplest heuristics to this end are the following: Analog If the outputs of a component classifier are analog values gi , we can use the ~ softmax transformation, ~ egi gi = c . (60) ~ egi j=1 softmax to convert them to values gi . Rank order If the output is a rank order list, we assume the discriminant function is linearly proportional to the rank order of the item on the list. Of course, the resulting gi should then be properly normalized, and thus sum to 1.0. One-of-c If the output is a one-of-c representation, in which a single category is identified, we let gj = 1 for the j corresponding to the chosen category, and 0 otherwise. The table gives a simple illustration of these heuristics. Analog value g i gi ~ 0.4 0.158 0.6 0.193 0.9 0.260 0.3 0.143 0.2 0.129 0.1 0.111 Rank order gi 4/21 = 0.194 1/21 = 0.048 2/21 = 0.095 6/21 = 0.286 5/21 = 0.238 3/21 = 0.143 One-of-c gi gi ~ 0 0 1 1.0 0 0 0 0 0 0 0 0 gi ~ 3rd 6th 5th 1st 2nd 4th Once the outputs of the component classifiers have been converted to effective discriminant functions in this way, the component classifiers are themselves held fixed, but the gating network is trained as described in Eq. 59. This method is particularly useful when several highly trained component classifiers are pooled to form a single decision. 9.7. SUMMARY 49 Summary The No Free Lunch Theorem states that in the absence of prior information about the problem there are no reasons to prefer one learning algorithm or classifier model over another. Given that a finite set of feature values are used to distinguish the patterns under consideration, the Ugly Duckling Theorem states that the number of predicates shared by any two different patterns is constant, and does not depend upon the choice of the two objects. Together, these theorems highlight the need for insight into proper features and matching the algorithm to the data distribution -- there is no problem independent "best" learning or pattern recognition system nor feature representation. In short, formal theory and algorithms taken alone are not enough; pattern classification is an empirical subject. Two ways to describe the match between classifier and problem are the bias and variance. The bias measures the accuracy or quality of the match (high bias implies a poor match) and the variance measures the precision or specificity of the match (a high variance implies a weak match). The bias-variance dilemma states that learning procedures with increased flexibility to adapt to the training data (e.g., have more free parameters) tend to have lower bias but higher variance. In classification there is a non-linear relationship between bias and variance, and low variance tends to be more important for classification than low bias. If classifier models can be expressed as binary strings, the minimum description length principle states that the best model is the one with the minimum sum of such a model description and the training data with respect to that model. This general principle can be extended to cover modelspecific heuristics such as weight decay and pruning in neural networks, regularization in specific models, and so on. The basic insight underlying resampling techniques -- such as the bootstrap, jackknife, boosting, and bagging -- is that multiple data sets selected from a given data set enable the value and ranges of arbitrary statistics to be computed. In classification, boe for an adequate but not necessarily the optimal solution [87]. An empirical study showing that simple classifiers often work well can be found in [45]. The basic bias-variance decomposition and bias-variance dilemma [37] in regression appear in many statistics books [41, 16]. Geman et al. give a very clear presentation in the context of neural networks, but their discussion of classification is only indirectly related to their mathematical derivations for regression [35]. Our presentation for classification (zero-one loss) is based on Friedman's important paper [32]; the biasvariance decomposition has been explored in other non-quadratic cost functions as well [42]. Quenouille introduced the term jackknife in 1956 [76]. The theoretical foundations of resampling techniques are presented in Efron's clear book [28], and practical guides to their use include [36, 25]. Papers on bootstrap techniques for error estimation include [48]. Breiman has been particularly active in introducing and exploring resampling methods for estimation and classifier design, such as bagging [11] and general arcing [13]. AdaBoost [31] builds upon Schapire's analysis of the strength of weak learnability [82] and Freund's early work in the theory of learning [30]. Boosting in multicategory problems is a bit more subtle than in two-category problems we discussed [83]. Angluin's early work on queries for concept learning [3] was generalized to 9.7. PROBLEMS 51 active learning by Cohn and many others [18, 20] and is fundamental to some efforts in collecting large databases [93, 95, 94, 99]. Cross validation was introduced by Cover [23], and has been used extensively in conjunction with classification methods such as neural network. Estimates of error under different conditions include [34, 110, 103] and an excellent paper, which derives the size of test set needed for accurate estimation of classification accuracy is [39]. Bowyer and Phillip's book covers empirical evaluation techniques in computer vision [10], many of which apply to more general classification domains. The roots of maximum likelihood model selection stem from Bayes himself, but one of the earlier technical presentations is [38]. Interest in Bayesian model selection was revived in a series of papers by MacKay, whose primary interest was in applying the method to neural networks and interpolation [66, 69, 68, 67]. These model selection methods have subtle relationships to minimum description length (MDL) [78] and so-called maximum entropy approaches -- topics that would take us a bit beyond our central concerns. Cortes and her colleagues pioneered the analysis of learning curves for estimating the final quality of a classifier [22, 21]. No rate of convergence results can be made in the arbitrary case for finding the Bayes error, however [6]. Hughes [46] first carried out the required computations and obtained in Fig. 9.14. Extensive books on techniques for combining general classifiers include [55, 56] and for combining neural nets in particular include [86, 9]. Perrone and Cooper described the benefits that arise when expert classifiers disagree [73]. Dasarathy's book [24] has a nice mixture of theory (focusing more on sensor fusion than multiclassifier systems per se) and a collection of important original papers, including [43, 61, 96]. The simple heuristics for converting 1-of-c and rank order outputs to numerical values enabling integration were discussed in [63]. The hierarchical mixture of experts architecture and learning algorithm was first described in [51, 52]. A specific hierarchical multiclassifier technique is stacked generalization [107, 88, 89, 12], where for instance Gaussian kernel estimates at one level are pooled by yet other Gaussian kernels at a higher level. We have skipped over a great deal of work from the formal field of computational learning theory. Such work is generally preoccupied with convergence properties, asymptotics, and computational complexity, and usually relies on simplified or general models. Anthony and Biggs' short, clear and elegant book is an excellent introduction to the field [5]; broader texts include [49, 70, 53]. Perhaps the work from the field most useful for pattern recognition practitioners comes from weak learnability and boosting, mentioned above. The Probably approximately correct (PAC) framework, introduced by Valiant [98], has been very influential in computation learning theory, but has had only minor influence on the development of practical pattern recognition systems. A somewhat broader formulation, Probably almost Bayes (PAB), is described in [4]. The work by Vapnik and Chervonenkis on structural risk minimization [102], and later Vapnik-Chervonenkis (VC) theory [100, 101], derives (among other things) expected error bounds; it too has proven influential to the theory community. Alas, the bounds derived are somewhat loose in practice [19, 106]. Problems Section 9.2 1. One of the "conservations laws" for generalization states that the positive generalization performance of an algorithm in some learning situations must be offset 52 CHAPTER 9. ALGORITHM-INDEPENDENT MACHINE LEARNING by negative performance elsewhere. Consider a very simple learning algorithm that seems to contradict this law. For each test pattern, the prediction of the majority learning algorithm is merely the category most prevalent in the training data. (a) Show that averaged over all two-category problems of a given number of features that the off-training set error is 0.5. (b) Repeat (a) but for the minority learning algorithm, which always predicts the category label of the category least prevalent in the training data. (c) Use your answers from (a) & (b) to illustrate Part 2 of the No Free Lunch Theorem (Theorem 9.1). 2. Prove Part 1 of Theorem 9.1, i.e., that uniformly averaged over all target functions F , E1 (E|F, n) - E2 (E|F, n) = 0. Summarize and interpret this result in words. 3. Prove Part 2 of Theorem 9.1, i.e., for any fixed training set D, uniformly averaged over F , E1 (E|F, D) - E2 (E|F, D) = 0. Summarize and interpret this result in words. 4. Prove Part 3 of Theorem 9.1, i.e., uniformly averaged over all priors P (F ), E1 (E|n) - E2 (E|n) = 0. Summarize and interpret this result in words. 5. Prove Part 4 of Theorem 9.1, i.e., for any fixed training set D, uniformly averaged over P (F ), E1 (E|D) - E2 (E|D) = 0. Summarize and interpret this result in words. 6. Suppose you call an algorithm better if it performs slightly better than average over most problems, but very poorly on a small number of problems. Explain why the NFL Theorem does not preclude the existence of algorithms "better" in this way. 7. Show by simple counterexamples that the averaging in the different Parts of the No Free Lunch Theorem (Theorem 9.1) must be "uniformly." For instance imagine that the sampling distribution is a Dirac delta distribution centered on a single target function, and algorithm 1 guesses the target function exactly while algorithm 2 disagrees with algorithm 1 on every prediction. (a) Part 1 (b) Part 2 (c) Part 3 (d) Part 4 8. State how the No Free Lunch theorems imply that you cannot use training data to distinguish between new problems for which you generalize well from those for which you generalize poorly. Argue by reductio ad absurdum: that if you could distinguish such problems, then the No Free Lunch Theorem would be violated. n 9. Prove the relation r=0 n r = (1 + 1)n = 2n of Eq. 5 two ways: (a) State the polynomial expansion of (x + y)n as a summation of coefficients and powers of x and y. Then, make a simple substitution for x and y. n (b) Prove the relation by induction. Let K(n) = r=0 1 n r . First confirm that the relation is valid for n = 1, i.e., that K(1) = 2 . Now prove that K(n + 1) = 2K(n) for arbitrary n. 9.7. PROBLEMS 53 10. Consider the number of different Venn diagrams for k binary features f1 , . . . , fk . (Figure 9.2 shows several of these configurations for the k = 3 case.) (a) How many functionally different Venn diagrams exist for the k = 2 case? Sketch all of them. For each case, state how many different regions exis-means . . . . . . . . . . . . . . . . . 10.4.4 *Fuzzy k-means clustering . . . . . . . . . . . Algorithm 2: Fuzzy k-means . . . . . . . . . . . . . . 10.5 Unsupervised Bayesian Learning . . . . . . . . . . . 10.5.1 The Bayes Classifier . . . . . . . . . . . . . . 10.5.2 Learning the Parameter Vector . . . . . . . . Example 2: Unsupervised learning of Gaussian data . 10.5.3 Decision-Directed Approximation . . . . . . . 10.6 *Data Description and Clustering . . . . . . . . . . . 10.6.1 Similarity Measures . . . . . . . . . . . . . . 10.7 Criterion Functions for Clustering . . . . . . . . . . 10.7.1 The Sum-of-Squared-Error Criterion . . . . . 10.7.2 Related Minimum Variance Criteria . . . . . 10.7.3 Scattering Criteria . . . . . . . . . . . . . . . Example 3: Clustering criteria . . . . . . . . . . . . 10.8 *Iterative Optimization . . . . . . . . . . . . . . . . Algorithm 3: Basic minimum-squared-error . . . . . 10.9 Hierarchical Clustering . . . . . . . . . . . . . . . . . 10.9.1 Definitions . . . . . . . . . . . . . . . . . . . 10.9.2 Agglomerative Hierarchical Clustering . . . . Algorithm 4: Agglomerative hierarchical . . . . . . . 10.9.3 Stepwise-Optimal Hierarchical Clustering . . Algorithm 5: Stepwise optimal hierarchical clustering 10.9.4 Hierarchical Clustering and Induced Metrics . 10.10*The Problem of Validity . . . . . . . . . . . . . . . 10.11Competitive Learning . . . . . . . . . . . . . . . . . Algorithm 6: Competitive learning . . . . . . . . . . 10.11.1 Unknown number of clusters . . . . . . . . . Algorithm 7: leader-follower . . . . . . . . . . . . . . 1 3 3 4 6 7 8 9 11 13 13 14 15 17 17 18 21 23 24 25 29 29 30 31 33 35 36 37 37 39 39 41 42 43 43 45 47 48 48 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 CONTENTS 10.11.2 Adaptive Resonance . . . . . . . . . . . . . . . . . . . . . . . . 10.12*Graph Theoretic Methods . . . . . . . . . . . . . . . . . . . . . . . . 10.13Component analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.13.1 Principal component analysis (PCA) . . . . . . . . . . . . . . . 10.13.2 Non-linear component analysis . . . . . . . . . . . . . . . . . . 10.13.3 *Independent component analysis (ICA) . . . . . . . . . . . . . 10.14Low-Dimensional Representations and Multidimensional Scaling (MDS) 10.14.1 Self-organizing feature maps . . . . . . . . . . . . . . . . . . . . 10.14.2 Clustering and Dimensionality Reduction . . . . . . . . . . . . Algorithm 8: Hierarchical dimensionality reduction . . . . . . . . . . . Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliographical and Historical Remarks . . . . . . . . . . . . . . . . . . . . Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Computer exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 51 53 53 54 55 58 61 65 66 66 68 68 79 84 87 Chapter 10 Unsupervised Learning and Clustering 10.1 Introduction labeled by their use labeled U to be supervised.category membership. Procedures thatunsupervised samples are said Now we shall investigate a number of procedures, which use unlabeled samples. That is, we shall see what can be done when all one has is a collection of samples without being told their category. One might wonder why anyone is interested in such an unpromising problem, and whether or not it is possible even in principle to learn anything of value from unlabeled samples. There are at least five basic reasons for interest in unsupervised procedures. First, collecting and labeling a large set of sample patterns can be surprisingly costly. For instance, recorded speech is virtually free, but accurately labeling the speech -- marking what word or phoneme is being uttered at each instant -- can be very expensive and time consuming. If a classifier can be crudely designed on a small set of labeled samples, and then "tuned up" by allowing it to run without supervision on a large, unlabeled set, much time and trouble can be saved. Second, one might wish to proceed in the reverse direction: train with large amounts of (less expensive) unlabeled data, and only then use supervision to label the groupings found. This may be appropriate for large "data mining" applications where the contents of a large database are not known beforehand. Third, in many applications the characteristics of the patterns can change slowly with time, for example in automated food classification as the seasons change. If these changes can be tracked by a classifier running in an unsupervised mode, improved performance can be achieved. Fourth, we can use unsupervised methods to find features, that will then be useful for categorization. There are unsupervised methods that represent a form of data-dependent "smart preprocessing" or "smart feature extraction." Lastly, in the early stages of an investigation it may be valuable to gain some insight into the nature or structure of the data. The discovery of distinct subclasses or similarities among patterns or of major departures from expected characteristics may suggest we significantly alter our 3 ntil now we have assumed that the training samples used to design a classifier were 4 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING approach to designing the classifier. The answer to the question of whether or not it is possible in principle to learn anything from unlabeled data depends upon the assumptions one is willing to accept -- theorems can not be proved without premises. We shall begin with the very restrictive assumption that the functional forms for the underlying probability densities are known, and that the only thing that must be learned is the value of an unknown parameter vector. Interestingly enough, the formal solution to this problem will turn out to be almost identical to the solution for the problem of supervised learning given in Chap. ??. Unfortunately, in the unsupervised case the solution suffers from the usual problems associated with parametric assumptions without providing any of the benefits of computational simplicity. This will lead us to various attempts to reformulate the problem as one of partitioning the data into subgroups or clusters. While some of the resulting clustering procedures have no known significant theoretical properties, they are still among the more useful tools for pattern recognition problems. 10.2 Mixture Densities and Identifiability We begin by assuming that we know the complete probability structure for the problem with the sole exception of the values of some parameters. To be more specific, we make the following assumptions: 1. The samples come from a known number c of classes. 2. The prior probabilities P (j ) for each class are known, j = 1, . . . , c. 3. The forms for the class-conditional probability densities p(x|j , j ) are known, j = 1, . . . , c. 4. The values for the c parameter vectors 1 , . . . , c are unknown. 5. The category labels are unknown. Samples are assumed to be obtained by selecting a state of nature j with probability P (j ) and then selecting an x according to the probability law p(x|j , j ). Thus, the probability density function for the samples is given by c p(x|) = j=1 p(x|j , j )P (j ), (1) component densities mixing parameters where = ( 1 , . . . , c ). For obvious reasons, a density function of this form is called a mixture density. The conditional densities p(x|j , j ) are called the component densities, and the prior probabilities P (j ) are called the mixing parameters. The mixing parameters can also be included among the unknown parameters, but for the moment we shall assume that only is unknown. Our basic goal will be to use samples drawn from this mixture density to estimate the unknown parameter vector . Once we know we can decompose the mixture into its components and use a Bayesian classifier on the derived densities, if indeed classification is our final goal. Before seeking explicit solutions to this problem, however, let us ask whether or not it is possible in principle to recover from the mixture. Suppose that we had an unlimited number of samples, and that we used one of the nonparametric methods of Chap. ?? to determine the value of p(x|) for every x. If 10.2. MIXTURE DENSITIES AND IDENTIFIABILITY 5 there is only one value of that will produce the observed values for p(x|), then a solution is at least possible in principle. However, if several different values of can produce the same values for p(x|), then there is no hope of obtaining a unique solution. These considerations lead us to the following definition: a density p(x|) is said to be identifiable if = implies that there exists an x such that p(x|) = p(x| ). Or put another way, a density p(x|) is not identifiable if we cannot recover a unique , even from an infinite amount of data. In the discouraging situation where we cannot infer any of the individual parameters (i.e., components of ), the density is completely unidentifiable. Note that the identifiability of is a property of the model, irrespective of any procedure we might use to determine its value. As one might expect, the study of unsupervised learning is greatly simplified if we restrict ourselves to identifiable mixtures. Fortunately, most mixtures of commonly encountered density functions are identifiable, as are most complex or high-dimensional density functions encountered in real-world problems. Mixtures of discrete distributions are not always so obliging. As a simple example consider the case where x is binary and P (x|) is the mixture complete unidentifiability P (x|) = = 1 x 1 x 1 (1 - 1 )1-x + 2 (1 - 2 )1-x 2 2 1 (1 + 2 ) if x = 1 2 1 - 1 (1 + 2 ) if x = 0. 2 Suppose, for example, that we know for our data that P (x = 1|) = 0.6, and hence that P (x = 0|) = 0.4. Then we know the function P (x|), but we cannot determine , and hence cannot extract the component distributions. The most we can say is that 1 +2 = 1.2. Thus, here we have a case in which the mixture distribution is completely unidentifiable, and hence a case for which unsupervised learning is impossible in principle. Related situations may permit us to determine one or some parameters, but not all (Problem 3). This kind of problem commonly occurs with discrete distributions. If there are too many components in the mixture, there may be more unknowns than independent equations, and identifiability can be a serious problem. For the continuous case, the problems are less severe, although certain minor difficulties can arise due to the possibility of special cases. Thus, while it can be shown that mixtures of normal densities are usually identifiable, the parameters in the simple mixture density P (1 ) P (2 ) 1 1 p(x|) = exp - (x - 1 )2 + exp - (x - 2 )2 2 2 2 2 (2) cannot be uniquely identified if P (1 ) = P (2 ), for then 1 and 2 can be interchanged without affecting p(x|). To avoid such irritations, we shall acknowledge that identifiability can be a problem, but shall henceforth assume that the mixture densities we are working with are identifiable. Technically speaking, a distribution is not identifiable if we cannot determine the parameters without bias. We might guess their correct values, but such a guess would have to be biased in some way. 6 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING 10.3 Maximum-Likelihood Estimates Suppose now that we a (i ), and we obtain P (i |x, D) = p(x|i , D)P (i ) c j=1 . (32) p(x|j , D)P (j ) Central to the Bayesian approach is the introduction of the unknown parameter vector via 18 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING p(x|i , D) = = p(x, |i , D) d p(x|, i , D)p(|i , D) d. (33) Since the selection of x is independent of the samples, we have p(x|, i , D) = p(x|i , i ). Similarly, since knowledge of the state of nature when x is selected tells us nothing about the distribution of , we have p(|i , D) = p(|D), and thus P (x|i , D) = p(x|i , i )p(|D) d. (34) That is, our best estimate of p(x|i ) is obtained by averaging p(x|i , i ) over i . Whether or not this is a good estimate depends on the nature of p(|D), and thus our attention turns at last to that density. 10.5.2 Learning the Parameter Vector p(D|)p() , p(D|)p() d We can use Bayes' formula to write p(|D) = (35) where the independence of the samples yields the likelihood n p(D|) = k=1 p(xk |). (36) Alternatively, letting Dn denote the set of n samples, we can write Eq. 35 in the recursive form p(|Dn ) = p(xn |)p(| Dn-1 ) . p(xn |)p(|Dn-1 ) d (37) These are the basic equations for unsupervised Bayesian learning. Equation 35 emphasizes the relation between the Bayesian and the maximum-likelihood solutions. If p () is essentially uniform over the region where p(D|) peaks, then p(|D) peaks ^ at the same place. If the only significant peak occurs at = , and if the peak is very sharp, then Eqs. 32 & 34 yield ^ p(x|i , D) p(x|i , ) and P (i |x, D) ^ p(x|i , i )P (i ) c j=1 (38) . (39) ^ p(x|j , j )P (j ) That is, these conditions justify the use of the maximum-likelihood estimate as if it were the true value of in designing the Bayes classifier. As we saw in Sect. ??.??, in the limit of large amounts of data, maximum-likelihood and the Bayes methods will agree (or nearly agree). While many small sample size 10.5. UNSUPERVISED BAYESIAN LEARNING p(D|) 19 ^ Figure 10.4: In a highly skewed or multiple peak posterior distribution such as illus^ trated here, the maximum-likelihood solution will yield a density very different from a Bayesian solution, which requires the integration over the full range of parameter space . problems they will agree, there exist small problems where the approximations are poor (Fig. 10.4). As we saw in the analogous case in supervised learning whether one chooses to use the maximum-likelihood or the Bayes method depends not only on how confident one is of the prior distributions, but also on computational considerations; maximum-likelihood techniques are often easier to implement than Bayesian ones. Of course, if p() has been obtained by supervised learning using a large set of labeled samples, it will be far from uniform, and it will have a dominant influence on p(|Dn ) when n is small. Equation 37 shows how the observation of an additional unlabeled sample modifies our opinion about the true value of , and emphasizes the ideas of updating and learning. If the mixture density p(x|) is identifiable, then each additional sample tends to sharpen p(|Dn ), and under fairly general conditions p(|Dn ) can be shown to converge (in probability) to a Dirac delta function centered at the true value of (Problem 8). Thus, even though we do not know the categories of the samples, identifiability assures us that we can learn the unknown parameter vector , and thereby learn the component densities p(x|i , ). This, then, is the formal Bayesian solution to the problem of unsupervised learning. In retrospect, the fact that unsupervised learning of the parameters of a mixture density is so similar to supervised learning of the parameters of a component density is not at all surprising. Indeed, if the component density is itself a mixture, there would appear to be no essential difference between the two problems. There are, however, some significant differences between supervised and unsupervised learning. One of the major differences concerns the issue of identifiability. With supervised learning, the lack of identifiabilitanalytically simple results. Exact solutions for even the simplest nontrivial examples lead to computational requirements that grow exponentially with the number of samples (Problem ??). The problem of unsupervised learning is too important to abandon just because exact solutions are hard to find, however, and numerous procedures for obtaining approximate solutions have been suggested. Since the important difference between supervised and unsupervised learning is the presence or absence of labels for the samples, an obvious approach to unsupervised learning is to use the prior information to design a classifier and to use the decisions of this classifier to label the samples. This is called the decision-directed approach to unsupervised learning, and it is subject to many variations. It can be applied sequentially on-line by updating the classifier each time an unlabeled sample is classified. Alternatively, it can be applied in parallel (batch mode) by waiting until all n samples are classified before updating the classifier. If desired, this process can be repeated until no changes occur in the way the samples are labeled. Various heuristics can be introduced to make the extent of any corrections depend upon the confidence of the classification decision. There are some obvious dangers associated with the decision-directed approach. If the initial classifier is not reasonably good, or if an unfortunate sequence of samples is encountered, the errors in classifying the unlabeled samples can drive the classifier the wrong way, resulting in a solution corresponding roughly to one of the lesser peaks of the likelihood function. Even if the initial classifier is optimal, in general the resulting labeling will not be the same as the true class membership; the act of classification will exclude samples from the tails of the desired distribution, and will include samples from the tails of the other distributions. Thus, if there is significant overlap between the component densities, one can expect biased estimates and less than optimal results. Despite these drawbacks, the simplicity of decision-directed procedures makes the Bayesian approach computationally feasible, and a flawed solution is often better than none. If conditions are favorable, performance that is nearly optimal can be achieved at far less computational expense. In practice it is found that most of these procedures work well if the parametric assumptions are valid, if there is little overlap between the component densities, and if the initial classifier design is at least roughly correct (Computer exercise 7). 24 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING 10.6 *Data Description and Clustering Let us reconsider our original problem of learning something of use from a set of unlabeled samples. Viewed geometrically, these samples may form clouds of points in a d-dimensional space. Suppose that we knew that these points came from a single normal distribution. Then the most we could learn form the data would be contained in the sufficient statistics -- the sample mean and the sample covariance matrix. In essence, these statistics constitute a compact description of the data. The sample mean locates the center of gravity of the cloud; it can be thought of as the single point m that best represents all of the data in the sense of minimizing the sum of squared distances from m to the samples. The sample covariance matrix describes the amount the data scatters along various directions. If the data points are actually normally distributed, then the cloud has a simple hyperellipsoidal shape, and the sample mean tends to fall in the region where the samples are most densely concentrated. Of course, if the samples are not normally distributed, these statistics can give a very misleading description of the data. Figure 10.5 shows four different data sets that all have the same mean and covariance matrix. Obviously, second-order statistics are incapable of revealing all of the structure in an arbitrary set of data. Figure 10.5: These four data sets have identical sall directions. Clusters defined by Euclidean distance will be invariant to translations or rotations in feature space -- rigid-body motions of the data points. However, they will not be invariant to linear transformations in general, or to other transformations that distort the distance relationships. Thus, as Fig. 10.7 illustrates, a simple scaling of the coordinate axes can result in a different grouping of the data into clusters. Of course, this is of no concern for problems in which arbitrary rescaling is an unnatural 26 x2 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING x2 d0 = .3 1 1 x2 d0 = .1 1 d0 = .03 .8 .8 .8 .6 .6 .6 .4 .4 .4 .2 .2 .2 0 .2 .4 .6 .8 1 x1 0 .2 .4 .6 .8 1 x1 0 .2 .4 .6 .8 1 x1 Figure 10.6: The distance threshold affects the number and size of clusters. Lines are drawn between points closer than a distance d0 apart for three different values of d0 -- the smaller the value of d0 , the smaller and more numerous the clusters. or meaningless transformation. However, if clusters are to mean anything, they should be invariant to transformations natural to the problem. One way to achieve invariance is to normalize the data prior to clustering. For example, to obtain invariance to displacement and scale changes, one might translate and scale the axes so that all of the features have zero mean and unit variance -- standardize the data. To obtain invariance to rotation, one might rotate the axes so that they coincide with the eigenvectors of the sample covariance matrix. This transformation to principal components (Sect. 10.13.1) can be preceded and/or followed by normalization for scale. However, we should not conclude that this kind of normalization is necessarily desirable. Consider, for example, the matter of translating and whitening -- scaling the axes so that each feature has zero mean and unit variance. The rationale usually given for this normalization is that it prevents certain features from dominating distance calculations merely because they have large numerical values, much as we saw in networks trained with backpropagation (Sect. ??.??). Subtracting the mean and dividing by the standard deviation is an appropriate normalization if this spread of values is due to normal random variation; however, it can be quite inappropriate if the spread is due to the presence of subclasses (Fig. ??). Thus, this routine normalization may be less than helpful in the cases of greatest interest. Section ?? describes other ways to obtain invariance to scaling. Instead of scaling axes, we can change the metric in interesting ways. For instance, one broad class of distance metrics is of the form d 1/q d(x, x ) = k=1 |xk - xk | q , (44) Minkowski metric city block metric where q 1 is a selectable parameter -- the general Minkowski metric we considered in Chap. ??. Setting q = 2 gives the familiar Euclidean metric while setting q = 1 the Manhattan or city block metric -- the sum of the absolute distances along each of the d coordinate axes. Note that only q = 2 is invariant to an arbitrary rotation or In backpropagation, one of the goals for such preprocessing and scaling of data was to increase learning speed; in contrast, such preprocessing does not significantly affect the speed of these clustering algorithms. 10.6. *DATA DESCRIPTION AND CLUSTERING x2 1.6 27 x2 1 1.4 (.5 0 ) 0 2 .8 1.2 1 .6 .8 .4 .6 .2 .4 0 .2 .4 .6 .8 1 x1 .2 0 ( 2 .5 ) 0 0 .1 .2 .3 .4 .5 x1 x2 .5 .4 .3 .2 .1 0 .25 .5 .75 1 1.25 x1 1.5 1.75 2 Figure 10.7: Scaling axes affects the clusters in a minimum distance cluster method. The original data and minimum-distance clusters are shown in the upper left -- points in one cluster are shown in red, the other gray. When the vertical axis is expanded by a factor of 2.0 and the horizontal axis shrunk by a factor of 0.5, the clustering is altered (as shown at the right). Alternatively, if the vertical axis is shrunk by a factor of 0.5 and the horizontal axis expanded by a factor of 2.0, smaller more numerous clusters result (shown at the bottom). In both these scaled cases, the clusters differ from the original. translation in feature space. Another alternative is to use some kind of metric based on the data itself, such as the Mahalanobis distance. More generally, one can abandon the use of distance altogether and introduce a nonmetric similarity function s(x, x ) to compare two vectors x and x . Conventionally, this is a symmetric functions whose value is large when x and x are somehow "similar." For example, when the angle between two vectors is a meaningful measure of their similarity, then the normalized inner product xt x x x similarity function s(x, x ) = (45) 28 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING x2 x2 x1 x1 Figure 10.8: If the data fall into well-separated clusters (left), normalization by a whitening transform for the full data may reduce the separation, and hence be undesirable (right). Such a whitening normalization may be appropriate if the full data set arises from a single fundamental process (with noise), but inappropriate if there are several different processes, as shown here. may be an appropriate similarity function. This measure, which is the cosine of the angle between x and x , is invariant to rotation and dilation, though it is not invariant to translation and general linear transformations. When the features are binary valued (0 or 1), this similarity functions has a simple non-geometrical interpretation in terms of shared features or shared attributes. Let us say that a sample x possesses the ith attribute if xi = 1. Then xt x is merely the number of attributes possessed by both x and x , and x x = (xt xx t x )1/2 is the geometric mean of the number of attributes possessed by x and the number possessed by x . Thus, s(x, x ) is a measure of the relative possession of common attributes. Some simple variations are s(x, x ) = the fraction of attributes shared, and s(x, x ) = xt x , xt x + x t x - x t x (47) xt x , d (46) Tanimoto distance the ratio of the number of shared attributes to the number possessed by x or x . This latter measure (sometimes known as the Tanimoto coefficient or Tanimoto distance) is frequently encountered in the fields of information retrieval and biological taxonomy. Related measures of similarity arise in other applications, the variety of measures testifying to the diversity of problem domains (Computer exercise ??). Fundamental issues in measurement theory are involved in the use of any distance or similarity function. The calculation of the similarity between two vectors always involves combining the values of their components. Yet in many pattern recognition applications the components of the feature vector measure seemingly noncomparable quantities, such as meters and kilograms. Recall our example of classifying fish: how can one compare the lightness of the skin to the length or weight of the fish? Should the comparison depend on whether the length is measured in meters or inches? How does one treat vectors whose components have a mixture of nominal, ordinal, interval and ratio scales? Ultimately, there are rarely clear methodological answers to these questions. When a user selects a particular similarity function or normalizes the data in a particular way, information is introduced that gives the procedure meaning. We have given examples of some alternatives that have proved to be useful. (Competitive 10.7. CRITERION FUNCTIONS FOR CLUSTERING 29 learning, discussed in Sect. 10.11, is a popular decision directed clustering algorithm.) Beyond that we can do little more than alert the unwary to these pitfalls of clustering. Amidst all this discussion of clustering, we must not lose sight of the fact that often the clusters found will later be labeled (e.g., by resorting to a teacher or small number of labeled samples), and that the clusters can then be used for classification. In that case, the same similarity (or metric) should be used for classification as was used for forming the clusters (Computer exercise 8). 10.7 Criterion Functions for Clustering We have just consie favored under conditions where there may be unknown or irrelevant linear transformations of the data. Invariant Criteria It is not particularly hard to show that the eigenvalues 1 , . . . , d of S-1 SB are invariW ant under nonsingular linear transformations of the data (Problem ??). Indeed, these eigenvalues are the basic linear invariants of the scatter matrices. Their numerical values measure the ratio of between-cluster to within-cluster scatter in the direction of the eigenvectors, and partitions that yield large values are usually desirable. Of 10.7. CRITERION FUNCTIONS FOR CLUSTERING 33 course, as we pointed out in Sect. ??, the fact that the rank of SB can not exceed c-1 means that no more than c-1 of these eigenvalues can be nonzero. Nevertheless, good partitions are ones for which the nonzero eigenvalues are large. One can invent a great variety of invariant clustering criteria by composing appropriate functions of these eigenvalues. Some of these follow naturally from standard matrix operations. For example, since the trace of a matrix is the sum of its eigenvalues, one might elect to maximize the criterion function d trS-1 SB = W i=1 i . (64) By using the relation ST = SW + SB , one can derive the following invariant relatives of [trSW and |SW | (Problem 25): d Jf = trS-1 SW = T i=1 1 1 + i (65) and |SW | 1 = . |ST | 1 + i i=1 d (66) Since all of these criterion functions are invariant to linear transformations, the same is true of the partitions that extremize them. In the special case of two clusters, only one eigenvalue is nonzero, and all of these criteria yield the same clustering. However, when the samples are partitioned into more than two clusters, the optimal partitions, though often similar, need not be the same, as shown in Example 3. Example 3: Clustering criteria We can gain some intuition by considering these criteria applied to the following data set. sample 1 2 3 4 5 6 7 8 9 10 x1 -1.82 -0.38 -0.13 -1.17 -0.92 -1.69 0.33 -0.71 1.27 -0.16 x2 0.24 -0.39 0.16 0.44 0.16 -0.01 -0.17 -0.21 -0.39 -0.23 sample 11 12 13 14 15 16 17 18 19 20 x1 0.41 1.70 0.92 2.41 1.48 -0.34 0.83 0.62 -1.42 0.67 x2 0.91 0.48 -0.49 0.32 -0.23 1.88 0.23 0.81 -0.51 -0.55 All of the clusterings seem reasonable, and there is no strong argument to favor one over the others. For the case c = 2, the clusters minimizing the Je indeed tend to favor clusters of roughly equal numbers of points, as illustrated in Fig. 10.9; in contrast, Jd favors one large and one fairly small cluster. Since the full data set happens to be spread horizontally more than vertically, the eigenvalue in the horizontal direction is greater than that in the vertical direction. As such, the clusters are "stretched" 34 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING c=2 c=3 Je Jd Jf The clusters found by minimizing a criterion depends upon the criterion function as well as the assumed number of clusters. The sum-of-squared-error criterion Je (Eq. 49), the determinant criterion Jd (Eq. 63) and the more subtle trace criterion Jf (Eq. 65) were applied to the 20 points in the table with the assumption of c = 2 and c = 3 clusters. (Each point in the table is shown, with bounding boxes defined by -1.8 < x < 2.5 and -0.6 < y < 1.9.) horizontally somewhat. In general, the differences between the cluster criteria become less pronounced for large numbers of clusters. For the c = 3 case, for instance, the clusters depend only mildly upon the cluster criterion -- indeed, two of the clusterings are identical. With regard to the criterion function involving ST , note that ST does not depend on how the samples are partitioned into clusters. Thus, the clusterings that minimize |SW |/|ST | are exactly the same as the ones that minimize |SW |. If we rotate and scale the axes so that ST becomes the identity matrix, we see that minimizing tr[S-1 SW ] T is equivalent to minimizing the sum-of-squared-error criterion trSW after performing this normalization. Clearly, this criterion suffers from the very defects that we warned about in Sect. ??, and it is probably the least desirable of these criteria. One final warning about invariant criteria is in order. If different apparent clusters can be obtained by scaling the axes or by applying any other linear transformation, then all of these groupings will be exposed by invariant procedures. Thus, invariant criterion functions are more likely to possess multiple local extrema, and are correspondingly more difficult to optimize. The variety of the criterion functions we have discussed and the somewhat subtle differences between them should not be allowed to obscure their essential similarity. In every case the underlying model is that the samples form c fairly well separated clouds of points. The within-cluster scatter matrix SW is used to measure the compactness of these clouds, and the basic goal is to find the most compact grouping. While this approach has proved useful for many problems, it is not universally applicable. For example, it will not extract a very dense cluster embedded in the center of a diffuse cluster, or separate intertwined line-like clusters. For such cases one must devise other 10.8. *ITERATIVE OPTIMIZATION 35 criterion functions that are better matched to the structure present or being sought. 10.8 *Iterative Optimization Once a criterion function has been selected, clustering becomes a well-defined problem in discrete optimization: find those partitions of the set of samples that extremize the criterion function. Since the sample set is finite, there are only a finite number of possible partitions. Thus, in theory the clustering problem can always be solved by exhaustive enumeration. However, the computational complexity renders such an approach unthinkable for all but the simplest problems; there are approximately cn /c! ways of partitioning a set of n elements into c subsets, and this exponential growth with n is overwhelming (Problem 17). For example an exhaustive search for the best set of 5 clusters in 100 samples would require considering more than 1067 partitionings. Simply put, in most applications an exhaustive search is completely infeasible. The approach most frequently used in seeking optimal partitions is iterative optimization. The basic idea is to find some reasonable initial partition and to "move" samples from one group to another if such a move will improve the value of the criterion function. Like hill-climbing procedures in general, these approaches guarantee local but not global optimization. Different starting points can lead to different solutions, and one never knows whether or not the best solution has been found. Despite these limitations, the fact that the computational requirements are bearable makes this approach attractive. Let us consider the use of iterative improvement to minimize the sum-of-squarederror criterion Je , written as c Je = i=1 Ji , (67) where an effective error per cluster is defined to be Ji = xDi x - mi 2 (68) and the mean of each cluster is, as before, mi = 1 ni x. xDi (48) ^ Suppose that a sample x currently in cluster Di is tentatively moved to Dj . Then mj changes to m = mj + j and Jj increases to Jj ^ x - mj nj + 1 (69) = xDi x - m j 2 ^ + x - m j ^ x - mj nj + 1 2 2 = xDi x - mj - + nj (^ - mj ) x nj + 1 2 36 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING = Jj + nj ^ x - mj nj + 1 2 . (70) Under the assumption that ni = 1 (singleton clusters should not be destroyed), a similar calculation (Problem 29) shows that mi changes to m = m - i and Ji decreases to Ji = Ji - ni ^ x - mi 2 . ni - 1 (72) ^ x - mi ni - 1 (71) These equations greatly simplify the computation of the change in the criterion ^ function. The transfer of x from Di to Dj is advantageous if the decrease in Ji is greater than the increase in Jj . This is the case if ni ^ x - mi ni - 1 2 > nj ^ x - mj nj + 1 2 , (73) ^ which typically happens whenever x is closer to mj than mi . If reassignment is profitable, the greatest decrease in sum of squared error is obtained by selecting the ^ cluster for which nj /(nj + 1) x - mj 2 is minimum. This leads to the following clustering procedure: Algorithm 3 (Basic iterative minimum-squared-error clustering) 1 2 3 4 5 6 7 8 9 10 begin initialize n, c, m1 , m2 , . . . , mc ^ do randomly select a sample x; ^ ^ i arg min mi - x (classify x) i if ni = 1 then compute j = nj nj +1 nj nj -1 ^ x - mj ^ x - mi 2 2 j=i j=i ^ if k j for all j then transfer x to Dk recompute Je , mi , mk until no change in Je in n attempts return m1 , m2 , . . . , mc end A moment's consideration will show that this procedure is is essentially a sequential version of the k-means procedure (Algorithm 1) described in Sect. 10.4.3. Where the k-means procedure waits until all n samples have been reclassified before updating, the Basic Iterative Minimum-Squared-Error procedure updates after each sample is reclassified. It has been experimentally observed that this procedure is more susceptible to being trapped in local minima, and it has the further disadvantage of making the results depend on the order in which the candidates are selected. However, it is at least a stepwise optimal procedure, and it can be easily modified to apply to problems in which samples are acquired sequentially and clustering must be done on-line. One question that plagues all hill-climbing procedures is the choice of the starting point. Unfortunately, there is no simple, universally good solution to this problem. One approach is to select c samples randomly for the initial cluster centers, using them to partition the data on a minimum-distance basis. Repetition with different random selections can give some indication of the sensitivity of the solution to the 10.9. HIERARCHICAL CLUSTERING 37 starting point. Yet another approach is to find the c-cluster starting point from the solutions to the (c - a)-cluster problem. The solution for the one-cluster problem is the total sample mean; the starting point for the c-cluster problem can be the final means for the (c - a)-cluster problem plus the sample that is farthest from the nearest cluster center. This approach leads us directly to the so-called hierarchical clustering procedures, which are simple methods that can provide very good starting points for iterative optimization. 10.9 Hierarchical Clustering Up to now, our methods have formed disjoint clusters -- in computer science terminology, we would say that the data description is "flat." However, there are many times when clusters have subclusters, these have sub-subclusters, and so on. In biological taxonomy, for instance, kingdoms are split into phylums, which are split into subphylums, which are split into orders, and suborders, and families, and subfamilies, and genus and species, and so on, all the way to a particular individual organism. Thus we might have kingdom = animal, phylum = Chordata, subphylum = Vertebrata, class = Osteichthyes, subclass = Actinopterygii, order = Salmoniformes, family = Salmonidae, genus = Oncorhynchus, species = Oncorhynchus kisutch, and individual = the particular Coho salmon caught in my net. Organisms that lie in the animal kingdom -- such as a salmon and a moose -- share important attributes that are not present in organisms in the plant kingdom, such as redwood trees. In fact, this kind of hierarchical clustering permeates classifactory activities in the sciences. Thus we now turn to clustering methods which will lead to representations that are "hierarchical," rather than flat. 10.9.1 Definitions Let us consider a sequence of partitions of the n samples into c clusters. The first of these is a partition into n clusters, each cluster containing exactly one sample. The next is a partition into n - 1 clusters, the next a partition into n - 2, and so on until the nth, in which all the samples form one cluster. We shall say that we are at level k in the sequence when c = n - k + 1. Thus, level one corresponds to n clusters and level n to one cluster. Given any two samples x and x , at some level they will be grouped together in the same cluster. If the sequence has the property that whenever two samples are in the same cluster at level k they remain together at all higher levels, then the sequence is said to be a hierarchical clustering. The most natural representation of hierarchical clustering is a corresponding tree, called a dendrogram, which shows how the samples are grouped. Figure 10.10 shows a dendrogram for a simple problem involving eight samples. Level 1 shows the eight samples as singleton clusters. At level 2, samples x6 and x7 have been grouped to form a cluster, and they stay together at all subsequent levels. If it is possible to measure the similarity between clusters, then the dendrogram is usually drawn to scale to show the similarity between the clusters that are grouped. In Fig. 10.10, for example, the similarity between the two groups of samples that are merged at level 5 has a value of roughly 60. We shall see shortly how such similarity values can be obtained, but first note that the similarity values can be used to help determine whether groupings are natural or forced. If the similarity values for the levels are roughly evenly distributed throughout dendrogram 38 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING the range of possible values, then there is no principled argument that any particular number of clusters is better or "more natural" than another. Conversely, suppose that there is a unusually large gap between the similarity values for the levels corresponding to c = 3 and to c = 4 clusters. In such a case, one can argue that c = 3 is the most natural number of clusters (Problem 35). x 1 x2 x3 x 4 x5 x 6 x7 x8 Level 1 Level 4 Level 5 Level 6 Level 7 Level 8 Figure 10.10: A dendrogram can represent the results of hierarchical clustering algorithms. The vertical axis shows a generalized measure of similarity among clusters. Here, at level 1 all eight points lie in singleton clusters; each point in a cluster is highly similar to itself, of course. Points x6 and x7 happen to be the most similar, and are merged at level 2, and so forth. Another representation for hierarchical clustering is based on sets, in which each level of cluster may contain sets that are subclusters, as shown in Fig. 10.11. Yet another, textual, representation uses brackets, such as: {{x1 , {x2 , x3 }}, {{{x4 , x5 }, {x6 , x7 }}, x8 }}. While such representations may reveal the hierarchical structure of the data, they do not naturally represent the similarities quantitatively. For this reason dendrograms are generally preferred. x4 x3 x6 3 x2 2 x7 5 4 x5 x1 6 7 8 x8 Figure 10.11: A set or Venn diagram representation of two-dimensional data (which was used in the dendrogram of Fig. 10.10) reveals the hierarchical structure but not the quantitative distances between clusters. The levels are numbered in red. Because of their conceptual simplicity, hierarchical clustering procedures are among the best-known of unsupervised methods. The procedures themselves can be divided according to two distinct approaches -- agglomerative and divisive. Agglomerative (bottom-up, clumping) procedures start with n singleton clusters and form the sequence by successively merging clusters. Divisive (top-down, splitting) procedures start with all of the samples in one cluster and form the sequence by successively splitting clusters. The computation needed to go from one level to another is usually Agglomerative divisive Similarity scale Level 2 Level 3 100 90 80 70 60 50 40 30 20 10 0 10.9. HIERARCHICAL CLUSTERING 39 simpler for the agglomerative procedures. However, when there are many samples and one is interested in only a small number of clusters, this computation will have to be repeated many times. For simplicity, we shall concentrate on agglomerative procedures, and merely touch on some divisive methods in Sect. 10.12. 10.9.2 Agglomerative Hierarchical Clustering The major steps in agglomerative clustering are contained in the following procedure, where c is the desired number of final clusters: Algorithm 4 (Agglomerative hierarchical clustering) 1 2 3 4 5 6 7 begin initialize c, c n, Di {xi }, i = 1, . . . , n ^ do c c - 1 ^ ^ Find nearest clusters, say, Di and Dj Merge Di and Dj until c = c ^ return c clusters end As described, this procedure terminates when the specified number of clusters has been obtained and returns the clusters, described as set of points (rather than as mean or representative vectors). If we continue until c = 1 we can produce a dendrogram like that in Fig. 10.10. At any level the "distance" between nearest clusters can provide the dissimilarity value for that level. Note that we have not said how to measure the distance between two clusters, and hence how to find the "nearest" clusters, required by line 3 of the Algorithm. The considerations here are much like those involved in selecting a general clustering criterion function. For simplicity, we shall generally restrict our attention to the following distance measures: dmin (Di , Dj ) = dmax (Di , Dj ) = davg (Di , Dj ) = dmean (Di , Dj ) = min x - x xDi x Dj (74) (75) x-x (76) (77) xDi x Dj max x - x 1 n i nj xDi x Dj m i - mj . All of these measures have a minimum-variance flavor, and they usually yield the same results if the clusters are compact and well separated. However, if the clusters are close to one another, or if their shapes are not basically hyperspherical, quite different results can be obtained. Below we shall illustrate some of the differences. But first let us consider the computational complexity of a particularly simple agglomerative clustering algorithm. Suppose we have n patterns in d-dimensional space, and we seek to form c clusters using dmin (Di , Dj ) defined in Eq. 74. We will, once and for all, need to calculate n(n - 1) inter-point distances -- each of which is an O(d2 ) calculation -- and place the results in an inter-point distance table. The space complexity is, then, O(n2 ). Finding the minimum distance pair (for the first merging) requires that we step through the complete list, keeping the 40 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING index of the smallest distance. Thus for the first agglomerative step, the complexity is O(n(n - 1)(d2 + 1)) = O(n2 d2 ). For an arbitrary agglomeration step (i.e., from c ^ to c - 1), we need merely step through the n(n - 1) - c "unused" distances in the ^ ^ list and find the smallest for which x and x lie in different clusters. This is, again, O(n(n - 1) - c). The full time complexity is thus O(cn2 d2 ), and in typical conditions ^ n c. The Nearest-Neighbor Algorithm minimum algorithm singlelinkage algorithm When dmin is used to measure the distance between clusters (Eq. 74) the algorithm is sometimes called the nearest-neighbor cluster algorithm, or minimum algorithm Moreover, if it is terminated when the distance between nearest clusters exceeds an arbitrary threshold, it is called the single-linkage algorithm. Suppose that we think of the data points as being nodes of a graph, with edges forming a path between the nodes in the same subset Di . When dmin is used to measure the distance between subsets, the nearest neighbor nodes determine the nearest subsets. The merging of Di and Dj corresponds to adding an edge between the nearest pair of nodes in Di and Dj . Since edges linking clusters always go between distinct clusters, the resulting graph never has any closed loops or circuits; in the terminology of graph theory, this procedure generates a tree. If it is allowed to continue until all of the subsets are linked, the result is a spanning tree -- a tree with a path from any node to any other node. Moreover, it can be shown that the sum of the edge lengths of the resulting tree will not exceed the sum of the edge lengths for any other spanning tree for that set of samples (Problem 37). Thus, with the use of dmin as the distance measure, the agglomerative clustering procedure becomes an algorithm for generating a minimal spanning tree. Figure 10.12 shows the results of applying this procedure to Gaussian data. In both cases the procedure was stopped giving two large clusters (plus three singleton outliers); a minimal spanning tree can be obtained by adding the shortest possible edge between the two clusters. In the first case where the clusters are fairly well separated, the obvious clusters are found. In the second case, the presence of a point located so as to produce a bridge between the clusters results in a rather unexpected grouping into one large, elongated cluster, and one small, compact cluster. This behavior is often called the "chaining effect," and is sometimes considered to be a defect of this distance measure. To the extent that the results are very sensitive to noise or to slight changes in position of the data points, this is certainly a valid criticism. The Farthest-Neighbor Algorithm maximum algorithm completelinkage algorithm complete subgraph When dmax (Eq. 75) is used to measure the distance between subsets, the algorithm is sometimes called the farthest-neighbor clustering algorithm, or maximum algorithm. If it is terminated when the distance between nearest clusters exceeds an arbitrary threshold, it is called the complete-linkage algorithm. The farthest-neighbor algorithm discourages the growth of elongated clusters. Application of the procedure can be thought of as producing a graph in which edges connect all of the nodes in a cluster. In the terminology of graph theory, every cluster constitutes a complete subgraph. The distance between two clusters is determined by the most distant nodes in the two spanning tree There are methods for sorting or arranging the entries in the inter-point distance table so as to easily avoid inspection of points in the same cluster, but these typically do not improve the complexity results significantly. 10.9. HIERARCHICAL CLUSTERING 41 Figure 10.12: Two Gaussians were used to generate two-dimensional samples, shown in pink and black. The nearest-neighbor clustering algorithm gives two clusters that well approximate the generating Gaussians (left). If, however, another particular sample is generated (red point at the right) and the procedure re-started, the clusters do not well approximate the Gaussians. This illustrates how the algorithm is sensitive to the details of the samples. clusters. When the nearest clusters are merged, the graph is changed by adding edges between every pair of nodes in the two clusters. If we define the diameter of a partition as the largest diameter for clusters in the partition, then each iteration increases the diameter of the partition as little as possible. As Fig. 10.13 illustrates, this is advantageous when the true clusters are compact and roughly equal in size. Nevertheless, when this is not the case -- as happens with the two elongated clusters -- the resulting groupings can be meaningless. This is another example of imposing structure on data rather than finding structure in it. Compromises The minimum and maximum measures represent two extremes in measuring the distance between clusters. Like all procedures that involve minima or maxima, they tend to be overly sensitive to "outliers" or "wildshots." The use of averaging is an obvious way to ameliorate these problems, and davg and dmean (Eqs. 76 & 77) are natural compromises between dmin and dmax . Computationally, dmean is the simplest of all of these measures, since the others require computing all ni nj pairs of distances x - x . However, a measure such as davg can be used when the distances x - x are replaced by similarity measures, where the similarity between mean vectors may be difficult or impossible to define. 10.9.3 Stepwise-Optimal Hierarchical Clustering We observed earlier that if clusters are grown by merging the nearest pair of clusters, then the results have a minimum variance flavor. However, when the measure 42 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING dmax = large dmax = small Figure 10.13: The farthest-neighbor clustering algorithm uses the separation between the most distant points as a criterion for cluster membership. If this distance is set very large, then all points lie in the same cluster. In the case shown at the left, a fairly large dmax leads to three clusters; a smaller dmax gives four clusteramount proportional to netj , as shown by the red arrows in Fig. 10.14. It is this competition between cluster units, and the rsulting suppression of activity in all but the one with the largest net that gives the algorithm its name. Learning is confined to the weights at the most active unit. The weight vector at this unit is updated to be more like the pattern: w(t + 1) = w(t) + x, d (87) 2 wi = 1. where is a learning rate. The weights are then normalized to insure i=0 This normalization is needed to keep the classification and clustering based on the position in feature space rather than overall magnitude of w. Without such weight normalization, a single weight, say wj , could grow in magnitude and forever give the greatest value netj , and through competition thereby prevent other clusters from learning. Figure 10.15 shows the trajectories of three cluster centers in response to a sequence of patterns chosen randomly from the set shown. 10.11. COMPETITIVE LEARNING Algorithm 6 (Competitive learning) 1 2 3 4 5 6 7 8 9 10 47 begin initialize , n, c, w1 , w2 , . . . , wc xi {1, xi } i = 1, . . . n augment all patterns xi xi / xi i = 1, . . . n normalize all patterns do randomly select a pattern x t j arg max wj x classify x j wj wj + x weight update wj wj / wj weight normalization until no significant change in w in n attempts return w1 , w2 , . . . , wc end x3, w3 x2, w2 x1, w1 3 Figure 10.15: All of the three-dimensional patterns have been normalized ( i=1 x2 = 1), i and hence lie on a two-dimensional sphere. Likewise, the weights of the three cluster centers have been normalized. The red curves show the trajectory of the weight vectors; at the end of learning, each lies near the center of a cluster. A drawback of Algorithm 6 is that there is no guarantee that it will terminate, even for a finite, non-pathological data set -- the condition in line 8 may never be satisfied and thus the weights may vary forever. A simple heuristic is to decay the learning rate in line 6 , for instance by (t) = (0)t for < 1 where t is an iteration number. If the initial cluster centers are representative of the full data set, and the rate of decay is set so that the full data set is presented at least several times before the learning is reduced to very small values, then good results can be expected. However if then a novel pattern is added, it cannot be learning, since is too small. Likewise, such a learning decay scheme is inappropriate if we seek to track gradual changes in the data. In a non-stationary environment, a we may want a clustering algorithm to be stable to prevent ceaseless recoding, and yet plastic, or changeable, in response to a new pattern. (Freezing cluster centers would prevent recoding, but would not permit learning of new patterns.) This tradeoff has been called the stability-plasticity dilemma, and we shall see in Sect. 10.11.2 how it can be addressed. First, however, we turn to the problem of unknown number of clusters. stabilityplasticity 48 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING 10.11.1 Unknown number of clusters We have mentioned the problem of unknown number of cluster centers. When the number is unknown, we can proceed in one of two general ways. In the first, we compare some cluster criterion as a function of the number of clusters. If there is a large gap in the criterion values, it suggests a "natural" number of clusters. A second approach is to state a threshold for the creation of a new cluster. This is useful in on-line cases. The drawback is that it depends more strongly on the order of data presentation. Whereas clustering algorithms such as k-means and hierarchical clustering typically have all data present before clustering begins (i.e., are off-line), there are occasionally situations in which clustering must be performed on-line as the data streams in, for instance when there is inadequate memory to store all the patterns themselves, or in a time-critical situation where the clusters need to be used even before the full data is present. Our graph theoretic met. Very quickly, a stable configuration of output and input units occurs, called a "resonance"(though this has nothing to do with the type of resonance in a driven oscillator). ART networks detect novelty by means of the orienting subsystem. The details need not concern us here, but in broad overview, the orienting subsystem has two inputs: the total number of active input features and the total number of features that are active in the input layer. (Note that these two numbers need not be the same, since the top-down feedback affects the activation of the input units, but not the number of active inputs themselves.) If an input pattern is "too different" from any current cluster centers, then the orienting subsystem sends a reset wave signal that renders the active output unit quiet. This allows a new cluster center to be found, or if all have been explored, then a new cluster center is created. The criterion for "too different" is a single number, set by the user, called the vigilance, (0 1. Denoting the number of active input features as |I| and the number active in the input layer during a resonance as |R|, then there will be a reset if |R| < , |I| vigilance parameter (88) vigilance where rho is a user-set number called the vigilance parameter. A low vigilance parameter means that there can be a poor "match" between the input and the learned cluster and the network will accept it. (Thus vigilance and the ratio of the number of features used by ART, while motivated by proportional considerations, is just one of an infinite number of possible closeness criteria (related to ). For the same data set, a low vigilance leads to a small number of large coarse clusters being formed, while a high vigilance leads to a large number of fine clusters (Fig. 10.19). We have presented the basic approach and issues with ART1, but these return (though in a more subtle way) in analog versions of ART in the literature. 10.12. *GRAPH THEORETIC METHODS 51 Figure 10.19: The results of ART1 applied to a sequence of binary figures. a) = xx. b) = 0.xx. 10.12 *Graph Theoretic Methods Where the mathematics of normal mixtures and minimum-variance partitions leads us to picture clusters as isolated clumps, the language and concepts of graph theory lead us to consider much more intricate structures. Unfortunately, there is no uniform way of posing clustering problems as problems in graph theory. Thus, the effective use of these ideas is still largely an art, and the reader who wants to explore the possibilities should be prepared to be creative. We begin our brief look into graph-theoretic methods by reconsidering the simple procedures that produce the graphs shown in Fig. 10.6. Here a threshold distance d0 was selected, and two points are placed in the same cluster if the distance between them is less than d0 . This procedure can easily be generalized to apply to arbitrary similarity measures. Suppose that we pick a threshold value s0 and say that xi is similar to xj if s(xi , xj ) > s0 . This defines an n-by-n similarity matrix S = [sij ], with binary component sij = 1 0 if s(xi , xj ) > s0 otherwise. (89) similarity matrix Furthermore, this matrix induces a similarity graph, dual to S, in which nodes correspond to points and an edge joins node i and node j if and only if sij = 1. The clusterings produced by the single-linkage algorithm and by a modified version of the complete-linkage algorithm are readily described in terms of this graph. With the single-linkage algorithm, two samples x and x are in the same cluster if and only if there exists a chain x, x1 , x2 , . . . , xk , x such that x is similar to x1 , x1 is similar to x2 , and so on for the whole chain. Thus, this clustering corresponds to the connected components of the similarity graph. With the complete-linkage algorithm, all samples in a given cluster must be similar to one another, and no sample can be in more than one cluster. If we drop this second requirement, then this clustering corresponds to the maximal complete subgraphs of the similarity graph -- the "largeserence between the source distribution and the estimate. That is, a is the basis vectors of A and thus p(y; a) is an estimate of the p(y). ^ This difference can be quantified by the Kullback-Liebler divergence: D(p(y), p(y; a)) ^ = D(p(y)||^(y; a)) p p(y) = p(y)log dy p(y; a) ^ = H(y) - The log-likelihood is 1 l(a) = n n p(y)log p(y; a)dy ^ (94) log p(xi ; a). ^ i=1 (95) 10.13. COMPONENT ANALYSIS 57 and using the law of large numbers, the Kullback-Liebler divergence can be written as p(y) dy p(y; a) ^ (96) l(a) = - = p(y)logp(y)dy - p(y)log H(y) -D(p(y)||^(y; a)), p indep. of W where the entropy H(y) is independent of W. Thus we maximize the log-likelihood by minimizing the Kullback-Liebler divergence with respect to the estimated density p(y; a): ^ l(a) =- D(p(y)||^(y; a)). p (97) W W Because A is an invertible matrix, and because the Kullback-Liebler divergence is invariant under invertible transformation (Problem 47), we have l(a) =- D(p(x)||^(z)). p W W H(yyy) WWW log[|WWW|] + log WWW WWW [WWW-1 ]t - (xxx)zzzt , n (98) = = i=1 xxi yyi (99) score function (100) where (xxx) is the score function, the gradient fector of the log likelihood: p(z )/z 1 1 p(z)/z (z) = - = - p(z) Thus the learning rule is p(z1 ) . . . p(zq )/zq p(zq ) H(xxx) = [xxxt ]-1 - (xx)yyt . xxx A simpler form comes if we merely scale, following the natural gradient xxx (101) H(xxx) (102) WWt WW = [I - (xx)xxt ]WWW. xxx This, then is the learning algorithm. An assumption is that at most one of the sources is Gaussian distributed (Problem 46). Indeed this method is most successful if the distributions are highly skewed or otherwise deviate markedly from Gaussian. We can understand the difference between PCA and ICA in the following way. Imagine that there were two sources that are correlated and large correlated signals in a particular direction. PCA would find that direction, and indeed would reduce the sum-squared error. Such components are not independent, and would not be useful for separating the sources. As such, they would not be found by ICA. Instead, ICA 58 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING would find those directions that are best for separating the sources -- even if those directions have small eigenvectors. Generally speaking, when used as preprocessing for classification, independent component analysis has several characteristics that make it more desirable than linear or non-linear principal component analysis. As we saw in Fig. 10.23, such principal components need not be effective in separating classes. Recall that the sensed input consists of a signal (due to the true categories) plus noise. If the noise is large much larger than the signal, principal components will depend more upon the noise than on the signal. Since the different categories are, we assume, independent, independent component analysis is likely to extract those features that are useful in distinguishing the classes. 10.14 Low-Dimensional Representations and Multidimensional Scaling (MDS) Part of the problem of deciding whether or not a given clustering means anything stems from our inability to visualize the structure of multidimensional data. This problem is further aggravated when similarity or dissimilarity measures are used that lack the familiar properties of distance. One way to attack this problem is to try to represent the data points as points in some lower-dimensional space in such a way that the distances between points in the that space correspond to the dissimilarities between points in the original space. If acceptably accurate representations can be found in two or perhaps three dimensions, this can be an extremely valuable way to gain insight into the structure of the data. The general process of finding a configuration of points whose interpoint distances correspond to similarities or dissimilarities is often called multidimensional scaling. Let us begin with the simpler case where it is meaningful to talk about the distances between the n samples x1 , . . . , xn . Let yi be the lower-dimensional image of xi , ij be the distance between xi and xj , and dij be the distance between yi and yj (Fig. 10.25). Then we are looking for a configuration of image points y1 , . . . , yn for which the n(n - 1)/2 distances dij between image points are as close as possible to the corresponding original distances ij . Since it will usually not be possible to find a configuration for which dij = ij for all i and j, we need some criterion for deciding whether or not one configuration is better than another. The following sum-of-squared-error functions are all reasonable candidates: (dij - ij )2 Jee = i<j i<j 2 ij 2 (103) Jf f Jef = i<j dij - ij ij (104) (105) = 1 ij i<j i<j (dij - ij )2 . ij Since these criterion functions involve only the distances between points, they are invariant to rigid-body motions of the configurations. Moreover, they have all been 10.14. LOW-DIMENSIONAL REPRESENTATIONS AND MULTIDIMENSIONAL SCALING (MDS)59 y2 x3 ij x2 xi xj yi dij yj x1 y1 Figure 10.25: The distance between points in the original space are ij while in the projected space dij . In practice, the source space is typically of very high dimension, and the mapped space of just two or three dimensions, to aid visualization. (In order to illustrate the correspondence between points in the two spaces, the size and color of each point xi matches that of its image yi . normalized so that their minimum values are invariant to dilations of the sample points. While Jee emphasizes the largest errors (regardless whether the distances ij are large or small), Jf f emphasizes the largest fractional errors (regardless whether the errors |dij - ij | are large or small). A useful compromise is Jef , which emphasizes the largest product of error and fractional error. Once a criterion function has been selected, an optimal configuration y1 , . . . , yn is defined as one that minimizes that criterion function. An optimal configuration can be sought by a standard gradient-descent procedure, starting with some initial configuration and changing the yi 's in the direction of greatest rate of decrease in the criterion function. Since dij = yi - yj , the gradient of dij with respect to yi is merely a unit vector in the direction of yi -yj . Thus, the gradients of the criterion functions are easy to compute: yk Jee yk Jf f yk Jef 2 2 ij (dkj - kj ) j=k = yk - yj dkj i<j = 2 j=k dkj - kj yk - yj 2 kj dkj dkj - kj yk - yj . kj dkj = 2 ij i<j j=k The starting configuration can be chosen randomly, or in any convenient way that ^ spreads the image points about. If the image points lie in a d-dimensional space, 60 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING ^ then a simple and effective starting configuration can be found by selecting those d coordinates of the samples that have the largest variance. The following example illustrates the kind of results that can be obtained by these techniques. The data consist of thirty points spaced at unit intervals along a spiral in three-dimensions: cos(k/ 2) = sin(k/ 2) = k/ 2, k = 0, 1, . . . , 29. = x1 (k) x2 (k) x3 (k) Figure 10.26 shows a the three-dimensional data. When the Jef criterion was used, twenty iterations of a gradient descent procedure produced the two-dimensional configuration shown at the right. Of course, translations, rotations, and reflections of this configuration would be equally good solutions. x3 20 15 10 5 x2 1 0 1 x1 Figure 10.26: Thirty points of the form (cos(k/ 2), sin(k/ 2), k/ 2)t for k = 0, 1, . . . , 29 are shown at the left. Multidimensional scaling using the Jef criterion (Eq. 105) and a two-dimensional target space leads to the image points shown at the right. This lower-dimensional representation shows clearly the fundamental sequential nature of the points in the original, source space. In non-metric multidimensional scaling problems, the quantities ij are dissimilarities whose numerical values are not as important as their rank order. An ideal configuration would be one for which the rank order of the distances dij is the same as the rank order ore 10.27: A self-organizing map from the (two-dimensional) disk source space to the (one-dimensional) line of the target space can be learned as follows. For each point x in the target line, there exists a corresponding point in the source space that, if sensed, would lead to x begin most active. For clarity, then, we can link theses points in the source; it is as if the image line is placed in the source space. At the state shown, the particular sensed point leads to x begin most active. The learning rule (Eq. 109) makes its source point move toward the sensed point, as shown by the small arrow. Because of the window function (|y - y|), points adjacent to x are also moved toward the sensed point, thought not as much. If such learning is repeated many times as the arm randomly senses the whole source space, a topologically correct map is learned. along the target line. When a pattern , each node in the target space computes its net activation, netk = i wki . One of the units is most activated; call it y . The i weights to this unit and those in its immediate neighborhood are updated according to: wki (t + 1) = wki (t) + (t)(|y - y |)i , (109) window function where (t) is a learning rate which depends upon the iteration number t. Next, every weight vector is normalized such that |w| = 1. (Naturally, only those weight vectors that have been altered during the learning trial need be re-normalized.) The function (|y - y |) is called the "window function," and has value 1.0 for y = y and smaller for large values of |y - y |. The window function is vital to the success of the algorithm: it insures that neighboring points in the target space have weights that are similar, and thus correspond to neighboring points in the source space, thereby insuring topological neighborhoods (Fig. 10.28). The learning rate (t) decreases slowly as a function of iteration number (i.e., as patterns are presented) to insure that learning will ultimately stop. Equation 109 has a particularly straightforward interpretation. For each pattern presentation, the "winning" unit in the target space (y ) is adjusted so that it is more like the particular pattern. Others in the neighborhood of y are also adjusted so that their weights more nearly match that of the input pattern (though not quite as much as for y , according to the window function). In this way, neighboring points in the input space lead to neighboring points being active. After are large number of pattern presentations, learning according to Eq. 109 10.14. LOW-DIMENSIONAL REPRESENTATIONS AND MULTIDIMENSIONAL SCALING (MDS)63 y2 y* y* y y1 Figure 10.28: Typical window functions for self-organizing maps for target spaces in one dimension (left) and two dimensions (right). In each case, the weights at the maximally active unit, y, in the target space get the largest weight update while units more distant get smaller update. insures that neighboring points in the source space lead to neighboring points in the target space. Informally speaking, it is as if the target space line has been placed on the source space, and learning pulls and stretches the line to fill the source space, as illustrated in Fig. 10.29 shows the development of the map. After 150000 training presentations, a topological map has been learned. 0 20 100 1000 10000 25000 50000 75000 100000 150000 Figure 10.29: If a large number of pattern presentations are made using the setup of Fig. 10.27, a topologically ordered map develops. The number of pattern presentations is listed. The learning of such self-organizing maps is very general, and can be applied to virtually any source space, target space and continuous nonlinear mapping. Figure 10.30 shows the development of a self-organizing map from a square source space to a square (grid) target space. There are generally inherent ambiguities in the maps learned by this algorithm. For instance, a mapping from a square to a square could eight possible orientations, corresponding to the four rotation and two flip symmetries. Such ambiguity is generally irrelevant for suLet us consider a simple modification of hierarchical clustering to reduce dimensionality. In place of an n-by-n matrix of distances between samples, we consider a d-by-d correlation matrix R = [ij ], where the correlation coefficient ij is related to the covariances (or sample covariances) by ij = ij . ii jj (110) principal component factor analysis data matrix correlation matrix Since 0 2 1, with 2 = 0 for uncorrelated features and 2 = 1 for completely ij ij ij correlated features, 2 plays the role of a similarity function for features. Two features ij 66 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING for which 2 is large are clearly good candidates to be merged into one feature, thereby ij reducing the dimensionality by one. Repetition of this process leads to the following hierarchical procedure: Algorithm 8 (Hierarchical dimensionality reduction) 1 2 3 4 5 6 7 8 9 10 begin initialize d , Di {xi }, i = 1, . . . , d ^ dd+1 ^ ^ do d d - 1 compute R by Eq. 110 Find most correlated distinct clusters, say Di and Dj Di Di Dj merge delete Dj ^ until d = d return d clusters end Probably the simplest way to merge two groups of features is just to average them. (This tacitly assumes that the features have been scaled so that their numerical ranges are comparable.) With this definition of a new feature, there is no problem in defining the correlation matrix for groups of features. It is not hard to think of variations on this general theme, but we shall not pursue this topic further. For the purposes of pattern classification, the most serious criticism of all of the approaches to dimensionality reduction that we have mentioned is that they are overly concerned with faithful representation of the data. Greatest emphasis is usually placed on those features or groups of features that have the greatest variability. But for classification, we are interested in discrimination -- not representation. While it is a truism that the ideal representation is the one that makes classification easy, it is not always so clear that clustering without explicitly incorporating classification criteria will find such a representation. Roughly speaking, the most interesting features are the ones for which the difference in the class means is large relative to the standard deviations, not the ones for which merely the standard deviations are large. In short, we are interested in something more like the method of multiple discriminant analysis described in Sect. ??. There is a large body of theory on methods of dimensionality reduction for pattern classification. Some of these methods seek to form new features out of linear combinations of old ones. Others seek merely a smaller subset of the original features. A major problem confronting this theory is that the division of pattern recognition into feature extraction followed by classification is theoretically artificial. A completely optimal feature extractor can never by anything but an optimal classifier. It is only when constraints are placed on the classifier or limitations are placed on the size of the set of samples that one can formulate nontrivial (or very complicated) problems. Various ways of circumventing this problem that may be useful under the proper circumstances can be found in the literature. When it is possible to exploit knowledge of the problem domain to obtain more informative features, that is usually the most profitable course of action. Summary Unsupervised learning and clustering seek to extract information from unlabeled samples. If the underlying distribution comes from a mixture of component densities de- 10.14. SUMMARY 67 scribed by a set of unknown parameters , then can be estimated by Bayesian or maximum-likelihood methods. A more general approach is to define some measure of similarity between two clusters, as well as a global criterion such as a sum-squarederror or trace of a scatter matrix. Since there are only occasionally analytic methods for computing the clustering which optimizes the criterion, a number of greedy (locally step-wise optimal) iterative algorithms can be used, such as k-means and fuzzy k-means clustering. If we seek to reveal structure in the data at many levels -- i.e., clusters with subclusters and sub-subcluster -- then hierarchical methods are needed. Agglomerative or bottom-up methods start with each sample as a singleton cluster and iteratively merge clusters that are "most similar" according to some chosen similarity or distance measure. Conversely, divisive or top-down methods start with a single cluster representing the full data set and iteratively splitting into smaller clusters, each time seeking the subclusters that are most dissimilar. The resulting hierarchical structure is revealed in a dendrogram. A large disparity in the similarity measure for successive cluster levels in a dendrogram usually indicates the "natural" number of clusters. Alternatively, the problem of cluster validity -- knowing the proper number of clusters -- can also be addressed by hypothesis testing. In that case the null hypothesis is that there are some number c of clusters; we then determine if the reduction of the cluster criterion due to an additional cluster is statistically significant. Competitive learning is an on-line neural network clustering algorithm in which the cluster center most similar to an input pattern is modified to become more like that pattern. In order to guarantee that learning stops for an arbitrary data set, the learning rate must decay. Competitive learning can be modified to allow for the creation of new cluster centers, if no center is sufficiently similar to a particular input pattern, as in leader-follower clustering and Adaptive Resonance. While these methods have many advantages, such as computational ease and tracking gradual variations in the data, they rarely optimize an easily specified global criterion such as sum-of-squared error. Graph theoretic methods in clustering treat the data as points, to be linked according to a number of heuristics and distance measures. The clusters produced by these methods can exhibit chaining or other intricate structures, and rarely optimize an easily specified global cost function. Graph methods are, moreover, generally more sensitive to details of the data. Component analysis seeks to find directions or axes in feature space that provide an improved, lower-dimensional representation for the full data space. In (linear) principal component analysis, such directions are merely the largest eigenvectors of the covariance matrix of the full data; this optimizes a sum-squared-error criterion. Nonlinear principal components, for instance as learned in an internal layer an autoencoder neural network, yields curved surfaces embedded in the full d-dimensional feature space, onto which an arbitrary pattern x is projected. The goal in independent component analysis -- which uses gradient descent in an entropy criterion -- is to determine the directions in feature space that are statistically most independent. Such directions may reveal the true sources (assumed independent) and can be used for segmentation and blind source separation. Two general methods for dimensionality reduction is self-organizing feature maps and multidimensional scaling. Self-organizaing feature maps can be highly nonlinear, and represents points close in the source space by points close in the lower-dimensional target space. In preserving neighborhoods in this way, such maps also called "topologically correct." The source and target spaces can be of very general shapes, and the 68 CHAPTER 10. UNSUPERVISED LEARNING AND CLUSTERING mapping will depend upon the the distribution of samples within the source space. Multidimensional scaling similarly learns a nonlinear mapping that, too, seeks to preserve neighborhoods, and is often used for data visualization. Because the basic method requires all the inter-point distances for minimizing a global criterion function, its space complexity limits the usefulness of multidimensional scaling to problems of moderate size. Bibliographical and Historical Remarks Historically, t Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with the best way to expand their education. Below is a small sample set of documents: Tsinghua University - ELECTRONIC - PC2010S 1. Give an example to show that the decision boundary resulting form the nearestneighbor rule is piecewise linear. 2. (6.5 in the text book) Consider the set of twodimensional vectors from two categories:T T T T T T x1 = (1, 0) , x2 = (0,1) , x3 = (0, -1 Tsinghua University - ELECTRONIC - PC2010S 1 . (3.10 in the textbook) Suppose X = cfw_x1 , x2 ,L , xN is (i.i.d.) sampled from N ( , ) : tell2whether the maximum-likelihood estimators of and 2 are biased. 2. (3.3 in the textbook) Suppose X = cfw_x1 , x2 ,L , xN is (i.i.d.) sampled from:f ( x, Tsinghua University - WUXIANDIAN - 1 Assignment of &quot;Pattern Recognition&quot; (Ch.2 Bayes Decision Theory)1 Prove that Mahalanobis metric (distance) r indeed possesses the four properties required of all metrics (distances), i.e. (1) Nonnegativity: r ( a, b ) 0 (2) Reflexivity: r ( a, b ) = r ( Tsinghua University - ELECTRONIC - PC2010S Assignments for Linear Classifiers, and SVM Note: Exercises without &quot;*&quot; are required for everyone. You are welcome to do those with &quot;*&quot; if interested in them, however, they will not be taken into account when giving scores. 1. In the multicategory case, a Tsinghua University - ELECTRONIC - PC2010S Gaussian Mixture Model and EM (Expectation Maximization) AlgorithmChangshui Zhang Dept. of Automation Tsinghua University zcs@mail.tsinghua.edu.cnReference Jeff A. Bilmes, A Gentle Tutorial of the Algorithm and its Application to Parameter Estimation f Tsinghua University - ELECTRONIC - PC2010S Support Vector Machine 1OutlineLinearly separable patterns Linearly non-separable patterns Nonlinear case Some examples2Linearly separable caseOptimal Separating hyperplane3Optimal Hyperplane4Linear classficationTraining sample set T = cfw_(xi Tsinghua University - ELECTRONIC - PC2010S 3 3.1 2 P (i ) p ( x | i ) 2.1 p ( x | i ) P (i ) p ( x | 1 ) p ( x | 2 ) 1 p ( x | i ) 2 P (i ) p ( x | i ) P (i ) p ( x | i ) P (i ) p ( x | i ) P (i ) 2 2 N N p ( x | i ) P (i ) p ( x | i ) P (i ) p ( x | i ) P (i ) 2 p ( x | i ) P (i ) -Parzen k N Tsinghua University - ELECTRONIC - PC2010S Bayes 1Bayes 2Bayes 3 -1.P( i | x) = p( x | i ) P( i ) = 1 p( x | j =12j) P( j ) = 2P (B A )P ( A ) = P (B , A )4 -1. P ( | x ) &gt; P ( | x ),1 2 x 1 P ( | x ) &lt; P ( | x )1 2 x 2(1) P ( | x ) = max P ( | x ), x (2 ) p ( x | Tsinghua University - ELECTRONIC - PC2010S 1K_L Karhumen-Loeve T x = [ x(1), , x(n)] u i , i = 1,2,,x = ci u ii =11 u uj = 0T ii= j i j^ x = ci u ii =12d^ ^ = E[( x - x) ( x - x)]T= E[( ci ui )i = d +1 Tj = d +1c ujj]= E[ c ]i = d +1 2 iu x = u c k u k = ci = x u iT i k = Tsinghua University - ELECTRONIC - PC2010S 1234g i ( x) = min x - ml =1, 2 , ,li l ig j ( x) = min g j ( x)j =1, 2 , ,c5 6g i ( x) = min x - x , k = 1,2,k k i, Ni,cg j ( x) = min g i ( x), i = 1,2,i x j78 P ( m| x) = max[ P ( i | x)]ix P (e | x) = 1 - P ( m | x) P= P (e Tsinghua University - ELECTRONIC - PC2010S The Estimation of Density Functions p ( x | wi ) p ( wi ) In pattern recognition applications, we rarely have the complete knowledge about the density functions How to estimate the density functions from samples or training data?3 problems How to est Tsinghua University - ELECTRONIC - PC2010S 1 2 g ( x ) = w x + woT x1 x 2 x= xd w1 w 2 w= wd 3 g ( x) = g1 ( x) - g 2 ( x)g ( x) &gt; 0, x 1 g ( x) &lt; 0, x 2 g ( x) = 0, x . g ( x) = 0 4x1 , x2Hw x1 + w0 = w x 2 + w0T T w ( x1 - x 2 ) = 0T5ww x = xp + r w6w ) + w0 g ( x) = w ( x Tsinghua University - ELECTRONIC - PC2010S zcs@mail.tsinghua.edu.cn1 2 60% () 20% 20% 3() 0 104 2-3 5 0 106 78,: : Pattern Classification, Second edition Richard O. Duda,Peter E. Hart,David G. Stork,John Wiley &amp; Sons, Inc. () 9info.tsinghua.edu.cn / zcs@mail.tsinghua.edu.cn 6279 Tsinghua University - ELECTRONIC - PC2010S 1g12March 13, 2010 3.#X 'XI ; XP ; ; Xn X4. #CX PI=PVUN @H; IAI=P'VUN @; PA ; I &lt; &lt; CI; P &gt; H=f @xY ; P A aI I x I I x p e C Pp e P P P2 ( 2 )2 2 2 ' 'q,O&quot;f @xA aI jxj= e P 5.#XI; XP; ; Xnd'cfw_yP3 q,O&quot; #XI; XP; Xn5ge'f @xY I ; P A a(I6.I Tsinghua University - MATH - STAT2010S 1g1March 13, 2010 1.X(j )(2) (3)oNX $ F X P X x f x X V &quot;X fX ; ; Xng $X ; X ; ; X n ^OPX Yi X i X i i n Zi;j( ) = ( ), ( )(pdf=1(iid.)(1)(2)( )2.1Zi;j &quot; ef x e xI x&gt; PX $ Ex Yi &quot;qY ; Y ; ; Yn y\&quot; ef x t I ;t PX $ U ; t ; t &gt; aqK&quot; Z Y ; Y Tsinghua University - MATH - STAT2010S 1g1March 13, 2010 12.X ; X ; X 5gBernoulli; B(1; p) $uK1 2 313.H0 : p = ; 6 Ha : p =1 23 4T1!1aVp = &quot; d,&quot;X $ N (; ); = 15; = 0:05&quot;E5 $ 3 4 2 2W = f(x1 ; x2 ; x3 ) : x1 + x2 + x32g6:g14:7; 15:1; 14:8; 15:0; 15:2; 14:6 CE15( = 0:05)14. l,o Tsinghua University - MATH - STAT2010S 1ng1March 23, 2010 19. For each of the following distributions let X1 ; X2 ; Xn be a random sample. Find a minimal sucient statistic for (a) f (xj) = e (x ) ; &lt; x &lt; I; I &lt; &lt; I (b) f (xj) =(1+e (x ) )2e (x ) ; I &lt; x &lt; I; I &lt; &lt; I20. A famous example i Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Laboratory Department of Computer Science and Technology Tsinghua University, Beijing 100084 jianyong@tsinghua.edu.cnCourse Website: http:/ Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cnOctober 9, 2009 Data Mining: Principles and Algorithms 1Course Administriv Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cnOctober 14, 2009 Data Mining: Principles and Algorithms 1Chapter 2: Data P Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cnOctober 22, 2009 Data Mining: Principle and Algorithms 1Chapter 3: Mining Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cnOctober 29, 2009Data Mining: Principle and Algorithms1Chapter 3: Mining Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cnNovember 5, 2009Data Mining: Principle and Algorithms1Chapter 3: Mining Tsinghua University - COMPUTER - DM2009F Data Mining: Principle and Algorithms-Chapter 3.6-Sequential Pattern MiningJianyong WangDepartment of Computer Science and Technology Tsinghua University, Beijing, China Email: jianyong@tsinghua.edu.cn120091111Data Mining: Principles and AlgorithmsC Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Laboratory, Institute of Software Department of Computer Science and Technology Tsinghua University, Beijing 100084 jianyong@tsinghua.edu.cn20091111Data Mining: Principle and Algorithms1Cha Tsinghua University - COMPUTER - DM2009F Out-of-Core Coherent Closed Quasi-Clique Mining from Large Dense Graph DatabasesJianyong WangDepartment of Computer Science and TechnologyTsinghua University, Beijing, P.R. ChinaEmail: jianyong@tsinghua.edu.cn- Joint work with Zhiping Zeng, Lizhu Zho Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cn12/3/2009Data Mining: Principles and Algorithms1Chapter 4. Classificatio Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cn12/8/2009 Data Mining: Principles and Algorithms 1Chapter 6. Classificatio Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cn12/17/2009 Data Mining: Principles and Algorithms 1Chapter 6. Classificati Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cn12/17/2009 Data Mining: Principles and Algorithms 1Chapter 6. Classificati Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cnDecember 23, 2009 Data Mining: Principles and Algorithms 1Chapter 5. Clust Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cnDecember 30, 2009Data Mining: Principles and Algorithms1Chapter 7. Clust Tsinghua University - COMPUTER - DM2009F Data Mining: Principles and AlgorithmsJianyong WangDatabase Lab, Institute of Software Department of Computer Science and Technology Tsinghua University jianyong@tsinghua.edu.cnJanuary 5, 2010 Data Mining: Principles and Algorithms 1Chapter 7. Cluster Tsinghua University - MAXIS - CM2009F Tsinghua University - MAXIS - CM2009F Tsinghua University - MAXIS - CM2009F Tsinghua University - MAXIS - CM2009F Tsinghua University - MAXIS - CM2009F Griffith - ACCT - Acct7107 STRATEGY, BALANCED SCORECARD, AND STRATEGIC PROFITABILITY ANALYSISTRUE/FALSE 1. Strategy describes how an organization matches its own capabilities with the opportunities in the marketplace to accomplish its overall objectives. Answer: Terms to Learn: 2. University of Ottawa - PHY - PHY1321 COLLGE BOIS-DE-BOULOGNE Ondes et physique moderne 203-NYC-05AUTOMNE 2007 Enseignant : Alexandre LemerleRPONSES AUX QUESTIONS SUGGRES(BENSON, Harris, Physique III, Ondes, optique et physique moderne, 3e dition, ERPI, 2005.) Ces questions et rponses sont UCSD - BIBC - bibc 103 1. How would you prepare 750 mL of a solution consisting of 50 mM NaCl, 100 mM Tris, 15 mM EDTA, and 20 g/mL RNaseA? You have available to you stock solutions of 5 M NaCl, 2 M Tris, 1 M EDTA, and 50 g/ L RNaseA. Express your answers in the simplest form w Binghamton - PHYS - 131 HW-3 Summer 10 Chapter 3 23, 30, 39, 41, 44, 57 (8-th edition) 25, 26, 41, 39, 42, 63 (9-th edition) _ Problem 3-23(WileyPlus) The strategy is to find where the camel is () by adding the two consecutive displacements described in the problem, and then fi University of Louisville - ENGR - 201 ENGR 201 Summer Unit 1 Study GuideUnit Content:Section 13.1 13.3 13.4 13.5 Pages 906-920 931-936 936-943 943-950 Content Vector-Valued Functions and Motion in Space Arc Length and The Unit Tangent Vector, Curvature and The Unit Normal Vector, Torsion an University of Louisville - ENGR - 201 ENGR 201 Summer Unit 2 Study GuideUnit Content:Section 14.3 14.4 14.9 Pages984-996 996-1005Content Partial Derivatives, Differentiability, Continuity The Chain Rule for Partial Derivatives, The Implicit Function Theorem for Finding Partial Derivatives University of Louisville - ENGR - 201 ENGR 201 Summer Unit 3 Study GuideUnit Content:Section 14.6 14.5 14.6 14.7Pages 1018-1024 1005-1014 1015-1018 1027-1038Content Linearization, Differentials, Absolute, Relative and Percentage Change Directional Derivatives, Gradient Vectors, Tangent Pl University of Louisville - ENGR - 201 ENGR 201 Summer Unit 4 Study GuideUnit Content:Section 14.8 15.1 15.2 15.4 Pages1038-1049 1067-1081 1081-1091 1098-1109Content Lagrange Multipliers Double Integrals in Rectangular Coordinates Areas, Moments, Centers of Mass, Centroids, Average Values, University of Louisville - PHYS - 299 Chapter 2122.(a)Practice Pr solns: # 22,25,27,29, 30 ,31, 35, 36, 41, 47, 48, 59rWhen the surface is perpendicular to the field, its normal is either parallel or anti-parallel to E.2 2 Then Equation 21.1 gives = E A = EA cos(0 or 180) = (850 N/C)(2 m University of Louisville - PHYS - 299 pg Phys 299 Chapter 20 Practice Pr soln: #15, 16, 17, 22, 25, 27, 29,31, 35,38, 59, 60, 61,67 15:1DEVELOPSince the magnitude of elementary charge e is e = 1.6 1019 C, the number of electrons involved is given by Q/e.EVALUATESubstituting the values give University of Louisville - PHYS - 296 Coulombs LawPHYS 296Your name_ PRE-LAB QUIZZES1. What is the purpose of this lab?Lab section__2. Two conducting hollow balls of radius 2.0 cm are both initially charged by a bias voltage of +5000 V. When they are brought to a center-to-center distan University of Louisville - PHYS - 299 Chapter 22 Practice Pr solns:16.W = qV = (50 C)(12 V) = 600 J.The potential difference and the work per unit charge, done by an external agent, are equal in magnitude, so(Note: Since only magnitudes are needed in this problem, we omitted the subscript University of Louisville - PHYS - 299 Chapter 23 Practice problem solutions # 19, 20,21,26,30,34,40,44,48,52,5419. (a) We find the electric field to be E=(b) The potential difference isq (1.1 C) = = 1.99 MV/m 12 2 0 A (8.85 10 C /N m 2 ) (0.25 m) 2 V = Ed = (1.99 MV/m)(5 mm) = 9.94 kV(c) T University of Louisville - CECS - 230 CECS 230Springto edit Master subtitle style Click 2010Artificial IntelligenceVeryBAD definition: Programming a computer to solve problems like a human. good definition:_VeryTuring TestTuring Test - contInterfaceis text onlyQuestionsmust be limi University of Louisville - PHYS - 299 Physics 299 Important Constants and EquationsNote: Some basic simple equations you are supposed to know. Here we are providing you with only some important equations that were introduced in the chapters given below.4. 5. 6. 7. 8.Constants:g = 9.80 m/s University of Louisville - PHYS - 299 Phys 299Solution Quiz 1 chapter 20Summer 2010Q.1. If one bag contains a charge 8Q and another one contains charge Q, the 8Q-bag exerts 8 times as much force on the Q-bag as the Q-bag exerts on the 8Q-bag. A) True B) False Q.2. A plastic rod is charged University of Louisville - PHYS - 299 Phys 299 Your Name: _ Your List# _ May 24, 2010 Chapter 21 Test# QUIZ 2 Due : Wed May 26 in class. Submit your written work on the back of this sheet only. Mark Scantron in class.Q.1. Q.2. Q.3. Q.4. If more electric field lines point into a balloon than University of Louisville - PHYS - 299 Phys 299 Your Name: _SOLUTIONS_ Your List# _ May 23, 2010 QUIZ 2 Due : Wed May 26 in class. Submit your written work on the back of this sheet only. Mark Scantron in class.Q.1. Q.2. Q.3. Q.4. If more electric field lines point into a balloon than come ou University of Louisville - PHYS - 299 Phys 299 May 27, 2010 Practice Quiz on Chapter 22 There is one correct answer choice per question. Answers will be posted Saturday evening.Q.1 As an electron moves from a high potential to a low potential, its electrical potential energy A) increases. B) University of Louisville - PHYS - 299 SolutionPhys 299 2010 Quiz 3 Chapter 22Q.1 As an electron moves from a high potential to a low potential, its electrical potential energy A) increases. Q.2. In the figure, charge is placed on the piece of copper. How will the charge be distributed on th University of Louisville - ENGR - 101 University of Louisville - ENGR - 101 !E,VGRtOlUNIT ZUN C0tlTEnTl Sscrrorc t,+- E/EJ nrtoQoo Frrxcnor.l5 Sont), fuoVrttfSrDFFEiE,tcE rf Quonr(n|af(4.t-tb\ $ t,f (pa,tl-Hl- Ct&amp;u*s, Pecrrour j V*trcer 4,o-Hqttnnreu ftmtrtb, ' Scautw eio RefuEcrrr.t b i guPflErncbrrau fiArFfiaLVrcnre ur Tir University of Louisville - ENGR - 101 EN6KIotUnJrr 3isr.rr.),Ux,r Cilrrr.n:5&quot;*,r* (url Roser e.o Nq,'rfuztro/5 i.l h.,atttfoaqs ot 3'Tl ?xttrrtt +.le[l*u* *a*z*t-c.a -@reLuo.r :Sspal5ttrrrrlil- n^rt.aAt-2,L= Rra.eift 01l'r', A! -E*tartrtL Ltntt e.4.r4r:rnrl Arorrr eiiuY, Ulntt lr+r
{"url":"http://www.coursehero.com/file/5905375/Patternclassification/","timestamp":"2014-04-17T07:35:34Z","content_type":null,"content_length":"698991","record_id":"<urn:uuid:0038e477-6c0a-4ba6-97b4-f550531560b1>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Mutually tangent ellipsoids in 3 space up vote 9 down vote favorite I recently heard a claim that for any n, it is possible to arrange n ellipsoids in 3 space such that each pair of ellipsoids is kissing. Is this true, and if so, how? Edit: By kissing, I mean that I would like the interiors of the ellipsoids to be disjoint, but each pair of ellipsoids should intersect at a point. 2 "kissing" might mean different things... – Anton Petrunin Nov 30 '12 at 0:11 2 There was a MathOverflow post some months back mentioning work of Jeff Erickson on n convex bodies that were cotangent. (Some kind of voronoi cells of points on a helix.) Perhaps someone can guide you and extend it to ellipsoids. Gerhard "Ask em About System Design" Paseman, 2012.11.29 – Gerhard Paseman Nov 30 '12 at 1:03 1 I think she means that the solid ellipsoids have disjoint interiors, but nevertheless all their pairwise intersections are non-empty. Kissing, yes, but nothing more! – alvarezpaiva Nov 30 '12 at Thanks for the comments and ideas. I added in what I mean by kissing, which, as many guessed, includes disjoint interiors. If I should have used another word, please let me know! – Linda Brown Westrick Dec 1 '12 at 1:33 add comment 3 Answers active oldest votes Here is the paper that Gerhard remembered, which doesn't answer the question as posed, but does answer a related question: Jeff Erickson and Scott Kim, "Arbitrarily large neighborly families of congruent symmetric convex 3-polytopes," 2003. (link) up vote 7 down vote That their paper does not mention ellipsoids might be taken as indirect evidence that the posed question may not yet have been answered a decade ago. add comment I've never heard of this before, and it sounds quite counterintuitive. If one is granted that this is true, then one can try to work backwards and apply a bit of dimensional analysis to understand how this can be. First, note that ellipsoids are quadrics, and there are a $9$ dimensional space of quadrics. Also, note that ellipsoids are invariant under affine (or more generally projective) transformations, so if we have one such configuration, we will expect to get at least a $15$-dimensional family of kissing ellipsoids. On the other hand, there are degenerate quadrics for which this is true. Consider $n$ mutually non-parallel lines in the plane. We can consider these as degenerate ellipsoids, with two axes $0$ radius and one axis $\infty$ radius. They the are each mutually tangent, since they each meet in a point. There is an $2n+3$-parameter family of these. One could hope to "regenerate" families of $n$ tangent ellipsoids from these, but there might be restrictions on when this is possible. For $n=1$, there is a $9$ dimensional family. For $n=2$, there is a $17=9+8$ dimensional family. We have $9$ dimensions for the first ellipsoid. Choosing any other ellipsoid with center disjoint from the first, we may rescale it uniquely to be tangent to the first ellipsoid. So this gives us $8$ dimensions for the second ellipsoid. up vote 5 down For $n=3$, let's assume that one of the ellipsoids is a sphere. Then its center is equidistant from the other two ellipsoids. The space of points equidistant from two kissing ellipsoids is vote $2$-dimensional. So we get a $17+2=19$-dimensional space of 3 tangent ellipsoids, with one round. We may change the round sphere by an affine transformation based at its center to get any ellipsoid with the same center, and we have a $6$-parameter family of such affine maps (we have $3$ dimensions for the major axis, $2$ for the minor axis, and $1$ for the third axis). However, this over-counts, since a similarity (all $3$ axes equal) will take the sphere to a sphere, so this gives us $19+6-1=24$ dimensions (or we may take a $5$-parameter family of volume-preserving affine transformations to eliminate repetitions). For $n=4$, let's again assume that one of the ellipsoids is a sphere. The space of points equidistant from $3$ ellipsoids is $1$-dimensional. As above, we may modify the sphere by a $5$-parameter family of volume-preserving affine transformations to get $24+1+5=30$ dimensions of 4 mutually tangent ellipsoids. For $n=5$, we expect finitely many points which are equidistant from $4$ mutually tangent ellipsoids, so the computation gives $35$ dimensions. At this point, one expects adding the next sphere will cut down on the dimension of the space of 5 tangent ellipsoids which have an equidistant point. If we continue the sequence $9,17,24,30,35$, then the next term ought to be $39$ dimensions ($42, 44, 45, ?$). Of course, this trend couldn't possibly continue if one expects to have $n$ tangent ellipsoids for all $n$, so there must be some non-generic phenomenon creating the incidence. add comment I'm assuming "kissing" means osculating, i.e. the ellipsoids intersect at a point where they have second-order contact, i.e. in a coordinate system where the point of contact is the origin and the tangent plane is the $xy$ plane, near the origin we have $z = a x^2 + b x y + c y^2 + O((|x|+|y|)^3)$ for the same $a,b,c$. Well, e.g. this is true for the ellipsoids $$ x^2 + y^2 + b (z - 1/b)^2 = 1/b,\ b > 0$$ which are all mutually osculating at the origin. up vote 1 down vote Or did you want each pair to be osculating at a different point? 3 I suspect the asker wanted the interiors of the ellipsoids to be disjoint. – Graham Leuschke Nov 30 '12 at 3:02 1 Me too. Otherwise why not just use ellipses in two dimensions rather than ellipsoids in three? – Noam D. Elkies Nov 30 '12 at 3:24 They couldn't be osculating if the interiors are disjoint. – Robert Israel Nov 30 '12 at 3:42 2 Kissing usually means disjoint interior plus tangent; one does not ask for osculation. – Benoît Kloeckner Nov 30 '12 at 17:07 add comment Not the answer you're looking for? Browse other questions tagged geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/114945/mutually-tangent-ellipsoids-in-3-space","timestamp":"2014-04-19T00:00:49Z","content_type":null,"content_length":"70009","record_id":"<urn:uuid:e3f3f589-0040-4a3a-b2bb-6829f0fc8f1a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
MATLAB and Simulink resources for Arduino, LEGO, and Raspberry Pi Size refers to the number of nodes in a parse tree. Generally speaking, you can think of size as code length. This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
{"url":"http://www.mathworks.com/matlabcentral/cody/problems/1087-magic-is-simple-for-beginners/solutions","timestamp":"2014-04-18T11:11:35Z","content_type":null,"content_length":"133053","record_id":"<urn:uuid:571083bf-58d6-4362-9df3-f9b0522fbae1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
3 door problem A classic one often taught in undergrad, not sure if I remember it being done here though. You are on a high-stakes game-show. For the final round, you are shown three doors. Two of the doors have nothing behind them. The other door has a boxload of kittens. All the doors appear identical. You are told to select one door to attempt to obtain to the boxload of kittens. You choose one door - but it is not opened yet. Then the host opens one of the other two doors, revealing that it has no kittens behind it. There are now two closed doors, one of which contains a box of kittens. The host gives you a chance to change your choice. Assuming you want to maximise your odds of getting the kittens, should you stay with your original door or switch to the other closed door? Does it make a difference? More importantly, if you don't want a box full of kittens, can you pick the known empty one? solution? wrote:Really, it shouldn't matter, right? You already increased your odds from 33% to 50%. So at this point your odds are equal on either door. 9 x 6 = 42 Note: Randall kicks ass. always switch it gives you better odds of winning since there are three different possiblilities. either you picked empty door 1 initially and switching would get you the kittens, or you picked empty door 2 initially and switching would get you the kittens, or you picked the kittens initially and switching would lose it for you. so by always switching you would improve your odds from 1/ 3 to 2/3 This is a famous one, wiki has it too. I think it was up on this site though not sure. You pick the door you didn't pick. The door you picked (say door A) had 1/3 chances of having girl (don't like kittens) The doors together had 2/3. (say door B and C) When the host eliminated on of those two doors (Say door C), the chances of the girl being behind the door you didn't pick (door B) is still 2/3 . Your door (door A) just has the 1/3 chance. So depending on whether you want the girl or have a gf you pick appropriately. Edit: Damn you parsonb, for once I thought I had the solution "The best times in life are the ones when you can genuinely add a "Bwa" to your "ha""- Chris Hastings Yeah didn't think it'd last for long, it's pretty famous... At first everybody thinks it's Tractor's solution, and I personally took a look of convincing before I understood the right one (i.e. parsonsb's and Pi's solution)... I just assumed it was something like personb, like Person A and person B. I am curious now, wut does it mean? "The best times in life are the ones when you can genuinely add a "Bwa" to your "ha""- Chris Hastings By the way, this problem was already discussed a long time ago (except that it wasn't kittens but a car instead - the solution still works, doesn't it?? Query thread Solution thread Wikipedia's writeup EDIT: restored original contents. Seriously guys, WHO edited my post? (or was my account hijacked? - given that we can't go on https) Last edited by tendays on Wed Mar 14, 2007 12:20 am UTC, edited 1 time in total. This is a fairly famous problem, but it's always good to use on first-comers to probability. When in doubt, always use the slow-but-always-works outcome tree! I recently had to explain this to a small group, I have found that when explaining the problem you can change the numbers and imagine that you start with 100 doors, select 1, open 98, This immediately keyed people in on what was happening. Anyway, orthogonal to the thread but i thought it was appropriate. Sygnon wrote:I recently had to explain this to a small group, I have found that when explaining the problem you can change the numbers and imagine that you start with 100 doors, select 1, open 98, This immediately keyed people in on what was happening. Anyway, orthogonal to the thread but i thought it was appropriate. I once asked this as a riddle on a long car trip, and unsurprisingly nobody got the answer. However, after I revealed the correct answer and explained it, they kept arguing and wouldn't believe I had the correct answer (in fact, the debate lasted several days, even after I gave internet references to the solution, and performed the experiment with some cards). I've never given it as a riddle I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side. "With math, all things are possible." —Rebecca Watson While we're at it, what's 0.9999999... again? I can at least understand resisting the Monty Hall problem, because it's (initially) counterintuitive. But it's still annoying when people are that dense. The problem with this question is that unstated assumptions change the answer. Pattern A: Suppose the door opening guy has the following algorithm: If you pick the door with the kittens, open a door and show it to be empty. If you pick a door without the kittens, don't open a door. Or, pattern B: the host only opens a blank door if you picked a door that was already blank. In Pattern A, your chance when swapping is 0% and 100% when sticking. In Pattern B, your chance when swapping is 100% and 0% when sticking. The host can probablistically choose between Pattern A and Pattern B to generate any chance in between. Now when you are on the show, you observe exactly what the problem describes. Yet changing doors when the host opens an empty one makes you win with any probability the host wants. For the Monty Kitten proof to work, assumptions about how the door opening algorithm works must be made. When people state the problem, they basically never mention these assumptions, yet the answer is completely dependant on them. You can fix this by specifying the exact algorithm the host follows. "The host always picks and opens a door that is empty, and you know this", or "The host always opens a door at random. This time you got an empty door." But that kind of information can't be glazed over and ignored. Yakk wrote:You can fix this by specifying the exact algorithm the host follows. "The host always picks and opens a door that is empty, and you know this", or "The host always opens a door at random. This time you got an empty door." An entirely valid point except that the idea of the "puzzle" is that it is entirely random and the latter algorithm is implied. Yakk wrote:"The host always opens a door at random. This time you got an empty door." I think you get the "50% chances of winning if you switch" with that scenario. Case 1 with 1/3 chances: You pick the kittens Case 2 with 2/3 chances: You don't pick the kittens. Case 2a (probability 1/3): the host doesn't pick the kittens. Case 2b (probability 1/3): the host picks the kittens In case 1 you lose if you switch (what the host chose doesn't matter). In case 2a you win if you switch. In case 2b the game is void because the host "accidentally" revealed the kittens - maybe you restart the game or see it as a draw - anyway whether you should switch or not is no longer a valid question in this case YET this case may happen, invalidating the reasoning done in the original problem. Conclusion: if the host did not reveal the kittens (probability 2/3) you win 50% of the cases if you switch ( (1/3)/(2/3) ) (and also 50% of the cases if you don't - i.e. whether you switch or not doesn't change your chances of winning) Edit: Completely rewritten. I don't like this question. It should be 50/50, the only reason why it may not be is if you study it from a particular point within the game tree, in a particular manner, that doesn't represent the question which was asked. We all know each statistical process is individual and random when looking at the scope of what it involves. Ie, regardless of weather you flip a coin a million times on heads in a row, it still has a 50/50 chance the next time. If you link together their probabilities by saying you will get X amount of heads, it becomes a different question. At this point in the game show, regardless of what could have happened, or what may happen. We only know for 100% certain that: 1) There are 3 doors. 2) 1 Door has kittens behind it 3) 1 Door does not have kittens behind it. Hence, in this particular statistical state, he happened to pick the one without kittens. So you cant physically change the fact that there isnt any kittens there after it has happened. Last edited by elminster on Mon Mar 19, 2007 12:31 am UTC, edited 2 times in total. Sygnon wrote:I recently had to explain this to a small group, I have found that when explaining the problem you can change the numbers and imagine that you start with 100 doors, select 1, open 98, This immediately keyed people in on what was happening. Anyway, orthogonal to the thread but i thought it was appropriate. I never found that argument compelling. I don't see how it works with 100 doors any better than 3 myself. The only explanation that I've found actually *satisfying* (as opposed to just a mechanical proof or empirical demonstration that proves or shows what the right answer is that doesn't help with the intuition) is that if you choose the right door in the first place (1/3 chance) you lose if you switch, and if you choose the wrong door in the first place (2/3 chance) you always win if you switch, so in some sense you're picking the other *two* doors if you switch. NB: for the following, I am referring to the "You pick a door, the host then always opens one of the two other doors, which is always a losing door, and you're always given the opportunity to switch" problem, which is the classic Monty Hall problem. The first step when you're trying to convince someone that an intuitive answer is wrong should be to disprove the intuition. Showing the proof of the correct answer should be second. In this case: the intuition is "there are two doors, therefore 50/50". The maths says "If there are n indistinguishable options, the chances are 1/n"... but the doors aren't indistinguishable... one of them the host couldn't open because you've chosen it... the other the host could open, but elected not to... possibly at random, but possibly because it's the winning door. So the doors are distinct, and a such, they're not necessarily 50/50. To make the 1/3 result more clear, imagine that the doors are able to be physically moved around (I often use an analogy with Deal or No Deal here, and the briefcases). The door that you choose, you take with you to your side of the room, the two you don't choose I keep with me. The chance that you have the prize is 1/3, the chance that I have the prize is 2/3... this much is clear, since the doors are still indistinguishable (the only distinguishing feature is that you picked the one that you did, and you made the choice without any information). Now, I open one of my doors and show it's the loser. This doesn't give any new information with respect to which side of the room the prize is on, since you know that one of my doors is, indeed, a So the chance you have the prize is still 1/3, and the chance I have it is 2/3. In the 2/3 chances that I have the prize, however, it is always the door I haven't opened, so there's a 2/3 chance that the prize is behind my closed door - you'll have a 2/3 chance if winning if you switch. It is as EvanED says... if you choose wrong in the first place, which you have a 2/3 chance of doing, then switch, you will always win. However, Yakk's gripe is very valid, hence the note at the top of my post. While no one overhear you quickly tell me not cow cow. but how about watch phone? skeptical scientist wrote:I once asked this as a riddle on a long car trip, and unsurprisingly nobody got the answer. However, after I revealed the correct answer and explained it, they kept arguing and wouldn't believe I had the correct answer (in fact, the debate lasted several days.... that's the point you start playing it for money until they twig Say for example, we have a different quiz. There are 3 cups, 2 are upside down, 1 is the right way up. You are told one of the 3 cups contains a ball. If you know for a fact that the right way up cup does not, and will not contain the ball, even if it could, it wont for any particular asking of this simulation. The probability would be 50/50. It represents the same probabilistic state as the game show though. The only reason why it would be 1/3 and 2/3 is on the assumption that he "could" have picked the winning one. Its only really the solution to the Nth cases. But using probability of Nth case does not apply when you are already part way through a probability tree for a particular instance of a probability that contains more than 1 step. Nth case probabilities cant be used to "predict" outcomes in reality. Just because you got heads on the last flip of the coin, the Nth case says it should be tails, but that is irrelevant to what's actually happening. Edit: also im probably wrong somewhere, even after reading the wiki on it 3 times. This is just a confusion by looking at the second probabilistic state from the first probabilistic state. ie, assuming the chances at the first state are still true, even though were in the second state. Here's how I see it: There are three possible cases after your first turn: Two where you got it wrong, one where you got it right. You don't know which it is yet, but this is important Now the host shows you a selection which is wrong. For the first two cases, only one other selection is wrong and thus if you switch you'll get it right. For the last one you already have it so if you switch you get it wrong. But as you can see, not switching gives you a 1/3 chance of getting it right while switching gives you a 2/3 chance. Cranking up the number of doors usually works for me when people need convincing. "Don't worry; the Universe IS out to get you." elminster wrote:There are 3 cups, 2 are upside down, 1 is the right way up. You are told one of the 3 cups contains a ball. If you know for a fact that the right way up cup does not, and will not contain the ball, even if it could, it wont for any particular asking of this simulation. The probability would be 50/50. It represents the same probabilistic state as the game show though. Not quite... in this example, the incorrect cup is turned over before you make your first selection, but in the Monty Hall problem, the incorrect door is opened after your first selection. The two cases are distinct, since in the Monty Hall problem, when you choose a door, the host can't open the door you chose... so you're restricting the host's choices, so the probabilities change. In your problem with the cups, the probability is 50/50, because the two facedown cups are indistinguishable. In the Monty Hall problem the two closed doors are distinct, so it's not necessarily 50/ While no one overhear you quickly tell me not cow cow. but how about watch phone? It's rather easy to prove that the answer with math if you make define a distribution: Assume a uniform discrete distribution between which of the doors (1, 2, or 3) the kittens are behind (a fairly resonable assumtion, agreed?) : f(x) = 1/3, x = 1,2,3. Define the set A as the set where x = 3, so the size of A is 1 Define the set B is the set where x != 2, so the size of B is 2 The size of (A int B), or the intersection of the two sets, would be 1 because the element x = 3 is present in both. P(A|B) = P(A int B) / P(B) ... = {|(A int B)| / |S|} / {|B| / |S|} ... = |(A int B)| / |B| ... = 1 / 2 Same goes for P(C|B) where C is the set x = 1 P(A|B) would be the probability that the kittens are behind door # 3 given that they are not behind door # 2. This is the only thing you know: that the kittens are not behind door # 2. Say that x = 1 is the door that you picked initially and x = 2 is the door that the host showed you. It doesn't matter if this is the case, since you can define a set where that is a case (for, say, y) and do the above with that set. Edit: However, the question, as posed, also gives that the door is different from the one you picked, so there's an extra calculation since you can measure which door the host picks in terms of wether it has kittens behind it as a uniform probability (or it's probably good to assume it is rather than base it off the other facts since the host is rather external to the picking of the correct Eventually it'll lead down to 2/3. The graph at wikipedia was the best tool in making me see it intuitively. Last edited by Icaruse on Tue Mar 20, 2007 3:41 am UTC, edited 3 times in total. I think I just found an even easier way to explain it. You pick Door X. There is a 1/3 chance that the prize is behind it. The host opens Door Y. It does not have the prize. Door Z therefore has the prize whenever Door X does not, or 2/3 of the time. You forgot to condition your probability to the fact that you now know that it is not Y. Otherwise you have that P(Z) = 1/3 and P(X) = 1/3 leaving a contradiction that the sum of all probabilities = 1 Hey guys, Airplane on a Treadmill is still on the first page of the forum... let's stick to one question where people are arguing cross-purposes based on differing interpretations of the question at a time, shall we? While no one overhear you quickly tell me not cow cow. but how about watch phone? The Wikipedia write-up has that you know the host to always open a closed door that you didn't pick, then you can indeed calculate the conditional probability to have a 2/3 chance for switching. That requires that the host never opens the door you pick, weigting the probability favorably that he narrows it down for you. So I guess this is, indeed, a problem of phrasing and interpretation. EvanED wrote: Sygnon wrote:I recently had to explain this to a small group, I have found that when explaining the problem you can change the numbers and imagine that you start with 100 doors, select 1, open 98, This immediately keyed people in on what was happening. Anyway, orthogonal to the thread but i thought it was appropriate. I never found that argument compelling. I don't see how it works with 100 doors any better than 3 myself. say there are 1 million doors and you guess that the car is behind door #4. The host goes, "ok, now i'll open all the doors except door #532,592." and there's no car behind any of them. at this point, which one are you going to pick, your initial guess 4, or the one that the host left open? Hey, hurray, it's the Monty Hall problem! I learned about it first in this book called "The Curious Incident of the Dog in the Night-time." I'm not emo, I'm oboe. MotleyJesster (12:34:04 PM): Better than moping around being all "I do not need love, I have indie music and a wind instrument!" That book is a good read. However, all of probability is wrong. Any event that happens always had probability 1 as no other event could have happened. Any other event has probability 0 as it didn't take its chance to happen. So the probability of you winning in a real situation is 1 or 0 depending on so many factors. li te'o te'a vei pai pi'i ka'o ve'o su'i pa du li no Mathematician is a function mapping tea onto theorems. Sadly this function is irreversible. QED is Latin for small empty box. Ceci n’est pas une [s]pipe[/s] signature. Uh, cmacis, are you piss-taking, or are you actually posting in support of actualism? Because under that philosophy, there's really no point in doing any kind of science, ever. Which means that, even if actualism is true, it's useless for actually figuring anything out. Treatid basically wrote:widdout elephants deh be no starting points. deh be no ZFC. (If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome) I just don't like probability. Or statistics. li te'o te'a vei pai pi'i ka'o ve'o su'i pa du li no Mathematician is a function mapping tea onto theorems. Sadly this function is irreversible. QED is Latin for small empty box. Ceci n’est pas une [s]pipe[/s] signature. I can't see any posts that have put it as simply as I see it: (The scenario where we know a door without kittens will be opened after we choose.) The probability is 50/50, as we are always only choosing between two doors, one with and one without the kittens. There is no third door to consider because it is always removed from the equation and we know this. As for the other scenario where we don't know that a losing door will be opened, there is a neat pictorial represenation of this in the book The Curious Incident of the Dog in the Night-time. Although it is used to illustrate the first scenario, which makes it incorrect. Yay!! I'm a llama again! yellowtamarin wrote:I can't see any posts that have put it as simply as I see it: (The scenario where we know a door without kittens will be opened after we choose.) The probability is 50/50, as we are always only choosing between two doors, one with and one without the kittens. There is no third door to consider because it is always removed from the equation and we know this. As for the other scenario where we don't know that a losing door will be opened, there is a neat pictorial represenation of this in the book The Curious Incident of the Dog in the Night-time. Although it is used to illustrate the first scenario, which makes it incorrect. Yes, we know the 3rd door doesn't have kittens. That doesn't mean there's a 50/50 chance between the other two doors. That would be assuming the doors were identical - and that is not true. You have been given more information about the door you didn't chose than the door you chose - that is, if one of the doors you didn't chose had kittens behind it, then that's the door that wasn't opened for But that's confusing. Here's my simple version: You choose one door. You have a 1/3 chance of that door being correct and a 2/3 chance one of the other doors is correct. One of the other doors is revealed to not contain kittens. There is still a 2/3 chance one of the two doors you haven't chosen contains kittens. Of course, there's a 0 chance that the door that was revealled contains kittens. So there's a 2/3 chance the remaining door contains kittens. So if you stay, you win 1/3. If you change, you win 2/3. But it took me about half an hour of arguing to figure it out You choose one door. You have a 1/3 chance of that door being correct and a 2/3 chance one of the other doors is correct. One of the other doors is revealed to not contain kittens. Your explanation is true if the chooser doesn't know that a losing door is going to be opened after they choose. But if they already know this is going to happen, they are essentially only chosing between 2 doors. The quote above can be swapped around the other way and will look like this: "One of the doors is revealed to not contain kittens. You choose one door. You have a 1/2 chance of that door being correct and a 1/2 chance the other door is correct." You can only swap it around because you already know a door is going to be removed after you choose. Yay!! I'm a llama again! Sorry SpitValve for not quoting you properly, I haven't done a post in a forum for about 8 years. I've read through the Wikipedia explanation, including the part that tries to tell me where I'm "going wrong" and I still disagree If I were the contestant on the quiz show, and I had watched it many times before so I knew what was going to happen, I would be choosing between a door with a car or a door with a goat. One of the goat doors is going to be removed so it is irrelevant. Wikipedia says you can't ignore the past, meaning you can't change the odds after one door has been opened. I agree, but I say the odds were always 50/50. BTW I actually originally thought the chance was 1/3 v 2/3. I didn't think it was 50/50 until I thought about it more (a lot). Yay!! I'm a llama again! Yellow, run simulations. Do a thousand runs sticking with your door, then a thousand switching to the other door. Now pretend you don't know that the host is going to open one of the other doors (though he always does) and repeat the experiment. Do you actually expect different results? Last edited by Lothar on Wed Mar 28, 2007 9:25 am UTC, edited 1 time in total. Always program as if the person who will be maintaining your program is a violent psychopath that knows where you live. If you're not part of the solution, you're part of the precipitate. 1+1=3 for large values of 1.
{"url":"http://forums.xkcd.com/viewtopic.php?p=63603","timestamp":"2014-04-19T01:49:43Z","content_type":null,"content_length":"103289","record_id":"<urn:uuid:1d42e88d-fcb8-4fc2-97be-ad689acea017>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
Impact of Imperfect Test Sensitivity on Determining Risk Factors: The Case of Bovine Tuberculosis Imperfect diagnostic testing reduces the power to detect significant predictors in classical cross-sectional studies. Assuming that the misclassification in diagnosis is random this can be dealt with by increasing the sample size of a study. However, the effects of imperfect tests in longitudinal data analyses are not as straightforward to anticipate, especially if the outcome of the test influences behaviour. The aim of this paper is to investigate the impact of imperfect test sensitivity on the determination of predictor variables in a longitudinal study. Methodology/Principal Findings To deal with imperfect test sensitivity affecting the response variable, we transformed the observed response variable into a set of possible temporal patterns of true disease status, whose prior probability was a function of the test sensitivity. We fitted a Bayesian discrete time survival model using an MCMC algorithm that treats the true response patterns as unknown parameters in the model. We applied our approach to epidemiological data of bovine tuberculosis outbreaks in England and investigated the effect of reduced test sensitivity in the determination of risk factors for the disease. We found that reduced test sensitivity led to changes to the collection of risk factors associated with the probability of an outbreak that were chosen in the ‘best’ model and to an increase in the uncertainty surrounding the parameter estimates for a model with a fixed set of risk factors that were associated with the response variable. We propose a novel algorithm to fit discrete survival models for longitudinal data where values of the response variable are uncertain. When analysing longitudinal data, uncertainty surrounding the response variable will affect the significance of the predictors and should therefore be accounted for either at the design stage by increasing the sample size or at the post analysis stage by conducting appropriate sensitivity analyses. Citation: Szmaragd C, Green LE, Medley GF, Browne WJ (2012) Impact of Imperfect Test Sensitivity on Determining Risk Factors: The Case of Bovine Tuberculosis. PLoS ONE 7(8): e43116. doi:10.1371/ Editor: Frank Emmert-Streib, Queen’s University Belfast, United Kingdom Received: May 30, 2012; Accepted: July 16, 2012; Published: August 13, 2012 Copyright: © Szmaragd et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: This work was funded by Defra (http:\\www.defra.gov.uk) under the Animal Health and Welfare call (project number SE3239). Defra had some input in the design of the initial field trial and in the collection and original analysis of the data. They did not actually have any input in the design and modelling/analysis of the data as presented in the paper. The authors provided with the data by AHVLA, came up with a new modelling approach (described in the paper), reformatting the data to the format required for the analysis, analysed and wrote the paper without any help by the funder. The authors sent Defra the draft manuscript prior to submission for comments as required by the term of their funding. The funders had no role in the data selection, preparation and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist. The estimation of disease incidence and prevalence, and the identification of potential risk factors associated with a disease are hampered by imperfect diagnostic tests. While the imperfect nature of tests is widely acknowledged, and several methods have been devised to account for imperfect test sensitivity and specificity when estimating disease incidence and prevalence (e.g. [1]–[3]), the impact of imperfect testing on the determination of risk factors has rarely been directly studied. The methods proposed to correct for imperfect testing have generally been based on sensitivity analyses and produce adjusted prevalence estimates for specific scenarios. This is a valid approach for cross-sectional studies, but ignores the implications that the test result of an individual subject (or unit) might affect the testing regime and the subsequent tests performed on the same subject/unit in a longitudinal setting. In this paper, we consider the use of discrete time survival models [4] to study risk factors for disease diagnosis in a dynamic context (i.e. where the status of an individual unit at a point in time is dependent on the status at the previous time point), and propose a novel extension to the standard model to handle imperfect tests using Monte Carlo Markov chain (MCMC) methods. The methods proposed can be used in many infectious disease scenarios but here we focus on modelling of risk factors for bovine tuberculosis (bTB) in Great Britain using a subset of data from the Randomised Badger Culling Trial (RBCT) [5]. There are many potential approaches for modelling this dataset and the purpose of the current work is not to propose a definitive best fitting procedure but to examine the impact of test sensitivity on one particular approach by adjusting for the imperfect sensitivity of the currently used test to define exposure to bovine tuberculosis, the single intra-dermal comparative cervical tuberculin (SICCT) test. Bovine tuberculosis is a major economic problem for the British cattle livestock industry, which has seen a continuous increase in the disease incidence rate over the past 25 years [6]. Incidence and prevalence of bTB is generally reported in terms of numbers of new herd break downs (HBDs) per area, where a HBD occurs if at least one animal in a herd tests positive when the herd is not currently known to be infected. The number of HBDs is closely linked to the testing regime and in Great Britain (GB); routine testing of a cattle herd is conducted at a frequency determined by the prevalence of bTB in the parish as observed by the testing regime [7]. The guidelines for testing are in theory quite clear: during routine testing, all qualifying animals in the herd are tested using the SICCT test. Three days after testing, the test is read and if at least one animal is a reactor (i.e. the animal has reacted positively to the test), the herd is placed under movement restrictions; all reactors are compulsorily slaughtered and subject to post mortem examination. This event is known as a HBD. Results of the post-mortem examination and/or laboratory culture of tissues lead to the HBD being either classified as confirmed (evidence of M. bovis found) or not [8]. Herds in which reactor animals are confirmed are re-tested at a minimum interval of 60 days until they have had two consecutive clear tests. They are re-tested again after a minimum interval of 6 months and again after a further 12 months. All reactors removed prior to a second clear 60 day test are attributed to the breakdown incident and animal movement restrictions remain in place throughout the period [9]. Herds in which M. bovis is not confirmed in any reactor are also placed under restrictions and re-tested after 42 days. If this test is clear, restrictions are lifted and the herd returns to the routine testing cycle. Confirmation of a HBD also triggers testing of contiguous herds and herds from which reactor animals have been purchased during the period extending back to two months before the previous clear herd test. Any animals sold by the breakdown farm to other farms during this same period are also tested. The testing regime is thus designed to follow possible paths of transmission to neighbouring herds or through traded cattle to attempt to contain the infection, as well as to provide surveillance data. In the literature, numerous risk factors associated with cattle HBDs have been identified and can be categorised into cattle- or herd-related factors (including cattle movements) and wildlife (e.g. badger) factors [9]–[17]. The risk factors identified vary between studies according to the data available and statistical approaches used. Little attention has so far been given to the effect that the imperfect nature of the tuberculin test has on the significance of the risk factors. While the specificity of the test is generally accepted to be close to 100% (i.e. there are very few false positive test results), previous work has demonstrated that the sensitivity of the test is likely to lie somewhere between 50% to 100% with large variation depending on the estimation method used (Monaghan et al. (1994) [18] suggested a range of 68% to 95%, de la Rua-Domenech (2006) [19] a range between 75% and 95.5%, Clegg et al. (2011) [20] a range between 53% and 61% and Alvarez et al. (2012) [21] suggested a median sensitivity of 66–69% with high variability). The stage of infection of the animal tested and previous exposure to the SICCT have been shown to reduce the sensitivity of the test [22]. The consequences of the imperfect sensitivity of the SICCT test are not straightforward to assess due to the modality of the test which includes possible repeated testing of same individuals depending on the results of previous tests. In this paper we propose a new methodology that can be used to incorporate imperfect test sensitivity into a discrete time survival modelling framework. We describe methods for conversion of the complex testing regime that exists for bTB in GB into an underlying discrete time (annual) response pattern of HBD responses for each herd. We then show the impact of the imperfect test on identified risk factors under a range of possible values for the sensitivity of the SICCT test. Materials and Methods The aim of this paper was to investigate the impact of sensitivity of the single intra-dermal comparative cervical tuberculin (SICCT) test on the determination of risk factors associated with a positive bTB test result. As such, we did not attempt to infer or model the true disease or infection status of a cattle herd, but instead modelled the effect of having uncertain test results, irrespective of the underlying unknown infection status. Data Background Since the discovery of a badger infected with bovine tuberculosis (bTB) in 1971, which raised questions on the role of badgers as a vector and reservoir of the pathogen, culling badgers became one possible measure to control the spread of the disease in addition to the control of cattle. The combination of the 1992 Badger Protection Act and the conflicting evidence regarding the efficacy of badger culling as a control measure for bTB, prompted a comprehensive review of the subject [23]. This resulted in 1998 in the design of a large scale clinical trial, the Randomised Badger Culling Trial (RBCT) [5]to evaluate the efficiency of two methods for culling badgers in altering the incidence of cattle herd breakdowns (HBDs) due to bTB. The RBCT ran from 1998 to 2005 (with the last surveys occurring in March 2006) and targeted regions of the UK with a high reported incidence of bTB. Ten ‘triplet’ areas were defined and following an initial survey of both farms and badgers’ social territories, each study area (triplet) was separated into a control trial area where no badger culling occurred and two treatment areas differing in the way badger culling was implemented. In the reactive treatment area, culling of badgers was performed on neighbouring farms where a bTB herd breakdown occurred. In the proactive treatment area, culling occurred over the whole area on an annual basis. For this paper, we restricted the analysis to one of the ten proactive culling trial areas (area B2). The area was chosen as it covers the full duration of the RBCT trial from 1998 to 2005. The B2 culling area comprised 174 herds corresponding to 167 distinct County Parish Holdings (or farms) registered to 164 registered land owners as per the RBCT database. Multiple data sources were used to construct the response variable and the different predictors. Data related to herd bTB status and test results (including number of animals tested, number of reactors) were extracted from the Vetnet database. Animal movement data related to the farms present in the area were extracted from the British Cattle Movement Service (BCMS) Cattle Tracing System, and the number of animals moved aggregated by farm of origin and destination, and by calendar month. This was used to construct cattle movement predictors. From the data collected during the RBCT itself, we extracted information mostly relating to badgers. The RBCT database contained records of badger setts identified on different parcels of lands across the trial area. The badger data are also organised at a spatial level of badger social groups. The badger information can thus be related to the farm data using an association table relating badger social group, badger sett and registered land owner, these three variables forming a unique searching key between the badger trapping and survey data, and the farms. The final data source used was GIS information, which enabled building maps of both badger social groups and farmland, to define the neighbourhood structure for each farm (i.e. how many neighbours, each farm had) and to calculate distances between the centroid of each farm and the centre of the trial area. The GIS information was also used to identify farms with multiple parcels of land across the study area and to link each of these parcels to possible overlapping badger social groups. Response Variable – Relationship between Cattle Herd Breakdowns and Test Results We chose as our response variable, a binary indicator of a cattle herd breakdown (HBD), considering both confirmed and unconfirmed HBD together as both have an impact on testing regime. By a HBD, we mean a herd being placed under restriction (unable to move animals off the farm) following a positive bTB test (which is the case for both confirmed and unconfirmed HBDs). Thus we modelled the risk of the herd being put under restriction and not the risk of becoming infected which is not known. During the trial period, all cattle herds in area B2 were supposed to undergo a whole herd test on an annual basis. However, herds were not always tested each year with a total of 219 out of 1302 annual tests missing and 30 occurrences where a herd was not tested for two or more consecutive years. The reasons for the absence of a test were not always obvious from the data, and we considered such tests as “true missing tests”, i.e. a test was due and was not performed. This “missing test” could be due to the absence of eligible cattle in the herd, which could still leave a residual risk of bTB in the herd [15], or it may be an artefact of our time-interval selection, i.e. there may be just over 12 months between two consecutive tests. Discrete Time Survival Models In discrete time survival modelling, we model the risk (or hazard) of an event happening, considering time as a succession of discrete time intervals during which the individual may or may not be at risk of the event occurring [4]. In our case, the event of interest was a bTB HBD. The unit of study was the herd, organised within farms within registered land owners; herds are uniquely identified with a County-Parish-Holding-Herd (CPHH) number. Observations were considered on an annual (i.e. 12 months) time interval to reflect the fact that herds followed an annual testing regime. Any shorter time step would result in a large amount of missing data because herds testing negative for bTB were rarely retested in the same year. Our annual time step (test-year) ran from the 1^st of February of each year to the 31^st of January of the following year, in what we called a “badger-year”. This choice was motivated by the trapping methodology used in the RBCT which was restricted to the 1^st of May through to the 31^st of January the following year to reduce the chance of trapping lactating badger sows with dependent cubs. By choosing to use a badger-year, we thus ensured (1) that any cub caught in a specific year was born in that year and (2) that two trapping seasons would not be considered within the same year. The change in temporal unit from calendar year to badger-year had no influence on the bTB testing regime in place for cattle. Since some herds may have experienced multiple HBD events (on average a herd experienced 2.5 HBD during the study period), we included in the model a random effect at the individual occupier level assuming multiple herds associated with the same land owner will share the same random variation (there were only 10 more herds than owners). Based on results from previous work (Szmaragd et al. in preparation), we did not include a conditional autoregressive (CAR) spatial random effect as any spatial effects were assumed to be captured by the predictor variables. This resulted in a hierarchical structure with time periods nested within herds nested within owners but only owner level random effects: h[ijk](t) is the hazard of an event occurring in time interval t during episode i for individual herd j for occupier k. X[ijk](t) are the predictor variables which might be time-varying or defined at the episode, individual herd, farm or occupier level. It also includes a polynomial function of the time at risk (see Table 1), which represents the baseline hazard of the event occurring based on the time since the previous event. Table 1. List of the 12 possible patterns resulting from a 6-year pattern of herd status. u[k] is a random effect representing unobserved characteristics common to all episodes experienced by all herds sharing the same land owner k. We follow the common assumption that the u[k] are normally distributed with mean 0 and variance σ[u]^2. Constructing the Response Variable in the Presence of Missing Test Results and Imperfect Test Sensitivity When working with discrete time survival models (or event history data), the presence of missing outcomes for any individual impacts on the whole sequence of events/observations for that individual. Indeed, assuming the missing observation is an event (or respectively a non-event), this will alter (i) the probability of the following observation being an event and (ii) the time when the next episode starts. Before fitting the discrete time survival model described above, we transformed the observed responses based on the results of the tuberculin test into a set of patterns of HBD events which are the response variables used in the model and represent the underlying true disease status. The process used to create the pattern of HBD events is described in figure 1. Figure 1. Diagram of the data preparation process. Example based on a hypothetical herd. *The herd might not exist for the whole duration of the study period. Missing year at the start and/or end of the study period are ignored, as the dataset does not to be balanced. For each herd, our procedure starts with a list of test results grouped per badger-year. In the example of figure 1, this example herd had not been tested for years 3, 7 and 8. This list of test results is then transformed into a list (or vector) containing HBD states. We defined four states, which represents the restriction status of the herd based on the bTB test results: 1. 0 being defined as at least one bTB test occurred and all tests were negative, i.e. herd was not under-restriction. 2. 1 being defined as at least one HBD occurred and the herd was not unrestricted by the end of the period (2 consecutive negative tests in the period) by the end of the period, i.e. herd was still under-restriction at the period end. 3. 2 being defined as a HBD occurred but the herd was unrestricted by the end of the period (last 2 tests were negative in the period), i.e. herd was under-restriction for part of the period but the restrictions were lifted by the end of the period. 4. M no test occurred in this period and so the annual (test results) state is unknown. The next step consists in accounting for the imperfect sensitivity of the test in defining the sequence of HBD. The imperfect sensitivity means that for each recorded 0 or 2 in the list of HBD based on the test results, there is a probability that in reality this state should actually be a 1 (i.e. the corresponding tuberculin test is truly positive). Thus for each 0 or 2 state in the list of HBD states for a particular herd, two possibilities arise: either the true state is actually a 1 (and the test was a false negative) with probability p or the test was a true negative and the true state is the same as the observed state with probability 1–p. This is propagated through the full sequence of HBD states for that herd leading to a set of possible patterns with associated prior probabilities depending on p (following a binomial probability distribution) as illustrated in figure 1. The probability p is related to the test sensitivity but is actually closer to the negative predictive value and so will depend on the underlying prevalence of bTB in the population as explained Some herds contained missing HBD states for certain years which are found through the set of patterns. We dealt with missing tests via a rule based approach so that some missing tests were filled in deterministically whilst others were uncertain and hence this resulted in additional possible response patterns being created for each herd. Our approach to filling in missing values is presented in Table 2 based on the herd states before and after the period with missing test(s). If the HBD states are identical before and after the period with missing test(s) and the before state is not a 2, then the missing HBD state take the same value. If the before state is either a 0 or 2 and the after state a 1 or 2, then the missing state can be either a 0 or 1, with equal probability. Finally if the before state is a 1 and the after state a 0, the missing state can be either 0, 1 or 2 with equal probability. Table 2. The deterministic rules used for filling in missing test values. When the missing test values had been filled in and the patterns were fully defined (i.e. there are no longer any missing value), we transformed the resulting set of state vectors into response vectors indicating whether the herd was “at risk of HBD but had not broken down” ( = 0) or “was not at risk” because an event ( = 1) had occurred. If the herd was not at risk for two or more consecutive time steps (consecutive 1s or a 2 following a 1), the second and consecutive 1s were discarded from the data. We summarised in Table 1 the list of 12 patterns of HBD states represented in figure 1, with the corresponding response and time at risk vectors and the prior probability of each pattern being determined by the value of p. Based on the fact that the test is designed with high specificity to avoid true negative herds being placed under restriction, we assumed for simplicity that the number of false positives was minimal and could be ignored. We expressed p as the ratio between the number of false negatives (FN) over the number of herds tested negative (FN + TN, TN being the number of true negative tests): By introducing the number of true positive tests (TP) into the ratio, p was related to the sensitivity of the test: with S[e] being the sensitivity of the test and NH[r] and NH[t] being respectively the number of herds restricted (i.e. tested positive) and the number of herds tested. NH[r] and NH[t] were obtained from the Defra national statistics for TB for each year from 1998–2005 and for each county in the UK. As we are interested in areas within the RBCT, we compiled NH[r] and NH[t] as an overall average for the RBCT counties over the RBCT period. In addition to the perfect case of S[e] = 1 or p = 0, we tested a range of sensitivities from 0.5 (50%) to 0.95 (95%) and these corresponded to values of p ranging from 0.008 to 0.153 (Table 3 provides the corresponding values of p for the sensitivities tested). Table 3. Values of p used as prior probability of each pattern, for a range of possible sensitivity values. We considered the value for the sensitivity of the test to be a constant rather than a parameter in the model and populated the list of possible response patterns with associated prior probability distribution for each herd before running any statistical models, thus reducing the computation time required for estimating the model parameters. An extension that allows this parameter to be estimated is potentially feasible but would require a strong prior to ensure the model is identifiable and would require the pattern prior probabilities to be recalculated at each iteration (thus considerably increasing the computation time). The true patterns (one of the set of the different possible patterns of true disease status for each herd) are treated as parameters to be updated as part of the Markov Chain Monte Carlo (MCMC) algorithm that we used to fit the model (see Appendix S1 for details). MCMC algorithms are iterative simulation-based procedures, where parameters are grouped and updated in separate steps within each iteration. A pattern is therefore selected in one step of the algorithm in each iteration using the associated prior probability as a proposal distribution, and that pattern is then considered as known for the other steps in the iteration and the other model parameters (fixed effects estimates and variances) can be updated conditional on this fixed data. We developed a general MCMC algorithm to fit discrete time survival models with potential multiple patterns for each herd, in the general case of assuming a 2-level random intercept model. We implemented our algorithm in special purpose computer code written in C++ (see Appendix S1 for the full description of the algorithm) and tested the code by comparing the results obtained by our algorithm with the ones given by the WinBUGS software package [24] for the cases with no missing tests. Using the model fitting process described below, we estimated the best fitting models for each value of test sensitivity. We also considered the best fitting model found under the assumption of a perfect test (Se = 1 or p = 0), and looked at the effect on its parameter estimates of decreasing test sensitivity. Predictor Variables Considered All the predictors were defined in relation to the response variable which was the risk of a HBD (binary variable) for a herd j in badger-year y. Some predictors were constant for the same herd across the different years, whilst some varied. Some predictors were also constructed by introducing a lag component, i.e. looking at the value of a predictor at a specific number of years prior to the year of the response (for simplicity the reference year will be termed “test-year” or badger-year). We constructed nine badger variables: the number of badgers trapped in the test year and the previous year, the number of bTB positive badgers trapped in the test year and the previous year, the number of badgers estimated alive in the test year and the previous test year (constructed deterministically from dental records), the percentage of badgers infected in the test year and the previous test year and the cumulative number of badgers trapped from the start of culling to the test year. None of these predictors were significant when area B2 was considered in isolation so we omit further details. The herd-level variables were herd enterprise (dairy-only, beef-only, mixed), mean herd size in test year and the previous year, number of calves born in the previous two years, new stock from homebred calves only (binary), whether the herd was depopulated in the 1–5 years or ever, before the test-year, number of cattle tested, number of reactors in the previous 1 to 5 years, cumulative number of reactors, missing test (in the test year or previous year), number of months since last test. CPH/occupier level variables were the number of land parcels associated with the farm owner, average field size (ha); total farm area, stocking density, distance between the centroid of the farm and the centre of the trial area (m), number and average herd sizes of neighbouring herds, categorised by whether they were not tested, tested positive or negative for bTB. Continuous variables were computed for each herd over a badger-year. If the herd tested negative in that year, the value of the variable was the average of the variable over the badger-year, while if the herd had a HBD, the value of the variable was obtained by averaging over the period from the start of the badger-year to the date of the first positive test or in the case of variables related to the number of reactors we used the sum over the whole badger-year. The herd enterprise type was obtained from data records from both the RBCT database and historical records from VetNet for 2000 and 2002, allowing for some changes in herd enterprise type during the course of the trial. There was a foot and mouth disease (FMD) epidemic in 2001 during which time some herds were depopulated. Herds were marked as depopulated if they decreased in size (through slaughter) by more than half in a month. The set of predictors constructed from cattle movements were the number of cattle moved to a farm either directly or through market, categorised by year, test status of source farm (untested, positive, negative) in the previous and following year(s) and the testing frequency of the source farm. The number of cattle sold in the 12 or 12–24 months prior to the first unrestricted bTB test in any year were also considered as predictors. Rare movements (i.e. variables corresponding to types of movement where in total fewer than ten cattle were observed performing the type of movement in the whole dataset) were removed from the list of possible predictors prior to modelling. Model Fitting Process The model fitting process considered is similar to one given in Cox and Wermuth (1996) [25], and was performed in three main steps within an iterative loop (figure 2). The first step consisted of fitting univariable models where each predictor was added on its own to a base model containing random effects, the baseline hazard function and an intercept. When all predictors have been tested, they are ordered according to their “z-score” defined as the absolute value of the ratio of the posterior mean for the predictor to its posterior standard deviation. Predictors with a z-score larger than 1.96 are considered significant. In the second step, all the significant variables were included in a single model except for highly correlated variables (Spearman correlation coefficient more than 0.7). If two variables were strongly correlated then the variable with the higher z-score was preferentially included. The model including all these predictors was fitted and we then proceeded by removing no longer significant variables one at a time, refitting models as we went. This process continued until we reached a model where all the predictors were significant. In the third step, the optimum model for this round was used as the starting point for the next round. Step one was then repeated using this model as the base model. Once the remaining predictors had been tested in a univariable fashion, we checked for any new significant variable, which was not correlated with any predictor already in the model. We then repeated the second step, followed again by the third step (if necessary) until no more significant predictor could be added to the model. Figure 2. Diagram of the model fitting process. *cubic function of the time at risk variable was initially used. At the end of the first univariable iteration, the significance of each term will be assessed as part of the predictor selection step. None of the three terms were found significant and the time at risk variable was thus removed from the model ^§A predictor is considered to be significant if its z-score (|posterior mean|/(posterior standard deviation)) is larger than 1.96 ^‡The models are fitted using the MCMC algorithm detailed in Appendix S1. Each model was run twice (to check for convergence to a unique mode) for 50,000 iterations following 5,000 burnin iterations. The initial values for the first chain were set to 0 for each predictor and to 0.1 for the variance of the random effects. The second chain was initialised using the opposite signed value of the coefficient values obtained for each predictor from the first chain, and using 1 for the variance of the random effects. Except for the choice of the pattern corresponding to the true disease status where we used the prior distribution derived from the test sensitivity, we did not have any prior information for the other parameters of the model. We, thus, used non-informative priors: uniform priors for fixed effects and a prior for the variance parameter. Using data collected in one area of the Randomised Badger Control Trial (RBCT) where proactive culling of badgers occurred, we tested the effect of varying the bovine tuberculosis (bTB) test sensitivity from 50 to 100% on the identification of risk factors for bTB herd breakdown (HBD). Effect of Reduced Test Sensitivity on the Identification of bTB HBD Risk Factors The best fitting model obtained under the assumption of perfect test sensitivity contained several predictors (see Table 4 for their full description), half of which were related to farm or herd demographic characteristics (including farm neighbourhood and the effect of the 2001 FMD epidemic); with the other half being related to cattle movements onto the farm (Table 5). This model highlighted that selling animals a year before being tested for bTB significantly reduced the risk of a HBD; this protective effect persisted across the range of sensitivity values. Perhaps not surprisingly, herds with larger number of animals tested and more calves born the year before the test had a significantly higher risk of a HBD. This increasing risk of HBD with larger herd size was also consistently found across the range of sensitivity albeit through different but highly correlated predictor variables such as the log of the mean herd size and number of calves born two years before the test (see Table S1 for the full correlation table). Previous HBD history as represented by the number of reactors two years before the test was also a significant risk factor which persisted across the range of sensitivities except when the test sensitivity dropped to 50% where it was replaced by the cumulative number of reactors in the previous four years (Spearman’s correlation coefficient of 0.58). The 2001-FMD outbreak appeared to mark an increase in the risk of HBD which persisted across the range of sensitivity values, however this could also be linked to a general increase in bTB prevalence with calendar time as has been reported across Great Britain. Table 4. List of predictor variables appearing in the models. Table 5. Best fit models obtained for each sensitivity value tested. With regard the effect of neighbouring cattle herds, herds with more neighbouring herds testing positive in the same year had a higher risk of HBD; while having larger neighbouring herds testing negative in the same year was protective against the risk of HBD; both effects probably reflect the testing regime. The increasing risk of HBD with more neighbouring herds testing positive persisted across the range of sensitivity values whereas the protective effect of being surrounded by large herds testing negative was replaced by the protective effect of having more neighbouring herds (or larger neighbouring) herds testing negative the previous year. While these different predictor variables are not highly correlated (supplementary Table S1), they represent a similar relationship between neighbouring herds and the probability of a HBD. When the sensitivity of the test dropped to 50%, two additional predictor variables became significant with opposite effects: the more neighbours a herd has in the previous year the higher the risk of a HBD (through higher probability of exposure and/or larger susceptible population) but the more neighbouring herds testing positive in the previous year the less the risk (possibly due to effective removal of the source of infection). The type of herd enterprise (beef or mixed versus dairy) was a significant risk factor only under the assumption of perfect sensitivity or very high sensitivity (95%). A number of variables related to purchase patterns were significant predictors for the risk of HBD in a given year but only two movements variables found under the assumption of perfect test sensitivity were consistent across the range of sensitivity values. These were the log of the number of animals bought the year before the test directly, from a farm which had tested positive at some point in the 12 months preceding the purchase, which was consistently associated with a decreased risk of HBD while the log number of animals bought the previous year from a farm which never tested positive before the move but after the 1^st of July 1996 when cattle passports were first implemented led to an increase in risk of HBD for all sensitivity values except for the best fiting model with 75% test sensitivity where it was replaced with the highly correlated indicator of depopulated herd. Two additional risk factors (including highly correlated factors), namely the number of animal bought through market from a farm testing positive in the year following the move and from a farm in a 3- or 4-yearly testing area which tested positive two years before the move, were found in the best fitting models for sensitivity value above 60%. The other movement predictor variables (a mixture of risk and protective factors) identified in the best fitting model in the case of the perfect sensitivity tended to disappear either partially (persisted for some sensitivity values but not others) or completely, while new movement predictors appeared. Effect of Reduced Test Sensitivity on the Parameter Estimates for bTB HBD Risk Factors As the variables identified in the best fitting models for different values of test sensitivity varied, we investigated the effect of reducing test sensitivity on the parameter estimates obtained for the predictors of the best fitting model found under the assumption of perfect test sensitivity while keeping the model fixed. The rationale here was that epidemiological studies aiming at identifying risk factors from field data generally assume a perfect test (100% sensitivity). It is, however unclear what effect on the parameter estimates would be observed if this assumption was not Our model fitting showed that while the size and the sign of the parameter estimates generally remained similar across the different sensitivity values, there was a consistent increase in the standard errors around these estimates with a decrease in sensitivity (Table 6). Some parameters ceased to be significant as sensitivity was reduced, mainly once this dropped below 60%, with the exception of one of the FMD indicators which lost significance as soon as the sensitivity of the test dropped below 95%. When the test sensitivity was less than 60%, the MCMC algorithm had convergence problems for the second chains with diffuse starting values. In our modelling approach, decreased test sensitivity led to more uncertainty in the actual pattern for each herd and the posterior distribution potentially became multi-modal. Here different patterns with similar prior probability of being correct will result in different predictors being significant. This also led to patterns with lower prior probability being more likely to be selected by the model than they were for higher sensitivity values. Table 6. Parameter estimates by test sensitivities (Se), based on the best fit model under the assumption of a perfect test. In this paper, we have presented a new approach for fitting discrete time survival models where the response variable contains missing values and at the same time where the known values of the response variable are surrounded by a certain amount of uncertainty. This type of data is especially found in epidemiology where the response variable relates to disease status, which itself is dependent on a (possibly imperfect) diagnostic test. The presence of missing values alone leads to a response variable which is not uniquely defined but instead takes the form of a set of multiple possible outcomes. This set of potential outcomes is further increased by the uncertainty relating to the imperfect nature of the test used, which affects the prior probability of each response We developed a Bayesian model to deal with such cases and applied it to historical data on herd breakdown (HBD) with bovine tuberculosis (bTB). We found that accounting for the imperfect sensitivity of the diagnostic test affects which risk factors are significantly associated with a bTB herd breakdown and in particular that a decreased test sensitivity leads to larger confidence intervals around the parameter estimates of each risk factors. A small set of predictors (mostly non cattle-purchase variables) were consistently significant across the best fitting models for each different value of test sensitivity. The cattle-purchase variables were the most affected by decreasing sensitivity values with only two out of the nine predictors identified for the perfect sensitivity best fitting model being significant across the whole range of sensitivity values. Whatever the true sensitivity of the test, larger herds (as represented by higher number of calves born, larger herd sizes, or greater number of animals tested) and herds with a history of bTB and larger number reactors found at a HBD 2 years previous to the current test had an increased risk of HBD while selling an animal before the test was protective. The marked increase in HBD after the 2001-FMD outbreak was also unaffected by decreased sensitivity. Our analyses also confirmed similarity in the pattern of HBD between neighbouring herds, which held through the range of sensitivity values with some variations in the predictors. As predictor variables we only considered the number of neighbouring herds that tested positive or negative rather than the number of neighbouring herds that were actually positive or negative. A possible extension of our modelling approach, when assuming less than perfect test sensitivity, would be to recalculate numbers of neighbours that are truly positive/negative for a given herd, at each iteration of the algorithm, for use as predictors but we will consider this extension elsewhere. The data we analysed were collected as part of a large scale clinical trial/field experiment designed to detect gross overall effects between different treatments (culling and survey) in an area. It was therefore, as such, not intended for the fine grain analyses we have performed. We included cattle movement predictors alongside badger and herd-demographic predictors. This resulted in a large number of predictors to be considered as possible risk factors, which can lead to the usual problems of multiple comparisons and might explain why some of the effects found were perhaps counter intuitive. However, the aim of this paper was to investigate the effect of decreased test sensitivity on the risk factors determined rather than identify these risk factors per-se Under decreasing values of sensitivity, slightly different sets of predictors appeared in the best fitting model, but with some of the predictors considered across models being strongly correlated with one another (see Table S1). Our statistical modelling approach identifies associations between predictor variables and HBD rather than causations and so care has to be taken in acting upon the findings, especially given the large number of predictors considered. Our analysis of the effect of reduced bTB test sensitivity highlighted an increase in the uncertainty surrounding the parameter estimates identified by the models, with some predictors losing significance altogether. This has important consequences for epidemiological field studies aiming at identifying risk factors for infectious disease, as they are based on the assumption of perfect test sensitivity. Given that the published estimates of sensitivity are around 60%–75%, our analysis suggests this could explain the difficulty in finding a consistent and reliable set of risk factors for bTB. It is therefore essential that estimates of bTB test sensitivity (at individual and herd level) be confirmed in the field, possibly by using complementary test diagnostics and analyses adjusted for the lack of sensitivity [20], [21]. Increasing sample sizes to account for imperfect test sensitivity is already advised when designing cross-sectional studies, but we have shown that the same holds true for longitudinal data. Future longitudinal field studies would benefit from being sufficiently large by adjusting for a less than perfect test sensitivity when calculating the sample sizes, if reliable conclusions are to be drawn from the data. The method we presented here is not just limited to the study of bTB risk factors or to epidemiological data in general. It could be applied to a wide range of datasets where the response variable may be surrounded by some level of uncertainty, which could influence the associated predictors. The deterministic set of rules used to resolve the case of missing data was specific to this dataset, being based on the testing regime which produces the response variables, but could be adapted to other scenarios. The solution we proposed illustrates how one could implement a similar approach when confronted with missing data in discrete time survival analysis, instead of removing the observations corresponding to a subject with missing responses. In our case, the data only had a simple non-spatial two level random effects structure but our method can be extended to include spatial error structure or higher levels of data structure. The only drawback of such methods is that they are computationally intensive and therefore slow to run. Supporting Information Full correlation tables for the predictor variables appearing in table 5 of the main text. Only Spearman or Pearson correlation coefficients over 0.20 are shown. In bold, are indicated Spearman correlations over 0.5. MCMC algorithm. Expression of the likelihood for discrete time survival model with multiple patterns. The authors thank Sam Mason from Warwick University for work on BCMS data extraction, and Andy Mitchell and Paul Upton from the Veterinary Laboratory Agency for providing the RBCT and Vetnet data. Author Contributions Conceived and designed the experiments: CS LEG GFM WJB. Performed the experiments: CS. Analyzed the data: CS. Contributed reagents/materials/analysis tools: CS LEG GFM WJB. Wrote the paper: CS LEG GFM WJB. Designed and coded the model algorithm: CS WJB.
{"url":"http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0043116","timestamp":"2014-04-24T15:35:11Z","content_type":null,"content_length":"150387","record_id":"<urn:uuid:22899971-bba2-4955-b497-87cbd2986aa5>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
Compton, CA Algebra Tutor Find a Compton, CA Algebra Tutor ...I'm available both online and in-person. I have zillions of references, ranging from students and parents to high school teachers, college counselors, and principals! You're pretty sure you want to contact me, right? 26 Subjects: including algebra 1, algebra 2, reading, English ...I am also an accomplished piano player, with more than 12 years' experience. I completed the Certificate of Merit program through level Advanced, which also means I am proficient with music theory. I am an avid photographer and videographer, and have had experience with freelance work in both fields. 47 Subjects: including algebra 2, algebra 1, English, chemistry I graduated from USC in 2012 with a degree in accounting. After graduation, I started working for one of the Big 4 Accounting firms. This past year, I passed all four parts of my CPA exam. 5 Subjects: including algebra 1, accounting, elementary (k-6th), elementary math ...Mandarin is the fastest growing language in Middle Schools, High Schools and Universities around the country. I fount out that the best way to learn Mandarin is to immerse my students in an environment where they are interested in what is going on. My students and I always have so much fun learning Mandarin. 7 Subjects: including algebra 1, algebra 2, calculus, Chinese ...The three of them (alone and together) show up powerfully in engineering, physics, cosmology, economics, etc., etc. They all contribute in practical ways to much of the modern world. Now I’m ready to describe the beauty of the equation. 7 Subjects: including algebra 1, algebra 2, geometry, precalculus Related Compton, CA Tutors Compton, CA Accounting Tutors Compton, CA ACT Tutors Compton, CA Algebra Tutors Compton, CA Algebra 2 Tutors Compton, CA Calculus Tutors Compton, CA Geometry Tutors Compton, CA Math Tutors Compton, CA Prealgebra Tutors Compton, CA Precalculus Tutors Compton, CA SAT Tutors Compton, CA SAT Math Tutors Compton, CA Science Tutors Compton, CA Statistics Tutors Compton, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/compton_ca_algebra_tutors.php","timestamp":"2014-04-17T19:57:37Z","content_type":null,"content_length":"23777","record_id":"<urn:uuid:2b8b596c-e71a-4d4f-9fbd-ab3ea3a7f25f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
CGTalk - combinig curves help plz? 07-06-2005, 06:17 PM hi i have curves, that are set up in a Y shape.. one side of the Y is 1 solid curve: : / the other side of the y is another curve \ so it ends up looking ilke this: :\ / : | and is a total of 2 curves... does this make sence? the point where all 3 of the curves i would like to combine into 1 curve(FAKED 1 curve).. like: edit polygon -> combine.. i dont want it to actualy be 1 curve, just so that if i select any one of the curves it selects them all.. not one curve then pickwalk up to a group etc it seems odd that you can combine only polygons in maya is there a mel/script/tutorial on how to combine other objects? dangit it seems like such a simple task ony i just cant figure it out! does this make sence?
{"url":"http://forums.cgsociety.org/archive/index.php/t-256205.html","timestamp":"2014-04-21T10:03:50Z","content_type":null,"content_length":"6333","record_id":"<urn:uuid:599c7377-8093-4d71-a3d5-827839c5422e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Undergrad. Chem/Math Major: Numerical Analy. or Stat. Theory Course? Hey There, I was just wondering if there were any people available on the site who could give me advice for which course to take in the upcoming fall 2010 semester. I'm an undergraduate chemistry and math major, nearing the end of my studies (1 year remaining). I have a math elective to consider, and was wondering if I should use it for Statistical Theory (which requires one term of Probability Theory, which I've completed) or Numerical Analysis (which requires a semester of computer science, which I've completed), or even Differential Geometry. I'm looking to go into graduate theoretical chemistry, p.chemistry, or chemical physics. This is my training in mathematics as of now: Calculus I-IV Mathematical Theory Diff. Equations Partial Diff. Equations Linear Algebra I & II Probability Theory I Complex Analysis Abstract Algebra Real Analysis What do you think? Anything you suggest that I did not mention other than Stat. Theory/Num./Diff.Geo?
{"url":"http://www.physicsforums.com/showthread.php?p=2692288","timestamp":"2014-04-18T10:40:07Z","content_type":null,"content_length":"20802","record_id":"<urn:uuid:66f483f6-6f2c-480b-9916-01c1d8c93ef6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: September 2010 [00632] [Date Index] [Thread Index] [Author Index] Re: Defining a function using common notation for absolute value (not Abs[x]) • To: mathgroup at smc.vnet.net • Subject: [mg112758] Re: Defining a function using common notation for absolute value (not Abs[x]) • From: Bill Rowe <readnews at sbcglobal.net> • Date: Wed, 29 Sep 2010 04:15:34 -0400 (EDT) On 9/28/10 at 6:06 AM, accardi at accardi.com (Gianni) wrote: >I am trying to display the absolute value exponent in the second >function definition above in the way students will see it in a >standard math text (with the vertical bars, not Abs, like in the >first text line). The first definition above is without error >(exponent shown as -Abs[x]). I know how I can create a function using Abs where the notebook will look like I used two vertical bars to create the Abs function. Basically, I can do this by making use of Traditional form to get the desired result in an output cell, then doing a copy paste operation to an input cell to create the function definition. But while the net effect will be something that looks like a textbook, it will be very confusing to a new user of Mathematica. That is, as far as I know there is no simple obvious way to do this directly from the keyboard. A somewhat less complex way to create the appearance you want and have an operational function definition is to write the function using either InputForm or StandardForm and after entry is complete convert the input cell to TraditionalForm. This is something I frequently do when I want to present something to a colleague for review that is not a Mathematica user and I do not expect him to actually use what I've done or re-create it in Mathematica. In this caes, the important thing is to have notation my colleague is familiar with rather than something he could easily enter into Mathematica for himself. >The second is what I am trying to >define without error. I am trying to find the button (hopefully on >the classroom assistant palette) that will produce the desired The button in the classroom assistant palette and the basic math input palettes create a template for Abs value using Abs[ ] rather than two vertical bars >The button in the Typesetting section of the Classroom Assistant won't >be implemented as an error free function definition as you see in the >error messages. The button in the typesetting section that does create a template with two vertical bars simply isn't the Abs function and won't do what you want.
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Sep/msg00632.html","timestamp":"2014-04-16T04:43:43Z","content_type":null,"content_length":"27296","record_id":"<urn:uuid:6d827616-a941-4894-9523-c71994f01d91>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Perrineville Math Tutor ...As an educator, I recognize that the student must come first. This attitude has always served me well in patience with exploring new topics and needing to explore alternate routes of explanation. I do hold high expectations on both parties, and do understand that this is a process that evolves as a deeper relationship is formed. 9 Subjects: including algebra 1, algebra 2, calculus, physics ...This requires a strong working knowledge of the principles and applications of organic chemistry. I have also completed more than six graduate level organic chemistry classes at The University of Texas at Austin and Princeton University. My grades in these courses were excellent. 8 Subjects: including algebra 1, chemistry, biology, American history ...I have taught K to adult students. I am currently a teacher in a public school and an adjunct at a community college. I have taught abroad and I am fluent in Portuguese and Spanish. 21 Subjects: including prealgebra, algebra 1, reading, Spanish ...Most secondary level math subjects rely on skills learned in algebra I. As a geometry teacher, I am constantly reinforcing these skills in order to move forward and prepare my students for algebra II. Over the summer, I teach algebra I for credit recovery in my district's summer school program. 9 Subjects: including trigonometry, algebra 1, algebra 2, calculus ...During that time I worked to tutor fellow students in a vast amount of topics in an honors program. I am currently working to become licensed to teach elementary school in the state of New Jersey. I love teaching and helping kids to learn. 28 Subjects: including logic, reading, Spanish, English
{"url":"http://www.purplemath.com/Perrineville_Math_tutors.php","timestamp":"2014-04-16T22:00:35Z","content_type":null,"content_length":"23727","record_id":"<urn:uuid:aa27c8a0-d533-4741-9474-b57138e6f69f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
load testing hi i am putting together a practical training for the infrastructure industry in the rebuild of earthquake torn christchurch nz, the training is around slinging lifting rigging i would like to make it as hands on and practical possible i have attached a diagram commonly used in explaining why slings at attached at an obtuse angle are under more pressure/force than at a acute angle i want to build a small scale model to explain this and to show that the weight and pressure are increased other things i am looking to try explane on the practical course are around how shock loading a load puts huge forces onto the chains well above the weight that is actually being lifted any help would be greatly appresiated
{"url":"http://www.physicsforums.com/showthread.php?t=683680","timestamp":"2014-04-16T10:24:52Z","content_type":null,"content_length":"20340","record_id":"<urn:uuid:9742cbe0-2c83-4f62-9693-1179504ef391>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Perpendicular Lines in the Coordinate Plane 3.9: Perpendicular Lines in the Coordinate Plane Difficulty Level: At Grade Created by: CK-12 Practice Perpendicular Lines in the Coordinate Plane What if you wanted to figure out if two lines in a shape met at right angles? How could you do this? After completing this Concept, you'll be able to use slope to help you determine whether or not lines are perpendicular. Watch This CK-12 Foundation: Chapter3PerpendicularLinesintheCoordinatePlaneA Watch the portion of this video that deals with Perpendicular Lines Khan Academy: Equations of Parallel and Perpendicular Lines Recall that the definition of perpendicular is two lines that intersect at a $90^\circ$ If we take a closer look at these two lines, we see that the slope of one is -4 and the other is $\frac{1}{4}$The slopes of perpendicular lines are opposite signs and reciprocals of each other. Example A Find the slope of the perpendicular lines to the lines below. a) $y=2x+3$ b) $y=-\frac{2}{3}x-5$ c) $y=x+2$ We are only concerned with the slope for each of these. a) $m = 2$$m_\perp$$m_\perp=-\frac{1}{2}$ b) $m=-\frac{2}{3}$$m_\perp=\frac{3}{2}$ c) Because there is no number in front of $x$$m_\perp=-1$ Example B Find the equation of the line that is perpendicular to $y=-\frac{1}{3}x+4$ First, the slope is the reciprocal and opposite sign of $-\frac{1}{3}$$m = 3$$y-$$y-$not our new line. We need to plug in 9 for $x$$y$new $y-$$(b)$ $-5 & = 3(9)+b\\-5 & = 27 + b \qquad \text{Therefore, the equation of line is} \ y=3x-32.\\-32 & = b$ Example C Graph $3x-4y=8$$4x+3y=15$ First, we have to change each equation into slope-intercept form. In other words, we need to solve each equation for $y$ $3x-4y & = 8 && 4x+3y = 15\\-4y & = -3x+8 && 3y = -4x + 15\\y & = \frac{3}{4}x-2 && y = -\frac{4}{3}x+5$ Now that the lines are in slope-intercept form (also called $y-$ Watch this video for help with the Examples above. CK-12 Foundation: Chapter3PerpendicularLinesintheCordinatePlaneB Two lines in the coordinate plane with slopes that are opposite signs and reciprocals of each other are perpendicular and intersect at a $90^\circ$Slope measures the steepness of a line. Guided Practice 1. Determine which of the following pairs of lines are perpendicular. • $y=-2x+3$$y=\frac{1}{2}x+3$ 2. Find the equation of the line that is perpendicular to the line $y=2x+7$ 3. Give an example of a line that is perpendicular to the line $y=\frac{2}{3}x-4$ 1. Two lines are perpendicular if their slopes are opposite reciprocals. The only pairs of lines this is true for is the first pair, because $-2$$\frac{1}{2}$ 2. The perpendicular line goes through (2, -2), but the slope is $-\frac{1}{2}$$2$ $y & = -\frac{1}{2}x+b\\-2 & = -\frac{1}{2}(2) + b\\-2 & = -1+b\\-1 & =b$ The equation is $y = -\frac{1}{2}x-1$ 3. Any line perpendicular to $y=\frac{2}{3}x-4$$-\frac{3}{2}$$y=-\frac{3}{2}x+b$ 1. Determine which of the following pairs of lines are perpendicular. 1. $y=-3x+1$$y=3x-1$ 2. $2x-3y=6$$3x+2y=6$ 3. $5x+2y=-4$$5x+2y=8$ 4. $x-3y=-3$$x+3y=9$ 5. $x+y=6$$4x+4y=-16$ Determine the equation of the line that is perpendicular to the given line, through the given point. 2. $y=x-1; \ (-6, \ 2)$ 3. $y=3x+4; \ (9, \ -7)$ 4. $5x-2y=6; \ (5, \ 5)$ 5. $y = 4; \ (-1, \ 3)$ 6. $x = -3; \ (1, \ 8)$ 7. $x - 3y = 11; \ (0, \ 13)$ Determine if each pair of lines is perpendicular or not. For the line and point below, find a perpendicular line, through the given point. Files can only be attached to the latest version of Modality
{"url":"http://www.ck12.org/book/CK-12-Geometry-Concepts/r2/section/3.9/","timestamp":"2014-04-25T01:52:10Z","content_type":null,"content_length":"149543","record_id":"<urn:uuid:dbac7ad7-70c8-41eb-a092-5e7182c1a4ef>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Do Irreducibles Induce Algebraic Extensions? i think so. in general, if M is a maximal ideal in a commutative ring R, then R/M is a field. If R contains a subfield k, then R/M is an extension of k. So in your case the ring K[X] contains the subfield K, and since K[X] is a Euclidean ring, hence also a p.i.d., an irreducible polynomial f generates a maximal ideal, so K[X]/(f) is a field extension of K. Moreover the degree (vector dimension) of the extension equals the degree of f, hence is finite, and every finite extension is definitely algebraic. so YES! I had to think through all the details since I am old and losing my memory. hope this helps.
{"url":"http://www.physicsforums.com/showthread.php?t=468042","timestamp":"2014-04-16T19:08:21Z","content_type":null,"content_length":"25867","record_id":"<urn:uuid:6d325173-ac22-4a54-9ba8-cd04ac85fe3d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
02-07-2012, 09:34 AM I see "MOA" used to describe firearm accuracy quite a bit, mostly from manufacturers. I searched and can't find what it stands for. Anyone know? 02-07-2012, 09:58 AM Just think 1"@ 100 yards is close enough. Here's a U-Tube deal that will help you.:smt1099 Understanding Minute of Angle (MOA) - Rifle Shooting Technique - NSSF Shooting Sportscast - YouTube 02-07-2012, 10:05 AM Minute of Arc (arcminute) or Minute of Angle (some even refer to it as minute of "accuracy" but that is a more slang term in the gun world, and technically inaccurate) while a very technical term the following is an introductory explanation The arcminute is commonly found in the firearms industry and literature, particularly concerning the accuracy of rifles, though the industry tends to refer to it as minute of angle. It is especially popular with shooters familiar with the Imperial measurement system because 1 MOA subtends approximately one inch at 100 yards, a traditional distance on target ranges. Since most modern rifle scopes are adjustable in half (1⁄2), quarter (1⁄4), or eighth (1⁄8) MOA increments, also known as clicks, this makes zeroing and adjustments much easier. For example, if the point of impact is 3" high and 1.5" left of the point of aim at 100 yards, the scope needs to be adjusted 3 MOA down, and 1.5 MOA right. Such adjustments are trivial when the scope's adjustment dials have an MOA scale printed on them, and even figuring the right number of clicks is relatively easy on scopes that click in fractions of MOA. so.. Imagine a circle bisecting your body, think about how a circle has 360 degrees... now think of every one of those degree's having 60 minutes within them. ONE minute of one degree is One inch at 100 yards. multiply that One inch by 60 and you have (obviously) 60 inches, or one degree of a full circle with a radius of 100 yards, diameter of 200 yards and a circumference of (approx) 628 yards. now if you go and check the math one degree of 60 inches multiplied by 360 and divided by 12 (for feet) and 3 (for yards) comes to 600. one minute at 100 yards is an approximation and when you multiply one by 60 by 360 the relatively minute difference ultimately becomes 28 yards. as you can tell it's a very deep subject in firearms, however generally applied to Rifles. This, however, is handgun forums and therefor MOA does not compute... most pistols would (exaggerating slightly) be measured in degrees of angle, not minutes at 100 yards, lol. 02-07-2012, 10:43 AM
{"url":"http://www.handgunforum.net/printthread.php?t=29715&pp=25&page=1","timestamp":"2014-04-16T13:22:02Z","content_type":null,"content_length":"6707","record_id":"<urn:uuid:0c72c731-f890-46c5-a454-26a89033c75e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
note friedo <p>I don't understand why you can't simply use subtraction, regardless of the decimal truncation in either of the operands. </p> <p>You're going to end up with some floating-point drift anyway; perhaps it would be a good idea to drop the decimal points and treat everything as integers, then divide by 10<sup>n</sup> when it's time to display the number to the user. (This is the recommended way of doing currency calculations so floating point error doesn't mess with your cents.)</p> 689319 689327
{"url":"http://www.perlmonks.org/?displaytype=xml;node_id=689329","timestamp":"2014-04-21T05:59:21Z","content_type":null,"content_length":"1046","record_id":"<urn:uuid:d585c365-8811-47f0-9753-ad8366ca6828>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Estimating the parameters of the GB2 distribution with incomplet Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: Estimating the parameters of the GB2 distribution with incomplete knowledge of sample statistics From Austin Nichols <austinnichols@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Estimating the parameters of the GB2 distribution with incomplete knowledge of sample statistics Date Sun, 26 Sep 2010 22:25:57 -0400 Julio Estevez <je_123@hotmail.com>: -gbgfit- on SSC will give the MLE based only on the quantiles; you can write the MLE incorporating knowledge of the mean. Or you can use -nl- or Mata's optimize() of which there are examples in the archives. On Sun, Sep 26, 2010 at 4:32 PM, Julio Estevez <je_123@hotmail.com> wrote: > Hi > I am interested in deducing the parameters of a GB2 distribution given that I know just a couple of quantiles and sample statistics of the population I am interested in. > I know the mean of the population of interest. I also know some key quantiles from the left side of the distribution. That is I know x1, x2, x3 such that F(x1) = 0.1, F(x2) = 0.2, F(x3) = 0.3 been F() the c.d.f. Although I do not know much about the upper tail, from previous research I beleive that the GB2 distribution may be good approximation of the distribution I am looking at. > I beleive I can use the fact that GB2 distribution has distribution function (c.d.f.) F(x) = ibeta(p, q, (x/b)^a/(1+(x/b)^a) ) to estimate the parameters of the distribution function (a,b,p,q) given my knowledge about the mean and the first qualtiles. Then I can explore the upper tail under the assumption that my data is distributed as a GB2 > Can somebody help me to set up the program to estimate these parameters? I believe it is "just" a matter of solving a set of 4 nonlinear equations with 4 unknowns, but my programming skills are not that good. > The four equations will look like > mean = b*G(p+1/a)*G(q-1/a)/[G(p)G(q)]F(x1) = 0.1 = ibeta(p, q, (x1/b)^a/(1+(x1/b)^a)F(x2) = 0.2 = ibeta(p, q, (x2/b)^a/(1+(x2/b)^a)F(x3) = 0.3 = ibeta(p, q, (x3/b)^a/(1+(x3/b)^a) > from which the unknowns, a,b, p and q can be retrieved. > with G(.) been the gamma function > Any help will be greatly appreciated! > Thanks > Julio > * * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2010-09/msg01251.html","timestamp":"2014-04-17T10:26:58Z","content_type":null,"content_length":"9602","record_id":"<urn:uuid:0e21470e-0177-4c6e-8f7b-91082b505e03>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Hometown, IL Math Tutor Find a Hometown, IL Math Tutor ...If you need help with standardized testing, my GRE scores were a 680 verbal and 800 quantitative.During my masters degree I was a TA for the intro to computer science course. For three semesters I taught C++ and Matlab to freshmen and sophomore mechanical engineering students. The course began ... 17 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel ...Years of tutoring physics sharpened my sense for how people learn science. I discovered the ability to connect all types of inquiring minds with abstract concepts by using concrete examples. Consider the formula d=vt. 7 Subjects: including algebra 1, algebra 2, calculus, geometry ...I am resourceful, creative, and tireless to help you learn the objectives of your daily lessons and homework. I hope to foster your motivation to learn. My biggest accomplishment as a tutor was helping a child with ADHD to organize a given reading assignment. 9 Subjects: including algebra 1, biology, vocabulary, grammar ...I received my PhD in Molecular Genetics in 2006 from the University of Illinois at Chicago, and I currently work at Loyola University as a research scientist. I use math and science in my everyday life. My goal as a tutor is not only to help students learn math and science, but also to share my enthusiasm and passion for these subjects. 21 Subjects: including ACT Math, reading, prealgebra, algebra 1 ...I am certified to teach English in the state of Illinois and have experience with Chicago Public Schools. I have additional tutoring experience in the subjects of Accounting, Algebra, Biology, Computer Skills, Geometry, History, and Humanities. I also have experience working as both a GED and E... 39 Subjects: including calculus, grammar, trigonometry, web design Related Hometown, IL Tutors Hometown, IL Accounting Tutors Hometown, IL ACT Tutors Hometown, IL Algebra Tutors Hometown, IL Algebra 2 Tutors Hometown, IL Calculus Tutors Hometown, IL Geometry Tutors Hometown, IL Math Tutors Hometown, IL Prealgebra Tutors Hometown, IL Precalculus Tutors Hometown, IL SAT Tutors Hometown, IL SAT Math Tutors Hometown, IL Science Tutors Hometown, IL Statistics Tutors Hometown, IL Trigonometry Tutors Nearby Cities With Math Tutor Argo, IL Math Tutors Burbank, IL Math Tutors Chicago Ridge Math Tutors Evergreen Park Math Tutors Mc Cook, IL Math Tutors Mccook, IL Math Tutors Merrionette Park, IL Math Tutors Oak Lawn Math Tutors Palos Park Math Tutors Riverside, IL Math Tutors Robbins, IL Math Tutors Summit Argo Math Tutors Summit, IL Math Tutors Willow Springs, IL Math Tutors Worth, IL Math Tutors
{"url":"http://www.purplemath.com/Hometown_IL_Math_tutors.php","timestamp":"2014-04-19T12:13:33Z","content_type":null,"content_length":"23785","record_id":"<urn:uuid:4e85551b-89cc-428a-af10-1fed7db2b00a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
NHEON : Frameworks : Mathematics Introduction - How this framework is organized - Rationale - Societal Goals - How students learn mathematics - References - Matrix "Students learn mathematics well only when they construct their own mathematical understanding." (Everybody Counts, p.58) This view of learning, called constructivism, is the premise upon which the reform movement in mathematics education is based. When students learn mathematics by doing mathematics, by exploring and discussing concepts in the context of physical situations, what emerges from these experiences are skills which are anchored in understanding and clarity. The students not only know the basic procedures, but also know how to apply them to new situations. Research supports the fact that students learn best by experiencing mathematics and thereby constructing understanding for themselves. Research also indicates that mathematics education will best serve societal needs when the curriculum is so conceptually focused. The attitudes students form influence their thinking and performance, and, later, influence their decisions about studying mathematics. Students are active individuals who construct, modify, and integrate ideas by interacting with materials, the world around them, and their peers. Thus, the learning of mathematics must be an active process: exploring, justifying, representing, solving, constructing, discussing, using, investigating, describing, developing, and predicting. These actions require both the physical and mental involvement of students both hands on and minds on. Such a curriculum has the following characteristics: • students are actively involved in doing mathematics; • problem solving, thinking, reasoning, and communicating are everyday activities; • manipulatives are used to connect conceptual to procedural understanding; • calculators and computers are used in appropriate ways; • there is as much emphasis on application as on acquisition of knowledge and skills; • a broad range of content is addressed; and • central mathematical concepts are understood.
{"url":"http://www.nheon.org/frameworks/math/learn.php","timestamp":"2014-04-20T05:42:42Z","content_type":null,"content_length":"5409","record_id":"<urn:uuid:f76dc14b-243e-47a9-a264-03a6d096af55>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Fortran Wiki Determines the location of the element in the array with the maximum value, or, if the dim argument is supplied, determines the locations of the maximum element along each row of the array in the dim direction. If mask is present, only the elements for which mask is .true. are considered. If more than one element in the array has the maximum value, the location returned is that of the first such element in array element order. If the array has zero size, or all of the elements of mask are .false., then the result is an array of zeroes. Similarly, if dim is supplied and all of the elements of mask along a given row are zero, the result value for that row is zero. Fortran 95 and later Transformational function • result = maxloc(array, dim [, mask]) • result = maxloc(array [, mask]) • array - Shall be an array of type integer, real, or character. • dim - (Optional) Shall be a scalar of type integer, with a value between one and the rank of array, inclusive. It may not be an optional dummy argument. • mask - Shall be an array of type logical, and conformable with array. Return value If dim is absent, the result is a rank-one array with a length equal to the rank of array. If dim is present, the result is an array with a rank one less than the rank of array, and a size corresponding to the size of array with the dim dimension removed. If dim is present and array has a rank of one, the result is a scalar. In all cases, the result is of default integer type. See also max, maxval
{"url":"http://fortranwiki.org/fortran/show/maxloc","timestamp":"2014-04-21T02:34:47Z","content_type":null,"content_length":"10345","record_id":"<urn:uuid:a31a9b62-f9c7-40c3-bd14-3953d913219d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Musings of the Masters: An Anthology of Mathematical Reflections Musings of the Masters: An Anthology of Mathematical Reflections A book by Raymond Ayoub ' ... a collection of articles written by renowned mathematicians of the 20th century. An important criterion ... is that the articles should be accessible to the literate reader who may or may not have a technical knowledge of mathematics.' L'enseignement … see full wiki Author: Raymond Ayoub Publisher: Mathematical Assn of Amer 1 review about Musings of the Masters: An Anthology of... Advanced essays on the philosophical underpinnings of mathematics This is not a book about mathematics; the content is about the philosophy of mathematics, which is as old as abstract mathematics. When humans were limited to using mathematics to measure land and count sheep, there was no need to ask questions about where the knowledge of mathematics resided, in fact the measurers and counters would have no doubt considered the questions silly. That changed after the amazing flowering of abstract mathematics that took place in ancient Greece. Once that occurred, then the questions about the residence of the abstractions of circles, lines and planes were logical consequences of the mathematical work being done. These ponderings have at least occasionally occupied the minds of the greatest mathematicians of all time and this book contains a series of short essays about the deep underpinnings of mathematics written by some of the greatest mathematical minds. The essays and authors are: *) "Mathematics and thinking mathematically," by Mary Cartwright *) "Mathematical invention", by Henri Poincare *) "Thoughts on the heuristic method," by Jacques Hadamard *) "Mathematical proof," by G. H. Hardy *) "The unity of knowledge," by Hermann Weyl *) "Mathematics and the arts," by Marston Morse *) "Intuition, reason and faith in science," by George David Birkhoff *) "Logic and the understanding of nature," by David Hilbert *) "The cultural basis of mathematics," by Raymond Wilder *) "Presidential address to the British Association," by J. J. Sylvester *) "The mathematician," by John von Neumann *) "The community of scholars," by Andre Lichnerowicz *) "History of mathematics: Why and how," by Andre Weil *) "Does God exist?" by Paul Levy *) "Goethe and mathematics," by Wilhelm Maak *) "Leonardo and mathematics," by Francesco Severi *) "The highest good," by Norbert Wiener Despite the lack of equations, theorems and proofs, this is a difficult book to read, for the content goes right to the heart of what mathematics truly is. Very little formal mathematics is really needed to understand it, however you have to be a philosopher at heart, willing to read carefully and think about how some of the most complex ideas were created and applied. This would be an excellent source of material for an upper level undergraduate or graduate course in the philosophical underpinnings of mathematics. What did you think of this review? • About Lunch • Press • FAQs • Contact Us • Terms of Service • Privacy Policy • Report Abuse © 2014 Lunch.com, LLC All Rights Reserved Lunch.com - Relevant reviews by real people.
{"url":"http://www.lunch.com/Reviews/book/Musings_of_the_Masters_An_Anthology_of_Mathematical_Reflections-1515250.html","timestamp":"2014-04-25T04:39:02Z","content_type":null,"content_length":"86786","record_id":"<urn:uuid:7ccf44c5-2dd0-400b-8d2f-c9dec2f9f346>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Semi-pullbacks and bisimulations in categories of stochastic relations , 2002 "... A pipeline is a popular architecture which connects computational components/filers) through connectors (pipes) so that computations are performed in a stream like fashion. The data are transported through the pipes between filers, gradually transforming inputs to outputs. This kind of stream proces ..." Add to MetaCart A pipeline is a popular architecture which connects computational components/filers) through connectors (pipes) so that computations are performed in a stream like fashion. The data are transported through the pipes between filers, gradually transforming inputs to outputs. This kind of stream processing has been made popular through UNIX pipes that serially connect independent components for performing a sequence of tasks. We show in this paper how to formalize this architecture in terms of monads, hereby including relational specifications as special cases. The system is given through a directed acyclic graph the nodes of which carry the computational structure by being labelled with morphisms from the monad, and the edges provide the data for these operations. It is shown how fundamental compositional operations like combining pipes and filers, and refining a system by replacing simple parts through more elaborate ones, are supported through this construction. "... Normally, one thinks of probabilistic transition systems as taking an initial probability distribution over the state space into a new probability distribution representing the system after a transition. We, however, take a dual view of Markov processes as transformers of bounded measurable function ..." Add to MetaCart Normally, one thinks of probabilistic transition systems as taking an initial probability distribution over the state space into a new probability distribution representing the system after a transition. We, however, take a dual view of Markov processes as transformers of bounded measurable functions. This is very much in the same spirit as a “predicate-transformer ” view, which is dual to the state-transformer view of transition systems. We redevelop the theory of labelled Markov processes from this view point, in particular we explore approximation theory. We obtain three main results: (i) It is possible to define bisimulation on general measure spaces and show that it is an equivalence relation. The logical characterization of bisimulation can be done straightforwardly and generally. (ii) A new and flexible approach to approximation based on averaging can be given. This vastly generalizes and streamlines the idea of using conditional expectations to compute approximations. (iii) We show that there is a minimal process bisimulation-equivalent to a given process, and this minimal process is obtained as the limit of the finite approximants. , 2005 "... In this paper we propose a notion of bisimulation for labelled Markov processes parameterised by negligible sets (LMPns). The point is to allow us to say things like two LMPs are “almost surely ” bisimilar when they are bisimilar everywhere except on a negligible set. Usually negligible sets are set ..." Add to MetaCart In this paper we propose a notion of bisimulation for labelled Markov processes parameterised by negligible sets (LMPns). The point is to allow us to say things like two LMPs are “almost surely ” bisimilar when they are bisimilar everywhere except on a negligible set. Usually negligible sets are sets of measure 0, but we work with abstract ideals of negligible sets and so do not introduce an ad-hoc measure. The construction is given in two steps. First a refined version of the category of measurable spaces is set up, where objects incorporate ideals of negligible subsets, and arrows are identified when they induce the same homomorphisms from their target to their source σ-algebras up to negligible sets. Epis are characterised as arrows reflecting negligible subsets. Second, LMPns are obtained as coalgebras of a refined version of Giry’s probabilistic monad. This gives us the machinery to remove certain counterintuitive examples where systems were bisimilar except for a negligible set. Our new notion of bisimilarity is then defined using cospans of epis in the associated category of coalgebras, and is found to coincide with a suitable logical equivalence given by the LMP modal logic. This notion of bisimulation is given in full generality- not
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=723079","timestamp":"2014-04-18T14:05:06Z","content_type":null,"content_length":"18688","record_id":"<urn:uuid:e8fcf3fa-9213-42c1-9925-608d0ee0d83f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
The uniformity principle on traced monoidal categories Masahito Hasegawa In Proc. 9th International Conference on Category Theory and Computer Science (CTCS'02), Electronic Notes in Theoretical Computer Science 69 (2003) The uniformity principle for traced monoidal categories has been introduced as a natural generalization of the uniformity principle (Plotkin's principle) for fixpoint operators in domain theory. We show that this notion can be used for constructing new traced monoidal categories from known ones. Some classical examples like the Scott induction principle are shown to be instances of these constructions. We also characterize some specific cases of our constructions as suitable enriched limits. Pointers to Related Work • M. Hasegawa, Recursion from cyclic sharing: traced monoidal categories and models of cyclic lambda calculi. In Proc. Typed Lambda Calculi and Applications, Springer LNCS 1210 (1997) pp.196-213. • M. Hasegawa, Models of Sharing Graphs (A Categorical Semantics of Let and Letrec). PhD thesis ECS-LFCS-97-360, University of Edinburgh (1997) / Distinguished Dissertations Series, Springer-Verlag • A. Joyal, R. Street and D. Verity, Traced monoidal categories. Mathematical Proceedings of the Cambridge Philosophical Society 119(3) (1996) 447-468. • P. Selinger, Categorical structure of asynchrony. In Proc. MFPS 15, ENTCS 20 (1999). Back to Hassei's Research Page / Home Page
{"url":"http://www.kurims.kyoto-u.ac.jp/~hassei/papers/ctcs02.html","timestamp":"2014-04-18T10:36:27Z","content_type":null,"content_length":"2688","record_id":"<urn:uuid:d241d40d-8bcf-4bc0-88b6-ee1e2e964c98>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Look, but don’t Scratch Ladies and gentlemen, please excuse my prolonged absence. Life occasionally has a habit of getting in the way of the schedule that I’d like to keep; in this case, it means I haven’t been able to update over the past month. Fear not though, for now I have returned, and I am ready to dish on math and pop culture. In that spirit, I would be remiss if I did not take a moment to mention this article from Wired last month on the man who cracked the code for several scratch lottery ticket games. Mohan Srivastiva, geological statistician by day and mathematical rogue by night, discovered a pattern in certain scratch lottery tickets back in 2003, but I’m sure (as this article suggests) he’s received a bit more publicity since the Wired article hit. I highly recommend reading the whole article, but I’ll outline the gist of his discovery here. In order to do so, I’ll need to specify a type of scratch game he cracked. The article focuses primarily on a tic-tac-toe themed scratcher shown below. The left side of the ticket is what gets scratched – below each X and each O lies a number. Once all of ticket has been scratched, you can compare the uncovered numbers to the numbers on the eight 3×3 grids. If, in any of those grids, you can find three numbers in a row, column, or diagonal that match the hidden list, you are a winner. Note the craftiness here – much like the McDonald’s monopoly game, it’s much more likely to get two numbers in a row rather than three, so that a ticket can seem tantalizingly close to being a winner. In each square of each grid sits a number between 1 and 39. Also, within each grid, no number repeats more than once; however, since there are 72 squares total on the scratcher, some numbers must repeat themselves between grids (by the pigeonhole principle, if you like). Some numbers may repeat several times (for example, 17 appears three times in the ticket on the left), while others will appear only once (such as 08). The key to cracking the ticket, Srivastiva realized, is to take note of the numbers that appear only once on the ticket. Such numbers will be called “singletons.” There are several singletons on the ticket presented here, and a more thorough analysis is given in the Wired article. Most importantly, though, one of the grids has a row of singletons: 24, 12, and 29 are all singletons, and this sequence makes an appearance in the second grid in the in the third row. What Srivastiva observed was that if a ticket has a sequence of singletons in a winning row, column, or diagonal, then that ticket is likely to be a winner. In particular, since you can determine all the singletons without scratching off the ticket, he realized that this game reveals information about the likelihood of winning! In theory, one could (at least in 2003, before the game was pulled) make a career out of buying these tickets in bulk, scratching off the identified winners, and returning the remainder – Srivastiva even went so far as to ask if lottery tickets could be returned, and found that indeed they could be (in fact, it seems as though this is not such an uncommon occurrence). Ultimately, the only reason why he exposed this fault was that he decided the effort involved in sticking it to the man wasn’t worth it – he thought he could earn roughly $600 a day by going through lottery tickets, but he earned more money and had more fun at his day job. Kudos to Mr. Srivastiva for his foray into mathematical badassery. From a mathematical standpoint, there are a number of questions one can ask about this particular type of ticket. Here’s one: what’s the probability that a number is a singleton? Of course, these tickets can’t be completely random, as Srivastiva observed, since “the lottery corporation needs to control the number of winning tickets. The game can’t be truly random. Instead, it has to generate the illusion of randomness while actually being carefully determined.” Nevertheless, for argument’s sake let’s suppose the numbers on the ticket are random. In this case, a number is a singleton if it appears on one grid and doesn’t appear on the remaining 7 grids. What is the probability that a given number appears on a 3 x 3 grid? Since there are 9 numbers in the grid, this probability equals 9/39 = 3/13. If we fix a number between 1 and 39, and let X denote the number of times that number appears in the grids, then X satisfies a binomial distribution with n = 8 and p = 3/13. In particular, X = 1 means that the number is a singleton, and P(X = 1) = 8*(3/13)*(10/13)^7, which is approximately 29.4%. We also see that the expected value of X (i.e. the expected number of times any given number will occur) is np = 24/13, which is around 1.85. One can also find the probabilities that X takes on some other value. Of course, one can also ask what happens if we vary the number of grids, the sizes of the grids, or the size of the number pool from which we draw. Other questions abound as well: what are the odds of getting two singletons in a row? Three singletons in a row? How do the odds of winning change if the number of hidden values change (either in absolute terms, or as a proportion of the total pool of values)? These questions, gentle reader, I will leave for you.
{"url":"http://www.mathgoespop.com/2011/03/lookbutdontscratch.html","timestamp":"2014-04-20T10:50:09Z","content_type":null,"content_length":"77329","record_id":"<urn:uuid:6b7df401-c586-4dc0-b31a-f5fed670dba8>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Cyclotomic Polynomials Andrew Arnold, Michael Monagan The algorithms: Download our papers: A high-performance algorithm for calculating cyclotomic polynomials (PDF) -Submitted to the International Workshop on Parallel and Symbolic Computation (PASCO), to be held July 18-21, 2010 in Grenoble, France. Calculating Cyclotomic Polynomials (with new revisions) (PDF) Calculating Cyclotomic Polynomials (PDF) -Submitted with revisions April 26, 2010 to Mathematics of Computation. Originally submitted Oct 10, 2008. The $n_{th}$ cyclotomic polynomial $\Phi_n(z)$ is monic polynomial whose roots are the $n_{th}$ primitive roots of unity. It is the minimal polynomial, over the integers, of the $n_{th}$ primitive roots of unity. The height of $\Phi_n(z)$, $A(n)$, is the largest coefficient of $\Phi_n(z)$ in terms of absolute value. The $n_{th}$ inverse cyclotomic polynomial, $\Psi_n(z)$, is the monic polynomial whose roots are the $n_{th}$ non-primitive roots of unity. We develop and implement algorithms to calculate cyclotomic polynomials, towards the end of studying their coefficients. We have three algorithms for calculating cyclotomic polynomials $\Phi_n(z)$, which we use to calculate $\Phi_n(z)$ for "small" $n$. (Currently roughly $n<2*10^{10}$) Our first algorithm does repeated polynomial division using the FFT to calculate images of $\Phi_n(z)$ modulo primes $p$, and then reconstructs $\Phi_n(z)$ by way of Chinese remaindering. Using this approach we were able to find the first examples of $n$ such that $A(n)>n^2, and n^4. We call this algorithm the FFT-CRT method. Our second algorithm, the SPS algorithm, calculates $\Phi_n(z)$ as a quotient of sparse truncated power series. Our implementation of the SPS algorithm is roughly an order of magnitude faster than that of the FFT-CRT algorithm. Our third, newest algorithm, the recursive SPS algorithm, computes $\Phi_n(z)$ in a manner similar to that of the SPS algorithm; however, with our new approach we truncate any intermediate power series in our compuation to as minimal degree as necessary. This improves on the SPS algorithm by roughly another order of magnitude. For instance, to compute $\Phi_{n}(z)$ for $n=3234846615$, the product of the first nine odd primes, it originally required some 12 hours and 40 GB of memory on a powerful server using the FFT-CRT method. Using the SPS algorithm we were able to compute the same polynomial in under a half hour using roughly 4 GB of memory. Using our newest algorithm (and more intelligent inline assembly overflow checking), we are able to compute $\Phi_n(z)$ in under a We have a fourth algorith, the big prime algorithm, which was used to compute $\Phi_n(z)$ for $n=mp$ with large prime divisor $p$. This algorithm was mostly used to study cyclotomic polynomials of order four and five with very large degree and very low height. If you are interested in a particular cyclotomic polynomial computation or would like to look at my source code you can email me at (ada26 AT sfu DOT ca). The shape of the coefficients of $\Phi_n(z)$ We show some plots of coefficients of cyclotomic polynomials. These plots are of a subset of the coefficients of $\Phi_n(z)$; we take the coefficient of term of degree $k$, for $k$ a multiple of some prime $p$. We chose $p$ so that our plots are roughly 10000 data points. We see that for some cyclotomic polynomials, the coefficients appear to be somewhat random, others have very definite Plots of cyclotomic polynomial coefficients (LINK) Heights and lengths of cyclotomic polynomials Library of data on the heights and lengths of cyclotomic polynomials This data was produced using the FFT-CRT, SPS, and the resursive SPS algorithms, as well as low-memory variants of the SPS algorithms. For further details you can refer to my thesis. We also computed cyclotomic polynomials of particularly large height which we have grouped into families. These families are not well-defined, but cyclotomic polynomials belonging to a family typically share many prime factors and have very similar height. Cyclotomic polynomials of order 5 with potentially low height For n < 3\cdot 10^8, the only \Phi_n(z) of order four for which A(n)=1 are exactly the n=p_1p_2p_3p_4 such that • p_2 = -1 mod p_1 • p_3 = +/-1 mod p_1p_2 • p_4 = +/-1 mod p_1p_2p_3 We calculate A(n) for n=p_1p_2p_3p_4p_5<2^{63} such that n/p_i satisfies the congruences above for i=1,2,...,5. That is, • p_2 = -1 mod p_1 • p_3 = -1 mod p_1p_2 • p_4 = +/-1 mod p_1p_2p_3 • p_5 = +/-1 mod p_1p_2p_3p_4 We only consider n=p_1p_2p_3p_4p_5 for which, given(p1,p2,p3,p4), p_5 is the minimal prime in its congruence class. A(n) for all such n less than 2^63 Alternatively, here are all such n separted into their respective congruence sets. They all satisfy p_2 \equiv -1 mod p_1 and p_3 \equiv -1 mod p_1p_2: Bounding cyclotomic coefficients of a given fixed degree Write \Phi_n(z) = \sum_{k=0}^{\phi(n)}a_n(k)z^k. Let a(k) = \max_n |a_n(k)}. That is, a(k) is the largest magnitude of any z^k cyclotomic coefficient. Similarly, let a_+(k) = \max_n a_n(k) and a_-(k) = \min_n a_n(k)$. We list a(k), a_+(k), and a_- (k) below. We also define alpha(b) to be the minimum k for which b occurs as |a_n(k)|. We similarly define \bar{alpha}(b) to be the smallest k for which b occurs as a_n(k). Note that \alpha(b) is the minimum of \bar{alpha}(b) and \bar{alpha}(-b). The links below give alpha(b) for 0 \leq b \leq 927 and \bar{alpha}_(b) for 0 \leq |b| \leq 927. We also give the smallest n for which, given b and alpha(b)=k, that |a_n(k)|=b, and similarly for \bar NOTE: n is not necessarily minimal in the data here for if alpha(b)=173. It is certainly minimal for alpha(b) < 173. Here is an implementation of the recursive SPS algorithm with 64-bit precision and no overflow checking: SPS4_64.c
{"url":"http://www.cecm.sfu.ca/~ada26/cyclotomic/","timestamp":"2014-04-20T18:24:07Z","content_type":null,"content_length":"8453","record_id":"<urn:uuid:89477820-7d2f-4210-a0cb-21aa04c096c5>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Andrei Nikolaevich Kolmogorov Support Kolmogorov.com Ph.D. students and descendants of A. N. Kolmogorov Scientist's Club of the Moscow State University and Moscow Mathematical Society Kolmogorov Centennial Memorial Meeting Great Hall of the Moscow State University (Moscow, April 29, 2003) Russian Academy of Sciences (RAS) and Moscow State University (MSU) INTERNATIONAL CONFERENCE KOLMOGOROV AND CONTEMPORARY MATHEMATICS (Moscow, June 16 - 21, 2003) IN COMMEMORATION OF THE CENTENNIAL of Andrei Nikolaevich Kolmogorov (25.IV.1903 - 20.X.1987) Scopes and Themes The Conference will cover the main areas of A.N. Kolmogorov's scientific interests with an emphasis on his vision of Mathematics in its fundamental unity as well as recent developments in the ○ Dynamic Systems and Ergodic Theory ○ Theory of Functions and Functional Analysis ○ Theory of Probability and Mathematical Statistics ○ Turbulence and Hydrodynamics ○ Mathematical Logic and Theory of Complexity ○ Geometry and Topology Invited Speakers International conference devoted to the hundredth anniversary of A.N. Kolmogorov's birth (Tambov, May 11-16, 2003 ) Third Annual Moscow Kolmogorov Readings - 2003 International Science Students Conference May 5-7, 2003, Moscow http://www.pms.ru/reading/ e-mail: reading@pms.ru Second Annual Moscow Kolmogorov Readings - 2002 First Moscow Kolmogorov Readings - 2001 the abdus salam international centre for theoretical physics ICTP-INFM Summer School Transport, Reaction and Propagation in Fluids 8-12 September 2003, ICTP, Trieste, Italy followed by conference on Kolmogorov's Legacy in Physics: One Century of Chaos, Turbulence and Complexity 15-17 September 2003, ICTP, Trieste, Italy One-Day Workshop in Honor of the One Hundredth Anniversary of the Birth of Andrei Nikolaevich Kolmogorov Complexity, Information, and Randomness: The Legacy of Andrei Kolmogorov Sunday, July 6, 2003, University of Aarhus, Denmark in conjunction with 18th Annual IEEE Conference on Computational Complexity Monday, July 7th, to Thursday, July 10th, 2003, University of Aarhus, Denmark Centennial Seminar on Kolmogorov Complexity and Applications INTERNATIONAL CONFERENCE AND RESEARCH CENTER FOR COMPUTER SCIENCE 2003, April 27 - May 5, Schloss Dagstuhl, D-66687 Wadern, Saarbrücken, Germany Il Centro per la MECCANICA STATISTICA e la COMPLESSITA' (SMC) dell' Istituto Nazionale per la Fisica della Materia (INFM) e il Dipartimento di Fisica Universita' degli Studi di Roma La Sapienza organizzano una Giornata su L'EREDITA' CULTURALE di A.N. KOLMOGOROV La Conferenza si terra' presso l'Edificio Fermi del Dip di Fisica p.le Aldo Moro 2, 00185, Roma AULA 1 ore 16 del 9 Maggio 2003 The University of London Inaugural Kolmogorov Lecture Speaker: Professor Ray Solomonoff 27th February 2003 5:30pm, Main Lecture Theatre Royal Holloway, University of London ROUNDTABLE DISCUSSION ON THE FUTURE OF MATHEMATICS Discussion Leader: Israel M. Gelfand January 21, 2003 4:30pm, Courant Institute of Mathematical Sciences, New York University Fields Institute Kolmogorov Lecture Series 1998-1999 Chicago Kolmogorov Memorial Readings 2001 A. N. Kolmogorov Bibliography I. General List of the main publications by A. N. Kolmogorov Report to the mathematical circle on covering by squares (1921) On operations on sets. II (1922) A. Kolmogoroff, Une série de Fourier-Lebesgue divergente presqne partout (1923) Sur l'ordre de grandeur des coéfficients de la série de Fourier-Lebesgue (1923) A. Kolmogoroff, Une contribution à l'étude de la convergence des séries de Fourier (1924) Sur la convergence des séries de Fourier (1924), jointly with G. A. Seliverstov La définition axiomatique de l'intégrale (1925) Sur le bornes de la généralisation de l'intégrale (1925) Sur la possibilité de la définition générale de la dérivée, de l'intégrale et de lasommation des séries divergentes (1925) A. Kolmogoroff, Sur les fonctions harmoniques conjuguées et les séries de Fourier (1925) On the tertium non datur principle (1925) Über Convergenz von Reihen, deren Glieder durch den Zufall bestimmt werden (1925), jointly with A. Ya. Khintchin Sur la convergence des séries de Fourier (1926), jointly with G. A. Seliverstov Une série de Fourier-Lebesgue divergente partout (1926) Sur la loi des grands nombres (1927) A. Kolmogoroff et D. Menchoff, Sur la convergence des series de fonctions ortogonales (1927) On operations on sets (1928) (in Russian) Sur une formule limite de M. A. Khintchine (1928) Sur un procédé d'intégration de M. Denjoy (1928) A. Kolmogoroff, Über die Summen durch den Zufall bestimmter unabhängiger Größen (1928) Bemerkungen zu meiner Arbeit "Über die Summen durch den Zufall bestimmter unabhängiger Größen" (1929) General measure theory and the calculus of probabilities (1929) (in Russian) Present-day controversies on the nature of mathematics (1929) (in Russian) A. Kolmogoroff, Über das Gesetz des iterierten Logarithmus (1929) Sur la loi des grands nombres (1929) Sur la loi forte des grands nombres (1930) A. Kolmogoroff, Zur topologisch- gruppentheoretischen Begründung der Geometrie" (1930) A. Kolmogoroff, Untersuchungen über den Integralbegriff (1930) Sur la notion de la moyenne (1930) A. Kolmogoroff, Bemerkungen zu meiner Arbeit "Über die Summen zufälliger Größen" (1930) A. Kolmogoroff, Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung (1931) Sur la probléme d'attente (1931) The method of medians in the theory of errors (1931) (in Russian) Eine Verallgemeinerung des Laplace-Liapunoffschen Satzes (1931) A. Kolmogoroff, Über Kompaktheit der Funktionenmengen bei der Konvergenz im Mittel" (1931) The theory of functions of a real variable (1932) (in Russian) Sulla forma generale di un processo stocastico omogeneo (Un problema di Bruno di Finetti) (1932) Ancora sulla forma generale di un processo stocastico omogeneo (1932) A. Kolmogoroff, Zur Deutung der intuitionistischen Logik (1932) Zur Begründung der projektiven Geometrie (1932) Introduction to the theory of functions of a real variable (1932), jointly with P. S. Aleksandrov (in Russian) Introduction to the theory of functions of a real variable, 2nd ed. (1933), jointly with P. S. Aleksandrov (in Russian) Grundbegriffe der Wahrscheinlichkeitrechnung (1933) A. Kolmogoroff, Beiträge zur Maßtheorie (1933) Zur Berechung der mittleren Brownschen Fläche (1933), jointly with M. A. Leontovich Sulla determinazione empirica di una legge di distribuzione (1933) Über die Grenzwertsätze der Wahrscheinlichkeitsrechnung (1933) A. Kolmogoroff, Zur Theorie der stetigen zufälligen Prozesse (1933) Sur la détermination empirique d'une loi de distribution (1933) On the question of suitability of forecast formulas found statistically (1933) (in Russian) 10 papers in 1934 4 papers in 1935 17 papers in 1936 A. Kolmogoroff, Zur Theorie der Markoffschen Ketten (1936) 9 papers in 1937 A. Kolmogoroff, Zur Umkehrkeit der statistischen Naturgesetze (1937) 16 papers in 1938 Complete list of Kolmogorov's works is published in the book Kolmogorov in Perspective. Most of published scientific papers and monographs of A.N. Kolmogorov are reproduced in a 3 Volume Set - Selected Works of A.N. Kolmogorov. See also more than a dozen of A. N. Kolmogorov's articles in Russian popular science journal for students Kvant (Quantum). Kolmogorov in Perspective Edited by A. N. Shiryaev / Published September 2000 by American Mathematical Society The editorial board for the History of Mathematics series has selected for this volume a series of translations from two Russian publications, Kolmogorov in Remembrance and Mathematics and its Historical Development. This book, Kolmogorov in Perspective, includes articles written by Kolmogorov's students and colleagues and his personal accounts of shared experiences and lifelong mathematical friendships. Specifically, the article, "Andrei Nikolaevich Kolmogorov. A Biographical Sketch of His Life and Creative Paths" by A. N. Shiryaev, gives an excellent personal and scientific biography of Kolmogorov. The volume also includes the following articles: "On A. N. Kolmogorov" by V. I. Arnol'd, "In Memory of A. N. Kolmogorov" by S. M. Nikol'skii, "Remembrances of A. N. Kolmogorov" by Ya. G. Sinai, "The Influence of Andrei Nikolaevich Kolmogorov on My Life" by P. L. Ul'yanov, "A Few Words on A. N. Kolmogorov" by P. S. Aleksandrov, "Memories of P. S. Aleksandrov" by A. N. Kolmogorov, "Newton and Contemporary Mathematical Thought" by A. N. Kolmogorov, and an extensive bibliography with the complete list of Kolmogorov's works--including the articles written for encyclopedias and newspapers. The book is illustrated with photographs and includes quotations from Kolmogorov's letters and conversations, uniquely reflecting his mathematical tastes and opinions. Copublished with the London Mathematical Society. Members of the LMS may order directly from the AMS at the AMS member price. The LMS is registered with the Charity Commissioners. • A. N. Shiryaev -- Andrei Nikolaevich Kolmogorov (April 25, 1903 to October 20, 1987). A biographical sketch of his life and creative paths • V. I. Arnol'd -- On A. N. Kolmogorov • S. M. Nikol'skii -- In memory of A. N. Kolmogorov • Ya. G. Sinai -- Remembrances of A. N. Kolmogorov • P. L. Ul'yanov -- The influence of Andrei Nikolaevich Kolmogorov on my life • P. S. Aleksandrov -- A few words on A. N. Kolmogorov • A. N. Kolmogorov -- Memories of P. S. Aleksandrov • A. N. Kolmogorov -- Newton and contemporary mathematical thought □ II. Encyclopedia articles by Kolmogorov (selected articles - Mathematics, Magnitude, Wiener, Norbert, Hilbert, David) □ III. Articles by Kolmogorov in the journal "Matematika v Shkole" (Mathematics at School) □ V. Articles by Kolmogorov on the theory of poetry and the statistics of text □ VI. Articles by Kolmogorov on popular science □ VII. Newspaper articles by Kolmogorov (selecta) □ IX. Contents of Selected Works of Kolmogorov, Vols. I-III ☆ Vol. I. Mathematics and Mechanics (Kluwer, 1991) ☆ Vol. II. Probability Theory and Mathematical Statistics (Kluwer, 1992) ☆ Vol. III. Information Theory and the Theory of Algorithms (Kluwer, 1993) □ X. Publications about Kolmogorov □ XI. Publications (by various authors) cited in the biographical sketch Andrei Nikolaevich Kolmogorov. Biography. by V. M. Tikhomirov Kolmogorov Remembered by Leonid A. Bassalygo, Roland L. Dobrushin, Mark S. Pinsker Kolmogorov, Andrey Nikolayevich an article from the Encyclopædia Britannica Automata and Life (1961) by A. N. Kolmogorov Prepared for publication and edited by N. G. Khimchenko (Rychkova) (Home Page). N. G. Khimchenko -- How it was... Automata and Life (text) A. N. Kolmogorov -- Automata and Life (theses for his talk) V. M. Tikhomirov -- A few words on topic: "Kolmogorov and cybernetics" V. A. Uspensky -- Kolmogorov, as I remember him A. N. Kolmogorov -- Mathematics - A Science and Profession Collected and prepared for publication by G. A. Galperin Published by Nauka, Moscow, 1988 A. N. Kolmogorov -- Mathematics and its Historical Development Edited by V. A. Uspensky, collected and prepared for publication by G. A. Galperin Published by Nauka, Moscow, 1991 See also the following Russian publications about A. N. Kolmogorov: Curriculum Vitae More photographs of A. N. Kolmogorov at the Kolmogorov School web site and at the Andrey Kolmogorov page of the History of Mathematics Archive. Click on any picture to enlarge. Mathematics : Its Content, Methods and Meaning Edited by A. D. Aleksandrov, A. N. Kolmogorov, and M. A. Lavrent'ev 1120 pages, Three Volumes Bound as One Published September 1999 by Dover This edition reprints in one volume the second edition of this title, which was published in three volumes by The MIT Press in 1969. The original edition was published in 1964, translated from the Russian. Eighteen Russian mathematicians survey the scope of math, from elementary to the advanced levels, writing to educate a lay audience (those with "secondary school mathematics") who are motivated to know more. Discussion includes both the origins and the development of analytic geometry, algebra, ordinary differential equations, partial differential equations, curve and surface theories, prime numbers, probability, functions of a complex variable, linear algebra, non-Euclidean geometry, topology, functional analysis, and groups and other algebraic systems. Selected Works of A.N.Kolmogorov : Mathematics and Mechanics, Volume 1 Edited by V. M. Tikhomirov Selected Works of A.N.Kolmogorov : Probability Theory and Mathematical Statistics, Volume 2 Edited by A. N. Shiryayev Selected Works of A.N.Kolmogorov : Information Theory and the Theory of Algorithms, Volume 3 Edited by A. N. Shiryayev These three volumes devoted to the work of one of the most prominent twentieth-century mathematicians. Throughout his mathematical work, A.N. Kolmogorov (1903-1987) showed great creativity and versatility and his wide-ranging studies in many different areas led to the solution of conceptual and fundamental problems and the posing of new, important questions. His lasting contributions embrace probability theory and statistics, the theory of dynamical systems, mathematical logic, geometry and topology, the theory of functions and functional analysis, classical mechanics, the theory of turbulence, and information theory. The material appearing in each volume was selected by A.N. Kolmogorov himself and is accompanied by short introductory notes and commentaries which reflect upon the influence of this work on the development of modern mathematics. All papers appear in English -- some for the first time -- and in chronological order. The volume contains a significant legacy which will find many grateful beneficiaries amongst researchers and students of mathematics and mechanics, as well as historians of mathematics. Volume I: Mathematics and Mechanics This first volume contains papers in mathematics (excluding probability theory and information theory, which are the subject of the following two volumes), turbulence and classical mechanics. They include his famous paper on everywhere-divergent Fourier series, the concluding work on Hilbert's 13th problem, the fundamentals of the Kolmogorov-Arnold-Moser theory in classical mechanics, the fundamentals of the theory of upper homologies, an original construction of the integral, papers on approximation theory and turbulence, and much more. Volume II: Probability Theory and Mathematical Statistics This second volume contains papers on probability theory and mathematical statistics, and embraces topics such as limit theorems, axiomatics and logical foundations of probability theory, Markov chains and processes, stationary processes and branching processes. Volume III: Information Theory and the Theory of Algorithms This third volume contains original papers dealing with information theory and the theory of algorithms. Comments on these papers are included. Mathematics of the 19th Century: Constructive Function Theory According to Chebyshev, Ordinary Differential Equations, Calculus of Variations, and Theory of Finite Differences Edited by A. N. Kolmogorov and A. P. Yushkevich Mathematics of the 19th Century: Geometry, Analytic Function Theory Edited by A. N. Kolmogorov and A. P. Yushkevich Mathematics of the 19th Century: Mathematical Logic, Algebra, Number Theory, Probability Theory Edited by A. N. Kolmogorov and A. P. Yushkevich Volume I • Function Theory According to Chebyshev 1.2 Functions of Minimal Deviation from Zero 1.3 Continued Fractions • Ordinary Differential Equations 2.1 Summary of the Development of Ordinary Differetial Equations in the Eighteenth-Century 2.2 The Problem of Existence and Uniqueness 2.3 Integration of Equations in Quadratures 2.4 Linear Differential Equations 2.5 The Analytic Theory of Differential Equations 2.6 The Qualitative Theory of Differential Equations • The Calculus of Variations 3.2 Calculus of Variations in the First Half of the Nineteenth Century 3.3 Calculus of Variations in the Second Half of the Nineteenth Century • The Calculus of Finite Differences 4.1 Interpolation 4.2 The Euler-Maclaurin Summation Formula 4.3 Finite-Difference Equations Index of Names Volume II • Geometry 1.1 Analytic and Differential Geometry 1.2 Projective Geometry 1.3 Algebraic Geometry and Geometric Algebra 1.4 Non-Euclidean Geometry 1.5 Multi-Dimensional Geometry 1.6 Topology 1.7 Geometric Transformations • Analytic Function Index of Names Volume III Introduction to the English Translation • Mathematical Logic • Algebra and Algebraic Number Theory • Problems of Number Theory • The Theory of Probability Addendum by O. B. Sheinin Index of Names Introductory Real Analysis by A. N. Kolmogorov and S. V. Fomin Published June 1975 by Dover Elements of the Theory of Functions and Functional Analysis by A. N. Kolmogorov and S. V. Fomin Published March 1999 by Dover The Thirteen Books of Euclid's Elements, Books I-II The Thirteen Books of Euclid's Elements, Books III-IX The Thirteen Books of Euclid's Elements, Books X-XIII (Translated with introduction and commentary by Sir Thomas L. Heath) Published 1956 by Dover What Is Mathematics?: An Elementary Approach to Ideas and Methods by Richard Courant and Herbert Robbins, Revised by Ian Stewart Published 1996 by Oxford University Press (2nd Edition) Translated into Russian by A. N. Kolmogorov and with introduction by A. N. Kolmogorov The Principia : Mathematical Principles of Natural Philosophy by Sir Isaac Newton (New Translation by I. Bernard Cohen and Anne Whitman) Published 1999 by University of California Press Foundations of Geometry by David Hilbert (Translation by Leo Unger) Published 1988 by Open Court (2nd Edition) Foundations of the Theory of Probability by A. N. Kolmogorov The translation is edited by Nathan Morrison Originally published 1933 in German as "Grundbegriffe der Wahrscheinlichkeitrechnung" English translation published 1950 by Chelsea Publishing Most recent 3rd Russian Edition was published 1998 by Phasis, Moscow Full text of the 1st Russian Edition (1936) is available at www.probabilityandfinance.com web page maintained by Vladimir Vovk • Elementary Theory of Probability: 1.1 Axioms 1.2 The relation to experimental data 1.3 Notes on terminology 1.4 Immediate corollaries of the axioms; conditional probabilities; Theorem of Bayes 1.5 Independence 1.6 Conditional probabilities as random variables; Markov chains • Infinite Probability Fields: 2.1 Axiom of continuity 2.2 Borel fields of probability 2.3 Examples of infinite fields of probability • Random Variables: 3.1 Probability functions 3.2 Definition of random variables and of distribution functions 3.3 Multi-dimensional distribution functions 3.4 Probabilities in infinite-dimensional spaces 3.5 Equivalent random variables; various kinds of convergence • Mathematical Expectations: 4.1 Abstract Lebesgue integrals 4.2 Absolute and conditional mathematical expectations 4.3 The Tchebycheff inequality 4.4 Some criteria for convergence 4.5 Differentiation and integration of mathematical expectations with respect to a parameter • Conditional Probabilities and Mathematical Expectations: 5.1 Conditional probabilities 5.2 Explanation of a Borel paradox 5.3 Conditional probabilities with respect to a random variable 5.4 Conditional mathematical expectations • Independence; The Law of Large Numbers: 6.1 Independence 6.2 Independent random variables 6.3 The law of large numbers 6.4 Notes on the concept of mathematical expectation 6.5 The strong law of large numbers; Convergence of a series • Appendix--Zero-or-one law in the theory of probability • Bibliography • Notes to supplementary bibliography • Supplementary bibliography Limit Distributions for Sums of Independent Variables by B. V. Gnedenko and A. N. Kolmogorov Translated from the Russian, annotated, and revised by K. L. Chung With Appendices by J. L. Doob and P. L. Hsu Published by Addison-Wesley, 1954, 1968 (Second Edition) The Kolmogorov Legacy in Physics: A Century of Turbulence and Complexity (Lecture Notes in Physics, 642) Edited by Roberto Livi and Angelo Vulpiani Hardcover: 246 pages; Publisher: Springer Verlag; (February 2004) ISBN: 3540203079 English translation from the French edition L'héritage de Kolmogorov en physique published September 2003 by Belin, Paris Table of Contents Kolmogorov Pathways from Integrability to Chaos and Beyond 3 From Regular to Chaotic Motions through the Work of Kolmogorov 33 Dynamics at the Border of Chaos and Order 61 Kolmogorov's Legacy about Entropy, Chaos, and Complexity 85 Complexity and Intelligence 109 Information Complexity and Biology 123 Fully Developed Turbulence 149 Turbulence and Stochastic Processes 173 Reaction-Diffusion Systems: Front Propagation and Spatial Structures 187 Self-Similar Random Fields: From Kolmogorov to Renormalization Group 213 Financial Time Series: From Batchelier's Random Walks to Multifractal 'Cascades' 229 L'héritage de Kolmogorov en physique Edited by Roberto Livi and Angelo Vulpiani Published September 2003 by Belin, Paris Table des matières (Contents) Préface Yakov G. Sinai Introduction Roberto Livi et Angelo Vulpiani Première partie : CHAOS ET SYSTÈMES DYNAMIQUES • Chapitre 1 LE CHEMINEMENT DE KOLMOGOROV DE L'INTÉGRABILITÉ AU CHAOS ET AU-DELÀ Roberto Livi, Stefano Ruffo, Dima Shepelyansky 1 Une perspective générale 2 Deux degrés de liberté : l'application standard de Chirikov 3 De nombreux degrés de liberté : l'expérience numérique de Fermi, Pasta et Ulam 4 Seuils énergétiques 5 Spectres de Lyapounov et caractérisation de la dynamique chaotique 6 Ordinateurs quantiques et chaos quantique • Chapitre 2 DES MOUVEMENTS RÉGULIERS AUX MOUVEMENTS CHAOTIQUES À TRAVERS LE TRAVAIL DE KOLMOGOROV Alessandra Celletti, Claude Froeschlé, Elena Lega 1 Introduction 2 Mouvements stables 2.1 Systèmes intégrables et non intégrables 2.2 Théorie des perturbations 2.3 Le théorème de Kolmogorov-Arnold-Moser 2.4 La stabilité d'un modèle associé au problème des trois corps 3 Mouvements instables 3.1 Théorème de Nekhoroshev 3.2 Outils pour différencier le chaos de l'ordre 3.3 Représentation du réseau d'Arnold dans un modèle hamiltonien simple • Chapitre 3 DYNAMIQUE À LA FRONTIÈRE ENTRE L'ORDRE ET LE CHAOS Arkady Pikovsky et Michael Zaks 1 Introduction 2 Suite de Thue-Morse : un exemple non trivial de séquence symbolique complexe 3 Attracteurs à spectre fractal : du codage symbolique aux singularités des temps de retour 4 Les spectres fractals en hydrodynamique laminaire 5 Conclusion Deuxième partie : COMPLEXITÉ ALGORITHMIQUE ET THÉORIE DE L'INFORMATION • Chapitre 4 ENTROPIE, CHAOS ET COMPLEXITÉ Massimo Falcioni, Vittorio Loreto, Angelo Vulpiani 1 L'entropie en thermodynamique et en physique statistique 2 L'entropie dans la théorie de l'information 3 L'entropie dans les systèmes dynamiques 4 Complexité algorithmique 5 Complexité et information en linguistique, génomique et finances 5.1 Du jeu en bourse à l'estimation de l'entropie 5.2 Recherche d'informations pertinentes 5.3 Entropie relative et écart entre séquences 5.4 Compression de données et mesures de complexité • Chapitre 5 COMPLEXITÉ ET INTELLIGENCE Giorgio Parisi 1 Complexité algorithmique 2 Quelques propriétés et paradoxes apparents de la complexité 3 La profondeur logique 4 Apprentissage par l'exemple 5 Apprentissage, généralisation et propensions 6 Une approche statistique des propensions 7 Une définition possible de l'intelligence • Chapitre 6 INFORMATION, COMPLEXITÉ ET BIOLOGIE Franco Bagnoli, Franco Bignone, Fabio Cecconi, Antonio Politi 1 Notes historiques 2 Les contributions directes de Kolmogorov 3 Information et biologie 4 Les protéines : un exemple paradigmatique de complexité Troisième partie : TURBULENCE • Chapitre 7 TURBULENCE PLEINEMENT DÉVELOPPÉE Luca Biferale, Guido Beffetta, Bernard Castaing 1 Introduction 2 Théorie de Kolmogorov 1941 2.1 Symétries de Navier-Stokes 2.2 Anomalie dissipative 2.3 Loi des 4/5 et auto-similarité 3 Théorie de Kolmogorov 1962 3.1 Intermittence et loi d'échelle anomale 3.2 Cascade multiplicative 3.3 Approche multifractale 3.4 Tests sur les hypothèses de Kolmogorov 4 L'héritage de Kolmogorov sur la turbulence moderne 4.1 Universalité des fluctuations aux petites échelles 4.2 Turbulence anisotrope • Chapitre 8 TURBULENCE ET PROCESSUS STOCHASTIQUES Antonio Celani, Andrea Mazzino, Alain Pumir 1 Introduction 2 Turbulence d'un scalaire passif 3 Le modèle de Kraichnan et ses prolongements 4 Du côté de la turbulence de Navier-Stokes 5 Conclusion • Chapitre 9 SYSTÈMES DE RÉACTION-DIFFUSION : PROPAGATION DE FRONTS ET STRUCTURES SPATIALES Massimo Cencini, Cristobal Lopez, Davide Vergni 1 Introduction 2 Propagation de front dans l'équation de diffusion non linéaire 3 Systèmes de réaction-diffusion en physique, en chimie et en biologie 3.1 Systèmes de réaction-diffusion à multi composants 3.2 Systèmes d'advection-réaction-diffusion Quatrième partie : APPLICATION DE LA THÉORIE DES PROBABILITÉS • Chapitre 10 CHAMPS ALÉATOIRES AUTOSIMILAIRES : DE KOLMOGOROV AU GROUPE DE RENORMALISATION Giovanni Jona-Lasinio 1 Introduction 2 Bref historique 3 La spirale de Wiener, et les processus apparentés 4 Le groupe de renormalisation : idées générales 5 Le groupe de renormalisation : un point de vue probabiliste 6 Une propriété des champs aléatoires autosimilaires critiques 7 Structure multiplicative 8 Théorèmes limites et universalité des phénomènes critiques 9 Conclusion • Chapitre 11 SÉRIES TEMPORELLES FINANCIÈRES : DES MARCHES ALÉATOIRES DE BACHELIER AUX «CASCADES» MULTIFRACTALES Jean-Philippe Bouchaud et Jean-François Muzy 1 Introduction 2 Caractéristiques universelles des séries temporelles des rendements 3 Des lois d'échelle multifractales aux processus en cascade 3.1 Comportement multi-échelle des rendements d'actifs 3.2 Le paradigme de la cascade 3.3 L'héritage de Kolmogorov, turbulence et finance 4 Marche aléatoire multifractale 5 Conclusion TURBULENCE: The Legacy of A. N. Kolmogorov by Uriel Frisch Published 1994 by Cambridge University Press This textbook presents a modern account of turbulence, one of the greatest challenges in physics. The state-of-the-art is put into historical perspective five centuries after the first studies of Leonardo and half a century after the first attempt by A.N. Kolmogorov to predict the properties of flow at very high Reynolds numbers. Such "fully developed turbulence" is ubiquitous in both cosmical and natural environments, in engineering applications and in everyday life. First, a qualitative introduction is given to bring out the need for a probabilistic description of what is in essence a deterministic system. Kolmogorov's 1941 theory is presented in a novel fashion with emphasis on symmetries (including scaling transformations) which are broken by the mechanisms producing the turbulence and restored by the chaotic character of the cascade to small scales. Considerable material is devoted to intermittency, the clumpiness of small-scale activity, which has led to the development of fractal and multifractal models. Such models, pioneered by B. Mandelbrot, have applications in numerous fields besides turbulence (diffusion limited aggregation, solid-earth geophysics, attractors of dynamical systems, etc). The final chapter contains an introduction to analytic theories of the sort pioneered by R. Kraichnan, to the modern theory of eddy transport and renormalization and to recent developments in the statistical theory of two-dimensional turbulence. The book concludes with a guide to further reading. The intended readership for the book ranges from first-year graduate students in mathematics, physics, astrophysics, geosciences and engineering, to professional scientists and engineers. Table of Contents • Preface • Introduction • Why a probabilistic description of turbulence? • Probabilistic tools: a survey • Two experimental laws of fully developed turbulence • The Kolmogorov 1941 theory • Phenomenology of turbulence in the sense of Kolmogorov 1941 • Intermittency • Further reading: a guided tour • References • Author index • Subject index (See also Kolmogorov's turbulence definitions from his famous K41 paper) K41 A. N. Kolmogorov. Dokl. Akad. Nauk SSSR, 30;4:3201, 1941. An English translation of this paper was recently republished (Translation by V. Levin): A. N. Kolmogorov, The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. Proc. R. Soc. Lond. A, 434:9-13, 1991 in the book Turbulence and Stochastic Processes: Kolmogorov's Ideas 50 Years On (1991) and in the book Selected Papers on Adaptive Optics and Speckle Imaging (1994) An Introduction to Kolmogorov Complexity and Its Applications by Ming Li, Paul Vitanyi (Home page) Published January 1997 by Springer-Verlag New York (2nd Edition) See also Kolmogorov Complexity and Solomonoff Induction Mailing List and Special Issue on Kolmogorov Complexity, The Computer Journal, Volume 42, Issue 4, 1999. Edited by Alexander Gammerman (Home page), and Vladimir Vovk (Home page). Generalized Kolmogorov Complexity by Jürgen Schmidhuber (Home page) Hilbert's 10th Problem (Foundations of Computing) by Yuri V. Matiyasevich (Home page) Published October 1993 by MIT Press See also Hilbert's Tenth Problem page The Honor's Class: Hilbert's Problems and Their Solvers by Benjamin Yandell Published December 2001 by A K Peters, Ltd. See also an original Hilbert's address in German Mathematische Probleme (Vortrag, gehalten auf dem internationalen Mathematiker-Kongreß zu Paris 1900 ) Von David Hilbert Mathematical Problems (in English) (Lecture delivered before the International Congress of Mathematicians at Paris in 1900) by Professor David Hilbert Hilbert's Problems (in Russian) edited and with introduction by P. S. Aleksandrov See also Millenium Prize Problems by Clay Mathematics Institute The Millenium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time by Keith J. Devlin Published October 2002 by Basic Books Russian Mathematicians in the 20th Century Edited by Yakov Sinai Published October 2003 by World Scientific In the 20th century, many mathematicians in Russia made great contributions to the field of mathematics. This invaluable book, which presents the main achievements of Russian mathematicians in that century, is the first most comprehensive book on Russian mathematicians. It has been produced as a gesture of respect and appreciation for those mathematicians and it will serve as a good reference and an inspiration for future mathematicians. It presents differences in mathematical styles and focuses on Soviet mathematicians who often discussed "what to do" rather than "how to do it". Thus, the book will be valued beyond historical documentation. The editor, Professor Yakov Sinai, a distinguished Russian mathematician, has taken pains to select leading Russian mathematicians — such as Lyapunov, Luzin, Egorov, Kolmogorov, Pontryagin, Vinogradov, Sobolev, Petrovski and Krein — and their most important works. One can, for example, find works of Lyapunov, which parallel those of Poincaré; and works of Luzin, whose analysis plays a very important role in the history of Russian mathematics; Kolmogorov has established the foundations of probability based on analysis. The editor has tried to provide some parity and, at the same time, included papers that are of interest even today. The original works of the great mathematicians will prove to be enjoyable to readers and useful to the many researchers who are preserving the interest in how mathematics was done in the former Soviet Union.-- Golden Years of Moscow Mathematics Edited by Smilka Zdravkovska and Peter L. Duren Published January 1994 by American Mathematical Society This volume contains articles on Soviet mathematical history, many of which are personal accounts by mathematicians who witnessed and contributed to the turbulent years of Moscow mathematics. In today's climate of glasnost, the stories can be told with a candor uncharacteristic of the "historical" accounts published under the Soviet regime. An important case in point is the article on Luzin and his school, based in part on documents only recently released. The articles focus on mathematical developments in that era, the personal lives of Russian mathematicians, and political events that shaped the course of scientific work in the Soviet Union. Another important feature is the inclusion of two articles on Kolmogorov, perhaps the greatest Russian mathematician of the twentieth century. The volume concludes with an annotated English bibliography and a Russian bibliography for further reading. This book appeals to mathematicians, historians, and anyone else interested in Soviet mathematical history. • A. P. Yushkevich -- Encounters with mathematicians • S. S. Demidov -- The Moscow school of the theory of functions in the 1930s • E. M. Landis -- About mathematics at Moscow State University in the late 1940s and early 1950s • B. A. Rosenfeld -- Reminiscences of Soviet mathematicians • V. M. Tikhomirov -- A. N. Kolmogorov (see also his Biography) • V. I. Arnol'd -- On A. N. Kolmogorov (see also An Interview with Vladimir Arnol'd) • M. M. Postnikov -- Pages of a mathematical autobiography (1942-1953) • B. A. Kushner -- Markov and Bishop: An essay in memory of A. A. Markov (1903-1979) and E. Bishop (1928-1983) • I. Piatetski-Shapiro -- Étude on life and automorphic forms in the Soviet Union • D. B. Fuchs -- On Soviet mathematics of the 1950s and 1960s • A. B. Sossinsky -- In the other direction • S. S. Demidov -- A brief survey of the literature on the development of mathematics in the USSR • S. S. Demidov -- Bibliography (Russian) Quantum : The Magazine of Math and Science A. N. Kolmogorov co-founded in 1970 and was First Deputy Editor-in-Chief of the Russian popular scientific journal for students 'Kvant'. Current Editor-in-Chief is Yu. A. Osipian English translation Quantum was published from 1990 to 2001 by the National Science Teachers Association (NSTA). It was a lively, handsomely illustrated bimonthly magazine of math and science (primarily physics). Full texts of many issues of 'Kvant' is currently avalable from the Russian web site of MCCME, Moscow Center for Continuous Mathematical Education. See also "Math in Moscow" - a program in English for undergraduates and graduate students at the Independent University of Moscow. Full texts of A. N. Kolmogorov's articles in Russian journal Kvant (in Russian): Algebra and Elements of Analysis (A High School Textbook for 10-11 grades) Edited by A. N. Kolmogorov, A. M. Abramov, Yu. P. Dudnitsyn, B. M. Ivlev, S. I. Shvatsburg Published by Prosveshchenie, 2001 (11th Edition) See also Algebra by I. M. Gelfand and A. Shen Full text (in Russian) of the article by G. V. Pukhova A. N. Kolmogorov and Summer School at Lake Rubskoye at web site of the Math Department, Ivanovo State University. See also full text (in Russian) An Inroduction to the book 'A Summer School at Lake Rubskoye', Published 1971 by A. N. Kolmogorov, I. G. Zhurbenko, G. V. Pukhova, O. S. Smirnova, S. V. Smirnov. Theory of Probability and Its Applications A. N. Kolmogorov founded in 1956 and was Editor-in-Chief of the Russian journal 'Teoriya Veroyatnostei i ee Primeneniya'. Current Editor-in-Chief is Yu. V. Prokhorov who is Kolmogorov's student. Theory of Probability and Its Applications is a translation of the Russian journal Teoriya Veroyatnostei i ee Primeneniya, which contains papers on the theory and application of probability, statistics, and stochastic processes. Russian Mathematical Surveys A. N. Kolmogorov was a founding member of the editorial board of the Russian journal 'Uspekhi Matematicheskikh Nauk' from 1934 till his death in 1987. He was Editor-in-Chief from 1946 till 1954 and from 1982 to 1987. Current Editor-in-Chief is S. P. Novikov Russian Mathematical Surveys is the English translation of the Russian bimonthly journal Uspekhi Matematicheskikh Nauk, founded in 1936. The English language version is a cover-to-cover translation of all the material: that is, the survey articles, the Communications of the Moscow Mathematical Society, and the biographical material. Portraits of A. N. Kolmogorov by his former student Dima Gordeev (click to enlarge) Kolmogorov 1 2 3 . . . . . . . . . . . . . . . State of the Art Internat #18 alumni web site, alumni club, and questionnaire. Kolmogorov Specialized Physics & Mathematics School - Internat #18 at Moscow State University School was founded by A. N. Kolmogorov in 1964. A. N. Kolmogorov was life-long Chairman of the Board of Trustees. The School was named after A. N. Kolmogorov in 1988. A. N. Kolmogorov was student (1920-1925), graduate student (1925-1929), researcher (1929-1931), and professor (1931-1987) at the University. MechMath - Faculty of Mechanics and Mathematics A. N. Kolmogorov was Dean of the MechMath Faculty (1954-1958) and Head of the Mathematics Division (1954-1956 and 1978-till his death in 1987). From 1931 A. N. Kolmogorov was Head of graduate school of MechMath Faculty - Director of the Institute of Mathematics and Mechanics. In 1951 he was once again appointed Director of the Institute of Mathematics. Department of Probability Theory A. N. Kolmogorov was Founder (1935) and first Head of the Department (1935-1966). From 1966 to 1995 head of the chair was his student B. V. Gnedenko. From 1996 another student of A. N. Kolmogorov professor A. N. Shiryaev became head of the chair. Department of Mathematical Logic and Theory of Algorithms A. N. Kolmogorov was second Head of the Department of Mathematical Logic (1980-1987) after first Head A. A. Markov (1959-1979). From 1988 to 1993 head of the chair was V. A. Mel'nikov. Head of the Department since January 1995 is professor V.A.Uspensky who is Kolmogorov's student. Russian Academy of Sciences A. N. Kolmogorov was elected Full Member (Academician) of the Division of Mathematical and Natural Sciences since 29.01.1939. He was elected Member of the Presidium of Academy and Head (Academician-Secretary) of the Division of Physical and Mathematical Sciences. Steklov Mathematical Institute of the Russian Academy of Sciences Department of Probability and Mathematical Statistics of the Steklov Mathematical Institute Founded by A. N. Kolmogorov in 1938 who was Head of Department until 1958 (excluding 1946-1948 when A. Ya. Khinchin occupied this position). After 1960 it is headed by Yu. V. Prokhorov who is Kolmogorov's student. A. N. Kolmogorov was Head of the Turbulence Laboratory (1946-1949) at the O. Yu. Shmidt Institute of Theoretical Geophysics of the Russian Academy of Sciences. His student A. M Obukhov became Head of the Turbulence Laboratory in 1949 and later Founding and life-long Director of the Institute of Atmospheric Physics of the Russian Academy of Sciences. His other student A. S. Monin became Director of the Institute of Oceanology of the Russian Academy of Sciences. A. N. Kolmogorov was President of the Moscow Mathematical Society from 1964 to 1966 and from 1973 to 1985. P. S. Aleksandrov was President of the Moscow Mathematical Society from 1932 to 1964. Current President of the Moscow Mathematical Society is V. I. Arnol'd who is Kolmogorov's student. Small Hall of the Conservatory of Moscow Academic Philarmonia P. S. Aleksandrov and A. N. Kolmogorov were the life-long Season Tickets holders... Johann Sebastian Bach: Concerto for 2 violins & strings in D minor ("Double"), BWV 1043 (P. S. Aleksandrov's and A. N. Kolmogorov's favorite) Wolfgang Amadeus Mozart: Symphony No.40 in G minor, K.550 (P. S. Aleksandrov's and A. N. Kolmogorov's favorite) Johann Sebastian Bach: St. Matthew Passion Christoph Willibald Gluck: Orfeo ed Euridice 1. Gift of 59 books from the Class of 1976, 1 September 2002. 2. Gift of 9 books from the Class of 1976, 7 March 2003. 3. Gift of ~80 books from the Class of 1976, 11 June 2003. 4. Gift of ~35 books from the Class of 1976, 6 December 2003. Some recent additions - Gifts to the Kolmogorov Library. 5, 6, 7, ... Thanks for your support. Send your comments and suggestions to alex@kolmogorov.com Kolmogorov Library Project Moscow Mathematical Society | Conferences | Portraits | Photographs | Bibliography | Journals | Articles | Papers | Books | Works | Students | School | Library Vladimir I. Arnol'd| David Bohm| Lewis Carroll| S. Chandrasekhar| Jonh H. Conway| Richard Courant| H. S. M. Coxeter| Keith J. Devlin| Paul Dirac| Freeman Dyson| Paul Erdös| Euclid| Leonhard Euler| Enrico Fermi| Richard P. Feynman| George Gamow| Martin Gardner| Carl Friedrich Gauss| Israel M. Gelfand| Vitaly Lazarevich Ginzburg| James Gleick| Richard K. Guy| Stephen Hawking| Werner Heisenberg| David Hilbert| Douglas R. Hofstadter| Ross Honsberger| Felix Klein| Thomas S. Kuhn| Lev Davidovich Landau| Benoit Mandelbrot| Sir Isaac Newton| Roger Penrose| Ivars Peterson| Clifford A. Pickover| George Pólya| Karl R. Popper| Constance Reid| Bertrand Russell| Julian Schwinger| Ian Stewart| Kip S. Thorne| Stanislaw M. Ulam| James Watson| Steven Weinberg| John Archibald Wheeler| Alfred North Whitehead| Ludwig Wittgenstein| A.M. & I.M. Yaglom| Dolciani Mathematical Expositions| Dover Books| Dover Phoenix Editions| Helix Books| Advanced Book Classics| Cambridge Mathematical Library| Cambridge Texts in Applied Mathematics| Classics in Mathematics| History of Mathematics| Princeton Science Library| Princeton Series in Physics| Frontiers in Physics Series| Kolmogorov Library Wish List| FizMatKniga| Springer Mathematics| Springer Synergetics| Graduate Texts in Mathematics| Problem Books in Mathematics| Book List from the Notices of the American Mathematical Society| Amazon.com | FinMath.com| Friends of Phystech| Fizteh Library| State of the Art| Kolmogorov.pms.ru| Kolmogorov.com Support Kolmogorov.com
{"url":"http://kolmogorov.com/Kolmogorov.html","timestamp":"2014-04-20T00:50:15Z","content_type":null,"content_length":"100210","record_id":"<urn:uuid:66213f80-718b-4d0b-84c5-a71722def63d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: physics/990503012May1999 On the Gravitational Field of a Mass Point according to Einstein's Theory by K. Schwarzschild (Communicated January 13th, 1916 [see above p. 42].) translation and foreword by S. Antoci and A. Loinger Foreword. This fundamental memoir contains the ORIGINAL form of the solution of Schwarzschild's problem. It is regular in the whole space-time, with the only exception of the origin of the spatial co-ordinates; consequently, it leaves no room for the science fiction of the black holes. (In the centuries of the decline of the Roman Empire people said: "Graecum est, non legitur"...). §1. In his work on the motion of the perihelion of Mercury (see Sitzungsberichte of November 18th, 1915) Mr. Einstein has posed the following problem: Let a point move according to the prescription: ds = 0,
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/381/4700450.html","timestamp":"2014-04-18T19:06:34Z","content_type":null,"content_length":"7901","record_id":"<urn:uuid:1eba2c83-a345-400b-a8a8-a27b9f67e0ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: HUB CONE FOR AN AIRCRAFT ENGINE Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A hub cone (20, 30, 40) is provided for an aircraft engine having a propeller (2, 32, 42) or a blower (fan) (3) enveloped by a casing (5). To reduce flow losses of the hub cone and increase efficiency of the engine, a contour of the hub cone is described by the following equation: S(x)=R /M; where: S(x) is a shape of the cone defined along the machine axis x; R is a maximum extension (26) of the cone in the radial direction; L is a maximum extension (25) of the cone in the direction of the machine axis x and M is a quantity describing the shape S(x). A hub cone for an aircraft engine having at least one of a propeller and a blower enveloped by a casing, wherein a contour of the hub cone is described by the following equation:S(x)=R * {1-[(x-L /M where:S(x): is a shape of the cone, defined along the machine axis x;R : is a maximum extension of the cone in a radial direction r;L : is a maximum extension of the cone in the direction of the machine axis x;M: is a quantity describing the shape S(x). The hub cone of claim 1, wherein M is a positive real number. The hub cone of claim 2, wherein a set of cone shapes is defined by suitable selection of the value of M. The hub cone of claim 3, wherein the set of cone shapes so defined includes all values of M between 50 and 1.98. 5. The hub cone of claim 4, wherein the values of M are between 89 and The hub cone of claim 1, wherein a set of cone shapes is defined by suitable selection of the value of M. The hub cone of claim 6, wherein the set of cone shapes so defined includes all values of M between 50 and 1.98. 8. The hub cone of claim 7, wherein the values of M are between 89 and This application claims priority to German Patent Application DE 10 2008 055 631.9 filed Nov. 3, 2008, the entirety of which is incorporated by reference herein. This invention relates to a hub cone for an aircraft engine. An aircraft engine having a propeller or a blower (fan) enveloped by a casing is characterized in that, upstream of the propeller or blower, a hub cone is arranged which is designed such that the inflow is conducted as favorably as possible onto the hub radius of the blower or propeller, respectively. The hub cone co-rotates with the propeller or blower, respectively. When the inflow passes over the contour of the hub cone, a boundary layer is generated along the contour. This boundary layer, which increases with the running length, effects that the blower or propeller hub, respectively, is unfavorably approached by the inflow, i.e. very slowly and at a steep angle of incidence. For the state of the art for turboprop aircraft engines, reference is made to Specifications U.S. Pat. No. 4,796,424A and US 2004179941A. The approaching boundary layer, which has a thickness in the millimeter range, affects how the blower or propeller hub profiles, respectively, are unfavorably reached by the inflow, i.e. very slowly and at a steep angle of incidence. This is undesirable as a steep inflow promotes flow separation on the hub-near blade profiles. This leads to losses and unfavorable outflow of the blower or propeller, respectively. The losses reduce efficiency, and the unfavorable outflow affects the efficiency and the flow conditions on the downstream engine components. In a broad aspect, this invention provides a hub cone for an aircraft engine which avoids the above described disadvantages. It is a particular object of the present invention to provide solution to the above problems by disclosing a hub cone contour which can be described by the following equation: * {1-[(x-L S(x): Is a shape of the cone, defined along the machine axis x (the horizontal cone axis); : Is a maximum extension of the cone in the radial direction, i.e. vertically to the machine axis in the direction of the radial axis r; : Is a maximum extension of the cone in the direction of the machine axis, respectively; M: Is a quantity describing the shape S(x). M is a positive real number. In order to minimize the boundary layer thickness, the present invention provides definition of a set of optimum shapes of the geometry of the hub or cone, respectively. The hub geometry can be defined by the mathematical equation such that the course thereof is continuous and monotonous and, accordingly, the boundary layer is not disturbed. As per the above equation, a set of cone shapes producing minimum loss and minimum boundary layer thickness can be defined by suitable selection of the value of M. The set of curves so defined includes all values and all ranges of M between 1.50 and 1.98. Minimum loss is produced by those shapes which are generated with values of M ranging between 1.89 and 1.945. Optimum limiting contours of the shape curve S(x) range between M=1.50 and 1.98. The hub shapes produced with the equation provided and the values specified for the quantity M, which describes the shape S(x), provide, in a test case, a 17 percent lower loss than, for example, an elliptical hub contour as it is commonly selected for the hub contour of a propeller. Examples of an optimized contour are hereinafter described in further detail. The optimum hub contour results in improved inflow of the propeller or blower, respectively, whose efficiency is thereby increased, and reduces the drag coefficient of the cone. Both results in reduced fuel consumption. According to the present invention, the quantity M is a positive real number. Furthermore, a set of cone shapes is defined by suitable selection of the quantity M, with the defined set of curves including all values of M between 1.50 and 1.98. Finally, with values of M between 1.89 and 1.945, a hub cone can be described which enables minimum loss of power of the aircraft engine to be Embodiments of the hub cone in accordance with the present invention for an aircraft engine having a propeller or a blower (fan) enveloped by a casing are illustrated in the following drawings: FIG. 1 (Prior Art) shows a hub cone for a propeller in accordance with the state of the art, [0018]FIG. 2 (Prior Art) shows a hub cone for a blower (fan) enveloped by a casing in accordance with the state of the art, FIG. 3 (Prior Art) shows the boundary layer thickness on a hub cone in accordance with the state of the art as per FIG. 1 or FIG. 2 FIG. 4 shows the angles of incidence and the inflow on a hub cone in accordance with the state of the art as per FIG. 1 or FIG. 2 FIG. 5 (Prior Art) shows the hub profile and the sense of rotation on a propeller in accordance with the state of the art as per FIG. 1, FIG. 6 shows definitions of the shape of the cone in the x-y-coordinate system, FIG. 7 shows the optimum cone shapes determined in accordance with the inventive equation as cone shape S(x) versus the x-axis, FIG. 8 shows an embodiment of a hub cone in accordance with the present invention with single propeller in tractor configuration, and [0025]FIG. 9 shows an embodiment of a hub cone in accordance with the present invention with dual propeller in pusher configuration. FIG. 1 (Prior Art) shows an essentially elliptical hub cone 1 with a propeller 2 of a turboprop aircraft engine in accordance with the state of the art. FIG. 2 (Prior Art) shows an essentially elliptical hub cone 1 with a blower (fan) 3 enveloped by a casing 4 of an aircraft engine according to the state of the art. The inflow is indicated by an arrowhead 5 each. The aircraft engine with the propeller (2) (FIG. 1) or the blower (fan) 3 enveloped by the casing 4 ( FIG. 2 ) is characterized in that, upstream of the propeller 2 or the blower 3, a hub cone 1 with regularly elliptical shape is arranged which should be designed such that the inflow is as favorably as possible conducted onto the hub radius of the propeller 2 or the blower 3, respectively. The hub cone 1 co-rotates with the propeller 2 or blower 3, respectively. When the inflow passes over the contour of the hub cone 1, a boundary layer 6 (FIG. 3) is generated along the contour. This boundary layer 6, which increases with the running length, affects how the hub cone 1 of the propeller 2 or the blower 3, respectively, is unfavorably approached by the inflow, i.e. very slowly and at a steep angle of incidence. The approaching boundary layer 6, which has a boundary layer thickness 7 (FIG. 3) in the millimeter range, affects how, on state-of-the-art hub cones 1 according to FIGS. 1 and 2, the profiles of the hub cones 1 of the propeller 2 or the blower 3, respectively, are unfavorably approached, i.e. with a very slow inflow 10 and at a steep angle of incidence 8 (see FIG. 4). This is undesirable since a steep inflow, or a steep angle of incidence 8, promotes flow separation on the profiles of the propeller 2 or the blower 3, respectively, situated near the cone 1. This leads to losses and unfavorable outflow of the propeller 2 or the blower 3, respectively. The losses reduce efficiency, and the unfavorable outflow impairs the efficiency and the flow conditions on the downstream engine More favorable is a desired flatter angle of incidence 9 with desired faster inflow 11 as also shown in FIG. 4. FIG. 5 shows a hub profile 12 and the sense of rotation 13. In order to achieve this and to minimize the boundary layer thickness 7, provision is made for the definition of a set of optimum shapes of the geometry of the hub cone 20 in accordance with the present invention. The geometry of the hub cone 20 can be described and defined by the following mathematical equation, such that the course of the hub cone 20 is continuous and monotonous and, accordingly, the boundary layer is not disturbed. with definition being provided for the following quantities in the S(x): Is a shape of the cone 20 defined along the machine axis x (the horizontal cone axis), : Is a maximum extension of the cone 20 in the radial direction, i.e. vertically to the machine axis x in the direction of the radial axis r, : Is a maximum extension of the cone 20 in the direction of the machine axis x, respectively, M: Is a quantity describing the shape S(x). M is a positive real number. To define the geometrical shape of the hub cone 20 according to the present invention, FIG. 6 shows the origin of the x-r-coordinate system at the tip 21 of the hub cone 20, the r-axis 22 extending radially and vertically to the machine axis 23, the outer contour 24 of the hub cone 20 and the maximum extensions 25 and 26 of the hub cone 20 in x-direction or in r-direction, respectively. In order to minimize the boundary layer thickness 7, the present invention provides definition of a set of optimum shapes of the geometry of the hub cone 20. The hub cone geometry can be defined by the mathematical equation such that the course thereof is continuous and monotonous and, accordingly, the boundary layer is not disturbed. As per the above equation, a set of cone shapes producing minimum loss and minimum boundary layer thickness can be defined by suitable selection of the value of M. The set of curves so defined includes all values of M between 1.50 and 1.98 as per FIG. 7. Minimum loss is produced by those shapes which are generated with values of M ranging between 1.89 and 1.945. Optimum limiting contours of the shape curve S(x) range between M=1.50 and 1.98. FIG. 7 shows a further shape curve S(x) for M=1.7. The hub shapes produced with the equation provided and the values specified for the quantity M, which describes the shape S(x), provide, in a test case, a 17 percent lower loss than, for example, an elliptical hub contour as it is commonly selected for the hub contour of a propeller. The optimum hub contour results in improved inflow of the propeller 2 or blower 3, respectively, whose efficiency is thereby increased, and reduces the drag coefficient of the hub cone 20. Both results in reduced fuel consumption. An example of a hub cone 30 optimized according to the present invention with hub body 33 and machine axis 34 is shown in FIG. 8 for a single propeller 32 in tractor configuration. Here, the hub cone rotates. A further example of a hub cone 40 optimized according to the present invention with hub body 43 and machine axis 44 is shown in FIG. 9 for a dual propeller 42 in pusher configuration. Here, the hub cone is stationary. LIST OF REFERENCE NUMERALS [0040] 1 Hub cone 2 Propeller 3 Blower (fan) 4 Casing 5 Approaching flow 6 Boundary layer 7 Boundary layer thickness 8 Steep angle of incidence 9 Desired angle of incidence 10 Slow inflow 11 Desired inflow 12 Hub profile 13 Sense of rotation 20 Hub cone 21 Tip of cone 22 r-axis 23 Machine/cone axis 24 Outer contour of hub cone 25 Max. extension in x-direction 26 Max. extension in r-direction 30 Optimized hub cone 32 Propeller 33 Hub body 34 Machine/cone axis 40 Optimized hub cone 42 Propeller 43 Hub body 44 Machine/cone axis Patent applications by Carsten Clemen, Mittenwalde DE Patent applications by Rolls-Royce Deutschland Ltd & Co KG Patent applications in class Spinner or fairwater cap Patent applications in all subclasses Spinner or fairwater cap User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20100111702","timestamp":"2014-04-24T20:08:38Z","content_type":null,"content_length":"39299","record_id":"<urn:uuid:186b388a-9a96-4398-8419-8772afae6224>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
Analyzing Warehouse-Retailer Interaction using a Modified Economic Order Quantity (EOQ) Model Abstract (Summary) This thesis analyzes the interaction between the warehouse and the retailer to find the characteristics that affect the time between orders and the order quantities of the retailer. The classic EOQ model was altered to explicitly include the operating costs at the warehouse and the transportation costs and this, accounts for multiple SKUs and multiple unit loads. This modified EOQ model is referred to as the Economic Order Frequency (EOF) model. Using the design of a typical warehouse, equations for detailed calculation of the individual unit load costs at the warehouse were developed. These equations were then incorporated into the total cost equation of the EOF model. This model was tested by changing values of time between orders using Excel and VBA and the change in optimal order frequencies and minimum total costs were analyzed. ANOVA was used to analyze the effect that different costs had on the order frequency and the minimum total cost. The holding cost per item was found to have the most significant effect on the order frequency and the minimum total cost. Therefore, if the holding cost at the retailer was too high, he/she would order more frequently causing more work at the warehouse but preventing the retailer from incurring high inventory costs. The transportation cost also has a significant effect on the order frequency and the total cost. This indicates that the distance of the retailer from the warehouse is an important factor to be considered while determining orders. The quantity ordered also has an effect on the transportation cost, since a little more than the truck capacity will require an additional truck that will increase the minimum total cost. Hence a balance between the transportation costs and the order quantity has to be considered as well. The labor rate has been found to have no significant effect on the order frequency but has an effect on the minimum total cost when considered with other interactions thus indicating that for a given order frequency as the labor cost increases, the total cost increases (i.e. the retailer will pay more for the same quantity as the warehouse costs increase). Bibliographical Information: School:Ohio University School Location:USA - Ohio Source Type:Master's Thesis Keywords:supply chain inventory order fulfilment frequency multiple skus unit loads Date of Publication:01/01/2004
{"url":"http://www.openthesis.org/documents/Analyzing-Warehouse-Retailer-Interaction-using-539100.html","timestamp":"2014-04-19T04:26:24Z","content_type":null,"content_length":"10090","record_id":"<urn:uuid:97ce5dfb-f46e-4633-89f0-e9c86a2e8f83>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Fit a Model to Complex-Valued Data Data Model The data model is a simple exponential: The x is input data, y is the response, and v is a complex-valued vector of coefficients. The goal is to estimate v from x and noisy observations y. Artificial Data with Noise Generate artificial data for the model. Take the complex coefficient vector v as [2;3+4i;-.5+.4i]. Take the observations x as exponentially distributed. Add complex-valued noise to the responses y. rng default % for reproducibility N = 100; % number of observations v0 = [2;3+4i;-.5+.4i]; % coefficient vector xdata = -log(rand(N,1)); % exponentially distributed noisedata = randn(N,1).*exp((1i*randn(N,1))); % complex noise cplxydata = v0(1) + v0(2).*exp(v0(3)*xdata) + noisedata; Fit the Model to Recover the Coefficient Vector The difference between the response predicted by the data model and an observation (xdata for x and response cplxydata for y) is: objfcn = @(v)v(1)+v(2)*exp(v(3)*xdata) - cplxydata; Use either lsqnonlin or lsqcurvefit to fit the model to the data. This example first uses lsqnonlin. Because the data is complex, set the Algorithm option to 'levenberg-marquardt'. opts = optimoptions(@lsqnonlin,... x0 = (1+1i)*[1;1;1]; % arbitrary initial guess [vestimated,resnorm,residuals,exitflag,output] = lsqnonlin(objfcn,x0,[],[],opts); vestimated = 2.1581 + 0.1351i 2.7399 + 3.8012i -0.5338 + 0.4660i resnorm = exitflag = ans = lsqnonlin recovers the complex coefficient vector to about one significant digit. The norm of the residual is sizable, indicating that the noise keeps the model from fitting all the observations. The exit flag is 3, not the preferable 1, because the first-order optimality measure is about 1e-3, not below 1e-6. Alternative: Use lsqcurvefit To fit using lsqcurvefit, write the model to give just the responses, not the responses minus the response data. objfcn = @(v,xdata)v(1)+v(2)*exp(v(3)*xdata); Use lsqcurvefit options and syntax. opts = optimoptions(@lsqcurvefit,opts); % reuse the options [vestimated,resnorm] = lsqcurvefit(objfcn,x0,xdata,cplxydata,[],[],opts) vestimated = 2.1581 + 0.1351i 2.7399 + 3.8012i -0.5338 + 0.4660i resnorm = The results match those from lsqnonlin, because the underlying algorithms are identical. Use whichever solver you find more convenient. Alternative: Split Real and Imaginary Parts To use the trust-region-reflective algorithm, such as when you want to include bounds, you must split the real and complex parts of the coefficients into separate variables. For this problem, split the coefficients as follows: Write the response function for lsqcurvefit. function yout = cplxreal(v,xdata) yout = zeros(length(xdata),2); % allocate yout expcoef = exp(v(5)*xdata(:)); % magnitude coscoef = cos(v(6)*xdata(:)); % real cosine term sincoef = sin(v(6)*xdata(:)); % imaginary sin term yout(:,1) = v(1) + expcoef.*(v(3)*coscoef - v(4)*sincoef); yout(:,2) = v(2) + expcoef.*(v(4)*coscoef + v(3)*sincoef); Save this code as the file cplxreal.m on your MATLAB^® path. Split the response data into its real and imaginary parts. ydata2 = [real(cplxydata),imag(cplxydata)]; The coefficient vector v now has six dimensions. Initialize it as all ones, and solve the problem using lsqcurvefit. x0 = ones(6,1); [vestimated,resnorm,residuals,exitflag,output] = ... vestimated = resnorm = exitflag = ans = Interpret the six-element vector vestimated as a three-element complex vector, and you see that the solution is virtually the same as the previous solutions.
{"url":"http://www.mathworks.com/help/optim/ug/fit-model-to-complex-data.html?nocookie=true","timestamp":"2014-04-23T14:49:41Z","content_type":null,"content_length":"40318","record_id":"<urn:uuid:7c7b0ef6-9dba-4fe2-9f9b-935f091ccb37>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
A couple Algebra 1 problems (polynomial division, mixture problem) April 29th 2012, 03:18 PM #1 Apr 2012 Divide. (x^3 - 3x^2 + 3x + 1) divided by (x - 2) A solution containing 20% acid is mixed with a solution containing 10% acid to make 200 liters of a solitution containing 12% acid. How much of the 10% solution was used? Also... would √(t-2)^2 be just t-2 or the absolute value of t-2? Re: A couple Algebra 1 problems (polynomial division, mixture problem) Divide. (x^3 - 3x^2 + 3x + 1) divided by (x - 2) use synthetic division A solution containing 20% acid is mixed with a solution containing 10% acid to make 200 liters of a solitution containing 12% acid. How much of the 10% solution was used? (200-x)(.20) + x(.10) = 200(.12) solve for x Also... would √(t-2)^2 be just t-2 or the absolute value of t-2? √(t-2)^2 = |t-2| Re: A couple Algebra 1 problems (polynomial division, mixture problem) Re: A couple Algebra 1 problems (polynomial division, mixture problem) Synthetic Division Re: A couple Algebra 1 problems (polynomial division, mixture problem) If you don't know "synthetic division", just use regular division (synthetic division is really just a short cut). How many times does x divide into $x^3$? Now, just as you do in division of numbers multiply that quotient by the divisor, x- 2, and subtract from $x^3- 3x^3- 3x+ 1$ to get the next dividend and continue. "A solution containing 20% acid is mixed with a solution containing 10% acid to make 200 liters of a solution containing 12% acid. How much of the 10% solution was used?" Let x be the amount of 10% solution. Then 200- x is the amount of 20% solution. The amount of acid in the 10% solution is .1x and the amount of acid in the 20% solution is .2(200- x)= 40- .2x so the total amount of acid is .1x+ 40- .2x= 40- .1x. The amount of acid in 200 liters of 12% solution is .12(200)= 24 so you must have 40- .1x= 24. Solve for x. April 29th 2012, 03:23 PM #2 April 29th 2012, 04:00 PM #3 Apr 2012 April 29th 2012, 04:23 PM #4 April 29th 2012, 06:07 PM #5 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/algebra/198109-couple-algebra-1-problems-polynomial-division-mixture-problem.html","timestamp":"2014-04-16T06:06:39Z","content_type":null,"content_length":"48897","record_id":"<urn:uuid:62b3a239-15cc-43d0-b426-26eddfedeb0b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by JOSE Total # Posts: 346 A piece of a broken wheel is shown. It was taken to a machine shop to be replaced with a new whole wheel. Find the radius of the wheel AC=10cm, BD=3 cm and D is the midpoint of AC. Round to the nearest hundredth. The curvature of a circle with radius r is defined as Descartes formula gives a relationship between the curvatures of the four circles. What are Descartes formula and the curvature of a circle? What does the sign of the curvature of a circle mean? State Descartes&... An organization is planning a dance. The band will cost $400 and advertising costs will be $100. Food will be supplied at $2 per person. a. How many people need to attend the dance in order for the organization to break even if tickets are sold at $7 each? b. How many people ... The percent grade of a highway is the amount that the highway raises or falls in a given horizontal distance. For example, a highway with a four percent grade rises 0.04 mile for every 1 mile of horizontal distance. a. How many feet does a highway with a six percent grade rise... A piece of a broken wheel is shown. It was taken to a machine shop to be replaced with a new whole wheel. Find the radius of the wheel AC=10cm, BD=3 cm and D is the midpoint of AC. Round to the nearest hundredth. If boys and girls are equally likely to be born, what is the probability that in a randomly selected family of 6 children, there will be at least one boy? (Find the answer using a formula. Round your answer to three decimal places.) There are 2 leaves along 3 in.of an ivy vine. There are 14 leaves along 15 in. of the same vine. How many leaves are there along 6 in. of the vine? Becca wants to make a giant cherry pie to try to break the world record. If she succeeds in making a pie with a 20 foot diameter, what will be the total distance around the pie? How many square inches of crust would it take to cover the bottom of a pie dish with a 15 diameter? Pam served her apple pie on a 13 inch diameter dish. She wanted to tie a ribbon around the dish to make it a little more festive. How long does the ribbon need to be in order to fit around the dish? Fitness Topics When you massage an exercised muscle, you help to A. increase the efficiency for the removal of wastes. B. decrease the amount of oxygen moving to the muscle. C. decrease the amount of blood moving to the muscle. D. increase its lifting capacity. im going with B Scientists performed an experiment to determine whether there is a connection between learning ability and food. They took two groups of 20 mice each, all from the same purebred strain. The mice were deprived of food for 3 days and then given a standard learning session in run... Mr. Smith currently produces 45,000 bushels of potatoes a year. He Can increase his harvest by an average rate of 3% annually. How many years will it take until he can produce 50,000 bushels per Find the angel in degrees required to shoot a cannon a distance of 200 yards. It's speed is 400 mps. Given the equation of the circle (x 9)2 + y2 = 484 , where is the center of the circle located at? which of the following is the best course of action when it comes to using natural resources can someone post the ecuations? thanks Find two numbers the exact answer is between. 6x7381 So far i have gotten answers that i already knew. I know that 224=16t^2 goes to 14=t^2 but what squared equals 14? That is my question, i don't know what t equals when 224=16t^2 A ball is dropped from a cliff that is 224 feet high. The distance S (in feet) that it falls in t seconds is given by the formula S=16t^2. How many seconds (to tenths) will it take for the ball to hit the ground? how do you use atlas in more than one Business Investigation Expenditures. During January and February of the current year, Big Bang LLC incurs $3,000 in travel, feasibility studies, and legal expenses to investigate the feasibility of opening a new entertainment gallery in one of the new suburban malls in town. B... A volume of 40.0mL of aqueous potassium hydroxide (KOH) was titrated against a standard solution of sulfuric acid (H2SO4). What was the molarity of the KOH solution if 21.7mL of 1.50 M H2SO4 was needed? The equation is 2KOH(aq)+H2SO4(aq)→K2SO4(aq)+2H2O(l) What is the unit rate? 1500 meters in 6 seconds? Based on the following information, calculate the coefficient of variation and select the best investment based on the risk/reward relationship. Std Dev. Exp. Return Company A 10.4 15.2 Company B 14.6 22.9 Based on the following information, calculate the required return based on the CAPM: Risk Free Rate = 3.5% Market Return =10% Beta = 1.08 THERE ARE 2 SPINNERS ONE IS LABLED A-D THE OTHER IS LABLED 1-8. WHAT IS THE PROBABILITH OF SPINNING EITHER AN A OR B AND EITHER A 1 OR 2. ANSWERS GIVEN ARE 0.0625,0.116,0.125,AND 0.18 college algebra College Chemistry For each trial, enter the amount of heat lost by the calorimeter, qcalorimeter. Be careful of the algebraic sign here and remember that the change in temperature is equal to the final temperature minus the initial temperature. Report your answer using 4 digits. Note, this is 1... College Chemistry For each trial, enter the amount of heat lost by the calorimeter, qcalorimeter. Be careful of the algebraic sign here and remember that the change in temperature is equal to the final temperature minus the initial temperature. Report your answer using 4 digits. Note, this is 1... Assume that 60% of community college students are female. 1) If we pick 10 students at random justify why we can or cannot model this using a normal model. 2) Find the probability that 8 of the 10 students selected are female. 3) Find the probability that 8 of more of the 10 s... A study of two groups of children since early childhood: One group of 62 attended preschool. A control group of 61 children from the same area and similar backgrounds did not attend preschool. Over 10 year period as adults, 38 of the preschool sample and 49 of the control samp... 1. Which metal is the most easily oxidized? A.) highly active metal *B.) moderately active metal C.) slightly active metal D.) an inactive metal 2. What happens in a voltaic cell? *A.) Chemical energy is changed to a electrical energy. B.) Electrical energy is changed to chem... joe draws a line segment 2 1/4 inches long. then he makes the line segment 1 1/2 inches longer. how long is the line segment now? Physics please help!! Assume a circular orbit. If a planet's distance from it star is multiplied by 25 times, then by how many times is its orbital period T multiplied ? Physics please help!! A satellite is in a circular orbit very close to the surface of a spherical planet. The period of the orbit is 2.23 hours. What is density of the planet? Assume that the planet has a uniform density. I'm Try a But the b is: ((sigma*x)/(2*epsilon_0))*(1/(sqrt(x^2+R_1^2))-1/(sqrt(x^2+R_2^2))) what do I use for a and k in this situation? i tried taking the derivative but no answers i come up make sense based on the problem. The number, N, of people who have heard a rumor spread by mass media by time, t, is given by N(t)=a(1−e−kt). There are 6 million people in the population, who hear the rumor eventually. If 5% of them heard it on the first day, find the percentage of the population ... The number, N, of people who have heard a rumor spread by mass media by time, t, is given by N(t)=a(1−e−kt). There are 6 million people in the population, who hear the rumor eventually. If 5% of them heard it on the first day, find the percentage of the population ... and the 2) (B) is 0.5 b) 1283994.39504 c) 46.244 The First is -7.901. Just this in the moment. A 8.38g bullet is fired into a 4.60kg block suspended as in the figure. The bullet stops in the block, which rises 15.0cm above its initial position. Find initial speed of the bullet. discuss how the body manages to maintain the cellular components whithin the reference range in an a healthy individual What are peer reviewed articles/journals? what is a subponent well i think it will be 183g How do you go about solving for x in x/2 + x/3 > 5 ? AP Chemistry In a fit of boredom, a chemistry teaching assistant swallows a small piece of solid dry ice (CO2). The 8.9 g of dry ice are converted very quickly into gas at 311 K. Assuming the TA s stomach has a volume of 1.5 L, what pressure is exerted on his stomach? if a house square feet is 3,546 sq what is the perimeter of the house 5.Find the complete exact solution of sin x = -√3/2. 10. Solve cos 2x 3sin x cos 2x = 0 for the principal value(s) to two decimal places. 12. Solve tan^2x + tan x 1 = 0 for the principal value(s) to two decimal places. 19.Prove that tan^2a 1 + cos^2a... in DEF, DE=5 and M,D=55. find the nearest tenth please explain How % errors would affected the mass/moles of substances in lab? Compare and contrast the western world view with the deep ecology world view. Challenge one of the world views presented in Visualizing Environmental Science. Which world view is closest to your own? social studies What were the three major cities in the Islamic world?how did each develop and grow to become important? problem solving An average-size female spider and a large spider can lay a total of 2100 eggs at one time. the large spider lays 20times the number of eggs as the average-size spider. how many eggs does the average sized spider lay 5-(7/6x)> 7 Subtract both sides by 5 -(7/6x) > 7-5 -(7/6x) > 2 Multiple both sides by -(7/6x) < 2-(6/7) (1/x) < -12/7 x < -7/12 Correction 5-(7/6x)> 7 Subtract both sides by 5 -(7/6x) > 7-5 -(7/6x) > 2 Multiple both sides by -(7/6x) < 2-(6/7) (1/x) > -12/7 x > -7/12 college algebra The answer is 3. Since the y = -√(x+2) - 9 (-) means reflect by x-axis -9 means 9 units down +2 means shift left 2 units The sign makes a difference For shift if its negative it shift right and positive left. why is hydrogen gas placed in the 1A column of the periodic table? Jack and Jill wash a car at the same rate . Working together it takes them an hour and 10 minutes to wash a car. One day jack started washing a car by himself at 2:30 p m. When he was half way done , Jill joined him . What time did they finish washing the car? - Juwan and Jake both have money in their pockets. If Juwan gave Jake 15 cent then they both would have the same amount of money. However, if Jake gave Juwan 15 cent , then Juwan would have 5 times as much money as Jake would have. How much money do Juwan and Jake have togethe... Wat is the number sequence for 16 18 20 22 24 26 28 30 32 34 36 38 40 42 i got b. and got it wrong The scatter plot shows the study times and test scores for a number of students. How long did the person who scored 81 study? A. 50 minutes B. 81 minutes C. 16 minutes D. 100 minutes helppppppp!!!!! Can someone help me with this math problem? 14 divided by 2 1/3 on the 1st problem i dont understand how x is being added when the sign is subtraction d+3(d-4)=20 please explain x-(12-x)=38 please explain A college bookstore ordered six boxes of red pens. The store sold 32 red pens last week and 35 red pens this week. Five pens were left on the shelf. How many pens were in each box? draw a box-and-whisker plot for the following set of data. 62,76,41,87,60,42,47,69,65. Yolanda is 180 centimeters tall. How many meters tall is that? HOW MANY GRAMS OF ice at 0oc can be melted into water at 0oc by the additon of 75.0 of heat ? hfus = 6.01 kj/mo angular momentum a 8.20 kg particle P has a position vector of magnitude 6.90 m and angle θ1 = 42.0° and a velocity vector of magnitude 6.70 m/s and angle θ2 = 25.0°. Force , of magnitude 8.90 N and angle θ3 = 28.0° acts on P. All three vectors lie in the xy plane. A... the volume of a gas is 250 mL at 350 kPa pressure. what will the volume be when the pressure is reduced to 50 kPa, assuming the temperature remains constant? three coins, a half-dollar, quarter, nickel are tossed at the same time. to say that there is an outcome xyz means that the half-dollar, landed x, the quarter landed y, and the nickel landed z. the eight equally likely possibilites can be listed as follows: HHH HHT HTH HTT THH... what do cells do? Psyc Research Methods Yes, there is Applied physics two men are carrying a 12m uniform ladder on their shoulders. Man A is 1.0m from one end, and man B is 3.0m from the other end. If the beam has a mass of 200kg, determine the load supported by each Jessica's teacher has asked her to bring her 0.40 moles of sodium chloride to the laboratory. She should bring approximately _______ grams. Spermatozoa and Oocytes... I thnk it should be that.. For a particular mass of gas, the volume (Vcm^3) and pressure (Pcm) of mecury (Hg) are related by P=kV^n, where k and n are constants. Reduce the relation to a linear one and determine the values of k and n given that when v=110cm^3, P=50.3cm and when V=230cm^3, P=18.6cm For a particular mass of gas, the volume (Vcm^3) and pressure (Pcm) of mecury (Hg) are related by P=kV^n, where k and n are constants. Reduce the relation to a linear one and determine the values of k and n given that when v=110cm^3, P=50.3cm and when V=230cm^3, P=18.6cm A car starts from rest and accelerates uniformly to a speed of 50km/h in 18s. If the wheels of the car are 70cm in diameter. Calculate; a.The final angular velocity of the wheels b.angular A mass m=400g hangs from the rim of a wheel of radius r=15cm. When released from rest, the mass falls 2.0m in 6.5sec. Find the moment of inertia of the wheel applied physics you are standing on a 100m tall buiding above a street and your friend is standing vertically below the building. When you drop a stone, your friend simultaneously throws the identical stone as yours at a speed of 50m/s in the same straight line. Calculate the time taken for t... A grid shows the positions of a subway stop and your house. The subway stop is located at (7, -6) and your house is located at (-9, 5). What is the distance, to the nearest unit, between your house and the subway stop a girl lifts a 160-n load a height of 1 m in a time of 0.5 s. what power does the girl produce applied math Sorry Orbital speed applied math if the mean radius of the moon is 1.73x10^6m and its mass is 7.35x10^22kg.determin the average speed? applied chemistry a 69.3g sample of oxalic ac acid, H2C2O4, was dissolved in 1.000L of solution. How would yo prepare 1.00L of 0.150M H2C2O4 from this solution? applied chemistry M = 0.0285 mols/0.050 L M = 0.57 mols/L thanks i now know the formular bt hey how would u define Molarity and in what part of living does it apply? Am trying to understand it. Pages: 1 | 2 | 3 | 4 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=JOSE","timestamp":"2014-04-20T06:50:19Z","content_type":null,"content_length":"27234","record_id":"<urn:uuid:98c72620-e682-40fe-9767-d7192380dd9a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Degrees of Unsolvability, volume 55 of Annals of Mathematical Studies "... The set A is low for Martin-Lof random if each random set is already random relative to A. A is K-trivial if the prefix complexity K of each initial segment of A is minimal, namely K(n)+O(1). We show that these classes coincide. This implies answers to questions of Ambos-Spies and Kucera [2 ..." Cited by 79 (21 self) Add to MetaCart The set A is low for Martin-Lof random if each random set is already random relative to A. A is K-trivial if the prefix complexity K of each initial segment of A is minimal, namely K(n)+O(1). We show that these classes coincide. This implies answers to questions of Ambos-Spies and Kucera [2], showing that each low for Martin-Lof random set is # 2 . Our class induces a natural intermediate # 3 ideal in the r.e. Turing degrees (which generates the whole class under downward closure). Answering - Proceedings of the IMS workshop on computational prospects of infinity , 2008 "... Four classes of sets have been introduced independently by various researchers: low for K, low for ML-randomness, basis for ML-randomness and K-trivial. They are all equal. This survey serves as an introduction to these coincidence results, obtained in [24] and [10]. The focus is on providing backdo ..." Cited by 5 (2 self) Add to MetaCart Four classes of sets have been introduced independently by various researchers: low for K, low for ML-randomness, basis for ML-randomness and K-trivial. They are all equal. This survey serves as an introduction to these coincidence results, obtained in [24] and [10]. The focus is on providing backdoor access to the proofs. 1. Outline of the results All sets will be subsets of N unless otherwise stated. K(x) denotes the prefix free complexity of a string x. A set A is K-trivial if, within a constant, each initial segment of A has minimal prefix free complexity. That is, there is c ∈ N such that ∀n K(A ↾ n) ≤ K(0 n) + c. This class was introduced by Chaitin [5] and further studied by Solovay (unpublished). Note that the particular effective epresentation of a number n by a string (unary here) is irrelevant, since up to a constant K(n) is independent from the representation. A is low for Martin-Löf randomness if each Martin-Löf random set is already Martin-Löf random relative to A. This class was defined in Zambella [28], and studied by Kučera and Terwijn [17]. In this survey we will see that the two classes are equivalent [24]. Further concepts have been introduced: to be a basis for ML-randomness (Kučera [16]), and to be low for K (Muchnik jr, in a seminar at Moscow State, 1999). They will also be eliminated, by showing equivalence with K-triviality. All - J. Symbolic Logic "... Abstract. We study inversions of the jump operator on Π0 1 classes, combined with certain basis theorems. These jump inversions have implications for the study of the jump operator on the random degrees—for various notions of randomness. For example, we characterize the jumps of the weakly 2-random ..." Cited by 1 (1 self) Add to MetaCart Abstract. We study inversions of the jump operator on Π0 1 classes, combined with certain basis theorems. These jump inversions have implications for the study of the jump operator on the random degrees—for various notions of randomness. For example, we characterize the jumps of the weakly 2-random sets which are not 2-random, and the jumps of the weakly 1-random relative to 0 ′ sets which are not 2-random. Both of the classes coincide with the degrees above 0 ′ which are not 0 ′-dominated. A further application is the complete solution of [Nie09, Problem 3.6.9]: one direction of van Lambalgen’s theorem holds for weak 2-randomness, while the other fails. Finally we discuss various techniques for coding information into incomplete randoms. Using these techniques we give a negative answer to [Nie09, Problem 8.2.14]: not all weakly 2-random sets are array computable. In fact, given any oracle X, there is a weakly 2-random which is not array computable relative to X. This contrasts with the fact that all 2-random sets are array computable. 1. , 2011 "... We answer a question of Jockusch by showing that the measure of the Turing degrees which satisfy the cupping property is 0. In fact, every 2-random degree has a strong minimal cover, and so fails to satisfy the cupping property. ..." Add to MetaCart We answer a question of Jockusch by showing that the measure of the Turing degrees which satisfy the cupping property is 0. In fact, every 2-random degree has a strong minimal cover, and so fails to satisfy the cupping property.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3557906","timestamp":"2014-04-24T07:11:22Z","content_type":null,"content_length":"20371","record_id":"<urn:uuid:c7036dcd-7b65-46b5-b571-fa7b50e40b12>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Los Altos ACT Tutor Find a Los Altos ACT Tutor ...I'm a patient tutor with a positive, collaborative approach to building mathematical skills for algebra, pre-calculus, calculus (single variable) and advanced calculus (multi-variable). Pre-calculus skills are very valuable for success on the mathematics section of the SAT exam and the SAT Math... 22 Subjects: including ACT Math, calculus, statistics, geometry ...I have worked with many universities such as M.I.T., UCONN and the University of Colorado to strengthen my professional skills for my students. I also encourage honest and active participation from students so that I may reflect and revise on the feedback that my students share. My greatest hon... 13 Subjects: including ACT Math, calculus, statistics, geometry ...The subject matter of linear algebra was studied when I was in college and then used throughout my graduate studies and career as a research scientist. In general, I teach students how to solve problems, and then lead students through understanding why the subject is introduced and what is the c... 15 Subjects: including ACT Math, calculus, statistics, physics ...I took symbolic logic in college and received an A. I also had other philosophy classes that taught logic and feel comfortable teaching the subject. I tutored for symbolic logic, via the introduction to advanced math class that was a requirement for the math major. 35 Subjects: including ACT Math, reading, calculus, statistics ...Whether you are a high school student seeking advice on this week’s essay, or a graduate student looking to polish your thesis, we can work together to ensure that your writing is expressive, concise, and technically accurate. I have helped clients ranging from beginning writers to university pr... 35 Subjects: including ACT Math, Spanish, reading, English
{"url":"http://www.purplemath.com/los_altos_act_tutors.php","timestamp":"2014-04-18T06:00:10Z","content_type":null,"content_length":"23705","record_id":"<urn:uuid:9d733cad-3d93-48ea-9bd8-a866817ff02e>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimum-energy broadcast in all-wireless networks: NP-completeness and distribution issues Results 1 - 10 of 101 , 2003 "... We propose a micro-payment scheme for multi-hop cellular networks that encourages collaboration in packet forwarding by letting users benefit from relaying others' packets. At the same time as proposing mechanisms for detecting and rewarding collaboration, we introduce appropriate mechanisms for ..." Cited by 98 (8 self) Add to MetaCart We propose a micro-payment scheme for multi-hop cellular networks that encourages collaboration in packet forwarding by letting users benefit from relaying others' packets. At the same time as proposing mechanisms for detecting and rewarding collaboration, we introduce appropriate mechanisms for detecting and punishing various forms of abuse. We show that the resulting scheme -- which is exceptionally lightweight -- makes collaboration rational and cheating undesirable. - IEEE Trans. Commun , 2005 "... Abstract — The minimum energy required to transmit a bit of information through a network characterizes the most economical way to communicate in a network. In this paper, we show that under a layered model of wireless networks, the minimum energyper-bit for multicasting in a mobile ad hoc network c ..." Cited by 87 (2 self) Add to MetaCart Abstract — The minimum energy required to transmit a bit of information through a network characterizes the most economical way to communicate in a network. In this paper, we show that under a layered model of wireless networks, the minimum energyper-bit for multicasting in a mobile ad hoc network can be found by a linear program; the minimum energy-per-bit can be attained by performing network coding. Compared with conventional routing solutions, network coding not only promises a potentially lower energy-per-bit, but also enables the optimal solution to be found in polynomial time, in sharp contrast with the NPhardness of constructing the minimum-energy multicast tree as the optimal routing solution. We further show that the minimum energy multicast formulation is equivalent to a cost minimization with linear edge-based pricing, where the edge prices are the energy-per-bits of the corresponding physical broadcast links. This paper also investigates minimum energy multicasting with routing. Due to the linearity of the pricing scheme, the minimum energy-per-bit for routing is achievable by using a single distribution tree. A characterization of the admissible rate region for routing with a single tree is presented. The minimum energy-per-bit for multicasting with routing is found by an integer linear program. We show that the relaxation of this integer linear program, studied earlier in the Steiner tree literature, can now be interpreted as the optimization for minimum energy multicasting with network coding. In short, this paper presents a unifying study of minimum energy multicasting with network coding and routing. Index Terms — Network coding, routing, multicast, Steiner tree, wireless ad hoc networks, energy efficiency, mobility. - ACM Wireless Networks , 2005 "... supported by NSF CCR-0311174. Abstract — Topology control has been well studied in wireless ad hoc networks. However, only a few topology control methods take into account the low interference as a goal of the methods. Some researchers tried to reduce the interference by lowering node energy consump ..." Cited by 56 (0 self) Add to MetaCart supported by NSF CCR-0311174. Abstract — Topology control has been well studied in wireless ad hoc networks. However, only a few topology control methods take into account the low interference as a goal of the methods. Some researchers tried to reduce the interference by lowering node energy consumption (i.e. by reducing the transmission power) or by devising low degree topology controls, but none of those protocols can guarantee low interference. Recently, Burkhart et al. [?] proposed several methods to construct topologies whose maximum link interference is minimized while the topology is connected or is a spanner for Euclidean length. In this paper we give algorithms to construct a network topology for wireless ad hoc network such that the maximum (or average) link (or node) interference of the topology is either minimized or approximately minimized. Index Terms — Topology control, interference, wireless ad hoc networks. - in Proceedings of the 9th Annual International Conference on Mobile Computing and Networking , 2003 "... We develop algorithms for finding minimum energy disjoint paths in an all-wireless network, for both the node and linkdisjoint cases. Our major results include a novel polynomial time algorithm that optimally solves the minimum energy 2 link-disjoint paths problem, as well as a polynomial time algor ..." Cited by 48 (1 self) Add to MetaCart We develop algorithms for finding minimum energy disjoint paths in an all-wireless network, for both the node and linkdisjoint cases. Our major results include a novel polynomial time algorithm that optimally solves the minimum energy 2 link-disjoint paths problem, as well as a polynomial time algorithm for the minimum energy k node-disjoint paths problem. In addition, we present efficient heuristic algorithms for both problems. Our results show that link-disjoint paths consume substantially less energy than node-disjoint paths. We also found that the incremental energy of additional linkdisjoint paths is decreasing. This finding is somewhat surprising due to the fact that in general networks additional paths are typically longer than the shortest path. However, in a wireless network, additional paths can be obtained at lower energy due to the broadcast nature of the wireless medium. Finally, we discuss issues regarding distributed implementation and present distributed versions of the optimal centralized algorithms presented in the paper. - IN PROC. OF IEEE INFOCOM , 2006 "... We show that network coding allows to realize energy savings in a wireless ad-hoc network, when each node of the network is a source that wants to transmit information to all other nodes. Energy efficiency directly affects battery life and thus is a critical design parameter for wireless networks. W ..." Cited by 46 (9 self) Add to MetaCart We show that network coding allows to realize energy savings in a wireless ad-hoc network, when each node of the network is a source that wants to transmit information to all other nodes. Energy efficiency directly affects battery life and thus is a critical design parameter for wireless networks. We propose an implementable method for performing network coding in such a setting. We analyze theoretical cases in detail, and use the insights gained to propose a practical, fully distributed method for realistic wireless ad-hoc scenarios. We address practical issues such as setting the forwarding factor, managing generations, and impact of transmission range. We use theoretical analysis and packet level simulation. "... Traditional approaches to transmit information reliably over an error-prone network employ either Forward Error Correction (FEC) or retransmission techniques. In this paper we consider an application of network coding to increase the bandwidth efficiency of reliable broadcast in a wireless network ..." Cited by 34 (1 self) Add to MetaCart Traditional approaches to transmit information reliably over an error-prone network employ either Forward Error Correction (FEC) or retransmission techniques. In this paper we consider an application of network coding to increase the bandwidth efficiency of reliable broadcast in a wireless network. In particular, we propose two schemes which employ network coding to reduce the number of retransmissions as a result of packet losses. Our proposed schemes combine different lost packets from different receivers in such a way that multiple receivers are able to recover their lost packets with one transmission by the source. The advantages of the proposed schemes over the traditional wireless broadcast are shown through simulations and theoretical analysis. Specifically, we provide a few results on the retransmission overhead of the proposed schemes under different channel conditions. , 2006 "... Dissemination of common information through broadcasting is an integral part of wireless network operations such as query of interested events, resource discovery and code update. In this paper, we characterize the behavior of information dissemination in power-constrained wireless networks by defin ..." Cited by 33 (2 self) Add to MetaCart Dissemination of common information through broadcasting is an integral part of wireless network operations such as query of interested events, resource discovery and code update. In this paper, we characterize the behavior of information dissemination in power-constrained wireless networks by defining two quantities, i.e., broadcast capacity and information diffusion rate and derive fundamental limits in both random extended and dense networks. We find that using multihop relay, the rate of broadcasting continuous stream is Θ(log(n) − α 2) in extended networks; while direct single-hop broadcast is efficient for dense networks. Furthermore, regardless of the density, information can diffuse at constant speed, i.e., Θ(1) in both extended and dense networks. The theoretical bounds obtained and proof techniques are instrumental to the modeling and design of efficient wireless network protocols. - IEEE Journal on Selected Areas in Communications, Special Issue on Wireless Ad Hoc Networks (Part I , 2005 "... Abstract—Transmit power control is a prototypical example of a cross-layer design problem. The transmit power level affects signal quality and, thus, impacts the physical layer, determines the neighboring nodes that can hear the packet and, thus, the network layer affects interference which causes c ..." Cited by 32 (0 self) Add to MetaCart Abstract—Transmit power control is a prototypical example of a cross-layer design problem. The transmit power level affects signal quality and, thus, impacts the physical layer, determines the neighboring nodes that can hear the packet and, thus, the network layer affects interference which causes congestion and, thus, affects the transport layer. It is also key to several performance measures such as throughput, delay, and energy consumption. The challenge is to determine where in the architecture the power control problem is to be situated, to determine the appropriate power level by studying its impact on several performance issues, to provide a solution which deals properly with the multiple effects of transmit power control, and finally, to provide a software architecture for realizing the solution. We distill some basic principles on power control, which inform the subsequent design process. We then detail the design of a sequence of increasingly complex protocols, which address the multidimensional ramifications of the power control problem. Many of these protocols have been implemented, and may be the only implementations for power control in a real system. It is hoped that the approach in this paper may also be of use in other topical problems in cross-layer design. Index Terms—Design principles, Linux implementation, power control. - In Proc. Workshop on Network Coding, Theory, and Applications , 2005 "... Abstract — Energy efficiency, i.e., the amount of battery energy consumed to transmit bits across a wireless link, is a critical design parameter for wireless ad-hoc networks. We examine the problem of broadcasting information to all nodes in an ad-hoc network, when a large percentage of the nodes a ..." Cited by 32 (6 self) Add to MetaCart Abstract — Energy efficiency, i.e., the amount of battery energy consumed to transmit bits across a wireless link, is a critical design parameter for wireless ad-hoc networks. We examine the problem of broadcasting information to all nodes in an ad-hoc network, when a large percentage of the nodes act as sources. We theoretically quantify the energy savings that network coding can offer for the cases of two regular topologies. We then propose low-complexity distributed algorithms, and demonstrate through simulation that for random networks, network coding can in fact offer significant benefits in terms of energy consumption. I. - Proc. 2nd Int. Conf. on Wireless on Demand Network Systems and Service (WONS , 2005 "... Abstract — Wireless ad hoc radio networks have gained a lot of attention in recent years. We consider geometric networks, where nodes are located in a euclidean plane. We assume that each node has a variable transmission range and can learn the distance to the closest neighbor. We also assume that n ..." Cited by 29 (0 self) Add to MetaCart Abstract — Wireless ad hoc radio networks have gained a lot of attention in recent years. We consider geometric networks, where nodes are located in a euclidean plane. We assume that each node has a variable transmission range and can learn the distance to the closest neighbor. We also assume that nodes have a special collision detection (CD) capability so that a transmitting node can detect a collision within its transmission range. We study the basic communication problem of collecting data from all nodes called convergecast. Recently, there appeared many new applications such as real-time multimedia, battlefield communications and rescue operations that impose stringent delay requirements on the convergecast time. We measure the latency of convergecast, that is the number of time steps needed to collect the data in any n-node network. We propose a very simple randomized distributed algorithm that has the expected running time O(log n). We also show that this bound is tight and any algorithm needs Ω(log n) time steps while performing convergecast in an arbitrary network. One of the most important problems in wireless ad hoc networks is to minimize the energy consumption, which maximizes the network lifetime. We study the trade-off between the energy and the latency of convergecast. We show that our algorithm consumes at most O(n log n) times the minimum energy. We also demonstrate that for a line topology the minimum energy convergecast takes n − 1 time steps while any algorithm performing convergecast within O(log n) time steps requires Ω(n) times the minimum energy.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=100151","timestamp":"2014-04-16T09:40:22Z","content_type":null,"content_length":"41787","record_id":"<urn:uuid:83d29f3f-acea-462a-84f8-81df2f74f178>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
K-Weil cohomology theories? up vote 10 down vote favorite I don't know very much about this stuff, so I'm a bit afraid that I'm being naive or stupid, and I apologize if I am --- but it seems to me that Weil cohomology theories, or at least the standard examples thereof, are essentially, or are supposed to be, generalizations or algebraic versions of singular cohomology. If I am incorrect in this assessment, please do correct me. Meanwhile, we have other interesting cohomology theories in topology: for example (topological) K-theory, elliptic cohomology, complex cobordism, .... Correspondingly, then, are there notions of "K-Weil cohomology theory" or "elliptic Weil cohomology theory", etc.? Is it possible? ag.algebraic-geometry at.algebraic-topology 1 Yes, it is quite possible. Keywords for this kind of technique in homotopy theory include motivic (stable) homotopy theory and $\mathbb{A}^1$-homotopy theory. Algebraic K-theory makes its appearance as an analogue of topological K-theory. There is also a Landweber exact functor theorem which can produce some elliptic cohomology theories (but not a universal one). If you come up with something interesting about motivic elliptic cohomology theories then you should write a paper about it. – Tyler Lawson Jun 5 '10 at 1:52 Why is there no universal motivic elliptic cohomology theory? Do you mean there can't be or that such a thing might exist but hasn't been found yet? – Chris Brav Jun 5 '10 at 2:08 There is a motivic version of the Landweber exact functor theorem (the version I know is due to Naumann-Spitzweck-Østvaer) and this suffices to construct many elliptic cohomology theories, including a "universal elliptic cohomology" away from the primes 2 and 3 in the same manner that it is definable in the old sense. However, at the primes 2 and 3 the fact that the LEFT is only functorial on the homotopy category means together with the fact that certain elliptic curves have 2- and 3-primary automorphism groups means a universal theory is difficult, motivic or not. Motivic is always harder. – Tyler Lawson Jun 5 '10 at 4:21 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged ag.algebraic-geometry at.algebraic-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/27121/k-weil-cohomology-theories","timestamp":"2014-04-19T12:12:39Z","content_type":null,"content_length":"50046","record_id":"<urn:uuid:f351ff92-de5c-41ae-a569-bf796e4fc82b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Westfield, NJ Algebra 2 Tutor Find a Westfield, NJ Algebra 2 Tutor ...I excel in standardized tests and would be happy to coach you to improve above and beyond your expectations. My strong suits are found in SATs, GREs, and MCATs. I am an award-winning writer, and am happy to guide you through your personal statements for college, graduate school, and beyond. 44 Subjects: including algebra 2, reading, English, writing ...Consequently, I am able to adapt to any personality. I love teaching. I have been a teacher/tutor for 9 years now. 13 Subjects: including algebra 2, English, reading, writing ...Additionally, I have excelled in various extracurricular pursuits, taking sixth place in the state in Chemistry League, fifth place in the state in Biology league, third place in the state in the Merck State Science Day Biology exam, and achieving finalist status in the New York Federal Reserve c... 19 Subjects: including algebra 2, chemistry, calculus, physics ...As a major in Physics from the University, I was exposed to many important areas of algebra. I have taught the subject in high school, and also tutored many students in the college. I am familiar with the requirements in college, GED and SAT exams. 9 Subjects: including algebra 2, calculus, physics, geometry ...I graduated from Brooklyn Tech High School (a specialized NYC high school) with high honors and an advanced AP diploma. I took 7 APs during my time there, so I've always been trying to challenge myself academically.I graduated cum laude with a Bachelor of Science in Biochemistry. I wrote for my University's Newspaper. 37 Subjects: including algebra 2, English, chemistry, reading
{"url":"http://www.purplemath.com/westfield_nj_algebra_2_tutors.php","timestamp":"2014-04-18T15:51:02Z","content_type":null,"content_length":"23966","record_id":"<urn:uuid:70227b5d-ecfe-4012-94b5-af5d89798c1c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Balancing robot for dummies - Arduino Forum - powerfull (110 oz.in) - solid design (metal gears) - fast (350RPM, don't settle for less than 200RPM) - integrated, efficient encoders (464 counts per wheel rotation) - acceptable price (40 bucks, including encoder) - noticiable backlash (as with all spur gearmotors) Planetary gearboxes are supposed to have no backlash I have still to identify a planetary geared equivalent motor (speed, power, encoder) any links ???
{"url":"http://forum.arduino.cc/index.php?topic=8871.msg73989","timestamp":"2014-04-17T06:58:00Z","content_type":null,"content_length":"120058","record_id":"<urn:uuid:d7ce8d8e-01b9-40e2-807d-47c134bf04a9>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
The Connection Between Prime Numbers and Music Prime numbers––those divisible only by themselves and one––have confounded mathematicians for centuries. Because mathematicians rely on patterns, the fact that primes occur at seemingly random intervals (2, 3, 5, 7, 11, 13…) makes them the Holy Grail of math. Many who studied the numbers burned out in their primes, fell into depressions, or attempted suicide. The Music of the Primes, a companion to the BBC documentary The Story of Math, delves into the history of mathematicians’ struggle to understand the primes. In 300 B.C., Greek mathematician Euclid proved that there are an infinite number of primes, but could not find a way to predict when they would show up in a sequence. At the beginning of the 19^th century, German scientist Carl Friedrich Gauss made a major breakthrough: instead of asking which numbers are prime, he asked how many are. Using a graph like the one below, Gauss predicted the rate at which primes thin out as numbers become larger. Program host Marcus du Sautoy tells us that Gauss “heard the dominant theme of the music of the primes, but he couldn’t prove it.” But we must wait to understand this the meaning of this statement. About fifty years later, mathematician Bernhard Riemann took Gauss’ predictions one step further. Using the physics of waves of musical tones as a guide, Riemann came up with a way to give order to the distribution of the primes. For an explanation of the hypothesis, see the clip below from the documentary and read this article from Marcus du Sautoy. Riemann’s Hypothesis was a revelation for mathematicians, but it remained just that: a hypothesis. He couldn’t prove it. Nevertheless, his parade of zeroes was a major revelation: Reimann had connected two disparate realms of mathematics––zeros and primes. When Reimann died of tuberculosis at 39, his housekeeper burned all his papers, so we’ll never know how close he was to a proof. His hypothesis became one of the greatest unsolved mathematical mysteries. In the 1940s, British mathematician and computer pioneer Alan Turing approached the problem in a new way. He tried to prove Reimann’s Hypothesis false by building a machine that would search for rogue zeros off the line. After World War II, Turing’s machine showed that the first 1,104 zeros were on the line, but then the machine broke down. So did Turing’s life; he was persecuted for his homosexuality and ultimately committed suicide. A major breakthrough occurred in the 1970s at Princeton. Driving a red sports car, blaring the punk rock song “American Idiot,” Du Sautoy arrives at the university and interviews Hugh Montgomery, who had noticed that the scattered zeroes on the line seemed to repel one another. He consulted physicist Freeman Dyson, who recognized that the pattern was strangely similar to a matrix used to model the nucleus of Uranium. Here, the meaning of music of the primes finally begins to emerge. The energy levels of the nucleus of an atom are like musical notes, Du Sautoy says, and then plays several on his trumpet to illustrate. “As I blow more energy into it, the notes jump up by degrees,” he says. The energy levels of the nucleus space out, just as zeros on the line do, he explains. “The behavior of the fundamental building blocks of matter seemed to correspond to the fundamental behavior of the building blocks of maths.” This is the crescendo of the documentary (pun intended). Despite this contribution, the riddle of the primes persists, and today mathematicians continue to devote vast amounts of time and computational power to the Riemann Hypothesis. Most believe it to be true, but no one has yet found a proof. A financier has offered $1 million to anyone who can crack the hypothesis. Du Sautoy believes that who ever does will make the primes sing. The Music of the Primes is available on DVD here. The Music of the Primes [Plus Magazine] The Music of the Primes [BBC]
{"url":"http://www.sciencefriday.com/blogs/08/02/2010/the-connection-between-prime-numbers-and-music.html?interest=1","timestamp":"2014-04-17T13:06:27Z","content_type":null,"content_length":"56092","record_id":"<urn:uuid:d8a7b80e-c67e-404e-bdf2-665b1f391a65>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Newest &#39;duality boolean-algebras&#39; Questions I am seeking a deeper understanding of the representation of set-based objects in terms of Boolean algebras. Let $\wp(A)$ be the set of subsets of a set $A$. A relation $R \subseteq A \times B$ ... A Boolean algebra may, or may not, be complete (i.e, any set of elements has a sup and an inf) or atomic (i.e., every element is a sup of some set of atoms). Boolean Algebras that are complete as ...
{"url":"http://mathoverflow.net/questions/tagged/duality+boolean-algebras","timestamp":"2014-04-18T05:40:31Z","content_type":null,"content_length":"34268","record_id":"<urn:uuid:0e7d84dd-8c19-453d-851a-6aef8aa31145>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Improved algorithms for coloring random graphs Subramanian, CR (1994) Improved algorithms for coloring random graphs. In: 5th International Symposium, ISAAC '94. Algorithms and Computation, 25-27 Aug. 1994, Beijing, China, pp. 460-468. Full text not available from this repository. Consider the problem of k-coloring random k-colorable graphs. The random graphs are drawn from the G(n,p,k) model. In this model, an adversary splits the vertices into k color classes each of size Ω (n), and then for each pair u,ν of vertices belonging to different color classes, he includes the edge {u,ν} with probability p. We give algorithms for coloring random graphs from G(n,p,k) with p& ges;$n^{-1+&epsiv}$; where &epsiv;&ges;1/3 is any positive constant. The failure probability of our algorithms is exponentially low, i.e. $e^{-nδ}$ for some positive constant δ. This improves the previously known algorithms which have only polynomially low failure probability. We also show how the algorithms given by Blum and Spencer can be modified to have exponentially low failure probability provided some restrictions on edge probabilities are introduced. We introduce a technique for converting almost succeeding algorithms into algorithms with polynomial expected running time and which surely k-color the input random graph. Using these two results, we derive polynomial expected time algorithms for k-coloring random graphs from G(n,p,k) with p&ges;$n^{-1+&epsiv}$; where & epsiv; is any constant greater than 0.4. Our results improve the previously known results Actions (login required)
{"url":"http://eprints.iisc.ernet.in/11057/","timestamp":"2014-04-17T19:20:54Z","content_type":null,"content_length":"20267","record_id":"<urn:uuid:0dc9f7d4-0cfc-455d-96e0-8fe018d2854c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Estimation of Limited depandat variable model with MLE Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: Estimation of Limited depandat variable model with MLE From nadir virk <nadirvirk@gmail.com> To statalist@hsphsun2.harvard.edu Subject st: Estimation of Limited depandat variable model with MLE Date Mon, 23 May 2011 16:17:08 +0300 Hi All, I want to estimate the transaction cost measures for cost of selling and cost of buying in equity markets as is done in Lesmond, D., Ogden, J, Trzcinka, C.,1999, A new estimate of transaction costs, Review of financial studies,12,1113-1141. They have explained the procedure in Appendix B if somebody coouldn't understand what I am narrating here, They specified three regions in the distribution of y(i's) and x(i's) [its a single factor capm model]. They maximized the liklihood function with respect to four unknowns (alpha1,alpha2, market beta and sigma of normal distribution). The maximization is done for three regions simultaneously such that 1-where x's (independant variable) is greater than zero (region1) with unknown parametrs (alpha1, beta and sigma) 2-where x's are less than zero (region2) with parameters alpha2,beta and sigma 3- when x's are exactly equal to zero. They maximized these regions simultaneosly and estimated the unknown paramters. I can't figure it out how to specify it under tobit command, which fits the model over the whole distribution (if is possible then excuse me as I don't know how!). But in this case we need to estimate the alpha1 and alpha2 from region1 and region2 respectively and also the a constant sigma of error distribution. Any help will be highly appreciated. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2011-05/msg01191.html","timestamp":"2014-04-17T21:24:58Z","content_type":null,"content_length":"8375","record_id":"<urn:uuid:6e3b3385-0cce-45f0-9d8b-f45468d09414>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
Sinnott's proof of Washington's theorem, and generalisations Add to your list(s) Download to your calendar using vCal • Friday 08 February 2013, 14:00-15:00 • MR4. If you have a question about this talk, please contact Joanna Fawcett. In 1978 Washington proved that for any finite abelian extension k of the rationals, and any prime p, that if k(n) denotes the n-th layer of the cyclotomic Zp extension of k, then for all primes q different from p, the q-part of the ideal class group of k(n) stabilises as n tends to infinity. In 1987 Sinnott gave a beautiful proof of this theorem, which I shall discuss, and hopefully detail how one can generalise this proof to deduce results about Selmer groups of CM elliptic curves and ideal class groups over non-cyclotomic Zp extensions. This talk is part of the Junior Algebra/Logic/Number Theory seminar series. This talk is included in these lists: Note that ex-directory lists are not shown.
{"url":"http://www.talks.cam.ac.uk/talk/index/42732","timestamp":"2014-04-17T18:45:44Z","content_type":null,"content_length":"12225","record_id":"<urn:uuid:8c6aa7d0-c700-4e11-8241-d635a1caa074>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
• The lecture continues with a precise definition of all-pairs shortest paths problem: given a directed graph, find an NxN matrix (N = | V |), where each entry a ij is the shortest path from vertex i to vertex j. Original Signal - Transmitting Buzz • On the other hand, I did retain the "ij" spelling where the "j" is essentially silent: there is a danger here that the reader will pronounce the "j" in the character I've called "Zanijel". Archive 2010-05-01 • The 'ij' is a Dutch letter, merged from the 'i' and the 'j'. Nico Muhly • In the past there were typewriters with the 'ij' on one button. Nico Muhly • We have only one 'ij', 'ei' is a composition of the vocals 'e' and 'i'. Nico Muhly • Mr. Carpenter is director of strategic research at the Institute for Justice and author of "Disclosure Costs: Unintended Consequences of Campaign Finance Reform," available at www. ij.org. Neighbor Against Neighbor • Where i is a very large patch class; n is the very large number of patches of class i; j is the very large number of patches of all classes; p is the very large perimeter of patch ij; and a is the very large area of patch ij. trackback hughstimson.org » Blog Archive » Very Large Area-Weighted Mean Shape Index Formula • The only reason she won in Ohio is because she manipulated the voters ij Ohio by trashing Obama not her skills. NY Times slams Clinton's 'negativity' • I learned about this froma great activist named Phil Jakes Johnson of the firm Solvents & Petroleum when he spoke at a workshop of the CastleCoalition www. castlecoalition.org of the Institute for Justice www. ij.org. President Clinton's Contribution From Corporate Welfare Recipient Favored By Senator Clinton • I brought the Institute for Justice (www. ij.org) into the case and they helped St. Luke's Pentecostal Church's attorney. Urban Renewal Schemer Arrested Over Checks From Developer
{"url":"https://www.wordnik.com/words/ij","timestamp":"2014-04-23T22:41:38Z","content_type":null,"content_length":"33316","record_id":"<urn:uuid:4be7dee7-7349-4837-bd32-9fa2cebda2df>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving Kochen-Specker Theorem Using Projection Measurement and Positive Operator-Valued Measure Toh, Sing Poh (2008) Proving Kochen-Specker Theorem Using Projection Measurement and Positive Operator-Valued Measure. PhD thesis, Universiti Putra Malaysia. One of the main theorems on the impossibility of hidden variables in quantum mechanics is Kochen-Specker theorem (KS). This theorem says that any hidden variable theory that satisfies quantum mechanics must be contextual. More specifically, it asserts that, in Hilbert space of dimension ≥ 3, it is impossible to associate definite numerical values, 1 or 0, with every projection operator Pm, in such a way that, if a set of commuting Pm satisfies 1=ΣmP, the corresponding values will also satisfy . Since the first proof of Kochen and Specker using 117 vectors in R3, there were many attempts to reduce the number of vector either via conceiving ingenious models or extending the system being considered to higher dimension. By considering eight dimensional three qubits system, we found a state dependent proof that requires only five vectors. The state that we assign value of 1 is the ray that arises from intersection of two planes. The recent advancements show that the KS theorem proof can be extended to two dimensional quantum system through generalized measurement represented by positive operator-valued measured (POVM). In POVMs the number of available outcomes of a measurement may be higher than the dimensionality of the Hilbert space and N-outcome generalized measurement is represented by N-element POVM which consists of N positive semidefinite operators {}dE that sum to identity. Each pair of elements is not mutually orthogonal if the number of outcome of measurements is bigger than the dimensionality. In terms of POVM, Kochen-Specker theorem asserts that and could not be satisfied for . We developed a general model that enables us to generate different sizes of the POVM for the proof of the Kochen-Specker theorem. We show that the current simplest Nakamura model is in fact a special case of our model. W also provide another model which is as simple as the Nakamura’s but consists of different sets of POVM. Item Type: Thesis (PhD) Subject: Algorithms Subject: Mathematical analysis Chairman Supervisor: Associate Professor Hishamuddin Zainuddin, PhD Call Number: IPM 2008 3 Faculty or Institute: Institute for Mathematical Research ID Code: 5419 Deposited By: Rosmieza Mat Jusoh Deposited On: 09 Apr 2010 04:07 Last Modified: 27 May 2013 07:22 Repository Staff Only: item control page
{"url":"http://psasir.upm.edu.my/5419/","timestamp":"2014-04-19T02:09:50Z","content_type":null,"content_length":"33152","record_id":"<urn:uuid:fd7b5451-5ff3-4bbc-914b-608229e11989>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: can someone help me with these 3 proofs, I have attached the file.. it has to get done by today! and i really appreciate it:) medals rewarded! • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50ff009be4b0426c63680aed","timestamp":"2014-04-19T17:32:43Z","content_type":null,"content_length":"405423","record_id":"<urn:uuid:99b6314e-a052-4b2b-842d-e706bcf277bf>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
SI Units - metre How long would it take to walk around planet Earth? To answer this question we need to consider distance, or put another way the length of the path we need to take. There have been many units used for length throughout history and some of them are still in use today, but here we consider the SI unit of the metre, with the symbol m. How long would it take to walk around the Earth? Firstly, a quick comment on the unit name. Strictly speaking it is spelled metre, but in many English speaking countries meter is used instead and widely accepted. Now, on with the question... One of the earliest known units of length was the cubit. This was defined as the length of the arm from the elbow to the tip of the finger. This was further split into smaller units and even today we still use the "hand" to measure the height of horses. The obvious problem with the cubit is that it varies depending on the size of the person making the measurement. Other units where also based on arbitrary measurements with a notable example being King Henry I (1068 - 1135) decreeing that a yard was the distance from the tip of his nose to the end of his outstretched thumb. What's perhaps surprising is that the SI unit of the metre was also initially based on an arbitrary measurement, namely the circumference of the Earth. In 1791 the French Academy of Sciences decided to adopt a new unit of measurement, called the metre, based on 1/10,000,000th of the distance from Earth's equator to the North Pole. This was certainly a step forward from nose-to-thumb length, but it also had its problems such as the fact that the Earth constantly changes shape a little due to gravitational forces and other factors. For this reason the metre was slightly changed in length and based on something much more stable. It is now defined as "the length of the path traveled by light in a vacuum during a time interval of 1/299,792,458th of a second". Not quite as handy as the cubit, but a lot more precise. The metre, then, is the SI unit of measurement. It is further split into 100 centimetres, or 1000 millimetres. For long distances it sometimes makes sense to talk in thousands of metres. One thousand metres is one kilometre, with the units km. For comparison, 1 metre is close to 3.28 feet (about a yard) and 1 kilometre is about 0.62 miles. Now back to our original question of how long it would take to walk around the Earth. At its equator the Earth's circumference is 40,075 km (24,901 miles). The average walking speed, depending on age , is about 5 kilometres per hour (km/h), so in 12 hours a distance of 5 km/h x 12 hours = 60 km is covered. So to work out how long it would take to walk around the Earth at 5 km/h for 12 hours a day we simply divide the Earth's circumference by the distance walked in a day: 40,075 km / 60 km = 668 days This is about 1 year and 10 months. But what if we could drive around the Earth at 100 km/h (about 60 miles per hour)? Now we cover 100 km/h x 12 hours = 1200 km in a day, so: 40,075 km / 1200 km = 33.4 days. In other words, even if a car is used it would still take about a month to travel all of the way around the Earth. With the Internet and other forms of global communication, as well as fast jet transport, we tend to think of the world as quite a small place, but when we think about walking or driving around it we soon realize that it's actually quite big. Finally, for most of human history we have thought of distance as being absolute - one mile is one mile and one metre is one metre. Einstein, in his theory of relativity, demonstrated that in reality distances shrink when we move at very high speeds. If we travel at about 90% of the speed of light distances, in other words lengths, shrink by about 50%. You can read more about this, together with worked examples, by clicking here or on the picture of Einstein:
{"url":"http://www.si-units-explained.info/length/","timestamp":"2014-04-23T06:38:14Z","content_type":null,"content_length":"17901","record_id":"<urn:uuid:ac1ab6d1-7694-408a-819b-2904b474a704>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the zeros and intercepts---help / check answer? September 15th 2008, 09:02 PM #16 Junior Member Sep 2008 Sorry, I took a break and did another section of math which I've completed for the most part. Okay, I am still stuck on this question. Let's plug in "2" for C...how would I proceed in solving it? I cannot add 2x + 2 together...I'm really trying to remember how to do this... and how would I plot it? Thanks in advance. What are you trying to solve for? Wait...I'm not sure. Am I solving for anything? I think I need to solve for x and y if I want to plot the correct corresponding lines to the number I put in for C? Ahh I'm so confused... If you put in 2 for C you have $y = \frac{2x}{3} -\frac{2}{3}$ Are they asking you to find the x and y intercepts of this equation? Thats the only thing I can see the problem is asking to solve for? I guess so? If C is an arbitrary real constant, an equation such as 2x - 3y = C is said to define a family of lines. Choose four different values of C and plot the corresponding lines on the same coordinate axes. What is true about hte lines that are members of this family? Do you need the intercepts to plot those lines? If you do I am not sure how to solve them. Thanks again. It would help. Plot them and the answer will be obvious September 15th 2008, 09:08 PM #17 September 15th 2008, 09:11 PM #18 Junior Member Sep 2008 September 15th 2008, 09:15 PM #19 September 15th 2008, 09:22 PM #20 Junior Member Sep 2008 September 15th 2008, 10:10 PM #21
{"url":"http://mathhelpforum.com/pre-calculus/49208-finding-zeros-intercepts-help-check-answer-2.html","timestamp":"2014-04-17T23:07:04Z","content_type":null,"content_length":"46829","record_id":"<urn:uuid:90e08f17-a9dd-42e1-9677-f8fb5f81bf0d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
(radiation) monpoles vs. dipoles In the chapter on radiation (Chapter 11), Griffiths notes that an electric monopole does not radiate, but also that a point charge of electric dipole moment [tex]\mathbf{p} (t) = q \mathbf{d} (t) [/ tex] (where [tex] \mathbf{d} (t) [/tex] is the instantaneous coordinate of the charge with respect to a fixed origin ) radiates with power [tex] P = \mu_0 q^2 a^2/(6 \pi c) [/tex], where [tex] \mathbf{a}(t) = \ddot{\mathbf{p}} (t) [/tex]. By "monopole," does he simply mean a point charge that doesn't move?
{"url":"http://www.physicsforums.com/showthread.php?p=1542217","timestamp":"2014-04-16T19:04:03Z","content_type":null,"content_length":"22339","record_id":"<urn:uuid:d32623b4-200c-462a-af3f-ae70fe9451fe>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
CGTalk - Subobjects' transforms exposed to maxscript?! 09-25-2005, 06:48 PM Hi there guys, I'm trying to solve a big problem right now! Are Subobject elements (faces, vertexes etc...) actually exposed to maxscript transform wise? Ok, I have a Editable Poly object, I have a Polygon selected and I need to ROTATE that polygon along it's local Z axis... I can extract all the maxtrix3 data necessary to make the rotation by any angle... HOWEVER... How the hell do I say to maxscript that I wanna ROTATE that polygon?! Rotate() works only with nodes RotateZ() only changes the Quat values of the rotation matrix :( How do I do that?! Thanks in advance for any suggestions, help, answers... - loocas
{"url":"http://forums.cgsociety.org/archive/index.php/t-280296.html","timestamp":"2014-04-16T16:41:58Z","content_type":null,"content_length":"11961","record_id":"<urn:uuid:0a30e367-b4f0-4a4f-ae41-547daace6646>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
A ball of mass 0.120 kg is dropped from rest from a height of 1.25 m. It rebounds from the floor to reach a height of 0.600 m. What impulse was given to the ball by the floor? Number of results: 101,679 physics 111 A ball of mass 0.120 kg is dropped from rest from a height of 1.25 m. It rebounds from the floor to reach a height of 0.600 m. What impulse was given to the ball by the floor? answer in kg·m/s Wednesday, October 27, 2010 at 2:00pm by Blake A ball of mass 0.120 kg is dropped from rest from a height of 1.25 m. It rebounds from the floor to reach a height of 0.600 m. What impulse was given to the ball by the floor? Monday, July 7, 2008 at 5:14pm by Elisa A ball of mass 0.120 kg is dropped from rest from a height of 1.25 m. It rebounds from the floor to reach a height of 0.600 m. What impulse was given to the ball by the floor? Friday, October 14, 2011 at 5:42pm by Anonymous 1. A centrifuge in a medical laboratory rotates at an angular speed of 3650 rev/min. When switched off, it rotates through 50.0 revolutions before coming to rest. Find the constant angular acceleration of the centrifuge. 2. A ball of mass 0.120 kg is dropped from rest from a ... Sunday, July 6, 2008 at 11:03pm by Elisa A 0.135 kg ball is dropped from rest. If the magnitude of the ball's momentum is 0.730 kg·m/s just before it lands on the ground, from what height was it dropped? Thursday, April 22, 2010 at 2:32pm by Physics A 0.135 kg ball is dropped from rest. If the magnitude of the ball's momentum is 0.730 kg·m/s just before it lands on the ground, from what height was it dropped? Thursday, April 22, 2010 at 2:32pm by Physics the height that a ball bounces varies directly as the height from which it is dropped. A ball dropped from a height of 20cm bounces 16cm a) Find the height of the bounce if the ball is dropped from a height of 280cm b) Find from waht height the ball was dropped if it bounces ... Tuesday, January 13, 2009 at 8:03pm by lisa momemtum ? A ball of mass 0.150 kg is dropped from rest to a height of 0.125 m. It rebounds from the floor to reach a height of 0.960m. What impulse was given by the floor? please help/explain. Thanks! Wednesday, June 16, 2010 at 12:39pm by jake how do i work out the following word problem. a particular ball retains half of its heght on each bounce when dropped from a height of 80 m. give the height of the ball on the fifth bounce after being dropped?(will the ball comes to rest?) Saturday, May 19, 2012 at 5:54am by ladane A 5.80 kg ball is dropped from a height of 14.5 m above one end of a uniform bar that pivots at its center. The bar has mass 9.00 kg and is 6.40 m in length. At the other end of the bar sits another 5.30 kg ball, unattached to the bar. The dropped ball sticks to the bar after ... Thursday, April 5, 2012 at 10:35am by Joe a ball is dropped from a tower of height 5 m. it accelerates uniformly at 10m/s .if mass of the ball is 2 kg ,find the momentum transferred by a ball to the ground on striking. Wednesday, October 9, 2013 at 1:58am by TINA After falling from rest from a height of 35 m, a 0.49 kg ball rebounds upward, reaching a height of 25 m. If the contact between ball and ground lasted 1.7 ms, what average force was exerted on the Thursday, September 20, 2012 at 11:35pm by Freedo when a ball is dropped, it bounces back up 4/5 of the height from the height it was dropped.If the ball dropped from a height of 200cm, how high does it go after bouncing twice. Saturday, February 9, 2013 at 4:36pm by leanna A superball is dropped from rest from a height of 2.0m. It bounces repeatedly from the floor, as superballs are prone to do. After each bounce the ball dissipates some energy, so eventually it comes to rest. The following pattern is observed: After the 1st bounce, the ball ... Friday, January 31, 2014 at 12:56am by Chey A superball is dropped from rest from a height of 2.0m. It bounces repeatedly from the floor, as superballs are prone to do. After each bounce the ball dissipates some energy, so eventually it comes to rest. The following pattern is observed: After the 1st bounce, the ball ... Friday, January 31, 2014 at 9:46am by Cheylynne the coefficient of restitution between the ball and the floor is 0.60 If the ball is drop from rest at the height of 6.6 m from the floor. Find a) what is the maximum height will the ball attain after the first bounce. b) how much kinetic energy is lost duirng the impact if ... Monday, February 21, 2011 at 7:25am by robert A ball having a mass of 0.20 kilograms is placed at a height of 3.25 meters. If it is dropped from this height, what will be the kinetic energy of the ball when it reaches 1.5 meters above the Thursday, February 21, 2013 at 2:26pm by kavita a ball has a mass of 0.5kg dropped from a cliff top, the ball srtikes the sea below at a velocity of 10m/s. (a)what is the kinetic energy of the ball as it strikes the sea? (b) what was its potential energy before it was dropped? (c)from what height was it dropped? Friday, December 9, 2011 at 1:32pm by sim A 0.400-kg ball is dropped from rest at a point 1.80 m above the floor. The ball rebounds straight upward to a height of 0.730 m. What are the magnitude and direction of the impulse of the net force applied to the ball during the collision with the floor? Thursday, October 14, 2010 at 1:27pm by Sarah A 0.350-kg ball is dropped from rest at a point 1.10 m above the floor. The ball rebounds straight upward to a height of 0.420 m. Taking the negative direction to be downward, what is the impulse of the net force applied to the ball during the collision with the floor? Tuesday, January 3, 2012 at 8:38pm by Robert A basketball of mass 0.70 kg is dropped from rest from a height of 1.22 m. It rebounds to a height of 0.64 m. How much mechanical energy was lost during the collision with the floor? A basketball player dribbles the ball from a height of 1.22 m by exerting a constant downward ... Tuesday, November 27, 2012 at 10:04pm by Physics A ball of mass 0.1 kg is dropped from a height of 2 m onto a hard surface. If rebounds to a height of 1.5 m and it is in contact with the surface for 0.05s. Calculate the: (a) speed with which it strikes the surface. (b) speed with which it leaves the surface. (c) change in ... Saturday, March 15, 2014 at 3:29am by waslat HS Physics Please help...I know the direction is up, but I can't figure out the magnitude, except that its label is kg · m/s, I think. A 0.400-kg ball is dropped from rest at a point 1.80 m above the floor. The ball rebounds straight upward to a height of 0.730 m. What are the magnitude ... Friday, October 15, 2010 at 6:22pm by Tzana college physics A small ball of mass m is aligned above a larger ball of mass M = 0.79 kg (with a slight separation), and the two are dropped simultaneously from height h = 2.0 m. (Assume the radius of each ball is negligible relative to h). (a) If the larger ball rebounds elastically from ... Thursday, November 12, 2009 at 9:01pm by Steve A ball, dropped from rest, cover 2/7 of the distant to the ground in the last 2 seconds of its fall. From what height was the ball dropped? What was the total time of the fall? Sunday, October 11, 2009 at 6:22pm by Robert C Long, long ago, on a planet far, far away, a physics experiment was carried out. First, a 0.250 kg ball with zero net charge was dropped from rest at a height of 1.00 m. The ball landed 0.350 s later. Next, the ball was given a net charge of 7.60 muC and dropped in the same ... Monday, February 13, 2012 at 5:35am by DAVE 1) a force of 10 N acts on a 5.0 kg object, initially at rest, for 2.5 s. What is the final speed of the object? 2) a 1500 kh car, is allowed to coast along a level track at a speed of 8.0 m/s. It collides and couples with a 2000-kg truck , initially at rest with brakes ... Monday, November 29, 2010 at 9:17pm by Joceylin a .140 kg baseball is dropped from rest from a height of 2.2m above the ground. it rebounds to a height of 1.6 m. what change in the balls momentum occurs when the ball hits the ground? Tuesday, February 22, 2011 at 12:10pm by vicki a .140 kg baseball is dropped from rest from a height of 2.2m above the ground. it rebounds to a height of 1.6 m. what change in the balls momentum occurs when the ball hits the ground Wednesday, February 23, 2011 at 7:36am by vicki two balls are dropped to the ground from different heights. one ball is dropped 2s after the other but they both strike the ground at the same time, 5s after the first is dropped. what is the difference in height from which they are dropped? from what height was the first ball... Tuesday, June 14, 2011 at 11:46am by pavatharani algebra1 HELP PLEASE Two identical rubber balls are dropped from different heights. Ball 1 is dropped from a height of 100 feet, and ball 2 is dropped from a height of 210 feet. Write a function for the height of each ball. h1(t) = h2(t) = When does ball 1 reach the ground? Round to the nearest ... Monday, July 8, 2013 at 11:13am by lisa A ball is thrown upward from the ground with an initial speed of 25 m/s; at the same instant, a ball is dropped from rest from a building 15 m high. After how long will the balls be at the same Thursday, September 20, 2012 at 2:18am by Shayla A ball is thrown upward from the ground with an initial speed of 25 m/s; at the same instant, a ball is dropped from rest from a building 15 m high. After how long will the balls be at the same Thursday, September 20, 2012 at 2:21am by Shayla This chart shows how high a ball bounces when dropped from different heights If this ball is dropped from 350 in how high will it bounce? Drop in inches Bounce Height 50 25 100 50 150 75 200 100 A.200 B.175 C.150 D.120 B? Thursday, February 7, 2013 at 7:36pm by Jerald Long, long ago, on a planet far, far away, a physics experiment was carried out. First, a 0.250 kg ball with zero net charge was dropped from rest at a height of 1.00 m. The ball landed 0.350 s later. Next, the ball was given a net charge of 7.60 muC and dropped in the same ... Monday, February 13, 2012 at 5:18pm by kas algeba 1 please I REALY NEED HELP THANK YOU :D Two identical rubber balls are dropped from different heights. Ball 1 is dropped from a height of 100 feet, and ball 2 is dropped from a height of 210 feet. Write a function for the height of each ball. h1(t) = h2(t) = When does ball 1 reach the ground? Round to the nearest ... Monday, July 8, 2013 at 12:09pm by lisa physic PLEASE I REALLY NEED HELP WITH THIS PROBLEM Two identical rubber balls are dropped from different heights. Ball 1 is dropped from a height of 100 feet, and ball 2 is dropped from a height of 210 feet. Write a function for the height of each ball. h1(t) = h2(t) = When does ball 1 reach the ground? Round to the nearest ... Monday, July 8, 2013 at 12:42pm by lisa A rubber ball (mass 0.21 kg) is dropped from a height of 1.7 m onto the floor. Just after bouncing from the floor, the ball has a speed of 3.6 m/s. (a) What is the magnitude and direction of the impulse imparted by the floor to the ball? Magnitude? Direction? (b) If the ... Sunday, December 4, 2011 at 7:07pm by glenmore algebra1 HELP PLEASE Thanks you!! Two identical rubber balls are dropped from different heights. Ball 1 is dropped from a height of 100 feet, and ball 2 is dropped from a height of 210 feet. Write a function for the height of each ball. h1(t) = h2(t) = When does ball 1 reach the ground? Round to the nearest ... Monday, July 8, 2013 at 12:22pm by sandy algebra1 HELP PLEASE Thanks you!! wo identical rubber balls are dropped from different heights. Ball 1 is dropped from a height of 100 feet, and ball 2 is dropped from a height of 210 feet. Write a function for the height of each ball. h1(t) = h2(t) = When does ball 1 reach the ground? Round to the nearest ... Monday, July 8, 2013 at 1:49pm by lisa It takes 2.5 s for a small ball released from rest from a tall building to reach the ground. The mass of the ball is 0.05 kg. Calculate the height from which the ball is released. Ok so I have the time, initial velocity, and the acceleration. But the mass totally throws me off... Tuesday, October 9, 2007 at 1:07pm by Lindsay I've been stuck on this problem and hope someone can help me. A man is standing on scales which show his weight as 607.6 N. A .50 kg ball is dropped from a height of 1 m into his hands. His hands drop 25 cm from chest level to waist level during the catch.His mass is 62 kg. If... Sunday, March 22, 2009 at 3:47pm by Brigid Physics 203 A 2.4 m long rod of mass m1 = 11.0 kg, is supported on a knife edge at its midpoint. A ball of clay of mass m2 = 5 kg is dropped from rest from a height of h = 1.1 m and makes a perfectly inelastic collision with the rod 0.9 m from the point of support. Find the angular ... Wednesday, April 13, 2011 at 7:06pm by Kayla Two balls are conneted by a string that stretches over a massless, frictionless pulley. Ball 1 has a mass of 0.81 kg and is held 0.5 m above the ground. Ball 2 has a mass of 6.3 kg and is held 0.28 m above the ground. When the balls are released, ball 2 falls to the ground, ... Sunday, January 15, 2012 at 12:29am by Anonymous Two balls are conneted by a string that stretches over a massless, frictionless pulley. Ball 1 has a mass of 0.81 kg and is held 0.5 m above the ground. Ball 2 has a mass of 6.3 kg and is held 0.28 m above the ground. When the balls are released, ball 2 falls to the ground, ... Sunday, December 21, 2008 at 12:09pm by Anonymous A rubber ball of mass 18.0 g is dropped from a height of 1.75 m onto a floor. The velocity of the ball is reversed by the collision with the floor, and the ball rebounds to a height of 1.55 m. What impulse was applied to the ball during the collision? Sunday, October 16, 2011 at 5:36pm by Kellie A rubber ball of mass 19.5 g is dropped from a height of 1.85 m onto a floor. The velocity of the ball is reversed by the collision with the floor, and the ball rebounds to a height of 1.55 m. What impulse was applied to the ball during the collision? Friday, February 24, 2012 at 12:49am by Tyler A rubber ball of mass 41.5 g is dropped from a height of 1.75 m onto a floor. The velocity of the ball is reversed by the collision with the floor, and the ball rebounds to a height of 1.55 m. What impulse was applied to the ball during the collision? Friday, February 24, 2012 at 11:46am by Anonymous A rubber ball of mass 43.5 g is dropped from a height of 2.05 m onto a floor. The velocity of the ball is reversed by the collision with the floor, and the ball rebounds to a height of 1.55 m. What impulse was applied to the ball during the collision? Monday, October 14, 2013 at 8:36pm by Anonymous Two balls are connected by a string that stretches over a massless, frictionless pulley. Ball 1 has a mass of 0.45 kg and is held 0.74 m above the ground. Ball 2 has a mass of 5.7 kg and is held 0.88 m above the ground. When the balls are released, ball 2 falls to the ground, ... Sunday, December 21, 2008 at 3:05am by jason A ball of mass 1.2kg and a volume of 35cm cubed is dropped from a height of 2m. Determine the height to which this ball will bounce. Sunday, September 5, 2010 at 3:31pm by Joe a ball rebounds one half of the height from which it was dropped. The ball is dropped from a height of 160 feet and keeps bouncing. what is the total vertical distance the ball will travel from the moment it is droppedto the moment it hits the floor for the fifth time? Thursday, July 9, 2009 at 7:48pm by laly A ball is dropped from a height of 1 metre. At every bounce it travels half of the height it travelled it with the previous flight. Find the total distance travelled when the ball comes to rest. Thursday, December 27, 2012 at 8:25am by anitha A mass of 5.25-kg was dropped from a height of 31 m. How much time does it take for the mass to reach the ground? Thursday, August 25, 2011 at 5:37pm by kayla a ball rebounds 1/2 of the height from which it is dropped. The ball is dropped from a height of 136 feet and keeps on bouncing. How far will it traveled when it strikes the ground the third time? Sunday, August 29, 2010 at 4:02am by Karen A cue ball (mass = 0.140 kg) is at rest on a frictionless pool table. The ball is hit dead center by a pool stick which applies an impulse of +1.60 N·s to the ball. The ball then slides along the table and makes an elastic head-on collision with a second ball of equal mass ... Wednesday, December 7, 2011 at 7:03pm by s A ball of mass 0.15kg is dropped from rest at a height of 1.25m. it rebounds from the floor to reach a height of 0.960n. what impulse is given to the ball from the floor? i dont have a clue where to start. i know impulse is constant force x change in time or time elapsed, but ... Sunday, October 21, 2007 at 5:39am by natali After a 0.200-kg rubber ball is dropped from a height of 1.95 m, it bounces off a concrete floor and rebounds to a height of 1.45 m. (a) Determine the magnitude and direction of the impulse delivered to the ball by the floor. (b) Estimate the time the ball is in contact with ... Wednesday, March 5, 2014 at 9:29pm by Joe After a 0.200-kg rubber ball is dropped from a height of 1.95 m, it bounces off a concrete floor and rebounds to a height of 1.45 m. (a) Determine the magnitude and direction of the impulse delivered to the ball by the floor. (b) Estimate the time the ball is in contact with ... Friday, March 7, 2014 at 12:24am by Joe A 0.125-kg baseball is dropped from rest. If the magnitude of the baseball's momentum is 0.750 kg * m/s just before it lands on the ground, from what height was it dropped? Tuesday, March 29, 2011 at 7:09pm by Jason A cue ball (mass = 0.150 kg) is at rest on a frictionless pool table. The ball is hit dead center by a pool stick which applies an impulse of +1.45 N·s to the ball. The ball then slides along the table and makes an elastic head-on collision with a second ball of equal mass ... Sunday, October 24, 2010 at 4:48pm by Xavier A cue ball (mass = 0.140 kg) is at rest on a frictionless pool table. The ball is hit dead center by a pool stick which applies an impulse of +1.60 N·s to the ball. The ball then slides along the table and makes an elastic head-on collision with a second ball of equal mass ... Wednesday, December 7, 2011 at 8:34pm by Anonymous AP Physics a ball of mass 3 kg is moving with a velocity of 10 m/s. This ball collides with a second ball of mass 0.5 kg that is at rest. After the collison, the first ball is at rest. What is the velocity after the collison of the second ball if the collison is inelastic? Tuesday, October 18, 2011 at 11:51pm by Kailyn AP Physics a ball of mass 3 kg is moving with a velocity of 10 m/s. This ball collides with a second ball of mass 0.5 kg that is at rest. After the collison, the first ball is at rest. What is the velocity after the collison of the second ball if the collison is inelastic? Tuesday, October 18, 2011 at 11:53pm by Kailyn A ball of mass 200 g is dropped from a height 3 m above the ground onto a hard floor, and rebounds to a height of 2.2 m. If the ball is in contact with the floor for 0.002 s calculate the average force exerted by the floor on the ball Friday, March 7, 2014 at 2:29pm by Thobiswa Starting with an initial speed of 5.00 m/s at a height of 0.335 m, a 1.75-kg ball swings downward and strikes a 4.40-kg ball that is at rest, as the drawing shows. (a) Using the principle of conservation of mechanical energy, find the speed of the 1.75-kg ball just before ... Sunday, March 23, 2014 at 7:36pm by Bo After falling from rest from a height of 27 m, a 0.53 kg ball rebounds upward, reaching a height of 17 m. If the contact between ball and ground lasted 1.6 ms, what average force was exerted on the Friday, February 25, 2011 at 5:55pm by Anonymous After falling from rest from a height of 30 m, a 0.49 kg ball rebounds upward, reaching a height of 20 m. If the contact between ball and ground lasted 2.5 ms, what average force was exerted on the Monday, September 12, 2011 at 7:53pm by LeVonte After falling from rest from a height of 30 m, a 0.48-kg ball rebounds upward, reaching a height of 20 m. If the contact between ball and ground lasted 1.8 ms, what average force was exerted on the Monday, February 25, 2013 at 4:40pm by adam After falling from rest from a height of 26 m, a 0.49-kg ball rebounds upward, reaching a height of 16 m. If the contact between ball and ground lasted 1.8 ms, what average force was exerted on the Wednesday, March 20, 2013 at 10:05am by matt Physics please help cant figure this out starting with an initial speed of 5.00 m/s at a height of 0.265 m, a 1.75 kg ball swings downward and strikes a 4.45 kg ball that is at rest, as the drawing shows. (a) Using the principle of conservation of mechanical energy, find the speed of the 1.75 kg ball just before ... Friday, November 5, 2010 at 9:45am by joseph college physics After falling from rest from a height of 34 m, a 0.47 kg ball rebounds upward, reaching a height of 24 m. If the contact between ball and ground lasted 1.8 ms, what average force was exerted on the Tuesday, September 21, 2010 at 8:17pm by halley college physics After falling from rest from a height of 34 m, a 0.47 kg ball rebounds upward, reaching a height of 24 m. If the contact between ball and ground lasted 1.8 ms, what average force was exerted on the Tuesday, September 21, 2010 at 10:20pm by mary After falling from rest at a height of 31.5 m, a 0.564 kg ball rebounds upward, reaching a height of 20.2 m. If the contact between ball and ground lasted 1.80 ms, what average force was exerted on the ball? Wednesday, February 12, 2014 at 5:28pm by sam A ball of mass .3 kg is dropped from a height of 10m. Its momentum when it strikes the ground is? I know p=mv how would I find velocity for this problem? Thanks in advance for your help. Wednesday, October 24, 2007 at 3:12pm by Tammy A 9.0 kg iron ball is dropped onto a pavement from a height of 110 m.If half of the heat generated goes into warming the ball, find the temperature increase of the ball. (In SI units, the specific heat capacity of iron is 450 J/kg*degree C.) Tuesday, April 3, 2012 at 9:40pm by Jim A ball of mass 0.150kg is dropped from a height of 1.25m. It rebounds from the floor to reach a height of 0.960m. What impulse was given to the ball by the floor? Saturday, March 29, 2008 at 4:39pm by JIM physical science A 12-kg iron ball is dropped onto a pavement from a height of 90m. Suppose half of the heat generated goes into warming the ball. What is the temperature increase of the ball in degree C Sunday, October 6, 2013 at 9:21pm by martha A ball is dropped from a height of 36 feet. The quadratic equation d=1/2 gt^2 is used to calculate the distance (d) the ball has fallen after t seconds. The constant g is the acceleration of gravity, 9.8m/s^2. How long does it take the ball to hit the ground? A ball in problem... Monday, November 19, 2012 at 11:37am by Lux ball is dropped from a height of 36 feet. The quadratic equation d=1/2 gt^2 is used to calculate the distance (d) the ball has fallen after t seconds. The constant g is the acceleration of gravity, 9.8m/s^2. How long does it take the ball to hit the ground? A ball in problem 2... Monday, November 19, 2012 at 11:37am by Anonymous A ball has bounce coefficient 0 < r < 1 if when it is dropped from height h, it bounces back to a height of rh. Suppose that such a ball is dropped from an initial height a and subsequently bounces infinitely may times. Find the total up-and-down distance in all its ... Sunday, July 18, 2010 at 3:53pm by fred A ball of mass 1.74 kg is dropped from a height y1 = 1.47 m and then bounces back up to a height of y2 = 0.83 m. How much mechanical energy is lost in the bounce? The effect of air resistance has been experimentally found to be negligible in this case, and you can ignore it. Saturday, June 2, 2012 at 11:13am by John After falling from rest at a height of 29.2 m, a 0.488 kg ball rebounds upward, reaching a height of 20.3 m. If the contact between ball and ground lasted 1.68 ms, what average force was exerted on the ball? I'm so confused on where to start. tyvm Tuesday, September 22, 2009 at 11:11am by David A ball is dropped from an undetermined height and bounces to 5 meters. To what height will it bounce if dropped from a height 1 meter higher? Thursday, September 16, 2010 at 11:16pm by Tony 1....How much force needed to bring a 3200 lb car from rest to a velocity of 44 m/s in 8 sec.? 2....A ball with a mass of 2 kg move to the right with a speed of 2 m/s. It hits a ball of mass 0.50 kg which is at rest. After collision the second ball moves to the right with a ... Friday, February 18, 2011 at 8:56am by EM Im just a little confused on how to answer this question. A ball bounced 64% of the height from which it was dropped. The bounce was 72 cm high. What is the height from which the ball was dropped. I think there has to be an algebraic equation of some sort, but I'm not sure ... Sunday, May 15, 2011 at 7:53pm by Erika A ball of mass m(1) = 0.250 kg and initial v(1) = 5.00 m/s collides head-on with a ball of mass m(2) = 0.800 kg that is initially at rest. What are the velocities of the balls after the collision if they stick together? Friday, January 11, 2013 at 3:49pm by Jackie Solve this equation for t: (Thrown) Ball 1 height = (Dropped) Ball 2 height 26.7 t - 4.9 t^2 = 13.1 - 4.9 t^2 The acceleration terms, -4.9 t^2, cancel out t = 13.1/26.7 = ___ seconds Interesting result, and one that hints at Einstein's principle of equivalence. In a free-... Sunday, January 23, 2011 at 1:13pm by drwls A softball of mass 0.220 kg that is moving with a speed of 5.5 m/s (in the positive direction) collides head-on and elastically with another ball initially at rest. Afterward it is found that the incoming ball has bounced backward with a speed of 3.9 m/s. (a) Calculate the ... Sunday, January 10, 2010 at 5:00pm by Anonymous A golf ball is dropped from 15 different heights (in inches) and the height of the bounce is recorded (in inches.) The regression analysis gives the model y-hat = -0.4 + 0.70 (drop height) where y-hat is the predicted bounce height of the golf ball. A golf ball dropped from 71... Wednesday, September 18, 2013 at 7:40pm by Jeremy Two balls are dropped from rest from the same height. One of the balls is dropped 1.0 s after the other. What distance separates the two balls 2.00 s after the second ball is dropped? Sunday, August 28, 2011 at 7:25am by Anonymous The height(h) of an object that has been dropped or thrown in the air is given by: h(t)=-4.9t^2+vt+h t=time in seconds(s) v=initial velocity in meters per second (m/s) h=initial height in meters(m) A ball is thrown vertically upwardd from the top of the Leaning Tower of Pisa (... Wednesday, April 4, 2012 at 4:24pm by NeedHelp A 5-kg ball is at rest when it is struck head-on by a 2-kg ball moving along a track at 10 m/s. If the 2-kg ball is at rest after the collision, what is the speed of the 5-kg ball after the Monday, March 24, 2014 at 8:05pm by Amanda An object of mass 5 kg is dropped from a certain height. Just before it strikes the ground it has a kinetic energy of 1250 J. From what height was the object dropped? Ignore air resistance and use g = 10 m/s2. Wednesday, August 29, 2012 at 8:08pm by Alexandra A tennis ball of mass 100g is dropped from a height of 40m.It hits the ground and bounces back to a height of 30m after an impact of 0,01s.Ignore the air resistance .what was the velocity of the ball when it hit the ground ? Sunday, February 5, 2012 at 9:04am by kulani A ball is dropped from rest from the top of a building and strikes the ground with the speed Vf. From ground level, a second ball is thrown straight upward at the same instant that the first ball is dropped. The initial speed of the second ball is Vi=Vf, the same with which ... Monday, June 28, 2010 at 6:48pm by Katashia A rubber ball rebounds 3/5 of the height from which it falls. If it is initially dropped from a height of 30 ft. What total vertical distance does it travel before coming to rest? Saturday, January 28, 2012 at 10:54pm by PDF A ball of mass 0.540 kg moving east (+x direction) with a speed of 3.30 m/s collides head-on with a 0.740 kg ball at rest. If the collision is perfectly elastic, what will be the speed and direction of each ball after the collision? a)ball originally at rest m/s b)ball ... Thursday, October 7, 2010 at 7:40pm by ami an elastic ball rebounces 70% when it is dropped from the height. if it is dropped from 60m what is the total distance travelled by the ball Saturday, May 26, 2012 at 11:15am by Anonymous Will you check my work and help me with the last two problems please? A 4 kg ball has a momentum of 12 kg m/s. What is the ball's speed? -(12 kg/m/s) / 4kg=3m/s A ball is moving at 4 m/s and has a momentum of 48 kg m/s. What is the ball's mass? -(48kg/m/s) / (4m/s) =12 kg A 1-... Sunday, October 14, 2007 at 11:07pm by Soly Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=A+ball+of+mass+0.120+kg+is+dropped+from+rest+from+a+height+of+1.25+m.+It+rebounds+from+the+floor+to+reach+a+height+of+0.600+m.+What+impulse+was+given+to+the+ball+by+the+floor%3F","timestamp":"2014-04-17T01:43:45Z","content_type":null,"content_length":"46600","record_id":"<urn:uuid:86fb5037-40eb-45e7-8b10-478f849d82ef>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by ROCKY on Sunday, July 24, 2011 at 9:53pm. factor the following expression completely: 70w^3-125w^2+30w Related Questions intermediate algebra - factor the following expression completely: 70w^3-125w^2+... Algebra - Factor the following expression completely: 36^3-16x Algebra - Completely factor the following expression:100y2 – 36z2 algebra - Completely factor the following expression.1.) 5c^2-24cd-5d^2 algebra - Completely factor the following expression.1.) 5c^2-24cd-5d^2 Intermediate algebra - Factor the following expression completely: 9x^4-81y^2 algebra - completely factor the following expression. 16a^2-40ab+25b^2 Algebra - Completely factor the following expression:100x2 – 160xy + 64y2 algebra - Factor the following expression completely: 45x^3y – 80xy algebra - Completely factor the following expression. 1.4x^2-8x-12+6x
{"url":"http://www.jiskha.com/display.cgi?id=1311558792","timestamp":"2014-04-21T13:37:02Z","content_type":null,"content_length":"8128","record_id":"<urn:uuid:f09ea18d-c248-46ef-8f76-a2bae524e7bf>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Escape Action Level 61-65 Use our strategy guides to help you pass Escape Action levels 61 through 65 easily. Please let us know if you still have trouble beating any of the stages in the game. Remember to check out our other guides for the game for other levels. Thank you! Level 61 Level 62 Level 63 Level 64 Level 65 Escape Action Level 61 Observe the double hearts icon on the right window for the hint. Change the numbers on the door to 214 (2/14 for Valentine’s day). The door will then open and you have escaped! Escape Action Level 62 Tap the blue dots on all corners and observe their movements. They will form different shapes as as shown in our picture solution. These shapes are: Top-left: Triangle Bottom-left: Diamond Top-right: Rectangle Bottom-right: Upside-Down Triangle Match the symbols in the center circle based on the clue given by the dots. The wall breaks after you input the correct code and you can escape! Escape Action Level 63 The numbers for this level are randomly generated so that your solution may be different than ours. First pick up the rag on the floor to the left of the door. Remove the numbers on the door to remove the the numbers. Now tap on the squares to the middle so that they add up to the number on the door. You will then be able to escape! Escape Action Level 64 Disregard the number in the center because it has no connection to the final solution. Notice the center clue: E = 4, A = 3. The logic behind the number is that E has 4 straight lines and A has three straight lines. Now apply the same logic to F, H, and K. F = 3 Straight lines, H = 3 Straight Lines, K = 3 Straight lines. 3 + 3 + 3 = 9. Change the middle number to 09 and escape through the unlocked door! Escape Action Level 65 This challenge room is pretty tough, tap and slide your fingers based on the directions. Although you can potentially complete the challenge without timer power up, consider practice a couple of rounds and purchase additional seconds if you do need them. Each set of direction arrows give you 1 point. Once you have 10 points, the door will open for you to escape!
{"url":"http://adventurewalkthrough.com/escape-action/escape-action-level-61-65/","timestamp":"2014-04-17T21:23:12Z","content_type":null,"content_length":"38368","record_id":"<urn:uuid:2fa4725d-5a20-4efd-b4e0-a6519d9c6979>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Irving Fisher Fisher, Irving, 1867–1947, American economist, b. Saugerties, N.Y., Ph.D. Yale, 1891. He began teaching at Yale in 1890 and was active there until 1935. His earliest work was in mathematics, and he made a distinguished contribution to mathematical economic theory. He was noted chiefly for his studies in managed currency, in which he set forth the theory of the "compensated dollar" whereby purchasing power might be stabilized. His expansion of interest theory included the theory of investment appraisal, which relied on a person's willingness to sacrifice present for future income. He was also one of the first to work out a numbered index system for filing. Fisher's interests were wide; they included activities in academic, business, welfare, and public organizations, especially public health societies. Important among his many books are Mathematical Investigations in the Theory of Value and Prices (1892), Appreciation and Interest (1896), The Nature of Capital and Income (1906), The Rate of Interest (1907), The Making of Index Numbers (1922), and Theory of Interest (1930). See biography by his son, I. N. Fisher (1956). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. More on Irving Fisher from Fact Monster: • D - F - D - F David Dale Marcus Daly Thomas Danforth Herbert Joseph Davenport Edward Davy Charles Gates ... • interest - interest interest, charge for the use of credit or money, usually figured as a percentage of the ...
{"url":"http://www.factmonster.com/encyclopedia/people/fisher-irving.html","timestamp":"2014-04-17T14:22:59Z","content_type":null,"content_length":"20714","record_id":"<urn:uuid:c1a210e0-226d-4ca4-9062-34a3d09efccb>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverted Pendulum: Simulink Modeling In this page we outline how to build a model of our inverted pendulum system for the purposes of simulation using Simulink and its add-ons. A great advantage of simulation, as will be demonstrated in this example, is that it can generate numerical solutions to nonlinear equations for which closed-form solutions cannot be generated. The nonlinear simulation can then be employed to test the validity of a linearized version of the model. The simulation model can also be used to evaluate the performance of the control scheme designed based on the linearized model. Physical setup and system equations In this example we will consider a two-dimensional version of the inverted pendulum system with cart where the pendulum is contrained to move in the vertical plane shown in the figure below. For this system, the control input is the force For this example, let's assume the following quantities: (M) mass of the cart 0.5 kg (m) mass of the pendulum 0.2 kg (b) coefficient of friction for cart 0.1 N/m/sec (l) length to pendulum center of mass 0.3 m (I) mass moment of inertia of the pendulum 0.006 kg.m^2 (F) force applied to the cart (x) cart position coordinate (theta) pendulum angle from vertical (down) Below are the two free-body diagrams of the system. This system is challenging to model in Simulink because of the physical constraint (the pin joint) between the cart and pendulum which reduces the degrees of freedom in the system. Both the cart and the pendulum have one degree of freedom ( It is necessary, however, to include the interaction forces Inverted Pendulum: System Modeling tutorial, the interaction forces In general, we would like to exploit the modeling power of Simulink to take care of the algebra for us. Therefore, we will model the additional However, the position coordinates Then addressing the These expressions can then be substituted into the expressions for We can now represent these equations within Simulink. Simulink can work directly with nonlinear equations, so it is unnecessary to linearize these equations as was done in the Inverted Pendulum: System Modeling page. Building the nonlinear model with Simulink We can build the inverted pendulum model in Simulink employing the equations derived above by following the steps given below. • Begin by typing simulink into the MATLAB command window to open the Simulink environment. Then open a new model window in Simulink by choosing New > Model from the File menu at the top of the open Simulink Library Browser window or by pressing Ctrl-N. • Insert four Fcn Blocks from the Simulink/User-Defined Functions library. We will build the equations for • Change the label of each Fcn block to match its associated function. • Insert four Integrator blocks from the Simulink/Continuous library. The output of each Integrator block is going to be a state variable of the system: • Double-click on each Integrator block to add the State Name: of the associated state variable. See the following figure for an example. Also change the Initial condition: for • Insert four Multiplexer (Mux) blocks from the Simulink/Signal Routing library, one for each Fcn block. • Insert two Out1 blocks and one In1 block from the Simulink/Sinks and Simulink/Sources libraries, respectively. Then double-click on the labels for the blocks to change their names. The two outputs are for the "Position" of the cart and the "Angle" of the pendulum, while the one input is for the "Force" applied to the cart. • Connect each output of the Mux blocks to the input of the corresponding Fcn block. • Connect the output of the Now we will enter each of the four equations (1), (2), (13), and (14) into a Fcn block. Let's start with equation (1) which is repeated below. • This equation requires three inputs: Number of inputs: to "3". • Connect these three inputs to this Mux block in the order prescribed in the previous step. • Double-click on the first Fcn block and enter the equation for xddot as shown below. Now, let's enter equation (2) which is repeated below. • This equation also requires three inputs: • Enter the above equation into the Fcn block, change the number of inputs of the Mux block, and connect the correct signals to the Mux block in the correct order. • Repeat this process for equations (13) and (14) repeated below. When all of these steps are completed, the resulting model should appear as follows. In order to save all of these components as a single subsystem block, first select all of the blocks, then select Create Subsystem from the Edit menu. Your model should appear as follows. You can also download the file for this system here, Pend_Model.mdl. Building the nonlinear model with Simscape In this section, we alternatively show how to build the inverted pendulum model using the physical modeling blocks of the Simscape extension to Simulink. The blocks in the Simscape library represent actual physical components; therefore, complex multi-body dynamic models can be built without the need to build mathematical equations from physical principles as was done above by applying Newton's Open a new Simulink model and follow the steps below to create the inverted pendulum model in Simscape. In order to orient oneself, we will assume a coordinate system where the cart moves in the • Insert a Body block from the Simscape/SimMechanics/Mechanical/Bodies library to represent the cart. Following the system parameters given at the top of this page, double-click on the block and set the Mass: to "0.5" with units of kg. The Body block by default includes two ports. Since we need ports to define where the pendulum is connected to the cart and where the external force and the frictional force are applied, a third port must be added. This can be accomplished from the button on the right-side of the Position tab. Since the cart will only move in one dimension, the two forces must be co-located along that dimension (the CG). The following figure shows a possible definition of the cart body. • Insert a second Body block to represent the pendulum. Double-click on the block and set the Mass: to "0.2" with units of kg. Since the pendulum can only rotate about the Inertia: equal to "0.006*eye(3)" with units of kg*m^2. Since we are modeling the pendulum as a rigid body that has size as well as mass, the body can rotate and it is important to define the location of the pendulum's attachment to the cart and its CG correctly. Specifically, define the point of attachment CS1 to have a position "[0 0 0]" and an origin that is Adjoining and define the CG to be 0.3 meters away from the attachment CS1 (as defined above). Also define the four corners of the pendulum. Make sure to show the port defining the attachment point. Under the Visualization tab, you can also change the pendulum's color to make it stand out from the cart. • Next add a Revolute block from the Simscape/SimMechanics/Joints library to define the joint connecting the pendulum to the cart. By default, the joint will be defined to rotate about the B) of the joint and the Body block corresponding to the pendulum to the follower port (F) of the joint. Double-click on the Revolute block and set the Number of sensor / actuator ports: to "2". • Then add a Joint Initial Condition block and a Joint Sensor block from the Simscape/SimMechanics/Sensors & Actuators library and connect these blocks to the Revolute block. Double-click on the Joint Initial Condition block and check the Enable box. We can use the default values for initial position and velocity of the joint. Employing an initial position of 0 degrees corresponds to the pendulum being pointed vertically upward based on the definition of the pendulum body above. This isn't consistent with the original definition of Angle measurement to rad. Angular position is the only measurement that is needed for this joint, the other boxes may remain unchecked. • Add two Prismatic blocks from the Simscape/SimMechanics/Joints library to define the translational degree of freedom of the cart and the application of the forces to the cart. Since the cart is technically a point mass we need only one Prismatic block, but by employing two we can apply the forces at different locations. Double-click on each Prismatic block and change the Axis of Action to "[1 0 0]" to reflect the fact that the two forces act in the F) of each block to the ports for the applied force (CS1) and frictional force (CS2) on the Body block representing the cart. • Next add two Ground blocks from the Simscape/SimMechanics/Bodies library to define the base for the motion of the cart. Specifically, connect the output of each ground block to the base port (B) of each Prismatic block. • For one of the Ground blocks you just created, double-click on the block and check the Show Machine Environment port box. Then add a Machine Environment block from the Simscape/SimMechanics/ Bodies library and connect it to the Ground block for which you just added the port. The Machine Environment block allows us to define the gravitational force in the simulation. In this case the default direction (negative m/s^2 are correct. This block also allows us to define the parameters for visualization and the numerical solver. The default parameters are fine for this example. • Next add two Joint Actuator blocks and one Joint Sensor block from Simscape/SimMechanics/Sensors & Actuators library. The Joint Actuator blocks will be employed for generating the external applied force and the frictional force, while the Joint Sensor block will sense the motion of the cart. Note, there is also a Translational Friction block that is available, but we will calculate the frictional force ourselves since we are employing only a simple viscous model. Double-click on one of the Prismatic blocks and set the Number of sensor / actuator ports: to "1" (for the force actuator). For the other Prismatic block, set the Number of sensor / actuator ports: to "2" (one for the force actuator and one for the cart sensor). Then connect the Joint Actuator and Joint Sensor blocks as described. The default values for the Joint Actuator blocks are sufficient for this case, but we must change the Joint Sensor block to output position and velocity since the velocity is needed for calculating the frictional force. Double-click on the Joint Sensor block and check the box for Velocity while leaving the box for Position checked. The default metric units do not need to be changed. Also uncheck the box for Output selected parameters as one signal. • Add a Gain block from the Simulink/Math Operations library to represent the viscous friction coefficient • Next, add two Out1 blocks and one In1 block from the Simulink/Ports & Subsystems library. Connect the Out1 blocks to the remaining Joint Sensor block outputs and the In1 block to the remaining Joint Actuator input. • Finally, connect and label the components as shown in the following figure. You can rotate a block in a similar manner to the way you flipped blocks, that is, by right-clicking on the block then selecting Rotate Block from the Format menu. You can also save this model as a single subsystem block as described in the previous section. You can change the color of the subsystem by right-clicking on the block and choosing Background Color from the resulting menu. You can download the complete model file here, Pend_Model_Simscape.mdl, but note that you will need the Simscape addition to Simulink in order to run the file. We use this model in the Inverted Pendulum: Simulink Controller Design page. Generating the open-loop response We will now simulate the response of the inverted pendulum system to an impulsive force applied to the cart. This simulation requires an impulse input. Since there is no such block in the Simulink library, we will use the Pulse Generator block to approximate a unit impulse input. We could use either of the models we generated above, however, we will use the Simscape model in this case because it will allow us to visualize the motion of the inverted pendulum system. Follow the steps given below. • Open the inverted pendulum simscape model generated above. • Add a Pulse Generator block from the Simulink/Sources library. Double-click on the block and change the parameters as shown below. In particular, change the Period: to "10". Since we will run our simulation for 10 seconds, this will ensure that only a single "pulse" is generated. Also change the Amplitude to "1000" and the Pulse Width (% of period): to "0.01". Together, these settings generate a pulse that approximates a unit impulse in that the magnitude of the input is very large for a very short period of time and the area of the pulse equals 1. • Add a Scope block from the Simulink/Sinks library. • In order display two inputs on the scope, double-click on the Scope block, choose the Parameters icon, and change the Number of axes: to "2". Connect the blocks and label the signals connected to the Scope block as shown. Save this system as Pend_Openloop.mdl, or download it here Before we start the simulation, we would like to enable the visualization of the inverted pendulum system. From the menus at the top of the model window choose Simulation > Configuration Parameters. Then from the from the directory on the left-side of the window choose Simscape > SimMechanics. Then check the box for Show animation during simulation as shown in the figure below. Now, start the simulation (select Start from the Simulation menu or enter Ctrl-T). As the simulation runs, an animation of the inverted pendulum like the one shown below will visualize the system's resulting motion. Then open the Scope and click the Autoscale button. You will see the following output for the pendulum angle and the cart position. Notice that the pendulum repeatedly swings through full revolutions where the angle rolls over Inverted Pendulum: System Analysis page. This is due of course to the fact that this simulation employed a fully nonlinear model, while the previous analysis had relied on a linear approximation of the inverted pendulum model. In order to compare the results of the simulation model more directly to the prior results, we will extract a linear model from our simulation model. Extracting a linear model from the simulation Aside from comparing our simulation model to our prior results, it may also be desirable to extract a linear model for the purposes of analysis and design. Much of the analytical techniques that are commonly applied to the analysis of dynamic systems and the design of their associated control can only be applied to linear models. Therefore, it may be desirable to extract an approximate linear model from the nonlinear simulation model. We will accomplish this from within Simulink. • To begin, open either of the Simulink models generated above, Pend_Model.mdl or Pend_Model_Simscape.mdl. • If you generated your simulation model using variables, it is necessary to define the physical constants in the MATLAB workspace before performing the linearization. This can be accomplished by entering the following commands in the MATLAB command window. M = 0.5; m = 0.2; b = 0.1; I = 0.006; g = 9.8; l = 0.3; • Next choose from the menus at the top of the model window Tools > Control Design > Linear Analysis. This will cause the Linear Analysis Tool window to open. • In order to perform our linearization, we need to first identify the inputs and outputs for the model and the operating point that we wish to perform the linearization about. First right-click on the signal representing the Force input in the Simulink/Simscape model. Then choose Linearization Points > Input Point from the resulting menu. Similarly, right-click on each of the two output signals of the model (pendulum angle and cart position) and select Linearization Points > Output Point from the resulting menu in each case. The resulting inputs and outputs should now be identified on your model by arrow symbols as shown in the figure below. • Next we need to identify the operating point to be linearized about. From the Operating Point: menu choose Linearize At... > Trim model... as shown in the figure below. This will open the TRIM MODEL tab. Within this tab, select the Trim button indicated by the green triangle. This will create the operating point op_trim1. • Since we wish to examine the impulse response of this system, return to the EXACT LINEARIZATION tab and choose New Impulse from the Plot Result: drop-down menu near the top window as shown in the figure below. • Finally, choose op_trim1 from the Operating Point: drop-down menu and click the Linearize button indicated by the green triangle. This automatically generates an impulse response plot and the linearized model linsys1. • In order to compare the results to those plots generated in the Inverted Pendulum: System Analysis page, it is necessary to change the Properties from the right-click menu. The resulting window should then appear as follows, where the top plot is response of the pendulum angle and the bottom plot is the response of the cart position. These plots are very similar, though not exactly the same, as those generated in the Inverted Pendulum: System Analysis page. We can also export the resulting linearized model into the MATLAB workspace for further analysis and design. This can be accomplished by simply right-clicking on the linsys1 object in the Linear Analysis Workspace to copy the object. Then right-click within the MATLAB Workspace to paste the object.
{"url":"http://ctms.engin.umich.edu/CTMS/index.php?example=InvertedPendulum&section=SimulinkModeling","timestamp":"2014-04-20T16:24:40Z","content_type":null,"content_length":"74094","record_id":"<urn:uuid:f0ceb2cb-1244-48ef-8fd9-8ebf57dae6b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Piedmont, CA Calculus Tutor Find a Piedmont, CA Calculus Tutor ...I specialize in tutoring high school mathematics, such as geometry, algebra, precalculus, and calculus, as well as AP physics. In addition, I have significant experience tutoring students in lower division college mathematics courses such as calculus, multivariable calculus, linear algebra and d... 25 Subjects: including calculus, physics, algebra 1, statistics ...To tutor MATLAB programming I will create a series of exercises designed to show the student the specific tools that they will need for their desired application. I am a trained engineer, with an M.S. from UC Berkeley, and a B.S. from the University of Illinois at Urbana-Champaign. I have gradu... 15 Subjects: including calculus, Spanish, geometry, ESL/ESOL ...I have helped hundreds of students, both one on one and in a classroom setting, and many of them provide excellent references and referrals for me. Please get in touch with me – I will be very happy to help you succeed.I tutor AP Statistics and college level introductory statistics courses. Man... 14 Subjects: including calculus, statistics, geometry, algebra 2 I hold a B.A. in molecular and cell biology from University of California at Berkeley and a B.S. in health Science from University of the Sciences in Philadelphia. I love teaching and enjoy working with students of all ages. I served as a teacher's assistant in my math, English and Chinese classes since I excelled in those classes during middle and high school. 22 Subjects: including calculus, geometry, statistics, biology ...I'm a 25-year veteran of Silicon Valley with multiple degrees in Engineering. I am currently an Adjunct Professor at the Golden Gate University School of Business. I teach graduate-level courses in Business-to-Business Marketing, with a focus on technology marketing. 39 Subjects: including calculus, English, chemistry, reading Related Piedmont, CA Tutors Piedmont, CA Accounting Tutors Piedmont, CA ACT Tutors Piedmont, CA Algebra Tutors Piedmont, CA Algebra 2 Tutors Piedmont, CA Calculus Tutors Piedmont, CA Geometry Tutors Piedmont, CA Math Tutors Piedmont, CA Prealgebra Tutors Piedmont, CA Precalculus Tutors Piedmont, CA SAT Tutors Piedmont, CA SAT Math Tutors Piedmont, CA Science Tutors Piedmont, CA Statistics Tutors Piedmont, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Piedmont_CA_Calculus_tutors.php","timestamp":"2014-04-20T16:13:37Z","content_type":null,"content_length":"24197","record_id":"<urn:uuid:1780ec47-1fe6-4f02-a697-341a1727f438>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra, from the Arabic al-jebr, meaning “reunion of broken parts”, is one of the wide-ranging parts of mathematics, combined with number theory, geometry, and analysis. Geometry is a branch of mathematics that is concerned with questions of shape, size, relative position of figures, and the properties of space. Trigonometry, from the Greek trigonon, meaning triangle, and metron, meaning measure, is a branch of mathematics that studies the relationships involving lengths and angles of triangles. Applied mathematics is a branch of mathematics that focuses on mathematical methods that are usually used in science, engineering, business, and industry. In mathematics and computer science, an algorithm is a step-by-step procedure for calculations. Algorithms are used for calculations, automated reasoning, and data processing. In geometry, a polygon is traditionally a plane figure that is bounded by a finite chain of strait line segments closing in a loop to create a closed chain or circuit. In Euclidean geometry, a rhombus, plural rhombi or rhombuses, is a simple quadrilateral whose four sides all have equal length. Square Root Day is an unofficial holiday celebrated on days when both the day of the month and the month are the square root of the last two digits of the year. There are nine square root days that always fall on the same dates each century.There is an...
{"url":"http://www.redorbit.com/education/reference_library/mathematics/","timestamp":"2014-04-19T05:28:32Z","content_type":null,"content_length":"29510","record_id":"<urn:uuid:eb531336-0e36-484e-8d33-46ef8f4a76a0>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
sum of interior angles of a 7-point star. April 4th 2011, 02:50 AM #1 Apr 2010 sum of interior angles of a 7-point star. Dear sir , I would appreciate if anyone can show me the solution to the below question. What is the value of the angles A+B+C+D+E+F+G in the beolw figure ? Last edited by kingman; April 4th 2011 at 02:51 AM. Reason: typo error Hello, kingman! I have a very primitive solution. $\text{What is the sum of the angles }A\!+\!B\!+\!C\!+\!D\!+\!E\!+\!F\!+\!G\text{ in the figure?}$ Place a pencil on $\,AC$, the eraser at $\,A$, the point at $\,C.$ Rotate the pencil about $\,C$ so that the eraser is at $\,E$. Rotate the pencil about $\,E$ so that the point is at $\,G.$ Rotate the pencil about $\,G$ so that the eraser is at $\,B.$ Rotate the pencil about $\,B$ so that the point is at $\,D.$ Rotate the pencil about $\,D$ so that the eraser is at $\,F.$ Rotate the pencil about $\,F$ so that the point is at $\,A.$ Rotate the pencil about $\,A$ so that the eraser is at $\,C.$ We find that the pencil has undergone $1\tfrac{1}{2}$ rotations. Therefore: . $A\!+\!B\!+\!C\!+\!D\!+\!E\!+\!F\!+\!G \:=\:540^o$ Assuming that the star is uniform, you can try this method. Lets begin with the triangle with angle A as one of its angle, label the two remaining angles with 1 and 2 (your preference). Then, label the opposite angles in the neighbouring triangle as well. Introduce some new angles, and repeat the process until you have all the angles in the 7 exterior triangles labelled. Now, label the heptagon inside, ie 180 - 1, 180 -2 .... A + 1 + 2 = 180 B + 2 + 3 = 180 Do for all the rest and get 7 equations and sum them all up: A+...+G + 2 (1 + 2 + ... + 7) =1260 Then set up another equation by summing up all the angles in the inner heptagon. Yeah, just follow these boring steps if it comes out in a test, else go for Soroban's more interesting way of solving. Thanks very much Soroban for the graphical approach to the question but I wonder whether it can done in the similar fashion iwhen solving the above problem if we take a 5 -point star instead and taking advantage of fact that the exterior angle of a triangle equals to sum of the interior opposite angles. another method mark intersection of GE and DF, H.StraigtenAGEF to form a rectangle.Let AGD be equilateral and GHD isosceles 30-30-120. Revised figure contains 4 90's and 3 60's= 540 thanks very much for the solution but would appreciate very much if you can draw out how the final figure looks like and how a rectangle can be formed by straigtening AGEF .Finally can you explain how AGD becomes equilateral and GHD becomes isosceles . sum of interior angles Hi kingman, if you look at your diagram with my added pointH you can see that BGHD is shaped like akite and ACEF can be straitened to a rectanglewhich I will call akite with tails at F and E.THe rearrangement does not change the sum of the interior angles but makes it easy to find the sum.Draw the kite as previously described. Extend GH and DH .Draw AC parallel to GD.Drop perpendiculars from A and C meeting DH extended at Fand GH extended at E. Count the angles AC must be equal to FE Last edited by bjhopper; April 4th 2011 at 08:13 PM. Reason: add correction Thanks very much for the response and would appreciate if you can use Paintbrush and make a sketch of the diagram you have just described. Really expeiencing difficulty in visually something without a diagram. April 4th 2011, 05:44 AM #2 Super Member May 2006 Lexington, MA (USA) April 4th 2011, 06:01 AM #3 MHF Contributor Sep 2008 West Malaysia April 4th 2011, 06:21 AM #4 Apr 2010 April 4th 2011, 08:02 AM #5 Super Member Nov 2007 Trumbull Ct April 4th 2011, 05:54 PM #6 Apr 2010 April 4th 2011, 08:10 PM #7 Super Member Nov 2007 Trumbull Ct April 4th 2011, 09:41 PM #8 Apr 2010
{"url":"http://mathhelpforum.com/geometry/176754-sum-interior-angles-7-point-star.html","timestamp":"2014-04-18T14:06:27Z","content_type":null,"content_length":"54841","record_id":"<urn:uuid:ff3bda5a-3693-45ff-a092-01506fb6ffa7>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
nag_opt_sparse_convex_qp (e04nkc) NAG Library Function Document nag_opt_sparse_convex_qp (e04nkc) 1 Purpose nag_opt_sparse_convex_qp (e04nkc) solves sparse linear programming or convex quadratic programming problems. 2 Specification #include <nag.h> #include <nage04.h> nag_opt_sparse_convex_qp (Integer n, Integer m, Integer nnz, Integer iobj, Integer ncolh, void void (*qphx)(Integer ncolh, const double x[], double hx[], Nag_Comm *comm), const double a[], const Integer ha[], const Integer ka[], const double bl[], const double bu[], double xs[], Integer *ninf, double *sinf, double *obj, Nag_E04_Opt *options, Nag_Comm *comm, NagError *fail) 3 Description nag_opt_sparse_convex_qp (e04nkc) is designed to solve a class of quadratic programming problems that are assumed to be stated in the following general form: $minimize x ∈ R n f x subject to l ≤ x A x ≤ u ,$ (1) is a set of variables, is an matrix and the objective function $f x$ may be specified in a variety of ways depending upon the particular problem to be solved. The optional argument Section 11.2 ) may be used to specify an alternative problem in which $f x$ is maximized. The possible forms for $f x$ are listed in Table 1 below, in which the prefixes FP, LP and QP stand for ‘feasible point’, ‘linear programming’ and ‘quadratic programming’ respectively, is an element vector and is the second-derivative matrix $∇ 2 f x$ Hessian matrix Problem Type Objective Function $fx$ Hessian Matrix $H$ FP Not applicable Not applicable LP $cT x$ Not applicable QP $cT x + 1 2 xT Hx$ Symmetric positive semidefinite For LP and QP problems, the unique global minimum value of $f x$ is found. For FP problems, $f x$ is omitted and the function attempts to find a feasible point for the set of constraints. For QP problems, a function must also be provided to compute $Hx$ for any given vector $x$. ($H$ need not be stored explicitly.) nag_opt_sparse_convex_qp (e04nkc) is intended to solve large-scale linear and quadratic programming problems in which the constraint matrix (i.e., when the number of zero elements is sufficiently large that it is worthwhile using algorithms which avoid computations and storage involving zero elements). nag_opt_sparse_convex_qp (e04nkc) also takes advantage of sparsity in . (Sparsity in can be exploited in the function that computes .) For problems in which can be treated as a matrix, it is usually more efficient to use nag_opt_lp (e04mfc) nag_opt_lin_lsq (e04ncc) nag_opt_qp (e04nfc) is positive definite, then the final will be unique. If nag_opt_sparse_convex_qp (e04nkc) detects that is indefinite, it terminates immediately with an error condition (see Section 6 ). In that case, it may be more appropriate to call nag_opt_nlp_sparse (e04ugc) instead. If is the zero matrix, the function will still solve the resulting LP problem; however, this can be accomplished more efficiently by setting the argument Section 5 The upper and lower bounds on the elements of are said to define the general constraints of the problem. Internally, nag_opt_sparse_convex_qp (e04nkc) converts the general constraints to equalities by introducing a set of slack variables $s$ , where $s = s 1 , s 2 , … , s m T$ . For example, the linear constraint $5 ≤ 2 x 1 + 3 x 2 ≤ +∞$ is replaced by $2 x 1 + 3 x 2 - s 1 = 0$ , together with the bounded slack $5 ≤ s 1 ≤ +∞$ . The problem defined by can therefore be re-written in the following equivalent form: $minimize x ∈ R n , s ∈ R m f x subject to Ax - s = 0 , l ≤ x s ≤ u .$ Since the slack variables are subject to the same upper and lower bounds as the elements of , the bounds on can simply be thought of as bounds on the combined vector . (In order to indicate their special role in QP problems, the original variables are sometimes known as ‘column variables’, and the slack variables are known as ‘row variables’.) Each LP or QP problem is solved using an active-set method. This is an iterative procedure with two phases: a feasibility phase, in which the sum of infeasibilities is minimized to find a feasible point; and an optimality phase, in which $f x$ is minimized by constructing a sequence of iterations that lies within the feasible region. A constraint is said to be active or binding at $x$ if the associated element of either $x$ or $Ax$ is equal to one of its upper or lower bounds. Since an active constraint in $Ax$ has its associated slack variable at a bound, the status of both simple and general upper and lower bounds can be conveniently described in terms of the status of the variables $x,s$. A variable is said to be nonbasic if it is temporarily fixed at its upper or lower bound. It follows that regarding a general constraint as being active is equivalent to thinking of its associated slack as being nonbasic. At each iteration of an active-set method, the constraints $Ax - s = 0$ are (conceptually) partitioned into the form $Bx B + Sx S + Nx N = 0 ,$ $x N$ consists of the nonbasic elements of and the basis matrix $B$ is square and non-singular. The elements of $x B$ $x S$ are called the variables respectively; with $x N$ they are a permutation of the elements of . At a QP solution, the basic and superbasic variables will lie somewhere between their upper or lower bounds, while the nonbasic variables will be equal to one of their bounds. At each iteration, $x S$ is regarded as a set of independent variables that are free to move in any desired direction, namely one that will improve the value of the objective function (or sum of infeasibilities). The basic variables are then adjusted in order to ensure that continues to satisfy $Ax - s = 0$ . The number of superbasic variables ( $n S$ say) therefore indicates the number of degrees of freedom remaining after the constraints have been satisfied. In broad terms, $n S$ is a measure of how nonlinear the problem is. In particular, $n S$ will always be zero for FP and LP problems. If it appears that no improvement can be made with the current definition of $B$, $S$ and $N$, a nonbasic variable is selected to be added to $S$, and the process is repeated with the value of $n S$ increased by one. At all stages, if a basic or superbasic variable encounters one of its bounds, the variable is made nonbasic and the value of $n S$ is decreased by one. Associated with each of the equality constraints $Ax - s = 0$ is a dual variable $π i$ . Similarly, each variable in has an associated reduced gradient $d j$ (also known as a reduced cost ). The reduced gradients for the variables are the quantities $g - AT π$ , where is the gradient of the QP objective function; and the reduced gradients for the slack variables are the dual variables . The QP subproblem is optimal if $d j ≥ 0$ for all nonbasic variables at their lower bounds, $d j ≤ 0$ for all nonbasic variables at their upper bounds and $d j = 0$ for all superbasic variables. In practice, an QP solution is found by slightly relaxing these conditions on $d j$ (see the description of the optional argument Section 11.2 The process of computing and comparing reduced gradients is known as (a term first introduced in the context of the simplex method for linear programming). To ‘price’ a nonbasic variable $x j$ means that the reduced gradient $d j$ associated with the relevant active upper or lower bound on $x j$ is computed via the formula $d j = g j - aT π$ , where $a j$ is the th column of . (The variable selected by such a process and the corresponding value of $d j$ (i.e., its reduced gradient) are the quantities in the detailed printed output from nag_opt_sparse_convex_qp (e04nkc); see Section 11.3 .) If has significantly more columns than rows (i.e., ), pricing can be computationally expensive. In this case, a strategy known as partial pricing can be used to compute and compare only a subset of the $d j$ nag_opt_sparse_convex_qp (e04nkc) is based on SQOPT, which is part of the SNOPT package described in Gill et al. (2002) , which in turn utilizes routines from the MINOS package (see Murtagh and Saunders (1995) ). It uses stable numerical methods throughout and includes a reliable basis package (for maintaining sparse factors of the basis matrix ), a practical anti-degeneracy procedure, efficient handling of linear constraints and bounds on the variables (by an active-set strategy), as well as automatic scaling of the constraints. Further details can be found in Section 8 4 References Fourer R (1982) Solving staircase linear programs by the simplex method Math. Programming 23 274–313 Gill P E and Murray W (1978) Numerically stable methods for quadratic programming Math. Programming 14 349–372 Gill P E, Murray W and Saunders M A (2002) SNOPT: An SQP Algorithm for Large-scale Constrained Optimization 12 979–1006 SIAM J. Optim. Gill P E, Murray W, Saunders M A and Wright M H (1987) Maintaining LU factors of a general sparse matrix Linear Algebra and its Applics. 88/89 239–270 Gill P E, Murray W, Saunders M A and Wright M H (1989) A practical anti-cycling procedure for linearly constrained optimization Math. Programming 45 437–474 Gill P E, Murray W, Saunders M A and Wright M H (1991) Inertia-controlling methods for general quadratic programming SIAM Rev. 33 1–36 Hall J A J and McKinnon K I M (1996) The simplest examples where the simplex method cycles and conditions where EXPAND fails to prevent cycling Report MS 96–100 Department of Mathematics and Statistics, University of Edinburgh Murtagh B A and Saunders M A (1995) MINOS 5.4 users' guide Report SOL 83-20R Department of Operations Research, Stanford University 5 Arguments 1: n – IntegerInput On entry: $n$, the number of variables (excluding slacks). This is the number of columns in the linear constraint matrix $A$. Constraint: $n≥1$. 2: m – IntegerInput On entry , the number of general linear constraints (or slacks). This is the number of rows in , including the free row (if any; see argument Constraint: $m≥1$. 3: nnz – IntegerInput On entry: the number of nonzero elements in $A$. Constraint: $1 ≤ nnz ≤ n × m$. 4: iobj – IntegerInput On entry : if , row is a free row containing the nonzero elements of the vector appearing in the linear objective term $cT x$ , there is no free row – i.e., the problem is either an FP problem (in which case must be set to zero), or a QP problem with Constraint: $0 ≤ iobj ≤ m$. 5: ncolh – IntegerInput On entry $n H$ , the number of leading nonzero columns of the Hessian matrix . For FP and LP problems, must be set to zero. Constraint: $0 ≤ ncolh ≤ n$. 6: qphx – function, supplied by the userExternal Function must be supplied for QP problems to compute the matrix product . If has zero rows and columns, it is most efficient to order the variables $x = y zT$ so that $Hx = H 1 0 0 0 y z = H 1 y 0 ,$ where the nonlinear variables appear first as shown. For FP and LP problems, will never be called and the NAG defined null function pointer, , can be supplied in the call to nag_opt_sparse_convex_qp (e04nkc). The specification of void qphx (Integer ncolh, const double x[], double hx[], Nag_Comm *comm) 1: ncolh – IntegerInput On entry: the number of leading nonzero columns of the Hessian matrix $H$, as supplied to nag_opt_sparse_convex_qp (e04nkc). 2: x[ncolh] – const doubleInput On entry : the first elements of 3: hx[ncolh] – doubleOutput On exit: the product $Hx$. 4: comm – Nag_Comm * Pointer to structure of type Nag_Comm; the following members are relevant to first – Nag_BooleanInput On entry : will be set to Nag_TRUE on the first call to and Nag_FALSE for all subsequent calls. nf – IntegerInput On entry : the number of evaluations of the objective function; this value will be equal to the number of calls made to including the current one. user – double * iuser – Integer * p – Pointer The type Pointer will be void * with a C compiler that defines void * or char *. Before calling nag_opt_sparse_convex_qp (e04nkc) these pointers may be allocated memory and initialized with various quantities for use by when called from nag_opt_sparse_convex_qp (e04nkc). Note: qphx should be tested separately before being used in conjunction with nag_opt_sparse_convex_qp (e04nkc). The array be changed by 7: a[nnz] – const doubleInput On entry : the nonzero elements of , ordered by increasing column index. Note that elements with the same row and column indices are not allowed. The row and column indices are specified by arguments (see below). 8: ha[nnz] – const IntegerInput On entry: $ha[i]$ must contain the row index of the nonzero element stored in $a[i]$, for $i=0,1,…,nnz - 1$. Note that the row indices for a column may be supplied in any order. Constraint: $1 ≤ ha[i] ≤ m$, for $i=0,1,…,nnz-1$. 9: ka[$n+1$] – const IntegerInput On entry must contain the index in of the start of the th column, for . To specify the th column as empty, set $ka[j-1] = ka[j]$ . Note that the first and last elements of must be such that $ka[0] = 0$ $ka[n] = nnz$ □ $ka[0] = 0$; □ $ka[j-1] ≥ 0$, for $j=2,3,…,n$; □ $ka[n] = nnz$; □ $0 ≤ ka[j] - ka[j-1] ≤ m$, for $j=1,2,…,n$. 10: bl[$n+m$] – const doubleInput 11: bu[$n+m$] – const doubleInput On entry must contain the lower bounds and the upper bounds, for all the constraints in the following order. The first elements of each array must contain the bounds on the variables, and the next elements the bounds for the general linear constraints and the free row (if any). To specify a nonexistent lower bound (i.e., $l j = -∞$ ), set $bl[j-1] ≤ -options.inf_bound$ , and to specify a nonexistent upper bound (i.e., $u j = +∞$ ), set $bu[j-1] ≥ options.inf_bound$ , where is one of the optional arguments (default value $10 20$ , see Section 11.2 ). To specify the th constraint as an equality, set $bl[j-1] = bu[j-1] = β$ , say, where $β < options.inf_bound$ . Note that, for LP and QP problems, the lower bound corresponding to the free row must be set to and stored in $bl[ n + iobj - 1 ]$ ; similarly, the upper bound must be set to and stored in $bu[ n + iobj - 1 ]$ □ $bl[j] ≤ bu[j]$, for $j=0,1,…, n + m - 1$; □ if $bl[j] = bu[j] = β$, $β < options.inf_bound$; □ if $iobj>0$, $bl[ n + iobj - 1 ] ≤ -options.inf_bound$ and $bu[ n + iobj - 1 ] ≥ options.inf_bound$. 12: xs[$n+m$] – doubleInput/Output On entry , for , must contain the initial values of the variables, . In addition, if a ‘warm start’ is specified by means of the optional argument Section 11.2 ) the elements $xs[ n + i - 1 ]$ , for , must contain the initial values of the slack variables, On exit: the final values of the variables and slacks $x,s$. 13: ninf – Integer *Output On exit : the number of infeasibilities. This will be zero if an optimal solution is found, i.e., if nag_opt_sparse_convex_qp (e04nkc) exits with 14: sinf – double *Output On exit : the sum of infeasibilities. This will be zero if . (Note that nag_opt_sparse_convex_qp (e04nkc) does attempt to compute the minimum value of in the event that the problem is determined to be infeasible, i.e., when nag_opt_sparse_convex_qp (e04nkc) exits with 15: obj – double *Output On exit : the value of the objective function. includes the quadratic objective term $1 2 xT Hx$ (if any). is just the linear objective term $cT x$ (if any). For FP problems, is set to zero. 16: options – Nag_E04_Opt *Input/Output On entry/exit : a pointer to a structure of type Nag_E04_Opt whose members are optional arguments for nag_opt_sparse_convex_qp (e04nkc). These structure members offer the means of adjusting some of the argument values of the algorithm and on output will supply further details of the results. A description of the members of is given below in Section 11 . Some of the results returned in can be used by nag_opt_sparse_convex_qp (e04nkc) to perform a ‘warm start’ (see the member Section 11.2 structure also allows names to be assigned to the columns and rows (i.e., the variables and constraints) of the problem, which are then used in solution output. In particular, if the problem data is defined by an MPSX file, the function nag_opt_sparse_mps_read (e04mzc) may be used to read the file, and to store the column and row names in for use by nag_opt_sparse_convex_qp (e04nkc). If any of these optional arguments are required then the structure should be declared and initialized by a call to nag_opt_init (e04xxc) and supplied as an argument to nag_opt_sparse_convex_qp (e04nkc). However, if the optional arguments are not required the NAG defined null pointer, , can be used in the function call. 17: comm – Nag_Comm *Input/Output Note: comm is a NAG defined type (see Section 3.2.1.1 in the Essential Introduction). On entry/exit : structure containing pointers for communication to the user-supplied function, , and the optional user-defined printing function; see the description of Section 11.3.1 for details. If you do not need to make use of this communication feature the null pointer may be used in the call to nag_opt_sparse_convex_qp (e04nkc); will then be declared internally for use in calls to user-supplied functions. 18: fail – NagError *Input/Output The NAG error argument (see Section 3.6 in the Essential Introduction). 5.1 Description of Printed Output Intermediate and final results are printed out by default. The level of printed output can be controlled with the structure member Section 11.2 ). The default, , provides a single line of output at each iteration and the final result. This section describes the default printout produced by nag_opt_sparse_convex_qp (e04nkc). The following line of summary output ( $< 80$ characters) is produced at every iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration. Itn is the iteration count. Step is the step taken along the computed search direction. Ninf is the number of violated constraints (infeasibilities). This will be zero during the optimality phase. Sinf/ is the current value of the objective function. If $x$ is not feasible, Sinf gives the sum of magnitudes of constraint violations. If $x$ is feasible, Objective is the value of the Objective objective function. The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point. During the optimality phase, the value of the objective function will be non-increasing. During the feasibility phase, the number of constraint infeasibilities will not increase until either a feasible point is found, or the optimality of the multipliers implies that no feasible point exists. Norm rg is $d S$, the Euclidean norm of the reduced gradient (see Section 10.3). During the optimality phase, this norm will be approximately zero after a unit step. For FP and LP problems, Norm rg is not printed. The final printout includes a listing of the status of every variable and constraint. The following describes the printout for each variable. Variable gives the name of variable $j$, for $j=1,2,…,n$. If an options structure is supplied to nag_opt_sparse_convex_qp (e04nkc), and the $options.crnames$ member is assigned to an array of column and row names (see Section 11.2 for details), the name supplied in $options.crnames[j-1]$ is assigned to the $j$th variable. Otherwise, a default name is assigned to the variable. State gives the state of the variable (LL if nonbasic on its lower bound, UL if nonbasic on its upper bound, EQ if nonbasic and fixed, FR if nonbasic and strictly between its bounds, BS if basic and SBS if superbasic). A key is sometimes printed before State to give some additional information about the state of a variable. Note that unless the optional argument $options.scale=Nag_NoScale$ (default value is $options.scale=Nag_ExtraScale$; see Section 11.2) is specified, the tests for assigning a key are applied to the variables of the scaled problem. Alternative optimum possible. The variable is nonbasic, but its reduced gradient is essentially zero. This means that if the variable were allowed to start moving away from its bound, A there would be no change in the value of the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case, the values of the Lagrange multipliers might also change. D Degenerate. The variable is basic or superbasic, but it is equal to (or very close to) one of its bounds. I Infeasible. The variable is basic or superbasic and is currently violating one of its bounds by more than the value of the optional argument $options.ftol$ (default value $= max 10 -6 , ε$ , where $ε$ is the machine precision; see Section 11.2). N Not precisely optimal. The variable is nonbasic or superbasic. If the value of the reduced gradient for the variable exceeds the value of the optional argument $options.optim_tol$ (default value $= max 10 -6 , ε$; see Section 11.2), the solution would not be declared optimal because the reduced gradient for the variable would not be considered negligible. Value is the value of the variable at the final iteration. Lower is the lower bound specified for variable $j$. (None indicates that $bl[j-1] ≤ -options.inf_bound$, where $options.inf_bound$ is the optional argument.) Upper is the upper bound specified for variable $j$. (None indicates that $bu[j-1] ≥ options.inf_bound$.) Lagr is the value of the Lagrange multiplier for the associated bound. This will be zero if State is FR. If $x$ is optimal, the multiplier should be non-negative if State is LL, non-positive if Mult State is UL, and zero if State is BS or SBS. Residual is the difference between the variable Value and the nearer of its (finite) bounds $bl[j-1]$ and $bu[j-1]$. A blank entry indicates that the associated variable is not bounded (i.e., $bl [j-1] ≤ -options.inf_bound$ and $bu[j-1] ≥ options.inf_bound$). The meaning of the printout for general constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’, replaced by replaced by $options.crnames[ n + j - 1 ]$ replaced by $bl[ n + j - 1 ]$ $bu[ n + j - 1 ]$ respectively, and with the following change in the heading: Constrnt gives the name of the linear constraint. Note that the movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the column to become positive. Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision. 6 Error Indicators and Warnings Dynamic memory allocation failed. The contents of array are not valid. $0 ≤ ka[i+1] - ka[i] ≤ m$ , for $0 ≤ i < n$ The contents of array are not valid. $ka[0] = 0$ The contents of array are not valid. $ka[n] = nnz$ On entry, argument $options.crash$ had an illegal value. On entry, argument $options.print_level$ had an illegal value. On entry, argument $options.scale$ had an illegal value. On entry, argument $options.start$ had an illegal value. Numerical error in trying to satisfy the general constraints. The basis is very ill conditioned. The basis is singular after 15 attempts to factorize it. The basis is singular after 15 attempts to factorize it (adding slacks where necessary). Either the problem is badly scaled or the value of the optional argument is too large; see Section 11.2 The lower bound for variable $value$ (array element $bl[value]$) is greater than the upper bound. The lower bound and upper bound for variable $value$ (array elements $bl[value]$ and $bu[value]$) are equal but they are greater than or equal to $options.inf_bound$. The lower bound and upper bound for linear constraint $value$ (array elements $bl[value]$ and $bu[value]$) are equal but they are greater than or equal to $options.inf_bound$. The lower bound for linear constraint $value$ (array element $bl[value]$) is greater than the upper bound. Duplicate sparse matrix element found in row $value$, column $value$. The Hessian matrix $H$ appears to be indefinite. The Hessian matrix Section 10.2 ) appears to be indefinite – normally because is indefinite. Check that function has been coded correctly. If is coded correctly with symmetric positive (semi-)definite, then the problem may be due to a loss of accuracy in the internal computation of the reduced Hessian. Try to reduce the values of the optional arguments Section 11.2 Reduced Hessian exceeds assigned dimension. $options.max_sb=value$. The reduced Hessian matrix $ZT HZ$ Section 10.2 ) exceeds its assigned dimension. The value of the optional argument is too small; see Section 11.2 On entry, $m=value$. Constraint: $m≥1$. On entry, $n=value$. Constraint: $n≥1$. given to not valid. Correct range for elements of $≥ 0$ given to not valid. Correct range for elements of is 1 to On entry, $options.factor_freq=value$. Constraint: $options.factor_freq≥1$. On entry, $options.fcheck=value$. Constraint: $options.fcheck≥1$. On entry, $options.max_iter=value$. Constraint: $options.max_iter≥0$. On entry, $options.max_sb=value$. Constraint: $options.max_sb≥1$. On entry, $options.nsb=value$. Constraint: $options.nsb≥0$. On entry, $options.partial_price=value$. Constraint: $options.partial_price≥1$. An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact for assistance. given to is not valid. Correct range is $0 ≤ iobj ≤ m$ given to is not valid. Correct range is $0 ≤ ncolh ≤ n$ given to is not valid. Correct range is $1 ≤ nnz ≤ n × m$ Value $value$ given to $options.reset_ftol$ is not valid. Correct range is $0 < options.reset_ftol < 10000000$. Value $value$ given to $options.ftol$ is not valid. Correct range is $options.ftol≥ε$. Value $value$ given to $options.inf_bound$ is not valid. Correct range is $options.inf_bound>0.0$. Value $value$ given to $options.inf_step$ is not valid. Correct range is $options.inf_step>0.0$. Value $value$ given to $options.lu_factor_tol$ is not valid. Correct range is $options.lu_factor_tol≥1.0$. Value $value$ given to $options.lu_sing_tol$ is not valid. Correct range is $options.lu_sing_tol>0.0$. Value $value$ given to $options.lu_update_tol$ is not valid. Correct range is $options.lu_update_tol≥1.0$. Value $value$ given to $options.optim_tol$ is not valid. Correct range is $options.optim_tol≥ε$. Value $value$ given to $options.pivot_tol$ is not valid. Correct range is $options.pivot_tol>0.0$. Value $value$ given to $options.crash_tol$ is not valid. Correct range is $0.0 ≤ options.crash_tol < 1.0$. Value $value$ given to $options.scale_tol$ is not valid. Correct range is $0.0 < options.scale_tol < 1.0$. The string pointed to by $options.crnames[value]$ is too long. It should be no longer than 8 characters. Cannot open file $string$ for appending. Cannot close file $string$. Since argument is nonzero, the problem is assumed to be of type QP. However, the argument is a null function. must be non-null for QP problems. Invalid lower bound for objective row. Bound should be $≤ value$. Invalid upper bound for objective row. Bound should be $≥ value$. Options structure not initialized. There is insufficient workspace for the basis factors, and the maximum allowed number of reallocation attempts, as specified by options.max_restart, has been reached. $options.state[value]$ is out of range. $options.state[value] = value$. Solution appears to be unbounded. The problem is unbounded (or badly scaled). The objective function is not bounded below in the feasible region. Error occurred when writing to file $string$. No feasible point was found for the linear constraints. The problem is infeasible. The general constraints cannot all be satisfied simultaneously to within the value of the optional argument ; see Section 11.2 Optimal solution is not unique. Weak solution found. The final $x$ is not unique, although $x$ gives the global minimum value of the objective function. The maximum number of iterations, $value$, have been performed. Too many iterations. The value of the optional argument is too small; see Section 11.2 7 Accuracy nag_opt_sparse_convex_qp (e04nkc) implements a numerically stable active set strategy and returns solutions that are as accurate as the condition of the problem warrants on the machine. 8 Further Comments 9 Example To minimize the quadratic function $f x = cT x + 1 2 xT Hx$ , where $c = -200,-2000,-2000,-2000,-2000,400,400T$ $H = 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 2 2 0 0 0 0 0 2 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 2 2 0 0 0 0 0 2 2$ subject to the bounds $00 0 ≤ x 1 ≤ 0 200 00 0 ≤ x 2 ≤ 2500 400 ≤ x 3 ≤ 0 800 100 ≤ x 4 ≤ 0 700 00 0 ≤ x 5 ≤ 1500 00 0 ≤ x 6 ≤ 1500 00 0 ≤ x 7 ≤ 1500$ and the general constraints $x1+ x2+ x3+ x4+ x5+ x6+ x7= 2000 0.15x1+ 0.04x2+ 0.02x3+ 0.04x4+ 0.02x5+ 0.01x6+ 0.03x7≤ 60 0.03x1+ 0.05x2+ 0.08x3+ 0.02x4+ 0.06x5+ 0.01x6+ ≤ 100 0.02x1+ 0.04x2+ 0.01x3+ 0.02x4+ 0.02x5+ ≤ 40 0.02x1+ 0.03x2+ + 0.01x5+ ≤ 30 1500≤ 0.70x1+ 0.75x2+ 0.80x3+ 0.75x4+ 0.80x5+ 0.97x6+ 250≤ 0.02x1+ 0.06x2+ 0.08x3+ 0.12x4+ 0.02x5+ 0.01x6+ 0.97x7≤ 300$ The initial point, which is infeasible, is The optimal solution (to five figures) is $x * = 0.0,349.40,648.85,172.85,407.52,271.36,150.02T .$ One bound constraint and four linear constraints are active at the solution. Note that the Hessian matrix is positive semidefinite. The function to calculate in the argument list; see Section 5 ) is The example program shows the use of the structures. The data for the example include a set of user-defined column and row names, and data for the Hessian in a sparse storage format (see Section 9.2 for further details). structure is initialized by nag_opt_init (e04xxc) and the member is assigned to the array of character strings into which the column and row names were read. The member of is used to pass the Hessian into nag_opt_sparse_convex_qp (e04nkc) for use by the function On return from nag_opt_sparse_convex_qp (e04nkc), the Hessian data is perturbed slightly and two further options set, selecting a warm start and a reduced level of printout. nag_opt_sparse_convex_qp (e04nkc) is then called for a second time. Finally, the memory freeing function nag_opt_free (e04xzc) is used to free the memory assigned by nag_opt_sparse_convex_qp (e04nkc) to the pointers in the options structure. You must use the standard C function for this purpose. The sparse storage scheme used for the Hessian in this example is similar to that which nag_opt_sparse_convex_qp (e04nkc) uses for the constraint matrix , but since the Hessian is symmetric we need only store the lower triangle (including the diagonal) of the matrix. Thus, an array contains the nonzero elements of the lower triangle arranged in order of increasing column index. The array contains the indices in of the first element in each column, and the array contains the row index associated with each element in . To allow the data to be passed via the member of , a struct is declared, containing pointer members which are assigned to the three arrays defining the Hessian. Alternative approaches would have been to use the members of to pass suitably partitioned arrays to , or to avoid the use of altogether and declare the Hessian data as global. The storage scheme suggested here is for illustrative purposes only. 9.1 Program Text 9.2 Program Data 9.3 Program Results 10 Further Description This section gives a detailed description of the algorithm used in nag_opt_sparse_convex_qp (e04nkc). This, and possibly the next section, Section 11 , may be omitted if the more sophisticated features of the algorithm and software are not currently of interest. 10.1 Overview nag_opt_sparse_convex_qp (e04nkc) is based on an inertia-controlling method that maintains a Cholesky factorization of the reduced Hessian (see below). The method is similar to that of Gill and Murray (1978) , and is described in detail by Gill et al. (1991) . Here we briefly summarize the main features of the method. Where possible, explicit reference is made to the names of variables that are arguments of the function or appear in the printed output. The method used has two distinct phases: finding an initial feasible point by minimizing the sum of infeasibilities (the feasibility phase ), and minimizing the quadratic objective function within the feasible region (the optimality phase ). The computations in both phases are performed by the same code. The two-phase nature of the algorithm is reflected by changing the function being minimized from the sum of infeasibilities (the described in Section 5.1 ) to the quadratic objective function (the quantity , see Section 5.1 In general, an iterative process is required to solve a quadratic program. Given an iterate in both the original variables and the slack variables , a new iterate $x -, s -$ is defined by $x - s - = x s + α p ,$ (2) where the step length $α$ is a non-negative scalar (the printed quantity , see Section 5.1 ), and is called the search direction . (For simplicity, we shall consider a typical iteration and avoid reference to the index of the iteration.) Once an iterate is feasible (i.e., satisfies the constraints), all subsequent iterates remain feasible. 10.2 Definition of the Working Set and Search Direction At each iterate , a working set of constraints is defined to be a linearly independent subset of the constraints that are satisfied ‘exactly’ (to within the value of the optional argument ; see Section 11.2 ). The working set is the current prediction of the constraints that hold with equality at a solution of the LP or QP problem. Let $m W$ denote the number of constraints in the working set (including bounds), and let denote the associated $m W$ $n+m$working set matrix consisting of the $m W$ gradients of the working set constraints. The search direction is defined so that constraints in the working set remain for any value of the step length. It follows that must satisfy the identity This characterization allows to be computed using any $n Z$ full-rank matrix that spans the null space of . (Thus, $n Z = n - m W$ $WZ = 0$ .) The null space matrix is defined from a sparse factorization of part of below). The direction will satisfy $p Z$ is any $n Z$ The working set contains the constraints $Ax - s = 0$ and a subset of the upper and lower bounds on the variables $x,s$. Since the gradient of a bound constraint $x j ≥ l j$ or $x j ≤ u j$ is a vector of all zeros except for $± 1$ in position $j$, it follows that the working set matrix contains the rows of $A-I$ and the unit rows associated with the upper and lower bounds in the working The working set matrix can be represented in terms of a certain column partition of the matrix . As in Section 3 we partition the constraints $Ax - s = 0$ so that $Bx B + Sx S + Nx N = 0 ,$ (5) is a square non-singular basis and $x B$ $x S$ $x N$ are the basic, superbasic and nonbasic variables respectively. The nonbasic variables are equal to their upper or lower bounds at , and the superbasic variables are independent variables that are chosen to improve the value of the current objective function. The number of superbasic variables is $n S$ (the quantity in the detailed printed output; see Section 11.3 ). Given values of $x N$ $x S$ , the basic variables $x B$ are adjusted so that is a permutation matrix such that $A-I P = BSN$ , then the working set matrix $WP = B S N 0 0 I N ,$ (6) $I N$ is the identity matrix with the same number of columns as The null space matrix is defined from a sparse factorization of part of . In particular, is maintained in ‘reduced gradient’ form, using the LUSOL package (see Gill et al. (1987) ) to maintain sparse factors of the basis matrix that alters as the working set changes. Given the permutation , the null space basis is given by $Z = P -B -1 S I 0 .$ (7) This matrix is used only as an operator, i.e., it is never computed explicitly. Products of the form $ZT g$ are obtained by solving with . This choice of implies that $n Z$ , the number of ‘degrees of freedom’ at , is the same as $n S$ , the number of superbasic variables. $g Z$ $H Z$ denote the reduced gradient reduced Hessian of the objective function: $g Z = ZT g and H Z = ZT HZ ,$ (8) is the objective gradient at . Roughly speaking, $g Z$ $H Z$ describe the first and second derivatives of an $n S$ problem for the calculation of $p Z$ . (The condition estimator of $H Z$ is the quantity Cond Hz in the detailed printed output; see Section 11.3 At each iteration, an upper triangular factor $R$ is available such that $H Z = RT R$. Normally, $R$ is computed from $RT R = ZT HZ$ at the start of the optimality phase and then updated as the QP working set changes. For efficiency, the dimension of $R$ should not be excessive (say, $n S ≤ 1000$). This is guaranteed if the number of nonlinear variables is ‘moderate’. If the QP problem contains linear variables, is positive semidefinite and may be singular with at least one zero diagonal element. However, an inertia-controlling strategy is used to ensure that only the last diagonal element of can be zero. (See Gill et al. (1991) for a discussion of a similar strategy for indefinite quadratic programming.) If the initial $R$ is singular, enough variables are fixed at their current value to give a non-singular $R$. This is equivalent to including temporary bound constraints in the working set. Thereafter, $R$ can become singular only when a constraint is deleted from the working set (in which case no further constraints are deleted until $R$ becomes non-singular). 10.3 The Main Iteration If the reduced gradient is zero, is a constrained stationary point on the working set. During the feasibility phase, the reduced gradient will usually be zero only at a vertex (although it may be zero elsewhere in the presence of constraint dependencies). During the optimality phase, a zero reduced gradient implies that minimizes the quadratic objective function when the constraints in the working set are treated as equalities. At a constrained stationary point, Lagrange multipliers are defined from the equations A Lagrange multiplier $λ j$ corresponding to an inequality constraint in the working set is said to be $λ j ≤ σ$ when the associated constraint is at its upper bound , or if $λ j ≥ -σ$ when the associated constraint is at its lower bound , where depends on the value of the optional argument Section 11.2 ). If a multiplier is non-optimal, the objective function (either the true objective or the sum of infeasibilities) can be reduced by continuing the minimization with the corresponding constraint excluded from the working set. (This step is sometimes referred to as ‘deleting’ a constraint from the working set.) If optimal multipliers occur during the feasibility phase but the sum of infeasibilities is nonzero, there is no feasible point and the function terminates immediately with Section 6 The special form of the working set allows the multiplier vector , the solution of , to be written in terms of the vector $d = g 0 - A-IT π = g - AT π π ,$ (10) satisfies the equations $BT π = g B$ , and $g B$ denotes the basic elements of . The elements of are the Lagrange multipliers $λ j$ associated with the equality constraints $Ax - s = 0$ . The vector $d N$ of nonbasic elements of consists of the Lagrange multipliers $λ j$ associated with the upper and lower bound constraints in the working set. The vector $d S$ of superbasic elements of is the reduced gradient $g Z$ . The vector $d B$ of basic elements of is zero, by construction. (The Euclidean norm of $d S$ and the final values of $d S$ are the quantities Norm rg Reduced Gradnt Obj Gradient Dual Activity in the detailed printed output; see Section 11.3 If the reduced gradient is not zero, Lagrange multipliers need not be computed and the search direction is given by $p = Zp Z$ ). The step length is chosen to maintain feasibility with respect to the satisfied constraints. There are two possible choices for $p Z$ , depending on whether or not $H Z$ is singular. If $H Z$ is non-singular, is non-singular and $p Z$ is computed from the equations $g Z$ is the reduced gradient at . In this case, $x,s + p$ is the minimizer of the objective function subject to the working set constraints being treated as equalities. If $x,s + p$ is feasible, is defined to be unity. In this case, the reduced gradient at $x -, s -$ will be zero, and Lagrange multipliers are computed at the next iteration. Otherwise, is set to $α M$ , the step to the ‘nearest’ constraint along . This constraint is added to the working set at the next iteration. $H Z$ is singular, then must also be singular, and an inertia-controlling strategy is used to ensure that only the last diagonal element of is zero. (See Gill et al. (1991) for a discussion of a similar strategy for indefinite quadratic programming.) In this case, $p Z$ $pZT HZ pZ = 0 and gZT pZ ≤ 0 ,$ (12) which allows the objective function to be reduced by any step of the form $x,s + α p$ , where . The vector $p = Zp Z$ is a direction of unbounded descent for the QP problem in the sense that the QP objective is linear and decreases without bound along . If no finite step of the form $x,s + α p$ ) reaches a constraint not in the working set, the QP problem is unbounded and the function terminates immediately with Section 6 ). Otherwise, is defined as the maximum feasible step along and a constraint active at $x,s + α p$ is added to the working set for the next iteration. 10.4 Miscellaneous If the basis matrix is not chosen carefully, the condition of the null space matrix could be arbitrarily high. To guard against this, the function implements a ‘basis repair’ feature in which the LUSOL package (see Gill et al. (1987) ) is used to compute the rectangular factorization returning just the permutation that makes unit lower triangular. The pivot tolerance is set to require $PLPT ij ≤ 2$ , and the permutation is used to define . It can be shown that is likely to be little more than unity. Hence, should be well conditioned regardless of the condition of $W$ . This feature is applied at the beginning of the optimality phase if a potential ordering is known. The EXPAND procedure (see Gill et al. (1989) ) is used to reduce the possibility of cycling at a point where the active constraints are nearly linearly dependent. Although there is no absolute guarantee that cycling will not occur, the probability of cycling is extremely small (see Hall and McKinnon (1996) ). The main feature of EXPAND is that the feasibility tolerance is increased at the start of every iteration. This allows a positive step to be taken at every iteration, perhaps at the expense of violating the bounds on by a small amount. Suppose that the value of the optional argument Section 11.2 ) is . Over a period of iterations (where is the value of the optional argument ; see Section 11.2 ), the feasibility tolerance actually used by nag_opt_sparse_convex_qp (e04nkc) (i.e., the feasibility tolerance) increases from $0.5 δ$ (in steps of $0.5 δ / K$ At certain stages the following ‘resetting procedure’ is used to remove small constraint infeasibilities. First, all nonbasic variables are moved exactly onto their bounds. A count is kept of the number of non-trivial adjustments made. If the count is nonzero, the basic variables are recomputed. Finally, the working feasibility tolerance is reinitialized to $0.5 δ$. If a problem requires more than $K$ iterations, the resetting procedure is invoked and a new cycle of iterations is started. (The decision to resume the feasibility phase or optimality phase is based on comparing any constraint infeasibilities with $δ$.) The resetting procedure is also invoked when nag_opt_sparse_convex_qp (e04nkc) reaches an apparently optimal, infeasible or unbounded solution, unless this situation has already occurred twice. If any non-trivial adjustments are made, iterations are continued. The EXPAND procedure not only allows a positive step to be taken at every iteration, but also provides a potential choice of constraints to be added to the working set. All constraints at a distance $α$ (where $α ≤ α M$) along $p$ from the current point are then viewed as acceptable candidates for inclusion in the working set. The constraint whose normal makes the largest angle with the search direction is added to the working set. This strategy helps keep the basis matrix $B$ well conditioned. 11 Optional Arguments A number of optional input and output arguments to nag_opt_sparse_convex_qp (e04nkc) are available through the structure argument , type Nag_E04_Opt. An argument may be selected by assigning an appropriate value to the relevant structure member; those arguments not selected will be assigned default values. If no use is to be made of any of the optional arguments you should use the NAG defined null pointer, , in place of when calling nag_opt_sparse_convex_qp (e04nkc); the default settings will then be used for all arguments. Before assigning values to directly the structure be initialized by a call to the function nag_opt_init (e04xxc) . Values may then be assigned to the structure members in the normal C manner. Option settings may also be read from a text file using the function nag_opt_read (e04xyc) in which case initialization of the structure will be performed automatically if not already done. Any subsequent direct assignment to the structure must be preceded by initialization. If assignment of functions and memory to pointers in the structure is required, then this must be done directly in the calling program; they cannot be assigned using nag_opt_read (e04xyc) 11.1 Optional Argument Checklist and Default Values For easy reference, the following list shows the members of which are valid for nag_opt_sparse_convex_qp (e04nkc) together with their default values where relevant. The number is a generic notation for machine precision nag_machine_precision (X02AJC) Nag_Start start Nag_Cold Boolean list Nag_TRUE Nag_PrintType print_level Nag_Soln_Iter char outfile[80] stdout void (*print_fun)() NULL char prob_name[9] '$\ 0$' char obj_name[9] '$\ 0$' char rhs_name[9] '$\ 0$' char range_name[9] '$\ 0$' char bnd_name[9] '$\ 0$' char **crnames NULL Boolean minimize Nag_TRUE Integer max_iter $max50, 5 n+m$ Nag_CrashType crash Nag_CrashTwice double crash_tol 0.1 Nag_ScaleType scale Nag_ExtraScale double scale_tol 0.9 double optim_tol $max 10 -6 , ε$ double ftol $max 10 -6 , ε$ Integer reset_ftol 10000 Integer fcheck 60 Integer factor_freq 100 Integer partial_price 10 double pivot_tol $ε 0.67$ double lu_factor_tol 100.0 double lu_update_tol 10.0 double lu_sing_tol $ε 0.67$ Integer max_sb $min ncolh+1 n$ double inf_bound $10 20$ double inf_step $maxoptions.inf_bound, 10 20$ Integer *state size $n+m$ double *lambda size $n+m$ Integer nsb Integer iter Integer nf 11.2 Description of the Optional Arguments start – Nag_Start Default $=Nag_Cold$ On entry : specifies how the initial working set is to be chosen. An internal Crash procedure will be used to choose an initial basis matrix, $B$. You must provide a valid definition of every array element of the optional argument $options.state$ (see below), probably obtained from a previous call of nag_opt_sparse_convex_qp (e04nkc), while, for QP problems, the optional argument $options.nsb$ (see below) must retain its value from a previous call. Constraint: $options.start=Nag_Cold$ or $Nag_Warm$. list – Nag_Boolean Default $=Nag_TRUE$ On entry: if $options.list=Nag_TRUE$ the argument settings in the call to nag_opt_sparse_convex_qp (e04nkc) will be printed. print_level – Nag_PrintType Default $=Nag_Soln_Iter$ On entry : the level of results printout produced by nag_opt_sparse_convex_qp (e04nkc). The following values are available: Nag_NoPrint No output. Nag_Soln The final solution. Nag_Iter One line of output for each iteration. Nag_Iter_Long A longer line of output for each iteration with more information (line exceeds 80 characters). Nag_Soln_Iter The final solution and one line of output for each iteration. Nag_Soln_Iter_Long The final solution and one long line of output for each iteration (line exceeds 80 characters). As Nag_Soln_Iter_Long with the matrix statistics (initial status of rows and columns, number of elements, density, biggest and smallest elements, etc.), factors resulting from the Nag_Soln_Iter_Full scaling procedure (if $options.scale=Nag_RowColScale$ or $Nag_ExtraScale$; see below), basis factorization statistics and details of the initial basis resulting from the Crash procedure (if $options.start=Nag_Cold$). Details of each level of results printout are described in Section 11.3 Constraint: $options.print_level=Nag_NoPrint$, $Nag_Soln$, $Nag_Iter$, $Nag_Soln_Iter$, $Nag_Iter_Long$, $Nag_Soln_Iter_Long$ or $Nag_Soln_Iter_Full$. outfile – const char[80] Default $= stdout$ On entry: the name of the file to which results should be printed. If $options.outfile[0] = " \ 0 '$ then the stdout stream is used. print_fun – pointer to function Default $=$NULL On entry : printing function defined by you; the prototype of void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm); Section 11.3.1 below for further details. prob_name – const char Default: $options.prob_name[0] = " \ 0 '$ obj_name – const char Default: $options.obj_name[0] = " \ 0 '$ rhs_name – const char Default: $options.rhs_name[0] = " \ 0 '$ range_name – const char Default: $options.range_name[0] = " \ 0 '$ bnd_name – const char Default: $options.bnd_name[0] = " \ 0 '$ On entry : these options contain the names associated with the so-called MPSX form of the problem. MPSX files may be read by calling nag_opt_sparse_mps_read (e04mzc) prior to calling nag_opt_sparse_convex_qp (e04nkc). The arguments contain, respectively, the names of: the problem; the objective (or free) row; the constraint right-hand side; the ranges, and the bounds. They are used in the detailed output when optional argument crnames – char ** Default $=$NULL On entry : if is not then it must point to an array of character strings with maximum string length 8, containing the names of the columns and rows (i.e., variables and constraints) of the problem. Thus, contains the name of the th column (variable), for , and $options.crnames[ n + i - 1 ]$ contains the names of the th row (constraint), for . If supplied, the names are used in the solution output (see Section 5.1 Section 11.3 If a problem is defined by an MPSX file, it may be read by calling nag_opt_sparse_mps_read (e04mzc) prior to calling nag_opt_sparse_convex_qp (e04nkc). In this case, nag_opt_sparse_mps_read (e04mzc) may optionally be used to allocate memory to and to read the column and row names defined in the MPSX file into . In this case, the memory freeing function nag_opt_free (e04xzc) should be used to free the memory pointed to by on return from nag_opt_sparse_convex_qp (e04nkc). You must use the standard C function for this purpose. minimize – Nag_Boolean Default $=Nag_TRUE$ On entry specifies the required direction of optimization. It applies to both linear and nonlinear terms (if any) in the objective function. Note that if two problems are the same except that one minimizes $f x$ and the other maximizes $-f x$ , their solutions will be the same but the signs of the dual variables $π i$ and the reduced gradients $d j$ Section 10.3 ) will be reversed. max_iter – Integer Default $= max50, 5 n+m$ On entry specifies the maximum number of iterations allowed before termination. If you wish to check that a call to nag_opt_sparse_convex_qp (e04nkc) is correct before attempting to solve the problem in full then $options.max_iter$ may be set to 0. No iterations will then be performed but all initialization prior to the first iteration will be done and a listing of argument settings will be output, if optional argument $options.list=Nag_TRUE$ (the default setting). Constraint: $options.max_iter≥0$. crash – Nag_CrashType Default $=Nag_CrashTwice$ This option does not apply when optional argument $options.start=Nag_Warm$. On entry : if , and internal Crash procedure is used to select an initial basis from various rows and columns of the constraint matrix . The value of determines which rows and columns are initially eligible for the basis, and how many times the Crash procedure is called. , the all-slack basis $B = -I$ is chosen. The Crash procedure is called once (looking for a triangular basis in all rows and columns of the linear constraint matrix $A$). The Crash procedure is called twice (looking at any equality constraints first followed by any inequality constraints). If $options.crash=Nag_CrashOnce$ or $Nag_CrashTwice$, certain slacks on inequality rows are selected for the basis first. (If $options.crash=Nag_CrashTwice$, numerical values are used to exclude slacks that are close to a bound.) The Crash procedure then makes several passes through the columns of $A$, searching for a basis matrix that is essentially triangular. A column is assigned to ‘pivot’ on a particular row if the column contains a suitably large element in a row that has not yet been assigned. (The pivot elements ultimately form the diagonals of the triangular basis.) For remaining unassigned rows, slack variables are inserted to complete the basis. Constraint: $options.crash=Nag_NoCrash$, $Nag_CrashOnce$ or $Nag_CrashTwice$. crash_tol – double Default $=0.1$ On entry allows the Crash procedure to ignore certain ‘small’ nonzero elements in the constraint matrix while searching for a triangular basis. For each column of , if $a max$ is the largest element in the column, other nonzeros in that column are ignored if they are less than (or equal to) $a max × options.crash_tol$ When $options.crash_tol>0$, the basis obtained by the Crash procedure may not be strictly triangular, but it is likely to be non-singular and almost triangular. The intention is to obtain a starting basis with more column variables and fewer (arbitrary) slacks. A feasible solution may be reached earlier for some problems. Constraint: $0.0 ≤ options.crash_tol < 1.0$. scale – Nag_ScaleType Default $=Nag_ExtraScale$ On entry : this option enables the scaling of the variables and constraints using an iterative procedure due to Fourer (1982) , which attempts to compute row scales $r i$ and column scales $c j$ such that the scaled matrix coefficients $a - ij = a ij × c j / r i$ are as close as possible to unity. This may improve the overall efficiency of the function on some problems. (The lower and upper bounds on the variables and slacks for the scaled problem are redefined as $l - j = l j / c j$ $u - j = u j / c j$ respectively, where $c j ≡ r j-n$ No scaling is performed. All rows and columns of the constraint matrix $A$ are scaled. An additional scaling is performed that may be helpful when the solution $x$ is large; it takes into account columns of $A-I$ that are fixed or have positive lower bounds or negative upper Constraint: $options.scale=Nag_NoScale$, $Nag_RowColScale$ or $Nag_ExtraScale$. scale_tol – double Default $=0.9$ This option does not apply when optional argument $options.scale=Nag_NoScale$. On entry: $options.scale_tol$ is used to control the number of scaling passes to be made through the constraint matrix $A$. At least 3 (and at most 10) passes will be made. More precisely, let $a p$ denote the largest column ratio (i.e., ('biggest' element)/('smallest' element) in some sense) after the $p$th scaling pass through $A$. The scaling procedure is terminated if $a p ≥ a p-1 × options.scale_tol$ for some $p≥3$. Thus, increasing the value of $options.scale_tol$ from 0.9 to 0.99 (say) will probably increase the number of passes through $A$. Constraint: $0.0 < options.scale_tol < 1.0$. optim_tol – double Default $= max 10 -6 , ε$ On entry: $options.optim_tol$ is used to judge the size of the reduced gradients $d j = g j - πT aj$. By definition, the reduced gradients for basic variables are always zero. Optimality is declared if the reduced gradients for any nonbasic variables at their lower or upper bounds satisfy $-options.optim_tol × max1, π ≤ d j ≤ options.optim_tol × max1, π$, and if $d j ≤ options.optim_tol × max1, π$ for any superbasic variables. Constraint: $options.optim_tol≥ε$. ftol – double Default $= max 10 -6 , ε$ On entry defines the maximum acceptable violation in each constraint at a ‘feasible’ point (including slack variables). For example, if the variables and the coefficients in the linear constraints are of order unity, and the latter are correct to about 6 decimal digits, it would be appropriate to specify $10 -6$ nag_opt_sparse_convex_qp (e04nkc) attempts to find a feasible solution before optimizing the objective function. If the sum of infeasibilities cannot be reduced to zero, the problem is assumed to be infeasible. Let Sinf be the corresponding sum of infeasibilities. If Sinf is quite small, it may be appropriate to raise $options.ftol$ by a factor of 10 or 100. Otherwise, some error in the data should be suspected. Note that nag_opt_sparse_convex_qp (e04nkc) does not attempt to find the minimum value of Sinf. If the constraints and variables have been scaled (see optional argument $options.scale$ above), then feasibility is defined in terms if the scaled problem (since it is more likely to be meaningful). Constraint: $options.ftol≥ε$. reset_ftol – Integer Default $=5$ On entry : this option is part of an anti-cycling procedure designed to guarantee progress even on highly degenerate problems (see Section 10.4 For LP problems, the strategy is to force a positive step at every iteration, at the expense of violating the constraints by a small amount. Suppose that the value of the optional argument $options.ftol$ is $δ$. Over a period of $options.reset_ftol$ iterations, the feasibility tolerance actually used by nag_opt_sparse_convex_qp (e04nkc) (i.e., the working feasibility tolerance) increases from $0.5 δ$ to $δ$ (in steps of $0.5 δ / options.reset_ftol$). For QP problems, the same procedure is used for iterations in which there is only one superbasic variable. (Cycling can only occur when the current solution is at a vertex of the feasible region.) Thus, zero steps are allowed if there is more than one superbasic variable, but otherwise positive steps are enforced. Increasing the value of $options.reset_ftol$ helps reduce the number of slightly infeasible nonbasic basic variables (most of which are eliminated during the resetting procedure). However, it also diminishes the freedom to choose a large pivot element (see $options.pivot_tol$ below). Constraint: $0 < options.reset_ftol < 10000000$. fcheck – Integer Default $=60$ On entry: every $options.fcheck$th iteration after the most recent basis factorization, a numerical test is made to see if the current solution $x,s$ satisfies the linear constraints $Ax - s = 0$. If the largest element of the residual vector $r = Ax - s$ is judged to be too large, the current basis is refactorized and the basic variables recomputed to satisfy the constraints more accurately. Constraint: $options.fcheck≥1$. factor_freq – Integer Default $=100$ On entry: at most $options.factor_freq$ basis changes will occur between factorizations of the basis matrix. For LP problems, the basis factors are usually updated at every iteration. For QP problems, fewer basis updates will occur as the solution is approached. The number of iterations between basis factorizations will therefore increase. During these iterations a test is made regularly according to the value of optional argument $options.fcheck$ to ensure that the linear constraints $Ax - s = 0$ are satisfied. If necessary, the basis will be refactorized before the limit of $options.factor_freq$ updates is reached. Constraint: $options.factor_freq≥1$. partial_price – Integer Default $=10$ This option does not apply to QP problems. On entry: this option is recommended for large FP or LP problems that have significantly more variables than constraints (i.e., $n≫m$). It reduces the work required for each pricing operation (i.e., when a nonbasic variable is selected to enter the basis). If $options.partial_price=1$, all columns of the constraint matrix $A -I$ are searched. If $options.partial_price>1$, $A$ and $-I$ are partitioned to give $options.partial_price$ roughly equal segments $A j , K j$, for $j=1,2,…,p$ (modulo $p$). If the previous pricing search was successful on $A j-1 , K j-1$, the next search begins on the segments $A j , K j$. If a reduced gradient is found that is larger than some dynamic tolerance, the variable with the largest such reduced gradient (of appropriate sign) is selected to enter the basis. If nothing is found, the search continues on the next segments $A j+1 , K j+1$, and so on. Constraint: $options.partial_price≥1$. pivot_tol – double Default $= ε 0.67$ On entry: $options.pivot_tol$ is used to prevent columns entering the basis if they would cause the basis to become almost singular. Constraint: $options.pivot_tol>0.0$. lu_factor_tol – double Default $=100.0$ lu_update_tol – double Default $=10.0$ On entry affect the stability and sparsity of the basis factorization $B = LU$ , during refactorization and updates respectively. The lower triangular matrix is a product of matrices of the form where the multipliers will satisfy $μ < options.lu_factor_tol$ during refactorization or $μ < options.lu_update_tol$ during update. The default values of usually strike a good compromise between stability and sparsity. For large and relatively dense problems, setting to 25 (say) may give a marked improvement in sparsity without impairing stability to a serious degree. Note that for band matrices it may be necessary to set in the range $1 ≤ options.lu_factor_tol < 2$ in order to achieve stability. • $options.lu_factor_tol≥1.0$; • $options.lu_update_tol≥1.0$. lu_sing_tol – double Default $= ε 0.67$ On entry: $options.lu_sing_tol$ defines the singularity tolerance used to guard against ill conditioned basis matrices. Whenever the basis is refactorized, the diagonal elements of $U$ are tested as follows. If $u jj ≤ options.lu_sing_tol$ or $u jj < options.lu_sing_tol × max i u ij$, the $j$th column of the basis is replaced by the corresponding slack variable. Constraint: $options.lu_sing_tol>0.0$. max_sb – Integer Default $= minncolh+1,n$ This option does not apply to FP or LP problems. On entry places an upper bound on the number of variables which may enter the set of superbasic variables (see Section 10.2 ). If the number of superbasics exceeds this bound then nag_opt_sparse_convex_qp (e04nkc) will terminate with . In effect, specifies ‘how nonlinear’ the QP problem is expected to be. Constraint: $options.max_sb>0$. inf_bound – double Default $= 10 20$ On entry: $options.inf_bound$ defines the ‘infinite’ bound in the definition of the problem constraints. Any upper bound greater than or equal to $options.inf_bound$ will be regarded as $+∞$ (and similarly any lower bound less than or equal to $-options.inf_bound$ will be regarded as $-∞$). Constraint: $options.inf_bound>0.0$. inf_step – double Default $= maxoptions.inf_bound, 10 20$ On entry: $options.inf_step$ specifies the magnitude of the change in variables that will be considered a step to an unbounded solution. (Note that an unbounded solution can occur only when the Hessian is not positive definite.) If the change in $x$ during an iteration would exceed the value of $options.inf_step$, the objective function is considered to be unbounded below in the feasible Constraint: $options.inf_step>0.0$. state – Integer * Default memory $= n + m$ On entry need not be set if the default option of is used as values of memory will be automatically allocated by nag_opt_sparse_convex_qp (e04nkc). If the option has been chosen, must point to a minimum of elements of memory. This memory will already be available if the structure has been used in a previous call to nag_opt_sparse_convex_qp (e04nkc) from the calling program, with and the same values of . If a previous call has not been made you must allocate sufficient memory. If you supply a vector and , then the first elements of must specify the initial states of the problem variables. (The slacks need not be initialized.) An internal Crash procedure is then used to select an initial basis matrix . The initial basis matrix will be triangular (neglecting certain small elements in each column). It is chosen from various rows and columns of $A -I$ . Possible values for , for , are: $options.state[j]$ State of $xs[j]$ during Crash procedure 0 or 1 Eligible for the basis 2 Ignored 3 Eligible for the basis (given preference over 0 or 1) 4 or 5 Ignored If nothing special is known about the problem, or there is no wish to provide special information, you may set $options.state[j] = 0$ (and $xs[j] = 0.0$), for $j=0,1,…,n - 1$. All variables will then be eligible for the initial basis. Less trivially, to say that the $j$th variable will probably be equal to one of its bounds, you should set $options.state[j] = 4$ and $xs[j] = bl[j]$ or $options.state[j] = 5$ and $xs[j] = bu[j]$ as appropriate. Following the Crash procedure, variables for which $options.state[j] = 2$ are made superbasic. Other variables not selected for the basis are then made nonbasic at the value $xs[j]$ if $bl[j] ≤ xs[j] ≤ bu[j]$, or at the value $bl[j]$ or $bu[j]$ closest to $xs[j]$. must specify the initial states and values, respectively, of the variables and slacks . If nag_opt_sparse_convex_qp (e04nkc) has been called previously with the same values of already contains satisfactory information. • $0 ≤ options.state[j] ≤ 5$ if $options.start=Nag_Cold$, for $j=0,1,…,n-1$; • $0 ≤ options.state[j] ≤ 3$ if $options.start=Nag_Warm$, for $j=0,1,…,n+m-1$. On exit : the final states of the variables and slacks . The significance of each possible value of is as follows: $options.state[j]$ State of variable $j$ Normal value of $xs[j]$ $000 0$ Nonbasic $bl[j]$ $000 1$ Nonbasic $bu[j]$ $000 2$ Superbasic Between $bl[j]$ and $bu[j]$ $000 3$ Basic Between $bl[j]$ and $bu[j]$ If the problem is feasible (i.e., $ninf=0$), basic and superbasic variables may be outside their bounds by as much as optional argument $options.ftol$. Note that unless the optional argument $options.scale=Nag_NoScale$, $options.ftol$ applies to the variables of the scaled problem. In this case, the variables of the original problem may be as much as 0.1 outside their bounds, but this is unlikely unless the problem is very badly scaled. Very occasionally some nonbasic variables may be outside their bounds by as much as $options.ftol$, and there may be some nonbasic variables for which $xs[j]$ lies strictly between its bounds. If the problem is infeasible (i.e., ), some basic and superbasic variables may be outside their bounds by an arbitrary amount (bounded by lambda – double * Default memory $= n + m$ On entry: $n+m$ values of memory will be automatically allocated by nag_opt_sparse_convex_qp (e04nkc) and this is the recommended method of use of $options.lambda$. However you may supply memory from the calling program. On exit : the values of the multipliers for each constraint with respect to the current working set. The first elements contain the multipliers ( reduced costs ) for the bound constraints on the variables, and the next elements contain the Lagrange multipliers ( shadow prices ) for the general linear constraints. On entry: $n S$, the number of superbasics. For QP problems, $options.nsb$ need not be specified if optional argument $options.start=Nag_Cold$, but must retain its value from a previous call when $options.start=Nag_Warm$. For FP and LP problems, $options.nsb$ is not referenced. Constraint: $options.nsb≥0$. On exit: the final number of superbasics. This will be zero for FP and LP problems. On exit: the total number of iterations performed. On exit : the number of times the product has been calculated (i.e., number of calls of 11.3 Description of Printed Output The level of printed output can be controlled with the structure members Section 11.2 ). If then the argument values to nag_opt_sparse_convex_qp (e04nkc) are listed, whereas the printout of results is governed by the value of . The default of provides a single short line of output at each iteration and the final result. This section describes all of the possible levels of results printout available from nag_opt_sparse_convex_qp (e04nkc). the output produced at each iteration is as described in Section 5.1 . If the following, more detailed, line of output is produced at every iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration. Itn is the iteration count. pp is the partial price indicator. The variable selected by the last pricing operation came from the ppth partition of $A$ and $-I$. Note that pp is reset to zero whenever the basis is dj is the value of the reduced gradient (or reduced cost) for the variable selected by the pricing operation at the start of the current iteration. +S is the variable selected by the pricing operation to be added to the superbasic set. -S is the variable chosen to leave the superbasic set. -B is the variable removed from the basis (if any) to become nonbasic. -B is the variable chosen to leave the set of basics (if any) in a special basic $↔$ superbasic swap. The entry under -S has become basic if this entry is nonzero, and nonbasic otherwise. The swap is done to ensure that there are no superbasic slacks. is the value of the steplength $α$ taken along the computed search direction $p$. The variables $x$ have been changed to $x + α p$. If a variable is made superbasic during the current Step iteration (i.e., +S is positive), Step will be the step to the nearest bound. During the optimality phase, the step can be greater than unity only if the reduced Hessian is not positive is the $r$th element of a vector $y$ satisfying $By = a q$ whenever $a q$ (the $q$th column of the constraint matrix $A -I$) replaces the $r$th column of the basis matrix $B$. Wherever Pivot possible, Step is chosen so as to avoid extremely small values of Pivot (since they may cause the basis to be nearly singular). In extreme cases, it may be necessary to increase the value of the optional argument $options.pivot_tol$ (default value $= ε 0.67$, where $ε$ is the machine precision; see Section 11.2) to exclude very small elements of $y$ from consideration during the computation of Step. Ninf is the number of violated constraints (infeasibilities). This will be zero during the optimality phase. Sinf/ is the current value of the objective function. If $x$ is not feasible, Sinf gives the sum of magnitudes of constraint violations. If $x$ is feasible, Objective is the value of the Objective objective function. The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point. During the optimality phase, the value of the objective function will be non-increasing. During the feasibility phase, the number of constraint infeasibilities will not increase until either a feasible point is found, or the optimality of the multipliers implies that no feasible point exists. L is the number of nonzeros in the basis factor $L$. Immediately after a basis factorization $B = LU$, this is lenL, the number of subdiagonal elements in the columns of a lower triangular matrix. Further nonzeros are added to L when various columns of $B$ are later replaced. (Thus, L increases monotonically.) U is the number of nonzeros in the basis factor $U$. Immediately after a basis factorization, this is lenU, the number of diagonal and superdiagonal elements in the rows of an upper triangular matrix. As columns of $B$ are replaced, the matrix $U$ is maintained explicitly (in sparse form). The value of U may fluctuate up or down; in general, it will tend to increase. Ncp is the number of compressions required to recover workspace in the data structure for $U$. This includes the number of compressions needed during the previous basis factorization. Normally, Ncp should increase very slowly. If it does not, nag_opt_sparse_convex_qp (e04nkc) will attempt to expand the internal workspace allocated for the basis factors. Norm rg is $d S$, the Euclidean norm of the reduced gradient (see Section 10.3). During the optimality phase, this norm will be approximately zero after a unit step. For FP and LP problems, Norm rg is not printed. Ns is the current number of superbasic variables. For FP and LP problems, Ns is not printed. Cond Hz is a lower bound on the condition number of the reduced Hessian (see Section 10.2). The larger this number, the more difficult the problem. For FP and LP problems, Cond Hz is not printed. the following intermediate printout ( $< 120$ characters) is produced whenever the matrix $B S = B S T$ is factorized. Gaussian elimination is used to compute an factorization of $B S$ , where is a lower triangular matrix and is an upper triangular matrix for some permutation matrices . The factorization is stabilized in the manner described under the optional argument Section 11.2 Factorize is the factorization count. is a code giving the reason for the present factorization as follows: Code Meaning $1 0$ First $LU$ factorization. Demand $1 1$ Number of updates reached the value of the optional argument $options.factor_freq$ (see Section 11.2). $1 2$ Excessive nonzeros in updated factors. $1 7$ Not enough storage to update factors. $10$ Row residuals too large (see the description for the optional argument $options.fcheck$ in Section 11.2). $11$ Ill conditioning has caused inconsistent results. Iteration is the iteration count. Nonlinear is the number of nonlinear variables in $B$ (not printed if $B S$ is factorized). Linear is the number of linear variables in $B$ (not printed if $B S$ is factorized). Slacks is the number of slack variables in $B$ (not printed if $B S$ is factorized). Elems is the number of nonzeros in $B$ (not printed if $B S$ is factorized). Density is the percentage nonzero density of $B$ (not printed if $B S$ is factorized). More precisely, $Density = 100 × Elems / Nonlinear + Linear + Slacks 2$. Compressns is the number of times the data structure holding the partially factorized matrix needed to be compressed, in order to recover unused workspace. is the average Markowitz merit count for the elements chosen to be the diagonals of $PUQ$. Each merit count is defined to be $c-1 r-1$, where $c$ and $r$ are the number of nonzeros in the Merit column and row containing the element at the time it is selected to be the next diagonal. Merit is the average of m such quantities. It gives an indication of how much work was required to preserve sparsity during the factorization. lenL is the number of nonzeros in $L$. lenU is the number of nonzeros in $U$. Increase is the percentage increase in the number of nonzeros in $L$ and $U$ relative to the number of nonzeros in $B$. More precisely, $Increase = 100 × lenL + lenU - Elems / Elems$. m is the number of rows in the problem. Note that $m = Ut + Lt + bp$. Ut is the number of triangular rows of $B$ at the top of $U$. d1 is the number of columns remaining when the density of the basis matrix being factorized reached 0.3. Lmax is the maximum subdiagonal element in the columns of $L$ (not printed if $B S$ is factorized). This will not exceed the value of the optional argument $options.lu_factor_tol$. Bmax is the maximum nonzero element in $B$ (not printed if $B S$ is factorized). BSmax is the maximum nonzero element in $B S$ (not printed if $B$ is factorized). is the maximum nonzero element in $U$, excluding elements of $B$ that remain in $U$ unchanged. (For example, if a slack variable is in the basis, the corresponding row of $B$ will become a Umax row of $U$ without modification. Elements in such rows will not contribute to Umax. If the basis is strictly triangular, none of the elements of $B$ will contribute, and Umax will be Ideally, Umax should not be significantly larger than Bmax. If it is several orders of magnitude larger, it may be advisable to reset the optional argument $options.lu_factor_tol$ to a value near 1.0. Umax is not printed if $B S$ is factorized. Umin is the magnitude of the smallest diagonal element of $PUQ$ (not printed if $B S$ is factorized). Growth is the value of the ratio $Umax/Bmax$, which should not be too large. Providing Lmax is not large (say $< 10.0$), the ratio $maxBmax,Umax / Umin$ is an estimate of the condition number of $B$. If this number is extremely large, the basis is nearly singular and some numerical difficulties could occur in subsequent computations. (However, an effort is made to avoid near singularity by using slacks to replace columns of $B$ that would have made Umin extremely small, and the modified basis is refactorized.) Growth is not printed if $B S$ is factorized. Lt is the number of triangular columns of $B$ at the beginning of $L$. bp is the size of the ‘bump’ or block to be factorized nontrivially after the triangular rows and columns have been removed. d2 is the number of columns remaining when the density of the basis matrix being factorized reached 0.6. the following lines of intermediate printout ( $< 80$ characters) are produced whenever Section 11.2 ). They refer to the number of columns selected by the Crash procedure during each of several passes through , whilst searching for a triangular basis matrix. Slacks is the number of slacks selected initially. Free Cols is the number of free columns in the basis. Preferred is the number of ‘preferred’ columns in the basis (i.e., $options.state[j] = 3$ for some $j<n$). Unit is the number of unit columns in the basis. Double is the number of double columns in the basis. Triangle is the number of triangular columns in the basis. Pad is the number of slacks used to pad the basis. the following lines of intermediate printout ( $< 80$ characters) are produced, following the final iteration. They refer to the ‘MPSX names’ stored in the optional arguments Section 11.2 Name gives the name for the problem (blank if none). Status gives the exit status for the problem (i.e., Optimal soln, Weak soln, Unbounded, Infeasible, Excess itns, Error condn or Feasble soln) followed by details of the direction of the optimization (i.e., (Min) or (Max)). Objective gives the name of the free row for the problem (blank if none). RHS gives the name of the constraint right-hand side for the problem (blank if none). Ranges gives the name of the ranges for the problem (blank if none). Bounds gives the name of the bounds for the problem (blank if none). the final solution printout for each column and row is as described in Section 5.1 . When , the following longer lines of final printout ( $< 120$ characters) are produced. $a j$ denote the th column of , for . The following describes the printout for each column (or variable). Number is the column number $j$. (This is used internally to refer to $x j$ in the intermediate output.) Column gives the name of $x j$. State gives the state of $x j$ (LL if nonbasic on its lower bound, UL if nonbasic on its upper bound, EQ if nonbasic and fixed, FR if nonbasic and strictly between its bounds, BS if basic and SBS if superbasic). A key is sometimes printed before State to give some additional information about the state of $x j$. Note that unless the optional argument $options.scale=Nag_NoScale$ (default value is $options.scale=Nag_ExtraScale$; see Section 11.2) is specified, the tests for assigning a key are applied to the variables of the scaled problem. Alternative optimum possible. $x j$ is nonbasic, but its reduced gradient is essentially zero. This means that if $x j$ were allowed to start moving away from its bound, there would be no A change in the value of the objective function. The values of the basic and superbasic variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case, the values of the Lagrange multipliers might also D Degenerate. $x j$ is basic or superbasic, but it is equal to (or very close to) one of its bounds. I Infeasible. $x j$ is basic or superbasic and is currently violating one of its bounds by more than the value of the optional argument $options.ftol$ (default value $= max 10 -6 , ε$, where $ε$ is the machine precision; see Section 11.2). N Not precisely optimal. $x j$ is nonbasic or superbasic. If the value of the reduced gradient for $x j$ exceeds the value of the optional argument $options.optim_tol$ (default value $= max 10 -6 , ε$; see Section 11.2), the solution would not be declared optimal because the reduced gradient for $x j$ would not be considered negligible. Activity is the value of $x j$ at the final iterate. Obj is the value of $g j$ at the final iterate. For FP problems, $g j$ is set to zero. Lower is the lower bound specified for $x j$. (None indicates that $bl[j-1] ≤ -options.inf_bound$, where $options.inf_bound$ is the optional argument.) Upper is the upper bound specified for $x j$. (None indicates that $bu[j-1] ≥ options.inf_bound$.) Reduced is the value of $d j$ at the final iterate (see Section 10.3). For FP problems, $d j$ is set to zero. $m + j$ is the value of $m+j$. $v i$ denote the th row of , for . The following describes the printout for each row (or constraint). Number is the value of $n+i$. (This is used internally to refer to $s i$ in the intermediate output.) Row gives the name of $v i$. State gives the state of $v i$ (LL if active on its lower bound, UL if active on its upper bound, EQ if active and fixed, BS if inactive when $s i$ is basic and SBS if inactive when $s i$ is A key is sometimes printed before State to give some additional information about the state of $s i$. Note that unless the optional argument $options.scale=Nag_NoScale$ (default value is $options.scale=Nag_ExtraScale$; see Section 11.2) is specified, the tests for assigning a key are applied to the variables of the scaled problem. Alternative optimum possible. $s i$ is nonbasic, but its reduced gradient is essentially zero. This means that if $s i$ were allowed to start moving away from its bound, there would be no A change in the value of the objective function. The values of the basic and superbasic variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case, the values of the dual variables (or Lagrange multipliers) might also change. D Degenerate. $s i$ is basic or superbasic, but it is equal to (or very close to) one of its bounds. I Infeasible. $s i$ is basic or superbasic and is currently violating one of its bounds by more than the value of the optional argument $options.ftol$ (default value $= max 10 -6 , ε$, where $ε$ is the machine precision; see Section 11.2). N Not precisely optimal. $s i$ is nonbasic or superbasic. If the value of the reduced gradient for $s i$ exceeds the value of the optional argument $options.optim_tol$ (default value $= max 10 -6 , ε$; see Section 11.2), the solution would not be declared optimal because the reduced gradient for $s i$ would not be considered negligible. Activity is the value of $v i$ at the final iterate. Slack is the value by which $v i$ differs from its nearest bound. (For the free row (if any), it is set to Activity.) Lower is the lower bound specified for $v j$. None indicates that $bl[ n + j - 1 ] ≤ -options.inf_bound$, where $options.inf_bound$ is the optional argument. Upper is the upper bound specified for $v j$. None indicates that $bu[ n + j - 1 ] ≥ options.inf_bound$. Dual is the value of the dual variable $π i$ (the Lagrange multiplier for $v i$; see Section 10.3). For FP problems, $π i$ is set to zero. i gives the index $i$ of $v i$. Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision. If $options.print_level=Nag_NoPrint$ then printout will be suppressed; you can print the final solution when nag_opt_sparse_convex_qp (e04nkc) returns to the calling program. 11.3.1 Output of results via a user-defined printing function You may also specify your own print function for output of iteration results and the final solution by use of the function pointer, which has prototype void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm); The rest of this section can be skipped if you wish to use the default printing facilities. When a user-defined function is assigned to this will be called in preference to the internal print function of nag_opt_sparse_convex_qp (e04nkc). Calls to the user-defined function are again controlled by means of the member. Information is provided through , the two structure arguments to If $comm→it_prt = Nag_TRUE$ then the results from the last iteration of nag_opt_sparse_convex_qp (e04nkc) are provided through st. Note that $options.print_fun$ will be called with $comm→it_prt = Nag_TRUE$ only if $options.print_level=Nag_Iter$, $Nag_Iter_Long$, $Nag_Soln_Iter$, $Nag_Soln_Iter_Long$ or $Nag_Soln_Iter_Full$. The following members of st are set: iter – Integer The iteration count. qp – Nag_Boolean Nag_TRUE if a QP problem is being solved; Nag_FALSE otherwise. pprice – Integer The partial price indicator. rgval – double The value of the reduced gradient (or reduced cost) for the variable selected by the pricing operation at the start of the current iteration. sb_add – Integer The variable selected to enter the superbasic set. sb_leave – double The variable chosen to leave the superbasic set. b_leave – Integer The variable chosen to leave the basis (if any) to become nonbasic. bswap_leave – Integer The variable chosen to leave the basis (if any) in a special basic $↔$ superbasic swap. step – double The step length taken along the computed search direction. pivot – double The $r$th element of a vector $y$ satisfying $By = a q$ whenever $a q$ (the $q$th column of the constraint matrix $A -I$) replaces the $r$th column of the basis matrix $B$. ninf – Integer The number of violated constraints or infeasibilities. f – double The current value of the objective function if $st→ninf$ is zero; otherwise, the sum of the magnitudes of constraint violations. nnz_l – Integer The number of nonzeros in the basis factor $L$. nnz_u – Integer The number of nonzeros in the basis factor $U$. ncp – Integer The number of compressions of the basis factorization workspace carried out so far. norm_rg – double The Euclidean norm of the reduced gradient at the start of the current iteration. This value is meaningful only if $st→qp = Nag_TRUE$. nsb – Integer The number of superbasic variables. This value is meaningful only if $st→qp = Nag_TRUE$. cond_hz – double A lower bound on the condition number of the reduced Hessian. This value is meaningful only if $st→qp = Nag_TRUE$. If $comm→sol_prt = Nag_TRUE$ then the final results for one row or column are provided through st. Note that $options.print_fun$ will be called with $comm→sol_prt = Nag_TRUE$ only if $options.print_level=Nag_Soln$, $Nag_Soln_Iter$, $Nag_Soln_Iter_Long$ or $Nag_Soln_Iter_Full$. The following members of st are set (note that $options.print_fun$ is called repeatedly, for each row and column): m – Integer The number of rows (or general constraints) in the problem. n – Integer The number of columns (or variables) in the problem. col – Nag_Boolean Nag_TRUE if column information is being provided; Nag_FALSE if row information is being provided. index – Integer If $st→col=Nag_TRUE$ then $st→index$ is the index $j$ (in the range $1 ≤ j ≤ n$) of the current column (variable) for which the remaining members of st, as described below, are set. If $st→col=Nag_FALSE$ then $st→index$ is the index $i$ (in the range $1 ≤$i $≤ m$) of the current row (constraint) for which the remaining members of st, as described below, are set. name – char * The name of row $i$ or column $j$. sstate – char * is a character string describing the state of row or column . This may be . The meaning of each of these is described in Section 11.3 key – char * is a character string which gives additional information about the current row or column. The possible values of " " . The meaning of each of these is described in Section 11.3 val – double The activity of row $i$ or column $j$ at the final iterate. blo – double The lower bound on row $i$ or column $j$. bup – double The upper bound on row $i$ or column $j$. lmult – double The value of the Lagrange multiplier associated with the current row or column (i.e., the dual activity $π i$ for a row, or the reduced gradient $d j$ for a column) at the final iterate. objg – double The value of the objective gradient $g j$ at the final iterate. $st→objg$ is meaningful only when $st→col = Nag_TRUE$ and should not be accessed otherwise. It is set to zero for FP problems. The relevant members of the structure it_prt – Nag_Boolean Will be Nag_TRUE when the print function is called with the result of the current iteration. sol_prt – Nag_Boolean Will be Nag_TRUE when the print function is called with the final result. user – double iuser – Integer p – Pointer Pointers for communication of user information. If used they must be allocated memory either before entry to nag_opt_sparse_convex_qp (e04nkc) or during a call to qphess or $options.print_fun$. The type Pointer will be void * with a C compiler that defines void * and char * otherwise.
{"url":"http://www.nag.com/numeric/CL/nagdoc_cl23/html/E04/e04nkc.html","timestamp":"2014-04-16T13:47:46Z","content_type":null,"content_length":"317605","record_id":"<urn:uuid:ba6cad84-6141-45e9-b3cc-19602daeee8c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
Hand Dryer A recent peer-reviewed study explains how the Mitsubishi Jet Towel® is the best hand dryer for our planet. The Materials Systems Laboratory at MIT recently reviewed seven hand drying systems, finding hands-in hand dryers to provide the lowest Global Warming Potential (GWP) of any hand drying method, including recycled paper towels. This study found that "dyer impact is dominated by use and dryers with similar dry times (and power ratings) will have similar impacts"1 and while packaging, transportation, and other non-operational factors do contribute to a dryer’s impact, they only account for 3% to 14% of the GWP.2 Since the Jet Towel® has the lowest power rating of any dryer in its class - 1060 watts - we would like to demonstrate how the Jet Towel® is the best hand dryer for the environment. Calculating GWP This study compares hand drying methods by the GWP for one use. To calculate the GWP of one use of the Jet Towel®, we multiply Ecoinvent’s GWP coefficient for one MJ of electricity (0.232 kg per MJ)3 by the sum total power required to operate the Jet Towel® for 12 seconds plus the power required for standby. The Jet Towel® uses 1060 watts to dry hands in 12 seconds and 2 watts per day on standby. Power from Use For power consumption in use, we simply multiply watts by time in use and divide by the number of seconds in an hour (3,600): E[use] = (watts per hour) (seconds in use) / seconds in one hour E[use] = (1060) (12) / 3,600 E[use] = 3.53 watts per dry Power from Standby To accurately calculate one use’s power consumption from standby, we must first discover the average number of seconds in standby mode for one use. This study assumes 350,000 uses over a 5-year life span, or 192 uses per day. One day has 86,400 seconds, so to find the number of seconds the Jet Towel® is in standby we must subtract the number of seconds from operation. For 192 uses at 12 seconds, the Jet Towel® is in use for 2,304 seconds each day. Once we subtract this from the number of seconds in a day, we know the number of seconds per day that the Jet Towel® is in standby. As we are calculating a per use number and the Jet Towel® is used 192 times per day, we then divide the number of seconds in standby by 192. As a formula, it looks like: 86,400 – (192 * 12) t[standby] = t[standby] = 438 seconds per use The Jet Towel® uses 2 watts per day on standby, so our formula for standby power per use is: (Standby watts per day) (standby seconds per use) E[standby] = Seconds in one day E[standby] = E[standby] = 0.01 watt per use So our total per use energy consumption for the Jet Towel® is 3.54 watts. GWP per Use The study in question operates on kilojoules and to convert watts to kilojoules we multiply by 3,600, since joules equal watts per hour. In kilojoules, the Jet Towel® uses 12.76 kJ per dry. To find the GWP of the Jet Towel®, we simply multiply the Ecoinvent GWP by our KJ number. As the coefficient is in MJ per kg of CO2, multiplying the coefficient by our kilojoules number will provide us with the g of CO2 per use since both numbers are lower by a factor of 1,000. It is: GWP = (kilojoules) (GWP CO2 coefficient) GWP = (12.76) (0.232) GWP = 3.0 g of CO2 Additional GWP Considerations Energy use is the predominant force for hands dryers, but shipping, packaging materials, and other non-use factors do affect GWP. This study found that non- use factors contribute from minimum 4% to maximum 13% of a hand dryer’s GWP. When we factor in secondary GWP factors using our 3.0 g finding, we see: 4%: use = 3.0g (96%) packaging, etc. = 0.1g (4%) Total: 3.1g 13%: use = 3.0g (87%) packaging, etc. = 0.6g (13%) Total: 3.6g Comparative Calculations To support our claim of Best Hand Dryer for the Environment for its class, we need to apply the same logic to other dryers. The following graph shows findings from the same calculations applied to other dryers. Specifications are drawn from manufacturers’ websites. Metric Jet Towel® Dyson Airblade SaniFlo Bio Jet Dry Time 12 seconds 12 seconds 12 seconds 12 seconds Watts 1060 1400 1750 1100 Standby Power 2 1 1 1 Total Energy Per Day 3.5 4.7 5.8 3.7 Kijoules Per Dry 12.8 16.8 21 13.2 GWP Per Ecoinvent 3.0 3.9 4.9 3.1 GWP at 4% Secondary Materials 3.1 4.1 5.1 3.2 GWP at 4% Secondary Materials 3.6 4.7 5.8 3.7 The above table shows that even with packaging, transportation, and other variables factored into the equation, the Jet Towel® provides the least impact on the environment. Even with variables set to a high GWP percentage for the Jet Towel® and low for other dryers, in all but one scenario the Jet Towel® impacts the planet the least, showing why the Jet Towel® is the Best Hand Dryer for the All facts and figures derived from the following study: http://msl.mit.edu/ publications/HandDryingLCA-Report.pdf 1) p.25 2) p.34 3) p.89 Copyright 2010 PACARC, LLC
{"url":"http://www.mitsubishijettowel.com/greenhanddryer","timestamp":"2014-04-16T07:13:09Z","content_type":null,"content_length":"11475","record_id":"<urn:uuid:7490009f-236d-4d0b-a87b-ef12ab7bcd67>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
String Formulas This page describes some formulas for working with text strings. This page describes a number of worksheet formulas that work with strings of text. The following formula will return the number of times that the text in B1 occurs in the text in A1. This is not case sensitive so, for example, 'A' is treated the same as 'a'. If you want to use a case-sensitve match where for example 'A' is treated differently than 'a', use the following formula: The following formula counts the number of letters (A to Z, in either upper or lower case) in cell A1. This formula is an array formulas so you must press CTRL SHIFT ENTER rather than just ENTER when you first enter the formula and whenever you edit it later. If you do this properly, Excel will display the formula in the formula bar enclosed in curly braces { }. See the Array Formulas page for more information about array formulas. If A1 is empty, the result is 0. The following formula counts the number of digits (0 to 9) in cell A1. This is an array formula so you must enter it with CTRL SHIFT ENTER rather than just ENTER. If cell A1 is empty, the formula returns 0. This formula will return the position of the first digit (0 - 9) in the string in A1. This is an array formula so you must enter it with CTRL SHIFT ENTER rather than just ENTER. This formula will return the position of the first non-numeric character in the string in cell A1. This is an array formula so you must enter it with CTRL SHIFT ENTER rather than just ENTER. The following formula will return the postion of the last occurrence of the character in cell B1 in the string in cell A1. This formula does not distinguish between upper and lower case. If you want to make this distinction, use the formula This is an array formula, so you must press CTRL SHIFT ENTER rather than just ENTER. If cell B1 is empty, the result is 0. The following formula will return the number of words in a cell. A word is considered by be a string of characters delimited by spaces. Other punctuation characters are not considered. =IF(LEN(TRIM(A1))=0,0, LEN(TRIM(A1))-LEN(SUBSTITUTE(TRIM(A1)," ",""))+1) You can combine two string into a single string by using either the CONCATENATE function or the & operator. Unfortunately, neither of these can be used in an array formula to selectively build up a result string based on other criteria. See String Concatenation For Array Formulas for a VBA function that can be used in an array formula to build a string based on selection criteria. This page last updated: 2-November-2007
{"url":"http://www.cpearson.com/excel/StringFormulas.aspx","timestamp":"2014-04-16T22:22:02Z","content_type":null,"content_length":"30895","record_id":"<urn:uuid:ac321efa-0a6e-415d-9ca2-cff9fb3971c0>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Clique is hard to approximate within n 1−ɛ Results 11 - 20 of 131 , 2003 "... We show that the minimum distance d of a linear code is not approximable to within anyconstant factor in random polynomial time (RP), unless NP (nondeterministic polynomial time) equals RP. We also show that the minimum distance is not approximable to within an additiveerror that is linear in the b ..." Cited by 51 (7 self) Add to MetaCart We show that the minimum distance d of a linear code is not approximable to within anyconstant factor in random polynomial time (RP), unless NP (nondeterministic polynomial time) equals RP. We also show that the minimum distance is not approximable to within an additiveerror that is linear in the block length n of the code. Under the stronger assumption that NPis not contained in RQP (random quasi-polynomial time), we show that the minimum distance is not approximable to within the factor 2log 1-ffl(n), for any ffl> 0. Our results hold for codes over any finite field, including binary codes. In the process we show that it is hard to findapproximately nearest codewords even if the number of errors exceeds the unique decoding radius d/2 by only an arbitrarily small fraction ffld. We also prove the hardness of the nearestcodeword problem for asymptotically good codes, provided the number of errors exceeds (2 , 2006 "... The Border Gateway Protocol (BGP) for interdomain routing is designed to allow autonomous systems (ASes) to express policy preferences over alternative routes. We model these preferences as arising from an AS’s underlying utility for each route and study the problem of finding a set of routes that ..." Cited by 49 (7 self) Add to MetaCart The Border Gateway Protocol (BGP) for interdomain routing is designed to allow autonomous systems (ASes) to express policy preferences over alternative routes. We model these preferences as arising from an AS’s underlying utility for each route and study the problem of finding a set of routes that maximizes the overall welfare (i.e., the sum of all ASes’ utilities for their selected routes). We show that, if the utility functions are unrestricted, this problem is NP-hard even to approximate closely. We then study a natural class of restricted utilities that we call next-hop preferences. We present a strategy-proof, polynomial-time computable mechanism for welfare-maximizing routing over this restricted domain. However, we show that, in contrast to earlier work on lowest-cost routing mechanism design, this mechanism appears to be incompatible with , 2005 "... Combinatorial auctions where bidders can bid on bundles of items can lead to more economically efficient allocations, but determining the winners is NP-complete and inapproximable. We present CABOB, a sophisticated optimal search algorithm for the problem. It uses decomposition techniques, upper and ..." Cited by 48 (8 self) Add to MetaCart Combinatorial auctions where bidders can bid on bundles of items can lead to more economically efficient allocations, but determining the winners is NP-complete and inapproximable. We present CABOB, a sophisticated optimal search algorithm for the problem. It uses decomposition techniques, upper and lower bounding (also across components), elaborate and dynamically chosen bid-ordering heuristics, and a host of structural observations. CABOB attempts to capture structure in any instance without making assumptions about the instance distribution. Experiments against the fastest prior algorithm, CPLEX 8.0, show that CABOB is often faster, seldom drastically slower, and in many cases drastically faster—especially in cases with structure. CABOB’s search runs in linear space and has significantly better anytime performance than CPLEX. We also uncover interesting aspects of the problem itself. First, problems with short bids, which were hard for the first generation of specialized algorithms, are easy. Second, almost all of the CATS distributions are easy, and the run time is virtually unaffected by the number of goods. Third, we test several random restart strategies, showing that they do not help on this problem—the run-time distribution does not have a heavy tail. - In Proceedings of the 42nd Annual IEEE Symposium on Foundations of Computer Science , 2001 "... Finding explicit extractors is an important derandomization goal that has received a lot of attention in the past decade. This research has focused on two approaches, one related to hashing and the other to pseudorandom generators. A third view, regarding extractors as good error correcting codes, w ..." Cited by 39 (5 self) Add to MetaCart Finding explicit extractors is an important derandomization goal that has received a lot of attention in the past decade. This research has focused on two approaches, one related to hashing and the other to pseudorandom generators. A third view, regarding extractors as good error correcting codes, was noticed before. Yet, researchers had failed to build extractors directly from a good code, without using other tools from pseudorandomness. We succeed in constructing an extractor directly from a Reed-Muller code. To do this, we develop a novel proof technique. Furthermore, our construction is the first and only construction with degree close to linear. In contrast, the best previous constructions had brought the log of the degree within a constant of optimal, which gives polynomial degree. This improvement is important for certain applications. For example, it follows that approximating the VC dimension to within a factor of N , 1994 "... We study the relationship between the independence number of a graph and its semi-definite relaxation, the Lov'asz `-function. We deduce an improved approximation algorithm for the independence number. If a graph on n vertices has an independence number n=k + m, for some fixed integer k 3 and some ..." Cited by 31 (5 self) Add to MetaCart We study the relationship between the independence number of a graph and its semi-definite relaxation, the Lov'asz `-function. We deduce an improved approximation algorithm for the independence number. If a graph on n vertices has an independence number n=k + m, for some fixed integer k 3 and some m ? 0, the algorithm finds, in random polynomial time, an independent set of size ~ \Omega\ Gamma m 3=(k+1) ). This is the first improvement upon the Ramsey Theory based algorithm of Boppana and Halldorsson that finds an independent set of size\Omega\Gamma m 1=(k\Gamma1) ) in such a graph. The algorithm is based on semi-definite programming, some properties of the `-function, and the recent algorithm of Karger, Motwani and Sudan for approximating the chromatic number of a graph. If the `-function of an n vertex graph is at least Mn 1\Gamma2=h , for some absolute constant M , we describe another, related algorithm that finds an independent set of size h. Finally, while it is e... - IEEE Transactions on Evolutionary Computation , 2005 "... Abstract—Estimation of distribution algorithms sample new solutions (offspring) from a probability model which characterizes the distribution of promising solutions in the search space at each generation. The location information of solutions found so far (i.e., the actual positions of these solutio ..." Cited by 30 (10 self) Add to MetaCart Abstract—Estimation of distribution algorithms sample new solutions (offspring) from a probability model which characterizes the distribution of promising solutions in the search space at each generation. The location information of solutions found so far (i.e., the actual positions of these solutions in the search space) is not directly used for generating offspring in most existing estimation of distribution algorithms. This paper introduces a new operator, called guided mutation. Guided mutation generates offspring through combination of global statistical information and the location information of solutions found so far. An evolutionary algorithm with guided mutation (EA/G) for the maximum clique problem is proposed in this paper. Besides guided mutation, EA/G adopts a strategy for searching different search areas in different search phases. Marchiori’s heuristic is applied to each new solution to produce a maximal clique in EA/G. Experimental results show that EA/ G outperforms the heuristic genetic algorithm of Marchiori (the best evolutionary algorithm reported so far) and a MIMIC algorithm on DIMACS benchmark graphs. Index Terms—Estimation of distribution algorithms, evolutionary algorithm, guided mutation, heuristics, hybrid genetic algorithm, maximum clique problem (MCP). I. - SIAM Journal on Computing , 2003 "... independent set ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=319035&sort=cite&start=10","timestamp":"2014-04-20T04:57:40Z","content_type":null,"content_length":"35055","record_id":"<urn:uuid:e08923e2-a9b7-4002-9d55-04362dd8edb9>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Page:EB1911 - Volume 01.djvu/643 This page has been , but needs to be 25. Preparation for Algebra.—The calculation of the values of simple algebraical expressions for particular values of letters involved is a useful exercise, but its tediousness is apt to make the subject repulsive. What is more important is to verify particular examples of general formulae. These formulae are of two kinds:—(a) the general properties, such as m(a+b) = ma+mb, on which algebra is based, and (b) particular formulae such as (x-a)(x+a) = x²-a². Such verifications are of value for two reasons. In the first place, they lead to an understanding of what is meant by the use of brackets and by such a statement as 3(7+2) = 3•7+3•2. This does not mean (cf. § 23) that the algebraic result of performing the operation 3(7+2) is 3•7+3•2; it means that if we convert 7+2 into the single number 9 and then multiply by 3 we get the same result as if we converted 3•7 and 3•2 into 21 and 6 respectively and added the results. In the second place, particular cases lay the foundation for the general Exercises in the collection of coefficients of various letters occurring in a complicated expression are usually performed mechanically, and are probably of very little value. 26. General Arithmetical Theorems. (i.) The fundamental laws of arithmetic (q.v.) should be constantly borne in mind, though not necessarily stated. The following are some special points. (a) The commutative law and the associative law are closely related, and it is best to establish each law for the case of two numbers before proceeding to the general case. In the case of addition, for instance, suppose that we are satisfied that in a+b+c+d+e we may take any two, as b and c, together (association) and interchange them (commutation). Then we have a+b+c+d+e = a+c+b+d+e. Thus any pair of adjoining numbers can be interchanged, so that the numbers can be arranged in any order. (b) The important form of the distributive law is m(A+B) = mA+mB. The form (m+n)A = mA+nA follows at once from the fact that A is the unit with which we are dealing. (c) The fundamental properties of subtraction and of division are that A-B+B = A and m×$\textstyle {{1 \over \it m}}$ of A = A, since in each case the second operation restores the original quantity with which we started. (ii.) The elements of the theory of numbers belong to arithmetic. In particular, the theorem that if n is a factor of a and of b it is also a factor of pa±qb, where p and q are any integers, is important in reference to the determination of greatest common divisor and to the elementary treatment of continued fractions. Graphic methods are useful here (§ 34 (iv.)). The law of relation of successive convergents to a continued fraction involves more advanced methods (see § 42 (iii.) and Continued Fraction). (iii.) There are important theorems as to the relative value of fractions; e.g. (a) If $\textstyle {{\it a \over b}}$ = $\textstyle {{\it c \over d}}$ then each = $\textstyle {{\it pa \pm qc \over pb \pm qd}}$. (b) $\textstyle {{\it a+n \over b+n}}$ is nearer to 1 than $\textstyle {{\it a \over b}}$ is; and, generally, if $\textstyle {{\it a \over b}}$ ≠ $\textstyle {{\it c \over d}}$, then $\ textstyle {{\it pa+qc \over pb+qd}}$ lies between the two. (All the numbers are, of course, supposed to be positive.) 27. Negative Quantities and Fractional Numbers.—(i.) What are usually called "negative numbers" in arithmetic are in reality not negative numbers but negative quantities. If a person has to receive 7s. and pay 5s., with a net result of +2s., the order of the operations is immaterial. If he pays first, he then has -5s. This is sometimes treated as a debt of 5s.; an alternative method is to recognize that our zero is really arbitrary, and that in fact we shift it with every operation of addition or subtraction. But when we say "-5s." we mean "-(5s.)," not "(-5)s."; the idea of (-5) as a number with which we can perform such operations as multiplication comes later (§ 49). (ii.) On the other hand, the conception of a fractional number follows directly from the use of fractions, involving the subdivision of a unit. We find that fractions follow certain laws corresponding exactly with those of integral multipliers, and we are therefore able to deal with the fractional numbers as if they were integers. 28. Miscellaneous Developments in Arithmetic.—The following are matters which really belong to arithmetic; they are usually placed under algebra, since the general formulae involve the use of (i.) Arithmetical Progressions such as 2, 5, 8, . . .—The formula for the rth term is easily obtained. The problem of finding the sum of r terms is aided by graphic representation, which shows that the terms may be taken in pairs, working from the outside to the middle; the two cases of an odd number of terms and an even number of terms may be treated separately at first, and then combined by the ordinary method, viz. writing the series backwards. In this, as in almost all other cases, particular examples should be worked before obtaining a general formula. (ii.) The law of indices (positive integral indices only) follows at once from the definition of a^2, a^3, a^4, . . . as abbreviations of a•a, a•a•a, a•a•a•a, ..., or (by analogy with the definitions of 2, 3, 4, . . . themselves) of a•a, a•a^2, a•a^3, . . . successively. The treatment of roots and of logarithms (all being positive integers) belongs to this subject; a = $\it \sqrt [p]{}n$ = $\log_ a$ being the inverses of n = a^p (cf. §§ 15, 16). The theory may be extended to the cases of p = 1 and p = 0; so that a^3 means a•a•a•1, a^2 means a•a•1, a^1 means a•1, and a^0 means 1 (there being then none of the multipliers a). The terminology is sometimes confused. In n = a^p, a is the root or base, p is the index or logarithm, and n is the power or antilogarithm. Thus a, a^2, a^3, . . . are the first, second, third, . . . powers of a. But a^p is sometimes incorrectly described as "a to the power p"; the power being thus confused with the index or logarithm. (iii.) Scales of Notation lead, by considering, e.g., how to express in the scale of 10 a number whose expression in the scale of 8 is 2222222, to (iv.) Geometrical Progressions.—It should be observed that the radix of the scale is exactly the same thing as the root mentioned under (ii.) above; and it is better to use the term "root" throughout. Denoting the root by a, and the number 2222222 in this scale by N, we have N = 2222222. aN = 22222220. Thus by adding 2 to aN we can subtract N from aN+2, obtaining 20000000, which is = 2 • a^7; and from this we easily pass to the general formula for the sum of a geometrical progression having a given number of terms. (v) Permutations and Combinations may be regarded as arithmetical recreations; they become important algebraically in reference to the binomial theorem (§§ 41, 44). (vi.) Surds and Approximate Logarithms.—From the arithmetical point of view, surds present a greater difficulty than negative quantities and fractional numbers. We cannot solve the equation 7s.+X = 4s.; but we are accustomed to transactions of lending and borrowing, and we can therefore invent a negative quantity -3s. such that -3s.+3s. = 0. We cannot solve the equation 7X = 4s.; but we are accustomed to subdivision of units, and we can therefore give a meaning to X by inventing a unit $\textstyle {{1 \over 7}}$s = 1s. such that 7×$\textstyle {{1 \over 7}}$s = 1s., and can thence pass to the idea of fractional numbers. When, however, we come to the equation x² = 5, where we are dealing with numbers, not with quantities, we have no concrete facts to assist us. We can, however, find a number whose square shall be as nearly equal to 5 as we please, and it is this number that we treat arithmetically as √5. We may take it to (say) 4 places of decimals; or we may suppose it to be taken to 1000 places. In actual practice, surds mainly arise out of mensuration; and we can then give an exact definition by graphical methods. When, by practice with logarithms, we become familiar with the correspondence between additions of length on the logarithmic scale (on a slide-rule) and multiplication of numbers in the natural scale (including fractional numbers), √5 acquires
{"url":"http://en.wikisource.org/wiki/Page:EB1911_-_Volume_01.djvu/643","timestamp":"2014-04-20T04:05:43Z","content_type":null,"content_length":"33485","record_id":"<urn:uuid:31a82601-7d15-4216-b6a8-39a9bf7f8ec1>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
Please help. March 5th 2009, 07:33 PM #1 Mar 2009 "What will be the amount in an RRSP at the end of 20 years if monthly contributions of $500 are made at the end of each month and the RRSP earns 6% compounded monthly for the first 15 years and 7.5% compounded monthly for the remaining 5 years." The formula that I have is Future Value = pmt {(1+i)^n - 1} I am able to get the right answer when I put it into my calculator but every time I use the formula I am getting the wrong answer. Please help =)
{"url":"http://mathhelpforum.com/business-math/77175-please-help.html","timestamp":"2014-04-16T08:47:02Z","content_type":null,"content_length":"24505","record_id":"<urn:uuid:087ce47c-95e8-423b-b720-8afac55fca39>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
What is Aladdin? Aladdin is a computational toolkit for the interactive matrix and finite element analysis of large engineering structures, particularly building and highway bridge structures. Aladdin views finite element computations as a specialized form of matrix computations, matrices as rectangular arrays of physical quantities, and numbers as dimensionless physical quantities. Aladdin Version 1.0 , released in May 1996, provides engineers with: • Mechanisms to define physical quantities with units, and matrices of physical quantities. • Facilities for physical quantity and matrix arithmetic. • A SI and US units package. Conversion of units may be applied to physical quantity constants, physical quantity variables, and matrices of physical quantities. • A matrix package. Its capabilities include matrix arithmetic, solution of linear matrix equations, and the general symmetric eigenvalue problem. • Programming constructs to control the solution procedure (i.e., branching and looping) in matrix and finite element problems. • A finite element mesh generation package. Two- and three-dimensional finite element meshes can be created. • A library of finite elements. Currently, the finite element library includes elements for plane stress/plane strain analysis, two-dimensional beam/column analysis, three-dimensional truss analysis, DKQ plate analysis, and a variety of shell finite elements. Aladdin Version 2.1 , released in March 2000, contains a variety of new features for nonlinear analysis of structures with fiber finite elements. Aladdin Version 3.0 is in the early stages of development. We are currently working on finite elements and numerical algorithms for the analysis of structures undergoing large geometric displacements, and problems involving fluid flow and fluid-structure interaction . In a second project we are developing a suite of numerical algorithms for the hybrid control of large structural Applications : We are using Versions 2 and 3 of Aladdin for the performance-based design and analysis of base-isolated highway bridge structures and tethered aerostat systems. • Austin M.A., and Sebastianelli R., Phase Analysis of Actuator Response for Sub-Optimal Bang-Bang and Velocity Cancellation Control of Base Isolated Structures, Journal of Structural Control and Health Monitoring , Vol. 14, Issue 7, July 2007, pp. 1034-1061. • Sebastianelli R. and Austin M.A., Energy Balance and Power Demand Assessment of Actuators in Base Isolated Structures supplemented with Modified Bang-Bang Control, Structural Engineering and Mechanics , Vol. 25, No. 5, March 2007, pp. 541-564. • Austin M.A., Matrix and Finite Element Stack Machines for Structural Engineering Computations with Units, Advances in Engineering Software, Elsevier, Vol. 37, No. 8, August 2006, pp. 544-559. • Austin M.A. and Lin W.J., Energy-Balance Assessment of Isolated Structures, Journal of Engineering Mechanics, ASCE, Vol. 130, No. 3, March 2004, pp. 347-358. • Austin M.A., Lin W.J., and Chen X.G., Structural Matrix Computations with Units, Journal of Computing in Civil Engineering , ASCE, Vol. 14, No. 3, July 2000, pp. 174-182. • Austin M.A., Lin W.J., and Chen X.G., Structural Matrix Computations with Units: Data Structures, Algorithms, and Scripting Language Design, ISR Technical Report 99-63 ( ps.gz , 406 KB) ( pdf , 392 KB), University of Maryland, College Park, MD 20742, November, 1999, 39 p. • Austin M.A. and Chancogne D., Introduction to Engineering Programming : in C, MATLAB, and JAVA , John Wiley and Sons, New York, December 1998, (p. 656). Note - The C tutorial contains a simplified explanation of how the dynamic allocation of matrices works in Aladdin. • Lin W.J. and Austin M.A., Design of a Scripting Language for the Energy-Based Analysis of Nonlinear Structural Systems ( ps.gz ,273 KB), I3th U.S. National Congress of Applied Mechanics, University of Florida, Gainesville, June 21-26, 1998. • Lin W.J., Modern Computational Environments for Seismic Analysis of Highway Bridge Structures ( ps.Z , 805 KB), Ph.D. Dissertation, University of Maryland, College Park, MD 20742, December 1997, (p. 197). Note - Wane-Jang's dissertation describes a lot of the new features in Aladdin 2.0. • Austin M.A., Chen X.G., and Lin W.J., Aladdin: A Computational Toolkit for Interactive Engineering Matrix and Finite Element Analysis, ISR Technical Research Report TR95-74 ( ps ) ( pdf ), University of Maryland, College Park, MD 20742, August 1995, (p. 235). Click here for the Table of Contents to TR95-74. • Chen X.G. and Austin M.A., A Systems Approach to Nonlinear Finite Element Analysis of Shell Structures, ISR Technical Research Report TR95-104 ( ps.Z , 346 KB), University of Maryland, College Park, MD 20742, December 1995, (p. 57). • Lanheng Jin, Analysis and Evaluation of a Shell Finite Element with Drilling Degree of Freedom, ISR Masters Thesis Report M.S. 94-12 ( ps.Z , 331 KB) ( pdf , 401 KB), Institute for Systems Research, University of Maryland, College Park, MD 20742, December 1994, (p. 60).
{"url":"http://www.isr.umd.edu/~austin/aladdin.html","timestamp":"2014-04-19T15:23:55Z","content_type":null,"content_length":"17180","record_id":"<urn:uuid:0f3e9933-0df8-4f4f-b0e9-57f0468ff412>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Does Branching in the Weight Diagram affect an embedding? up vote 2 down vote favorite All groups here are compact semisimple Lie groups. Out of laziness I will use $B_7$ to mean $Spin(15)$. Suppose that one has a group $H$ and a subgroup $G$. The embedding determines the decomposition of $Res^H_G(V)$ into $G$-irreducibles for $V$ a representation of $H$; certainly different embeddings can lead to different decompositions of $Res^H_G(V)$. What I am wondering is whether knowing the decomposition determines the embedding. Here is a specific instance of why I am not sure whether this is true or not. Let $H = B_7$ and $G = G_2$ and consider the embedding of $G_2$ such that the 15-dimensional irreducible representation of $B_7$ restricts to the direct sum of the 14-dimensional adjoint representation with the trivial representation. To describe the restriction of an arbitrary character, it is enough to describe the image of the fundamental weights under the restriction since the restriction of an arbitrary weight is then a linear combination of these images. Let $\omega_1,\ldots,\omega_7$ be fundamental weights of $B_7$ with $\omega_1$ the highest weight of the 15-dimensional representation and $\omega_7$ the highest weight of the 128-dimensional spinor representation. Define $V_i$ to be a fundamental $B_7$-module with highest weight $\omega_i$. Finally, let $\eta_1$ be the fundamental weight of $G_2$ which is the highest weight of the 7-dimensional representation of $G_2$ and $\eta_2$ the fundamental weight which is the highest weight of the 14-dimensional adjoint representation which I will call $W$. EDIT: To help clarify the situation, here are the weight diagrams of these representations, truncated at the zero weight: Since $V_k \cong \Lambda^kV_1$ for $k=1\ldots 6$, if $Res^{B_7}_{G_2}(V_1) = W\oplus 1$ then one can map the weight $\omega_k$ to the sum of the $k$ top-most weights in the weight diagram of $W$ for $k=1\ldots 6$ (actually it may map to the sum of any $k$ weights of $W$; we choose the topmost weights so that the restriction takes highest weights to highest weights). The four top-most weights of $W$ are totally ordered by their height, so there is no ambiguity in choosing the sums of the top-most weights. $\omega_1\mapsto \eta_2$ $\omega_2\mapsto 3\eta_1$ $\omega_3\mapsto 4\eta_1$ $\omega_4\mapsto 3\eta_1+\eta_2$ However, when one reaches the 5th level of the weight diagram of $W$, the diagram branches (both $-3\eta_1+2\eta_2$ and $2\eta_1-\eta_2$ sit at this level) and so there are two possible choices for where to map $\omega_5$: $\omega_5\mapsto 3\eta_2$ $\omega_5\mapsto 5\eta_1$ After this, there are no further ambiguities; $\omega_6$ maps to $2\eta_1+2\eta_2$ which is the sum of the 6 highest weights, and subsequently one works out that $\omega_7$ maps to $\eta_1+\eta_2$. This leads to the actual question. Question: Does the choice of how one maps $\omega_5$ affect the conjugacy of the embedding of $G_2$ in $Spin(15)$? It is not hard to check that either choice of how to map $\omega_5$ leads to isomorphic decompositions of the restriction to $G_2$ of an arbitrary $B_7$ representation. However, what I cannot decide is whether the two different ways of mapping $\omega_5$ to weights of $G_2$ correspond to two embeddings of $G_2$ (call them $G_2(A)$ and $G_2(B)$) which are not conjugate to one another in $B_7$, and yet $Res^{B_7}_{G_2(A)}(V)\cong Res^{B_7}_{G_2(B)}(V)$ for every representation $V$ of $B_7$. Since the restriction is really just a projection of the weight space of $B_7$ onto a subspace which is isomorphic to the weight space of $G_2$, I feel that there should be no real difference between the two choices of where to send $\omega_5$ (indeed there are lots of choices of where to map the $\omega_i$'s, and certainly not all choices lead to non-conjugate $G_2$'s) but I cannot adequately convince myself that the two choices really do correspond to conjugate embeddings. Motivation: Recently I have been working on behaviors of certain representations upon restriction to a subgroup. Most of the cases I have dealt with have been simple to deal with, so I decided to try a more complicated example. In order to embed $G_2$ in $B_7$ one can of course go through the 'obvious' inclusion chain: $G_2\hookrightarrow B_3\hookrightarrow B_4\hookrightarrow B_5\hookrightarrow B_6\hookrightarrow B_7$. It is clear (to me at least) that all such embeddings of $G_2$ into $B_7$ are conjugate to one another. On the other hand, since the adjoint representation of $G_2$ is orthogonal, one can embed $G_2$ directly in $B_7$ leading to the above question. This is the simplest example of the more general embeddings I am thinking about, each involving restrictions to representations with branching in their weight diagrams. rt.representation-theory lie-groups lie-algebras gr.group-theory This is a substantial-looking (and lengthy) question but still hard for me to sort out, partly due to some loose or unfamiliar language used such as "four top-most weights". – Jim Humphreys Mar 2 '11 at 13:40 So if one starts at the highest weight $\eta_2$ and then draws out the weight diagram by subtracting off simple weights, the second highest weight is $3\eta_1-\eta_2$, then $\eta_1$ then $-\eta_1+\ eta_2$; then the branching occurs at the next level. So these first four weights are the 'top-most' weights I refer to. The point is that the weights are totally ordered when subtracting off the first three simple weights; and then the branching occurs in the diagram when one goes to subtract a fourth simple weight. – ARupinski Mar 2 '11 at 15:00 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged rt.representation-theory lie-groups lie-algebras gr.group-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/57042/does-branching-in-the-weight-diagram-affect-an-embedding","timestamp":"2014-04-18T01:12:31Z","content_type":null,"content_length":"54351","record_id":"<urn:uuid:f0a8a448-fa40-4883-b318-c491389862c8>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
The International Commission on Mathematical Instruction Bulletin No. 46 June 1999 In Memoriam - Claude Janvier (1940-1998) Linda Gattuso and Nadine Bednarz On the 16^th day of June 1998, Claude Janvier passed away after a long struggle with illness, leaving his family, colleagues, and numerous friends across the world struck with consternation. This text is a homage to a teacher-educator deeply engaged in his work and an internationally renowned researcher in the field of mathematics education who contributed in many ways to the improvement and understanding of mathematics teaching and learning. After completing a baccalaureate and master degree in applied mathematics at McGill University in Montréal, Claude Janvier received in 1978 a doctorate in education from the University of Nottingham in England with a thesis on The interpretation of complex Cartesian graphs representing situation, studies and teaching experiments. He became a leader in Québec in the creation of programs for the preparation and development of teachers of mathematics. For Claude, the problems of preparing teachers and of research in mathematical thinking and teaching were closely linked, one domain of activity drawing profound inspiration from the other. From as early as 1970, Claude Janvier was actively engaged in an innovative program of in-service teacher training, PERMAMA (Perfectionnement des maîtres en mathématiques). His involvement in the conception of courses for in-service training led to the production of still up-to-date audiovisual scenarios and didactical material regarding, for example, the teaching of statistics or functions. The topic of functions eventually became one of his preferred subjects and his research in the domain is well known internationally. The students in his courses on the teaching of variables and functions recognized his particular expertise in the field, as is witnessed by the affectionate nickname they gave him, "Monsieur Fonction". Claude Janvier was involved in the conception and implementation of numerous courses for prospective mathematics teachers: mathematics courses on geometry or modeling in mathematics and sciences; mathematics education courses on the concepts of proportional reasoning, of measure, of variable and function or on the teaching of volume. He regularly supervised the students practicum, being convinced of the importance of such an involvement for all teacher educators. He contributed to the preparation of numerous researchers in mathematics education, supervising theses in a variety of domains which testify to the richness of his expertise (the use of analogies in the teaching of electricity, conceptions of probability, the role of films in the teaching of mathematics, the utilization of transitory symbolism, problems of representation in the learning and teaching of mathematics, etc.). He was also interested by post-secondary education, as is attested by his study on the contextualisation of reasoning by electro-technicians. He played an important role in the onset of the CIRADE (Centre Interdisciplinaire de Recherche sur l'Apprentissage et le Développement en Éducation) at the Université du Québec à Montréal (UQAM), serving as the director of the center for many years. Claude Janvier had numerous activities at the international level, in addition to acting as a referee for various research journals on the didactic of mathematics and sciences (Didaskalia, Journal for Research in Mathematics Education, Recherche en didactique des mathématiques, ). From 1987 to 1993, he was in charge of a cooperation project with the École Normale Supérieure de Marrakech, in Morocco, on the preparation of teacher educators in mathematics and sciences. He was actively engaged in the CIEAEM (Commission Internationale pour l'Étude et l'Amélioration de l'Enseignement des Mathématiques) and was a founding member of PME (The International Group for the Psychology Mathematics Education), serving on its international committee from 1979 to 1983 and as the Secretary from 1980 to 1982. From 1989, he played an important role in the PME Working Group on Representations, and accepted in 1994 to co-chair this group with Gérard Vergnaud. Near the end of his life, he continued his activity by planning and co-editing two special issues of the Journal of Mathematical Behavior dedicated to "Representations and the Psychology of Mathematics Education" (vol. 17, Nos. 1 and 2). Sadly, he did not live long enough to see the results of this labor. In 1996, he was awarded the Prix Abel-Gauthier of the Association mathématique du Québec. The mention recognized his contribution to the improvement of the quality of the teaching of mathematics and stressed the originality, the utility, and the value of his written works (Pallascio, 1997). A wonderful French expression sums up the frame of mind of Claude Janvier: la mathématique qui se fait plutôt que la mathématique toute faite (roughly translated, mathematics that is developing rather than mathematics that is all done ). In addition, Claude always took an interdisciplinary perspective. As he said, "It was always my idea to incorporate other subjects into mathematics, and mathematics into other subjects. This is the foundation of my ideas." (Gattuso, 1997) Claude was for us a mentor, a companion, and a colleague. We will remember his intellectual contributions but still more we will recall his warmth, humanity, and dear and irreplaceable friendship. Claude is still present, and will continue to be among us. L. Gattuso (1997) "À bâtons rompus : Claude Janvier répond aux interrogations de jeunes enseignants." Bulletin AMQ (Association mathématique du Québec) XXXVII (3) 20-26. Pallascio, R. (1997) "AMQ en action : Présentation du lauréat 1996 du prix Abel-Gauthier." Bulletin AMQ (Association mathématique du Québec) XXXVII (4) 20-22. Selected works of Claude Janvier On teacher training: "Teachers training in mathematics education in a constructivist perspective." In: L.P. Steffe and P. Nesher (eds.) Theories of mathematical learning. Erlbaum, 1996. On the role of representation in mathematics education: C. Janvier (ed.) Problems of representation in the teaching and learning of mathematics. Erlbaum, 1987. On the role of context in mathematics and science education: C. Janvier, M. Baril and C. Mary (1993) "Contextualized reasoning of electrical technicians." In: M. Caillot (ed.) Learning electricity or electronics with advanced educational technology. Springer-Verlag, NATO ASI Series F, 115, p. 157-169. On the concept of variable and function: C. Janvier (1993) "Les graphes cartésiens: des traductions aux chroniques." Les sciences de l'éducation pour l'ère nouvelle 1 (3) 17-39. Linda Gattuso and Nadine Bednarz Département de mathématiques et CIRADE Université du Québec à Montréal C.P. 8888, Succursale Centre ville Montréal H3C 3P8 CANADA gattuso.linda@uqam.ca, descamps-bednarz.nadine@uqam.ca
{"url":"http://www.mathunion.org/o/Organization/ICMI/bulletin/46/Janvier_Claude.html","timestamp":"2014-04-18T08:02:06Z","content_type":null,"content_length":"11215","record_id":"<urn:uuid:c4a52bbd-db3e-46f9-be28-bfea263996a4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Sampling the Hydrogen Atom Authors: Norman Graves Since the emergence of quantum theory just over a century ago every model for the hydrogen atom that has been developed incorporates the same basic assumption. From Niels Bohr through de Broglie and Schrödinger up to and including the Standard Model all such theories are based on an assumption put forward by John Nicholson. Nicholson was the first to recognise that the units of Planck’s constant were the same as those of angular momentum and so he reasoned that perhaps Planck’s constant was a measure of the angular momentum of the orbiting electron. But Nicholson went one step further and argued that the angular momentum of the orbiting electron could take on values which were an integer multiple of Planck’s constant. This allowed Bohr to develop a model in which the differences between the energy levels matched those of the empirically developed Rydberg formula. When the Bohr model was superseded Nicholson’s assumption was simply carried forward unchallenged into these later models. The main problem with Nicholson’s assumption is that it lacks any mathematical rigour. It simply takes one variable, angular momentum, and asserts that if we allow it to have this characteristic quantisation then we get energy levels which appear to be correct. In so doing it fails to provide any sort of explanation as to why such a quantisation should take place. In the 1940s a branch of mathematics appeared which straddles the boundary between continuous functions and discrete solutions. It was developed by engineers at Bell Labs to address problems with capacity in the telephone network. While at first site there appears to be little to connect problems of network capacity with electrons orbiting atomic nuclei it is the application of these mathematical ideas which holds the key to explaining quantisation inside the atom. Comments: 29 Pages. Download: PDF Submission history [v1] 2012-02-21 16:02:22 [v2] 2012-06-19 03:39:21 Unique-IP document downloads: 102 times Add your own feedback and questions here: You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful. comments powered by
{"url":"http://vixra.org/abs/1202.0072","timestamp":"2014-04-16T13:07:33Z","content_type":null,"content_length":"8476","record_id":"<urn:uuid:379ce238-91c6-4a13-b2ce-e04ef5c7282a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: How to avoid wrong answers in simple arithmetic expressions. Date: Jan 20, 2013 3:49 AM Author: S.K.Mody Subject: Re: How to avoid wrong answers in simple arithmetic expressions. "James Tursa" wrote in message <kdfc0c$v7$1@newscl01ah.mathworks.com>... > False. 9 has an exact IEEE double representation. 0.09 does not ... the closest is: > 0.0899999999999999966693309261245303787291049957275390625 > > More generally, most integers do not have an exact representation in this format. > False. All of the integers in the range of your particular problem can be represented exactly in IEEE double. All integers from 0 up to 2^53 have exact IEEE double representations. OK. I got it (after some embarrassment). The IEEE 754 format description on Wikipedia was a bit confusing - all those negative powers of 2! It really boils down to - if a number can be represented as a sum of powers of 2, then it has an exact representation - so all integers (within some range) will have an exact representation. Nevertheless, it does appear that exact calculations even with integers has to be thought through on a case by case basis. Best Regards. > > So, does Matlab treat small integers in a special way? > No. They are treated just like all other IEEE double numbers. > James Tursa
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8114165","timestamp":"2014-04-19T13:08:26Z","content_type":null,"content_length":"2395","record_id":"<urn:uuid:8b3dc084-e75b-4a1e-b308-126e96806e08>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
North Hills Calculus Tutor ...I have also taught university level mathematics at UCLA, the University of Maryland, and the U.S. Air Force Academy. I love working with students and have experience teaching a wide range of classes from pre-Algebra to Advanced Engineering Mathematics. 9 Subjects: including calculus, geometry, algebra 1, algebra 2 ...My approach with students is to try to work with them to solve the problems. I show them the basics of their questions and work from there to help them fully comprehend what the questions want you to do. I need to show the students how to solve the problems on their own. 18 Subjects: including calculus, chemistry, geometry, GRE ...I started tutoring in high school and have continued to do so until now, whether it be with friends or my little brothers and sisters. I am easy to get along, communicate well, and have a lot of patience. I look forward to building long-lasting relationships with you. 21 Subjects: including calculus, reading, Spanish, geometry ...Thanks very much for your interest! David M.I'm an advanced level tennis player, though I have only taught friends how to play before (for free). I've played tennis since I was about seven years old, and played in high school and participated in tennis classes in college. I've done pretty well ... 43 Subjects: including calculus, reading, English, geometry ...I also played at the Intramural level at UC Irvine for one year and I continue to play regularly. As a Christian, I study the Old and New Testaments on a daily basis and have been doing so for the last 10 years. I also know a bit about the Islamic and Jewish religions because, along with Christ... 26 Subjects: including calculus, reading, chemistry, French
{"url":"http://www.purplemath.com/north_hills_ca_calculus_tutors.php","timestamp":"2014-04-17T19:44:48Z","content_type":null,"content_length":"23898","record_id":"<urn:uuid:a02ccf87-6af7-44e4-8258-f9f8793db7b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Expand all Collapse all Results 1 - 4 of 4 1. CMB 2012 (vol 57 pp. 42) Covering the Unit Sphere of Certain Banach Spaces by Sequences of Slices and Balls e prove that, given any covering of any infinite-dimensional Hilbert space $H$ by countably many closed balls, some point exists in $H$ which belongs to infinitely many balls. We do that by characterizing isomorphically polyhedral separable Banach spaces as those whose unit sphere admits a point-finite covering by the union of countably many slices of the unit ball. Keywords:point finite coverings, slices, polyhedral spaces, Hilbert spaces Categories:46B20, 46C05, 52C17 2. CMB 2012 (vol 57 pp. 145) The Essential Spectrum of the Essentially Isometric Operator Let $T$ be a contraction on a complex, separable, infinite dimensional Hilbert space and let $\sigma \left( T\right) $ (resp. $\sigma _{e}\left( T\right) )$ be its spectrum (resp. essential spectrum). We assume that $T$ is an essentially isometric operator, that is $I_{H}-T^{\ast }T$ is compact. We show that if $D\diagdown \sigma \left( T\right) \neq \emptyset ,$ then for every $f$ from the disc-algebra, \begin{equation*} \sigma _{e}\left( f\left( T\right) \right) =f\left( \sigma _{e}\left( T\right) \right) , \end{equation*} where $D$ is the open unit disc. In addition, if $T$ lies in the class $ C_{0\cdot }\cup C_{\cdot 0},$ then \begin{equation*} \sigma _{e}\left( f\left( T\right) \right) =f\left( \sigma \left( T\right) \cap \Gamma \right) , \end{equation*} where $ \Gamma $ is the unit circle. Some related problems are also discussed. Keywords:Hilbert space, contraction, essentially isometric operator, (essential) spectrum, functional calculus Categories:47A10, 47A53, 47A60, 47B07 3. CMB 2012 (vol 57 pp. 25) Subadditivity Inequalities for Compact Operators Some subadditivity inequalities for matrices and concave functions also hold for Hilbert space operators, but (unfortunately!) with an additional $\varepsilon$ term. It seems not possible to erase this residual term. However, in case of compact operators we show that the $\varepsilon$ term is unnecessary. Further, these inequalities are strict in a certain sense when some natural assumptions are satisfied. The discussion also stresses on matrices and their compressions and several open questions or conjectures are considered, both in the matrix and operator settings. Keywords:concave or convex function, Hilbert space, unitary orbits, compact operators, compressions, matrix inequalities Categories:47A63, 15A45 4. CMB 2011 (vol 56 pp. 400) A Factorization Theorem for Multiplier Algebras of Reproducing Kernel Hilbert Spaces Let $(X,\mathcal B,\mu)$ be a $\sigma$-finite measure space and let $H\subset L^2(X,\mu)$ be a separable reproducing kernel Hilbert space on $X$. We show that the multiplier algebra of $H$ has property $(A_1(1))$. Keywords:reproducing kernel Hilbert space, Berezin transform, dual algebra Categories:46E22, 47B32, 47L45
{"url":"http://cms.math.ca/cmb/kw/Hilbert%20space","timestamp":"2014-04-18T15:44:56Z","content_type":null,"content_length":"31284","record_id":"<urn:uuid:75403085-993a-46c8-bf9b-8ddc6c594915>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
[BioC] Question regarding bg [BioC] Question regarding bg correction/normalization/summarization Wolfgang Huber w.huber at dkfz-heidelberg.de Sat Jul 10 11:33:29 CEST 2004 References: <5B9643DA-D133-11D8-9F16-000A95B445D6 at ucla.edu> Hi Tova, with 'calibration' I refer to the transformation ny[k,i] <- (y[k,i] - a[s[k], i]) / b[s[k], i] where y[k,i] is the raw intensity of the k-th probe on the i-th array, s[k] is the "stratum" (e.g. print-tip group) of the k-th probe (in the most simple case, there is just one stratum: s[k]==1 for all k), a[s, i] is an offset term for the s-th stratum on the i-th array, b[s, i] is a scaling factor for the s-th stratum on the i-th array, and ny[k,i] is the calibrated intensity. Don't be afraid of the formal notation, it's really very simple. The parameters a and b are estimated from the data. It is in that sense that vsn does background correction. People use also the words "background correction" and "normalization" in this context, but my impression is that the exact definition of what they mean is a little different for each author. From my own prejudice, I would not recommend to use the combination bgcorrect.method="rma", normalize.method="vsn" in expresso, since this would be doing the same thing twice (and probably get confused) but I'd interested to hear about other opinions. Also, note that the current example for the use of vsn with affy data ignores the MMs, which is clearly suboptimal compared to methods that do use the MMs in a sensible manner. (Note: in addition, vsn will apply a "log-like" tranformation on the matrix ny to ensure approximate independence of variance of the mean. This transformation is called the "glog" and is like the logarithm (base e) for high intensities and like a straight line for low intensities.) Best wishes Wolfgang Huber Division of Molecular Genome Analysis German Cancer Research Center Heidelberg, Germany Phone: +49 6221 424709 Fax: +49 6221 42524709 Http: www.dkfz.de/abt0840/whuber Tova Fuller wrote: > Hello! > I was wondering about the terminology used to describe VSN in its > vignette. The vignette claims VSN does calibration, and mentions > something about comparison against background - so, in addition to > normalization, does VSN do background correction? The example given for > use with the affy package has bg correction turned off, but I've notice > several posters have used rma for background correction with vsn in > expresso. > Also, I was curious as to opinions regarding which possible combinations > in expresso or threestep (affyPLM), in addition to DChip & PLIER are > most common for doing bg correction, normalization & summarization. Or > which combos are the best. Any feedback would be wonderful. > Thank you, and I apologize for these probably trivial questions - I am > but a lowly graduate student. > Thanks again, > Tova Fuller Wolfgang Huber Division of Molecular Genome Analysis German Cancer Research Center Heidelberg, Germany Phone: +49 6221 424709 Fax: +49 6221 42524709 Http: www.dkfz.de/abt0840/whuber More information about the Bioconductor mailing list
{"url":"https://stat.ethz.ch/pipermail/bioconductor/2004-July/005293.html","timestamp":"2014-04-19T17:57:45Z","content_type":null,"content_length":"6045","record_id":"<urn:uuid:541d5215-847c-479f-befa-18c5e7693567>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00081-ip-10-147-4-33.ec2.internal.warc.gz"}