Northridge Statistics
Shifting Graphs, Stretching, Compressing and Reflecting Graphs. The Graph of a Quadratic Function, Finding a Quadratic Function, Finding the Vertex, Optimizing Quadratic Functions. Introduction to Polynomial Functions and Their Graphs, The Real Zeros of a Polynomial Function, Polynomial Division, The Rational Zero Theorem, Complex Numbers, The Fundamental Theorem of Algebra.
Overview
Authors Wayne Winston and Munirpallam Venkataramanan emphasize model-formulation and model-building skills as well as interpretation of computer software output. Focusing on deterministic models, this book is designed for the first half of an operations research sequence. A subset of Winston's best-selling OPERATIONS RESEARCH, INTRODUCTION TO MATHEMATICAL PROGRAMMING offers self-contained chapters that make it flexible enough for one- or two-semester courses ranging from advanced beginning to intermediate in level. The book has a strong computer orientation: every topic includes a corresponding computer-based modeling and solution method, and every chapter presents the software tools needed to solve realistic problems. The LINDO, LINGO, and Premium Solver for Education software packages are available with the book.
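The model-formulation skill the overview describes can be sketched in a few lines of code. The product-mix numbers below are hypothetical (not from the book), and the brute-force search merely stands in for a real solver such as LINDO or LINGO:

```python
# Maximize profit 3x + 5y subject to x + 2y <= 14 and x <= 6, with x, y >= 0
# (hypothetical data). A solver would handle this model directly; a brute-force
# scan of the integer lattice suffices for so small an instance.
best = max(
    ((3 * x + 5 * y, x, y)
     for x in range(7)          # 0 <= x <= 6
     for y in range(8)          # y <= 7 since 2y <= 14
     if x + 2 * y <= 14),
    key=lambda t: t[0],
)
print(best)
```

The point of the exercise is the model itself (decision variables, objective, constraints); the search method is incidental.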
Customer Reviews
Average Rating
5 ratings
4.4
Good Value
Published by Thriftbooks.com User, 8 years ago
The book is very good and brings loads of examples and exercises. It also has a sample version of LINGO, which is quite useful for operational researchers. The only hint I give is the following: if you already have the blue book OPERATIONS RESEARCH by the same author, forget about this one. The content is basically the same, except for two chapters.
This book is a tool shed
Published by Thriftbooks.com User, 8 years ago
As one of the previous reviewers noted, this book offers no (very few, to be accurate) proofs. While I would normally pounce on an author for neglecting proofs and rigor, Winston's approach is rather refreshing and practical. It is like a tool shed filled with tools that one may use without completely understanding their composition. This book is very accessible to people who are not very mathematically apt and provides a gentle introduction for those advanced in mathematics. If you want a general introduction to LP and NLP before you dive into the meat and potatoes (rigor and proofs), or if you want to just pick up some methods to optimize operations-related tasks, I highly recommend this book.
Very helpful book
Published by Thriftbooks.com User, 9 years ago
This isn't your regular review here, because I'm guessing we all "have" to buy this book when we take the course. This book is very helpful for students of all levels. I took it for an "operations research" class after more than 7 years of not studying any math. Still, the first few chapters offer a recap of the algebra needed for the course. In the subsequent chapters, everything progresses slowly with lots of examples (which is helpful if you miss a class or two). Good luck with the class using this book, then.
something stinks
Published by Thriftbooks.com User, 13 years ago
Totally useless information is pounded into your skull, with solutions to less than a third of the problems.
Good luck with this one. If it wasn't required to buy, I never would.
great, but the software doesn't work on windows 98/NT
Published by Thriftbooks.com User, 16 years ago
That's it - it's a very informative and thorough book, but its software only works on older operating systems.
Mathematical Background/Clarifying Examples:
Connect to students' previous knowledge of finding input and output values to plot points on a coordinate plane. This conversation will extend into a discussion of periodic functions and their relationships. After graphing functions by hand, students can examine how to graph the trig functions in radian mode on the graphing calculator. This may be easier to use when comparing multiple function transformations quickly.
Mathematical Background/Clarifying Examples:
Have students construct appropriate domain and range intervals of the sine, cosine, and tangent curves. They will need to use the graphs to determine the x-values for domain and y-values for range. This should lead to a discussion about possible input and output values of the basic trigonometric functions.
Mathematical Background/Clarifying Examples:
Allow students to explore changes to the a, b, c, and d values of a basic trigonometric function in standard form. The students may be able to construct their own concepts of how each value affects the function. After this exploration and discussion, be sure to clearly show students the values that affect amplitude, period, phase shifts, and vertical shifts.
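A quick way to make the a, b, c, and d exploration concrete is a short script. The standard form assumed here is y = a·sin(bx − c) + d (conventions vary by textbook), and the sample parameter values are purely illustrative:

```python
import math

# y = a*sin(b*x - c) + d with illustrative parameter values.
a, b, c, d = 3, 2, math.pi / 2, 1

amplitude = abs(a)                 # vertical stretch
period = 2 * math.pi / abs(b)      # horizontal compression
phase_shift = c / b                # horizontal shift
vertical_shift = d                 # vertical shift

f = lambda x: a * math.sin(b * x - c) + d
x = 0.7
assert abs(f(x) - f(x + period)) < 1e-9   # the curve repeats after one period
print(amplitude, period, phase_shift, vertical_shift)
```

Students can vary one parameter at a time and watch which of the four computed features changes.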
d. Sketch the cosecant, secant, and cotangent functions and their transformations.
Mathematical Background/Clarifying Examples:
Have the students connect what they just learned about sine, cosine, and tangent graphs to the reciprocal functions. Students can create a table of values to plot the reciprocal functions. This discovery activity may lead to conclusions about transformations of the reciprocal functions, including amplitude, period, phase shifts, and vertical shifts. Also, be sure to highlight any input values that cannot be included in the domain of the reciprocal function, thus creating vertical asymptotes.
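The table-of-values activity can be sketched programmatically; here the cosecant is tabulated at multiples of π/4, with None marking inputs excluded from the domain (the vertical asymptotes):

```python
import math

# csc(x) = 1/sin(x), tabulated at x = 0, pi/4, ..., pi. None marks an
# excluded input (sin(x) = 0), i.e. a vertical asymptote on the graph.
table = []
for k in range(5):
    s = math.sin(k * math.pi / 4)
    table.append(None if abs(s) < 1e-12 else round(1 / s, 4))
print(table)
```

The same loop with `math.cos` or `math.tan` in the denominator tabulates secant and cotangent.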
Mathematical Background/Clarifying Examples:
Have students sketch a graph of all trigonometric functions, giving careful attention to zeroes and any possible vertical asymptotes. Take note that to "graph" implies for the student to create a table of values and plot critical values for a graph, whereas a "sketch" implies for students to put asymptotes and key features on a graph and then draw a curve containing these key features. A sketch is not a precise graph of the function.
Also have students use the graph of a trigonometric function to create an equation of the function. In other words, students will have to work backwards from the graph to the equation. They will need to find the a, b, c, and d values to put into the standard form of the appropriate trigonometric function.
Example: This document is an example of identifying and writing a sine equation from a given graph.
Mathematical Background/Clarifying Examples:
Have students identify, use, and apply trigonometric graphs to model applied problems. While you may show students all 6 function graphs, you may want to emphasize the use of sine, cosine, and tangent graphs in real-life examples. This could include using sine regression for a periodic set of data, simple harmonic motion, and so on.
Example:
The following table gives the average monthly temperature for Cleveland, Ohio for the years 1971-2000. Create a model for the data.
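Since the temperature table itself is not reproduced here, the sketch below uses hypothetical monthly means shaped like a midwestern climate, and builds a quick sinusoidal model from the extremes rather than a calculator's sine regression:

```python
import math

# Hypothetical monthly mean temperatures (deg F), January..December.
temps = [26, 29, 39, 49, 60, 69, 73, 71, 64, 53, 42, 31]

# Amplitude and midline from the extremes; the period is fixed at 12 months.
A = (max(temps) - min(temps)) / 2         # amplitude
D = (max(temps) + min(temps)) / 2         # midline (vertical shift)
peak_month = temps.index(max(temps)) + 1  # month where the model peaks

def model(m):
    """Modeled temperature for month m = 1..12."""
    return A * math.cos(2 * math.pi * (m - peak_month) / 12) + D

print(A, D, peak_month, model(7))
```

A graphing calculator's sine regression would fit all twelve points at once; this extremes-based model is the by-hand version students can check against it.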
Students will be responsible for most of the reading, as lectures will consist primarily of motivating the material and encouraging discussion. I advise everyone seriously interested to buy the book, grab on, and get ready for a mind-expanding voyage into higher dimensions of recursive ...
offers a unique opportunity to understand the essence of one of the great thinkers of western civilization. A guided reading of Euclid's Elements leads to a critical discussion and rigorous modern treatment of Euclid's geometry and its more recent descendants, with complete proofs. Topics include the introduction of coordinates, the theory of area, history of the parallel postulate, the various non-Euclidean geometries, and the regular and semi-regular polyhedra.
Top Customer Reviews
Hartshorne is a famous algebraist and one main contribution of this text is to show fascinating interrelations between classical geometries and modern algebra (of course the book contains lots of pure geometry as well). Example 1: Many texts show the impossibility of the classical problems of constructibility by straightedge and compass (by observing that the coordinates of any point so constructed lie in the smallest extension field of the rationals Q closed under taking square roots of positive numbers). Hartshorne's is the only text that goes further, solving the analogous problem when the straightedge is marked (real roots of cubic and quartic equations must also be allowed); Archimedes observed that any angle can be trisected with these tools. Example 2. Dehn's solution to Hilbert's Third Problem is given, whereby any two polyhedra equivalent under dissection must have equal Dehn invariants, and it is shown that a tetrahedron has a different invariant than a cube. Example 3. In hyperbolic geometry, Hilbert's arithmetic of ends is developed and applied. Example 4. Pejas' algebraic classification of Hilbert planes is discussed. Hartshorne's text overlaps mine in correcting Euclid's errors, developing rigorous foundations for Euclidean and Non-Euclidean geometries, and covering much history, presented delightfully. He gives a thorough discussion of area and the open problems in that theory. He concludes with a nice chapter on polyhedra.
85 of 88 people found this helpful.
Hartshorne is a leading mathematician known for work in rather abstract geometry (see his book ALGEBRAIC GEOMETRY). He takes Euclid's ELEMENTS as great mathematics, no mere genial precursor, and collates it with Hilbert's FOUNDATIONS OF GEOMETRY. Of course Hartshorne proves that Euclid needed the parallel postulate, by exhibiting a non-Euclidean geometry. He gives a very pretty compass and straight-edge Euclidean theory of circles, which then turns into the Poincare plane model for hyperbolic geometry. He also proves that Euclid needed the method of exhaustion for volumes of solids: he gives the agreeably simple Dehn invariant proof that even a cube and a tetrahedron of equal volumes are not decomposable into congruent parts. It is a famous proof, rarely seen, and a beautiful use of the modern algebraic viewpoint in classical geometry. I had always supposed it must be hard but it is not. Hartshorne also develops the contested "geometric algebra" of Euclid as a modern axiomatic algebra. Many commentators have shown it is wrong to think Euclid was doing "algebra" in the sense of a disguised theory of the roots of quadratic polynomials. But (unless and until Fowler's THE MATHEMATICS OF PLATO'S ACADEMY changes my mind) I think it is reasonable to say Euclid is doing algebra in this sense.
35 of 35 people found this helpful.
This book reveals the love Professor Hartshorne has for geometry and Euclid. I became excited about the subject just reading the introduction. The book assumes the student knows high school geometry, which unfortunately eliminates many college students, but I am going to try to use it at least for the second part of my college course.
This is a really well written, expert, wonderfully enthusiastic book, about a great, absolutely classic topic, by a powerful world famous authority in geometry.
The organization assumes the student is reading Euclid concurrently. Then Prof. Hartshorne explains the difficulties with Euclid's treatment and shows how to remedy them. E.g., he observes Euclid's proof of SAS uses a principle of superposition without stating it; then, although he adopts the Hilbert option of making this an axiom, he also presents an alternative treatment in which the principle of superposition is an axiom, and SAS is then proved exactly as Euclid does. This sort of thing shows very clearly that Euclid's proofs become correct merely by clarifying his implicit assumptions.
I love this and think it enhances the subject enormously.
The exercises are so ambitious and far-reaching I at first dismissed them as unrealistic, but soon became infected with Dr. Hartshorne's enthusiasm for putting the students in touch with their best abilities, and challenging them to reach as deeply as they can.
This book is a remarkable work of scholarship, with far more content than one course can use. The student has here a work that will repay years of study. Again, the price makes it a bargain compared to far inferior works at double the price.
27 of 28 people found this helpful.
So this book answers one of the questions I always had. I had never had a complete reference of the axiomatization of geometry in my hands before.
I had read Professor Hartshorne's book Algebraic Geometry (Graduate Texts in Mathematics) before and arrived at the conclusion that this branch of mathematics is more an "algebraic" branch of mathematics than a "geometric" one. However, this book gave me the chance to see Professor Hartshorne as a geometer, not an algebraist as I had thought with the previous book. His style is excellent and conveys the geometric insight you want in a Geometry book.
Since I was told some years ago that geometry could be axiomatized, I had always hoped to see the structure being constructed. This book finally fulfilled my curiosity. I am indeed grateful to Professor Hartshorne just for writing this book.
13 of 14 people found this helpful.
Overcoming Math Anxiety
$15
The new edition retains the author's pungent analysis of what makes math "hard" for otherwise successful people and how women, more than men, become victims of a gendered view of math. It has been substantially updated to incorporate new research on what we know and don't know about "sex differences" in brain organization and function, and it has been enlarged to include problems, puzzles, and strategies tried out in hundreds of math-anxiety workshops Tobias and her colleagues have sponsored. The author sees "math anxiety" as a political issue. So long as people believe themselves to be disabled in mathematics and do not rise up and confront the social and pedagogical origins of their disabilities, they will be denied "math mental health" - the willingness to learn the math you need when you need it. In an ever more technical society, that can make the difference between low and high self-esteem, failure and success.
Algebra skills include understanding how signed numbers work in equations, what graphs look like, and how to factor polynomials. Each of these key concepts is foundational to variations seen in Algebra I; they also provide the background for advancement into higher levels of Algebra and beyond. I ...
Self-Checks, Getting Ready exercises, and Vocabulary and Concept problems. Use this text, and you'll learn solid mathematical skills that will help you both in future mathematical courses and in real life.
Product Description
The Pre-Algebra Tutor DVD series teaches the fundamentals of pre-algebra through step-by-step example problems that progress in difficulty. Emphasis is placed on giving students confidence in their skills by gradual repetition so that the skills learned are committed to long term memory. This program covers the important topic of teaching students the order of operations in algebra. This central topic teaches students the order in which to calculate expressions involving addition, subtraction, multiplication, division, parenthesis, exponents and more. Grades 9-College. 47 minutes on DVD.
DVD Playable in Bermuda, Canada, United States and U.S. territories. Please check if your equipment can play DVDs coded for this region. Learn more about DVDs and Videos.
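The order-of-operations rule the program teaches can be checked directly; the expression below is an illustrative example, worked step by step in the comments:

```python
# PEMDAS: Parentheses, Exponents, Multiplication/Division (left to right),
# then Addition/Subtraction (left to right).
expr_value = 3 + 4 * (2 + 1) ** 2 - 8 / 2
# Step by step: (2 + 1) = 3; 3 ** 2 = 9; 4 * 9 = 36; 8 / 2 = 4.0;
# then 3 + 36 - 4.0 = 35.0.
print(expr_value)
```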
Derivations of Applied Mathematics
Thaddeus H. Black Revised 19 April 2008
Thaddeus H. Black, 1967–. Derivations of Applied Mathematics. 19 April 2008. U.S. Library of Congress class QA401. Copyright © 1983–2008 by Thaddeus H. Black, thb@derivations.org. Published by the Debian Project [10]. This book is free software. You can redistribute and/or modify it under the terms of the GNU General Public License [16], version 2.
Preface

I never meant to write this book. It emerged unheralded, unexpectedly. The book began in 1983 when a high-school classmate challenged me to prove the Pythagorean theorem on the spot. I lost the dare, but looking the proof up later I recorded it on loose leaves, adding to it the derivations of a few other theorems of interest to me. From such a kernel the notes grew over time, until family and friends suggested that the notes might make the material for a book.

The book, a work yet in progress but a complete, entire book as it stands, first frames coherently the simplest, most basic derivations of applied mathematics, treating quadratics, trigonometrics, exponentials, derivatives, integrals, series, complex variables and, of course, the aforementioned Pythagorean theorem. These and others establish the book's foundation in Chapters 2 through 9. Later chapters build upon the foundation, deriving results less general or more advanced. Such is the book's plan.

The book is neither a tutorial on the one hand nor a bald reference on the other. It is a study reference, in the tradition of, for instance, Kernighan's and Ritchie's The C Programming Language [27]. In the book, you can look up some particular result directly, or you can begin on page one and read—with toil and commensurate profit—straight through to the end of the last chapter. The book is so organized.

The book generally follows established applied mathematical convention but where worthwhile occasionally departs therefrom. One particular respect in which the book departs requires some defense here, I think: the book employs hexadecimal numerals. There is nothing wrong with decimal numerals as such. Decimal numerals are fine in history and anthropology (man has ten fingers), finance and accounting (dollars, cents, pounds, shillings, pence: the base hardly matters), and law and engineering (the physical units are arbitrary anyway); but they are merely serviceable in mathematical theory, never aesthetic. There unfortunately really is no gradual way to bridge the gap to hexadecimal (shifting to base eleven, thence to twelve, etc., is no use). If one wishes to reach hexadecimal ground, one must leap. Twenty years of keeping my own private notes in hex have persuaded me that the leap justifies the risk. In other matters, by contrast, the book leaps seldom. The book in general walks a tolerably conventional applied mathematical line.

In typical science/engineering style, the book numbers its sections, tables, figures and formulas, but not its theorems, the last of which it generally sets in italic type. A theorem is referenced by the number of the section that states it. The book naturally subjoins an index. The canny reader will avoid using the index (of this and most other books), tending rather to consult the table of contents.

The book's peculiar mission and program lend it an unusual quantity of discursive footnotes. These footnotes offer nonessential material which, while edifying, coheres insufficiently well to join the main narrative. The footnote is an imperfect messenger, of course. Catching the reader's eye, it can break the flow of otherwise good prose. Modern publishing offers various alternatives to the footnote—numbered examples, sidebars, special fonts, colored inks, etc. Some of these are merely trendy. Others, like numbered examples, really do help the right kind of book; but for this book the humble footnote, long sanctioned by an earlier era of publishing, extensively employed by such sages as Gibbon [18] and Shirer [43], seems the most able messenger. In this book it shall have many messages to bear.

The book belongs to the emerging tradition of open-source software, where at the time of this writing it fills a void. Nevertheless it is a book, not a program. Lore among open-source developers holds that open development inherently leads to superior work. Well, maybe. Personally with regard to my own work, I should rather not make too many claims. It would be vain to deny that professional editing and formal peer review, neither of which the book enjoys, had substantial value. On the other hand, it does not do to despise the amateur (literally, one who does for the love of it: not such a bad motive, after all¹) on principle, either—unless one would on the same principle despise a Washington or an Einstein, or a Debian Developer [10]. Open source has a spirit to it which leads readers to be far more generous with their feedback than ever could be the case with a traditional, proprietary book. Such readers, among whom a surprising concentration of talent and expertise are found, enrich the work freely. This has value, too.

The book provides a bibliography listing other books I have referred to while writing. Mathematics by its very nature promotes queer bibliographies, however, for its methods and truths are established by derivation rather than authority. Much of the book consists of common mathematical knowledge or of proofs I have worked out with my own pencil from various ideas gleaned who knows where over the years. The latter proofs are perhaps original or semi-original from my personal point of view, but it is unlikely that many if any of them are truly new. To the initiated, the mathematics itself often tends to suggest the form of the proof: if to me, then surely also to others who came before; and even where a proof is new the idea proven probably is not.

Among the bibliography's entries stands a reference [6] to my doctoral adviser G. S. Brown, though the book's narrative seldom explicitly invokes the reference. Prof. Brown had nothing directly to do with the book's development, of course, for a substantial bulk of the manuscript, or of the notes underlying it, had been drafted years before I had ever heard of Prof. Brown, and my work under him did not regard the book in any case. However, the ways in which a good doctoral adviser influences his student are both complex and deep. Prof. Brown's style and insight touch the book in many places and in many ways, usually too implicitly coherently to cite.

How other authors go about writing their books, I do not know, but I suppose that what is true for me is true for many of them also: we begin by organizing notes for our own use, then observe that the same notes might prove useful to others, and then undertake to revise the notes and to bring them into a form which actually is useful to others. What constitutes "useful" or "orderly" is a matter of perspective and judgment, of course. My own peculiar heterogeneous background in military service, building construction, electrical engineering, electromagnetic analysis and Debian development, my nativity, residence and citizenship in the United States, undoubtedly bias the selection and presentation to some degree. As to a grand goal, underlying purpose or hidden plan, the book has none, other than to derive as many useful mathematical results as possible and to record the derivations together in an orderly manner in a single volume. Whether this book succeeds in the last point is for the reader to judge.

THB

¹ The expression is derived from an observation I seem to recall George F. Will making.
Chapter 1

Introduction

This is a book of applied mathematical proofs. If you have seen a mathematical result, if you want to know why the result is so, you can look for the proof here.

The book's purpose is to convey the essential ideas underlying the derivations of a large number of mathematical results useful in the modeling of physical systems. To this end, the book emphasizes main threads of mathematical argument and the motivation underlying the main threads, deëmphasizing formal mathematical rigor. It derives mathematical results from the purely applied perspective of the scientist and the engineer.

The book's chapters are topical. This first chapter treats a few introductory matters of general interest.

1.1 Applied mathematics

What is applied mathematics?

Applied mathematics is a branch of mathematics that concerns itself with the application of mathematical knowledge to other domains. . . . The question of what is applied mathematics does not answer to logical classification so much as to the sociology of professionals who use mathematics. [30]

That is about right, on both counts. In this book we shall define applied mathematics to be correct mathematics useful to scientists, engineers and the like, proceeding not from reduced, well defined sets of axioms but rather directly from a nebulous mass of natural arithmetical, geometrical and classical-algebraic idealizations of physical systems, demonstrable but generally lacking the detailed rigor of the professional mathematician.
1.2 Rigor
It is impossible to write such a book as this without some discussion of mathematical rigor. Applied and professional mathematics differ principally and essentially in the layer of abstract definitions the latter subimposes beneath the physical ideas the former seeks to model. Notions of mathematical rigor fit far more comfortably in the abstract realm of the professional mathematician; they do not always translate so gracefully to the applied realm. The applied mathematical reader and practitioner needs to be aware of this difference.
1.2.1 Axiom and definition
Ideally, a professional mathematician knows or precisely specifies in advance the set of fundamental axioms he means to use to derive a result. A prime aesthetic here is irreducibility: no axiom in the set should overlap the others or be specifiable in terms of the others. Geometrical argument—proof by sketch—is distrusted. The professional mathematical literature discourages undue pedantry indeed, but its readers do implicitly demand a convincing assurance that its writers could derive results in pedantic detail if called upon to do so. Precise definition here is critically important, which is why the professional mathematician tends not to accept blithe statements such as that 1/0 = ∞, without first inquiring as to exactly what is meant by symbols like 0 and ∞. The applied mathematician begins from a different base. His ideal lies not in precise definition or irreducible axiom, but rather in the elegant modeling of the essential features of some physical system. Here, mathematical definitions tend to be made up ad hoc along the way, based on previous experience solving similar problems, adapted implicitly to suit the model at hand. If you ask the applied mathematician exactly what his axioms are, which symbolic algebra he is using, he usually doesn't know; what he knows is that the bridge has its footings in certain soils with specified tolerances, suffers such-and-such a wind load, etc. To avoid error, the applied mathematician relies not on abstract formalism but rather on a thorough mental grasp of the essential physical features of the phenomenon he is trying to model. An equation like 1/0 = ∞
may make perfect sense without further explanation to an applied mathematical readership, depending on the physical context in which the equation is introduced. Geometrical argument—proof by sketch—is not only trusted but treasured. Abstract definitions are wanted only insofar as they smooth the analysis of the particular physical problem at hand; such definitions are seldom promoted for their own sakes. The irascible Oliver Heaviside, responsible for the important applied mathematical technique of phasor analysis, once said, It is shocking that young people should be addling their brains over mere logical subtleties, trying to understand the proof of one obvious fact in terms of something equally . . . obvious. [35] Exaggeration, perhaps, but from the applied mathematical perspective Heaviside nevertheless had a point. The professional mathematicians Richard Courant and David Hilbert put it more soberly in 1924 when they wrote, Since the seventeenth century, physical intuition has served as a vital source for mathematical problems and methods. Recent trends and fashions have, however, weakened the connection between mathematics and physics; mathematicians, turning away from the roots of mathematics in intuition, have concentrated on refinement and emphasized the postulational side of mathematics, and at times have overlooked the unity of their science with physics and other fields. In many cases, physicists have ceased to appreciate the attitudes of mathematicians. [8, Preface] Although the present book treats "the attitudes of mathematicians" with greater deference than some of the unnamed 1924 physicists might have done, still, Courant and Hilbert could have been speaking for the engineers and other applied mathematicians of our own day as well as for the physicists of theirs. To the applied mathematician, the mathematics is not principally meant to be developed and appreciated for its own sake; it is meant to be used. 
This book adopts the Courant-Hilbert perspective. The introduction you are now reading is not the right venue for an essay on why both kinds of mathematics—applied and professional (or pure)— are needed. Each kind has its place; and although it is a stylistic error to mix the two indiscriminately, clearly the two have much to do with one another. However this may be, this book is a book of derivations of applied mathematics. The derivations here proceed by a purely applied approach.
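One concrete place the applied reading of 1/0 = ∞ shows up is IEEE-754 floating-point arithmetic, where an infinity is an admissible, well-behaved value (a side observation of ours, not a claim from the book):

```python
import math

# Python raises on a literal 1.0 / 0.0, but the IEEE-754 value inf
# captures the applied convention that 1/x grows without bound as x -> 0+.
inf = float("inf")
assert inf > 1e308                  # larger than any finite double
assert 1.0 / inf == 0.0             # the reciprocal reading of 1/0 = inf
assert math.isinf(inf + 1.0)        # arithmetic with inf stays consistent
print(1.0 / inf)
```

Whether this behavior is a feature or a trap depends, exactly as the text says, on the physical context.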
Figure 1.1: Two triangles. (The left triangle has base b = b1 + b2 and height h; the right triangle has base b and an overhang labeled −b2.)
1.2.2 Mathematical extension
Profound results in mathematics are occasionally achieved simply by extending results already known. For example, negative integers and their properties can be discovered by counting backward—3, 2, 1, 0—then asking what follows (precedes?) 0 in the countdown and what properties this new, negative integer must have to interact smoothly with the already known positives. The astonishing Euler's formula (§ 5.4) is discovered by a similar but more sophisticated mathematical extension. More often, however, the results achieved by extension are unsurprising and not very interesting in themselves. Such extended results are the faithful servants of mathematical rigor. Consider for example the triangle on the left of Fig. 1.1. This triangle is evidently composed of two right triangles of areas

A1 = b1 h / 2,
A2 = b2 h / 2

(each right triangle is exactly half a rectangle). Hence the main triangle's area is

A = A1 + A2 = (b1 + b2) h / 2 = b h / 2.

Very well. What about the triangle on the right? Its b1 is not shown on the figure, and what is that −b2, anyway? Answer: the triangle is composed of the difference of two right triangles, with b1 the base of the larger, overall one: b1 = b + (−b2). The b2 is negative because the sense of the small right triangle's area in the proof is negative: the small area is subtracted from
the large rather than added. By extension on this basis, the main triangle's area is again seen to be A = bh/2. The proof is exactly the same. In fact, once the central idea of adding two right triangles is grasped, the extension is really rather obvious—too obvious to be allowed to burden such a book as this. Excepting the uncommon cases where extension reveals something interesting or new, this book generally leaves the mere extension of proofs— including the validation of edge cases and over-the-edge cases—as an exercise to the interested reader.
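The two-right-triangle argument can be checked numerically for both cases; the side lengths below are arbitrary illustrative values:

```python
# Area of the main triangle as the (signed) sum of two right triangles:
# A1 + A2 = b1*h/2 + b2*h/2 = (b1 + b2)*h/2 = b*h/2.
def area(b1, b2, h):
    return b1 * h / 2 + b2 * h / 2

b, h = 5.0, 4.0
assert area(3.0, 2.0, h) == b * h / 2    # acute case: 3 + 2 = 5
assert area(7.0, -2.0, h) == b * h / 2   # obtuse case: 7 + (-2) = 5
print(area(3.0, 2.0, h))
```

The same algebra serves both figures, which is the point of the extension.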
1.3 Complex numbers and complex variables
More than a mastery of mere logical details, it is an holistic view of the mathematics and of its use in the modeling of physical systems which is the mark of the applied mathematician. A feel for the math is the great thing. Formal definitions, axioms, symbolic algebras and the like, though often useful, are felt to be secondary. The book's rapidly staged development of complex numbers and complex variables is planned on this sensibility. Sections 2.12, 3.10, 3.11, 4.3.3, 4.4, 6.2, 9.5 and 9.6.5, plus all of Chs. 5 and 8, constitute the book's principal stages of complex development. In these sections and throughout the book, the reader comes to appreciate that most mathematical properties which apply for real numbers apply equally for complex, that few properties concern real numbers alone. Pure mathematics develops an abstract theory of the complex variable.1 The abstract theory is quite beautiful. However, its arc takes off too late and flies too far from applications for such a book as this. Less beautiful but more practical paths to the topic exist;2 this book leads the reader along one of these.
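The claim that most real-number properties carry over to complex numbers is easy to spot-check numerically (an illustration, of course, not a proof):

```python
import cmath
import math

# Identities familiar from the reals hold verbatim for complex arguments.
z = 0.3 + 0.4j
assert abs(cmath.sin(z) ** 2 + cmath.cos(z) ** 2 - 1) < 1e-12
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12            # Euler: e^(i*pi) = -1
assert abs(cmath.exp(z) - math.exp(0.3) * cmath.exp(0.4j)) < 1e-12
print(cmath.exp(1j * math.pi))
```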
1.4
On the text
The book gives numerals in hexadecimal. It denotes variables in Greek letters as well as Roman. Readers unfamiliar with the hexadecimal notation will find a brief orientation thereto in Appendix A. Readers unfamiliar with the Greek alphabet will find it in Appendix B. Licensed to the public under the GNU General Public Licence [16], version 2, this book meets the Debian Free Software Guidelines [11].
1 [14][44][24]
2 See Ch. 8's footnote 8.
CHAPTER 1. INTRODUCTION
If you cite an equation, section, chapter, figure or other item from this book, it is recommended that you include in your citation the book's precise draft date as given on the title page. The reason is that equation numbers, chapter numbers and the like are numbered automatically by the LaTeX typesetting software: such numbers can change arbitrarily from draft to draft. If an example citation helps, see [5] in the bibliography.
Chapter 2
Classical algebra and geometry
Arithmetic and the simplest elements of classical algebra and geometry, we learn as children. Few readers will want this book to begin with a treatment of 1 + 1 = 2; or of how to solve 3x − 2 = 7. However, there are some basic points which do seem worth touching. The book starts with these.
2.1 Basic arithmetic relationships
This section states some arithmetical rules.
2.1.1 Commutivity, associativity, distributivity, identity and inversion
Table 2.1 lists several arithmetical rules, each of which applies not only to real numbers but equally to the complex numbers of § 2.12. Most of the rules are appreciated at once if the meaning of the symbols is understood. In the case of multiplicative commutivity, one imagines a rectangle with sides of lengths a and b, then the same rectangle turned on its side, as in Fig. 2.1: since the area of the rectangle is the same in either case, and since the area is the length times the width in either case (the area is more or less a matter of counting the little squares), evidently multiplicative commutivity holds. A similar argument validates multiplicative associativity, except that here we compute the volume of a three-dimensional rectangular box, which box we turn various ways.1
1 Multiplicative inversion lacks an obvious interpretation when a = 0. Loosely, 1/0 = ∞; but since 3/0 = ∞ also, surely either the zero or the infinity, or both, somehow differ in the latter case.

Looking ahead in the book, we note that the multiplicative properties do not always hold for more general linear transformations. For example, matrix multiplication is not commutative and vector cross-multiplication is not associative. Where associativity does not hold and parentheses do not otherwise group, right-to-left association is notationally implicit:2,3

    A × B × C = A × (B × C).

The sense of it is that the thing on the left (A×) operates on the thing on the right (B × C). (In the rare case in which the question arises, you may want to use parentheses anyway.)
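The claim that matrix multiplication is not commutative is easy to check numerically. The following sketch is mine, not the book's; the particular 2×2 matrices are arbitrary illustrative choices:

```python
def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

AB = matmul2(A, B)
BA = matmul2(B, A)
# AB and BA differ: matrix multiplication is not commutative in general.
```

For these matrices, AB = [[2, 1], [4, 3]] while BA = [[3, 4], [1, 2]], so the order of the factors matters.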
2.1.2 Negative numbers
Consider that

    (+a)(+b) = +ab,
    (+a)(−b) = −ab,
    (−a)(+b) = −ab,
    (−a)(−b) = +ab.

The first three of the four equations are unsurprising, but the last is interesting. Why would a negative count −a of a negative quantity −b come to a positive product +ab?

2 The fine C and C++ programming languages are unfortunately stuck with the reverse order of association, along with division inharmoniously on the same level of syntactic precedence as multiplication. Standard mathematical notation is more elegant: abc/uvw = [(a)(bc)]/[(u)(vw)].
The applied mathematician very often finds it convenient to change variables, introducing new symbols to stand in place of old. For this we have the change of variable or assignment notation5

    Q ← P.

This means, "in place of P, put Q"; or, "let Q now equal P." For example, if a² + b² = c², then the change of variable 2µ ← a yields the new form (2µ)² + b² = c².

Similar to the change of variable notation is the definition notation

    Q ≡ P.

This means, "let the new symbol Q represent P."6

The two notations logically mean about the same thing. Subjectively, Q ≡ P identifies a quantity P sufficiently interesting to be given a permanent name Q, whereas Q ← P implies nothing especially interesting about P or Q; it just introduces a (perhaps temporary) new symbol Q to ease the algebra. The concepts grow clearer as examples of the usage arise in the book.

4 Few readers attempting this book will need to be reminded that < means "is less than," that > means "is greater than," or that ≤ and ≥ respectively mean "is less than or equal to" and "is greater than or equal to."
5 There appears to exist no broadly established standard mathematical notation for the change of variable, other than the = equal sign, which regrettably does not fill the role well. One can indeed use the equal sign, but then what does the change of variable k = k + 1 mean? It looks like a claim that k and k + 1 are the same, which is impossible. The notation k ← k + 1 by contrast is unambiguous; it means to increment k by one. However, the latter notation admittedly has seen only scattered use in the literature. The C and C++ programming languages use == for equality and = for assignment (change of variable), as the reader may be aware. In some books, ≡ is printed as ≜.
6 One would never write k ≡ k + 1. Even k ← k + 1 can confuse readers inasmuch as it appears to imply two different values for the same symbol k, but the latter notation is sometimes used anyway when new symbols are unwanted or because more precise alternatives (like kn = kn−1 + 1) seem overwrought. Still, usually it is better to introduce a new symbol, as in j ← k + 1.
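The distinction footnote 5 draws for C and C++ carries over to Python, where the same experiment can be run directly: = is assignment, the book's k ← k + 1, while == tests equality. A minimal illustration (mine, not the book's):

```python
# "=" below is assignment, i.e. the book's change-of-variable k <- k + 1;
# it increments k by one rather than claiming k and k + 1 are the same.
k = 5
k = k + 1

# "==" is the ordinary mathematical "=", a test of equality.
incremented = (k == 6)
```

The apparent paradox of k = k + 1 dissolves once the symbol on the left is read as naming the *new* value.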
2.2 Quadratics

Differences and sums of squares are conveniently factored as

    a² − b² = (a + b)(a − b),
    a² + b² = (a + ib)(a − ib),
    a² + 2ab + b² = (a + b)²,
    a² − 2ab + b² = (a − b)²        (2.1)

(where i is the imaginary unit, a number defined such that i² = −1, introduced in more detail in § 2.12 below). Useful as these four forms are, however, none of them can directly factor the more general quadratic7 expression

    z² − 2βz + γ².

To factor this, we complete the square, writing

    z² − 2βz + γ² = z² − 2βz + γ² + (β² − γ²) − (β² − γ²)
                  = z² − 2βz + β² − (β² − γ²)
                  = (z − β)² − (β² − γ²).

The expression evidently has roots8 where

    (z − β)² = (β² − γ²),

or in other words where

    z = β ± √(β² − γ²).        (2.2)

This suggests the factoring9

    z² − 2βz + γ² = (z − z1)(z − z2),        (2.3)

where z1 and z2 are the two values of z given by (2.2). It follows that the two solutions of the quadratic equation z² = 2βz − γ² are those given by (2.2), which is called the quadratic formula.10 (Cubic and quartic formulas also exist respectively to extract the roots of polynomials of third and fourth order, but they are much harder. See Ch. 10 and its Tables 10.1 and 10.2.)

7 The adjective quadratic refers to the algebra of expressions in which no term has greater than second order. Examples of quadratic expressions include x², 2x² − 7x + 3 and x² + 2xy + y². By contrast, the expressions x³ − 1 and 5x²y are cubic not quadratic because they contain third-order terms. First-order expressions like x + 1 are linear; zeroth-order expressions like 3 are constant. Expressions of fourth and fifth order are quartic and quintic, respectively. (If not already clear from the context, order basically refers to the number of variables multiplied together in a term. The term 5x²y = 5[x][x][y] is of third order, for instance.)
8 A root of f(z) is a value of z for which f(z) = 0. See § 2.11.
9 It suggests it because the expressions on the left and right sides of (2.3) are both quadratic (the highest power is z²) and have the same roots. Substituting into the equation the values of z1 and z2 and simplifying proves the suggestion correct.
10 The form of the quadratic formula which usually appears in print is x = [−b ± √(b² − 4ac)]/2a, which solves the quadratic ax² + bx + c = 0. However, this writer finds the form (2.2) easier to remember. For example, by (2.2) the quadratic z² = 3z − 2 has the solutions z = 3/2 ± √[(3/2)² − 2] = 1 or 2.
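The quadratic formula in the book's form (2.2) can be spot-checked numerically against the example z² = 3z − 2. The helper name below is mine; `cmath.sqrt` is used so that the sketch also covers the case β² − γ² < 0:

```python
import cmath

def quadratic_roots(beta, gamma_sq):
    """Roots of z**2 - 2*beta*z + gamma**2 = 0 per the form (2.2):
    z = beta +/- sqrt(beta**2 - gamma**2)."""
    d = cmath.sqrt(beta * beta - gamma_sq)
    return beta + d, beta - d

# The quadratic z**2 = 3z - 2, i.e. z**2 - 3z + 2 = 0, has beta = 3/2
# and gamma**2 = 2; per (2.2) its roots are 2 and 1.
z1, z2 = quadratic_roots(1.5, 2.0)
```

Multiplying (z − z1)(z − z2) back out recovers z² − 3z + 2, confirming the factoring (2.3) for this case.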
2.3 Notation for series sums and products
Sums and products of series arise so frequently in mathematical work that one finds it convenient to define terse notations to express them. The summation notation

    Σ_{k=a}^{b} f(k)

means to let k equal each of a, a + 1, a + 2, ..., b in turn, evaluating the function f(k) at each k, then adding the several f(k). For example,11

    Σ_{k=3}^{6} k² = 3² + 4² + 5² + 6² = 0x56.

The similar multiplication notation

    Π_{j=a}^{b} f(j)

means to multiply the several f(j) rather than to add them. The symbols Σ and Π come respectively from the Greek letters for S and P, and may be regarded as standing for "Sum" and "Product." The j or k is a dummy variable, index of summation or loop counter, a variable with no independent existence, used only to facilitate the addition or multiplication of the series.12 (Nothing prevents one from writing k rather than j, incidentally. For a dummy variable, one can use any letter one likes. However, the general habit of writing k in the sum and j in the product proves convenient at least in § 4.5.2 and Ch. 8, so we start now.)

On first encounter, such notation seems a bit overwrought. Admittedly it is easier for the beginner to read "f(1) + f(2) + ··· + f(N)" than "Σ_{k=1}^{N} f(k)." However, experience shows the latter notation to be extremely useful in expressing more sophisticated mathematical ideas. We shall use such notation extensively in this book.

11 The hexadecimal numeral 0x56 represents the same number the decimal numeral 86 represents. The book's preface explains why the book represents such numbers in hexadecimal. Appendix A tells how to read the numerals.
12 Section 7.3 speaks further of the dummy variable.
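The summation notation maps directly onto a loop over the dummy variable. A minimal sketch (the helper name is mine), reproducing the text's example Σ_{k=3}^{6} k² = 0x56:

```python
def series_sum(f, a, b):
    """Evaluate sum_{k=a}^{b} f(k): let the dummy variable k run from
    a through b, evaluate f at each k, and add the results."""
    return sum(f(k) for k in range(a, b + 1))

total = series_sum(lambda k: k * k, 3, 6)   # 3^2 + 4^2 + 5^2 + 6^2
# 86 in decimal, which the book writes hexadecimally as 0x56.
```

Note that when b < a the loop body never runs and the sum is 0, anticipating the empty-sum convention stated later in the section.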
The product shorthand

    n! ≡ Π_{j=1}^{n} j,
    n!/m! ≡ Π_{j=m+1}^{n} j        (2.5)

is very frequently used. The notation n! is pronounced "n factorial." Regarding the notation n!/m!, this can of course be regarded correctly as n! divided by m!, but it usually proves more amenable to regard the notation as a single unit.13

Because multiplication in its more general sense as linear transformation (§ 11.1.1) is not always commutative, we specify that

    Π_{j=a}^{b} f(j) = [f(b)][f(b − 1)][f(b − 2)] ··· [f(a + 2)][f(a + 1)][f(a)]

rather than the reverse order of multiplication.14 Multiplication proceeds from right to left. In the event that the reverse order of multiplication is needed, we shall use the notation

    ∐_{j=a}^{b} f(j) = [f(a)][f(a + 1)][f(a + 2)] ··· [f(b − 2)][f(b − 1)][f(b)].

Note that for the sake of definitional consistency,

    Σ_{k=N+1}^{N} f(k) = 0,
    Π_{j=N+1}^{N} f(j) = 1.

This means among other things that

    0! = 1.

13 One reason among others for this is that factorials rapidly multiply to extremely large sizes, overflowing computer registers during numerical computation. If you can avoid unnecessary multiplication by regarding n!/m! as a single unit, this is a win.
14 The extant mathematical literature lacks an established standard on the order of multiplication implied by the "Π" symbol, but this is the order we shall use in this book.
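Footnote 13's point, that n!/m! is best computed as the single product Π_{j=m+1}^{n} j rather than as two large factorials, is easy to realize in code. The helper names here are mine:

```python
def series_product(f, a, b):
    """Evaluate prod_{j=a}^{b} f(j), with the empty-product convention
    that the result is 1 when b < a (whence 0! = 1)."""
    p = 1
    for j in range(a, b + 1):
        p *= f(j)
    return p

def fact_ratio(n, m):
    """n!/m! regarded as a single unit, prod_{j=m+1}^{n} j per (2.5),
    never forming the large intermediate factorials."""
    return series_product(lambda j: j, m + 1, n)
```

For instance, `fact_ratio(10, 7)` multiplies only 8 · 9 · 10 = 720, where the naive route would first build 10! = 3,628,800.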
2.4 The arithmetic series

A simple yet useful application of the series sum of § 2.3 is the arithmetic series

    Σ_{k=a}^{b} k = a + (a + 1) + (a + 2) + ··· + b.

Pairing a with b, then a + 1 with b − 1, then a + 2 with b − 2, etc., the average of each pair is [a + b]/2; thus the average of the entire series is [a + b]/2. (The pairing may or may not leave an unpaired element at the series midpoint k = [a + b]/2, but this changes nothing.) The series has b − a + 1 terms. Hence,

    Σ_{k=a}^{b} k = (b − a + 1)(a + b)/2.        (2.6)

Success with this arithmetic series leads one to wonder about the geometric series Σ_{k=0}^{∞} z^k. Section 2.6.4 addresses that point.

2.5 Powers and roots

This necessarily tedious section discusses powers and roots. It offers no surprises. Table 2.2 summarizes its definitions and results. Readers seeking more rewarding reading may prefer just to glance at the table and then to skip directly to the start of the next section.

In this section, the exponents k, m, n, p, q, r and s are integers,15 but the exponents a and b are arbitrary real numbers.

15 In case the word is unfamiliar to the reader who has learned arithmetic in another language than English, the integers are the negative, zero and positive counting numbers ..., −3, −2, −1, 0, 1, 2, 3, .... The corresponding adjective is integral (although the word "integral" is also used as a noun and adjective indicating an infinite sum of infinitesimals; see Ch. 7). Traditionally, the letters i, j, k, m, n, M and N are used to represent integers (i is sometimes avoided because the same letter represents the imaginary unit), but this section needs more integer symbols so it uses p, q, r and s, too.
In any case, with

    (z^{1/n})^n = z = (z^n)^{1/n}        (2.12)

for any z and integral n, the power and root operations mutually invert one another. The number z^{1/n} is called the nth root of z, or in the very common case n = 2, the square root of z, often written √z. When z is real and nonnegative, the last notation is usually implicitly taken to mean the real, nonnegative square root.

What about powers expressible neither as n nor as 1/n, such as the 3/2 power? If z and w are numbers related by

    w^q = z,

then w^{pq} = z^p. Taking the qth root, w^p = (z^p)^{1/q}. But w = z^{1/q}, so this is (z^{1/q})^p = (z^p)^{1/q}, which says that it does not matter whether one applies the power or the root first: the result is the same. Extending (2.12), then, we define z^{p/q} such that

    (z^{1/q})^p = z^{p/q} = (z^p)^{1/q}.        (2.13)

Since any real number can be approximated arbitrarily closely by a ratio of integers, (2.13) implies a power definition for all real exponents. Equation (2.13) is this subsection's main result. However, § 2.5.3 will find it useful if we can also show here that

    (z^{1/q})^{1/s} = z^{1/qs} = (z^{1/s})^{1/q},        (2.14)

which follows by similar reasoning: if v ≡ (z^{1/s})^{1/q}, then v^{qs} = z, whence v = z^{1/qs}; and likewise for (z^{1/q})^{1/s}. With such definitions the results apply not only for all bases but also for all exponents.

2.5.3 Sums of powers

With (2.13) and (2.14),

    z^{(p/q)(r/s)} = (z^{r/s})^{p/q},
    z^{(p/q)+(r/s)} = (z^{ps+rq})^{1/qs} = (z^{ps} z^{rq})^{1/qs} = z^{p/q} z^{r/s}.

Since p/q and r/s can approximate any real numbers with arbitrary precision, one can reason that

    (z^a)^b = z^{ab} = (z^b)^a,        (2.15)
    z^{a+b} = z^a z^b,        (2.16)

for any real a and b. In the case that a = −b,

    1 = z^{−b+b} = z^{−b} z^b,

which implies that

    z^{−b} = 1/z^b.        (2.17)

But then replacing −b ← b in (2.16) leads to z^{a−b} = z^a z^{−b}, which according to (2.17) is

    z^{a−b} = z^a / z^b.        (2.18)

2.5.5 Summary and remarks

Table 2.2 on page 16 summarizes the section's definitions and results. Before closing the section, we observe that nothing in the foregoing analysis requires the base variables z, w, u and v to be real numbers; if complex (§ 2.12), the formulas remain valid. The analysis does, however, imply that the various exponents m, n, p/q, a, b and so on be real numbers. This restriction, we shall remove later, in § 3.11 and Ch. 5, purposely defining the action of a complex exponent to comport with the results found here.
2.6 Multiplying and dividing power series

A power series18 is a weighted sum of integral powers:

    A(z) = Σ_{k=−∞}^{∞} a_k z^k,        (2.20)

where the several a_k are arbitrary constants. This section discusses the multiplication and division of power series.

18 Another name for the power series is polynomial. The two names in fact refer to essentially the same thing, but professional mathematicians use the terms more precisely. They call (2.20) a "power series" only if a_k = 0 for all k < 0; in other words, not if it includes negative powers of z. They call it a "polynomial" only if it is a "power series" with a finite number of terms. They call (2.20) in general a Laurent series. The name "Laurent series" is a name we shall meet again in § 8.14. In the meantime however we admit that the professionals have vaguely daunted us by adding to the name some pretty sophisticated connotations, to the point that we applied mathematicians (at least in the author's country) seem to feel somehow unlicensed actually to use the name. You however can call (2.20) a Laurent series if you prefer (and if you pronounce it right: "lor-ON"). That is after all exactly what it is. Nevertheless if you do use the name "Laurent series," be prepared for people subjectively, for no particular reason, to expect you to establish complex radii of convergence, to sketch some annulus in the Argand plane, and/or to engage in other maybe unnecessary formalities. If that is not what you seek, then you may find it better just to call the thing by the less lofty name of "power series"; or better, if it has a finite number of terms, by the even humbler name of "polynomial." The experienced scientist or engineer may notice that the above vocabulary omits the name "Taylor series." The vocabulary omits the name because that name fortunately remains unconfused in usage: it means quite specifically a power series without negative powers and tends to connote a representation of some particular function of interest, as we shall see in Ch. 8. The word "polynomial" usually connotes a power series with a finite number of terms. All these names mean about the same thing. Semantics. What a bother! (Someone once told the writer that the Japanese language can give different names to the same object, depending on whether the speaker is male or female. The power-series terminology seems to share a spirit of that kin, but one is expected most carefully always to give the right name in the right place.) If you seek just one word for the thing, the writer recommends that you call it a "power series" and then not worry too much about it until someone objects. When someone does object, you can snow him with the big word "Laurent series." This book follows the last usage.
2.6.1 Multiplying power series

Given two power series

    A(z) = Σ_{k=−∞}^{∞} a_k z^k,
    B(z) = Σ_{k=−∞}^{∞} b_k z^k,        (2.21)

the product of the two series is evidently

    P(z) ≡ A(z)B(z) = Σ_{k=−∞}^{∞} Σ_{j=−∞}^{∞} a_j b_{k−j} z^k.        (2.22)

2.6.2 Dividing power series

The quotient Q(z) = B(z)/A(z) of two power series is a little harder to calculate, and there are at least two ways to do it. Section 2.6.3 below will do it by matching coefficients, but this subsection does it by long division. (Be advised however that the cleverer technique of § 2.6.3, though less direct, is often easier and faster.) For example,

    (2z² − 3z + 3)/(z − 2) = (2z² − 4z)/(z − 2) + (z + 3)/(z − 2)
                           = 2z + (z − 2)/(z − 2) + 5/(z − 2)
                           = 2z + 1 + 5/(z − 2).

The strategy is to take the dividend19 B(z) piece by piece, purposely choosing pieces easily divided by A(z). If you feel that you understand the example, then that is really all there is to it, and you can skip the rest of the subsection if you like. One sometimes wants to express the long division of power series more formally, however. That is what the rest of the subsection is about.

19 If Q(z) is a quotient and R(z) a remainder, then B(z) is a dividend (or numerator) and A(z) a divisor (or denominator). Such are the Latin-derived names of the parts of a long division.
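The convolution sum (2.22) is directly computable for finite series. The sketch below (helper name mine) represents a polynomial as a list whose index k holds the coefficient of z^k:

```python
def multiply_series(a, b):
    """Multiply two polynomials given as coefficient lists
    [c0, c1, c2, ...] (index k holds the coefficient of z^k),
    per the convolution sum (2.22): p_k = sum_j a_j * b_(k-j)."""
    p = [0] * (len(a) + len(b) - 1)
    for j, aj in enumerate(a):
        for i, bi in enumerate(b):
            p[j + i] += aj * bi
    return p

# (1 + z)(1 - z) = 1 - z^2:
diff_of_squares = multiply_series([1, 1], [1, -1])
# Multiplying the text's quotient example back out:
# (z - 2)(2z^2 - 3z + 3) = 2z^3 - 7z^2 + 9z - 6.
recovered = multiply_series([3, -3, 2], [-2, 1])
```

The second call recovers the dividend of the long-division example, which is one way to verify a computed quotient.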
Formally, we prepare the long division B(z)/A(z) by writing

    B(z) = A(z)Q_n(z) + R_n(z),        (2.23)

where R_n(z) is a remainder (being the part of B[z] remaining to be divided), and

    A(z) = Σ_{k=−∞}^{K} a_k z^k,  a_K ≠ 0,
    B(z) = Σ_{k=−∞}^{N} b_k z^k,
    R_N(z) = B(z),
    Q_N(z) = 0,
    R_n(z) = Σ_{k=−∞}^{n} r_{nk} z^k,
    Q_n(z) = Σ_{k=n−K+1}^{N−K} q_k z^k,        (2.24)

where K and N identify the greatest orders k of z^k present in A(z) and B(z), respectively.

Well, that is a lot of symbology. What does it mean? The key to understanding it lies in understanding (2.23), which is not one but several equations, one equation for each value of n, where n = N, N − 1, N − 2, .... The dividend B(z) and the divisor A(z) stay the same from one n to the next, but the quotient Q_n(z) and the remainder R_n(z) change. At start, Q_N(z) = 0 while R_N(z) = B(z), but the thrust of the long division process is to build Q_n(z) up by wearing R_n(z) down. The goal is to grind R_n(z) away to nothing, to make it disappear as n → −∞.

As in the example, we pursue the goal by choosing from R_n(z) an easily divisible piece containing the whole high-order term of R_n(z). The piece we choose is (r_{nn}/a_K) z^{n−K} A(z), which we add and subtract from (2.23) to obtain

    B(z) = A(z) [Q_n(z) + (r_{nn}/a_K) z^{n−K}] + [R_n(z) − (r_{nn}/a_K) z^{n−K} A(z)].

Matching this equation against the desired iterate B(z) = A(z)Q_{n−1}(z) + R_{n−1}(z) and observing from the definition of Q_n(z) that Q_{n−1}(z) = Q_n(z) + q_{n−K} z^{n−K}, we find that

    q_{n−K} = r_{nn}/a_K,
    R_{n−1}(z) = R_n(z) − q_{n−K} z^{n−K} A(z),        (2.25)

where no term remains in R_{n−1}(z) higher than a z^{n−1} term.

To begin the actual long division, we initialize

    R_N(z) = B(z),

for which (2.23) is trivially true. Then we iterate per (2.25) as many times as desired. If an infinite number of times, then so long as R_n(z) tends to vanish as n → −∞, it follows from (2.23) that

    B(z)/A(z) = Q_{−∞}(z).        (2.26)

Iterating only a finite number of times leaves a remainder,

    B(z)/A(z) = Q_n(z) + R_n(z)/A(z),        (2.27)

except that it may happen that R_n(z) = 0 for sufficiently small n.

Table 2.3 summarizes the long-division procedure.20 In its q_{n−K} equation, the table includes also the result of § 2.6.3 below.

It should be observed in light of Table 2.3 that if21

    A(z) = Σ_{k=Ko}^{K} a_k z^k,
    B(z) = Σ_{k=No}^{N} b_k z^k,

then

    R_n(z) = Σ_{k=n−(K−Ko)+1}^{n} r_{nk} z^k  for all n < No + (K − Ko).        (2.28)

20 [46, § 3.2]
21 The notations Ko, a_k and z^k are usually pronounced, respectively, as "K naught," "a sub k" and "z to the k" (or, more fully, "z to the kth power"), at least in the author's country.
The proof of (2.28) runs as follows. If the least-order term of R_m(z) is a z^{No} term (as clearly is the case at least for the initial remainder R_N[z] = B[z]), then consider per (2.25) that

    R_{m−1}(z) = R_m(z) − (r_{mm}/a_K) z^{m−K} A(z).

The least-order term of R_{m−1}(z) cannot lie below the z^{No} term unless an even lower-order term be contributed by the product z^{m−K} A(z). But that very product's term of least order is a z^{m−(K−Ko)} term. Under these conditions, evidently the least-order term of R_{m−1}(z) is a z^{m−(K−Ko)} term when m − (K − Ko) ≤ No, otherwise a z^{No} term. This is better stated after the change of variable n + 1 ← m: the least-order term of R_n(z) is a z^{n−(K−Ko)+1} term when n < No + (K − Ko), otherwise a z^{No} term. The greatest-order term of R_n(z) is by definition a z^n term. So, in summary, when n < No + (K − Ko), the terms of R_n(z) run from z^{n−(K−Ko)+1} through z^n, which is exactly the claim (2.28) makes: the remainder has order one less than the divisor has. The reason for this, of course, is that we have strategically planned the long-division iteration precisely to cause the leading term of the divisor to cancel the leading term of the remainder at each step.22

The long-division procedure of Table 2.3 extends the quotient Q_n(z) through successively smaller powers of z. Often, however, one prefers to extend the quotient through successively larger powers of z, where a z^K term is A(z)'s term of least order. In this case, the long division goes by the complementary rules of Table 2.4.

2.6.3 Dividing power series by matching coefficients

There is another, sometimes quicker way to divide power series than by the long division of § 2.6.2. One can divide them by matching coefficients.23 If

    Q∞(z) = B(z)/A(z),        (2.29)

where

    A(z) = Σ_{k=K}^{∞} a_k z^k,  a_K ≠ 0,
    B(z) = Σ_{k=N}^{∞} b_k z^k

are known and

    Q∞(z) = Σ_{k=N−K}^{∞} q_k z^k

is to be calculated,

22 If a more formal demonstration of (2.28) is wanted, the interested reader can construct one along the lines sketched here.
23 [29][14, § 2.5]
then one can multiply (2.29) through by A(z) to obtain the form

    A(z)Q∞(z) = B(z).

Expanding the left side according to (2.22) and changing the index n ← k on the right side,

    Σ_{n=N}^{∞} Σ_{k=N−K}^{n−K} a_{n−k} q_k z^n = Σ_{n=N}^{∞} b_n z^n.

But for this to hold for all z, the coefficients must match for each n:

    Σ_{k=N−K}^{n−K} a_{n−k} q_k = b_n,  n ≥ N.

Transferring all terms but a_K q_{n−K} to the equation's right side and dividing by a_K, we have that

    q_{n−K} = (1/a_K) [ b_n − Σ_{k=N−K}^{n−K−1} a_{n−k} q_k ],  n ≥ N.        (2.30)

Equation (2.30) computes the coefficients of Q(z), each coefficient depending on the coefficients earlier computed. The coefficient-matching technique of this subsection is easily adapted to the division of series in decreasing, rather than increasing, powers of z if needed or desired. The adaptation is left as an exercise to the interested reader, but Tables 2.3 and 2.4 incorporate the technique both ways.

Admittedly, the fact that (2.30) yields a sequence of coefficients does not necessarily mean that the resulting power series Q∞(z) converges to some definite value over a given domain. Consider for instance (2.34), which diverges when |z| > 1, even though all its coefficients are known. At least (2.30) is correct when Q∞(z) does converge. Even when Q∞(z) as such does not converge, however, often what interest us are only the series' first several terms

    Q_n(z) = Σ_{k=N−K}^{n−K−1} q_k z^k.
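The recurrence (2.30) translates into a short loop. In the sketch below (function name and list layout mine), a[i] holds the coefficient of z^{K+i} with a[0] ≠ 0, b[i] that of z^{N+i}, and the returned q[i] is the coefficient of z^{N−K+i}:

```python
def match_coefficients(b, a, nterms):
    """First several coefficients of Q(z) = B(z)/A(z) by the
    recurrence (2.30): each new coefficient depends only on the
    coefficients earlier computed."""
    q = []
    for i in range(nterms):
        acc = b[i] if i < len(b) else 0
        for j in range(i):
            if i - j < len(a):
                acc -= a[i - j] * q[j]
        q.append(acc / a[0])
    return q

# 1/(1 - z) = 1 + z + z^2 + ..., anticipating (2.34):
ones = match_coefficients([1], [1, -1], 5)
```

Feeding in B(z) = z and A(z) = 1 − 2z + z² = (1 − z)² likewise yields the coefficients 0, 1, 2, 3, ..., anticipating (2.35).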
In this case,

    B(z)/A(z) = Q_n(z) + R_n(z)/A(z)        (2.31)

and convergence is not an issue. Solving (2.31) for R_n(z),

    R_n(z) = B(z) − A(z)Q_n(z).        (2.32)
2.6.4 Common power-series quotients and the geometric series

Frequently encountered power-series quotients, calculated by the long division of § 2.6.2, computed by the coefficient matching of § 2.6.3, and/or verified by multiplying, include24

    1/(1 ± z) = Σ_{k=0}^{∞} (∓)^k z^k,  |z| < 1;
    1/(1 ± z) = −Σ_{k=−∞}^{−1} (∓)^k z^k,  |z| > 1.        (2.33)

Equation (2.33) almost incidentally answers a question which has arisen in § 2.4 and which often arises in practice: to what total does the infinite geometric series Σ_{k=0}^{∞} z^k, |z| < 1, sum? Answer: it sums exactly to 1/(1 − z). However, there is a simpler, more aesthetic way to demonstrate the same thing, as follows. Let

    S ≡ Σ_{k=0}^{∞} z^k,  |z| < 1.

Multiplying by z yields

    zS = Σ_{k=1}^{∞} z^k.

Subtracting the latter equation from the former leaves

    (1 − z)S = 1,

which, after dividing by 1 − z, implies that

    S ≡ Σ_{k=0}^{∞} z^k = 1/(1 − z),  |z| < 1,        (2.34)

as was to be demonstrated.

2.6.5 Variations on the geometric series

Besides being more aesthetic than the long division of § 2.6.2, the difference technique of § 2.6.4 permits one to extend the basic geometric series in several ways. For instance, the sum

    S1 ≡ Σ_{k=0}^{∞} k z^k,  |z| < 1

(which arises in, among others, Planck's quantum blackbody radiation calculation25), we can compute as follows. We multiply the unknown S1 by z, producing

    zS1 = Σ_{k=0}^{∞} k z^{k+1} = Σ_{k=1}^{∞} (k − 1) z^k.

We then subtract zS1 from S1, leaving

    (1 − z)S1 = Σ_{k=0}^{∞} k z^k − Σ_{k=1}^{∞} (k − 1) z^k = Σ_{k=1}^{∞} z^k = z Σ_{k=0}^{∞} z^k = z/(1 − z),

where we have used (2.34) to collapse the last sum. Dividing by 1 − z, we arrive at

    S1 ≡ Σ_{k=0}^{∞} k z^k = z/(1 − z)²,  |z| < 1,        (2.35)

which was to be found. Further series of the kind, such as Σ_k k² z^k, Σ_k (k + 1)(k) z^k, Σ_k k³ z^k, etc., can be calculated in like manner as the need for them arises.

24 The notation |z| represents the magnitude of z. For example, |5| = 5, but also |−5| = 5.
25 [33]
2.7 Indeterminate constants, independent variables and dependent variables

Mathematical models use indeterminate constants, independent variables and dependent variables. The three are best illustrated by example. Consider the time t a sound needs to travel from its source to a distant listener:

    t = ∆r / v_sound,

where ∆r is the distance from source to listener and v_sound is the speed of sound. Here, v_sound is an indeterminate constant (given particular atmospheric conditions, it doesn't vary), ∆r is an independent variable, and t is a dependent variable. The model gives t as a function of ∆r; so, if you tell the model how far the listener sits from the sound source, the model returns the time needed for the sound to propagate from one to the other.

Note that the abstract validity of the model does not necessarily depend on whether we actually know the right figure for v_sound (if I tell you that sound goes at 500 m/s, but later you find out that the real figure is 331 m/s, it probably doesn't ruin the theoretical part of your analysis; you just have to recalculate numerically). Knowing the figure is not the point. The point is that conceptually there preëxists some right figure for the indeterminate constant; that sound goes at some constant speed, whatever it is, and that we can calculate the delay in terms of this.

Although there exists a definite philosophical distinction between the three kinds of quantity, nevertheless it cannot be denied that which particular quantity is an indeterminate constant, an independent variable or a dependent variable often depends upon one's immediate point of view. The same model in the example would remain valid if atmospheric conditions were changing (v_sound would then be an independent variable) or if the model were used in designing a musical concert hall26 to suffer a maximum acceptable sound time lag from the stage to the hall's back row (t would then be an independent variable; ∆r, dependent). Occasionally we go so far as deliberately to change our point of view in mid-analysis, now regarding as an independent variable what a moment ago we had regarded as an indeterminate constant, for instance (a typical case of this arises in the solution of differential equations by the method of unknown coefficients, § 9.4). Such a shift of viewpoint is fine, so long as we remember that there is a difference between the three kinds of quantity and we keep track of which quantity is which kind to us at the moment.

The main reason it matters which symbol represents which of the three kinds of quantity is that in calculus, one analyzes how change in independent variables affects dependent variables as indeterminate constants remain fixed.
26 Math books are funny about examples like this. Such examples remind one of the kind of calculation one encounters in a childhood arithmetic textbook, as of the quantity of air contained in an astronaut's round helmet. One could calculate the quantity of water in a kitchen mixing bowl just as well, but astronauts' helmets are so much more interesting than bowls, you see. The chance that the typical reader will ever specify the dimensions of a real musical concert hall is of course vanishingly small. However, it is the idea of the example that matters here, because the chance that the typical reader will ever specify something technical is quite large. Although sophisticated models with many factors and terms do indeed play a major role in engineering, the great majority of practical engineering calculations (for quick, day-to-day decisions where small sums of money and negligible risk to life are at stake) are done with models hardly more sophisticated than the one shown here. So, maybe the concert-hall example is not so unreasonable, after all.
   (Section 2.3 has introduced the dummy variable, which the present section's threefold taxonomy seems to exclude. However, most dummy variables are just independent variables—a few are dependent variables—whose scope is restricted to a particular expression. Within the expression, the dummy variable fills the role of an independent variable; without, it fills no role because logically it does not exist there. Such a dummy variable does not seem very "independent," of course; but its dependence is on the operator controlling the expression, not on some other variable within the expression. Refer to §§ 2.3 and 7.3.)

2.8 Exponentials and logarithms

In § 2.5 we have considered the power operation z^a, where (in § 2.7's language) the independent variable z is the base and the indeterminate constant a is the exponent. There is another way to view the power operation, however. One can view it as the exponential operation

    a^z,

where the variable z is the exponent and the constant a is the base.

2.8.1 The logarithm

The exponential operation follows the same laws the power operation follows; but because the variable of interest is now the exponent rather than the base, the inverse operation is not the root but rather the logarithm:

    log_a(a^z) = z.    (2.36)

The logarithm log_a w answers the question, "What power must I raise a to, to get w?" Raising a to the power of the last equation, we have that

    a^{log_a(a^z)} = a^z.

With the change of variable w ← a^z, this is

    a^{log_a w} = w.    (2.37)

Hence, the exponential and logarithmic operations mutually invert one another.
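The mutual inversion of the exponential and the logarithm is easy to confirm numerically; the base and sample values below are arbitrary choices of mine, not the book's.

```python
import math

# Check that log_a(a^z) = z and a^(log_a w) = w, per (2.36) and (2.37),
# with an arbitrary base a = 3 and arbitrary sample values z and w.
a, z, w = 3.0, 1.7, 42.0
log_a = lambda x: math.log(x, a)   # logarithm in base a

inv1 = log_a(a ** z)   # should recover z
inv2 = a ** log_a(w)   # should recover w

assert abs(inv1 - z) < 1e-9
assert abs(inv2 - w) < 1e-6
```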
Table 2.5: General properties of the logarithm.

    log_a(uv) = log_a u + log_a v
    log_a(u/v) = log_a u − log_a v
    log_a(w^z) = z log_a w
    w^z = a^{z log_a w}
    log_b w = (log_a w)/(log_a b)
    log_a(a^z) = z
    w = a^{log_a w}

2.9.1 Triangle area

The area of a right triangle^27 is half the area of the corresponding rectangle. This is seen by splitting a rectangle down its diagonal into a pair of right triangles of equal size. The fact that any triangle's area is half its base length times its height is seen by dropping a perpendicular from one point of the triangle to the opposite side (see Fig. 1.1 on page 4), dividing the triangle into two right triangles, for each of which the fact is true. In algebraic symbols,

    A = bh/2,    (2.43)

where A stands for area, b for base length, and h for perpendicular height.

2.9.2 The triangle inequalities

Any two sides of a triangle together are longer than the third alone, which itself is longer than the difference between the two. In symbols,

    |a − b| < c < a + b,    (2.44)

where a, b and c are the lengths of a triangle's three sides. These are the triangle inequalities. The truth of the sum inequality c < a + b is seen by sketching some triangle on a sheet of paper and asking: if c is the direct route between two points and a + b is an indirect route, then how can a + b not be longer? Of course the sum inequality is equally good on any of the triangle's three sides, so one can write a < c + b and b < c + a just as well as c < a + b. Rearranging the a and b inequalities, we have that a − b < c and b − a < c, which together say that |a − b| < c. The last is the difference inequality, completing (2.44)'s proof.

27 A right triangle is a triangle, one of whose three angles is perfectly square.
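Both facts are easy to spot-check numerically. The script below is my own illustration, not the book's: it measures the sides of a sample triangle, tests the inequalities (2.44), and compares the bh/2 area of (2.43) against Heron's formula.

```python
import math

# Spot-check of (2.43) and (2.44) on a sample triangle of my choosing.
P, Q, R = (0.0, 0.0), (4.0, 1.0), (1.0, 3.0)

def side(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b, c = side(Q, R), side(R, P), side(P, Q)

# (2.44): each side lies between the difference and the sum of the other two.
for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):
    assert abs(x - y) < z < x + y

# (2.43): A = bh/2, with the height h measured from R onto the base PQ,
# agrees with Heron's formula computed from the three side lengths alone.
s = (a + b + c) / 2
A_heron = math.sqrt(s * (s - a) * (s - b) * (s - c))
h = abs((Q[0]-P[0])*(R[1]-P[1]) - (R[0]-P[0])*(Q[1]-P[1])) / c
assert abs(A_heron - c * h / 2) < 1e-9   # both routes give 5.5
```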
2.9.3 The sum of interior angles

A triangle's three interior angles^28 sum to 2π/2. One way to see the truth of this fact is to imagine a small car rolling along one of the triangle's sides. Reaching the corner, the car turns to travel along the next side, and so on round all three corners to complete a circuit, returning to the start. Since the car again faces the original direction, we reason that it has turned a total of 2π: a full revolution. But the angle φ the car turns at a corner and the triangle's inner angle ψ there together form the straight angle 2π/2 (the sharper the inner angle, the more the car turns: see Fig. 2.2). In mathematical notation,

    φ1 + φ2 + φ3 = 2π,
    φk + ψk = 2π/2,  k = 1, 2, 3,

where ψk and φk are respectively the triangle's inner angles and the angles through which the car turns. Solving the latter equations for φk and substituting into the former yields

    ψ1 + ψ2 + ψ3 = 2π/2,    (2.45)

which was to be demonstrated.

Figure 2.2: The sum of a triangle's inner angles: turning at the corner. [The figure marks the car's turning angle φ and the triangle's inner angle ψ at one corner.]

28 Many or most readers will already know the notation 2π and its meaning as the angle of full revolution. For those who do not, the notation is introduced more properly in §§ 3.1, 3.6 and 8.11 below. Briefly, the symbol 2π represents a complete turn, a full circle, a spin to face the same direction as before. Hence 2π/4 represents a square turn or right angle. You may be used to the notation 360◦ in place of 2π, but for the reasons explained in Appendix A and in footnote 15 of Ch. 3, this book tends to avoid the former notation.
Extending the same technique to the case of an n-sided polygon, we have that

    Σ_{k=1}^{n} φk = 2π,
    φk + ψk = 2π/2.

Solving the latter equations for φk and substituting into the former yields

    Σ_{k=1}^{n} (2π/2 − ψk) = 2π,

or in other words

    Σ_{k=1}^{n} ψk = (n − 2)(2π/2).    (2.46)

Equation (2.45) is then seen to be a special case of (2.46) with n = 3.

2.10 The Pythagorean theorem

Along with Euler's formula (5.12), the fundamental theorem of calculus (7.2) and Cauchy's integral formula (8.29), the Pythagorean theorem is one of the most famous results in all of mathematics. The theorem holds that

    a^2 + b^2 = c^2,    (2.47)

where a, b and c are the lengths of the legs and diagonal of a right triangle, as in Fig. 2.3. Many proofs of the theorem are known. One such proof posits a square of side length a + b with a tilted square of side length c inscribed as in Fig. 2.4. The area of the large outer square is (a + b)^2. The area of each of the four triangles in the figure is evidently ab/2. The area of the tilted inner square is c^2. But the large outer square is comprised of the tilted inner square plus the four triangles, hence the area of the large outer square equals the area of the tilted inner square plus the areas of the four triangles. In mathematical symbols, this is

    (a + b)^2 = c^2 + 4(ab/2),

which equation expands directly to yield (2.47).^29

The Pythagorean theorem is readily extended to three dimensions as

    a^2 + b^2 + h^2 = r^2,    (2.48)

where h is an altitude perpendicular to both a and b, thus also to c; and where r is the corresponding three-dimensional diagonal: the diagonal of the right triangle whose legs are c and h. Inasmuch as (2.47) applies to any right triangle, it follows that c^2 + h^2 = r^2, which simplifies directly to (2.48).

2.11 Functions

This book is not the place for a gentle introduction to the concept of the function. Briefly, however, a function is a mapping from one number (or vector of several numbers) to another. For example, f(x) = x^2 − 1 is a function which maps 1 to 0 and −3 to 8, among others. One often speaks of domains and ranges when discussing functions. The domain of a function is the set of numbers one can put into it. The range of a function is the corresponding set of numbers one can get out of it. In the example, if the domain is restricted to real x such that |x| ≤ 3, then the corresponding range is −1 ≤ f(x) ≤ 8.

Other terms which arise when discussing functions are root (or zero), singularity and pole. A root (or zero) of a function is a domain point at which the function evaluates to zero (the example has roots at x = ±1). A singularity of a function is a domain point at which the function's output diverges; that is, where the function's output is infinite.^30

29 This elegant proof is far simpler than the one famously given by the ancient geometer Euclid, yet more appealing than alternate proofs often found in print. Whether Euclid was acquainted with the simple proof given here this writer does not know, but it is possible [52, "Pythagorean theorem," 02:32, 31 March 2006] that Euclid chose his proof because it comported better with the restricted set of geometrical elements he permitted himself to work with. Be that as it may, the present writer encountered the proof this section gives somewhere years ago and has never seen it in print since, so can claim no credit for originating it. Unfortunately the citation is now long lost. A current source for the proof is [52] as cited earlier in this footnote.
The example function f(x) has no singularities for finite x. The function h(x) = 1/(x^2 − 1), however, has poles at x = ±1. A pole is a singularity that behaves locally like 1/x (rather than, say, like 1/√x). A singularity that behaves as 1/x^N is a multiple pole, which (§ 9.6.2) can be thought of as N poles.

(Besides the root, the singularity and the pole, there is also the troublesome branch point, an infamous example of which is z = 0 in the function g[z] = √z. Branch points are important, but the book must lay a more extensive foundation before introducing them properly in § 8.5.^31)

2.12 Complex numbers (introduction)

Section 2.5.2 has introduced square roots. What it has not done is to tell us how to regard a quantity such as √−1. Since there exists no real number i such that

    i^2 = −1    (2.49)

and since the quantity i thus defined is found to be critically important across broad domains of higher mathematics, we accept (2.49) as the definition of a fundamentally new kind of quantity: the imaginary number.^32

Imaginary numbers are given their own number line, plotted at right angles to the familiar real number line as in Fig. 2.5. The sum of a real number x and an imaginary number iy is the complex number

    z = x + iy.

The conjugate z* of this complex number is defined to be

    z* = x − iy.

The magnitude (or modulus, or absolute value) |z| of the complex number is defined to be the length ρ in Fig. 2.5, which per the Pythagorean theorem (§ 2.10) is such that

    |z|^2 = x^2 + y^2.    (2.50)

The phase arg z of the complex number is defined to be the angle φ in the figure, which in terms of the trigonometric functions of § 3.1^33 is such that

    tan(arg z) = y/x.    (2.51)

Figure 2.5: The complex (or Argand) plane, and a complex number z therein. [The figure plots z at radius ρ and angle φ, with the real axis ℜ(z) marked −2, −1, 1, 2 and the imaginary axis iℑ(z) marked −i2, −i, i, i2.]

30 Here is one example of the book's deliberate lack of formal mathematical rigor. A more precise formalism to say that "the function's output is infinite" might be

    lim_{x→xo} |f(x)| = ∞.

The applied mathematician tends to avoid such formalism where there seems no immediate use for it.

31 There is further the essential singularity, an example of which is z = 0 in p(z) = exp(1/z), but the best way to handle such unreasonable singularities is almost always to change a variable, as w ← 1/z, or otherwise to frame the problem such that one need not approach the singularity. This book will have little to say of such singularities.

32 The English word imaginary is evocative, but perhaps not of quite the right concept in this usage. Imaginary numbers are not to mathematics as, say, imaginary elfs are to the physical world. In the physical world, imaginary elfs are (presumably) not substantial objects. However, in the mathematical realm, imaginary numbers are substantial. The word imaginary in the mathematical sense is thus more of a technical term than a descriptive adjective. The number i is just a concept, of course, but then so is the number 1 (though you and I have often met one of something—one apple, one chair, one summer afternoon, etc.—neither of us has ever met just 1). The reason imaginary numbers are called "imaginary" probably has to do with the fact that they emerge from mathematical operations only, never directly from counting things. Notice, however, that the number 1/2 never emerges directly from counting things, either. If for some reason the iyear were offered as a unit of time, then the period separating your fourteenth and twenty-first birthdays could have been measured as −i7 iyears. Madness? No; let us not call it that; let us call it a useful formalism, rather. The unpersuaded reader is asked to suspend judgment a while. He will soon see the use.

33 This is a forward reference. If the equation doesn't make sense to you yet for this reason, skip it for now. The important point is that arg z is the angle φ in the figure.
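The definitions just given map directly onto Python's built-in complex type; the sample number below is my own illustration.

```python
import cmath, math

# Magnitude, conjugate and phase of a sample complex number z = 3 + i4.
z = 3 + 4j

assert abs(z) == 5.0                       # |z|^2 = x^2 + y^2, per (2.50)
assert z.conjugate() == 3 - 4j             # z* = x - iy
phi = cmath.phase(z)                       # arg z, the angle in Fig. 2.5
assert math.isclose(math.tan(phi), 4 / 3)  # tan(arg z) = y/x, per (2.51)
assert z.real == 3.0 and z.imag == 4.0     # the real and imaginary parts
```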
Specifically to extract the real and imaginary parts of a complex number, the notations

    ℜ(z) = x,
    ℑ(z) = y,    (2.52)

are conventionally recognized (although often the symbols ℜ(·) and ℑ(·) are written Re(·) and Im(·), particularly when printed by hand).

2.12.1 Multiplication and division of complex numbers in rectangular form

Several elementary properties of complex numbers are readily seen if the fact that i^2 = −1 is kept in mind, including that

    z1 z2 = (x1 x2 − y1 y2) + i(y1 x2 + x1 y2),    (2.53)

    z1/z2 = (x1 + iy1)/(x2 + iy2)
          = [(x2 − iy2)/(x2 − iy2)] [(x1 + iy1)/(x2 + iy2)]
          = [(x1 x2 + y1 y2) + i(y1 x2 − x1 y2)]/(x2^2 + y2^2).    (2.54)

It is a curious fact that

    1/i = −i.    (2.55)

It is a useful fact that

    z* z = x^2 + y^2 = |z|^2    (2.56)

(the curious fact, eqn. 2.55, is useful, too).

2.12.2 Complex conjugation

An important property of complex numbers descends subtly from the fact that

    i^2 = −1 = (−i)^2.

If one defined some number j ≡ −i, claiming that j not i were the true imaginary unit,^34 then one would find that

    (−j)^2 = −1 = j^2,

and thus that all the basic properties of complex numbers in the j system held just as well as they did in the i system. The units i and j would differ indeed, but would perfectly mirror one another in every respect.

34 [13, § I:22-5]
That is the basic idea. To establish it symbolically needs a page or so of slightly abstract algebra as follows, the goal of which will be to show that [f(z)]* = f(z*) for some unspecified function f(z) with specified properties. To begin with, if

    z = x + iy,

then by definition

    z* = x − iy.

Proposing that (z^{k−1})* = (z*)^{k−1} (which may or may not be true but for the moment we assume it), we can write

    z^{k−1} = s_{k−1} + i t_{k−1},
    (z*)^{k−1} = s_{k−1} − i t_{k−1},

where s_{k−1} and t_{k−1} are symbols introduced to represent the real and imaginary parts of z^{k−1}. Multiplying the former equation by z = x + iy and the latter by z* = x − iy, we have that

    z^k = (x s_{k−1} − y t_{k−1}) + i(y s_{k−1} + x t_{k−1}),
    (z*)^k = (x s_{k−1} − y t_{k−1}) − i(y s_{k−1} + x t_{k−1}).

With the definitions s_k ≡ x s_{k−1} − y t_{k−1} and t_k ≡ y s_{k−1} + x t_{k−1}, this is written more succinctly

    z^k = s_k + i t_k,
    (z*)^k = s_k − i t_k.

In other words, if (z^{k−1})* = (z*)^{k−1}, then it necessarily follows that (z^k)* = (z*)^k. Solving the definitions of s_k and t_k for s_{k−1} and t_{k−1} yields the reverse definitions s_{k−1} = (x s_k + y t_k)/(x^2 + y^2) and t_{k−1} = (−y s_k + x t_k)/(x^2 + y^2). Therefore, except when z = x + iy happens to be null or infinite, the implication is reversible by reverse reasoning, so by mathematical induction^35 we have that

    (z^k)* = (z*)^k    (2.57)

for all integral k. We have also from (2.53) that

    (z1 z2)* = z1* z2*.    (2.58)

35 Mathematical induction is an elegant old technique for the construction of mathematical proofs. Section 8.1 elaborates on the technique and offers a more extensive example. Beyond the present book, a very good introduction to mathematical induction is found in [20].
If a function f(z) can be expressed as the general power series

    f(z) = Σ_k (a_k + i b_k)(z − z_o)^k,    (2.59)

and one defines the conjugate function

    f*(z) ≡ Σ_k (a_k − i b_k)(z − z_o*)^k,    (2.60)

where a_k and b_k are real and imaginary parts of the coefficients peculiar to the function f(·), then by (2.57) and (2.58),

    [f(z)]* = f*(z*).    (2.61)

In the common case where all b_k = 0 and z_o = x_o is a real number, then f(·) and f*(·) are the same function, so (2.61) reduces to the desired form

    [f(z)]* = f(z*),    (2.62)
which says that the effect of conjugating the function's input is merely to conjugate its output. Equation (2.62) expresses a significant, general rule of complex numbers and complex variables which is better explained in words than in mathematical symbols. The rule is this: for most equations and systems of equations used to model physical systems, one can produce an equally valid alternate model simply by simultaneously conjugating all the complex quantities present.36
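As a numeric illustration of the rule, the cubic below is my own arbitrary choice of a power series with real coefficients a_k and a real z_o; conjugating its input merely conjugates its output.

```python
# [f(z)]* = f(z*), per (2.62), for a power series with real coefficients
# and real z_o = 1; this particular cubic is a made-up example.
def f(z):
    return 2.0 - 3.0*(z - 1.0) + 0.5*(z - 1.0)**3

z = 0.8 + 2.2j
assert abs(f(z).conjugate() - f(z.conjugate())) < 1e-9
```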
2.12.3
Power series and analytic functions (preview)
Equation (2.59) is a general power series37 in z − zo . Such power series have broad application.38 It happens in practice that most functions of interest
36 [20][44]

37 [24, § 10.8]

38 That is a pretty impressive-sounding statement: "Such power series have broad application." However, air, words and molecules also have "broad application"; merely stating the fact does not tell us much. In fact the general power series is a sort of one-size-fits-all mathematical latex glove, which can be stretched to fit around almost any function. The interesting part is not so much in the general form (2.59) of the series as it is in the specific choice of a_k and b_k, which this section does not discuss. Observe that the Taylor series (which this section also does not discuss; see § 8.3) is a power series with a_k = b_k = 0 for k < 0.
in modeling physical phenomena can conveniently be constructed as power series (or sums of power series)39 with suitable choices of ak , bk and zo . The property (2.61) applies to all such functions, with (2.62) also applying to those for which bk = 0 and zo = xo . The property the two equations represent is called the conjugation property. Basically, it says that if one replaces all the i in some mathematical model with −i, then the resulting conjugate model is equally as valid as the original.40 Such functions, whether bk = 0 and zo = xo or not, are analytic functions (§ 8.4). In the formal mathematical definition, a function is analytic which is infinitely differentiable (Ch. 4) in the immediate domain neighborhood of interest. However, for applications a fair working definition of the analytic function might be "a function expressible as a power series." Chapter 8 elaborates. All power series are infinitely differentiable except at their poles. There nevertheless exist one common group of functions which cannot be constructed as power series. These all have to do with the parts of complex numbers and have been introduced in this very section: the magnitude |·|; the phase arg(·); the conjugate (·)∗ ; and the real and imaginary parts ℜ(·) and ℑ(·). These functions are not analytic and do not in general obey the conjugation property. Also not analytic are the Heaviside unit step u(t) and the Dirac delta δ(t) (§ 7.7), used to model discontinuities explicitly. We shall have more to say about analytic functions in Ch. 8. We shall have more to say about complex numbers in §§ 3.11, 4.3.3, and 4.4, and much more yet in Ch. 5.
Trigonometry
Trigonometry is the branch of mathematics which relates angles to lengths. This chapter introduces the trigonometric functions and derives their several properties.
3.1
Definitions
Consider the circle-inscribed right triangle of Fig. 3.1. In considering the circle, we shall find some terminology useful: the angle φ in the diagram is measured in radians, where a radian is the angle which, when centered in a unit circle, describes an arc of unit length.^1 Measured in radians, an angle φ intercepts an arc of curved length ρφ on a circle of radius ρ (that is, of distance ρ from the circle's center to its perimeter). An angle in radians is a dimensionless number, so one need not write "φ = 2π/4 radians"; it suffices to write "φ = 2π/4." In mathematical theory, we express angles in radians. The angle of full revolution is given the symbol 2π—which thus is the circumference of a unit circle.^2 A quarter revolution, 2π/4, is then the right angle, or square angle. The trigonometric functions sin φ and cos φ (the "sine" and "cosine" of φ) relate the angle φ to the lengths shown in Fig. 3.1. The tangent function is then defined as

    tan φ ≡ sin φ / cos φ,    (3.1)
The word "unit" means "one" in this context. A unit length is a length of 1 (not one centimeter or one mile, just an abstract 1). A unit circle is a circle of radius 1. 2 Section 8.11 computes the numerical value of 2π.
1
45
Figure 3.1: The sine and the cosine (shown on a circle-inscribed right triangle, with the circle centered at the triangle's point). [The figure shows the angle φ at the center of a circle of radius ρ, with legs ρ cos φ and ρ sin φ along the x and y directions.]
which is the "rise" per unit "run," or slope, of the triangle's diagonal. Inverses of the three trigonometric functions can also be defined: arcsin (sin φ) = φ, arccos (cos φ) = φ, arctan (tan φ) = φ. When the last of these is written in the form y , arctan x it is normally implied that x and y are to be interpreted as rectangular coordinates3 and that the arctan function is to return φ in the correct quadrant −π < φ ≤ π (for example, arctan[1/(−1)] = [+3/8][2π], whereas arctan[(−1)/1] = [−1/8][2π]). This is similarly the usual interpretation when an equation like y tan φ = x is written. By the Pythagorean theorem (§ 2.10), it is seen generally that4 cos2 φ + sin2 φ = 1. (3.2)
Fig. 3.2 plots the sine function. The shape in the plot is called a sinusoid.
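The quadrant-aware arctangent described above is what Python's `math.atan2` provides; the checks below are my own illustration of the two examples in the text and of (3.2).

```python
import math

# atan2(y, x) returns phi in the correct quadrant, -pi < phi <= pi.
full = 2 * math.pi   # the angle of full revolution

assert math.isclose(math.atan2(1, -1),  (3/8) * full)   # arctan[1/(-1)]
assert math.isclose(math.atan2(-1, 1), -(1/8) * full)   # arctan[(-1)/1]

# (3.2): cos^2 + sin^2 = 1, checked at an arbitrary angle.
phi = full / 8
assert math.isclose(math.cos(phi)**2 + math.sin(phi)**2, 1.0)
```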
3.2
Simple properties
Inspecting Fig. 3.1 and observing (3.1) and (3.2), one readily discovers the several simple trigonometric properties of Table 3.1.
3.3
Scalars, vectors, and vector notation
In applied mathematics, a vector is an amplitude of some kind coupled with a direction.^5 For example, "55 miles per hour northwestward" is a vector, as is the entity u depicted in Fig. 3.3. The entity v depicted in Fig. 3.4 is also a vector, in this case a three-dimensional one. Many readers will already find the basic vector concept familiar, but for those who do not, a brief review: Vectors such as the

    u = x̂x + ŷy,
    v = x̂x + ŷy + ẑz

of the figures are composed of multiples of the unit basis vectors x̂, ŷ and ẑ, which themselves are vectors of unit length pointing in the cardinal directions their respective symbols suggest.^6 Any vector a can be factored into an amplitude a and a unit vector â, as

    a = âa,

where the â represents direction only and has unit magnitude by definition, and where the a represents amplitude only and carries the physical units if any.^7 For example, a = 55 miles per hour, â = northwestward. The unit vector â itself can be expressed in terms of the unit basis vectors: for example, if x̂ points east and ŷ points north, then â = −x̂(1/√2) + ŷ(1/√2), where per the Pythagorean theorem (−1/√2)^2 + (1/√2)^2 = 1^2.

A single number which is not a vector or a matrix (Ch. 11) is called a scalar. In the example, a = 55 miles per hour is a scalar. Though the scalar a in the example happens to be real, scalars can be complex, too—which might surprise one, since scalars by definition lack direction and the Argand phase φ of Fig. 2.5 so strongly resembles a direction. However, phase is not an actual direction in the vector sense (the real number line in the Argand plane cannot be said to run west-to-east, or anything like that). The x, y and z of Fig. 3.4 are each (possibly complex) scalars; v =

5 The same word vector is also used to indicate an ordered set of N scalars (§ 8.16) or an N × 1 matrix (Ch. 11), but those are not the uses of the word meant here.
6 Printing by hand, one customarily writes a general vector like u as " u " or just " u ", ˆ and a unit vector like x as " x ". ˆ 7 The word "unit" here is unfortunately overloaded. As an adjective in mathematics, or in its noun form "unity," it refers to the number one (1)—not one mile per hour, one kilogram, one Japanese yen or anything like that; just an abstract 1. The word "unit" itself as a noun however usually signifies a physical or financial reference quantity of measure, like a mile per hour, a kilogram or even a Japanese yen. There is no inherent mathematical unity to 1 mile per hour (otherwise known as 0.447 meters per second, among other names). By contrast, a "unitless 1"—a 1 with no physical unit attached— does represent mathematical unity. Consider the ratio r = h1 /ho of your height h1 to my height ho . Maybe you are taller than I am and so r = 1.05 (not 1.05 cm or 1.05 feet, just 1.05). Now consider the ratio h1 /h1 of your height to your own height. That ratio is of course unity, exactly 1. There is nothing ephemeral in the concept of mathematical unity, nor in the concept of unitless quantities in general. The concept is quite straightforward and entirely practical. That r > 1 means neither more nor less than that you are taller than I am. In applications, one often puts physical quantities in ratio precisely to strip the physical units from them, comparing the ratio to unity without regard to physical units.
x̂x + ŷy + ẑz is a vector. If x, y and z are complex, then^8

    |v|^2 = |x|^2 + |y|^2 + |z|^2 = x*x + y*y + z*z
          = [ℜ(x)]^2 + [ℑ(x)]^2 + [ℜ(y)]^2 + [ℑ(y)]^2 + [ℜ(z)]^2 + [ℑ(z)]^2.    (3.3)

A point is sometimes identified by the vector expressing its distance and direction from the origin of the coordinate system; that is, the point (x, y) can be identified with the vector x̂x + ŷy. However, in the general case vectors are not associated with any particular origin; they represent distances and directions, not fixed positions.

Notice the relative orientation of the axes in Fig. 3.4. The axes are oriented such that if you point your flat right hand in the x direction, then bend your fingers in the y direction and extend your thumb, the thumb then points in the z direction. This is orientation by the right-hand rule.^9 A lefthanded orientation is equally possible, but as neither orientation has a natural advantage over the other, we arbitrarily but conventionally accept the right-handed one as standard.

Sections 3.4 and 3.9 and Ch. 15 speak further of the vector.

3.4 Rotation

A fundamental problem in trigonometry arises when a vector

    u = x̂x + ŷy    (3.4)

must be expressed in terms of alternate unit vectors x̂′ and ŷ′, where x̂′ and ŷ′ stand at right angles to one another and lie in the plane^10 of x̂ and ŷ.

8 Some books print |v| as ∥v∥ or even ∥v∥2 to emphasize that it represents the real, scalar magnitude of a complex vector. The reason the last notation subscripts a numeral 2 is obscure, having to do with the professional mathematician's generalized definition of a thing he calls the "norm." This book just renders it |v|.

9 The writer does not know the etymology for certain, but verbal lore in American engineering has it that the name "right-handed" comes from experience with a standard right-handed wood screw or machine screw. If you hold the screwdriver in your right hand and turn the screw in the natural manner clockwise, turning the screw slot from the x orientation toward the y, the screw advances away from you in the z direction into the wood or hole. If somehow you came across a left-handed screw, you'd probably find it easier to drive that screw with the screwdriver in your left hand.

10 A plane, as the reader on this tier undoubtedly knows, is a flat (but not necessarily level) surface, infinite in extent unless otherwise specified. A point is zero-dimensional; a line is one-dimensional; a plane is two-dimensional; space is three-dimensional. The plane belongs to this geometrical hierarchy.
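Equation (3.3) can be checked for a sample complex-valued vector; the components below are my own choices.

```python
import math

# |v|^2 = x*x + y*y + z*z for complex components, all terms real and nonnegative.
x, y, z = 1 + 2j, -3j, 2 + 0j

mag2 = (x.conjugate()*x + y.conjugate()*y + z.conjugate()*z).real
assert math.isclose(mag2, abs(x)**2 + abs(y)**2 + abs(z)**2)
assert math.isclose(mag2, sum(c.real**2 + c.imag**2 for c in (x, y, z)))  # = 18
```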
that it amounts to the same thing, except that the sense of the rotation is reversed:

    u′ = x̂(x cos ψ − y sin ψ) + ŷ(x sin ψ + y cos ψ).    (3.8)

Whether it is the basis or the vector which rotates thus depends on your point of view.^12

3.5 Trigonometric functions of sums and differences of angles

With the results of § 3.4 in hand, we now stand in a position to consider trigonometric functions of sums and differences of angles. Let

    â ≡ x̂ cos α + ŷ sin α,
    b̂ ≡ x̂ cos β + ŷ sin β,

be vectors of unit length in the x̂ŷ plane, respectively at angles α and β from the x̂ axis. If we wanted b̂ to coincide with â, we would have to rotate it by ψ = α − β. According to (3.8) and the definition of b̂, if we did this we would obtain

    b̂′ = x̂[cos β cos(α − β) − sin β sin(α − β)]
       + ŷ[cos β sin(α − β) + sin β cos(α − β)].

Since we have deliberately chosen the angle of rotation such that b̂′ = â, we can separately equate the x̂ and ŷ terms in the expressions for â and b̂′ to obtain the pair of equations

    cos α = cos β cos(α − β) − sin β sin(α − β),
    sin α = cos β sin(α − β) + sin β cos(α − β).

12 This is only true, of course, with respect to the vectors themselves. When one actually rotates a physical body, the body experiences forces during rotation which might or might not change the body internally in some relevant way.
[4] The hex and hour notations are recommended mostly only for theoretical math work. the reader will approve the choice.
. Figure 3. • There are 360 degrees in a circle.3). TRIGONOMETRY
multiples of an hour —there are twenty-four or 0x18 hours in a circle. 16 An equilateral triangle is. look at the square and the equilateral triangle16 of Fig. the angle between hours on the Greenwich clock is indeed an honest hour of arc.10) then supplies the various other lengths in the figure. the first sentence nevertheless says the thing rather better. If you have ever been to the Old Royal Observatory at Greenwich. This is so because the clock face's geometry is artificial.6 shows the angles.7. so the angle between eleven o'clock and twelve on the clock face is not an hour of arc! That angle is two hours of arc. Consider:
• There are 0x18 hours in a circle. Since such angles arise very frequently in practice.
15 Hence an hour is 15◦ . you'd probably not like the reception the memo got.56
CHAPTER 3. The familiar. 3. the improved notation fits a book of this kind so well that the author hazards it. The reader is urged to invest the attention and effort to master the notation. Also by symmetry. just as there are twenty-four or 0x18 hours in a day15 —for such angles simpler expressions exist. if you were. you're in good company." were you? Well. and the diagonal splits the square's corner into equal halves of three hours each. It is hoped that after trying the notation a while. a triangle whose three sides all have the same length. the perpendicular splits the triangle's top angle into equal halves of two hours each and its bottom leg into equal segments of length 1/2 each. Nonetheless. The Pythagorean theorem (§ 2. Table 3. It'd be a bit hard to read the time from such a crowded clock face were it not so big. Both sentences say the same thing.80 hours instead of 22. standard clock face shows only twelve hours not twenty-four. you may have seen the big clock face there with all twenty-four hours on it.5 degrees. 3. To see how the values in the table are calculated. There is a psychological trap regarding the hour. after which we observe from Fig. for example. and since a triangle's angles always total twelve hours (§ 2. The author is fully aware of the barrier the unfamiliar notation poses for most first-time readers of the book. It is not claimed that they offer much benefit in most technical work of the less theoretical kinds. England. by symmetry each of the angles of the equilateral triangle in the figure measures four.9. but you weren't going to write your angles in such inelegant conventional notation as "15◦ . If you wrote an engineering memo describing a survey angle as 0x1.1 that • the sine of a non-right angle in a right triangle is the opposite leg's length divided by the diagonal's. The barrier is erected neither lightly nor disrespectfully. as the name and the figure suggest. 
Each of the square's four angles naturally measures six hours.2 tabulates the trigonometric functions of these hour angles. it seems worth our while to study them specially. but anyway. don't they? But even though the "0x" hex prefix is a bit clumsy.
9) and (3. difference and half-angle formulas from the preceding sections to the values already in the table. However. The values for one and five hours are found by applying (3.8: The laws of sines and cosines.
17
. With this observation and the lengths in the figure. and • the cosine is the adjacent leg's length divided by the diagonal's.7
3.7  The laws of sines and cosines

Refer to the triangle of Fig. 3.8.

Figure 3.8: The laws of sines and cosines. [The figure shows a triangle with sides a, b and c, opposite angles α, β and γ respectively, and a perpendicular of height h.]

By the definition of the sine function, one can write that

    c sin β = h = b sin γ,

or in other words that

    sin β / b = sin γ / c.

But there is nothing special about β and γ;¹⁸ what is true for them must be true for α, too. Hence,

    sin α / a = sin β / b = sin γ / c.    (3.15)

This equation is known as the law of sines.

On the other hand, if one expresses a and b as vectors emanating from the point γ,¹⁹

    a = x̂ a,
    b = x̂ b cos γ + ŷ b sin γ,

then

    c² = |b − a|²
       = (b cos γ − a)² + (b sin γ)²
       = a² + (b²)(cos² γ + sin² γ) − 2ab cos γ.

Since cos²(·) + sin²(·) = 1, this is

    c² = a² + b² − 2ab cos γ,    (3.16)

known as the law of cosines.

¹⁸ "But," it is objected, "there is something special about α. The perpendicular h drops from it." True. However, the h is just a utility variable to help us to manipulate the equation into the desired form; we're not interested in h itself. Nothing prevents us from dropping additional perpendiculars h_β and h_γ from the other two corners and using those as utility variables, too, if we like. We can use any utility variables we want.
¹⁹ Here is another example of the book's judicious relaxation of formal rigor. Of course there is no "point γ"; γ is an angle, not a point. However, the writer suspects in light of Fig. 3.8 that few readers will be confused as to which point is meant. The skillful applied mathematician does not multiply labels without need.

3.8  Summary of properties

Table 3.1 on page 48 has summarized simple properties of the trigonometric functions. Table 3.2 on page 58 has listed the values of trigonometric functions of the hour angles. Table 3.3 summarizes further properties, gathering them from the preceding sections.
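Both laws are easy to verify numerically on a concrete triangle. The following sketch (this writer's illustration, not the book's) builds a triangle from two sides and the included angle, then checks (3.15) and (3.16) against it:

```python
import math

# Two sides and the included angle, this writer's arbitrary choices.
a, b, gamma = 3.0, 5.0, 0.7

# Law of cosines (3.16) supplies the third side.
c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

# Recover the remaining angles, again by the law of cosines.
alpha = math.acos((b * b + c * c - a * a) / (2 * b * c))
beta = math.acos((c * c + a * a - b * b) / (2 * c * a))
assert abs(alpha + beta + gamma - math.pi) < 1e-12  # angles total pi

# Law of sines (3.15): the three ratios agree.
r1, r2, r3 = math.sin(alpha) / a, math.sin(beta) / b, math.sin(gamma) / c
assert abs(r1 - r2) < 1e-12 and abs(r2 - r3) < 1e-12
print("common ratio sin/side:", r1)
```

Any sides and included angle with 0 < γ < π would do equally well.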
3.9  Cylindrical and spherical coordinates

Section 3.3 has introduced the concept of the vector

    v = x̂x + ŷy + ẑz.

The coefficients (x, y, z) on the equation's right side are coordinates—specifically, rectangular coordinates—which given a specific orthonormal²⁰ set of unit basis vectors [x̂ ŷ ẑ] uniquely identify a point (see Fig. 3.4 on page 49). Such rectangular coordinates are simple and general, and are convenient for many purposes. However, there are at least two broad classes of conceptually simple problems for which rectangular coordinates tend to be inconvenient: problems in which an axis dominates, and problems in which a point dominates. Consider for example an electric wire's magnetic field, whose intensity varies with distance from the wire (an axis); or the illumination a lamp sheds on a printed page of this book, which depends on the book's distance from the lamp (a point).

To attack a problem dominated by an axis, the cylindrical coordinates (ρ; φ, z) can be used instead of the rectangular coordinates (x, y, z). To attack a problem dominated by a point, the spherical coordinates (r; θ; φ) can be used.²¹ Refer to Fig. 3.9. Such coordinates are related to one another and to the rectangular coordinates by the formulas of Table 3.4.

Cylindrical and spherical coordinates can greatly simplify the analyses of the kinds of problems they respectively fit, but they come at a price: there are no constant unit basis vectors to match them. That is,

    v = x̂x + ŷy + ẑz ≠ ρ̂ρ + φ̂φ + ẑz ≠ r̂r + θ̂θ + φ̂φ.

It doesn't work that way. Nevertheless, variable unit basis vectors are defined:

    ρ̂ ≡ +x̂ cos φ + ŷ sin φ,
    φ̂ ≡ −x̂ sin φ + ŷ cos φ,
    r̂ ≡ +ẑ cos θ + ρ̂ sin θ,
    θ̂ ≡ −ẑ sin θ + ρ̂ cos θ.    (3.17)

Such variable unit basis vectors point locally in the directions in which their respective coordinates advance.

Convention usually orients ẑ in the direction of a problem's axis. Occasionally however a problem arises in which it is more convenient to orient x̂ or ŷ in the direction of the problem's axis (usually because ẑ has already been established in the direction of some other pertinent axis). Changing the meanings of known symbols like ρ, θ and φ is usually not a good idea, but you can use symbols like²²

    (ρˣ)² = y² + z²,    (ρʸ)² = z² + x²,
    tan θˣ = ρˣ/x,      tan θʸ = ρʸ/y,
    tan φˣ = z/y,       tan φʸ = x/z,

instead if needed.

²⁰ Orthonormal in this context means "of unit length and at right angles to the other vectors in the set." [52, "Orthonormality," 14:19, 7 May 2006]
²¹ Notice that the φ is conventionally written second in cylindrical (ρ; φ, z) coordinates but third in spherical (r; θ; φ) coordinates. This odd-seeming convention is to maintain proper right-handed coordinate rotation. (The explanation will seem clearer in a later chapter.)
²² Symbols like ρˣ are logical but, as far as this writer is aware, not standard.
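The coordinate relations collected in Table 3.4 can be exercised numerically. A sketch, assuming the convention above that θ is measured from the z axis (the function names are this writer's, not the book's):

```python
import math

def rect_from_cylindrical(rho, phi, z):
    # x = rho cos(phi), y = rho sin(phi); z carries over unchanged.
    return (rho * math.cos(phi), rho * math.sin(phi), z)

def rect_from_spherical(r, theta, phi):
    # theta measured from the z axis, so rho = r sin(theta), z = r cos(theta).
    rho = r * math.sin(theta)
    return (rho * math.cos(phi), rho * math.sin(phi), r * math.cos(theta))

def spherical_from_rect(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(math.sqrt(x * x + y * y), z)  # tan(theta) = rho/z
    phi = math.atan2(y, x)                           # tan(phi) = y/x
    return (r, theta, phi)

# Round trip: rectangular -> spherical -> rectangular.
p = (1.0, -2.0, 2.0)
q = rect_from_spherical(*spherical_from_rect(*p))
assert all(abs(u - v) < 1e-12 for u, v in zip(p, q))
print("round trip ok:", q)
```

Using `atan2` rather than `atan` keeps the recovered angles in the correct quadrants.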
3.10  The complex triangle inequalities
If a.9.20) hold equally well for real numbers as for complex. one might note that § 2. 23 Reading closely. Changing the meanings of known symbols like ρ. then per (2. z (3. |z1 | − |z2 | ≤ |z1 + z2 | ≤ |z1 | + |z2 | (3. tan θ y = tan φy = ρy .44) |a| − |b| ≤ |a + b| ≤ |a| + |b| . x z . Extending the sum inequality. then why not equally for complex numbers? Consider the geometric interpretation of the Argand plane of Fig. Occaˆ sionally however a problem arises in which it is more convenient to orient x ˆ or y in the direction of the problem's axis (usually because ˆ has already z been established in the direction of some other pertinent axis). as far as this writer is aware.
• to multiply their magnitudes and
• to add their phases.

It follows by parallel reasoning (or by extension) that

    z₁/z₂ = (ρ₁/ρ₂) cis(φ₁ − φ₂)    (3.25)

and by extension that

    zᵃ = ρᵃ cis aφ.    (3.26)

Equations (3.25) and (3.26) are known as de Moivre's theorem.²⁴ ²⁵

We have not shown yet, but shall in § 5.4, that

    cis φ = exp iφ = e^{iφ},

where exp(·) is the natural exponential function and e is the natural logarithmic base, both defined in Ch. 5. De Moivre's theorem is most useful in this light.

²⁴ Also called de Moivre's formula. Some authors apply the name of de Moivre directly only to (3.26), or to some variation thereof; but since the three equations express essentially the same idea, if you refer to any of them as de Moivre's theorem then you are unlikely to be misunderstood.
²⁵ [44][52]
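De Moivre's theorem is likewise easy to spot-check numerically. A sketch (the `cis` helper is this writer's; the check sticks to exponents for which Python's principal-valued complex power agrees with ρᵃ cis aφ):

```python
import math

def cis(phi):
    # cis(phi) = cos(phi) + i sin(phi)
    return complex(math.cos(phi), math.sin(phi))

rho, phi = 1.7, 0.6
z = rho * cis(phi)

# (3.26): z**a = rho**a * cis(a*phi).
for a in (2, 3, 0.5, -1.25):
    assert abs(z ** a - (rho ** a) * cis(a * phi)) < 1e-12
print("de Moivre's theorem checks out numerically")
```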
Chapter 4

The derivative

The mathematics of calculus concerns a complementary pair of questions:¹

• Given some function f(t), what is the function's instantaneous rate of change, or derivative, f′(t)?
• Interpreting some function f′(t) as an instantaneous rate of change, what is the corresponding accretion, or integral, f(t)?

This chapter builds toward a basic understanding of the first question.

¹ Although once grasped the concept is relatively simple, to understand this pair of questions, so briefly stated, is no trivial thing. They are the pair which eluded or confounded the most brilliant mathematical minds of the ancient world. The greatest conceptual hurdle—the stroke of brilliance—probably lies simply in stating the pair of questions clearly. Sir Isaac Newton and G.W. Leibnitz cleared this hurdle for us in the seventeenth century, so now at least we know the right pair of questions to ask. With the pair in hand, the calculus beginner's first task is quantitatively to understand the pair's interrelationship, generality and significance. Such an understanding constitutes the basic calculus concept. It cannot be the role of a book like this one to lead the beginner gently toward an apprehension of the basic calculus concept; many instructional textbooks—[20] is a worthy example—have been written to lead the beginner gently. Once grasped, the concept is simple and briefly stated, so in this book we necessarily state the concept briefly, then move along. Although a sufficiently talented, dedicated beginner could perhaps obtain the basic calculus concept directly here, he would probably find it quicker and more pleasant to begin with a book like the one referenced.

4.1  Infinitesimals and limits

Calculus systematically treats numbers so large and so small that they lie beyond the reach of our mundane number system.
4.1.1  The infinitesimal

A number ε is an infinitesimal if it is so small that

    0 < |ε| < a

for all possible mundane positive numbers a. This is somewhat a difficult concept, so if it is not immediately clear then let us approach the matter colloquially. Let me propose to you that I have an infinitesimal.

"How big is your infinitesimal?" you ask.

"Very, very small," I reply.

"How small?"

"Very small."

"Smaller than 0x0.01?"

"Smaller than what?"

"Than 2⁻⁸. You said that we should use hexadecimal notation in this book, remember?"

"Sorry. Yes, right. Smaller than 0x0.01."

"What about 0x0.0001? Is it smaller than that?"

"Much smaller."

"Smaller than 0x0.0000 0000 0000 0001?"

"Smaller."

"Smaller than 2⁻⁰ˣ¹⁰⁰⁰⁰ ⁰⁰⁰⁰ ⁰⁰⁰⁰ ⁰⁰⁰⁰?"

"Now that is an impressively small number. Nevertheless, my infinitesimal is smaller still."

"Zero, then."

"Oh, no. Bigger than that. My infinitesimal is definitely bigger than zero."

This is the idea of the infinitesimal. It is a definite number of a certain nonzero magnitude, but its smallness conceptually lies beyond the reach of our mundane number system.

If ε is an infinitesimal, then 1/ε can be regarded as an infinity: a very large number much larger than any mundane number one can name. The principal advantage of using symbols like ε rather than 0 for infinitesimals is that it permits us conveniently to compare one infinitesimal against another, to add them together, to divide them, etc. For instance, if δ = 3ε is another infinitesimal, then the quotient δ/ε is not some unfathomable 0/0; rather it is δ/ε = 3. In physical applications, the infinitesimals are often not true mathematical infinitesimals but rather relatively very small quantities such as the mass of a wood screw compared to the mass
The symbol "limQ " is short for "in the limit as Q. INFINITESIMALS AND LIMITS
69
of a wooden house frame.6)." Notice that lim is not a function like log or sin.1. one common way to specify that ǫ be infinitesimal is to write that ǫ ≪ 1.2 The second-order infinitesimal ǫ2 is so small on the scale of the common. the implication is that z draws toward zo from + the positive side such that z > zo . used when saying that the quantity
2 Among scientists and engineers who study wave phenomena. or v ≫ u. The reason for the notation is to provide a way to handle expressions like 3z 2z as z vanishes: 3z 3 = . indicates that u is much less than v. depending on your point of view. the − implication is that z draws toward zo from the negative side. even very roughly speaking.1.
4.
. When written limz→zo . It is just a reminder that a quantity approaches some value. On the other hand. positions of spacecraft and concentrations of chemical impurities must sometimes be accounted more precisely). In any case. typically such that one can regard the quantity u/v to be an infinitesimal. whatever "negligible" means in the context.and higher-order infinitesimals are likewise possible. Similarly. when written limz→zo .4. The ǫ2 is an infinitesimal to the infinitesimals. we should probably render the rule as twelve points per wavelength here. first-order infinitesimal ǫ that the even latter cannot measure it. In fact. For quantities between 1/0xC and 1/0x10000. The key point is that the infinitesimal quantity be negligible by comparison. The notation u ≪ v. there is an old rule of thumb that sinusoidal waveforms be discretized not less finely than ten points per wavelength. The additional cost of inviting one more guest to the wedding may or may not be infinitesimal. or the audio power of your voice compared to that of a jet engine. In keeping with this book's adecimal theme (Appendix A) and the concept of the hour of arc (§ 3.2
Limits
The notation limz→zo indicates that z draws as near to zo as it possibly can. Third. it depends on the accuracy one seeks. a quantity greater then 1/0xC of the principal to which it compares probably cannot rightly be regarded as infinitesimal.
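The 3z/2z example can be watched numerically; a trivial sketch:

```python
# The ratio 3z/2z never budges from 3/2 as z shrinks toward zero;
# lim just records the approach without ever setting z = 0.
for z in (1.0, 1e-4, 1e-8, 1e-12):
    assert abs((3 * z) / (2 * z) - 1.5) < 1e-12
print("lim as z -> 0 of 3z/2z = 3/2")
```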
Consider that to say

    lim_{z→2⁻} (z + 2) = 4

is just a fancy way of saying that 2 + 2 = 4. The lim notation is convenient to use sometimes, but it is not magical. Don't let it confuse you.

4.2  Combinatorics

In its general form, the problem of selecting k specific items out of a set of n available items belongs to probability theory ([chapter not yet written]). In its basic form, however, the same problem also applies to the handling of polynomials or power series. This section treats the problem in its basic form.³

³ [20]

4.2.1  Combinations and permutations

Consider the following scenario. I have several small wooden blocks of various shapes and sizes, painted different colors so that you can clearly tell each block from the others. If I offer you the blocks and you are free to take all, some or none of them at your option—if you can take whichever blocks you want—then how many distinct choices of blocks do you have? Answer: you have 2ⁿ choices, because you can accept or reject the first block, then accept or reject the second, then the third, and so on.

Now, however, suppose that what you want is exactly k blocks, neither more nor fewer. Desiring exactly k blocks, you select your favorite block first: there are n options for this. Then you select your second favorite: for this, there are n − 1 options (why not n options? because you have already taken one block from me; I have only n − 1 blocks left). Then you select your third favorite—for this there are n − 2 options—and so on until you have k blocks. There are evidently

    nPk ≡ n!/(n − k)!    (4.1)

ordered ways, or permutations, available for you to select exactly k blocks. However, some of these distinct permutations put exactly the same combination of blocks in your hand; for instance, the permutations red-green-blue and green-red-blue constitute the same combination, whereas red-white-blue is a different combination entirely. For a single combination
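The counting argument can be sketched in a few lines. The helper names are this writer's, and the division by k! anticipates the combination count toward which the section is heading:

```python
from math import factorial

def permutations_count(n, k):
    # nPk = n!/(n-k)!: ordered ways to pick k distinct blocks from n.
    return factorial(n) // factorial(n - k)

def combinations_count(n, k):
    # Each combination of k blocks appears k! times among the
    # permutations, so divide the k! orderings back out.
    return permutations_count(n, k) // factorial(k)

n = 5
# Accept-or-reject each of the n blocks independently: 2**n total choices.
assert sum(combinations_count(n, k) for k in range(n + 1)) == 2 ** n
print(permutations_count(5, 3), combinations_count(5, 3))  # 60 10
```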
Changing 1 + δ ← (1 + ε)ⁿ and observing from the (1 + εₒ)ⁿ equation above that this implies that δ ≈ nε, we have that

    (1 + ε)^{n/m} ≈ 1 + (n/m)ε.

Inverting this equation yields

    (1 + ε)^{−n/m} ≈ 1/[1 + (n/m)ε] = [1 − (n/m)ε]/([1 − (n/m)ε][1 + (n/m)ε]) ≈ 1 − (n/m)ε.

Taken together, the last two equations imply that

    (1 + ε)ˣ ≈ 1 + xε    (4.13)

for any real x. The writer knows of no conventional name⁵ for (4.13), but named or unnamed it is an important equation. The equation offers a simple, accurate way of approximating any real power of numbers in the near neighborhood of 1.

⁴ The Taylor expansion as such will be introduced in Ch. 8.
⁵ Other than "the first-order Taylor expansion," but such an unwieldy name does not fit the present context. The Taylor expansion as such will be introduced in Ch. 8.

4.3  Complex powers of numbers near unity

Equation (4.13) raises the question: what if ε or x, or both, are complex? Changing the symbol z ← x and observing that the infinitesimal ε may also be complex, one wants to know whether

    (1 + ε)ᶻ ≈ 1 + zε    (4.14)

still holds. No work we have yet done in the book answers the question, because although a complex infinitesimal ε poses no particular problem, the action of a complex power z remains undefined. Still, for consistency's sake, one would like (4.14) to hold. In fact nothing prevents us from defining the action of a complex power such that (4.14) does hold, which we now do, logically extending the known result (4.13) into the new domain. Section 5.4 will investigate the extremely interesting effects which arise when ℜ(ε) = 0 and the power z in (4.14) grows large, but for the moment we shall use the equation in a more ordinary manner to develop the concept and basic application of the derivative.
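Equations (4.13) and (4.14) can be checked numerically. A sketch (this writer's illustration), including a complex infinitesimal and a complex power:

```python
# Check (1 + eps)**x ~= 1 + x*eps for small eps, real or complex,
# and for real or complex powers x.
for eps in (1e-3, 1e-6, 1e-3 * 1j, 1e-4 + 1e-4j):
    for x in (2.5, -0.75, 1.5 + 0.5j):
        exact = (1 + eps) ** x
        approx = 1 + x * eps
        # The first-order approximation's error shrinks like eps**2.
        assert abs(exact - approx) < 10 * abs(eps) ** 2
print("(1 + eps)**x ~= 1 + x*eps verified")
```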
4.4  The derivative

Having laid down (4.14), we now stand in a position properly to introduce the chapter's subject, the derivative. What is the derivative? The derivative is the instantaneous rate or slope of a function. In mathematical symbols and for the moment using real numbers,

    f′(t) ≡ lim_{ε→0⁺} [f(t + ε/2) − f(t − ε/2)] / ε.    (4.15)

Alternately, one can define the same derivative in the unbalanced form

    f′(t) = lim_{ε→0⁺} [f(t + ε) − f(t)] / ε,

but this book generally prefers the more elegant balanced form (4.15), which we will now use in developing the derivative's several properties through the rest of the chapter.⁶

⁶ From this section through § 4.7, the mathematical notation grows a little thick. There is no helping this. The reader is advised to tread through these sections line by stubborn line, in the good trust that the math thus gained will prove both interesting and useful.

4.4.1  The derivative of the power series

In the very common case that f(t) is the power series

    f(t) = Σ_{k=−∞}^{∞} c_k tᵏ,    (4.16)

where the c_k are in general complex coefficients, (4.15) says that

    f′(t) = Σ_{k=−∞}^{∞} lim_{ε→0⁺} [(c_k)(t + ε/2)ᵏ − (c_k)(t − ε/2)ᵏ] / ε
          = Σ_{k=−∞}^{∞} lim_{ε→0⁺} c_k tᵏ [(1 + ε/2t)ᵏ − (1 − ε/2t)ᵏ] / ε.

Applying (4.14), this is

    f′(t) = Σ_{k=−∞}^{∞} lim_{ε→0⁺} c_k tᵏ [(1 + kε/2t) − (1 − kε/2t)] / ε,

which simplifies to

    f′(t) = Σ_{k=−∞}^{∞} c_k k tᵏ⁻¹.    (4.17)

Equation (4.17) gives the general derivative of the power series.⁷

⁷ Equation (4.17) admittedly has not explicitly considered what happens when the real t becomes the complex z, but § 4.4.3 will remedy the oversight.
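The balanced difference of (4.15) and the term-by-term rule (4.17) can be compared numerically on a small polynomial. A sketch (the cubic is this writer's example, not the book's):

```python
# f(t) = 2t**3 - 5t + 7, so (4.17) predicts f'(t) = 6t**2 - 5.
def f(t):
    return 2 * t**3 - 5 * t + 7

def fprime(t):
    return 6 * t**2 - 5  # the term-by-term rule c_k k t**(k-1)

t = 1.3
for eps in (1e-2, 1e-3, 1e-4):
    balanced = (f(t + eps / 2) - f(t - eps / 2)) / eps
    # The balanced form's truncation error shrinks like eps**2 ...
    assert abs(balanced - fprime(t)) < eps**2
# ... while the unbalanced form's error shrinks only like eps.
unbalanced = (f(t + 1e-4) - f(t)) / 1e-4
assert abs(unbalanced - fprime(t)) > 1e-5
print("balanced difference agrees with 6t**2 - 5 at t =", t)
```

The comparison illustrates why the book prefers the balanced form.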
4.4.2  The Leibnitz notation

The f′(t) notation used above for the derivative is due to Sir Isaac Newton, and is easier to start with. Usually better on the whole, however, is G.W. Leibnitz's notation⁸

    dt = ε,
    df = f(t + dt/2) − f(t − dt/2),

such that per (4.15),

    f′(t) = df/dt.    (4.18)

Here dt is the infinitesimal, and df is a dependent infinitesimal whose size relative to dt depends on the independent variable t. For the independent infinitesimal dt, conceptually, one can choose any infinitesimal size ε. Usually the exact choice of size does not matter, but occasionally, when there are two independent variables, it helps the analysis to adjust the size of one of the independent infinitesimals with respect to the other.

The meaning of the symbol d unfortunately depends on the context. In (4.18) the meaning is clear enough: d(·) signifies how much (·) changes when the independent variable t increments by dt.⁹ Notice, however, that the notation dt itself has two distinct meanings:¹⁰

• the independent infinitesimal dt = ε; and
• d(t), which is how much (t) changes as t increments by dt.

At first glance, the distinction between dt and d(t) seems a distinction without a difference; and for most practical cases of interest, so indeed it is, so there is usually no trouble with treating dt and df as though they were the same kind of thing. At the fundamental level, however, they really aren't.

⁸ This subsection is likely to confuse many readers the first time they read it. The reason is that Leibnitz elements like dt and ∂f usually tend to appear in practice in certain specific relations to one another. As a result, many users of applied mathematics have never developed a clear understanding as to precisely what the individual symbols mean; often they have developed positive misunderstandings. Because there is significant practical benefit in learning how to handle the Leibnitz notation correctly—particularly in applied complex variable theory—this subsection seeks to present each Leibnitz element in its correct light.
⁹ If you do not fully understand this sentence, reread it carefully with reference to (4.15) and (4.18) until you do; it's important.
¹⁰ This is difficult, yet the author can think of no clearer, more concise way to state it. If t is an independent variable, then dt is just an infinitesimal of some kind, whose specific size could be a function of t—one can have dt = ε(t)—but more likely is just a constant. By contrast, if f is a dependent variable, then df or d(f) is the amount by which f changes as t changes by dt. The df is infinitesimal but not constant; it is a function of t. Maybe it would be clearer in some cases to write ε instead of dt, or d_t f instead of df, but for most cases the former notation is unnecessarily cluttered and the latter is how it is conventionally written. Now, most of the time what we are interested in is not dt or df as such, but rather the ratio df/dt or the sum Σ_k f(k dt) dt = ∫ f(t) dt; so we do not usually worry about which of df and dt is the independent infinitesimal, nor do we usually worry about the precise value of dt. This leads one to forget that dt does indeed have a precise value. Sometimes when writing a differential equation like the potential-kinetic energy equation ma dx = mv dv, we do not necessarily have either v or x in mind as the independent variable; the important point is that dv and dx be coordinated so that the ratio dv/dx has a definite value no matter which of the two be regarded as independent, or whether the independent be some third variable (like t) not in the equation. One can avoid confusion simply by keeping the dv/dx or df/dt always in ratio, never treating the infinitesimals individually; many applied mathematicians do precisely that. That is okay as far as it goes, but it really denies the entire point of the Leibnitz notation—one might as well just stay with the Newton notation in that case. Instead, this writer recommends that you learn the Leibnitz notation properly, developing the ability to treat the infinitesimals individually. Changing perspective in mid-analysis as to which variables are dependent and which are independent is allowed and perfectly proper, but one must take care: the dt and df after the change are not the same as the dt and df before the change. When switching perspective, or when changing multiple independent complex variables simultaneously, the math can get a little tricky. Also, even if s and t are equally independent variables, one can define their infinitesimals such that dt has prior independence to ds, ds = δ(s, t); or, if we like, we can (and in complex analysis sometimes do) say that ds = dt = ε, after which nothing prevents us from using the symbols ds and dt interchangeably. The point is not to fathom all the possible implications from the start; the point is to develop a clear picture in your mind of what a Leibnitz infinitesimal really is. Once you have the picture, you can go from there. Because the book is a book of applied mathematics, this footnote does not attempt to say everything there is to say about infinitesimals.

Where two or more independent variables are at work in the same equation, it is conventional to use the symbol ∂ instead of d, as a reminder that the reader needs to pay attention to which ∂ tracks which independent variable. (One can write ∂_t(·) when tracking t, ∂_s(·) when tracking s.) A derivative ∂f/∂t or ∂f/∂s in this case is sometimes called by the slightly misleading name of partial derivative.¹¹ In such cases, it may also be wise to use the symbol dt to mean d(t) only, introducing some unambiguous symbol like ε to represent the independent infinitesimal; but use discretion, as such notation appears only rarely in the literature, so your audience might not understand it when you write it.

Conventional shorthand for d(df) is d²f, and for (dt)², dt², so that

    d(df/dt)/dt = d²f/dt²

is a derivative of a derivative, or second derivative. By extension, the notation dᵏf/dtᵏ represents the kth derivative.
4.4.3  The derivative of a function of a complex variable

For (4.15) to be robust, written here in the slightly more general form

    df/dz = lim_{ε→0} [f(z + ε/2) − f(z − ε/2)] / ε,    (4.19)

one should like it to evaluate the same in the limit regardless of the Argand direction from which ε approaches 0 (see Fig. 2.5)—that is, regardless of the complex phase of ε. If δ is a positive real infinitesimal, then it should be equally valid to let ε = δ, ε = −δ, ε = iδ, ε = −iδ, ε = (4 − i3)δ or any other infinitesimal value, so long as 0 < |ε| ≪ 1. In fact, for the sake of robustness, one normally demands that derivatives do come out the same regardless of the Argand direction, and (4.15) is the definition we normally use for the derivative for this reason.

Where the limit (4.19) does exist—where the derivative is finite and insensitive to Argand direction—there we say that the function f(z) is differentiable. Excepting the nonanalytic parts of complex numbers (|·|, arg[·], [·]*, ℜ[·] and ℑ[·]; see Ch. 2), plus the Heaviside unit step u(t) and the Dirac delta δ(t) (Ch. 7), most functions encountered in applications do meet the criterion (4.19) except at isolated nonanalytic points (like z = 0 in h[z] = 1/z or g[z] = √z). Meeting the criterion, such functions are fully differentiable except at their poles (where the derivative goes infinite in any case) and other nonanalytic points. Where the derivative (4.19) is sensitive to the Argand direction or complex phase of ε, there we normally say that the derivative does not exist.

¹¹ The writer confesses that he remains unsure why this minor distinction merits the separate symbol ∂, but he accepts the notation as conventional nevertheless.
4.4.4  The derivative of zᵃ

Inspection of § 4.4.1's logic in light of (4.14) reveals that nothing prevents us from replacing the real t, real ε and integral k of that section with arbitrary complex z, ε and a. That is,

    d(zᵃ)/dz = lim_{ε→0} [(z + ε/2)ᵃ − (z − ε/2)ᵃ] / ε
             = lim_{ε→0} zᵃ [(1 + ε/2z)ᵃ − (1 − ε/2z)ᵃ] / ε
             = lim_{ε→0} zᵃ [(1 + aε/2z) − (1 − aε/2z)] / ε,

which simplifies to

    d(zᵃ)/dz = a zᵃ⁻¹    (4.21)

for any complex z and a. Hence likewise the key formula (4.20),

    d/dz Σ_{k=−∞}^{∞} c_k zᵏ = Σ_{k=−∞}^{∞} c_k k zᵏ⁻¹,

the derivative (4.17) of the general power series, holds equally well for complex z as for real. How exactly to evaluate zᵃ or zᵃ⁻¹ when a is complex is another matter, treated in § 5.4; but in any case you can use (4.21) for real a right now.
4.4.5  The logarithmic derivative

Sometimes one is more interested in knowing the rate of f(t) relative to the value of f(t) than in knowing the absolute rate itself. For example, if you inform me that you earn $1000 a year on a bond you hold, then I may commend you vaguely for your thrift but otherwise the information does not tell me much. However, if you inform me instead that you earn 10 percent a year on the same bond, then I might want to invest. The latter figure is a relative rate, or logarithmic derivative,

    (df/dt) / f(t) = (d/dt) ln f(t).    (4.22)

The investment principal grows at the absolute rate df/dt, but the bond's interest rate is (df/dt)/f(t). The natural logarithmic notation ln f(t) may not mean much to you yet, as we'll not introduce it formally until § 5.2, so you can ignore the right side of (4.22) for the moment; but the equation's left side at least should make sense to you. It expresses the significant concept of a relative rate.

4.5  Basic manipulation of the derivative

This section introduces the derivative chain and product rules.
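The bond example can be sketched numerically, assuming (this writer's assumption, not the book's) that the bond compounds continuously so that f(t) = P·e^{rate·t}:

```python
import math

rate, principal = 0.10, 1000.0

def f(t):
    return principal * math.exp(rate * t)

t, eps = 7.0, 1e-6
dfdt = (f(t + eps / 2) - f(t - eps / 2)) / eps

# The absolute rate df/dt depends on how large f has grown, but the
# relative rate (df/dt)/f(t) stays fixed at 10 percent a year.
assert abs(dfdt / f(t) - rate) < 1e-7
# It equals d(ln f)/dt, the logarithmic derivative of (4.22).
log_slope = (math.log(f(t + eps / 2)) - math.log(f(t - eps / 2))) / eps
assert abs(log_slope - rate) < 1e-7
print("absolute rate:", dfdt, " relative rate:", dfdt / f(t))
```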
4.6  Extrema and higher derivatives

One problem which arises very frequently in applied mathematics is the problem of finding a local extremum—that is, a local minimum or maximum—of a real-valued function f(x). Refer to Fig. 4.3.

Figure 4.3: A local extremum. [The figure plots a curve y = f(x) running level at the point (xo, f(xo)).]

The almost distinctive characteristic of the extremum f(xo) is that¹⁶

    df/dx |_{x=xo} = 0.    (4.28)

At the extremum, the slope is zero; the curve momentarily runs level there. One solves (4.28) to find the extremum.

Whether the extremum be a minimum or a maximum depends on whether the curve turn from a downward slope to an upward, or from an upward slope to a downward, respectively. If from downward to upward, then the derivative of the slope is evidently positive; if from upward to downward, then negative. But the derivative of the slope is just the derivative of the derivative, or second derivative. Hence if df/dx = 0 at x = xo, then

    d²f/dx² |_{x=xo} > 0 implies a local minimum at xo;
    d²f/dx² |_{x=xo} < 0 implies a local maximum at xo.

¹⁶ The notation P|Q means "P when Q," "P, given Q," or "P evaluated at Q." Sometimes it is alternately written P|_Q or [P]_Q.
4.7  L'Hôpital's rule

If z = zo is a root of both f(z) and g(z), or alternately if z = zo is a pole of both functions—that is, if both functions go to zero or infinity together at z = zo—then l'Hôpital's rule holds that

    lim_{z→zo} f(z)/g(z) = [ (df/dz) / (dg/dz) ]_{z=zo}.    (4.29)
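A trivial numerical spot check of the rule on a 0/0 form (this writer's example):

```python
# f and g share the root z = 1, a 0/0 form. L'Hopital predicts
# lim f/g = (df/dz)/(dg/dz) at z = 1, which is 2*z/1 = 2 there.
def f(z):
    return z * z - 1

def g(z):
    return z - 1

predicted = 2.0
for h in (1e-2, 1e-4, 1e-6):
    ratio = f(1 + h) / g(1 + h)
    assert abs(ratio - predicted) < 2 * h  # the ratio is exactly 2 + h here
print("f/g ->", predicted)
```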
Figure 4.4: A level inflection. [The figure plots a curve y = f(x) running level at the point (xo, f(xo)) without turning.]

Regarding the case

    d²f/dx² |_{x=xo} = 0,

this might be either a minimum or a maximum but more probably is neither, being rather a level inflection point as depicted in Fig. 4.4.¹⁷ Of course, if the first and second derivatives are zero not just at x = xo but everywhere, then f(x) = yo is just a level straight line—but you knew that already. Whether one chooses to call some random point on a level straight line an inflection point or an extremum, or both or neither, would be a matter of definition, best established not by prescription but rather by the needs of the model at hand.

¹⁷ In general, the term inflection point signifies a point at which the second derivative is zero. The inflection point of Fig. 4.4 is level because its first derivative is zero, too.
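The second-derivative test of § 4.6 is easy to exercise numerically. A sketch using finite differences on a cubic (this writer's example):

```python
# f(x) = x**3 - 3*x has df/dx = 3*x**2 - 3 = 0 at x = +1 and x = -1.
# The second derivative d2f/dx2 = 6*x sorts minimum from maximum.
def f(x):
    return x**3 - 3 * x

eps = 1e-5
for x0 in (1.0, -1.0):
    d1 = (f(x0 + eps / 2) - f(x0 - eps / 2)) / eps
    d2 = (f(x0 + eps) - 2 * f(x0) + f(x0 - eps)) / eps**2
    assert abs(d1) < 1e-9  # (4.28): slope zero at the extremum
    kind = "local minimum" if d2 > 0 else "local maximum"
    print(x0, kind)  # +1 is the minimum, -1 the maximum
```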
L'Hôpital's rule is used in evaluating indeterminate forms of the kinds 0/0 and ∞/∞, plus related forms like (0)(∞) which can be recast into either of the two main forms. Good examples of the use require math from Ch. 5 and later; but if we may borrow from (5.7) the natural logarithmic function and its derivative,²⁰

    d(ln x)/dx = 1/x,

then a typical l'Hôpital example is²¹

    lim_{x→∞} ln x / √x = lim_{x→∞} (1/x) / (1/2√x) = lim_{x→∞} 2/√x = 0.

The example incidentally shows that natural logarithms grow slower than square roots, an instance of a more general principle we shall meet in § 5.3. Section 5.3 will put l'Hôpital's rule to work.

²⁰ This paragraph is optional reading for the moment. You can read Ch. 5 first, then come back here and read the paragraph if you prefer.
²¹ [42, § 10-2]

4.8  The Newton-Raphson iteration

The Newton-Raphson iteration is a powerful, fast converging, broadly applicable method for finding roots numerically. Given a function f(z) of which the root is desired, the Newton-Raphson iteration is

    z_{k+1} = [ z − f(z) / (d/dz f(z)) ]_{z=zk}.    (4.30)

One begins the iteration by guessing the root and calling the guess z0. Then z1, z2, z3, etc., calculated in turn by the iteration (4.30), give successively better estimates of the true root z∞.

To understand the Newton-Raphson iteration, consider the function y = f(x) of Fig. 4.5.
is the line which most nearly approximates a curve at a given point. leaving the rightward leg tangent to the circle. and in the neighborhood of the point it goes in the same direction the curve goes.
y
87
x xk xk+1 f (x)
line22 (shown as the dashed line in the figure): ˜ fk (x) = f (xk ) + d f (x) dx (x − xk ).5: The Newton-Raphson iteration. we have that xk+1 = x − f (x)
d dx f (x) x=xk
d f (x) dx
x=xk
(xk+1 − xk ).
x=xk
˜ It then approximates the root xk+1 as the point at which fk (xk+1 ) = 0: ˜ fk (xk+1 ) = 0 = f (xk ) + Solving for xk+1 . 3 is slightly obscure. The trigonometric tangent function is named from a variation on Fig. nothing forbids complex z and f (z). THE NEWTON-RAPHSON ITERATION Figure 4. Although the illustration uses real numbers.
A tangent line.
which is (4.4. 4.30) with x ← z.
22
.8. 3. also just called a tangent.1 in which the triangle's bottom leg is extended to unit length.
. maybe more of linguistic interest than of mathematical. The dashed line in Fig. The Newton-Raphson iteration works just as well for these. The tangent touches the curve at the point.5 is a good example of a tangent line. The relationship between the tangent line and the trigonometric tangent function of Ch.
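Equation (4.30) translates directly into a short program. The following is a minimal sketch, not the book's own code; the function names are illustrative.

```python
# The Newton-Raphson iteration (4.30): z_{k+1} = z_k - f(z_k)/f'(z_k).

def newton_raphson(f, df, z0, steps=50):
    """Iterate (4.30) from the initial guess z0."""
    z = z0
    for _ in range(steps):
        z = z - f(z) / df(z)
    return z

# Find the root of f(z) = z**2 - 2 near the guess z0 = 1.
root = newton_raphson(lambda z: z * z - 2.0, lambda z: 2.0 * z, 1.0)
assert abs(root - 2.0 ** 0.5) < 1e-12
```

In practice one would also stop early once successive estimates agree to the desired precision; a fixed step count keeps the sketch simple.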
Usually in practice, the Newton-Raphson iteration works very well. For most functions, once the Newton-Raphson finds the root's neighborhood, it converges on the actual root remarkably quickly. Figure 4.5 shows why: in the neighborhood, the curve hardly departs from the straight line.

The principal limitation of the Newton-Raphson arises when the function has more than one root, as most interesting functions do. The iteration often converges on the root nearest the initial guess zo but does not always, and in any case there is no guarantee that the root it finds is the one you wanted. The most straightforward way to beat this problem is to find all the roots: first you find some root α; then you remove that root (without affecting any of the other roots) by dividing f(z)/(z − α); then you find the next root by iterating on the new function f(z)/(z − α); and so on until you have found all the roots. If this procedure is not practical (perhaps because the function has a large or infinite number of roots), then you should probably take care to make a sufficiently accurate initial guess if you can.

A second limitation of the Newton-Raphson is that, if you happen to guess z0 especially unfortunately, then the iteration might never converge at all. For example, the roots of f(z) = z² + 2 are z = ±i√2, but if you guess that z0 = 1 then the iteration has no way to leave the real number line, so it never converges23 (and if you guess that z0 = √2—well, try it with your pencil and see what z2 comes out to be). You can fix the problem with a different, possibly complex initial guess.

A third limitation arises where there is a multiple root. In this case, the Newton-Raphson normally still converges, but relatively slowly. For instance, the Newton-Raphson converges relatively slowly on the triple root of f(z) = z³. However, even the relatively slow convergence is still pretty fast and is usually adequate, even for calculations by hand.

The Newton-Raphson iteration is a champion square root calculator, incidentally. Consider

    f(x) = x² − p,

whose roots are x = ±√p. Per (4.30), the Newton-Raphson iteration for this is

    x_{k+1} = (1/2) [ x_k + p/x_k ].    (4.31)

If you start by guessing x0 = 1 and iterate several times, the iteration (4.31) converges on x∞ = √p fast.

    23 It is entertaining to try this on a computer. Then try again with z0 = 1 + i2^(−0x10).
To calculate the nth root x = p^(1/n), let

    f(x) = x^n − p

and iterate24,25

    x_{k+1} = (1/n) [ (n − 1) x_k + p/x_k^(n−1) ].    (4.32)

Equations (4.31) and (4.32) work not only for real p but also usually for complex; however, they converge reliably and orderly only for real, nonnegative p. (To see why, sketch f[x] in the fashion of Fig. 4.5.) If reliable, orderly convergence is needed for complex p = u + iv = σ cis ψ, σ ≥ 0, you can decompose p^(1/n) per de Moivre's theorem (3.26) as p^(1/n) = σ^(1/n) cis(ψ/n), in which cis(ψ/n) = cos(ψ/n) + i sin(ψ/n) is calculated by the Taylor series of Table 8.1. Then σ is real and nonnegative, upon which (4.32) reliably, orderly computes σ^(1/n).

The Newton-Raphson iteration however excels as a practical root-finding technique, so it often pays to be a little less theoretically rigid in applying it. If so, then don't bother to decompose; given x0 = 1, seek p^(1/n) directly, using complex z_k in place of the real x_k. This saves effort and usually works. In the uncommon event that the direct iteration does not seem to converge, start over again with some randomly chosen complex z0.

Section 13.7 generalizes the Newton-Raphson iteration to handle vector-valued functions.

This concludes the chapter. Chapter 8, treating the Taylor series, will continue the general discussion of the derivative.

    24 [42, § 4-9][34, § 6.1]
    25 [51]
Chapter 5

The complex exponential

In higher mathematics, the complex natural exponential is almost impossible to avoid. It seems to appear just about everywhere. This chapter introduces the concept of the natural exponential and of its inverse, the natural logarithm; works out the functions' derivatives and the derivatives of the basic members of the trigonometric and inverse trigonometric families to which they respectively belong; derives the functions' basic properties and explains their close relationship to the trigonometrics; and shows how the two operate on complex arguments.

5.1  The real exponential

Consider the factor

    (1 + ǫ)^N.

This is the overall factor by which a quantity grows after N iterative rounds of multiplication by (1 + ǫ). What happens when ǫ is very small but N is very large? The really interesting question is, what happens in the limit, as ǫ → 0 and N → ∞, while x = ǫN remains a finite number? The answer is that the factor becomes

    exp x ≡ lim[ǫ→0] (1 + ǫ)^(x/ǫ).    (5.1)

Equation (5.1) defines the natural exponential function—commonly, more briefly named the exponential function. Another way to write the same definition is

    exp x = e^x,    (5.2)
    e ≡ lim[ǫ→0] (1 + ǫ)^(1/ǫ).    (5.3)
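The limit (5.3) can be watched converging numerically. A minimal sketch (the variable names are illustrative):

```python
import math

# (5.3): e = lim (1 + eps)**(1/eps) as eps -> 0.  Shrinking eps walks
# the expression toward e = 2.71828...; each smaller eps lands closer.
approximations = [(1.0 + eps) ** (1.0 / eps) for eps in (1e-2, 1e-4, 1e-6, 1e-9)]
errors = [abs(a - math.e) for a in approximations]

assert errors == sorted(errors, reverse=True)   # monotonically closer
assert errors[-1] < 1e-5
```

Taking eps smaller still eventually backfires in floating point, because 1 + eps rounds away the information; the sketch stops before that regime.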
Whichever form we write it in, the question remains as to whether the limit actually exists; that is, whether 0 < e < ∞; whether in fact we can put some concrete bound on e. It seems nonobvious that the limit lim[ǫ→0] (1 + ǫ)^(1/ǫ) actually does exist. The rest of this section shows why it does.1

To show that we can put a concrete bound on e, we observe per (5.1) that the essential action of the exponential function is to multiply repeatedly by 1 + ǫ as x increases and, leftward, to divide repeatedly by 1 + ǫ as x decreases. Since 1 + ǫ > 1, this action means for real x that

    exp x1 ≤ exp x2 if x1 ≤ x2.

However, a positive number remains positive no matter how many times one multiplies or divides it by 1 + ǫ, so the same action also means that

    0 ≤ exp x

for all real x.

Meanwhile, per (4.19), the derivative of the exponential function is

    d/dx exp x = lim[δ→0] [exp(x + δ/2) − exp(x − δ/2)]/δ
               = lim[δ,ǫ→0] [(1 + ǫ)^((x+δ/2)/ǫ) − (1 + ǫ)^((x−δ/2)/ǫ)]/δ
               = lim[δ,ǫ→0] (1 + ǫ)^(x/ǫ) [(1 + ǫ)^(+δ/2ǫ) − (1 + ǫ)^(−δ/2ǫ)]/δ
               = lim[δ,ǫ→0] (1 + ǫ)^(x/ǫ) [(1 + δ/2) − (1 − δ/2)]/δ
               = lim[ǫ→0] (1 + ǫ)^(x/ǫ),

which is to say that

    d/dx exp x = exp x.    (5.4)

This is a curious, important result: the derivative of the exponential function is the exponential function itself; the slope and height of the exponential function are everywhere equal. In light of (5.4), the last two inequalities imply further that

    d/dx exp x1 ≤ d/dx exp x2 if x1 ≤ x2,
    0 ≤ d/dx exp x.

For the moment, however, what interests us is that

    d/dx exp 0 = exp 0 = lim[ǫ→0] (1 + ǫ)^0 = 1,

which says that the slope and height of the exponential function are both unity at x = 0, implying that the straight line which best approximates the exponential function in that neighborhood—the tangent line, which just grazes the curve—is

    y(x) = 1 + x.

With the tangent line y(x) found, the next step toward putting a concrete bound on e is to show that y(x) ≤ exp x for all real x; that is, that the curve runs nowhere below the line. But we have purposely defined the tangent line y(x) = 1 + x such that

    exp 0 = y(0) = 1,
    d/dx exp 0 = d/dx y(0) = 1;

that is, such that the line just grazes the curve of exp x at x = 0. Rightward, at x > 0, evidently the curve's slope only increases, bending upward away from the line. Leftward, at x < 0, evidently the curve's slope only decreases, again bending upward away from the line. Either way, the curve never crosses below the line for real x. In symbols,

    y(x) ≤ exp x.

Evaluating the last inequality at x = −1/2 and at x = 1, we have that

    1/2 ≤ exp(−1/2),
    2 ≤ exp(1).

But per (5.2), exp x = e^x, so

    (1/2)² ≤ e^(−1),
    2 ≤ e^1,

or in other words,

    2 ≤ e ≤ 4,    (5.5)

which in consideration of (5.2) puts the desired bound on the exponential function. The limit does exist.

    1 Excepting (5.4), the author would prefer to omit much of the rest of this section, but even at the applied level cannot think of a logically permissible way to do it.
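The two facts at work here, that the slope of exp equals its height per (5.4) and that the tangent line y = 1 + x runs nowhere above the curve, both submit to a quick numerical check. A sketch, using a centered finite difference in the style of (4.19):

```python
import math

# (5.4): the slope of exp equals its height everywhere.
def slope(f, x, d=1e-6):
    return (f(x + d / 2) - f(x - d / 2)) / d

for x in (-1.0, 0.0, 2.5):
    assert abs(slope(math.exp, x) - math.exp(x)) < 1e-6

# The tangent line y = 1 + x grazes exp x at x = 0 and never exceeds it:
assert all(1 + x <= math.exp(x) for x in (-3.0, -0.5, 0.0, 1.0, 3.0))
```

The finite difference only approximates the derivative, of course; the tolerance absorbs the approximation error.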
Figure 5.1: The natural exponential. [The figure plots exp x against x; the curve passes through the point (0, 1).]

5.2  The natural logarithm

In the general exponential expression b^x, one can choose any base b; for example, b = 2 is an interesting choice. As we shall see in § 5.4, however, it turns out that b = e is the most interesting choice of all, where e is the constant introduced in (5.3). By the Taylor series of Table 8.1, the value

    e ≈ 0x2.B7E1

can readily be calculated, but the derivation of that series does not come until Ch. 8.

Just as for any other base b, so also for base b = e, the base-e logarithm is similarly interesting, such that we define for it the special notation ln(·) = log_e(·), and call it the natural logarithm. Thus the natural logarithm inverts the natural exponential and vice versa:

    ln exp x = ln e^x = x,
    exp ln x = e^(ln x) = x.    (5.6)
Figure 5.2 plots the natural logarithm. If y = ln x, then x = exp y, and per (5.4),

    dx/dy = exp y.

But this means that

    dx/dy = x,

the inverse of which is

    dy/dx = 1/x.

In other words,

    d/dx ln x = 1/x.    (5.7)

Like many of the equations in these early chapters, here is another rather significant result.2

Figure 5.2: The natural logarithm. [The figure plots ln x against x; the curve passes through the point (1, 0).]

    2 Besides the result itself, the technique which leads to the result is also interesting and is worth mastering. We shall use the technique more than once in this book.
5.3. FAST AND SLOW FUNCTIONS

…which reveals the exponential to be a fast function. Exponentials grow or decay faster than powers; logarithms diverge slower. Thus exponentials generally are fast and logarithms generally are slow, regardless of the base.3 Such conclusions are extended to bases other than the natural base e simply by observing that log_b x = ln x/ln b and that b^x = exp(x ln b).

It is interesting and worthwhile to contrast the sequence

    ..., −3!/x⁴, 2!/x³, −1!/x², 0!/x¹, ln x, x¹/1!, x²/2!, x³/3!, x⁴/4!, ...

against the sequence

    ..., −3!/x⁴, 2!/x³, −1!/x², 0!/x¹, x⁰/0!, x¹/1!, x²/2!, x³/3!, x⁴/4!, ...

As x → +∞, each sequence increases in magnitude going rightward. Also, each term in each sequence is the derivative with respect to x of the term to its right—except left of the middle element in the first sequence and right of the middle element in the second. The exception is peculiar. What is going on here?

The answer is that x⁰ (which is just a constant) and ln x both are of zeroth order in x. This seems strange at first because ln x diverges as x → ∞ whereas x⁰ does not. The natural logarithm does indeed eventually diverge to infinity, in the literal sense that there is no height it does not eventually reach, but the divergence is extremely slow—so slow, in fact, that per (5.9) lim[x→∞] (ln x)/x^ǫ = 0 for any positive ǫ no matter how small.4 Consider for instance how far out x must run to make ln x = 0x100. It's a long, long way. The logarithm increases, but it certainly does not hurry.

The logarithm can in some situations profitably be thought of as a "diverging constant" of sorts, and indeed one can write

    x⁰ = lim[u→∞] ln(x + u)/ln u,

which casts x⁰ as a logarithm shifted and scaled. Admittedly, one ought not strain such logic too far, because ln x is not in fact a constant; but the point nevertheless remains that x⁰ and ln x often play analogous roles in mathematics.

    3 There are of course some degenerate edge cases like b = 0 and b = 1. The reader can detail these as the need arises.
    4 One does not grasp how truly slow the divergence is until one calculates a few concrete values. Figure 5.2 has plotted ln x only for x ∼ 1, but beyond the figure's window the curve (whose slope is 1/x) flattens rapidly rightward, to the extent that it locally resembles the plot of a constant value. Admittedly, it takes practically forever just to reach 0x100.
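Just how far out x must run to make ln x = 0x100 can be computed directly: inverting, x = exp(0x100), a number of roughly 112 decimal digits. A sketch:

```python
import math

# How large must x be before ln x reaches 0x100 = 256?
x = math.exp(0x100)
assert abs(math.log(x) - 256.0) < 1e-10       # indeed ln x = 256 there
assert int(math.log10(x)) + 1 == 112          # about 112 decimal digits
```

Meanwhile ln x = 1 already at x = e ≈ 2.72, so reaching 256 multiplies the height a mere 256 times while multiplying x by a factor of astronomical size, which is the slowness the text describes.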
CHAPTER 5. THE COMPLEX EXPONENTIAL
Less strange-seeming perhaps is the consequence of (5.10). So let us multiply a complex number z = x + iy by 1 + iǫ. It befits an applied mathematician subjectively to internalize (5.
ǫ→0
but from here it is not obvious where to go. The resulting change in z is ∆z = (1 + iǫ)(x + iy) − (x + iy) = (ǫ)(−y + ix).4
Euler's formula
The result of § 5. then exp(iǫ) = (1 + iǫ)ǫ/ǫ = 1 + iǫ. How can one evaluate exp iθ = lim (1 + ǫ)iθ/ǫ . helps one to grasp mentally the essential features of many mathematical models one encounters in practice. that x∞ and exp x play analogous roles. where the factor is not quite exactly unity—in this case. if we don't quite know where to go with this yet. fast.14) to write the last equation in the form exp iθ = lim (1 + iǫ)θ/ǫ . to remember that ln x resembles x0 and that exp x resembles x∞ . by 1 + iǫ. one can take advantage of (4. Now leaving aside fast and slow functions for the moment.1 leads to one of the central questions in all of mathematics.10) that exp x is of infinite order in x. In fact it appears that the interpretation of exp iθ remains for us to define.
5. But per § 5. A qualitative sense that logarithms are slow and exponentials. obtaining (1 + iǫ)(x + iy) = (x − ǫy) + i(y + ǫx).9) and (5. the essential operation of the exponential function is to multiply repeatedly by some factor. we turn our attention in the next section to the highly important matter of the exponential of a complex argument. The book's development up to the present point gives no obvious direction. So.
ǫ→0
where i2 = −1 is the imaginary unit introduced in § 2. if we can find a way to define it which fits sensibly with our existing notions of the real exponential.1. what do we know? One thing we know is that if θ = ǫ.12? To begin.
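The repeated multiplication just described can be carried out numerically: each factor 1 + iǫ nudges z perpendicularly to itself, so z should travel the Argand unit circle. The sketch below (ǫ and the step count chosen only for illustration) checks that after accumulating total angle θ the product sits at exp iθ with its magnitude still near unity.

```python
import cmath

# Repeated multiplication of z = 1 by (1 + i*eps) with eps*N = theta.
theta = 1.0
N = 10 ** 5
eps = theta / N
z = 1.0 + 0j
for _ in range(N):
    z *= 1 + 1j * eps

assert abs(abs(z) - 1.0) < 1e-3               # magnitude stays on the circle
assert abs(z - cmath.exp(1j * theta)) < 1e-3  # landing point is exp(i*theta)
```

The small residual comes from each factor's magnitude being √(1 + ǫ²) rather than exactly 1; it vanishes as ǫ → 0.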
That is, exp iθ is the complex number which lies on the Argand unit circle at phase angle θ. Had we known that θ was an Argand phase angle, naturally we should have represented it by the symbol φ from the start. Changing φ ← θ now, we have for real φ that

    exp iφ = cos φ + i sin φ = cis φ,    (5.11)

which says neither more nor less than that

    |exp iφ| = 1,
    arg [exp iφ] = φ,

where cis(·) is as defined in § 3.11. Along with the Pythagorean theorem (2.47), the fundamental theorem of calculus (7.2) and Cauchy's integral formula (8.29), (5.11) is one of the most famous results in all of mathematics. It is called Euler's formula,5,6 and it opens the exponential domain fully to complex numbers, not just for the natural base e but for any base. How? Since exp(α + β) = e^(α+β) = exp α exp β and since (5.11) lets one express any complex number in the form z = x + iy = ρ exp iφ, if a complex base w is similarly expressed in the form w = u + iv = σ exp iψ, then it follows that

    w^z = exp[ln w^z] = exp[z ln w] = exp[(x + iy)(iψ + ln σ)] = exp[(x ln σ − ψy) + i(y ln σ + ψx)];

and since exp(α + β) = exp α exp β, the last equation is

    w^z = exp(x ln σ − ψy) exp i(y ln σ + ψx).    (5.12)

    5 For native English speakers who do not speak German, Leonhard Euler's name is pronounced as "oiler."
    6 An alternate derivation of Euler's formula (5.11)—less intuitive and requiring slightly more advanced mathematics, but briefer—constructs from Table 8.1 the Taylor series for exp iφ, cos φ and i sin φ, then adds the latter two to show them equal to the first of the three. Such an alternate derivation lends little insight, perhaps, but at least it builds confidence that we actually knew what we were doing when we came up with the incredible (5.11).
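Euler's formula (5.11) and the complex power formula (5.12) can both be checked with Python's complex arithmetic. A sketch (the particular w, z and φ values are arbitrary):

```python
import cmath, math

# (5.11): exp(i*phi) = cos(phi) + i*sin(phi), with unit magnitude and phase phi.
phi = 0.75
lhs = cmath.exp(1j * phi)
assert abs(lhs - complex(math.cos(phi), math.sin(phi))) < 1e-12
assert abs(abs(lhs) - 1.0) < 1e-12
assert abs(cmath.phase(lhs) - phi) < 1e-12

# (5.12): w**z = exp(x ln(sigma) - psi*y) * exp(i*(y ln(sigma) + psi*x)),
# where z = x + iy and w = sigma * exp(i*psi).
w, z = 2.0 + 1.0j, 0.5 - 0.25j
sigma, psi = abs(w), cmath.phase(w)
x, y = z.real, z.imag
formula = math.exp(x * math.log(sigma) - psi * y) * cmath.exp(
    1j * (y * math.log(sigma) + psi * x))
assert abs(w ** z - formula) < 1e-12
```

Python's `**` for complex numbers uses the same principal branch of the logarithm that the derivation of (5.12) assumes, which is why the two agree to machine precision.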
Applying (5.11) at ±φ gives the pair

    exp(+iφ) = cos φ + i sin φ,
    exp(−iφ) = cos φ − i sin φ.

Adding the two equations and solving for cos φ yields

    cos φ = [exp(+iφ) + exp(−iφ)]/2.    (5.17)

Subtracting the second equation from the first and solving for sin φ yields

    sin φ = [exp(+iφ) − exp(−iφ)]/i2.    (5.18)

Thus are the trigonometrics expressed in terms of complex exponentials.

The forms (5.17) and (5.18) suggest the definition of new functions

    cosh φ ≡ [exp(+φ) + exp(−φ)]/2,    (5.19)
    sinh φ ≡ [exp(+φ) − exp(−φ)]/2,    (5.20)
    tanh φ ≡ sinh φ / cosh φ.    (5.21)

These are called the hyperbolic functions. Their inverses arccosh, etc., are defined in the obvious way. The Pythagorean theorem for trigonometrics (3.2) is that cos² φ + sin² φ = 1, and from (5.19) and (5.20) one can derive the hyperbolic analog:

    cos² φ + sin² φ = 1,
    cosh² φ − sinh² φ = 1.    (5.22)

Both lines of (5.22) hold for complex φ as well as for real.7

    7 Chapter 15 teaches that the "dot product" of a unit vector and its own conjugate is unity—v̂∗ · v̂ = 1, in the notation of that chapter—which tempts one incorrectly to suppose by analogy that cos∗φ cos φ + sin∗φ sin φ = 1 and that cosh∗φ cosh φ − sinh∗φ sinh φ = 1 when the angle φ is complex. However, (5.19) and (5.20) can generally be true only if (5.22) holds exactly as written for complex φ as well as for real, whereas we originally derived (5.22)'s first line from the figure. The figure is quite handy for real φ, but what if anything the figure means when φ is complex is not obvious. If the confusion descends directly or indirectly from the figure, then consider that the angle φ of Fig. 3.1 is a real angle. Hence in fact cos∗φ cos φ + sin∗φ sin φ ≠ 1 and cosh∗φ cosh φ − sinh∗φ sinh φ ≠ 1 when the angle φ is complex. Such confusion probably tempts few readers unfamiliar with the material of Ch. 15, so you can ignore this footnote for now. However, if later you return after reading Ch. 15 and if the confusion then arises, then such thoughts may serve to clarify the matter.

The notation exp i(·) or e^(i·) is sometimes felt to be too bulky. Although less commonly seen than the other two, the notation

    cis(·) ≡ exp i(·) = cos(·) + i sin(·)
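That both lines of (5.22) survive a complex angle, while the conjugated variants of footnote 7 do not, is easy to verify numerically. A sketch:

```python
import cmath

# (5.22) for complex as well as real phi.
for phi in (0.3, 1.0 + 2.0j, -0.5 + 0.25j):
    assert abs(cmath.cos(phi) ** 2 + cmath.sin(phi) ** 2 - 1) < 1e-12
    assert abs(cmath.cosh(phi) ** 2 - cmath.sinh(phi) ** 2 - 1) < 1e-12

# By contrast the conjugated form fails badly for complex phi:
phi = 1.0 + 2.0j
c, s = cmath.cos(phi), cmath.sin(phi)
assert abs(c.conjugate() * c + s.conjugate() * s - 1) > 0.1
```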
is also conventionally recognized. Also conventionally recognized are sin⁻¹(·) and occasionally asin(·) for arcsin(·), and likewise for the several other trigs.

Replacing z ← φ in this section's several equations implies a coherent definition for trigonometric functions of a complex variable. Then, comparing (5.17) and (5.18) respectively to (5.19) and (5.20), we have that

    cosh z = cos iz,
    i sinh z = sin iz,    (5.23)
    i tanh z = tan iz,

by which one can immediately adapt the many trigonometric properties of Tables 3.1 and 3.3 to hyperbolic use.

At this point in the development one begins to notice that the sin, cos, exp, cis, cosh and sinh functions are each really just different facets of the same mathematical phenomenon. Likewise their respective inverses: arcsin, arccos, ln, −i ln, arccosh and arcsinh. Conventional names for these two mutually inverse families of functions are unknown to the author, but one might call them the natural exponential and natural logarithmic families. Or, if the various tangent functions were included, then one might call them the trigonometric and inverse trigonometric families.

5.7  Summary of properties

Table 5.1 gathers properties of the complex exponential from this chapter and from §§ 2.11, 3.11 and 4.4.

5.8  Derivatives of complex exponentials

This section computes the derivatives of the various trigonometric and inverse trigonometric functions.

5.8.1  Derivatives of sine and cosine

One can compute derivatives of the sine and cosine functions from (5.17) and (5.18), but to do it in that way doesn't seem sporting. Better applied style is to find the derivatives by observing directly the circle from which the sine and cosine functions come, as earlier seen in § 3.12.
Equations (5.4) and (5.28) give the derivatives of exp(·), sin(·) and cos(·). From these, with the help of (5.22) and the derivative chain and product rules (§ 4.5), we can calculate the several derivatives of Table 5.2.8

5.8.3  Derivatives of the inverse trigonometrics

Observe the pair

    d/dz exp z = exp z,
    d/dw ln w = 1/w.

The natural exponential exp z belongs to the trigonometric family of functions, as does its derivative. The natural logarithm ln w, by contrast, belongs to the inverse trigonometric family of functions; but its derivative is simpler, not a trigonometric or inverse trigonometric function at all. In Table 5.2, one notices that all the trigonometrics have trigonometric derivatives. By analogy with the natural logarithm, do all the inverse trigonometrics have simpler derivatives? It turns out that they do. Refer to the account of the natural logarithm's derivative in § 5.2. Following a similar procedure, we have by successive steps …

    8 Derivatives of the other inverse trigonometrics are found in the same way. Table 5.3 summarizes.
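Table 5.3 itself is not reproduced in this excerpt, but the kind of entry it would summarize is a standard result (stated here from general knowledge, not taken from the table): the derivative of arcsin x is 1/√(1 − x²), an algebraic function rather than a trigonometric one. A numerical sketch:

```python
import math

# d/dx arcsin x = 1/sqrt(1 - x**2): a simpler, non-trigonometric derivative.
def slope(f, x, d=1e-6):
    return (f(x + d / 2) - f(x - d / 2)) / d

for x in (-0.5, 0.0, 0.5):
    assert abs(slope(math.asin, x) - 1.0 / math.sqrt(1 - x * x)) < 1e-8
```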
5.9
The actuality of complex quantities
Doing all this neat complex math, the applied mathematician can lose sight of some questions he probably ought to keep in mind: Is there really such a thing as a complex quantity in nature? If not, then hadn't we better avoid these complex quantities, leaving them to the professional mathematical theorists? As developed by Oliver Heaviside in 1887,9 the answer depends on your point of view. If I have 300 g of grapes and 100 g of grapes, then I have 400 g
altogether. Alternately, if I have 500 g of grapes and −100 g of grapes, again I have 400 g altogether. (What does it mean to have −100 g of grapes? Maybe that I ate some!) But what if I have 200 + i100 g of grapes and 200 − i100 g of grapes? Answer: again, 400 g. Probably you would not choose to think of 200 + i100 g of grapes and 200 − i100 g of grapes, but because of (5.17) and (5.18), one often describes wave phenomena as linear superpositions (sums) of countervailing complex exponentials. Consider for instance the propagating wave A A exp[+i(ωt − kz)] + exp[−i(ωt − kz)]. 2 2 The benefit of splitting the real cosine into two complex parts is that while the magnitude of the cosine changes with time t, the magnitude of either exponential alone remains steady (see the circle in Fig. 5.3). It turns out to be much easier to analyze two complex wave quantities of constant magnitude than to analyze one real wave quantity of varying magnitude. Better yet, since each complex wave quantity is the complex conjugate of the other, the analyses thereof are mutually conjugate, too; so you normally needn't actually analyze the second. The one analysis suffices for both.10 (It's like A cos[ωt − kz] =
reflecting your sister's handwriting. To read her handwriting backward, you needn't ask her to try writing reverse with the wrong hand; you can just hold her regular script up to a mirror. Of course, this ignores the question of why one would want to reflect someone's handwriting in the first place; but anyway, reflecting—which is to say, conjugating—complex quantities often is useful.) Some authors have gently denigrated the use of imaginary parts in physical applications as a mere mathematical trick, as though the parts were not actually there. Well, that is one way to treat the matter, but it is not the way this book recommends. Nothing in the mathematics requires you to regard the imaginary parts as physically nonexistent. You need not abuse Occam's razor! (Occam's razor, "Do not multiply objects without necessity,"11 is a fine philosophical tool as far as it goes, but is overused in some circles. More often than one likes to believe, the necessity remains hidden until one has ventured to multiply the objects, nor reveals itself to the one who wields the razor, whose hand humility should stay.) It is true by Euler's formula (5.11) that a complex exponential exp iφ can be decomposed into a sum of trigonometrics. However, it is equally true by the complex trigonometric formulas (5.17) and (5.18) that a trigonometric can be decomposed into a sum of complex exponentials. So, if each can be decomposed into the other, then which of the two is the real decomposition? Answer: it depends on your point of view. Experience seems to recommend viewing the complex exponential as the basic element—as the element of which the trigonometrics are composed—rather than the other way around. From this point of view, it is (5.17) and (5.18) which are the real decomposition. Euler's formula itself is secondary. The complex exponential method of offsetting imaginary parts offers an elegant yet practical mathematical way to model physical wave phenomena.

So go ahead: regard the imaginary parts as actual. It doesn't hurt anything, and it helps with the math.

    10 If the point is not immediately clear, an example: Suppose that by the Newton-Raphson iteration (§ 4.8) you have found a root of the polynomial x³ + 2x² + 3x + 4 at x ≈ −0x0.2D + i0x1.8C. Where is there another root? Answer: at the complex conjugate, x ≈ −0x0.2D − i0x1.8C. One need not actually run the Newton-Raphson again to find the conjugate root.
    11 [48, Ch. 12]
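The wave decomposition discussed above, and the conjugate relationship between its two halves, can be verified directly. A sketch (θ stands in for the phase ωt − kz; the values are arbitrary):

```python
import cmath, math

# A cos(theta) = (A/2) exp(+i*theta) + (A/2) exp(-i*theta),
# the two complex halves being conjugates of each other.
A, theta = 3.0, 1.234
plus = (A / 2) * cmath.exp(+1j * theta)
minus = (A / 2) * cmath.exp(-1j * theta)

assert abs((plus + minus) - A * math.cos(theta)) < 1e-12  # sum is the real wave
assert abs(minus - plus.conjugate()) < 1e-12              # mutually conjugate
assert abs(abs(plus) - A / 2) < 1e-12                     # steady magnitude
```

The third assertion is the practical payoff: each exponential half keeps constant magnitude while the real cosine oscillates, which is why the complex form is easier to analyze.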
Chapter 6
Primes, roots and averages
This chapter gathers a few significant topics, each of whose treatment seems too brief for a chapter of its own.
6.1
Prime numbers
A prime number —or simply, a prime—is an integer greater than one, divisible only by one and itself. A composite number is an integer greater than one and not prime. A composite number can be composed as a product of two or more prime numbers. All positive integers greater than one are either composite or prime. The mathematical study of prime numbers and their incidents constitutes number theory, and it is a deep area of mathematics. The deeper results of number theory seldom arise in applications,1 however, so we shall confine our study of number theory in this book to one or two of its simplest, most broadly interesting results.
6.1.1
The infinite supply of primes
The first primes are evidently 2, 3, 5, 7, 0xB, . . . . Is there a last prime? To show that there is not, suppose that there were. More precisely, suppose that there existed exactly N primes, with N finite, letting p1, p2, . . . , pN represent these primes from least to greatest. Now consider the product of
    1 The deeper results of number theory do arise in cryptography, or so the author has been led to understand. Although cryptography is literally an application of mathematics, its spirit is that of pure mathematics rather than of applied. If you seek cryptographic derivations, this book is probably not the one you want.
all the primes,

    C = ∏[j=1..N] pj.
What of C + 1? Since p1 = 2 divides C, it cannot divide C + 1. Similarly, since p2 = 3 divides C, it also cannot divide C + 1. The same goes for p3 = 5, p4 = 7, p5 = 0xB, etc. Apparently none of the primes in the pj series divides C + 1, which implies either that C + 1 itself is prime, or that C + 1 is composed of primes not in the series. But the latter is assumed impossible on the ground that the pj series includes all primes; and the former is assumed impossible on the ground that C + 1 > C > pN , with pN the greatest prime. The contradiction proves false the assumption which gave rise to it. The false assumption: that there were a last prime. Thus there is no last prime. No matter how great a prime number one finds, a greater can always be found. The supply of primes is infinite.2 Attributed to the ancient geometer Euclid, the foregoing proof is a classic example of mathematical reductio ad absurdum, or as usually styled in English, proof by contradiction.3
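Euclid's construction can be tried concretely on a finite list of primes. The sketch below is an illustration of the argument, not a proof: it builds C from the listed primes and confirms that none of them divides C + 1.

```python
# Euclid's construction: C = product of the listed primes; C + 1 is
# divisible by none of them, so some prime outside the list must exist.
primes = [2, 3, 5, 7, 0xB]   # 0xB = eleven, in the text's hexadecimal style
C = 1
for p in primes:
    C *= p

assert C == 2310
assert all((C + 1) % p != 0 for p in primes)   # no listed prime divides C + 1
```

Here C + 1 = 2311 happens itself to be prime, but the argument needs only that its prime factors lie outside the list.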
6.1.2
Compositional uniqueness
Occasionally in mathematics, plausible assumptions can hide subtle logical flaws. One such plausible assumption is the assumption that every positive integer has a unique prime factorization. It is readily seen that the first several positive integers—1 = (), 2 = (21 ), 3 = (31 ), 4 = (22 ), 5 = (51 ), 6 = (21 )(31 ), 7 = (71 ), 8 = (23 ), . . . —each have unique prime factorizations, but is this necessarily true of all positive integers? To show that it is true, suppose that it were not.4 More precisely, suppose that there did exist positive integers factorable each in two or more distinct ways, with the symbol C representing the least such integer. Noting that C must be composite (prime numbers by definition are each factorable
    2 [45]
    3 [39, Appendix 1][52, "Reductio ad absurdum," 02:36, 28 April 2006]
    4 Unfortunately the author knows no more elegant proof than this, yet cannot even cite this one properly. The author encountered the proof in some book over a decade ago. The identity of that book is now long forgotten.
CHAPTER 6. PRIMES, ROOTS AND AVERAGES

only one way, like 5 = [5¹]). Indeed this is so, but to prove it, suppose that it were not so: that some positive integer were prime-factorable in two distinct ways; and let C be the least such integer, with

Cp = Cq = C,

Cp ≡ ∏_{j=1}^{Np} pj,  Np > 1,

Cq ≡ ∏_{k=1}^{Nq} qk,  Nq > 1,

where Cp and Cq represent two distinct prime factorizations of the same number C and where the pj and qk are the respective primes ordered from least to greatest: pj ≤ pj+1, qk ≤ qk+1, p1 ≤ q1.

We see that pj ≠ qk for any j and k—that is, that the same prime cannot appear in both factorizations—because if the same prime r did appear in both then C/r either would be prime (in which case both factorizations would be [r][C/r], defying our assumption that the two differed) or would constitute an ambiguously factorable composite integer less than C when we had already defined C to represent the least such. Among other effects, the fact that pj ≠ qk strengthens the definition p1 ≤ q1 to read

p1 < q1.

Let us now rewrite the two factorizations in the form

Cp = p1 Ap,  Cq = q1 Aq,

Ap ≡ ∏_{j=2}^{Np} pj,  Aq ≡ ∏_{k=2}^{Nq} qk,

where p1 and q1 are the least primes in their respective factorizations. Since C is composite and since p1 < q1, we have that

1 < p1 < q1 ≤ √C ≤ Aq < Ap < C,

which implies that p1 q1 < C. The last inequality lets us compose the new positive integer

B = C − p1 q1,

which might be prime or composite (or unity), but which either way enjoys a unique prime factorization because B < C, with C the least positive integer factorable two ways. Observing that some integer s which divides C necessarily also divides C ± ns, we note that each of p1 and q1 necessarily divides B. This means that B's unique factorization includes both p1 and q1, which further means that the product p1 q1 divides B. But if p1 q1 divides B, then it divides B + p1 q1 = C, also.

Let E represent the positive integer which results from dividing C by p1 q1:

E ≡ C / (p1 q1).

Then Eq1 = C/p1 = Ap and Ep1 = C/q1 = Aq. That Eq1 = Ap says that q1 divides Ap. But Ap < C, so Ap's prime factorization is unique—and we see above that Ap's factorization does not include any qk, not even q1. The contradiction proves false the assumption which gave rise to it. The false assumption: that there existed a least composite number C prime-factorable in two distinct ways.

Thus no positive integer is ambiguously factorable. Prime factorizations are always unique.

We have observed at the start of this subsection that plausible assumptions can hide subtle logical flaws; interestingly however, the plausible assumption of the present subsection has turned out absolutely correct. We have just had to do some extra work to prove it. Such effects
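The uniqueness argument lends itself to a brute-force numerical spot-check. The sketch below (the helper names are mine, not the book's) enumerates every factorization of each small integer into primes and confirms that exactly one such factorization exists.

```python
def factorize(n):
    """Return the prime factorization of n as an ordered list of primes."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def all_factorizations(n):
    """Enumerate every nondecreasing multiset of primes whose product is n."""
    def is_prime(m):
        return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))
    results = set()
    def recurse(m, smallest, acc):
        if m == 1:
            if all(is_prime(f) for f in acc):
                results.add(tuple(acc))
            return
        for d in range(smallest, m + 1):
            if m % d == 0:
                recurse(m // d, d, acc + [d])
    recurse(n, 2, [])
    return results
```

Running the exhaustive enumeration over every integer up to a modest bound finds exactly one all-prime factorization each time, as the proof requires.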
are typical on the shadowed frontier where applied shades into pure mathematics: with sufficient experience and with a firm grasp of the model at hand, if you think that it's true, then it probably is. Judging when to delve into the mathematics anyway, seeking a more rigorous demonstration of a proposition one feels pretty sure is correct, is a matter of applied mathematical style. It depends on how sure one feels, and more importantly on whether the unsureness felt is true uncertainty or is just an unaccountable desire for more precise mathematical definition (if the latter, then unlike the author you may have the right temperament to become a professional mathematician). The author does judge the present subsection's proof to be worth the applied effort; but nevertheless, when one lets logical minutiae distract him to too great a degree, he admittedly begins to drift out of the applied mathematical realm that is the subject of this book.

6.1.3 Rational and irrational numbers

A rational number is a finite real number expressible as a ratio of integers

x = p/q,  q ≠ 0.

The ratio is fully reduced if p and q have no prime factors in common. For instance, 4/6 is not fully reduced, whereas 2/3 is.

An irrational number is a finite real number which is not rational. For example, √2 is irrational. In fact any x = √n is irrational unless integral; there is no such thing as a √n which is not an integer but is rational.

To prove⁵ the last point, suppose that there did exist a fully reduced

x = p/q = √n,  p > 0, q > 1,

where p, q and n are all integers. Squaring the equation, we have that

p²/q² = n,

which form is evidently also fully reduced. But if q > 1, then the fully reduced n = p²/q² is not an integer as we had assumed that it was. The contradiction proves false the assumption which gave rise to it. Hence there exists no rational, nonintegral √n, as was to be demonstrated. The proof is readily extended to show that any x = n^{j/k} is irrational if nonintegral, the extension by writing p^k/q^k = n^j then following similar steps as those this paragraph outlines.

⁵ A proof somewhat like the one presented here is found in [39, Appendix 1].
That's all the number theory the book treats; but in applied math, so little will take you pretty far. Now onward we go to other topics.

6.2 The existence and number of polynomial roots

This section shows that an Nth-order polynomial must have exactly N roots.

6.2.1 Polynomial roots

Consider the quotient B(z)/A(z), where

A(z) = z − α,

B(z) = ∑_{k=0}^{N} bk z^k,  N > 0,  bN ≠ 0,

B(α) = 0.

In the long-division symbology of Table 2.3,

B(z) = A(z)Q0(z) + R0(z),

where Q0(z) is the quotient and R0(z), a remainder. In this case the divisor A(z) = z − α has first order, and as § 2.6.2 has observed, first-order divisors leave zeroth-order, constant remainders R0(z) = ρ. Thus substituting yields

B(z) = (z − α)Q0(z) + ρ.

When z = α, this reduces to B(α) = ρ. But B(α) = 0 by assumption, so ρ = 0. Evidently the division leaves no remainder ρ, which is to say that z − α exactly divides every polynomial B(z) of which z = α is a root.

Note that if the polynomial B(z) has order N, then the quotient Q(z) = B(z)/(z − α) has exactly order N − 1. That is, the leading, z^{N−1} term of the quotient is never null. The reason is that if the leading term were null, if Q(z) had order less than N − 1, then B(z) = (z − α)Q(z) could not possibly have order N as we have assumed.
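This divisibility claim is easy to exercise numerically. Below is a small sketch (my own helper, not from the book) of synthetic division of B(z) by z − α: when α is a root, the remainder ρ comes out zero and the quotient retains the leading coefficient, so its order is exactly N − 1.

```python
def divide_by_linear(b, alpha):
    """Divide B(z) by (z - alpha); b lists coefficients highest power first.

    Returns (quotient_coefficients, remainder rho)."""
    q = [b[0]]                      # the leading coefficient carries through
    for coeff in b[1:]:
        q.append(coeff + alpha * q[-1])
    rho = q.pop()                   # the last accumulated value is rho = B(alpha)
    return q, rho
```

For example, B(z) = z³ − 7z + 6 = (z − 2)(z + 3)(z − 1) has the root α = 2, so dividing by z − 2 leaves ρ = 0 and the second-order quotient z² + 2z − 3; dividing by a non-root such as α = 5 instead leaves ρ = B(5).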
6.2.2 The fundamental theorem of algebra

The fundamental theorem of algebra holds that any polynomial B(z) of order N can be factored

B(z) = ∑_{k=0}^{N} bk z^k = bN ∏_{j=1}^{N} (z − αj),  bN ≠ 0,    (6.1)

where the αk are the N roots of the polynomial.⁶

To prove the theorem, it suffices to show that all polynomials of order N > 0 have at least one root; for if a polynomial of order N has a root αN, then according to § 6.2.1 one can divide the polynomial by z − αN to obtain a new polynomial of order N − 1. To the new polynomial the same logic applies: if it has at least one root αN−1, then one can divide it by z − αN−1 to obtain yet another polynomial of order N − 2; and so on, one root extracted at each step, factoring the polynomial step by step into the desired form bN ∏_{j=1}^{N} (z − αj).

It remains however to show that there exists no polynomial B(z) of order N > 0 lacking roots altogether. To show that there is no such polynomial, consider the locus⁷ of all B(ρe^{iφ}) in the Argand range plane (Fig. 2.5), where z = ρe^{iφ}, ρ is held constant, and φ is variable. Because e^{i(φ+n2π)} = e^{iφ} and no fractional powers of z appear in (6.1), this locus forms a closed loop. At very large ρ, the bN z^N term dominates B(z), so the locus there evidently has the general character of bN ρ^N e^{iNφ}. As such, the locus is nearly but not quite a circle at radius bN ρ^N from the Argand origin B(z) = 0, revolving N times at that great distance before exactly repeating. On the other hand, when ρ = 0 the entire locus collapses on the single point B(0) = b0.

Now consider the locus at very large ρ again, but this time let ρ slowly shrink. Watch the locus as ρ shrinks. The locus is like a great string or rubber band, joined at the ends and looped in N great loops. As ρ shrinks smoothly, the string's shape changes smoothly. Eventually ρ disappears and the entire string collapses on the point B(0) = b0. The Argand origin lies inside the loops at the start but outside at the end. Since the string originally has looped N times at great distance about the Argand origin, but at the end has collapsed on a single point, then at some time between it must have swept through the origin and every other point within the original loops. After all, B(z) is everywhere differentiable, so the string can only sweep as ρ decreases; it can never skip. If so, then the values of ρ and φ precisely where the string has swept through the origin by definition constitute a root B(ρe^{iφ}) = 0. Thus as we were required to show, B(z) does have at least one root, which observation completes the applied demonstration of the fundamental theorem of algebra.

The fact that the roots exist is one thing. Actually finding the roots numerically is another matter. For a quadratic (second order) polynomial, (2.2) gives the roots. For cubic (third order) and quartic (fourth order) polynomials, formulas for the roots are known (see Ch. 10) though seemingly not so for quintic (fifth order) and higher-order polynomials,⁸ but the Newton-Raphson iteration (§ 4.8) can be used to locate a root numerically in any case. The Newton-Raphson is used to extract one root (any root) at each step as described above, reducing the polynomial step by step until all the roots are found. The reverse problem, finding the polynomial given the roots, is much easier: one just multiplies out ∏_j (z − αj), as in (6.1).

⁶ Professional mathematicians typically state the theorem in a slightly different form. They also prove it in rather a different way. [24, Ch. 10, Prob. 74]
⁷ A locus is the geometric collection of points which satisfy a given criterion. For example, the locus of all points in a plane at distance ρ from a point O is a circle; the locus of all points in three-dimensional space equidistant from two points P and Q is a plane; etc.
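The extraction scheme just described, locate one root by Newton-Raphson, divide it out, repeat on the reduced polynomial, can be sketched as follows. This is a rough illustration with hypothetical names, not a robust root finder; production code would guard convergence and polish the deflated roots.

```python
def polyval(b, z):
    # Horner evaluation; b lists coefficients highest power first.
    v = 0j
    for c in b:
        v = v * z + c
    return v

def newton_root(b, z, steps=200):
    # Newton-Raphson iteration z <- z - B(z)/B'(z).
    n = len(b) - 1
    db = [c * (n - k) for k, c in enumerate(b[:-1])]   # derivative coefficients
    for _ in range(steps):
        d = polyval(db, z)
        if d == 0:
            z += 0.1          # nudge off a stationary point
            continue
        z -= polyval(b, z) / d
    return z

def all_roots(b):
    b = [complex(c) for c in b]
    roots = []
    while len(b) > 2:
        r = newton_root(b, 0.4 + 0.9j)   # arbitrary complex starting guess
        roots.append(r)
        q = [b[0]]                       # deflate: synthetic division by (z - r)
        for c in b[1:-1]:
            q.append(c + r * q[-1])
        b = q
    roots.append(-b[1] / b[0])           # the final, first-order factor
    return roots
```

Applied to z³ − 6z² + 11z − 6 = (z − 1)(z − 2)(z − 3), the routine recovers the three roots, one per deflation step.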
6.3 Addition and averages

This section discusses the two basic ways to add numbers and the three basic ways to calculate averages of them.

6.3.1 Serial and parallel addition

Consider the following problem. There are three masons. The strongest and most experienced of the three, Adam, lays 120 bricks per hour. Next is Brian who lays 90. Charles is new; he lays only 60.⁹ Given eight hours, how many bricks can the three men lay? Answer:

(8 hours)(120 + 90 + 60 bricks per hour) = 2160 bricks.

Now suppose that we are told that Adam can lay a brick every 30 seconds; Brian, every 40 seconds; Charles, every 60 seconds. How much time do the three men need to lay 2160 bricks? Answer:

(2160 bricks) / [(1/30 + 1/40 + 1/60) bricks per second] = 28,800 seconds,

which, multiplied by (1 hour)/(3600 seconds), is 8 hours.
⁸ In a celebrated theorem of pure mathematics [50, "Abel's impossibility theorem"], it is said to be shown that no such formula even exists, given that the formula be constructed according to certain rules. Undoubtedly the theorem is interesting to the professional mathematician, but to the applied mathematician it probably suffices to observe merely that no such formula is known.
⁹ The figures in the example are in decimal notation.

The two problems are precisely equivalent. Neither is stated in simpler terms than the other. The notation used to solve the second is less elegant, but fortunately there exists a better notation:

(2160 bricks)(30 ∥ 40 ∥ 60 seconds per brick) = 8 hours,

where

1/(30 ∥ 40 ∥ 60) = 1/30 + 1/40 + 1/60.

The operator ∥ is called the parallel addition operator. It works according to the law

1/(a ∥ b) = 1/a + 1/b,    (6.2)

where the familiar operator + is verbally distinguished from the ∥ when necessary by calling the + the serial addition or series addition operator. With (6.2) and a bit of arithmetic, the several parallel-addition identities of Table 6.1 are soon derived.

Assuming that none of the values involved is negative, one can readily show that¹⁰

a ∥ x ≤ b ∥ x iff a ≤ b.    (6.3)

This is intuitive. Counterintuitive, perhaps, is that

a ∥ x ≤ a.    (6.4)

The writer knows of no conventional notation for parallel sums of series, but suggests that the notation which appears in the table,

∥_{k=a}^{b} f(k) ≡ f(a) ∥ f(a+1) ∥ f(a+2) ∥ ⋯ ∥ f(b),

might serve if needed.

Because we have all learned as children to count in the sensible manner 1, 2, 3, 4, 5, …—rather than as 1, 1/2, 1/3, 1/4, 1/5, …—serial addition (+) seems

¹⁰ The word iff means, "if and only if."
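Under the stated assumption of nonnegative operands, the law (6.2) and the properties (6.3) and (6.4) are easy to exercise with exact rational arithmetic. The function name `par` below is my own shorthand for the ∥ operator, not a standard one.

```python
from fractions import Fraction

def par(a, b):
    # Parallel addition per (6.2): 1/(a || b) = 1/a + 1/b.
    a, b = Fraction(a), Fraction(b)
    return 1 / (1 / a + 1 / b)

# The masons' problem in the parallel notation:
# (2160 bricks)(30 || 40 || 60 seconds per brick) = 28,800 seconds = 8 hours.
seconds = 2160 * par(par(30, 40), 60)
```

Since `Fraction` arithmetic is exact, the masons' 8-hour answer comes out exactly, and the monotonicity (6.3) and shrinking (6.4) properties can be asserted rather than merely approximated.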
more natural than parallel addition (∥) does, yet for many purposes parallel addition is in fact no less fundamental. (Exercise: counting from zero serially goes 0, 1, 2, 3, 4, 5, …; how does the parallel analog go?)¹² Convention brings no special symbol for parallel subtraction, incidentally. One merely writes a ∥ (−b), which means exactly what it appears to mean.

In electric circuits, loads are connected in parallel as often as, in fact probably more often than, they are connected in series. Parallel addition gives the electrical engineer a neat way of adding the impedances of parallel-connected loads, as Table 6.1 illustrates. The psychological barrier is hard to breach, yet outside the electrical engineering literature the parallel addition notation is seldom seen. Now that you have seen it, you can use it.

¹² [41, eqn. .27]

6.3.2 Averages

Let us return to the problem of the preceding section. Among the three masons, what is their average productivity? The answer depends on how you look at it. On the one hand,

(120 + 90 + 60 bricks per hour)/3 = 90 bricks per hour.

On the other hand,

(30 + 40 + 60 seconds per brick)/3 = 43⅓ seconds per brick;

that is, 1/(43⅓ seconds per brick) ≈ 83 bricks per hour. These two figures are not the same. Yet both figures are valid. Which figure you choose depends on what you want to calculate. A common mathematical error among businesspeople is not to realize that both averages are possible and that they yield different numbers (if the businessperson quotes in bricks per hour, the productivities average one way; if in seconds per brick, the other way; yet some businesspeople will never clearly consider the difference). Realizing this, the clever businessperson might negotiate a contract so that the average used worked to his own advantage.¹³ There is profit in learning to think both ways.

¹³ "And what does the author know about business?" comes the rejoinder. The rejoinder is fair enough. If the author wanted to demonstrate his business acumen (or lack thereof) he'd do so elsewhere not here! There are a lot of good business books
out there and this is not one of them. Business demands other talents. Some businesspeople are mathematically rather sharp—as you presumably are if you are in business and are reading these words—but as for most: when real mathematical ability is needed, that's what they hire engineers, architects and the like for. The fact remains nevertheless that businesspeople sometimes use mathematics in peculiar ways, making relatively easy problems harder and more mysterious than the problems need to be. Trying to convince businesspeople that their math is wrong, incidentally, is in the author's experience usually a waste of time. If you have ever encountered the little monstrosity of an approximation banks (at least in the author's country) actually use in place of (9.12) to accrue interest and amortize loans, then you have met the difficulty. The author is not sure, but somehow he doubts that many boards of directors would be willing to bet the company on a financial formula containing some mysterious-looking e^x.

When it is unclear which of the two averages is more appropriate, a third average is available: the geometric mean [(120)(90)(60)]^{1/3} bricks per hour. The inverse geometric mean [(30)(40)(60)]^{1/3} seconds per brick implies the same average productivity. The geometric mean does not have the problem either of the two averages discussed above has. The mathematically savvy sometimes prefer the geometric mean over either of the others for this reason.

Generally, the arithmetic, geometric and harmonic means are defined

μ ≡ (∑_k wk xk) / (∑_k wk),    (6.5)

μΠ ≡ (∏_j xj^{wj})^{1/∑_k wk},    (6.6)

μ∥ ≡ (∑_k wk) / (∑_k wk/xk),    (6.7)

where the xk are the several samples and the wk are weights. For two samples weighted equally, these are

μ = (a + b)/2,    (6.8)

μΠ = √(ab),    (6.9)

μ∥ = 2(a ∥ b) = 2ab/(a + b).    (6.10)

The arithmetic mean is greatest and the harmonic mean, least, with the geometric mean falling between:

μ∥ ≤ μΠ ≤ μ.    (6.11)

If a ≥ 0 and b ≥ 0, then by successive steps,¹⁴

0 ≤ (a − b)²,
0 ≤ a² − 2ab + b²,
4ab ≤ a² + 2ab + b²,
2√(ab) ≤ a + b;
2√(ab)/(a + b) ≤ 1 ≤ (a + b)/(2√(ab)),
2ab/(a + b) ≤ √(ab) ≤ (a + b)/2,
2(a ∥ b) ≤ √(ab) ≤ (a + b)/2.

That is, μ∥ ≤ μΠ ≤ μ, as (6.11) states.

Does (6.11) hold when there are several nonnegative samples of various nonnegative weights? To show that it does, consider the case of N = 2^m nonnegative samples of equal weight. Nothing prevents one from dividing such a set of samples in half, considering each subset separately, for if (6.11) holds for each subset individually then surely it holds for the whole set (this is so because the average of the whole set is itself the average of the two subset averages, where the word "average" signifies the arithmetic, geometric or harmonic mean as appropriate). But each subset can further be divided in half, then each subsubset can be divided in half again, and so on until each smallest group has two members only—in which case we already know that (6.11) obtains. Starting there and recursing back, we have that (6.11) obtains for the entire set. Now consider that a sample of any weight can be approximated arbitrarily closely by several samples of weight 1/2^m, provided that m is sufficiently large. By this reasoning, (6.11) holds for any nonnegative weights of nonnegative samples, which was to be demonstrated.

¹⁴ The steps are logical enough, but the motivation behind them remains inscrutable until the reader realizes that the writer originally worked the steps out backward with his pencil, from the last step to the first. Only then did he reverse the order and write the steps formally here. "Begin with the end in mind," the saying goes. In this case the saying is right. The same reading strategy often clarifies inscrutable math. When you can follow the logic but cannot understand what could possibly have inspired the writer to conceive the logic in the first place, try reading backward.
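The definitions (6.5) through (6.7) and the ordering (6.11) can be spot-checked numerically with the masons' figures. The function names below are mine, not the book's.

```python
import math

def arithmetic(x, w):
    return sum(wk * xk for xk, wk in zip(x, w)) / sum(w)

def geometric(x, w):
    # Computed via logarithms, matching (prod x_j^w_j)^(1/sum w_k).
    return math.exp(sum(wk * math.log(xk) for xk, wk in zip(x, w)) / sum(w))

def harmonic(x, w):
    return sum(w) / sum(wk / xk for xk, wk in zip(x, w))

rates = [120.0, 90.0, 60.0]   # bricks per hour
w = [1.0, 1.0, 1.0]           # equal weights
```

For these rates the arithmetic mean is the 90 bricks per hour computed earlier, the harmonic mean is 1080/13, roughly 83 bricks per hour, matching the inverted seconds-per-brick average, and the geometric mean falls between the two, as (6.11) requires.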
Chapter 7

The integral

Chapter 4 has observed that the mathematics of calculus concerns a complementary pair of questions:

• Given some function f(t), what is the function's instantaneous rate of change, or derivative, f′(t)?
• Interpreting some function f′(t) as an instantaneous rate of change, what is the corresponding accretion, or integral, f(t)?

Chapter 4 has built toward a basic understanding of the first question. This chapter builds toward a basic understanding of the second. The understanding of the second question constitutes the concept of the integral, one of the profoundest ideas in all of mathematics.

This chapter, which introduces the integral, is undeniably a hard chapter. Experience knows no reliable way to teach the integral adequately to the uninitiated except through dozens or hundreds of pages of suitable examples and exercises, yet the book you are reading cannot be that kind of book. The sections of the present chapter concisely treat matters which elsewhere rightly command chapters or whole books of their own. Concision can be a virtue—and by design, nothing essential is omitted here—but the bold novice who wishes to learn the integral from these pages alone faces a daunting challenge. It can be done. However, for less intrepid readers who quite reasonably prefer a gentler initiation, [20] is warmly recommended.

7.1 The concept of the integral

An integral is a finite accretion or sum of an infinite number of infinitesimal elements. This section introduces the concept.
Sn = (1/n) ∑_{k=0}^{(0x10)n−1} (k/n).

As n grows, the shaded region in the figure looks more and more like a triangle of base length b = 0x10 and height h = 0x10. In fact it appears that

lim_{n→∞} Sn = bh/2 = 0x80,

or more tersely S∞ = 0x80, is the area the increasingly fine stairsteps approach. Notice how we have evaluated S∞, the sum of an infinite number of infinitely narrow rectangles, without actually adding anything up. We have taken a shortcut directly to the total.

If the reader does not fully understand this paragraph's illustration, if the relation of the sum to the area seems unclear, the reader is urged to pause and consider the illustration carefully until he does understand it. If it still seems unclear, then the reader should probably suspend reading here and go study a good basic calculus text like [20]. The concept is important.

In the equation

Sn = (1/n) ∑_{k=0}^{(0x10)n−1} (k/n),

let us now change the variables

τ ← k/n,
Δτ ← 1/n,

to obtain the representation

Sn = Δτ ∑_{k=0}^{(k|τ=0x10)−1} τ;

or more properly,

Sn = ∑_{k=0}^{(k|τ=0x10)−1} τ Δτ,

where the notation k|τ=0x10 indicates the value of k when τ = 0x10. Then

S∞ = lim_{Δτ→0+} ∑_{k=0}^{(k|τ=0x10)−1} τ Δτ,

in which it is conventional as Δτ vanishes to change the symbol dτ ← Δτ, where dτ is the infinitesimal of Ch. 4:

S∞ = lim_{dτ→0+} ∑_{k=0}^{(k|τ=0x10)−1} τ dτ.

The symbol lim_{dτ→0+} ∑_{k=0}^{(k|τ=0x10)−1} is cumbersome, so we replace it with the new symbol² ∫₀^{0x10} to obtain the form

S∞ = ∫₀^{0x10} τ dτ.

This means, "stepping in infinitesimal intervals of dτ, the sum of all τ dτ from τ = 0 to τ = 0x10." Graphically, it is the shaded area of Fig. 7.2.

Figure 7.2: An area representing an infinite sum of infinitesimals. [The figure shades the triangular area S∞ under f(τ) = τ from τ = 0 to τ = 0x10.] (Observe that the infinitesimal dτ is now too narrow to show on this scale; compare against Δτ in Fig. 7.1.)

² Like the Greek S, ∑, denoting discrete summation, the seventeenth century-styled Roman S, ∫, stands for Latin "summa," English "sum." See [52, "Long s," 14:54, 7 April 2006].
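A quick numeric rendering of the introductory sum (recall that 0x10 = 16 and 0x80 = 128 in decimal) shows Sn climbing toward the triangle area bh/2 as n grows:

```python
def S(n):
    # S_n = (1/n) * sum_{k=0}^{(0x10)n - 1} (k/n); closed form is 0x80 - 8/n.
    return sum(k / n for k in range(0x10 * n)) / n
```

Already at modest n the stairstep total sits within a small fraction of the limiting area 0x80, without any single rectangle ever being infinitesimal.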
7.1.2 Generalizing the introductory example

Now consider a generalization of the example of § 7.1.1:

Sn = (1/n) ∑_{k=an}^{bn−1} f(k/n).

(In the example of § 7.1.1, f[τ] was the simple f[τ] = τ, but in general it could be any function.) With the change of variables

τ ← k/n,
Δτ ← 1/n,

this is

Sn = ∑_{k=(k|τ=a)}^{(k|τ=b)−1} f(τ) Δτ.

In the limit,

S∞ = lim_{dτ→0+} ∑_{k=(k|τ=a)}^{(k|τ=b)−1} f(τ) dτ = ∫_a^b f(τ) dτ.

This is the integral of f(τ) in the interval a < τ < b. It represents the area under the curve of f(τ) in that interval.

7.1.3 The balanced definition and the trapezoid rule

Actually, just as we have defined the derivative in the balanced form (4.15), we do well to define the integral in balanced form, too:

∫_a^b f(τ) dτ ≡ lim_{dτ→0+} [ f(a) dτ/2 + ∑_{k=(k|τ=a)+1}^{(k|τ=b)−1} f(τ) dτ + f(b) dτ/2 ].    (7.1)

Here, the first and last integration samples are each balanced "on the edge," half within the integration domain and half without.

Equation (7.1) is known as the trapezoid rule. Figure 7.3 depicts it. The name "trapezoid" comes of the shapes of the shaded integration elements in the figure. Observe however that it makes no difference whether one regards the shaded trapezoids or the dashed rectangles as the actual integration elements; the total integration area is the same either way.³ The important point to understand is that the integral is conceptually just a sum. It is a sum of an infinite number of infinitesimal elements as dτ tends to vanish, nothing more—a sum with a shortcut to the total, perhaps, but a sum nevertheless. Nothing actually requires the integration element width dτ to remain constant from element to element, incidentally. Constant widths are usually easiest to handle but variable widths find use in some cases. The only requirement is that dτ remain infinitesimal. (For further discussion of the point, refer to the treatment of the Leibnitz notation in § 4.4.)

Figure 7.3: Integration by the trapezoid rule (7.1). [The figure plots f(τ) between a and b, marking one element of width dτ; the shaded trapezoids and the dashed rectangles total the same area.]

³ The trapezoid rule (7.1) is perhaps the most straightforward, robust way to define the integral, but other schemes are possible, too. For example, one can give the integration elements quadratically curved tops which more nearly track the actual curve. That scheme is called Simpson's rule. [A section on Simpson's rule might be added to the book at some later date.]
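A direct transcription of (7.1) with a small but finite dτ (my own sketch) weights the end samples by half and the interior samples fully:

```python
def trapezoid(f, a, b, n):
    # Balanced integration per the trapezoid rule (7.1), with dtau = (b - a)/n.
    dtau = (b - a) / n
    total = (f(a) + f(b)) * dtau / 2                      # edge samples, half weight
    total += sum(f(a + k * dtau) for k in range(1, n)) * dtau   # interior samples
    return total
```

Because a trapezoid's top exactly tracks a straight line, the rule is exact for linear integrands such as the introductory ∫₀^{0x10} τ dτ even at coarse dτ, and it converges quickly for curved ones.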
Figure 7.2
If
7.2 The antiderivative and the fundamental theorem of calculus

If

S(x) ≡ ∫_a^x g(τ) dτ,

then what is the derivative dS/dx? After some reflection, one sees that the derivative must be

dS/dx = g(x).

This is so because the action of the integral is to compile or accrete the area under a curve. The integral accretes area at a rate proportional to the curve's height f(τ): the higher the curve, the faster the accretion. In this way one sees that the integral and the derivative are inverse operators; the one inverts the other. The integral is the antiderivative. More precisely,

∫_a^b (df/dτ) dτ = f(τ)|_a^b,    (7.2)

where the notation f(τ)|_a^b or [f(τ)]_a^b means f(b) − f(a). The importance of (7.2), fittingly named the fundamental theorem of calculus,⁴ can hardly be overstated. As the formula which ties together the complementary pair of questions asked at the chapter's start, (7.2) is of utmost importance in the practice of mathematics. The idea behind the formula is indeed simple once grasped, but to grasp the idea firmly in the first place is not entirely trivial.⁵ The idea is simple but big.

⁴ [20, § 11.6][42, § 5-4][52, "Fundamental theorem of calculus," 06:29, 23 May 2006]
⁵ Having read from several calculus books and, like millions of others perhaps including the reader, having sat years ago in various renditions of the introductory calculus lectures in school, the author has never yet met a more convincing demonstration of (7.2) than the formula itself. Somehow the underlying idea is too simple, too profound to explain. It's like trying to explain how to drink water, or how to count or to add. Elaborate explanations and their attendant constructs and formalities are indeed possible to contrive, but the idea itself is so simple that somehow such contrivances seem to obscure the idea more than to reveal it. One ponders the formula (7.2) a while, then the idea dawns on him.
If you want some help pondering, try this: Sketch some arbitrary function f(τ) on a set of axes at the bottom of a piece of paper—some squiggle of a curve will do nicely—then on a separate set of axes directly above the first, sketch the corresponding slope function df/dτ. Mark two points a and b on the common horizontal axis; then on the upper, df/dτ plot, shade the integration area under the curve. Now consider (7.2) in light of your sketch. There, the idea dawns.
Does the idea not dawn? Another way to see the truth of the formula begins by canceling its (1/dτ) dτ to obtain the form ∫_{τ=a}^{b} df = f(τ)|_a^b. If this way works better for you, fine; but make sure that you understand it the other way, too.
The essential action of an operator is to take several values of a function and combine them in some way. Such a definition, unfortunately, is extraordinarily unilluminating to those who do not already know what it means. A better way to introduce the operator is by giving examples. Operators include +, −, multiplication, division, ∑, ∏, ∫ and ∂. The ∏ is an operator in

∏_{j=1}^{5} (2j − 1) = (1)(3)(5)(7)(9) = 0x3B1.

Notice that the operator has acted to remove the variable j from the expression 2j − 1. The j appears on the equation's left side but not on its right. The operator has used the variable up. Such a variable, used up by an operator, is a dummy variable, as encountered earlier in § 2.3.

7.3.2 A formalism

But then how are + and − operators? They don't use any dummy variables up, do they? Well, it depends on how you look at it. Consider the sum S = 3 + 5. One can write this as

S = ∑_{k=0}^{1} f(k),

where

f(k) ≡ 3 if k = 0,
       5 if k = 1,
       undefined otherwise.

Then,

S = ∑_{k=0}^{1} f(k) = f(0) + f(1) = 3 + 5 = 8.

By such admittedly excessive formalism, the + operator can indeed be said to use a dummy variable up. The point is that + is in fact an operator just like the others.
7.3.3 Linearity

An operator L is linear iff it has the properties

L(f1 + f2) = Lf1 + Lf2,
L(αf) = αLf,
L(0) = 0.

The operators ∑, ∫, +, − and ∂ are examples of linear operators.⁷ For instance,

(d/dz)[f1(z) + f2(z)] = df1/dz + df2/dz.

Nonlinear operators include multiplication, division and the various trigonometric functions, among others. Section 15.2 will have more to say about operators and their notation.

⁷ You don't see d in the list of linear operators? But d in this context is really just another way of writing ∂, so, yes, d is linear. See § 4.4.

7.3.4 Summational and integrodifferential transitivity

Consider the sum

S1 = ∑_{k=a}^{b} ∑_{j=p}^{q} x^k/j!.

This is a sum of the several values of the expression x^k/j!, evaluated at every possible pair (j, k) in the indicated domain. Now consider the sum

S2 = ∑_{j=p}^{q} ∑_{k=a}^{b} x^k/j!.

This is evidently a sum of the same values, only added in a different order. Apparently S1 = S2. Reflection along these lines must soon lead the reader to the conclusion that, in general,

∑_k ∑_j f(j, k) = ∑_j ∑_k f(j, k).

Now consider that an integral is just a sum of many elements, and that a derivative is just a difference of two elements. Integrals and derivatives must then swap with summations, and with one another, in the same transitive way.
One can normally swap summational and integrodifferential operators with little worry. The reader however should at least be aware that conditional convergence troubles can arise where a summand or integrand varies in sign or phase. A careless reordering of the alternating series 1 − 1/2 + 1/3 − 1/4 + ⋯, for instance, can seem to prove the sum equal to half itself—or so it would seem; the conclusion is plainly wrong, for it claims falsely that the sum is half itself. A better way to have handled the example might have been to write the series as

lim_{n→∞} ( 1 − 1/2 + 1/3 − 1/4 + ⋯ + 1/(2n−1) − 1/(2n) )

in the first place, thus explicitly specifying equal numbers of positive and negative terms.⁹ So specifying would have prevented the error. The conditional convergence¹⁰ of the last paragraph, which can occur in integrals as well as in sums, seldom poses much of a dilemma in practice.

⁹ Some students of professional mathematics would assert that the false conclusion was reached through lack of rigor. Well, maybe. This writer however does not feel sure that rigor is quite the right word for what was lacking here. Professional mathematics does bring an elegant notation and a set of formalisms which serve ably to spotlight certain limited kinds of blunders, but these are blunders no less by the applied approach. The stalwart Leonhard Euler—arguably the greatest series-smith in mathematical history—wielded his heavy analytical hammer in thunderous strokes before professional mathematics had conceived the notation or the formalisms. If the great Euler did without, then you and I might not always be forbidden to follow his robust example. On the other hand, the professional approach is worth study if you have the time. Recommended introductions include [28], preceded if necessary by [20] and/or [1, Ch. 1].
¹⁰ [28, § 16]

7.3.5 Multiple integrals

Consider the function

f(u, v) = u²/v.

Such a function would not be plotted as a curved line in a plane, but rather as a curved surface in a three-dimensional space. Integrating the function seeks not the area under the curve but rather the volume under the surface:

V = ∫_{u1}^{u2} ∫_{v1}^{v2} (u²/v) dv du.

This is a double integral. Inasmuch as it can be written in the form

V = ∫_{u1}^{u2} g(u) du,

g(u) ≡ ∫_{v1}^{v2} (u²/v) dv,
its effect is to cut the area under the surface into flat, upright slices, then the slices crosswise into tall, thin towers. The towers are integrated over v to constitute the slice, then the slices over u to constitute the volume. In light of § 7.3.4, evidently nothing prevents us from swapping the integrations: u first, then v. Hence

V = ∫_{v1}^{v2} ∫_{u1}^{u2} (u²/v) du dv.

And indeed this makes sense, doesn't it? What difference does it make whether we add the towers by rows first then by columns, or by columns first then by rows? The total volume is the same in any case.

Double integrations arise very frequently in applications. Triple integrations arise about as often. For instance, if μ(r) = μ(x, y, z) represents the variable mass density of some soil,¹¹ then the total soil mass in some rectangular volume is

M = ∫_{x1}^{x2} ∫_{y1}^{y2} ∫_{z1}^{z2} μ(x, y, z) dz dy dx.

As a concise notational convenience, the last is often written

M = ∫_V μ(r) dr,

where the V stands for "volume" and is understood to imply a triple integration. Similarly for the double integral,

V = ∫_S f(ρ) dρ,

where the S stands for "surface" and is understood to imply a double integration.

Even more than three nested integrations are possible. If we integrated over time as well as space, the integration would be fourfold. A spatial Fourier transform ([section not yet written]) implies a triple integration; and its inverse, another triple: a sixfold integration altogether. Manifold nesting of integrals is thus not just a theoretical mathematical topic; it arises in sophisticated real-world engineering models. The topic concerns us here for this reason.

¹¹ Conventionally the Greek letter ρ not μ is used for density, but it happens that we need the letter ρ for a different purpose later in the paragraph.
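For f(u, v) = u²/v over a rectangle, the towers can be summed in either order; both orders approach the exact volume, which works out to (u2³ − u1³)/3 · ln(v2/v1). A midpoint-sum sketch (my own, with finite rather than infinitesimal elements):

```python
import math

def V_u_outer(u1, u2, v1, v2, n=200):
    # Integrate over v to build each slice, then over u: dv inner, du outer.
    du, dv = (u2 - u1) / n, (v2 - v1) / n
    total = 0.0
    for i in range(n):
        u = u1 + (i + 0.5) * du
        slice_area = sum(u * u / (v1 + (j + 0.5) * dv) for j in range(n)) * dv
        total += slice_area * du
    return total

def V_v_outer(u1, u2, v1, v2, n=200):
    # The swapped order: du inner, dv outer.
    du, dv = (u2 - u1) / n, (v2 - v1) / n
    total = 0.0
    for j in range(n):
        v = v1 + (j + 0.5) * dv
        strip = sum((u1 + (i + 0.5) * du) ** 2 for i in range(n)) * du / v
        total += strip * dv
    return total
```

Both orders visit the same n² towers, so the two totals agree to within floating-point rounding, and each converges on the exact volume as n grows.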
7.4 Areas and volumes

By composing and solving appropriate integrals, one can calculate the perimeters, areas and volumes of interesting common shapes and solids.

7.4.1 The area of a circle

Figure 7.4 depicts an element of a circle's area. The element has wedge shape, but inasmuch as the wedge is infinitesimally narrow, the wedge is indistinguishable from a triangle of base length ρ dφ and height ρ. The area of such a triangle is A_triangle = ρ² dφ/2. Integrating the many triangles, we find the circle's area to be

    A_circle = \int_{φ=−π}^{π} A_triangle = \int_{−π}^{π} \frac{ρ^2 \, dφ}{2} = \frac{2πρ^2}{2}.    (7.4)

(The numerical value of 2π—the circumference or perimeter of the unit circle—we have not calculated yet. We shall calculate it in § 8.11.)

Figure 7.4: The area of a circle. [The figure shows the wedge element of angle dφ at radius ρ in the x-y plane.]

7.4.2 The volume of a cone

One can calculate the volume of any cone (or pyramid) if one knows its base area B and its altitude h measured normal^12 to the base. Refer to Fig. 7.5.

^12 Normal here means "at right angles."
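A quick numerical check of (7.4), sketched in Python (the radius ρ = 2 is an arbitrary example value): summing the wedge-triangle areas ρ² dφ/2 reproduces πρ² exactly, and replacing each wedge by a true inscribed triangle on its chord differs only negligibly, supporting the wedge-as-triangle step:

```python
import math

rho = 2.0            # example radius
n = 100_000          # number of wedges
dphi = 2 * math.pi / n

# The integral's Riemann sum: each wedge treated as a triangle of
# base rho*dphi and height rho, area rho**2 * dphi / 2, per (7.4).
A_integral = n * (rho**2 * dphi / 2)

# A true inscribed triangle has area (rho**2 / 2) * sin(dphi);
# the difference from the wedge formula vanishes as dphi -> 0.
A_triangles = n * (rho**2 / 2) * math.sin(dphi)

A_exact = math.pi * rho**2
```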
Figure 7.5: The volume of a cone. [The figure shows a cone of base area B and altitude h.]

A cross-section of a cone, cut parallel to the cone's base, has the same shape the base has but a different scale. If coordinates are chosen such that the altitude h runs in the ẑ direction with z = 0 at the cone's vertex, then the cross-sectional area is evidently^13 (B)(z/h)². For this reason, the cone's volume is

    V_cone = \int_0^h (B)\left(\frac{z}{h}\right)^2 dz = \frac{B}{h^2} \int_0^h z^2 \, dz = \frac{B}{h^2} \frac{h^3}{3} = \frac{Bh}{3}.    (7.5)

7.4.3 The surface area and volume of a sphere

Of a sphere, one wants to calculate both the surface area and the volume. For the surface area, the sphere's surface is sliced vertically down the z axis into narrow constant-φ tapered strips (each strip broadest at the sphere's equator, tapering to points at the sphere's ±z poles) and horizontally across the z axis into narrow constant-θ rings. A surface element so produced (seen as shaded in the figure) evidently has the area

    dS = (r \, dθ)(ρ \, dφ) = r^2 \sin θ \, dθ \, dφ.

^13 The fact may admittedly not be evident to the reader at first glance. If it is not yet evident to you, then ponder Fig. 7.5 a moment. Consider what it means to cut parallel to a cone's base a cross-section of the cone, and how cross-sections cut nearer a cone's vertex are smaller though the same shape. What if the base were square? Would the cross-sectional area not be (B)(z/h)² in that case? What if the base were a right triangle with equal legs—in other words, half a square? What if the base were some other strange shape like the base depicted in Fig. 7.6? Could such a strange shape not also be regarded as a definite, well characterized part of a square? (With a pair of scissors one can cut any shape from a square piece of paper, after all.) Thinking along such lines must soon lead one to the insight that the parallel-cut cross-sectional area of a cone can be nothing other than (B)(z/h)², regardless of the base's shape.
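Equation (7.5) can be checked by a midpoint Riemann sum in Python; the base area B = 3 and altitude h = 5 below are arbitrary example values:

```python
B, h = 3.0, 5.0       # example base area and altitude
n = 100_000
dz = h / n

# Sum the slab volumes B*(z/h)**2 * dz from vertex (z = 0) to base (z = h).
V = sum(B * (((k + 0.5) * dz) / h) ** 2 * dz for k in range(n))

V_exact = B * h / 3   # per (7.5)
```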
The sphere's total surface area then is the sum of all such elements over the sphere's entire surface:

    S_sphere = \int_{φ=−π}^{π} \int_{θ=0}^{π} dS
             = \int_{φ=−π}^{π} \int_{θ=0}^{π} r^2 \sin θ \, dθ \, dφ
             = r^2 \int_{φ=−π}^{π} [−\cos θ]_0^π \, dφ
             = r^2 \int_{φ=−π}^{π} [2] \, dφ
             = 4πr^2,    (7.6)

where we have used the fact from Table 7.1 that sin τ = (d/dτ)(−cos τ).

Having computed the sphere's surface area, one can find its volume just as § 7.4.1 has found a circle's area—except that instead of dividing the circle into many narrow triangles, one divides the sphere into many narrow cones, each cone with base area dS and altitude r, with the vertices of all the cones meeting at the sphere's center. Per (7.5), the volume of one such cone is V_cone = r dS/3. Hence,

    V_sphere = \oint_S V_cone = \oint_S \frac{r \, dS}{3} = \frac{r}{3} \oint_S dS = \frac{r}{3} S_sphere,

where the useful symbol \oint_S indicates integration over a closed surface. In light of (7.6), the total volume is

    V_sphere = \frac{4πr^3}{3}.    (7.7)

(One can compute the same spherical volume more prosaically, without reference to cones, by writing dV = r^2 \sin θ \, dr \, dθ \, dφ then integrating \int_V dV. The derivation given above, however, is preferred because it lends the additional insight that a sphere can sometimes be viewed as a great cone rolled up about its own vertex. The circular area derivation of § 7.4.1 lends an analogous insight: that a circle can sometimes be viewed as a great triangle rolled up about its own vertex.)
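Both (7.6) and (7.7) admit a direct numerical check, sketched here in Python for an example radius r = 1.5; since the integrand does not depend on φ, the φ integration contributes a plain factor of 2π:

```python
import math

r = 1.5              # example radius
n = 2000
dtheta = math.pi / n

# Sum the surface elements dS = r**2 * sin(theta) * dtheta * dphi over
# the sphere; the phi integral is the trivial factor 2*pi.
S = sum(r**2 * math.sin((i + 0.5) * dtheta) * dtheta * (2 * math.pi)
        for i in range(n))

# Each narrow cone of base dS and altitude r has volume r*dS/3,
# so the total volume is (r/3)*S, as in the text.
V = (r / 3) * S

S_exact = 4 * math.pi * r**2
V_exact = 4 * math.pi * r**3 / 3
```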
7.5 Checking an integration

Dividing 0x46B/0xD = 0x57 with a pencil, how does one check the result?^14 Answer: by multiplying (0x57)(0xD) = 0x46B. Multiplication inverts division. Easier than division, multiplication provides a quick, reliable check.

Likewise, integrating

    \int_a^b \frac{τ^2}{2} \, dτ = \frac{b^3 − a^3}{6}

with a pencil, how does one check the result? Answer: by differentiating

    \left[ \frac{∂}{∂b} \left( \frac{b^3 − a^3}{6} \right) \right]_{b=τ} = \frac{τ^2}{2}.

Differentiation inverts integration. Easier than integration, differentiation like multiplication provides a quick, reliable check.

More formally, according to (7.2),

    S ≡ \int_a^b \frac{df}{dτ} \, dτ = f(b) − f(a).    (7.8)

Differentiating (7.8) with respect to b and a,

    \left. \frac{∂S}{∂b} \right|_{b=τ} = \frac{df}{dτ},
    \left. \frac{∂S}{∂a} \right|_{a=τ} = −\frac{df}{dτ}.    (7.9)

Either line of (7.9) can be used to check an integration. Evaluating (7.8) at b = a yields

    S|_{b=a} = 0,    (7.10)

which can be used to check further.^15

As useful as (7.9) and (7.10) are, they nevertheless serve only integrals with variable limits. They are of little use to check definite integrals, which lack variable limits to differentiate.

^14 Admittedly, few readers will ever have done much such multidigit hexadecimal arithmetic with a pencil, but, hey, go with it. In decimal, it's 1131/13 = 87. Actually, hexadecimal is just proxy for binary (see Appendix A), and long division in straight binary is kind of fun. If you have never tried it, you might. It is simpler than decimal or hexadecimal division, and it's how computers divide. The insight gained is worth the trial.

^15 Using (7.10) to check the example, (b^3 − a^3)/6|_{b=a} = 0.
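The section's two checks replay easily in Python: the hexadecimal multiplication check, and the differentiation checks (7.9) and (7.10) applied to the worked example S(a, b) = (b³ − a³)/6. The numeric step eps is an implementation choice, not part of the text:

```python
# Multiplication inverts division: the text's hexadecimal example.
assert 0x57 * 0xD == 0x46B          # i.e. 87 * 13 == 1131

def S(a, b):
    # The claimed integration result S(a, b) = (b**3 - a**3) / 6.
    return (b**3 - a**3) / 6

# Check per (7.9): dS/db evaluated at b = tau should equal tau**2 / 2.
tau, eps = 1.7, 1e-6
dS_db = (S(0.0, tau + eps) - S(0.0, tau - eps)) / (2 * eps)
integrand = tau**2 / 2

# Check per (7.10): S must vanish when b = a.
zero = S(tau, tau)
```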
Definite integrals like (9.14) below are of that kind. However, many or most integrals one meets in practice have or can be given variable limits, and equations (7.9) and (7.10) do serve such indefinite integrals.

It is a rare irony of mathematics that, although numerically differentiation is indeed harder than integration, analytically precisely the opposite is true. Analytically, differentiation is the easier. So far the book has introduced only easy integrals, but Ch. 9 will bring much harder ones. Even experienced mathematicians are apt to err in analyzing these. Reversing an integration by taking an easy derivative is thus an excellent way to check a hard-earned integration result.

7.6 Contour integration

To this point we have considered only integrations in which the variable of integration advances in a straight line from one point to another: for instance, \int_a^b f(τ) \, dτ, in which the function f(τ) is evaluated at τ = a, a + dτ, a + 2dτ, …, b. The integration variable is a real-valued scalar which can do nothing but make a straight line from a to b.

Such is not the case when the integration variable is a vector. Consider the integral

    S = \int_{r=\hat{x}ρ}^{\hat{y}ρ} (x^2 + y^2) \, dℓ.

What does this integral mean? Does it mean to integrate from r = x̂ρ to r = 0, then from there to r = ŷρ? Or does it mean to integrate along the arc of Fig. 7.8? The two paths of integration begin and end at the same points, but they differ in between, and the integral certainly does not come out the same both ways. Yet many other paths of integration from x̂ρ to ŷρ are possible, not just these two.

Because multiple paths are possible, we must be more specific:

    S = \int_C (x^2 + y^2) \, dℓ,

where C stands for "contour" and means in this example the specific contour of Fig. 7.8, and where dℓ is the infinitesimal length of a step along the path of integration. In the example, x^2 + y^2 = ρ^2 (by the Pythagorean theorem) and dℓ = ρ \, dφ, so

    S = \int_C ρ^2 \, dℓ = \int_0^{2π/4} ρ^3 \, dφ = \frac{2π}{4} ρ^3.
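That the integral genuinely depends on the path can be witnessed numerically. This Python sketch compares the arc contour of the example against the two straight legs through the origin, with ρ = 1 chosen for simplicity:

```python
import math

rho = 1.0
n = 100_000

# Path 1: the quarter arc.  On it, x**2 + y**2 = rho**2 and dl = rho*dphi,
# so each step contributes rho**3 * dphi, as in the text's example.
dphi = (math.pi / 2) / n
S_arc = n * (rho**3 * dphi)                 # = (2*pi/4) * rho**3

# Path 2: straight from x_hat*rho to the origin, then out to y_hat*rho.
# On each leg x**2 + y**2 = t**2 with dl = dt, t running from 0 to rho.
dt = rho / n
S_legs = 2 * sum(((k + 0.5) * dt) ** 2 * dt for k in range(n))
```

The two results differ substantially (π/2 ≈ 1.571 versus 2/3 ≈ 0.667), which is why the contour C must be specified.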
In the example the contour is open, but closed contours which begin and end at the same point are also possible, indeed common. The useful symbol \oint indicates integration over a closed contour. It means that the contour ends where it began: the loop is closed. The contour of Fig. 7.8 would be closed, for instance, if it continued to r = 0 and then back to r = x̂ρ.

Besides applying where the variable of integration is a vector, contour integration applies equally where the variable of integration is a complex scalar, as we shall see in §§ 8.8 and 9.5. In the latter case some interesting mathematics emerge.

Figure 7.8: A contour of integration. [The contour C is a quarter arc of radius ρ at angle φ in the x-y plane.]

7.7 Discontinuities

The polynomials and trigonometrics studied to this point in the book offer flexible means to model many physical phenomena of interest, but one thing they do not model gracefully is the simple discontinuity. Consider a mechanical valve opened at time t = t_o. The flow x(t) past the valve is

    x(t) = \begin{cases} 0, & t < t_o; \\ x_o, & t > t_o. \end{cases}

The Dirac delta's definition carries the interesting consequence that

    \int_{-\infty}^{\infty} δ(t − t_o) f(t) \, dt = f(t_o)    (7.14)

for any function f(t). (Equation 7.14 is the sifting property of the Dirac delta.)^16 The Dirac delta is defined for vectors, too, such that

    \int_V δ(r) \, dr = 1.    (7.15)

^16 It seems inadvisable for the narrative to digress at this point to explore u(z) and δ(z), the unit step and delta of a complex argument, although by means of Fourier analysis ([chapter not yet written]) or by conceiving the Dirac delta as an infinitely narrow Gaussian pulse ([chapter not yet written]) it could perhaps do so. The book has more pressing topics to treat. For the book's present purpose the interesting action of the two functions is with respect to the real argument t.

In the author's country at least, a sort of debate seems to have run for decades between professional and applied mathematicians over the Dirac delta δ(t). Some professional mathematicians seem to have objected that δ(t) is not a function, inasmuch as it lacks certain properties common to functions as they define them [36, § 2.4][12]. From the applied point of view the objection is admittedly a little hard to understand, until one realizes that it is more a dispute over methods and definitions than over facts. What the professionals seem to be saying is that δ(t) does not fit as neatly as they would like into the abstract mathematical framework they had established for functions in general before Paul Dirac came along in 1930 [52, "Paul Dirac," 05:48, 25 May 2006] and slapped his disruptive δ(t) down on the table. The objection is not so much that δ(t) is not allowed as it is that professional mathematics for years after 1930 lacked a fully coherent theory for it.

It's a little like the six-fingered man in Goldman's The Princess Bride [19]. If I had established a definition of "nobleman" which subsumed "human," whose relevant traits in my definition included five fingers on each hand, then you would expect me to adapt my definition when the six-fingered Count Rugen appeared on the scene, wouldn't you? By my preëxisting definition, the six-fingered count is "not a nobleman"; but such exclusion really tells one more about flaws in the definition than it does about the count.

Whether the professional mathematician's definition of the function is flawed, of course, is not for this writer to judge. Even if not, the fact of the Dirac delta dispute, coupled with the difficulty we applied mathematicians experience in trying to understand the reason the dispute even exists, has unfortunately surrounded the Dirac delta with a kind of mysterious aura, an elusive sense that δ(t) hides subtle mysteries—when what it really hides is an internal discussion of words and means among the professionals. The professionals who had established the theoretical framework before 1930 justifiably felt reluctant to throw the whole framework away because some scientists and engineers like us came along one day with a useful new function which didn't quite fit, but that was the professionals' problem not ours. To us the Dirac delta δ(t) is just a function. The internal discussion of words and means, we leave to the professionals, who know whereof they speak.
7.8 Remarks (and exercises)

The last exercise in particular requires some experience to answer. Moreover, it requires a developed sense of applied mathematical style to put the answer in a pleasing form (the right form for part b is very different from that for part a). Some of the easier exercises, of course, you should be able to work right now.

The point of the exercises is to illustrate how hard integrals can be to solve, and in fact how easy it is to come up with an integral which no one really knows how to solve very well. Some solutions to the same integral are better than others (easier to manipulate, faster to numerically calculate, etc.), yet not even the masters can solve them all in practical ways. On the other hand, integrals which arise in practice often can be solved very well with sufficient cleverness—and the more cleverness you develop, the more such integrals you can solve. The mathematical art of solving diverse integrals is well worth cultivating. Chapter 9 introduces some of the basic, most broadly useful integral-solving techniques. The ways to solve them are myriad. Before addressing techniques of integration, however, as promised earlier we turn our attention in Chapter 8 back to the derivative, applied in the form of the Taylor series.
Chapter 8

The Taylor series

The Taylor series is a power series which fits a function in a limited domain neighborhood. Fitting a function in such a way brings two advantages:

• it lets us take derivatives and integrals in the same straightforward way (4.20) we take them with any power series; and

• it implies a simple procedure to calculate the function numerically.

This chapter introduces the Taylor series and some of its incidents. It also derives Cauchy's integral formula. The chapter's early sections prepare the ground for the treatment of the Taylor series proper in § 8.3.^1

^1 Because even at the applied level the proper derivation of the Taylor series involves mathematical induction, analytic continuation and the matter of convergence domains, no balance of rigor the chapter might strike seems wholly satisfactory. The chapter errs maybe toward too much rigor; for, with a little less, most of §§ 8.1, 8.2, 8.4 and 8.6 would cease to be necessary. From another point of view, the chapter errs maybe toward too little rigor: some pretty constructs of pure mathematics serve the Taylor series and Cauchy's integral formula, but such constructs drive the applied mathematician on too long a detour. The chapter as written represents the most nearly satisfactory compromise the writer has been able to attain. For the impatient, to read only the following sections might not be an unreasonable way to shorten the chapter: §§ 8.1, 8.3, 8.5, 8.8, 8.9 and 8.11, plus the introduction of § 8.2.
This is to say that a_{(n−1)k} = a_{nk} − a_{n(k−1)}, or in other words that

    a_{n(k−1)} + a_{(n−1)k} = a_{nk}.    (8.3)

Equation (8.1) is thus suggestive: it seems to imply a relationship between the 1/(1 − z)^{n+1} series and the 1/(1 − z)^n series for any n.

Thinking of Pascal's triangle, (8.3) reminds one of (4.5), transcribed here in the symbols

    \binom{m−1}{j−1} + \binom{m−1}{j} = \binom{m}{j},    (8.4)

except that (8.3) is not a_{(m−1)(j−1)} + a_{(m−1)j} = a_{mj}. Various changes of variable are possible to make (8.4) better match (8.3). We might try at first a few false ones, but eventually the change

    n + k ← m,
    k ← j,

recommends itself. Thus changing in (8.4) gives

    \binom{n+k−1}{k−1} + \binom{n+k−1}{k} = \binom{n+k}{k}.

Transforming according to the rule (4.3), this is

    \binom{n+(k−1)}{n} + \binom{(n−1)+k}{n−1} = \binom{n+k}{n},    (8.5)

which fits (8.3) perfectly. Hence we conjecture that

    a_{nk} = \binom{n+k}{n},    (8.6)

which coefficients, applied to (8.1), yield (8.2).

At this point, all we can say is that the conjecture seems right. It works at least for the important case of n = 0. But to seem is not to be. We shall establish that it is right in the next subsection.
8.1.3 Convergence

The question remains as to the domain over which the sum (8.1) converges.^2 To answer the question, consider that per (4.9),

    \binom{m}{j} = \frac{m}{m−j} \binom{m−1}{j}

for any integers m > 0 and j. With the substitution n + k ← m, n ← j, this means that

    \binom{n+k}{n} = \frac{n+k}{k} \binom{n+(k−1)}{n},

or more tersely,

    a_{nk} = \frac{n+k}{k} \, a_{n(k−1)},    (8.11)

where

    a_{nk} ≡ \binom{n+k}{n}

are the coefficients of the power series (8.1).

^2 The meaning of the verb to converge may seem clear enough from the context and from earlier references, but if explanation here helps: a series converges if and only if it approaches a specific, finite value after many terms. A more rigorous way of saying the same thing is as follows: the series

    S = \sum_{k=0}^{∞} τ_k

converges iff (if and only if), for all possible positive constants ǫ, there exists a finite K ≥ −1 such that

    \left| \sum_{k=K+1}^{n} τ_k \right| < ǫ

for all n ≥ K (of course it is also required that the τ_k be finite, but you knew that already). The professional mathematical literature calls such convergence "uniform convergence," distinguishing it through a test devised by Weierstrass from the weaker "pointwise convergence" [1, § 1.5]. The applied mathematician can profit substantially by learning the professional view in the matter, but the effect of trying to teach the professional view in a book like this would not be pleasing. Here, we avoid error by keeping a clear view of the physical phenomena the mathematics is meant to model. It is interesting nevertheless to consider an example of an integral for which convergence is not so simple, such as Frullani's integral of § 9.
Multiplying (8.11) by z^k/z^{k−1} gives the ratio

    \frac{a_{nk} z^k}{a_{n(k−1)} z^{k−1}} = \left( 1 + \frac{n}{k} \right) z,

which is to say that the kth term of (8.1) is (1 + n/k)z times the (k − 1)th term. So long as the criterion^3

    \left| \left( 1 + \frac{n}{k} \right) z \right| ≤ 1 − δ

is satisfied for all sufficiently large k > K—where 0 < δ ≪ 1 is a small positive constant—then the series evidently converges (see § 2.6). But we can bind 1 + n/k as close to unity as desired by making K sufficiently large, so to meet the criterion it suffices that

    |z| < 1.    (8.12)

The bound (8.12) thus establishes a sure convergence domain for (8.1).

8.1.4 General remarks on mathematical induction

We have proven (8.6) by means of a mathematical induction. The virtue of induction as practiced in § 8.1.2 is that it makes a logically clean, airtight case for a formula. Its vice is that it conceals the subjective process which has led the mathematician to consider the formula in the first place. Once you obtain a formula somehow, maybe you can prove it by induction, but the induction probably does not help you to obtain the formula! A good inductive proof usually begins by motivating the formula proven, as in § 8.1.1.

Richard W. Hamming once said of mathematical induction,

    The theoretical difficulty the student has with mathematical induction arises from the reluctance to ask seriously, "How could I prove a formula for an infinite number of cases when I know that testing a finite number of cases is not enough?" Once you really face this question, you will understand the ideas behind mathematical induction. It is only when you grasp the problem clearly that the method becomes clear. [20, § 2.3]

^3 Although one need not ask the question to understand the proof, the attentive reader may nevertheless wonder why the simpler |(1 + n/k)z| < 1 is not given as a criterion. The surprising answer is that not all series Σ τ_k with |τ_k/τ_{k−1}| < 1 converge! For example, the extremely simple Σ 1/k does not converge. As we see however, all series Σ τ_k with |τ_k/τ_{k−1}| < 1 − δ do converge. The distinction is subtle but rather important. The really curious reader may now ask why Σ 1/k does not converge. Answer: it majorizes \int_1^x (1/τ) \, dτ = \ln x. See (5.7) and § 8.10.
Mathematical induction is a broadly applicable technique for constructing mathematical proofs. We shall not always write inductions out as explicitly in this book as we have done in the present section—often we shall leave the induction as an implicit exercise for the interested reader—but this section's example at least lays out the general pattern of the technique.

Hamming, a professional mathematician who sympathized with the applied mathematician's needs, wrote further,

    The function of rigor is mainly critical and is seldom constructive. Rigor is the hygiene of mathematics, which is needed to protect us against careless thinking. [20, § 1.6]

Hamming also wrote,

    Ideally, when teaching a topic the degree of rigor should follow the student's perceived need for it. . . . It is necessary to require a gradually rising level of rigor so that when faced with a real need for it you are not left helpless. As a result, . . . [one cannot teach] a uniform level of rigor, but rather a gradually rising level. Logically, this is indefensible, but psychologically there is little else that can be done. [20, § 1.6]

Applied mathematics holds that the practice is defensible, on the ground that the math serves the model. The applied mathematician may tend to avoid rigor for which he finds no immediate use, but he does not disdain mathematical rigor on principle; the style lies in exercising rigor at the right level for the problem at hand. Hamming nevertheless makes a pertinent point.

8.2 Shifting a power series' expansion point

One more question we should treat before approaching the Taylor series proper in § 8.3 concerns the shifting of a power series' expansion point: how can the expansion point of the power series

    f(z) = \sum_{k=K}^{∞} (a_k)(z − z_o)^k,    K ≤ 0,    (8.13)

be shifted?
Equations (8.13) through (8.18) serve to shift a power series' expansion point, calculating the coefficients of a power series for f(z) about z = z1, given those of a power series about z = zo. Notice that—unlike the original, z = zo power series—the new, z = z1 power series has terms (z − z1)^k only for k ≥ 0; it has no terms of negative order. At the price per (8.12) of restricting the convergence domain to |w| < 1, shifting the expansion point away from the pole at z = zo has resolved the k < 0 terms.

The method fails if z = z1 happens to be a pole or other nonanalytic point of f(z). The convergence domain vanishes as z1 approaches such a forbidden point. (Examples of such forbidden points include z = 0 in h[z] = 1/z and in g[z] = √z.) Furthermore, even if z1 does represent a fully analytic point of f(z), it also must lie within the convergence domain of the original, z = zo series for the shift to be trustworthy as derived.

The attentive reader might observe that we have formally established the convergence neither of f−(z) in (8.17) nor of f+(z) in (8.18). Regarding the former convergence, that of f−(z): since according to (8.12) each term of the original f−(z) of (8.16) converges for |w| < 1, the reconstituted f−(z) of (8.17) safely converges in the same domain. The latter convergence, that of f+(z), is harder to establish in the abstract because that subseries has an infinite number of terms. As we shall see by pursuing a different line of argument in § 8.3, however, the f+(z) of (8.18) can be nothing other than the Taylor series about z = z1 of the function f+(z) in any event, enjoying the same convergence domain any such Taylor series enjoys.^4

In applied mathematics, one does not normally try to shift the expansion point of an unspecified function f(z). Rather, one shifts the expansion point of some concrete function like sin z or ln(1 − z). The imagined difficulty (if any) vanishes in the concrete case. Moreover, we have strategically framed the problem so that one needn't worry about it anyway, running the sum in (8.13) from the finite k = K ≤ 0 rather than from the infinite k = −∞.

^4 A rigorous argument can be constructed without appeal to § 8.3 if desired, from the ratio n/(n − k) of (4.9) and its brethren, which ratio approaches unity with increasing n. A more elegant rigorous argument can be made indirectly by way of a complex contour integral (see § 8.8). In applied mathematics, however, the important point is the one made in the narrative: f+(z) can be nothing other than the Taylor series in any event.

8.3 Expanding functions in Taylor series

Having prepared the ground, we now stand in position to treat the Taylor series proper. The treatment begins with a question: if you had to express
some function f(z) by a power series

    f(z) = \sum_{k=0}^{∞} (a_k)(z − z_o)^k,

with terms of nonnegative order k ≥ 0 only, how would you do it? The procedure of § 8.1 worked well enough in the case of f(z) = 1/(1 − z)^{n+1}, but it is not immediately obvious that the same procedure works more generally. What if f(z) = sin z, for example?^5 Fortunately a different way to attack the power-series expansion problem is known. It works by asking the question: what power series, having terms of nonnegative order only, most resembles f(z) in the immediate neighborhood of z = z_o? To resemble f(z), the desired power series should have a_0 = f(z_o); otherwise it would not have the right value at z = z_o. Then it should have a_1 = f′(z_o) for the right slope, a_2 = f′′(z_o)/2 for the right second derivative, and so on. With this procedure,

    f(z) = \sum_{k=0}^{∞} \left( \left. \frac{d^k f}{dz^k} \right|_{z=z_o} \right) \frac{(z − z_o)^k}{k!}.    (8.19)

Equation (8.19) is the Taylor series. Where it converges, it has all the same derivatives f(z) has, so if f(z) is infinitely differentiable then the Taylor series is an exact representation of the function.^6

^5 The actual Taylor series for sin z is given in § 8.9.

^6 For readers who want a little more rigor, the argument goes briefly as follows. Consider an infinitely differentiable function F(z) and its Taylor series f(z) about z_o. Let ∆F(z) ≡ F(z) − f(z) be the part of F(z) not representable as a Taylor series about z_o. If ∆F(z) is the part of F(z) not representable as a Taylor series, then ∆F(z_o) and all its derivatives at z_o must be identically zero (otherwise by the Taylor series formula of eqn. 8.19, one could construct a nonzero Taylor series for ∆F[z_o] from the nonzero derivatives). However, if F(z) is infinitely differentiable and if all the derivatives of ∆F(z) are zero at z = z_o, then by the unbalanced definition of the derivative from § 4.4, all the derivatives must also be zero at z = z_o ± ǫ, hence also at z = z_o ± 2ǫ, and so on. This means that ∆F(z) = 0; in other words, there is no part of F(z) not representable as a Taylor series. (A more elegant rigorous proof, preferred by the professional mathematicians [29][14] but needing significant theoretical preparation, involves integrating over a complex contour about the expansion point. We will not detail that proof here.) The reason the rigorous proof is confined to a footnote is not a deprecation of rigor as such; it is a deprecation of rigor which serves little purpose in applications. Further proof details may be too tiresome to inflict on applied mathematical readers. The interested reader can fill the details in, but basically that is how the more rigorous proof goes.
The Taylor series is not guaranteed to converge outside some neighborhood near z = z_o, but where it does converge it is precise. When z_o = 0, the series is also called the Maclaurin series. By either name, the series is a construct of great importance and tremendous practical value, as we shall soon see.

Applied mathematicians normally regard mathematical functions to be imprecise analogs of physical quantities of interest. Since the functions are imprecise analogs in any case, the applied mathematician is logically free implicitly to define the functions he uses as Taylor series in the first place—that is, to restrict the set of infinitely differentiable functions used in the model to the subset of such functions representable as Taylor series. With such an implicit definition, whether there actually exist any infinitely differentiable functions not representable as Taylor series is more or less beside the point: the definitions serve the model, not the other way around. (It is entertaining to consider [51, "Extremum"] the Taylor series of the function sin[1/x]—although in practice this particular function is readily expanded after the obvious change of variable u ← 1/x.)

8.4 Analytic continuation

As earlier mentioned in § 2.12, an analytic function is a function which is infinitely differentiable in the domain neighborhood of interest—or, maybe more appropriately for our applied purpose, a function expressible as a Taylor series in that neighborhood. Nothing prevents one from transposing such a series to a different expansion point by the method of § 8.2, except that the transposed series may there enjoy a different convergence domain. (This section's purpose finds it convenient to swap symbols z_o ↔ z_1, transposing rather from expansion about z = z_1 to expansion about z = z_o; the transposition is trustworthy so long as the expansion point z = z_o lies fully within, neither outside nor right on the edge of, the z = z_1 series' convergence domain.) As we have seen, only one Taylor series about z_o is possible for a given function f(z):

    f(z) = \sum_{k=0}^{∞} (a_k)(z − z_o)^k.

Since an analytic function f(z) is infinitely differentiable and enjoys a unique Taylor expansion f_o(z − z_o) = f(z) about each point z_o in its domain, it follows that if two Taylor series f_1(z − z_1) and f_2(z − z_2) find even a small neighborhood |z − z_o| < ǫ which lies in the domain of both, then the two can
extends the analytic continuation principle to cover power series in general. The series f1 and the series fn represent the same analytic function even if their domains do not directly overlap at all. The subject of analyticity is rightly a matter of deep concern to the professional mathematician.162
CHAPTER 8.
. One can construct
7 The writer hesitates to mention that he is given to understand [44] that the domain neighborhood can technically be reduced to a domain contour of nonzero length but zero width. The result of § 8. one need not actually expand every analytic function in a power series." he means any function at all. observe: though all convergent power series are indeed analytic. so that is all right. is an analytic function of analytic functions. The domain neighborhood |z − zo | < ǫ suffices. because the function is not differentiable there. then they are the same everywhere. THE TAYLOR SERIES
both be transposed to the common z = zo expansion point. then the whole chain of overlapping series necessarily represents the same underlying analytic function f (z).7 Observe however that the principle fails at poles and other nonanalytic points. by the derivative chain rule. and if each pair in the chain matches at least in a small neighborhood in its region of overlap. if a series f3 is found whose domain overlaps that of f2 . which shows general power series to be expressible as Taylor series except at their poles and other nonanalytic points. then a series f4 whose domain overlaps that of f3 . Now. This is a manifestation of the principle of analytic continuation. If the two are found to have the same Taylor series there. When the professional mathematician speaks generally of a "function. Besides. Moreover. It is also a long wedge which drives pure and applied mathematics apart. He does not especially recommend that the reader worry over the point. given Taylor series for g(z) and h(z) one can make a power series for f (z) by long division if desired. Section 8. the writer has neither researched the extension's proof nor asserted its truth. where g(z) and h(z) are analytic. then f1 and f2 both represent the same function. Having never met a significant application of this extension of the principle. The principle holds that if two analytic functions are the same within some domain neighborhood |z − zo | < ǫ.2. including power series with terms of negative order. and so on. Sums. products and ratios of analytic functions are no less differentiable than the functions themselves—as also. For example.15 speaks further on the point. there also is f (z) ≡ g(z)/h(z) analytic (except perhaps at isolated points where h[z] = 0).
One can construct some pretty unreasonable functions if one wants to, such as

	f([2k + 1]2^m) ≡ (−)^m,
	f(z) ≡ 0 otherwise,

where k and m are integers. However, neither functions like this f(z) nor more subtly unreasonable functions normally arise in the modeling of physical phenomena. When such functions do arise, one transforms, approximates, reduces, replaces and/or avoids them. The full theory which classifies and encompasses—or explicitly excludes—such functions is thus of limited interest to the applied mathematician, and this book does not cover it.[8]

This does not mean that the scientist or engineer never encounters nonanalytic functions. On the contrary, he encounters several, but they are not subtle: |z|, arg z, z*, ℜ(z), ℑ(z), u(t), δ(t). Refer to §§ 2.12 and 7.7. Such functions are nonanalytic either because they lack proper derivatives in the Argand plane according to (4.19) or because one has defined them only over a real domain.

[8] The foundations of the pure theory of a complex variable, though abstract, are beautiful, and even an applied mathematician can profit substantially by studying them, though they do not comfortably fit a book like this. The pure theory is probably best appreciated after one already understands its chief conclusions; though not for the explicit purpose of serving the pure theory, the present chapter does develop just such an understanding. Many books do cover the pure theory in varying degrees, including [14][44][24] and numerous others. Maybe a future release of this book will trace the pure theory's main thread in an appendix.

8.5 Branch points

The function g(z) = √z is an interesting, troublesome function. Its derivative is dg/dz = 1/(2√z), so even though the function is finite at z = 0, its derivative is not finite there. Evidently g(z) has a nonanalytic point at z = 0, yet the point is not a pole. What is it? We call it a branch point. The defining characteristic of the branch point is that, given a function f(z) with such a point at z = zo, if one encircles[9] the point once alone (that is, without also encircling some other branch point) by a closed contour in the Argand domain plane, while simultaneously tracking f(z) in the Argand range plane—and if one demands that z and f(z) move smoothly, that neither suddenly skip from one spot to another—then one finds that f(z) ends in a different place than it began, even though z itself has returned precisely to its own starting point.

[9] For readers whose native language is not English, "to encircle" means "to surround" or "to enclose." The verb does not require the boundary to have the shape of an actual, geometrical circle; any closed shape suffices. However, the circle is a typical shape, probably the most fitting shape to imagine when thinking of the concept abstractly.
An analytic function like g(z) = √z having a branch point evidently is not single-valued. It is multiple-valued: for a single z more than one distinct g(z) is possible. In complex analysis, a branch point may thus be thought of informally as a point zo at which a "multiple-valued function" changes values when one winds once around zo [52, "Branch point," 18:10, 16 May 2006].

An analytic function like h(z) = 1/z, by contrast, is single-valued even though it has a pole. This function does not suffer the syndrome described: when a domain contour encircles the pole, the corresponding range contour is properly closed. Poles do not cause their functions to be multiple-valued and thus are not branch points. For a branch point, the range contour remains open even though the domain contour is closed.

Evidently f(z) ≡ (z − zo)^a has a branch point at z = zo if and only if a is not an integer. If f(z) does have a branch point—if a is not an integer—then the mathematician must draw a distinction between z1 = zo + ρe^{iφ} and z2 = zo + ρe^{i(φ+2π)}, even though the two are exactly the same number. Indeed z1 = z2, but paradoxically f(z1) ≠ f(z2). This is difficult. It is confusing, too, until one realizes that the fact of a branch point says nothing whatsoever about the argument z. As far as z is concerned, there really is no distinction between z1 and z2—none at all. What draws the distinction is the multiple-valued function f(z) which uses the argument.

It is as though I had a mad colleague who called me Thaddeus Black, until one day I happened to walk past behind his desk (rather than in front as I usually did), whereupon for some reason he began calling me Gorbag Pfufnik. I had not changed at all, but now the colleague calls me by a different name. The change isn't really in me, is it? It's in my colleague, who seems to suffer a branch point. If it is important to me to be sure that my colleague really is addressing me when he cries, "Pfufnik!" then I had better keep a running count of how many times I have turned about his desk, even though the number of turns is personally of no import to me.

The usual analysis strategy when one encounters a branch point is simply to avoid the point.
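The branch point's defining behavior submits to a quick numerical check. The following sketch (illustrative Python written for this discussion, not part of the book's text) tracks g(z) = √z smoothly once counterclockwise about z = 0 by advancing the phase continuously rather than calling a library square root, and finds that g ends with the opposite sign even though z ends where it began.

```python
import cmath

# Track g(z) = sqrt(z) smoothly once counterclockwise about the branch
# point at z = 0.  The smooth track takes g = sqrt(rho)*exp(i*phi/2),
# with the phase phi advancing continuously from 0 to 2*pi.
rho = 1.0
phi0, phi1 = 0.0, 2 * cmath.pi

z0 = rho * cmath.exp(1j * phi0)
z1 = rho * cmath.exp(1j * phi1)
g0 = rho ** 0.5 * cmath.exp(1j * phi0 / 2)
g1 = rho ** 0.5 * cmath.exp(1j * phi1 / 2)

z_gap = abs(z1 - z0)   # z has returned precisely to its starting point
g_gap = abs(g1 + g0)   # but g(z) has changed sign: g1 = -g0
```

Contrast h(z) = 1/z: tracked the same way about its pole, h returns to its starting value, which is one way to see that a pole is not a branch point.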
Where an integral follows a closed contour as in § 8.8, the strategy is to compose the contour to exclude the branch point, to shut it out. Such a strategy of avoidance usually prospers.[10]

[10] Traditionally associated with branch points in complex variable theory are the notions of branch cuts and Riemann sheets. These ideas are interesting, but are not central to the analysis as developed in this book and are not covered here. The interested reader might consult a book on complex variables or advanced calculus like [24], among many others.

8.6 Entire and meromorphic functions

Though an applied mathematician is unwise to let abstract definitions enthrall his thinking, pure mathematics nevertheless brings some technical definitions the applied mathematician can use. Two such are the definitions of entire and meromorphic functions.[11]

A function f(z) which is analytic for all finite z is an entire function. Examples include f(z) = z² and f(z) = exp z, but not f(z) = 1/z, which has a pole at z = 0.

A function f(z) which is analytic for all finite z except at isolated poles (which can be n-fold poles if n is a finite integer), which has no branch points, and of which no circle of finite radius in the Argand domain plane encompasses an infinity of poles, is a meromorphic function. Examples include f(z) = 1/z, f(z) = 1/(z + 2) + 1/(z − 1)³ + 2z² and f(z) = tan z—the last of which has an infinite number of poles, but of which the poles nowhere cluster in infinite numbers. The function f(z) = tan(1/z) is not meromorphic since it has an infinite number of poles within the Argand unit circle. Even the function f(z) = exp(1/z) is not meromorphic: it has only the one, isolated nonanalytic point at z = 0, and that point is no branch point; but the point is an essential singularity, having the character of an infinitifold (∞-fold) pole.[12]

If it seems unclear that the singularities of tan z are actual poles, incidentally, then consider that

	tan z = sin z / cos z = − cos w / sin w,

wherein we have changed the variable w ← z − (2n + 1)(2π/4).

[11] [51]
[12] [29]
To shift an analytic function's output by an arbitrarily small ε e^{iψ}, one can choose ∆z ≈ (ε/a_K)^{1/K} e^{i(ψ+n2π)/K}. Except at a nonanalytic point of f(z) or in the trivial case that f(z) were everywhere constant, this always works—even where [df/dz]_{z=zo} = 0. That one can shift an analytic function's output smoothly in any Argand direction whatsoever has the significant consequence that neither the real nor the imaginary part of the function—nor for that matter any linear combination ℜ[e^{−iω} f(z)] of the real and imaginary parts—can have an extremum within the interior of a domain over which the function is fully analytic. That is, a function's extrema over a bounded analytic domain never lie within the domain's interior but always on its boundary.[13][14]

8.8 Cauchy's integral formula

In § 7.6 we considered the problem of vector contour integration, in which the sum value of an integration depends not only on the integration's endpoints but also on the path, or contour, over which the integration is done, as in Fig. 7.8. Because real scalars are confined to a single line, no alternate choice of path is possible where the variable of integration is a real scalar, so the contour problem does not arise in that case. It does however arise where the variable of integration is a complex scalar, because there again different paths are possible. Refer to the Argand plane of Fig. 2.5.

Consider the integral

	Sn = ∫ from z1 to z2 of z^{n−1} dz.    (8.21)

If z were always a real number, then by the antiderivative (§ 7.2) this integral would evaluate to (z2^n − z1^n)/n, or, in the case of n = 0, to ln(z2/z1). Inasmuch as z is complex, however, the correct evaluation is less obvious. To evaluate the integral sensibly in the latter case, one must consider some specific path of integration in the Argand plane. One must also consider the meaning of the symbol dz.

[13] Professional mathematicians tend to define the domain and its boundary more carefully.
[14] [9][29]
8.8.1 The meaning of the symbol dz

The symbol dz represents an infinitesimal step in some direction in the Argand plane:

	dz = [z + dz] − [z]
	   = (ρ + dρ) e^{i(φ+dφ)} − ρ e^{iφ}
	   = (ρ + dρ) e^{i dφ} e^{iφ} − ρ e^{iφ}
	   = (ρ + dρ)(1 + i dφ) e^{iφ} − ρ e^{iφ}.

Since the product of two infinitesimals is negligible even on infinitesimal scale, we can drop the dρ dφ term.[15] After canceling finite terms, we are left with the peculiar but excellent formula

	dz = (dρ + iρ dφ) e^{iφ}.    (8.22)

[15] The dropping of second-order infinitesimals like dρ dφ, added to first-order infinitesimals like dρ, is a standard calculus technique. One cannot always drop them, however. Occasionally one encounters a sum in which not only do the finite terms cancel, but also the first-order infinitesimals. In such a case, the second-order infinitesimals dominate and cannot be dropped. An example of the type is

	lim_{ε→0} [(1 − ε)³ + 3(1 + ε) − 4]/ε² = lim_{ε→0} [(1 − 3ε + 3ε²) + (3 + 3ε) − 4]/ε² = 3.

One typically notices that such a case has arisen when the dropping of second-order infinitesimals has left an ambiguous 0/0. To fix the problem, you simply go back to the step where you dropped the infinitesimal and you restore it, then you proceed from there. Otherwise there isn't much point in carrying second-order infinitesimals around. In the relatively uncommon event that you need them, you'll know it. The math itself will tell you.

8.8.2 Integrating along the contour

Now consider the integration (8.21) along the contour of Fig. 8.1.
The odd thing about this is in what happens when the contour closes a complete loop in the Argand plane about the z = 0 pole. In this case, φ2 = φ1 + 2π, thus S0 = i2π, even though the integration ends where it begins.

Generalizing, we have that

	∮ (z − zo)^{n−1} dz = 0,  n ≠ 0;
	∮ dz/(z − zo) = i2π;    (8.26)

where as in § 7.6 the symbol ∮ represents integration about a closed contour that ends where it begins, and where it is implied that the contour loops positively (counterclockwise, in the direction of increasing φ) exactly once about the z = zo pole.

Notice that the formula's i2π does not depend on the precise path of integration, but only on the fact that the path loops once positively about the pole. Notice also that nothing in the derivation of (8.23) actually requires that n be an integer, so one can write

	∫ from z1 to z2 of z^{a−1} dz = (z2^a − z1^a)/a,  a ≠ 0.    (8.27)

However, (8.26) does not hold in the latter case; its integral comes to zero for nonintegral a only if the contour does not enclose the branch point at z = zo. For a closed contour which encloses no pole or other nonanalytic point, (8.27) has that ∮ z^{a−1} dz = 0, or with the change of variable z − zo ← z,

	∮ (z − zo)^{a−1} dz = 0.    (8.28)

But because any analytic function can be expanded in the form f(z) = Σ_k (c_k)(z − zo)^{a_k−1} (which is just a Taylor series if the a_k happen to be positive integers), this means that ∮ f(z) dz = 0 if f(z) is everywhere analytic within the contour.[16]

[16] The careful reader will observe that (8.28)'s derivation does not explicitly handle an f(z) represented by a Taylor series with an infinite number of terms and a finite convergence domain (for example, f[z] = ln[1 − z]). However, by § 8.2 one can transpose such a series from zo to an overlapping convergence domain about z1. Let the contour's interior be divided into several cells, each of which is small enough to enjoy a single convergence domain, and integrate about each cell. Because the cells share boundaries within the contour's interior, each interior boundary is integrated twice, once in each direction, canceling. The original contour—each piece of which is an exterior boundary of some cell—is integrated once piecewise, and then per (8.28) the resulting integral totals zero. This is the basis on which a more rigorous proof is constructed.
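The path-independence of the i2π result invites a numerical spot check. The sketch below (illustrative Python, not from the book; the radius, center and step count are arbitrary choices) integrates dz/(z − zo) by a chord-midpoint rule once counterclockwise around a circle enclosing the pole.

```python
import cmath

# Integrate dz/(z - zo) once counterclockwise about a circle of
# radius 2 centered on the pole zo.  The answer approximates i*2*pi
# regardless of the circle chosen, provided it loops the pole once.
zo = 0.3 + 0.4j
N = 10000
total = 0j
for k in range(N):
    za = zo + 2 * cmath.exp(2j * cmath.pi * k / N)
    zb = zo + 2 * cmath.exp(2j * cmath.pi * (k + 1) / N)
    zm = (za + zb) / 2                 # midpoint of the chord
    total += (zb - za) / (zm - zo)     # f(z) dz over the chord

err = abs(total - 2j * cmath.pi)
```

Shrinking or growing the circle, or recentering it anywhere that still encloses zo once, leaves the result unchanged to within the discretization error.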
8.8.3 The formula

The combination of (8.26) and (8.28) is powerful. Consider the closed contour integral

	∮ f(z) dz / (z − zo),

where the contour encloses no nonanalytic point of f(z) itself but does enclose the pole of f(z)/(z − zo) at z = zo. If the contour were a tiny circle of infinitesimal radius about the pole, then the integrand would reduce to f(zo)/(z − zo); then per (8.26),

	∮ f(z) dz / (z − zo) = i2π f(zo).    (8.29)

But if the contour were not an infinitesimal circle but rather the larger contour of Fig. 8.2? In this case, if the dashed detour which excludes the pole is taken, then according to (8.28) the resulting integral totals zero; but the two straight integral segments evidently cancel; and similarly as we have just reasoned, the reverse-directed integral about the tiny detour circle is −i2πf(zo); so to bring the total integral to zero the integral about the main contour must be i2πf(zo), whether the contour be small or large. Thus, (8.29) holds for any positively-directed contour which once encloses a pole and no other nonanalytic point. Equation (8.29) is Cauchy's integral formula.[17]

Figure 8.2: A Cauchy contour integral. [A contour in the Argand plane, axes ℜ(z) and ℑ(z), about the pole at z = zo, with a dashed detour excluding the pole.]

If the contour encloses multiple poles (§§ 2.11 and 9.6.2), then by the principle of linear superposition (§ 7.3.3),

	∮ [ fo(z) + Σ_k fk(z)/(z − zk) ] dz = i2π Σ_k fk(zk),    (8.30)

where the fo(z) is a regular part and where neither fo(z) nor any of the several fk(z) has a pole or other nonanalytic point within (or on) the contour. The values fk(zk), which represent the strengths of the poles, are called residues. In words, (8.30) says that an integral about a closed contour in the Argand plane comes to i2π times the sum of the residues of the poles (if any) thus enclosed. (Note however that eqn. 8.30 does not handle branch points. If there is a branch point, the contour must exclude it or the formula will not work.)

As we shall see in § 9.5, Cauchy's integral formula is an extremely useful result.[18]

8.8.4 Enclosing a multiple pole

When a complex contour of integration encloses a double, triple or other n-fold pole, the integration can be written

	S = ∮ f(z) dz / (z − zo)^{m+1},  where m + 1 = n.

Expanding f(z) in a Taylor series (8.19) about z = zo,

	S = ∮ Σ_{k=0}^{∞} [d^k f/dz^k at z=zo] dz / [(k!)(z − zo)^{m−k+1}].

[17] [32, § 1.1]
[18] [24, § 10.6][44][52, "Cauchy's integral formula," 14:13, 20 April 2006]
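Cauchy's integral formula itself can be verified numerically in the same chord-midpoint style. The sketch below (illustrative Python, not from the book) takes f(z) = exp z, everywhere analytic, and an arbitrary circle of radius 3 about the origin—not centered on zo—and recovers f(zo) from the contour integral.

```python
import cmath

# Check (1/(i*2*pi)) * closed-contour integral of f(z)/(z - zo) = f(zo)
# for f(z) = exp(z).  The contour is a circle of radius 3 about the
# origin, chosen arbitrarily; it need not be centered on the pole.
zo = 0.5 - 0.25j
N = 20000
integral = 0j
for k in range(N):
    za = 3 * cmath.exp(2j * cmath.pi * k / N)
    zb = 3 * cmath.exp(2j * cmath.pi * (k + 1) / N)
    zm = (za + zb) / 2
    integral += cmath.exp(zm) / (zm - zo) * (zb - za)

recovered = integral / (2j * cmath.pi)
err = abs(recovered - cmath.exp(zo))
```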
With these derivatives, the Taylor series about z = 1 is

	ln z = Σ_{k=1}^{∞} [−(−)^k (k − 1)!/k!] (z − 1)^k = −Σ_{k=1}^{∞} (1 − z)^k / k,

evidently convergent for |1 − z| < 1. One can expand the Taylor series about a different point, but cleverer and easier is to take advantage of some convenient relationship like ln w = −ln[1/w].

Table 8.1 lists Taylor series for a few functions of interest. All the series converge for |z| < 1; the exp z, sin z and cos z series converge for all complex z. Among the several series, the series for arctan z is computed indirectly[20] by way of Table 5.3 and (2.33):

	arctan z = ∫ from 0 to z of dw/(1 + w²)
	         = ∫ from 0 to z of Σ_{k=0}^{∞} (−)^k w^{2k} dw
	         = Σ_{k=0}^{∞} (−)^k z^{2k+1} / (2k + 1).

Using such Taylor series, one can relatively efficiently calculate actual numerical values for ln z and many other functions. For example,

	ln(3/2) = ln(1 + 1/2) = 1/[(1)(2¹)] − 1/[(2)(2²)] + 1/[(3)(2³)] − 1/[(4)(2⁴)] + ···

(And if z lies outside the convergence domain? Several strategies are then possible. Section 8.10.4 elaborates.)

[20] [42, § 11-7]

8.10 Error bounds

One naturally cannot actually sum a Taylor series to an infinite number of terms. One must add some finite number of terms, then quit—which raises the question: how many terms are enough? How can one know that one has added adequately many terms; that the remaining terms, which constitute the tail of the series, are sufficiently insignificant? How can one set error bounds on the truncated sum?

8.10.1 Examples

Some series alternate sign. For these it is easy if the numbers involved happen to be real.
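The ln(3/2) series above converges quickly, each term shrinking by better than a factor of two. A short numerical sketch (illustrative Python, not from the book) sums it and compares against an independently computed reference value:

```python
import math

# ln(3/2) = sum_{k>=1} (-1)^(k+1) / (k * 2^k):
# 1/(1*2) - 1/(2*4) + 1/(3*8) - 1/(4*16) + ...
n_terms = 30
s = sum((-1) ** (k + 1) / (k * 2 ** k) for k in range(1, n_terms + 1))
err = abs(s - math.log(1.5))   # math.log supplies the reference value
```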
The overestimate R′n in the example majorizes the series' true remainder Rn. Notice that the Rn is a fairly small number, and that it would have been a lot smaller yet had we included a few more terms in Sn (for instance, n = 0x40 would have bound ln 2 tighter than the limit of a computer's typical double-type floating-point accuracy). In practical calculation, one would let a computer add many terms of the series first numerically, and only then majorize the remainder. The technique usually works well in practice for this reason.

8.10.2 Majorization

To majorize in mathematics is to be, or to replace by virtue of being, everywhere at least as great as. Majorization serves surely to bound an unknown quantity by a larger, known quantity; reflecting, minorization[21] serves surely to bound an unknown quantity by a smaller, known quantity. This is best explained by example.

Consider the summation

	S = Σ_{k=1}^{∞} 1/k² = 1 + 1/2² + 1/3² + 1/4² + ···

The exact value this summation totals to is unknown to us, but the summation does rather resemble the integral (refer to Table 7.1)

	I = ∫ from 1 to ∞ of dx/x² = [−1/x] from 1 to ∞ = 1.

Figure 8.3 plots S and I together as areas—or more precisely, plots S − 1 and I together as areas (the summation's first term is omitted). As the plot shows, the unknown area S − 1 cannot possibly be as great as the known area I. In symbols, S − 1 < I = 1; or, S < 2. The integral I majorizes the summation S − 1, thus guaranteeing the absolute upper limit on S. (Of course S < 2 is a very loose limit, but that isn't the point of the example. Cleverer ways to majorize the remainder of this particular series will occur to the reader, such as in representing the terms graphically—not as flat-topped rectangles—but as slant-topped trapezoids, shifted in the figure a half unit rightward.)

[21] The author does not remember ever encountering the word minorization heretofore in print, but as a reflection of majorization the word seems logical. This book at least will use the word where needed. You can use it too if you like.
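The integral majorization of the tail makes the fencing concrete: summing n terms of S exactly and majorizing the remainder Σ_{k>n} 1/k² by ∫ from n to ∞ of dx/x² = 1/n traps S in an interval of width 1/n. A sketch (illustrative Python, not from the book; the independently known value π²/6 serves only to confirm that the fence holds):

```python
import math

# Fence S = sum_{k>=1} 1/k^2 between a partial sum and the same
# partial sum plus the integral bound on the tail.
n = 1000
partial = sum(1.0 / k ** 2 for k in range(1, n + 1))
lower = partial              # every omitted term is positive
upper = partial + 1.0 / n    # integral_n^inf dx/x^2 majorizes the tail

true_value = math.pi ** 2 / 6          # known independently
fenced = lower < true_value < upper    # the fence does hold
```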
Figure 8.3: Majorization. [Plot of y = 1/x² (dashed) over a stairstep of rectangles of heights 1/2², 1/3², 1/4², … standing on the x axis at x = 1, 2, 3, ….] The area I between the dashed curve and the x axis majorizes the area S − 1 between the stairstep curve and the x axis, because the height of the dashed curve is everywhere at least as great as that of the stairstep curve.

The quantities in question are often integrals and/or series summations, the two of which are akin as Fig. 8.3 illustrates. The choice of whether to majorize a particular unknown quantity by an integral or by a series summation depends on the convenience of the problem at hand.

The series S of this subsection is interesting, incidentally. It is a harmonic series rather than a power series, because although its terms do decrease in magnitude it has no z^k factor (or seen from another point of view, it does have a z^k factor, but z = 1), and the ratio of adjacent terms' magnitudes approaches unity as k grows. Harmonic series can be hard to sum accurately, but clever majorization can help (and if majorization does not help enough, the series transformations of Ch. [not yet written] can help even more). More common than harmonic series, however, are true power series, which are easier to sum in that they include a z^k factor.

8.10.3 Geometric majorization

There is no one, ideal bound which works equally well for all power series.
However, the point of establishing a bound is not to sum a power series exactly but rather to fence the sum within some sufficiently (rather than optimally) small neighborhood. A simple, general bound which works quite adequately for most power series encountered in practice, including among many others all the Taylor series of Table 8.1, is the geometric majorization

	|εn| < |τn+1| / (1 − |ρ|).    (8.32)

Here, τk represents the power series' kth-order term (in Table 8.1's series for exp z, for example, τk = z^k/[k!]). The |ρ| is a positive real number chosen, preferably as small as possible, such that

	|ρ| ≥ |τk+1/τk| for all k > n,
	|ρ| > |τk+1/τk| for at least one k > n,    (8.33)
	|ρ| < 1;

which is to say, more or less, such that each term in the series' tail is smaller than the last by at least a factor of |ρ|. Given these definitions, if

	Sn ≡ Σ_{k=0}^{n} τk,    (8.34)
	εn ≡ Sn − S∞,    (8.35)

where S∞ represents the true, exact (but uncalculatable, unknown) infinite series sum, then the geometric sum formula implies the geometric majorization (8.32).

If the last paragraph seems abstract, a pair of concrete examples should serve to clarify. First, if the Taylor series

	−ln(1 − z) = Σ_{k=1}^{∞} z^k/k

of Table 8.1 is truncated after the nth-order term, then

	−ln(1 − z) ≈ Σ_{k=1}^{n} z^k/k,
	|εn| < [|z|^{n+1}/(n + 1)] / (1 − |z|),

where εn is the error in the truncated sum. Here, |τk+1/τk| = [k/(k + 1)] |z| < |z|, so we have chosen |ρ| = |z|.

Second, if the Taylor series

	exp z = Σ_{k=0}^{∞} Π_{j=1}^{k} (z/j) = Σ_{k=0}^{∞} z^k/k!

also of Table 8.1 is truncated after the nth-order term, and if n + 2 > |z|, then

	exp z ≈ Σ_{k=0}^{n} z^k/k!,
	|εn| < [|z|^{n+1}/(n + 1)!] / (1 − |z|/(n + 2)).

Here, |τk+1/τk| = |z|/(k + 1), whose maximum value for all k > n occurs when k = n + 1, so we have chosen |ρ| = |z|/(n + 2).

8.10.4 Calculation outside the fast convergence domain

Used directly, the Taylor series of Table 8.1 tend to converge slowly for some values of z and not at all for others. The series for −ln(1 − z) and (1 + z)^{a−1} for instance each converge for |z| < 1 (though slowly for |z| ≈ 1), whereas each series diverges when asked to compute a quantity like −ln 3 or 3^{a−1} directly. To shift the series' expansion points per § 8.2 is one way to seek convergence, but for nonentire functions (§ 8.6) like these a more probably profitable strategy is to find and exploit some property of the functions to transform their arguments—shrinking a function's argument by some convenient means before feeding the argument to the function's Taylor series—such as

	−ln γ = ln(1/γ),
	γ^{a−1} = 1/(1/γ)^{a−1},

which leave the respective Taylor series to compute quantities like −ln(1/3) and (1/3)^{a−1} they can handle.
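The geometric majorization for the truncated logarithm series is easy to test directly. A sketch (illustrative Python, not from the book; the particular choices z = 1/2 and n = 10 are arbitrary), taking ρ = |z|:

```python
import math

# Truncate -ln(1 - z) = sum_{k>=1} z^k / k after the nth-order term
# and compare the actual error against the geometric majorization
# |eps_n| < (|z|^(n+1)/(n+1)) / (1 - |z|), with rho = |z|.
z, n = 0.5, 10
Sn = sum(z ** k / k for k in range(1, n + 1))
eps = abs(Sn - (-math.log(1 - z)))
bound = (z ** (n + 1) / (n + 1)) / (1 - z)
holds = 0 < eps < bound
```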
Let f(1 + ζ) be a function whose Taylor series about ζ = 0 converges for |ζ| < 1. The tactic (8.39) fences γ within a comfortable zone, keeping γ moderately small in magnitude but never too near the ℜ(γ) = 1/2 frontier in the Argand plane. In theory all finite γ rightward of the frontier let the Taylor series converge, but extreme γ of any kind let the series converge only slowly (and due to compound floating-point rounding error inaccurately) inasmuch as they imply that |ζ| ≈ 1. Notice how (8.39) also thus significantly speeds series convergence. For the logarithm, −ln(i^n 2^m) = m ln(1/2) − in(2π/4); for the power, (i^n 2^m)^{a−1} = cis[(n2π/4)(a − 1)]/[(1/2)^{a−1}]^m. Whereupon the formula

	f(ω) = h[f(i^n 2^m), f(γ)]    (8.40)

calculates f(ω) fast for any ω ≠ 0. Equation (8.40) leaves open the question of how to compute f(i^n 2^m), but for the example functions at least this is not hard. (The number 2π, we have not calculated yet, but shall in § 8.11.)

The method and tactic of (8.36) through (8.40) are useful in themselves and also illustrative generally. Of course, most nonentire functions lack properties of the specific kinds that (8.36) demands, but such functions may have other properties one can analogously exploit. Any number of further examples and tactics of the kind will occur to the creative reader.[24]

[24] To draw another example from Table 8.1, consider that

	arctan ω = α + arctan ζ,
	ζ ≡ (ω cos α − sin α)/(ω sin α + cos α),

where arctan ω is interpreted as the geometrical angle the vector x̂ + ŷω makes with x̂. Axes are rotated per (3.7) through some angle α to reduce the tangent from ω to ζ, thus causing the Taylor series to converge faster or indeed to converge at all. The sine and cosine in the cis function are each calculated directly by Taylor series.

8.10.5 Nonconvergent series

Variants of this section's techniques can be used to prove that a series does not converge at all. For example,

	Σ_{k=1}^{∞} 1/k

does not converge because

	1/k > ∫ from k to k+1 of dτ/τ;
hence,

	Σ_{k=1}^{∞} 1/k > Σ_{k=1}^{∞} ∫ from k to k+1 of dτ/τ = ∫ from 1 to ∞ of dτ/τ = ln ∞.

8.10.6 Remarks

The study of error bounds is not a matter of rules and formulas so much as of ideas, suggestions and tactics. There is normally no such thing as an optimal error bound—with sufficient cleverness, some tighter bound can usually be discovered—but often easier and more effective than cleverness is simply to add a few extra terms into the series before truncating it (that is, to increase n a little). To eliminate the error entirely usually demands adding an infinite number of terms, which is impossible; but since eliminating the error entirely also requires recording the sum to infinite precision, which is impossible anyway, eliminating the error entirely is not normally a goal one seeks. Perfect precision is impossible, but adequate precision is usually not hard to achieve. To eliminate the error to the 0x34-bit (sixteen-decimal place) precision of a computer's double-type floating-point representation typically requires something like 0x34 terms—if the series be wisely composed and if care be taken to keep z moderately small and reasonably distant from the edge of the series' convergence domain. Besides, few engineering applications really use much more than 0x10 bits (five decimal places) in any case. Occasionally nonetheless a series arises for which even adequate precision is quite hard to achieve. An infamous example is

	S = −Σ_{k=1}^{∞} (−)^k/√k = 1 − 1/√2 + 1/√3 − 1/√4 + ···,

which obviously converges, but sum it if you can! It is not easy to do.

Before closing the section, we ought to arrest one potential agent of terminological confusion. The "error" in a series summation's error bounds is unrelated to the error of probability theory; the English word "error" is thus overloaded here. A series sum converges to a definite value, and to the same value every time the series is summed; no chance is involved. It is just that we do not necessarily know exactly what that value is. What we can do, by this section's techniques or perhaps by other methods, is to establish a definite neighborhood in which the unknown value is sure to lie, and we can make that neighborhood as tight as we want, merely by including a sufficient number of terms in the sum.
The topic of series error bounds is what G. S. Brown refers to as "trick-based."[25] There is no general answer to the error-bound problem, but there are several techniques which help, some of which this section has introduced. Other techniques, we shall meet later in the book as the need for them arises.

[25] [6]

8.11 Calculating 2π

The Taylor series for arctan z in Table 8.1 implies a neat way of calculating the constant 2π. We already know that tan 2π/8 = 1, or in other words that

	arctan 1 = 2π/8.

Applying the Taylor series, we have that

	2π = 8 Σ_{k=0}^{∞} (−)^k/(2k + 1).    (8.41)

The series (8.41) is simple but converges extremely slowly. Much faster convergence is given by angles smaller than 2π/8. For example, from Table 3.2,

	arctan[(√3 − 1)/(√3 + 1)] = 2π/0x18.

Applying the Taylor series at this angle, we have that[26]

	2π = 0x18 Σ_{k=0}^{∞} [(−)^k/(2k + 1)] [(√3 − 1)/(√3 + 1)]^{2k+1} ≈ 0x6.487F.    (8.42)

[26] The writer is given to understand that clever mathematicians have invented subtle, still much faster-converging iterative schemes toward 2π. Admittedly, the writer supposes that useful lessons lurk in the clever mathematics underlying the subtle schemes, but such schemes are not covered here. The relatively straightforward series this section gives converges to the best accuracy of your computer's floating-point register within a paltry 0x40 iterations or so—and, after all, there is fast and there is fast: you only need to compute the numerical value of 2π once.

8.12 Odd and even functions

An odd function is one for which f(−z) = −f(z). Any function whose Taylor series about zo = 0 includes only odd-order terms is an odd function. Examples of odd functions include z³ and sin z.
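In decimal notation 0x18 = 24 and 0x6.487F ≈ 6.2832, and the fast series (8.42) can be summed in a few lines. A sketch (illustrative Python, not from the book):

```python
import math

# 2*pi = 24 * sum_{k>=0} (-1)^k/(2k+1) * x^(2k+1),
# where x = (sqrt(3)-1)/(sqrt(3)+1) = tan(2*pi/24).
x = (math.sqrt(3) - 1) / (math.sqrt(3) + 1)
two_pi = 24 * sum((-1) ** k / (2 * k + 1) * x ** (2 * k + 1)
                  for k in range(40))
err = abs(two_pi - 2 * math.pi)
```

Because x ≈ 0.268, each successive term shrinks by roughly a factor of x² ≈ 0.072, so far fewer than 0x40 terms already exhaust double precision.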
(8.11. or in other words that arctan 1 = Applying the Taylor series.41) is simple but converges extremely slowly. still much faster-converging iterative schemes toward 2π. Other techniques. Brown refers to as "trickbased.11
Calculating 2π
The Taylor series for arctan z in Table 8.S.1 implies a neat way of calculating the constant 2π. after all.41)
The series (8.12
Odd and even functions
An odd function is one for which f (−z) = −f (z). there is fast and there is fast.
but for real z. sinh z cosh z tanh z 1/ tanh z to reach (8. .2 plus l'Hˆpital's rule (4.44)
[29]
. These simpler trigonometrics are not only meromorphic but also entire.1 and 5. which is possible only for real z. o however. single poles.1). exp z and cis z—conspicuously excluded from this section's gang of eight—have no poles for finite z. . the sine function's very definition establishes the poles z = kπ (refer to Fig. therefore. cos z. Consider for instance the function 1/ sin z = i2/(eiz − e−iz ). we should like to verify that the poles (8. we finally apply l'Hˆpital's rule to each of the ratios o z − kπ z − (k − 1/2)π z − kπ z − (k − 1/2)π .
8. Consider for example expanding f (z) =
27
e−z 1 − cos z
(8. The trigonometrics are meromorphic functions (§ 8. similar reasoning for each of the eight trigonometrics forbids poles other than those (8. and thus are not meromorphic at all. . This function evidently goes infinite only when eiz = e−iz . . With the observations from Table 5. sin z cos z tan z 1/ tan z z − ikπ z − i(k − 1/2)π z − ikπ z − i(k − 1/2)π .43) lists are in fact the only poles that there are. because ez .43)'s claims. we shall marshal the identities of Tables 5. Sometimes one nevertheless wants to expand at least about a pole.29). . sinh z.1 that i sinh z = sin iz and cosh z = cos iz.6) for this reason. Before calculating residues and such. eiz .8.43). subject to Cauchy's integral formula and so on. cosh z.43) lists. The poles are ordinary. with residues. but never about a pole or branch point of the function. THE LAURENT SERIES
187
To support (8. Trigonometric poles evidently are special only in that a trigonometric function has an infinite number of them.27 The six simpler trigonometrics sin z. Satisfied that we have forgotten no poles. ez ± e−z and eiz ± e−iz then likewise are finite. that we have forgotten no poles. 3.14
The Laurent series
Any analytic function can be expanded in a Taylor series.14. Observe however that the inverse trigonometrics are multiple-valued and have branch points.
The Laurent series of § 8.14 represents one way to extend the Taylor series. Several other ways are possible.

8.15 Taylor series in 1/z

A little imagination helps the Taylor series a lot. The typical trouble one has with the Taylor series is that a function's poles and branch points limit the series' convergence domain. Thinking flexibly, however, one can often evade the trouble.

Consider the function

	f(z) = sin(1/z)/cos z.

This function has a nonanalytic point of a most peculiar nature at z = 0. The point is an essential singularity, and one cannot expand the function directly about it. One could expand the function directly about some other point like z = 1, but calculating the Taylor coefficients would take a lot of effort and, even then, the resulting series would suffer a straitly limited convergence domain. All that however tries too hard. Depending on the application, it may suffice to write

	f(z) = sin w / cos z,  w ≡ 1/z.

This is

	f(z) = [z^{−1} − z^{−3}/3! + z^{−5}/5! − ···] / [1 − z²/2! + z⁴/4! − ···],

which is all one needs to calculate f(z) numerically—and may be all one needs for analysis, too.

As an example of a different kind, consider

	g(z) = 1/(z − 2)².

Most often, one needs no Taylor series to handle such a function (one simply does the indicated arithmetic). Suppose however that a Taylor series specifically about z = 0 were indeed needed for some reason. Then, expanding (1 − z/2)^{−2} binomially,

	g(z) = (1/4)/(1 − z/2)² = (1/4) Σ_{k=0}^{∞} (1 + k)(z/2)^k = Σ_{k=0}^{∞} [(k + 1)/2^{k+2}] z^k.

That expansion is good only for |z| < 2, but for |z| > 2 we also have that

	g(z) = (1/z²)/(1 − 2/z)² = (1/z²) Σ_{k=0}^{∞} (1 + k)(2/z)^k = Σ_{k=2}^{∞} [2^{k−2}(k − 1)]/z^k.

One can expand in negative powers of z equally validly as in positive powers.
The alternate series expands in negative rather than positive powers of z. Note that we have computed the two series for g(z) without ever actually taking a derivative: though taking derivatives per (8.19) may be the canonical way to determine Taylor coefficients, any effective means to find the coefficients suffices. One can expand in negative powers of z equally validly as in positive powers. Neither of the section's examples is especially interesting in itself, but their point is that it often pays to think flexibly in extending and applying the Taylor series. One is not required immediately to take the Taylor series of a function as it presents itself; one can first change variables or otherwise rewrite the function in some convenient way, then take the Taylor series either of the whole function at once or of pieces of it separately.

8.16  The multidimensional Taylor series

Equation (8.19) has given the Taylor series for functions of a single variable. The idea of the Taylor series does not differ where there are two or more independent variables; only the details are a little more complicated. For example, consider the function f(z_1, z_2) = z_1^2 + z_1 z_2 + 2z_2, which has terms z_1^2 and 2z_2 (these we understand) but also has the cross-term z_1 z_2, for which the relevant derivative is the cross-derivative ∂^2 f/∂z_1 ∂z_2. Where two or more independent variables are involved, one must account for the cross-derivatives. With this idea in mind, the multidimensional Taylor series is

  f(z) = Σ_k [∂^k f/∂z^k]_{z=z_o} (z - z_o)^k / k!   (8.49)

Well, that's neat. What does it mean?

• The z is a vector^30 incorporating the several independent variables z_1, z_2, ..., z_N.

• The k is a nonnegative integer vector of N counters k_1, k_2, ..., k_N, one for each of the independent variables. Each of the k_n runs independently from 0 to ∞, and every permutation is possible.

^30 In this generalized sense of the word, a vector is an ordered set of N elements. The geometrical vector v = x̂x + ŷy + ẑz of § 3.3, for instance, is a vector with N = 3, v_1 = x, v_2 = y and v_3 = z.
Chapter 9

Integration techniques

Equation (4.19) implies a general technique for calculating a derivative symbolically. Its counterpart (7.1), unfortunately, implies a general technique only for calculating an integral numerically, and even for this purpose it is imperfect; for, when it comes to adding an infinite number of infinitesimal elements, how is one actually to do the sum? It turns out that there is no one general answer to this question. Some functions are best integrated by one technique, some by another. It is hard to guess in advance which technique might work best. This chapter surveys several weapons of the intrepid mathematician's arsenal against the integral.

9.1  Integration by antiderivative

The simplest way to solve an integral is just to look at it, recognizing its integrand to be the derivative of something already known:^1

  ∫_a^z (df/dτ) dτ = f(τ)|_a^z.   (9.1)

For instance,

  ∫_1^x (1/τ) dτ = ln τ|_1^x = ln x.

One merely looks at the integrand 1/τ, recognizing it to be the derivative of ln τ, then directly writes down the solution ln τ|_1^x. Refer to § 7.1.

^1 The notation f(τ)|_a^z or [f(τ)]_a^z means f(z) - f(a).
Besides the essential

  τ^{a-1} = (d/dτ)(τ^a/a),   (9.2)

Tables 7.1, 5.2, 5.3 and 9.1 provide several further good derivatives this antiderivative technique can use. The technique by itself is pretty limited, however. The frequent object of other integration techniques is to transform an integral into a form to which this basic technique can be applied.

One particular, nonobvious, useful variation on the antiderivative technique seems worth calling out specially here. If z = ρe^{iφ}, then (8.24) and (8.25) have that

  ∫_{z_1}^{z_2} dz/z = ln(ρ_2/ρ_1) + i(φ_2 - φ_1).   (9.3)

This helps, for example, when z_1 and z_2 are real but negative numbers.

9.2  Integration by substitution

Consider the integral

  S = ∫_{x_1}^{x_2} x dx / (1 + x^2).

This integral is not in a form one immediately recognizes. However, with the change of variable

  u ← 1 + x^2,

whose differential is (by successive steps)

  d(u) = d(1 + x^2),
  du = 2x dx,

the integral becomes one the antiderivative technique handles readily:

  S = ∫_{u_1}^{u_2} (du/2)/u = (1/2) ln u|_{u_1}^{u_2} = (1/2) ln[(1 + x_2^2)/(1 + x_1^2)].
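A quick numeric check of the substitution result never hurts. The sketch below (its function names are its own, not the book's, and the step count is an arbitrary choice) compares the closed form against a crude trapezoidal sum of the original integrand:

```python
import math

def S_exact(x1, x2):
    # S = (1/2) ln[(1 + x2^2)/(1 + x1^2)], per the substitution u <- 1 + x^2
    return 0.5 * math.log((1.0 + x2*x2) / (1.0 + x1*x1))

def S_numeric(x1, x2, n=100000):
    # crude trapezoidal sum of the original integrand x/(1 + x^2)
    h = (x2 - x1) / n
    total = 0.5 * (x1/(1.0 + x1*x1) + x2/(1.0 + x2*x2))
    for k in range(1, n):
        x = x1 + k*h
        total += x / (1.0 + x*x)
    return total * h

assert abs(S_exact(0.0, 3.0) - S_numeric(0.0, 3.0)) < 1e-6
```

The two values agree to roughly the accuracy of the trapezoidal sum, which is all the check asks.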
Equation (9.4) is the rule of integration by parts. Integration by parts is a powerful technique, but one should understand clearly what it does and does not do. It does not just integrate each part of an integral separately. It isn't that simple. What it does is to integrate one part of an integral separately (whichever part one has chosen to identify as dv) while contrarily differentiating the other part u, upon which it rewards the mathematician only with a whole new integral ∫ v du. The new integral may or may not be easier to integrate than was the original ∫ u dv. The virtue of the technique lies in that one often can find a part dv which does yield an easier ∫ v du. The technique is powerful for this reason.

For an example of the rule's operation, consider the integral

  S(x) = ∫_0^x τ cos ατ dτ.

Unsure how to integrate this, we can begin by integrating part of it: the cos ατ dτ part. Letting

  u ← τ,   dv ← cos ατ dτ,

we find that^2

  du = dτ,   v = (sin ατ)/α.

According to (9.4), then,

  S(x) = [τ (sin ατ)/α]_0^x - ∫_0^x (sin ατ)/α dτ = (x/α) sin αx + (1/α^2)(cos αx - 1).

For another kind of example of the rule's operation, consider the definite integral^3

  Γ(z) ≡ ∫_0^∞ e^{-τ} τ^{z-1} dτ,   ℜ(z) > 0.   (9.5)

^2 The careful reader will observe that v = (sin ατ)/α + C matches the chosen dv for any value of C, not just for C = 0. This is true. However, nothing in the integration by parts technique requires us to consider all possible v. Any convenient v suffices; in this case, we choose v = (sin ατ)/α.
^3 [32]
9.4  Integration by unknown coefficients

Consider the integral (which arises in probability theory)

  S(x) = ∫_0^x e^{-(ρ/σ)^2/2} ρ dρ.   (9.8)

If one does not know how to solve the integral in a more elegant way, one can guess a likely-seeming antiderivative form, such as

  e^{-(ρ/σ)^2/2} ρ = (d/dρ)[a e^{-(ρ/σ)^2/2}],

where the a is an unknown coefficient. Having guessed, one has no guarantee that the guess is right, but see: if the guess were right, then the antiderivative would have the form

  e^{-(ρ/σ)^2/2} ρ = (d/dρ)[a e^{-(ρ/σ)^2/2}] = -(aρ/σ^2) e^{-(ρ/σ)^2/2},

implying that a = -σ^2 (evidently the guess is right, after all). Using this value for a, one can write the specific antiderivative

  e^{-(ρ/σ)^2/2} ρ = (d/dρ)[-σ^2 e^{-(ρ/σ)^2/2}],

with which one can solve the integral, concluding that

  S(x) = [-σ^2 e^{-(ρ/σ)^2/2}]_0^x = σ^2 [1 - e^{-(x/σ)^2/2}].   (9.9)

The same technique solves differential equations, too. Consider for example the differential equation

  dx = (Ix - P) dt,   x|_{t=0} = x_o,   x|_{t=T} = 0,   (9.10)

which conceptually represents^4 the changing balance x of a bank loan account over time t, where I is the loan's interest rate and P is the borrower's payment rate. If it is desired to find the correct payment rate P which pays

^4 Real banks (in the author's country, at least) by law or custom actually use a needlessly more complicated formula, and not only more complicated, but mathematically slightly incorrect, too.
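The closed form (9.9) submits to the same kind of numeric spot check as before (a minimal sketch; the sample values are arbitrary):

```python
import math

def S_closed(x, sigma):
    # (9.9): S(x) = sigma^2 * (1 - exp(-(x/sigma)^2 / 2))
    return sigma**2 * (1.0 - math.exp(-(x/sigma)**2 / 2.0))

def S_numeric(x, sigma, n=200000):
    # trapezoidal sum of the original integrand exp(-(rho/sigma)^2/2) * rho
    h = x / n
    f = lambda r: math.exp(-(r/sigma)**2 / 2.0) * r
    total = 0.5 * (f(0.0) + f(x))
    for k in range(1, n):
        total += f(k*h)
    return total * h

assert abs(S_closed(2.0, 1.5) - S_numeric(2.0, 1.5)) < 1e-6
```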
the loan off in the time T, then (perhaps after some bad guesses) we guess the form

  x(t) = Ae^{αt} + B,

where α, A and B are unknown coefficients. The guess' derivative is

  dx = αAe^{αt} dt.

Substituting the last two equations into (9.10) and dividing by dt yields

  αAe^{αt} = IAe^{αt} + IB - P,

which at least is satisfied if both of the equations

  αAe^{αt} = IAe^{αt},
  0 = IB - P,

are satisfied. Evidently good choices for α and B, then, are

  α = I,   B = P/I.

Substituting these coefficients into the x(t) equation above yields the general solution

  x(t) = Ae^{It} + P/I   (9.11)

to (9.10). The constants A and P, we establish by applying the given boundary conditions x|_{t=0} = x_o and x|_{t=T} = 0. For the former condition,

  x_o = Ae^{(I)(0)} + P/I = A + P/I;

and for the latter condition,

  0 = Ae^{IT} + P/I.

Solving the last two equations simultaneously, we have that

  A = -e^{-IT} x_o / (1 - e^{-IT}),
  P = I x_o / (1 - e^{-IT}).   (9.12)
Applying these to the general solution (9.11) yields the specific solution

  x(t) = x_o [1 - e^{(I)(t-T)}] / (1 - e^{-IT})   (9.13)

to (9.10) meeting the boundary conditions, with the payment rate P required of the borrower given by (9.12). Such are the kinds of problems the method can solve.

The method of unknown coefficients is an elephant. Slightly inelegant the method may be, but it is pretty powerful, too, and it has surprise value (for some reason people seem not to expect it). The virtue of the method lies in that it permits one to try an entire family of candidate solutions at once, with the family members distinguished by the values of the coefficients. If a solution exists anywhere in the family, the method usually finds it.

9.5  Integration by closed contour

We pass now from the elephant to the falcon, from the inelegant to the sublime. Consider the definite integral^5

  S = ∫_0^∞ τ^a dτ / (τ + 1),   -1 < a < 0.

This is a hard integral. No obvious substitution, no evident factoring into parts, seems to solve the integral; but there is a way. The integrand has a pole at τ = -1. The trouble, of course, is that the integral S does not go about a closed complex contour. One can however construct a closed complex contour I of which S is a part, as in Fig 9.1. If one writes the same integral using the complex variable z in place of the real variable τ, then Cauchy's integral formula (8.29) has that integrating once counterclockwise about a closed complex contour, with the contour enclosing the pole at z = -1 but shutting out the branch point at z = 0, yields

  I = ∮ z^a dz / (z + 1) = i2π z^a|_{z=-1} = i2π (e^{i2π/2})^a = i2π e^{i2πa/2}.

If the outer circle in the figure is of infinite

^5 [32, § 1.2]
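Carried to completion (the infinite outer arc and the vanishing inner arc contribute nothing in the limit, while the leg returning under the branch cut differs from S by the factor e^{i2πa}), the contour calculation yields the known closed form S = -π/sin πa for -1 < a < 0. A numeric sketch follows; the series used below is an independent check, obtained by splitting the integral at τ = 1 and expanding 1/(1 + τ) geometrically on each piece:

```python
import math

def S_series(a, n=200001):
    # S(a) = sum_{k>=0} (-1)^k [ 1/(a+k+1) + 1/(k-a) ],
    # from integrating the geometric expansions on (0,1) and (1,inf)
    s, prev = 0.0, 0.0
    for k in range(n):
        prev = s
        term = 1.0/(a + k + 1.0) + 1.0/(k - a)
        s += term if k % 2 == 0 else -term
    return 0.5*(s + prev)   # averaging damps the alternating tail

def S_contour(a):
    # closed form from the contour calculation: S = -pi / sin(pi a)
    return -math.pi / math.sin(math.pi * a)

for a in (-0.5, -0.3, -0.7):
    assert abs(S_series(a) - S_contour(a)) < 1e-6
```

For a = -1/2 the closed form gives exactly π, a classic special case.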
As in the previous example, here again the contour is not closed. The previous example closed the contour by extending it, excluding the branch point. In this example there is no branch point to exclude, nor need one extend the contour. Rather, one changes the variable

  z ← e^{iθ}

and takes advantage of the fact that z, unlike θ, begins and ends the integration at the same point. One thus obtains the equivalent integral

  T = ∮ (dz/iz) / [1 + (a/2)(z + 1/z)] = -(i2/a) ∮ dz / (z^2 + 2z/a + 1)
    = -(i2/a) ∮ dz / { [z - (-1 + sqrt(1 - a^2))/a] [z - (-1 - sqrt(1 - a^2))/a] },

whose contour is the unit circle in the Argand plane. The integrand evidently has poles at

  z = [-1 ± sqrt(1 - a^2)] / a,

whose magnitudes are such that

  |z|^2 = [2 - a^2 ∓ 2 sqrt(1 - a^2)] / a^2.

One of the two magnitudes is less than unity and one is greater, meaning that one of the two poles lies within the contour and one lies without.
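The θ-integral behind this change of variable is evidently T = ∫_0^{2π} dθ/(1 + a cos θ) with 0 < |a| < 1; picking up 2πi times the residue at the enclosed pole gives the known value T = 2π/sqrt(1 - a^2). A numeric check (a plain Riemann sum converges very rapidly for a smooth periodic integrand over a full period):

```python
import math

def T_numeric(a, n=100000):
    # Riemann sum of the theta-integral around one full period
    h = 2.0*math.pi / n
    total = 0.0
    for k in range(n):
        total += 1.0 / (1.0 + a*math.cos(k*h))
    return total * h

def T_residue(a):
    # residue at the pole inside the unit circle gives 2*pi/sqrt(1 - a^2)
    return 2.0*math.pi / math.sqrt(1.0 - a*a)

for a in (0.3, -0.5, 0.9):
    assert abs(T_numeric(a) - T_residue(a)) < 1e-6
```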
9.6.4  The derivatives of a rational function

Not only the integral of a rational function interests us; its derivatives interest us, too. One needs no special technique to compute such derivatives, of course (notice incidentally how much easier it is symbolically to differentiate than to integrate!), but the derivatives do bring some noteworthy properties. First of interest is the property that a function in the general rational form

  Φ(w) = w^p h_0(w) / g(w),   g(0) ≠ 0,   (9.20)

enjoys derivatives in the general rational form

  d^k Φ/dw^k = w^{p-k} h_k(w) / [g(w)]^{k+1},   0 ≤ k ≤ p,   (9.21)

where g and h_k are polynomials in nonnegative powers of w. The property is proved by induction. When k = 0, (9.21) is (9.20), so (9.21) is good at least for this case. Then, if (9.21) holds for k = i - 1,

  d^i Φ/dw^i = (d/dw)[d^{i-1} Φ/dw^{i-1}] = (d/dw){ w^{p-i+1} h_{i-1}(w) / [g(w)]^i } = w^{p-i} h_i(w) / [g(w)]^{i+1},

  h_i(w) ≡ wg (dh_{i-1}/dw) - iwh_{i-1} (dg/dw) + (p - i + 1)gh_{i-1},   0 < i ≤ p,

which makes h_i (like h_{i-1}) a polynomial in nonnegative powers of w. By induction on this basis, (9.21) holds for all 0 ≤ k ≤ p, as was to be demonstrated.

A related property is that

  d^k Φ/dw^k |_{w=0} = 0   for 0 ≤ k < p.   (9.22)

That is, the function and its first p - 1 derivatives are all zero at w = 0. The reason is that (9.21)'s denominator is [g(w)]^{k+1} ≠ 0, whereas its numerator has a w^{p-k} = 0 factor, when 0 ≤ k < p and w = 0.

9.6.5  Repeated poles (the conventional technique)

Though the technique of §§ 9.6.2 and 9.6.3 affords extra insight, it is not the conventional technique to expand in partial fractions a rational function
In case of a repeated pole, the (z - α_m)^{p_m} f(z) equation's kth derivative reduces at that point to

  d^k[(z - α_m)^{p_m} f(z)]/dz^k |_{z=α_m} = Σ_{ℓ=0}^{p_m-1} d^k[(A_{mℓ})(z - α_m)^ℓ]/dz^k |_{z=α_m} = k! A_{mk},   0 ≤ k < p_m.

Changing j ← m and ℓ ← k and solving for A_{jℓ} then produces the coefficients

  A_{jℓ} = (1/ℓ!) d^ℓ[(z - α_j)^{p_j} f(z)]/dz^ℓ |_{z=α_j},   0 ≤ ℓ < p_j,   (9.25)

to weight the expansion (9.24)'s partial fractions. In case of a repeated pole, these coefficients evidently depend not only on the residual function itself but also on its several derivatives, one derivative per repetition of the pole.

9.6.6  The existence and uniqueness of solutions

Equation (9.25) has solved (9.23) and (9.24). A professional mathematician might object however that it has done so without first proving that a unique solution actually exists. Comes from us the reply, "Why should we prove that a solution exists, once we have actually found it?" Ah, but the professional's point is that we have found the solution only if in fact it does exist; otherwise what we have found is a phantom. Maybe there exists no expansion at all. Maybe there exist two distinct expansions, and some of the coefficients come from the one, some from the other, in which event it is not even clear what (9.25) means. "But these are quibbles, cavils and nitpicks!" we are inclined to grumble. "The present book is a book of applied mathematics." Well, yes, but on this occasion let us nonetheless follow the professional's line of reasoning, if only a short way. A careful review of § 9.6.5's logic discovers no guarantee that all of (9.25)'s coefficients actually come from the same expansion.
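Though the proof must wait a moment, (9.25) itself is easy to exercise by hand on a small example and to check numerically. The f below is an arbitrary test case of this sketch's own choosing, its coefficients worked out from the formula in the comments:

```python
# Coefficients per (9.25) for f(z) = 1/((z-1)^2 (z-2)), computed by hand:
#   (z-1)^2 f = 1/(z-2)   -> A_10 = -1 (value at z=1),
#                            A_11 = -1 (derivative -1/(z-2)^2 at z=1);
#   (z-2)   f = 1/(z-1)^2 -> A_20 = +1 (value at z=2).
def f(z):
    return 1.0 / ((z - 1.0)**2 * (z - 2.0))

def expansion(z):
    # the resulting partial-fraction expansion
    return -1.0/(z - 1.0)**2 - 1.0/(z - 1.0) + 1.0/(z - 2.0)

for z in (0.0, 0.5, 3.0, -2.0, 10.0):
    assert abs(f(z) - expansion(z)) < 1e-12
```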
Uniqueness is proved by positing two solutions

  f(z) = Σ_{j=1}^{M} Σ_{ℓ=0}^{p_j-1} A_{jℓ}/(z - α_j)^{p_j-ℓ} = Σ_{j=1}^{M} Σ_{ℓ=0}^{p_j-1} B_{jℓ}/(z - α_j)^{p_j-ℓ}

and computing the difference

  Σ_{j=1}^{M} Σ_{ℓ=0}^{p_j-1} (B_{jℓ} - A_{jℓ})/(z - α_j)^{p_j-ℓ}

between them. Logically this difference must be zero for all z if the two solutions are actually to represent the same function f(z). This however is seen to be possible only if B_{jℓ} = A_{jℓ} for each (j, ℓ). Therefore, the two solutions are one and the same.

Existence comes of combining the several fractions of (9.24) over a common denominator and comparing the resulting numerator against the numerator of (9.23). Each coefficient b_k is seen thereby to be a linear combination of the several A_{jℓ}, where the combination's weights depend solely on the locations α_j and multiplicities p_j of f(z)'s several poles. From the N coefficients b_k and the N coefficients A_{jℓ}, an N × N system of N linear equations in N unknowns results, which might for example (if, say, N = 3) look like

  b_0 = -2A_00 + A_01 + 3A_10,
  b_1 = A_00 + A_01 + A_10,
  b_2 = 2A_01 - 5A_10.

We shall show in Chs. 11 through 14 that when such a system has no solution, there always exists an alternate set of b_k for which the same system has multiple solutions. But uniqueness, which we have already established, forbids such multiple solutions in all cases. Therefore it is not possible for the system to have no solution, which is to say: the solution necessarily exists.

We shall not often in this book prove existence and uniqueness explicitly, but such proofs when desired tend to fit the pattern outlined here.

9.7  Frullani's integral

One occasionally meets an integral of the form

  S = ∫_0^∞ [f(bτ) - f(aτ)]/τ dτ,

where a and b are real, positive coefficients and f(τ) is an arbitrary complex expression in τ. Such an integral, one wants to split in two as ∫ [f(bτ)/τ] dτ - ∫ [f(aτ)/τ] dτ; but if f(0^+) ≠ 0 or f(+∞) ≠ 0, one cannot, because each half-integral alone diverges.
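Handled properly, the integral evaluates to Frullani's result, S = [f(+∞) - f(0^+)] ln(b/a). A numeric check with f(τ) = e^{-τ} (the choices of f, a and b below are arbitrary):

```python
import math

def frullani_numeric(a, b, U=50.0, n=200000):
    # trapezoidal sum of (e^{-b t} - e^{-a t})/t on [0, U];
    # at t = 0 the integrand's limit is a - b
    h = U / n
    def g(t):
        return (a - b) if t == 0.0 else (math.exp(-b*t) - math.exp(-a*t))/t
    total = 0.5*(g(0.0) + g(U))
    for k in range(1, n):
        total += g(k*h)
    return total * h

a, b = 1.0, 3.0
# f(tau) = e^{-tau}: f(0+) = 1, f(+inf) = 0, so S = (0 - 1) ln(b/a) = ln(a/b)
assert abs(frullani_numeric(a, b) - math.log(a/b)) < 1e-6
```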
Expanding the integrand in a Taylor series and integrating term by term is a practical way to integrate some functions, however, at the price of losing the functions' known closed analytic forms. For example,

  ∫_0^x exp(-τ^2/2) dτ = ∫_0^x Σ_{k=0}^∞ (-τ^2/2)^k/k! dτ
    = ∫_0^x Σ_{k=0}^∞ (-)^k τ^{2k}/(2^k k!) dτ
    = [ Σ_{k=0}^∞ (-)^k τ^{2k+1}/((2k+1) 2^k k!) ]_0^x
    = Σ_{k=0}^∞ (-)^k x^{2k+1}/((2k+1) 2^k k!)
    = (x) Σ_{k=0}^∞ [1/(2k+1)] Π_{j=1}^k (-x^2/2j).

The result is no function one recognizes; it is just a series. This is not necessarily bad, however. After all, when a Taylor series from Table 8.1 is used to calculate sin z, then sin z is just a series, too. The series above converges just as accurately and just as fast.

Sometimes it helps to give the series a name like

  myf z ≡ Σ_{k=0}^∞ (-)^k z^{2k+1}/((2k+1) 2^k k!) = (z) Σ_{k=0}^∞ [1/(2k+1)] Π_{j=1}^k (-z^2/2j).

Then,

  ∫_0^x exp(-τ^2/2) dτ = myf x.

The myf z is no less a function than sin z is; it's just a function you hadn't heard of before. You can plot the function, or take its derivative

  (d/dτ) myf τ = exp(-τ^2/2),

or calculate its value, or do with it whatever else one does with functions. It works just the same.
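The named series is as computable as any library function. A minimal sketch (its names and term count are its own choices) evaluates myf and compares it against a direct numeric integration:

```python
import math

def myf(x, terms=40):
    # myf x = sum_k (-)^k x^(2k+1) / ((2k+1) 2^k k!)
    s = 0.0
    for k in range(terms):
        s += (-1.0)**k * x**(2*k + 1) / ((2*k + 1) * 2.0**k * math.factorial(k))
    return s

def integral_numeric(x, n=100000):
    # trapezoidal sum of exp(-tau^2/2) on [0, x]
    h = x / n
    f = lambda t: math.exp(-t*t/2.0)
    total = 0.5*(f(0.0) + f(x))
    for k in range(1, n):
        total += f(k*h)
    return total * h

for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(myf(x) - integral_numeric(x)) < 1e-9
```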
Chapter 10

Cubics and quartics

Under the heat of noonday, between the hard work of the morning and the heavy lifting of the afternoon, one likes to lay down one's burden and rest a spell in the shade. Chapters 2 through 9 have established the applied mathematical foundations upon which coming chapters will build, and Ch. 11, hefting the weighty topic of the matrix, will indeed begin to build on those foundations. But in this short chapter which rests between, we shall refresh ourselves with an interesting but lighter mathematical topic: the topic of cubics and quartics.

The expression z + a_0 is a linear polynomial, the lone root z = -a_0 of which is plain to see. The quadratic polynomial z^2 + a_1 z + a_0 has of course two roots, which though not plain to see the quadratic formula (2.2) extracts with little effort. So much algebra has been known since antiquity. The roots of higher-order polynomials, the Newton-Raphson iteration (4.30) locates swiftly, but that is an approximate iteration rather than an exact formula like (2.2), and as we have seen in § 4.8 it can occasionally fail to converge. One would prefer an actual formula to extract the roots.

No general formula to extract the roots of the nth-order polynomial seems to be known.^1 However, formulas do exist to extract the roots of the cubic and quartic polynomials

  z^3 + a_2 z^2 + a_1 z + a_0,
  z^4 + a_3 z^3 + a_2 z^2 + a_1 z + a_0.

Though the ancients never discovered how, the 16th-century algebraists Ferrari, Vieta, Tartaglia and Cardano have given us the clever technique.^2 This chapter explains.

10.1  Vieta's transform

There is a sense to numbers by which 1/2 resembles 2, 1/3 resembles 3, 1/4 resembles 4, and so forth. To capture this sense, one can transform a function f(z) into a function f(w) by the change of variable^3

  w + 1/w ← z,   (10.1)

or, more generally,

  w + w_o^2/w ← z.   (10.2)

Equation (10.2) is Vieta's transform.^4 The constant w_o is the corner value, in the neighborhood of which w transitions from the one domain to the other. For |w| ≫ |w_o|, we have that z ≈ w. For |w| ≪ |w_o|, z ≈ w_o^2/w. Figure 10.1 plots Vieta's transform for real w in the case w_o = 1.

An interesting alternative to Vieta's transform is

  w - w_o^2/w ← z,   (10.3)

which in light of § 6.3 might be named Vieta's parallel transform. Section 10.2 shows how Vieta's transform can be used.

10.2  Cubics

The general cubic polynomial is too hard to extract the roots of directly, so one begins by changing the variable

  x + h ← z,   (10.4)

with h ≡ -a_2/3, to strip the polynomial of its square term.

^1 Refer to Ch. 6's footnote 8.
^2 [46, § 1.5][51, "Cubic equation"][51, "Quartic equation"][52, "François Viète," 05:17, 31 Oct. 2006][52, "Gerolamo Cardano," 22:35, 1 Nov. 2006][52, "Quartic equation," 00:26, 9 Nov. 2006]
^3 This change of variable broadly recalls the sum-of-exponentials form (5.19) of the cosh(·) function, inasmuch as exp[-φ] = 1/exp φ.
^4 Also called "Vieta's substitution." [52, "Vieta's substitution"]
This works, but at the price of reintroducing an unwanted x^2 or z^2 term. That way is no good. Lacking guidance, one might try many, various substitutions, none of which seems to help much; but after weeks or months of such frustration one might eventually discover Vieta's transform (10.2), with the idea of balancing the equation between offsetting w and 1/w terms. Vieta-transforming (10.6) by the change of variable

  w + w_o^2/w ← x,   (10.7)

we get the new equation

  w^3 + (3w_o^2 - p)w + (3w_o^2 - p) w_o^2/w + w_o^6/w^3 = q,   (10.8)

which invites the choice

  w_o^2 ≡ p/3,   (10.9)

reducing (10.8) to read

  w^3 + (p/3)^3/w^3 = q.

Multiplying by w^3 and rearranging terms, we have the quadratic equation

  (w^3)^2 = 2(q/2)w^3 - (p/3)^3,   (10.10)

which (2.2) we know how to solve for w^3, whereupon the roots would follow immediately. Vieta's transform has reduced the original cubic to a quadratic.

The careful reader will observe that (10.10) seems to imply six roots, double the three the fundamental theorem of algebra (§ 6.2) allows a cubic polynomial to have. We shall return to this point in § 10.3. For the moment, however, we should like to improve the notation by defining^5

  P ← -p/3,
  Q ← +q/2,   (10.11)

with which (10.6) and (10.10) are written,

  x^3 = 2Q - 3Px,   (10.12)
  (w^3)^2 = 2Qw^3 + P^3.   (10.13)

^5 Why did we not define P and Q so to begin with? Well, before unveiling (10.10), we lacked motivation to do so. To define inscrutable coefficients unnecessarily before the need for them is apparent seems poor applied mathematical style.
Table 10.1 summarizes the complete cubic polynomial root extraction method in the revised notation, including a few fine points regarding superfluous roots and edge cases, treated in §§ 10.3 and 10.4 below.

Table 10.1: The method to extract the three roots of the general cubic polynomial. (In the definition of w^3, one can choose either sign.)

  0 = z^3 + a_2 z^2 + a_1 z + a_0
  P ≡ a_1/3 - (a_2/3)^2
  Q ≡ (1/2)[-a_0 + 3(a_2/3)(a_1/3) - 2(a_2/3)^3]
  w^3 ≡ 2Q if P = 0; Q ± sqrt(Q^2 + P^3) otherwise
  x ≡ 0 if P = 0 and Q = 0; w - P/w otherwise
  z = x - a_2/3

10.3  Superfluous roots

As § 10.2 has observed, the equations of Table 10.1 seem to imply six roots, double the three the fundamental theorem of algebra (§ 6.2) allows a cubic polynomial to have. However, what the equations really imply is not six distinct roots but six distinct w. The definition x ≡ w - P/w maps two w to any one x, so in fact the equations imply only three x and thus three roots z. The question then is: of the six w, which three do we really need and which three can we ignore as superfluous?
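Whatever becomes of that question, Table 10.1 translates almost line for line into a short routine. The sketch below is a minimal rendering (function and variable names are the sketch's own, with only the obvious edge handling included):

```python
import cmath

def cubic_roots(a2, a1, a0):
    # Table 10.1: the three roots of z^3 + a2 z^2 + a1 z + a0 = 0
    P = a1/3.0 - (a2/3.0)**2
    Q = 0.5*(-a0 + 3.0*(a2/3.0)*(a1/3.0) - 2.0*(a2/3.0)**3)
    if P == 0.0 and Q == 0.0:
        return [complex(-a2/3.0)]*3              # corner case: triple root
    w3 = complex(2.0*Q) if P == 0.0 else Q + cmath.sqrt(Q*Q + P**3)
    w1 = cmath.exp(cmath.log(w3)/3.0)            # one cube root, w1**3 == w3
    spin = cmath.exp(2j*cmath.pi/3.0)            # e^{i 2 pi/3} relates the three w
    return [w - P/w - a2/3.0 for w in (w1, w1*spin, w1*spin**2)]

# spot checks: (z-1)(z-1)(z-2) = z^3 - 4z^2 + 5z - 2 (a double-root edge case),
# and z^3 - 1, whose roots are the three cube roots of unity
for r in cubic_roots(-4.0, 5.0, -2.0):
    assert abs(r**3 - 4*r**2 + 5*r - 2) < 1e-6
for r in cubic_roots(0.0, 0.0, -1.0):
    assert abs(r**3 - 1) < 1e-9
```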
The six w naturally come in two groups of three: one group of three from the one w^3 and a second from the other. For this reason, we shall guess (and logically it is only a guess) that a single w^3 generates three distinct x and thus, because z differs from x only by a constant offset, all three roots z. If the guess is right, then the second w^3 cannot but yield the same three roots, which means that the second w^3 is superfluous and can safely be overlooked. But is the guess right? Does a single w^3 in fact generate three distinct x?

To prove that it does, let us suppose that it did not: let us suppose that a single w^3 did generate two w which led to the same x. Letting the symbol w_1 represent the third w, then (since all three w come from the same w^3) the two w are e^{+i2π/3}w_1 and e^{-i2π/3}w_1. Because x ≡ w - P/w,

  e^{+i2π/3}w_1 - P/(e^{+i2π/3}w_1) = e^{-i2π/3}w_1 - P/(e^{-i2π/3}w_1),

which can only be true if

  w_1^2 = -P.

Cubing^6 the last equation,

  w_1^6 = -P^3;

but squaring the table's w^3 definition for w = w_1,

  w_1^6 = 2Q^2 + P^3 ± 2Q sqrt(Q^2 + P^3).

Combining the last two on w_1^6,

  -P^3 = 2Q^2 + P^3 ± 2Q sqrt(Q^2 + P^3),

or, rearranging terms and halving,

  Q^2 + P^3 = ∓Q sqrt(Q^2 + P^3).

Squaring,

  Q^4 + 2Q^2 P^3 + P^6 = Q^4 + Q^2 P^3,

then canceling offsetting terms and factoring,

  (P^3)(Q^2 + P^3) = 0.

The last equation demands rigidly that either P = 0 or P^3 = -Q^2. Some cubic polynomials do meet the demand (§ 10.4 will treat these, and the reader is asked to set them aside for the moment) but most cubic polynomials do not meet it. For most cubic polynomials, then, the contradiction proves false the assumption which gave rise to it. The assumption: that the three x descending from a single w^3 were not distinct. The conclusion: that, provided P ≠ 0 and P^3 ≠ -Q^2, the three x descending from a single w^3 are indeed distinct, as was to be demonstrated. The one w^3 suffices; of the two signs in the table's quadratic solution w^3 ≡ Q ± sqrt(Q^2 + P^3), one can choose either sign. It matters not which.^7

In calculating the three w from w^3, one can apply the Newton-Raphson iteration (4.30), the Taylor series of Table 8.1, or any other convenient root-finding technique to find a single root w_1 such that w_1^3 = w^3. The other two roots then come easier. They are

  w = w_1, [(-1 ± i sqrt 3)/2] w_1,   (10.14)

since e^{±i2π/3} = (-1 ± i sqrt 3)/2.

We should observe, incidentally, that nothing prevents two actual roots of a cubic polynomial from having the same value. This certainly is possible, and it does not mean that one of the two roots is superfluous or that the polynomial has fewer than three roots. For example, the cubic polynomial (z - 1)(z - 1)(z - 2) = z^3 - 4z^2 + 5z - 2 has roots at 1, 1 and 2, with a single root at z = 2 and a double root (that is, two roots) at z = 1. When this happens, the method of Table 10.1 properly yields the single root once and the double root twice, just as it ought to do.

10.4  Edge cases

Section 10.3 excepts the edge cases P = 0 and P^3 = -Q^2. Mostly the book does not worry much about edge cases, but the effects of these cubic edge cases seem sufficiently nonobvious that the book might include here a few words about them, if for no other reason than to offer the reader a model of how to think about edge cases on his own.

^6 The verb to cube in this context means "to raise to the third power," as to change y to y^3, just as the verb to square means "to raise to the second power."
^7 Numerically, it can matter. As a simple rule, when the two w^3 differ in magnitude one might choose the larger, because w appears in the denominator of x's definition.
The edge case P = 0, like the general non-edge case, gives two distinct quadratic solutions w^3. One of the two however is w^3 = Q - Q = 0, which is awkward in light of Table 10.1's definition that x ≡ w - P/w; so, in applying the table's method when P = 0, one chooses the other quadratic solution, w^3 = Q + Q = 2Q. Changing P infinitesimally from P = 0 to P = ε, in the spirit of l'Hôpital's rule, shows why the roots come out right either way. Choosing the - sign in the definition of w^3,

  w^3 = Q - sqrt(Q^2 + ε^3) = Q - (Q)(1 + ε^3/2Q^2) = -ε^3/2Q,
  w = -ε/(2Q)^{1/3},
  x = w - P/w = -ε/(2Q)^{1/3} + (2Q)^{1/3} = (2Q)^{1/3}.

But choosing the + sign,

  w^3 = Q + sqrt(Q^2 + ε^3) = 2Q,
  w = (2Q)^{1/3},
  x = w - P/w = (2Q)^{1/3} - ε/(2Q)^{1/3} = (2Q)^{1/3}.

Evidently the roots come out the same, either way.

The edge case P^3 = -Q^2 gives only the one quadratic solution w^3 = Q; or, more precisely, it gives two quadratic solutions which happen to have the same value. This is fine. One merely accepts that w^3 = Q, and does not worry about choosing one w^3 over the other.

The double edge case, or corner case, arises where the two edges meet: where P = 0 and P^3 = -Q^2, or equivalently where P = 0 and Q = 0. At the corner, the trouble is that w^3 = 0 and that no alternate w^3 is available. However, according to (10.12), x^3 = 2Q - 3Px, which in this case means that x^3 = 0 and thus that x = 0 absolutely, no other x being possible. This implies the triple root z = -a_2/3.

Section 10.3 had excluded the edge cases from its proof of the sufficiency of a single w^3; the observations above add them in. This completes the proof.

10.5  Quartics

Having successfully extracted the roots of the general cubic polynomial, we now turn our attention to the general quartic. The kernel of the cubic technique lay in reducing the cubic to a quadratic. The kernel of the quartic technique lies likewise in reducing the quartic to a cubic; and, strangely enough, in some ways the quartic reduction is actually the simpler.^8 The reason the quartic is simpler to reduce is probably related to the fact that (1)^{1/4} = ±1, ±i, whereas (1)^{1/3} = 1, (-1 ± i sqrt 3)/2. The (1)^{1/4} brings a much neater result, the roots lying nicely along the Argand axes. This may also be why the quintic is intractable; but here we trespass the professional mathematician's territory and stray from the scope of this book. See Ch. 6's footnote 8.

As with the cubic, one begins solving the quartic by changing the variable

  x + h ← z   (10.15)

to obtain the equation

  x^4 = sx^2 + px + q,   (10.16)

where

  h ≡ -a_3/4,
  s ≡ -a_2 + 6(a_3/4)^2,
  p ≡ -a_1 + 2a_2(a_3/4) - 8(a_3/4)^3,
  q ≡ -a_0 + a_1(a_3/4) - a_2(a_3/4)^2 + 3(a_3/4)^4.   (10.17)

^8 Even stranger, historically Ferrari discovered the quartic's resolvent cubic (10.22) earlier, which he could not solve until Tartaglia applied Vieta's transform to it [51, "Quartic equation"]. What motivated Ferrari to chase the quartic solution while the cubic solution remained still unknown, this writer does not know, but one supposes that it might make an interesting story.
To reduce (10.16) further, one must be cleverer. Ferrari^9 supplies the cleverness. The clever idea is to transfer some but not all of the sx^2 term to the equation's left side by

  x^4 + 2ux^2 = (2u + s)x^2 + px + q,   (10.18)

where u remains to be chosen, then to complete the square on the equation's left side as in § 2.2, as

  (x^2 + u)^2 = k^2 x^2 + px + j^2,   (10.19)

where

  k^2 ≡ 2u + s,
  j^2 ≡ u^2 + q.   (10.20)

In these equations, s, p and q have definite values fixed by (10.17), but not so u, j or k. The variable u is completely free; we have introduced it ourselves and can assign it any value we like. And though j^2 and k^2 depend on u, still, even after specifying u we remain free at least to choose signs for j and k. So, what choice for u would be wise? Well, though no choice would truly be wrong, one supposes that a wise choice might at least render (10.18) easier to simplify. Look at (10.19). The left side of that equation is a perfect square. The right side would be, too, if it were that p = ±2jk. With this idea in mind, we propose the constraint that

  p = 2jk,   (10.21)

arbitrarily choosing the + sign; or, better expressed,

  j = p/2k.

Squaring (10.21) and substituting for j^2 and k^2 from (10.20), we have that

  p^2 = 4(2u + s)(u^2 + q);

or, after distributing factors, rearranging terms and scaling, that

  0 = u^3 + (s/2)u^2 + qu + (4sq - p^2)/8.   (10.22)

^9 [51, "Quartic equation"]
10. which we know by Table 10.20) into (10.23) implies the quadratic x2 = ±(kx + j) − u.25)
wherein the two ± signs are tied together but the third. but the resolvent cubic is a voluntary constraint.22) is the resolvent cubic.25).
2 2
= kx + j . If the constraints (10. which (2. reveals the four roots of the general quartic polynomial. and (10.
(10.22) of course yields three u not one. (10. Using the improved notation.21) then gives j.2) solves as x=± k ±o 2 k 2
2
(10. (10. then we can safely substitute (10. Table 10.
.19) then gives k (again.23)
The resolvent cubic (10.26)
improves the notation. so we can just pick one u and ignore the other two. we can just pick one of the two signs).2 summarizes the complete quartic polynomial root extraction method. ±o sign is independent of the two. the change of variables K← k . 2 J ← j.21) and (10. GUESSING THE ROOTS
229
Equation (10.25). With u.24)
± j − u.6
Guessing the roots
It is entertaining to put pencil to paper and use Table 10.
(10. with the other equations and definitions of this section. j and k established.6. Equation (10.18) to reach the form x2 + u which is x2 + u
2
= k2 x2 + 2jkx + j 2 . and which we now specify as a second constraint.1's method to extract the roots of the cubic polynomial 0 = [z − 1][z − i][z + i] = z 3 − z 2 + z − 1.1 how to solve for u. In view of (10.10.
1 and 10. ± . one finds that 1. 2 2 Doubling to make the coefficients all integers produces the polynomial 2z 5 − 7z 4 + 8z 3 + 1z 2 − 0xAz + 6. but if any of the roots happens to be real and rational.
10
.10. it must belong to the set 1 2 3 6 ±1. ±3 and ±6) divided by the factors of the polynomial's leading coefficient (in the example.
No other real. whose factors are ±1 and ±2). One has found the number 1 but cannot recognize it. rational root is possible. 3 3 w √ 2 5 + 33 . ±2. ± 2 2 2 2 . but why? The root's symbolic form gives little clue. Figuring the square and cube roots in the expression numerically. GUESSING THE ROOTS One finds that z = w+ w3 ≡ 1 2 − 2 . rational candidates are the factors of the polynomial's trailing coefficient (in the example. which naturally has the same roots. a quartic. but just you try to simplify it! A more baroque. ±3.10 we are stuck with the cubic baroquity. ± . 6. Consider for example the quintic polynomial 1 7 z 5 − z 4 + 4z 3 + z 2 − 5z + 3. the author would like to hear of it.2 and guess the roots directly. a quintic or any other polynomial has real. However. ±2. to the extent that a cubic. Dividing these out leaves a quadratic which is easy to solve for the remaining roots. rational root is
At least. The reason no other real. more impenetrable way to write the number 1 is not easy to conceive. The real.0000. whose factors are ±1. no better way is known to this author. rational roots. If the roots are complex or irrational. the root of the polynomial comes mysteriously to 1. ± .6. 2. 33
231
which says indeed that z = 1. a trick is known to sidestep Tables 10. −1 and 3/2 are indeed roots. In general no better way is known. Trying the several candidates on the polynomial. ±i. ±6. If any reader can straightforwardly simplify the expression without solving a cubic polynomial of some kind. they are hard to guess.
12 [46.12 Such root-guessing is little more than an algebraic trick. By similar reasoning. rational root is possible except a factor of a0 divided by a factor of an . We conclude for this reason. Moving the q n term to the equation's right side. as was to be demonstrated. which implies that a0 q n is a multiple of p. We do not want to spend many pages on this. The next several chapters turn to the topic of the matrix. harder but much more profitable. that no real.
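The candidate search just described mechanizes readily. The sketch below (written for this edit; the function names are not the book's) enumerates factors of a₀ over factors of aₙ exactly, using rational arithmetic so that no root is lost to rounding. It assumes integer coefficients with a₀ ≠ 0.

```python
from fractions import Fraction

def divisors(n):
    """Positive divisors of the integer n."""
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """All real, rational roots of a polynomial with integer coefficients.

    coeffs lists a_n, ..., a_1, a_0.  Per the section's argument, every
    candidate is a factor of a_0 divided by a factor of a_n (assumes a_0 != 0)."""
    a_n, a_0 = coeffs[0], coeffs[-1]
    found = set()
    for num in divisors(a_0):
        for den in divisors(a_n):
            for cand in (Fraction(num, den), Fraction(-num, den)):
                val = Fraction(0)
                for c in coeffs:          # Horner evaluation, exact arithmetic
                    val = val * cand + c
                if val == 0:
                    found.add(cand)
    return found

# The section's quintic 2z^5 - 7z^4 + 8z^3 + z^2 - 10z + 6 (0xA = decimal 10):
print(rational_roots([2, -7, 8, 1, -10, 6]))   # {1, -1, 3/2}
```

Dividing the three roots out leaves the quadratic z² − 2z + 2, whose roots 1 ± i are complex, as the section says they may be.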
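The claim that the baroque expression "comes mysteriously to 1.0000" is easy to check by machine. A minimal numeric check, with floating point standing in for the exact radicals:

```python
import math

# The baroque root from the text: z = w + 1/3 - 2/(9w), w^3 = (2/27)(5 + 3*sqrt(3)).
w = ((2.0 / 27.0) * (5.0 + 3.0 * math.sqrt(3.0))) ** (1.0 / 3.0)
z = w + 1.0 / 3.0 - 2.0 / (9.0 * w)

print(round(z, 4))                            # prints 1.0
print(abs(z**3 - z**2 + z - 1.0) < 1e-12)     # it really is a root of z^3 - z^2 + z - 1
```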
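For readers who want to see § 10.5's whole chain run, here is a numerical sketch in Python. It is an illustration, not the book's Table 10.2: Cardano's formula stands in for Table 10.1, a single resolvent root u is taken on the principal branch, and the small-magnitude guards are ad hoc.

```python
import cmath

def quartic_roots(a3, a2, a1, a0):
    """Four roots of z^4 + a3 z^3 + a2 z^2 + a1 z + a0 = 0 per (10.15)-(10.26)."""
    # Depress the quartic: x + h <- z, giving x^4 = s x^2 + p x + q  (10.15, 10.16).
    h = -a3 / 4.0
    s = -a2 + 6 * (a3 / 4.0) ** 2
    p = -a1 + 2 * a2 * (a3 / 4.0) - 8 * (a3 / 4.0) ** 3
    q = -a0 + a1 * (a3 / 4.0) - a2 * (a3 / 4.0) ** 2 + 3 * (a3 / 4.0) ** 4
    # Resolvent cubic 0 = u^3 + (s/2)u^2 + qu + (4sq - p^2)/8  (10.22);
    # any one root u serves, found here by Cardano's method.
    b2, b1, b0 = s / 2.0, q, (4 * s * q - p * p) / 8.0
    P = b1 - b2 * b2 / 3.0
    Q = b0 - b1 * b2 / 3.0 + 2 * b2 ** 3 / 27.0
    w3 = -Q / 2.0 + cmath.sqrt((Q / 2.0) ** 2 + (P / 3.0) ** 3)
    if abs(w3) > 1e-300:
        w = w3 ** (1.0 / 3.0)
        u = w - P / (3.0 * w) - b2 / 3.0
    else:                              # degenerate branch: w = 0 forces P = 0
        u = complex(-Q) ** (1.0 / 3.0) - b2 / 3.0
    # Constraints (10.19) and (10.21): k^2 = 2u + s, j = p/(2k).
    k = cmath.sqrt(2 * u + s)
    j = p / (2 * k) if abs(k) > 1e-12 else cmath.sqrt(u * u + q)
    K = k / 2.0                        # improved notation (10.26)
    roots = []
    for sg in (+1, -1):                # the two tied +- signs of (10.25)
        for sg_o in (+1, -1):          # the independent +-_o sign
            x = sg * K + sg_o * cmath.sqrt(K * K + sg * j - u)
            roots.append(x + h)        # undo the change of variable: z = x + h
    return roots
```

For instance, `quartic_roots(-10, 35, -50, 24)` recovers the roots 1, 2, 3 and 4 of (z − 1)(z − 2)(z − 3)(z − 4).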
The reason no other real, rational root is possible is seen¹¹ by writing z = p/q—where p and q are integers and the fraction p/q is fully reduced—then multiplying the nth-order polynomial by qⁿ to reach the form

    aₙpⁿ + aₙ₋₁pⁿ⁻¹q + ··· + a₁pqⁿ⁻¹ + a₀qⁿ = 0,

where all the coefficients aₖ are integers. Moving the qⁿ term to the equation's right side, we have that

    (aₙpⁿ⁻¹ + aₙ₋₁pⁿ⁻²q + ··· + a₁qⁿ⁻¹)p = −a₀qⁿ,

which implies that a₀qⁿ is a multiple of p. But by demanding that the fraction p/q be fully reduced, we have defined p and q to be relatively prime to one another—that is, we have defined them to have no factors but ±1 in common—so, not only a₀qⁿ but a₀ itself is a multiple of p. By similar reasoning, aₙ is a multiple of q. But if a₀ is a multiple of p, and aₙ, a multiple of q, then p and q are factors of a₀ and aₙ respectively. We conclude for this reason, as was to be demonstrated, that no real, rational root is possible except a factor of a₀ divided by a factor of aₙ.

One could write much more about higher-order algebra, but now that the reader has tasted the topic he may feel inclined to agree that, though the general methods this chapter has presented to solve cubics and quartics are interesting, further effort were nevertheless probably better spent elsewhere. The next several chapters turn to the topic of the matrix, harder but much more profitable, toward which we mean to put substantial effort.

¹¹ The presentation here is quite informal. We do not want to spend many pages on this. [46, § 3.2]
Chapter 11

The matrix

Chapters 2 through 9 have laid solidly the basic foundations of applied mathematics. This chapter begins to build on those foundations, demanding some heavier mathematical lifting.

Taken by themselves, most of the foundational methods of the earlier chapters have handled only one or at most a few numbers (or functions) at a time. However, in practical applications the need to handle large arrays of numbers at once arises often. Some nonobvious effects emerge then, as, for example, the eigenvalue of Ch. 14.

Regarding the eigenvalue: the eigenvalue was always there, but prior to this point in the book it was usually trivial—the eigenvalue of 5 is just 5, for instance—so we didn't bother much to talk about it. It is when numbers are laid out in orderly grids like

    C = [ 6 4 0
          3 0 1
          3 1 0 ]

that nontrivial eigenvalues arise (though you cannot tell just by looking, the eigenvalues of C happen to be −1 and [7 ± √0x49]/2). So, just what is an eigenvalue? Answer: an eigenvalue is the value by which an object like C scales an eigenvector without altering the eigenvector's direction. Of course, we have not yet said what an eigenvector is, either, or how C might scale something; but it is to answer precisely such questions that this chapter and the three which follow it are written. However, we are getting ahead of ourselves. Let's back up.

An object like C is called a matrix. It serves as a generalized coefficient or multiplier. Where we have used single numbers as coefficients or multipliers heretofore, one can with sufficient care often use matrices instead. The matrix interests us for this reason among others.

The technical name for the "single number" is the scalar. Such a number, as for instance 5 or −4 + i3, is called a scalar because its action alone during multiplication is simply to scale the thing it multiplies. Besides acting alone, scalars can also act in concert—in orderly formations—thus constituting any of three basic kinds of arithmetical object:

• the scalar itself, a single number like α = 5 or β = −4 + i3;

• the vector, a column of m scalars like

    u = [ 5
          −4 + i3 ],

which can be written in-line with the notation u = [5 −4 + i3]ᵀ (here there are two scalar elements, 5 and −4 + i3, so in this example m = 2);

• the matrix, an m × n grid of scalars, or equivalently a row of n vectors, like

    A = [ 0 6  2
          1 1 −1 ],

which can be written in-line with the notation A = [0 6 2; 1 1 −1] or the notation A = [0 1; 6 1; 2 −1]ᵀ (here there are two rows and three columns of scalar elements, so in this example m = 2 and n = 3).

Several general points are immediately to be observed about these various objects. First, despite the geometrical Argand interpretation of the complex number, a complex number is not a two-element vector but a scalar; therefore any or all of a vector's or matrix's scalar elements can be complex. Second, an m-element vector does not differ for most purposes from an m × 1 matrix; generally the two can be regarded as the same thing. Third, the three-element (that is, three-dimensional) geometrical vector of § 3.3 is just an m-element vector with m = 3. Fourth, m and n can be any nonnegative integers, even one, even zero, even infinity. Fifth, though the progression scalar, vector, matrix suggests next a "matrix stack" or stack of p matrices, such objects in fact are seldom used; as we shall see in § 11.1, the chief advantage of the standard matrix is that it neatly represents the linear transformation of one vector into another. "Matrix stacks" bring no such advantage. This book does not treat them.¹

The matrix is a notoriously hard topic to motivate. The idea of the matrix is deceptively simple. The mechanics of matrix arithmetic are deceptively intricate. The most basic body of matrix theory, without which little or no useful matrix work can be done, is deceptively extensive. The matrix neatly encapsulates a substantial knot of arithmetical tedium and clutter, but to understand the matrix one must first understand the tedium and clutter the matrix encapsulates. As far as the author is aware, no one has ever devised a way to introduce the matrix which does not seem shallow, tiresome, irksome, even interminable at first encounter; yet the matrix is too important to ignore. Applied mathematics brings nothing else quite like it.

Part of the trouble with the matrix is that its arithmetic is just that, an arithmetic, no more likely to be mastered by mere theoretical study than was the classical arithmetic of childhood. To master matrix arithmetic, one must drill it; yet the book you hold is fundamentally one of theory not drill.³

The reader who has previously drilled matrix arithmetic will meet here the essential applied theory of the matrix. That reader will find this chapter and the next three tedious enough. The reader who has not previously drilled matrix arithmetic, however, is likely to find these chapters positively hostile. Only the doggedly determined beginner will learn the matrix here alone; others will find it more amenable to drill matrix arithmetic first in the early chapters of an introductory linear algebra textbook, dull though such chapters be (see [31] or better yet the fine, surprisingly less dull [23] for instance). Returning here thereafter, the beginner can expect to find these chapters still tedious but no longer impenetrable. The reward is worth the effort. That is the approach the author recommends.

To the mathematical rebel, the young warrior with face painted and sword agleam, still determined to learn the matrix here alone, the author salutes his honorable defiance. Would the rebel consider alternate counsel? If so, then the rebel might compose a dozen matrices of various sizes and shapes, broad, square and tall, decomposing each carefully by pencil per the Gauss-Jordan method of § 12.3, checking results (again by pencil; using a machine defeats the point of the exercise) by multiplying factors to restore the original matrices. Several hours of such drill should build the young warrior the practical arithmetical foundation to master—with commensurate effort—the theory these chapters bring. The way of the warrior is hard, but conquest is not impossible.

To the matrix veteran, the author presents these four chapters with grim enthusiasm. Substantial, logical, necessary the chapters may be, but exciting they are not; at least, the earlier parts are not very exciting (later parts are better). As a reasonable compromise, the veteran seeking more interesting reading might skip directly to Chs. 13 and 14, referring back to Chs. 11 and 12 as need arises.

Chapters 11 through 14 treat the matrix and its algebra.

¹ Where one needs visually to distinguish a symbol like A representing a matrix, one can write it [A], in square brackets.² Normally however a simple A suffices.

² Alternate notations seen in print include A and A.

³ In most of its chapters, the book seeks a balance between terseness the determined beginner cannot penetrate and prolixity the seasoned veteran will not abide. The matrix upsets this balance.
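The eigenvalues quoted for C can be verified without any eigenvalue machinery, simply by checking that det(C − λI) vanishes at each of them. (The book writes numerals in hexadecimal: 0x49 is decimal 73.) A stdlib-only check with a hand-rolled 3 × 3 determinant, written for this edit:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

C = [[6, 4, 0],
     [3, 0, 1],
     [3, 1, 0]]

def char_poly(lam):
    """det(C - lam*I): zero exactly when lam is an eigenvalue of C."""
    return det3([[C[r][c] - (lam if r == c else 0) for c in range(3)]
                 for r in range(3)])

eigs = [-1.0, (7 + math.sqrt(73)) / 2, (7 - math.sqrt(73)) / 2]
print([abs(char_poly(lam)) < 1e-8 for lam in eigs])   # [True, True, True]
```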
This chapter introduces the rudiments of the matrix itself.

11.1 Provenance and basic use

It is in the study of linear transformations that the concept of the matrix first arises. We begin there.

11.1.1 The linear transformation

Section 7.3.3 has introduced the idea of linearity. The linear transformation⁵ is the operation of an m × n matrix A, as in

    Ax = b,    (11.1)

to transform an n-element vector x into an m-element vector b, while respecting the rules of linearity

    A(x₁ + x₂) = Ax₁ + Ax₂ = b₁ + b₂,
    A(αx) = αAx = αb,
    A(0) = 0.    (11.2)

For example,

    A = [ 0 6  2
          1 1 −1 ]

is the 2 × 3 matrix which transforms a three-element vector x into a two-element vector b such that

    Ax = [ 0x₁ + 6x₂ + 2x₃
           1x₁ + 1x₂ − 1x₃ ] = b,

where

    x = [ x₁
          x₂
          x₃ ],    b = [ b₁
                         b₂ ].

⁵ [2][15][23][31] Professional mathematicians conventionally are careful to begin by drawing a clear distinction between the ideas of the linear transformation, the basis set and the simultaneous system of linear equations—proving from suitable axioms that the three amount more or less to the same thing, rather than implicitly assuming the fact. The professional approach [2, Chs. 1 and 2][31, Chs. 1, 2 and 5] has much to recommend it, but it is not the approach we will follow here.
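The rules (11.2) are concrete enough to test on the example A. A short sketch, using plain lists for vectors (the helper names are ours, not the book's):

```python
A = [[0, 6, 2],
     [1, 1, -1]]          # the book's 2 x 3 example

def apply(A, x):
    """b = Ax: each element of b is a row of A dotted with x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

def vscale(alpha, u):
    return [alpha * a for a in u]

x1, x2, alpha = [1, 2, 3], [4, 5, 6], 3

# A(x1 + x2) = Ax1 + Ax2 and A(alpha x) = alpha Ax, per (11.2):
print(apply(A, vadd(x1, x2)) == vadd(apply(A, x1), apply(A, x2)))   # True
print(apply(A, vscale(alpha, x1)) == vscale(alpha, apply(A, x1)))   # True
```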
In general, the operation of a matrix A is that⁶,⁷

    bᵢ = Σⱼ₌₁ⁿ aᵢⱼxⱼ,    (11.3)

where xⱼ is the jth element of x, bᵢ is the ith element of b, and aᵢⱼ ≡ [A]ᵢⱼ is the element at the ith row and jth column of A, counting from top left (in the example for instance, a₁₂ = 6).

Besides representing linear transformations as such, matrices can also represent simultaneous systems of linear equations. For example, the system

    0x₁ + 6x₂ + 2x₃ = 2,
    1x₁ + 1x₂ − 1x₃ = 4,

is compactly represented as

    Ax = b,    (11.4)

with A as given above and b = [2 4]ᵀ. Seen from this point of view, a simultaneous system of linear equations is itself neither more nor less than a linear transformation.

⁶ As observed in Appendix B, there are unfortunately not enough distinct Roman and Greek letters available to serve the needs of higher mathematics. In matrix work, the Roman letters ijk conventionally serve as indices, but the same letter i also serves as the imaginary unit, which is not an index and has nothing to do with indices. Fortunately, the meaning is usually clear from the context: i in Σᵢ or aᵢⱼ is an index; i in −4 + i3 or e^(iφ) is the imaginary unit. Should a case arise in which the meaning is not clear, one can use ℓjk or some other convenient letters for the indices.

⁷ Whether to let the index j run from 0 to n − 1 or from 1 to n is an awkward question of applied mathematical style. In computers, the index normally runs from 0 to n − 1, and in many ways this really is the more sensible way to do it. In mathematical theory, however, a 0 index normally implies something special or basic about the object it identifies. The book you are reading tends to let the index run from 1 to n, following mathematical convention in the matter for this reason. Conceived more generally, an m × n matrix can be considered an ∞ × ∞ matrix with zeros in the unused cells; here, both indices i and j run from −∞ to +∞ anyway, so the computer's indexing convention poses no dilemma in this case. See § 11.3.

11.1.2 Matrix multiplication (and addition)

Nothing prevents one from lining several vectors xₖ up in a row, industrial mass production-style, transforming them at once into the corresponding vectors bₖ.
11.1.3 Row and column operators

The matrix equation Ax = b represents the linear transformation of x into b, as we have seen. Viewed from another perspective, however, the same matrix equation represents something else: it represents a weighted sum of the columns of A, with the elements of x as the weights. In this view, one writes (11.3) as

    b = Σⱼ₌₁ⁿ [A]∗ⱼ xⱼ,    (11.9)

where [A]∗ⱼ is the jth column of A. Here x is not only a vector; it is also an operator. It operates on A's columns. By virtue of multiplying A from the right, the vector x is a column operator acting on A.

(Observe the notation. The ∗ here means "any" or "all." Hence [X]ⱼ∗ means "jth row, all columns of X"—that is, the jth row of X—and [A]∗ⱼ means "all rows, jth column of A"—that is, the jth column of A.)

If several vectors xₖ line up in a row to form a matrix X, such that AX = B, then the matrix X is likewise a column operator:

    [B]∗ₖ = Σⱼ₌₁ⁿ [A]∗ⱼ xⱼₖ.    (11.10)

The kth column of X weights the several columns of A to yield the kth column of B.

If a matrix multiplying from the right is a column operator, is a matrix multiplying from the left a row operator? Indeed it is. Another way to write AX = B, besides (11.10), is

    [B]ᵢ∗ = Σⱼ₌₁ⁿ aᵢⱼ [X]ⱼ∗.    (11.11)

The ith row of A weights the several rows of X to yield the ith row of B. The matrix A is a row operator. It operates on X's rows. Column operators attack from the right; row operators, from the left. This rule is worth memorizing; the concept is important.

Since matrix multiplication produces the same result whether one views it as a linear transformation, a column operation (11.10) or a row operation (11.11), one might wonder what purpose lies in defining matrix multiplication three separate ways. However, it is not so much for the sake of the mathematics that we define it three ways as it is for the sake of the mathematician. We do it for ourselves. Mathematically, the latter two do indeed expand to yield the same element-by-element product, but as written the three represent three different perspectives on the matrix. A tedious, nonintuitive matrix theorem from one perspective can appear suddenly obvious from another. Results hard to visualize one way are easy to visualize another. It is worth developing the mental agility to view and handle matrices all three ways for this reason.
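The three views really do compute the same B. The sketch below forms B three ways for the book's A and an arbitrary X (chosen here purely for illustration), using zero-based Python indices in place of the book's one-based ones:

```python
A = [[0, 6, 2],
     [1, 1, -1]]      # the book's 2 x 3 example
X = [[1, 4],
     [2, 5],
     [3, 6]]          # an arbitrary 3 x 2 matrix
m, n, p = 2, 3, 2

# Elementwise view, as in (11.3): b_ik = sum_j a_ij x_jk.
B1 = [[sum(A[i][j] * X[j][k] for j in range(n)) for k in range(p)] for i in range(m)]

# Column-operator view (11.10): the kth column of X weights A's columns.
B2 = [[0] * p for _ in range(m)]
for k in range(p):
    for j in range(n):
        for i in range(m):
            B2[i][k] += A[i][j] * X[j][k]   # accumulate [A]*j times x_jk into [B]*k

# Row-operator view (11.11): the ith row of A weights X's rows.
B3 = [[0] * p for _ in range(m)]
for i in range(m):
    for j in range(n):
        for k in range(p):
            B3[i][k] += A[i][j] * X[j][k]   # accumulate a_ij times [X]j* into [B]i*

print(B1 == B2 == B3, B1)   # True [[18, 42], [0, 3]]
```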
CHAPTER 11. 11.4
The transpose and the adjoint
One function peculiar to matrix algebra is the transpose C = AT . A tedious. but algebraically the transpose is artificial.4). the latter two do indeed expand to yield (11.1. −1
(11.63). The transpose is convenient notationally to write vectors and matrices inline and to express certain matrix-arithmetical mechanics.
Alternate notations sometimes seen in print for the adjoint include A† (a notation which in this book means something unrelated) and AH (a notation which recalls the name of the mathematician Charles Hermite). the transpose and the adjoint amount to the same thing. the book you are reading writes the adjoint only as A∗ . of course. It is the adjoint rather which mirrors a matrix properly. nonintuitive matrix theorem from one perspective can appear suddenly obvious from another (see for example eqn.12)
Similar and even more useful is the conjugate transpose or adjoint 8 C = A∗ . We do it for ourselves." whereas the adjoint would be "snoitavired. cij = a∗ . then the transpose of "derivations" would be "snoitavired. ji (11. but as written the three represent three different perspectives on the matrix. For example. Results hard to visualize one way are easy to visualize another." See the difference?) On real-valued matrices like the A in the example. a notation which better captures the sense of the thing in the author's view. THE MATRIX
of the mathematics that we define it three ways as it is for the sake of the mathematician. cij = aji . (If the transpose and adjoint functions applied to words as to matrices. However.13)
which again mirrors an m × n matrix into an n × m matrix.
11. AT =
2 0 4 6 2 3 1 1 5. but conjugates each element as it goes.
8
. Mathematically. which mirrors an m × n matrix into an n × m matrix.
11. . ··· ··· ··· ··· ··· ··· ··· . .1. A adds to the second row of X. DIMENSIONALITY AND MATRIX FORMS The ∞ × ∞ matrix
2 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 4 . . And really. . . . the idea of an ∞ × 1 vector or an ∞ × ∞ matrix should not seem so strange. . . 0 0 0 1 0 0 0 . 12 The idea of infinite dimensionality is sure to discomfit some readers. 0 0 0 0 1 0 0 .. . .20) that when A acts AX on a matrix X of any dimensionality whatsoever. only that that view does not suit the present book's development. among others.11 This section explains. which holds all values of the function sin θ of a real argument θ. . that the idea thereof does not threaten to overturn the reader's pre¨xisting matrix knowledge. As before. 1 0 0 0 0 0 0 . 0 0 0 0 0 1 0 . any more than one actually writes down or stores all the bits (or digits) of 2π. . Traditionally in matrix work and elsewhere in the book. . ··· ··· ··· ··· ··· ··· ··· . 3 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 5
243
A=
expresses the operation generally. and has no second row? Then the operation AX creates a new second row. . that. who have studied matrices before and are used to thinking of a matrix as having some definite size. a11 = 1 and a21 = 5. . (But what if X is a 1 × p matrix. . . Different ways of looking at the same mathematics can be extremely useful to the applied mathematician. . . By running ones infinitely both ways out the main diagonal. 5 times the first—and affects no other row. .) In the infinite-dimensional view. 5 times the first—or rather so fills in X's previously null second row. . the matrices A and X differ essentially. 0 0 1 5 0 0 0 . Such usage admittedly proves awkward in other contexts. consider the vector u such that uℓ = sin ℓǫ.. . . After all. the letter A does not necessarily represent an extended operator as it does here. but now also a(−1)(−1) = 1 and a09 = 0. There is nothing wrong with thinking of a matrix as having some definite size. .12
This particular section happens to use the symbols A and X to represent certain specific matrix forms because such usage flows naturally from the usage Ax = b of § 11. . Writing them down or storing them is not the point. . developing some nonstandard formalisms the derivations of later sections and chapters can use. . . . The applied mathematical reader who has never heretofore considered
11
. . but rather an arbitrary matrix of no particular form. we guarantee by (11.3. 0 0 0 0 0 0 1 . where 0 < ǫ ≪ 1 and ℓ is an integer. . . Of course one does not actually write down or store all the elements of an infinite-dimensional vector or matrix. no fundamental conceptual barrier rises against it. . . The point is that infinite dimensionality is all right. 0 1 0 0 0 0 0 . . though the construct seem e unfamiliar. .
. too. What really counts is not a matrix's m × n dimensionality but rather its rank. inasmuch as X differs from the null matrix only in the mn elements xij . but usually a simple 0 suffices. . Basically. .
. Special symbols like 0. .244
CHAPTER 11. but those aren't what we were speaking of here. . . 0 or O are possible for the null matrix.
7 7 7 7 7 7 7. ··· ··· ··· ··· ··· . The same symbol 0 used for the null scalar (zero) and the null matrix is used for the null vector. or to summing an infinity of zeros as in Ch. . . 12. THE MATRIX
11. There are no surprises here. 7. to retain complete information about such a matrix. as a variation on the null matrix. the null matrix brings all the expected properties of a zero. one need record only the mn elements. 3
0=
6 6 6 6 6 6 6 6 6 6 6 4
··· ··· ··· ··· ··· . . 13 Well. . . . . So the semantics are these: when we call a matrix X an m × n matrix. or more precisely a dimension-limited matrix with an m × n
infinite dimensionality in vectors and matrices would be well served to take the opportunity to do so here. 0 0 0 0 0 .. 7 7 7 7 5
or more compactly. dimensionality is a poor measure of a matrix's size in any case. the vector 0 and the matrix 0 actually represent different things is a matter of semantics. there's not much else to it. 0 0 0 0 0 .3. . Whether the scalar 0. As we shall discover in Ch. 1 ≤ i ≤ m. plus the values of m and n. when it comes to dividing by zero as in Ch. . but the three are interchangeable for most practical purposes in any case. . . like 0 + A = A. 0 0 0 0 0 . 0 0 0 0 0 . 0 0 0 0 0 . 1 ≤ j ≤ n. infinitedimensionally. .13 Now a formality: the ordinary m × n matrix X can be viewed. [0][X] = 0. . Though the theoretical dimensionality of X be ∞ × ∞. . a zero is a zero is a zero. . . 4.. there's a lot else to it. [0]ij = 0.1
The null and dimension-limited matrices
The null matrix is just what its name implies:
2 . . of course. . . .
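Looking back at § 11.1.4, the transpose and the adjoint differ only by conjugation, which matters only on complex elements. A small sketch (zero-based indices; the complex example U is ours, echoing the chapter's scalar −4 + i3):

```python
def transpose(A):
    """(11.12): c_ij = a_ji."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def adjoint(A):
    """(11.13): c_ij = conj(a_ji); int.conjugate() exists, so reals pass through."""
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

A = [[0, 6, 2],
     [1, 1, -1]]        # real-valued: transpose and adjoint agree
U = [[5, -4 + 3j]]      # a complex row, as a 1 x 2 matrix

print(transpose(A))           # [[0, 1], [6, 1], [2, -1]]
print(transpose(A) == adjoint(A))   # True on real-valued matrices
print(adjoint(U))             # [[5], [(-4-3j)]]
```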
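Footnote 12's point, that one stores an infinite matrix by recording only its departure from a simple pattern, can be made concrete. In the sketch below (written for this edit), the extended operator A of this section is a function of (i, j) rather than an array; only the lone 5 is special-cased. X is a hypothetical 1 × p matrix viewed infinite-dimensionally:

```python
def a(i, j):
    """Element (i, j) of the extended operator A: the identity plus a 5 at (2, 1).

    The infinite matrix is never written down; only its departure from I is."""
    return (1 if i == j else 0) + (5 if (i, j) == (2, 1) else 0)

def x(i, j):
    """A 1 x p matrix X, infinite-dimensionally: x_1j = j for j = 1..3, else 0."""
    return j if (i == 1 and 1 <= j <= 3) else 0

def ax(i, j, terms=range(-50, 51)):
    """Element (i, j) of AX; the sum has only finitely many nonzero terms."""
    return sum(a(i, k) * x(k, j) for k in terms)

print([ax(1, j) for j in range(1, 4)])   # first row unchanged: [1, 2, 3]
print([ax(2, j) for j in range(1, 4)])   # previously null second row filled in,
                                         # 5 times the first: [5, 10, 15]
```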
Implicit in the definition of the extended operator is that the identity matrix I and the scalar matrix λI, λ ≠ 0, are extended operators with 0 × 0 active regions (and also 1 × 1, 2 × 2, etc.). If λ = 0, however, the scalar matrix λI is just the null matrix, which is no extended operator but rather by definition a 0 × 0 dimension-limited matrix.

Extended operators are usually sparse, meaning that they depart from λI in only a very few of their many elements. It would waste a lot of computer memory explicitly to store all those zeros, so one normally stores just the few elements, instead.

11.3.3 The active region

Though maybe obvious, it bears stating explicitly that a product of dimension-limited and/or extended-operational matrices with n × n active regions itself has an n × n active region.¹⁶ (Remember that a matrix with an m′ × n′ active region also by definition has an n × n active region if m′ ≤ n and n′ ≤ n.)¹⁵ If any of the factors has dimension-limited form then so does the product; otherwise the product is an extended operator.

¹⁵ The section's earlier subsections formally define the term active region with respect to each of the two matrix forms.

¹⁶ If symbolic proof of the subsection's claims is wanted, here it is in outline:

    aᵢⱼ = λₐδᵢⱼ unless 1 ≤ (i, j) ≤ n,
    bᵢⱼ = λᵦδᵢⱼ unless 1 ≤ (i, j) ≤ n,
    [AB]ᵢⱼ = Σₖ aᵢₖbₖⱼ
           = { Σₖ (λₐδᵢₖ)bₖⱼ = λₐbᵢⱼ unless 1 ≤ i ≤ n,
             { Σₖ aᵢₖ(λᵦδₖⱼ) = λᵦaᵢⱼ unless 1 ≤ j ≤ n,
           = λₐλᵦδᵢⱼ unless 1 ≤ (i, j) ≤ n.

It's probably easier just to sketch the matrices and look at them, though.

11.3.4 Other matrix forms

Besides the dimension-limited form of § 11.3.1 and the extended-operational form of § 11.3.2, other infinite-dimensional matrix forms are certainly possible. One could for example advantageously define a "null sparse" form, recording only nonzero elements and their addresses in an otherwise null matrix; or a "tridiagonal extended" form, bearing repeated entries not only along the main diagonal but also along the diagonals just above and just below. Section 11.9 introduces one worthwhile matrix which fits neither the dimension-limited nor the extended-operational form. Still, the dimension-limited and extended-operational forms are normally the most useful, and they are the ones we shall principally be handling in this book.

One reason to have defined specific infinite-dimensional matrix forms is to show how straightforwardly one can fully represent a practical matrix of an infinity of elements by a modest, finite quantity of information. Further reasons to have defined such forms will soon occur.

11.3.5 The rank-r identity matrix

The rank-r identity matrix Iᵣ is the dimension-limited matrix for which

    [Iᵣ]ᵢⱼ = { δᵢⱼ  if 1 ≤ i ≤ r and/or 1 ≤ j ≤ r,
             { 0    otherwise,    (11.28)

where either the "and" or the "or" can be regarded (it makes no difference). Examples of Iᵣ include

    I₃ = [ 1 0 0
           0 1 0
           0 0 1 ].

(Remember that in the infinite-dimensional view, I₃, though a 3 × 3 matrix, is formally an ∞ × ∞ matrix with zeros in the unused cells. It has only the three ones and fits the 3 × 3 dimension-limited form of § 11.3.1. The areas of I₃ not shown are all zero, even along the main diagonal.) The rank r can be any nonnegative integer, even zero (though the rank-zero identity matrix I₀ is in fact the null matrix, normally just written 0). The effect of Iᵣ is that

    Iₘx = x,
    IₘX = X = XIₙ,    (11.29)

where X is an m × n matrix and x, an m × 1 vector.

The name "rank-r" implies that Iᵣ has a "rank" of r, and indeed it does—which is just the count of ones along the matrix's main diagonal. For the moment, we shall discern the attribute of rank only in the rank-r identity matrix itself; Section 12.5 defines rank for matrices more generally.

If alternate indexing limits are needed (for instance for a computer-indexed identity matrix whose indices run from 0 to r − 1), the notation I_a^b, where

    [I_a^b]ᵢⱼ ≡ { δᵢⱼ  if a ≤ i ≤ b and/or a ≤ j ≤ b,
                { 0    otherwise,    (11.30)

can be used; the rank in this case is r = b − a + 1.
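The rank-r identity also truncates: multiplying from the left it retains the first r rows of a matrix and cancels the rest; from the right, the first r columns; and past a matrix whose active region fits within the first r rows and columns it commutes freely. A finite-window demonstration (zero-based indices, so row and column labels shift by one from the book's):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def I_r(r, size):
    """A size x size window onto the rank-r identity matrix I_r."""
    return [[1 if (i == j and i < r) else 0 for j in range(size)] for i in range(size)]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
I2 = I_r(2, 3)

print(matmul(I2, A))   # first two rows retained: [[1, 2, 3], [4, 5, 6], [0, 0, 0]]
print(matmul(A, I2))   # first two columns retained

# A C whose active region fits inside the first two rows and columns:
C = [[1, 2, 0],
     [3, 4, 0],
     [0, 0, 0]]
print(matmul(I2, C) == matmul(C, I2) == C)   # True: I_r commutes freely past C
```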
The elementary vector and the lone-element matrix
The lone-element matrix Emn is the matrix with a one in the mnth cell and zeros elsewhere: [Emn ]ij ≡ δim δjn = 1 if i = m and j = n. Attacking from the left. [I]∗j = ej and [I]i∗ = eT . 0 otherwise.6
The truncation operator
The rank-r identity matrix Ir is also the truncation operator. (11.32)
By this definition.j cij Eij for any matrix C. The vector analog of the lone-element matrix is the elementary vector em . it retains the first through rth columns. • The rank-r identity matrix Ir commutes freely past C.
11.31) says at least two things: • It is superfluous to truncate both rows and columns.3. as in Ir A. (11. Whether a matrix C has dimension-limited or extended-operational form (though not necessarily if it has some other form). There would be nothing there to truncate.3. (11. By this definition. if it were always all zero outside. That a matrix has an m × n active region does not necessarily mean that it is all zero outside the m × n rectangle.3. then there would be little point in applying a truncation operator. (11. n ≤ r.1 and 11.11. it suffices to truncate one or the other.33) [em ]i ≡ δim = 0 otherwise. C = i. i
Refer to the definitions of active region in §§ 11.2. if it has an m × n active region17 and m ≤ r.31) then Ir C = Ir CIr = CIr .
Evidently big identity matrices commute freely where small ones cannot (and the general identity matrix I = I∞ commutes freely past everything).3. For such a matrix. Such truncation is useful symbolically to reduce an extended operator to dimension-limited form.3. DIMENSIONALITY AND MATRIX FORMS
249
11. as in AIr . it retains the first through rth rows of A but cancels other rows.
3.
11.3.8  Off-diagonal entries

It is interesting to observe and useful to note that if

    [C1]i∗ = [C2]i∗ = eiT,

then also

    [C1C2]i∗ = eiT,                                         (11.34)

and likewise that if

    [C1]∗j = [C2]∗j = ej,

then also

    [C1C2]∗j = ej.                                          (11.35)

The product of matrices has off-diagonal entries in a row or column only if at least one of the factors itself has off-diagonal entries in that row or column. Or, less readably but more precisely, the ith row or jth column of the product of matrices can depart from eiT or ej, respectively, only if the corresponding row or column of at least one of the factors so departs. The reason is that in (11.34), if C1's ith row is eiT, then C1 acts as a row operator on C2, and its action is merely to duplicate C2's ith row, which itself is just eiT. Parallel logic naturally applies to (11.35).

11.4  The elementary operator

Section 11.3 has introduced the general row or column operator. Conventionally denoted T, the elementary operator is a simple extended row or column operator, from sequences of which more complicated extended operators can be built. The elementary operator T comes in three kinds.18

• The first is the interchange elementary

    T[i↔j] = I − (Eii + Ejj) + (Eij + Eji),                  (11.36)

which by operating T[i↔j]A or AT[i↔j] respectively interchanges A's ith row or column with its jth.19

18 In § 11.3.2, the symbol A specifically represented an extended operator, but here and generally the symbol represents any matrix.
19 As a matter of definition, some authors [31] forbid T[i↔i] as an elementary operator, where j = i, since after all T[i↔i] = I, which is to say that the operator doesn't actually do anything. There exist legitimate tactical reasons to forbid it (as in § 11.6), but normally this book permits it.
CHAPTER 11. less readably but more precisely. since after all T[i↔i] = I. only if the i corresponding row or column of at least one of the factors so departs. respectively.34). i then also [C1 C2 ]i∗ = eT . i and likewise that if [C1 ]∗j = [C2 ]∗j = ej .
• The second is the scaling elementary

    Tα[i] = I + (α − 1)Eii,  α ≠ 0,

which by operating Tα[i]A or ATα[i] scales (multiplies) A's ith row or column, respectively, by the factor α.

• The third and last is the addition elementary

    Tα[ij] = I + αEij,  i ≠ j,

which by operating Tα[ij]A adds to the ith row of A, α times the jth row; or which by operating ATα[ij] adds to the jth column of A, α times the ith column.20

Significantly, elementary operators as defined above are always invertible (which is to say, reversible in effect), with

    T[i↔j]−1 = T[j↔i] = T[i↔j],
    Tα[i]−1  = T(1/α)[i],                                    (11.39)
    Tα[ij]−1 = T−α[ij],

the inverses being themselves elementary operators such that T−1T = I = TT−1 in each case. This means that any sequence T1T2 · · · Tk of elementaries can safely be undone by the reverse sequence of inverses:

    Tk−1 · · · T2−1T1−1 T1T2 · · · Tk = I = T1T2 · · · Tk Tk−1 · · · T2−1T1−1.   (11.40)

The rank-r identity matrix Ir is no elementary operator,21 nor is the lone-element matrix Emn, and in fact no elementary operator of any kind differs from I in more than four elements; but the general identity matrix I is indeed an elementary operator, since I = T[i↔i] = T1[i] = T0[ij]. Curiously, the last can be considered a distinct, fourth kind of elementary operator if desired, but it is probably easier just to regard it as an elementary of any of the first three kinds. From (11.31), we have that

    Ir T = Ir T Ir = T Ir  if 1 ≤ i ≤ r and 1 ≤ j ≤ r        (11.42)

for any elementary operator T which operates within the given bounds. Equation (11.42) lets an identity matrix with sufficiently high rank pass through a sequence of elementaries as needed. This book finds it convenient to define the elementary operator in infinite-dimensional, extended-operational form.

11.4.1  Properties

In general, the transpose of an elementary row operator is the corresponding elementary column operator. Significantly, the interchange elementary is its own transpose and adjoint:

    T[i↔j]∗ = T[i↔j]T = T[i↔j].                              (11.43)

20 The addition elementary Tα[ii] and the scaling elementary T0[i] are forbidden precisely because they are not generally invertible.
21 If the statement seems to contradict statements of some other books, it is only a matter of definition. The other books are not wrong; their underlying definitions just differ slightly.
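The invertibility property (11.39) is easy to spot-check numerically in finite dimensions. A sketch (mine), with hypothetical helper names standing in for the three operator kinds:

```python
import numpy as np

n = 4
I = np.eye(n)

def T_swap(i, j):
    """Interchange elementary T[i<->j]."""
    T = np.eye(n); T[[i, j]] = T[[j, i]]; return T

def T_scale(i, alpha):
    """Scaling elementary T_alpha[i], alpha nonzero."""
    T = np.eye(n); T[i, i] = alpha; return T

def T_add(i, j, alpha):
    """Addition elementary T_alpha[ij], i != j."""
    T = np.eye(n); T[i, j] = alpha; return T

# T[i<->j]^-1 = T[i<->j]: an interchange undoes itself.
assert np.allclose(T_swap(1, 3) @ T_swap(1, 3), I)
# T_alpha[i]^-1 = T_(1/alpha)[i].
assert np.allclose(T_scale(2, 5.0) @ T_scale(2, 1 / 5.0), I)
# T_alpha[ij]^-1 = T_(-alpha)[ij].
assert np.allclose(T_add(0, 3, 2.5) @ T_add(0, 3, -2.5), I)
```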
11.4.2  Commutation and sorting

Elementary operators often occur in long chains like

    A = T−4[32] T[2↔3] T(1/5)[3] T(1/2)[31] T5[21] T[1↔3],

with several elementaries of all kinds intermixed. Some applications demand that the elementaries be sorted and grouped by kind, as

    A = T[2↔3] T[1↔3] T−4[21] T(1/0xA)[13] T5[23] T(1/5)[1]

or as

    A = T−4[32] T(1/0xA)[21] T5[31] T(1/5)[2] T[2↔3] T[1↔3],

among other possible orderings. Though you probably cannot tell just by looking, the three products above are different orderings of the same elementary chain; they yield the same A and thus represent exactly the same matrix operation. Interesting is that the act of reordering the elementaries has altered some of them into other elementaries of the same kind, but has changed the kind of none of them.

One sorts a chain of elementary operators by repeatedly exchanging adjacent pairs. This of course supposes that one can exchange adjacent pairs, which seems impossible since matrix multiplication is not commutative: A1A2 ≠ A2A1. However, at the moment we are dealing in elementary operators only, and for most pairs T1 and T2 of elementary operators, though indeed T1T2 ≠ T2T1, it so happens that there exists either a T1′ such that T1′T2 = T2T1 or a T2′ such that T1T2′ = T2T1, where T1′ and T2′ are elementaries of the same kinds respectively as T1 and T2. The attempt sometimes fails when both T1 and T2 are addition elementaries; all other pairs commute in this way. Significantly, elementaries of different kinds always commute. And, though commutation can alter one (never both) of the two elementaries, it changes the kind of neither.

Many qualitatively distinct pairs of elementaries exist; Tables 11.1, 11.2 and 11.3 list all possible pairs. First, however, we should like to observe a natural hierarchy among the three kinds of elementary: (i) interchange; (ii) scaling; (iii) addition.

• The interchange elementary is the strongest. Itself subject to alteration only by another interchange elementary, it can alter any elementary by commuting past. When an interchange elementary commutes past another elementary of any kind, what it alters are the other elementary's indices i and/or j (or m and/or n, or whatever symbols happen to represent the indices in question). When two interchange elementaries commute past one another, only one of the two is altered. (Which one? Either. The mathematician chooses.) Refer to Table 11.1.

• Next in strength is the scaling elementary. Only an interchange elementary can alter it, and it in turn can alter only an addition elementary. Scaling elementaries do not alter one another during commutation. When a scaling elementary commutes past an addition elementary, what it alters is the latter's scale α (or β, or whatever symbol happens to represent the scale in question). Refer to Table 11.2.

• The addition elementary, last and weakest, is subject to alteration by either of the other two, itself having no power to alter any elementary during commutation. A pair of addition elementaries are the only pair that can altogether fail to commute (they fail when the row index of one equals the column index of the other), but when they do commute, neither alters the other. Refer to Table 11.3.
CHAPTER 11.3.5
Inversion and similarity (introduction)
If Tables 11. to T −1 AT . Refer to Table 11.44)
where T −1 is given by (11. The only pairs that fail to commute are the last three of Table 11. • The addition elementary. Scaling elementaries do not alter one another during commutation. An elementary commuting rightward changes A to T AT −1 .
11. AT = T [T −1 AT ]. 11. AT = (I)(AT ) = (T T −1 )(AT ). 11. (Which one? Either. and it in turn can alter only an addition elementary.2 and 11. last and weakest. THE MATRIX happen to represent the indices in question). only one of the two is altered.39).1.
Tables 11.
) The broad question of how to invert a general matrix C. the null matrix has no such inverse.48)
This rule emerges upon repeated application of (11. First. INVERSION AND SIMILARITY (INTRODUCTION)
257
First encountered in § 11. AC = C[C −1 AC].9 speak further of this. Matrix inversion is not for elementary operators only. though.11.
Many more general matrices C also have inverses such that

    C−1C = I = CC−1.                                        (11.45)

(Do all matrices have such inverses? No; the null matrix, for example, has no such inverse. The broad question of how to invert a general matrix C we leave for Chs. 12 and 13 to address.) For the moment, we should like to observe three simple rules involving matrix inversion.

First, nothing in the logic leading to (11.44) actually requires the matrix T there to be an elementary operator; any matrix C for which C−1 is known can fill the role:

    CA = [CAC−1]C,
    AC = C[C−1AC].                                          (11.46)

The transformation CAC−1 or C−1AC is called a similarity transformation. Sections 12.2 and 14.9 speak further of this.

Second, the matrix inverse of a product is the reverse product of the matrix inverses,

    (C1C2 · · · Ck)−1 = Ck−1 · · · C2−1C1−1.                 (11.48)

This rule emerges upon repeated application of (11.45), which yields

    Ck−1 · · · C2−1C1−1 C1C2 · · · Ck = I = C1C2 · · · Ck Ck−1 · · · C2−1C1−1.

Third, inversion commutes with transposition and with adjoint,

    (CT)−1 = (C−1)T = C−T,
    (C∗)−1 = (C−1)∗ = C−∗,                                   (11.47)

where C−∗ is condensed notation for conjugate transposition and inversion in either order and C−T is of like style. This is so since for conjugate transposition

    (C−1)∗C∗ = (CC−1)∗ = [I]∗ = I = [I]∗ = (C−1C)∗ = C∗(C−1)∗,

and similarly for nonconjugate transposition.
In either notation. one can achieve any desired permutation (§ 4. This is the rank-r inverse.4 summarizes.
11. 2.45).31). (11. (11. . a matrix C −1(r) such that C −1(r) C = Ir = CC −1(r) .) Table 11.2. with which many or most readers will already be familiar. beginning
. we shall first learn how to compute it reliably in Ch. THE MATRIX
Table 11.49)
The full notation C −1(r) is not standard and usually is not needed. usually is not needed. 13.48) apply equally for the rank-r inverse as for the infinite-dimensional inverse. For example. (The properties work equally for C −1(r) as for C −1 if A honors an r ×r active region. not just adjacent pairs). .258
CHAPTER 11. Because of (11. 3. By successively interchanging pairs of the objects (any pairs. The full notation C −1(r) for the rank-r inverse incidentally is not standard. 12.4: Matrix inversion properties. one can abbreviate the notation to C −1 . This is a famous way to use the inverse. and normally is not used.47) and (11.2 uses the rank-r inverse to solve an exactly determined linear system. . (11. n.46) too applies for the rank-r inverse if A's active region is limited to r × r.) C −1 C = C −1(r) C = CT C∗
−1 −1
I Ir C −∗
= CC −1 = CC −1(r) = = C −1 C −1
T ∗
= C −T =
CA = [CAC −1 ]C AC = C[C −1 AC]
−1
Ck
k
=
k
−1 Ck
A more limited form of the inverse exists than the infinite-dimensional form of (11.1). (Section 13. When so. since the context usually implies the rank. but before using it so in Ch.6
Parity
Consider the sequence of integers or other objects 1. .
some pairs will appear in correct order with respect to one another, while others will appear in incorrect order. (In 3, 5, 1, 4, 2, for example, the pair [1, 2] appears in correct order in that the larger 2 stands to the right of the smaller 1; but the pair [1, 3] appears in incorrect order in that the larger 3 stands to the left of the smaller 1.) If p is the number of pairs which appear in incorrect order (in the example, p = 6), and if p is even, then we say that the permutation has even or positive parity; if odd, then odd or negative parity.22

Now contemplate all possible pairs:

    (1, 2) (1, 3) (1, 4) · · · (1, n)
           (2, 3) (2, 4) · · · (2, n)
                  (3, 4) · · · (3, n)
                         · · ·
                               (n − 1, n).

Every interchange of adjacent elements must either increment or decrement p by one, thus reversing parity. Why? Well, think about it. If two elements are adjacent and their order is correct, then interchanging falsifies the order; if the order is incorrect, then interchanging rectifies the order. Either way, an adjacent interchange alters p by exactly ±1, but only for that pair (no other element interposes, thus the interchange affects the ordering of no other pair).

What about nonadjacent elements? Does interchanging a pair of these reverse parity, too? To answer the question, let u and v represent the two elements interchanged, with a1, a2, . . ., am the elements lying between. Before the interchange:

    . . ., u, a1, a2, . . ., am−1, am, v, . . .

After the interchange:

    . . ., v, a1, a2, . . ., am−1, am, u, . . .

The interchange reverses with respect to one another just the pairs

    (u, a1) (u, a2) · · · (u, am−1) (u, am)
    (a1, v) (a2, v) · · · (am−1, v) (am, v)
    (u, v).

22 For readers who learned arithmetic in another language than English: the even integers are . . ., −4, −2, 0, 2, 4, 6, . . .; the odd integers are . . ., −3, −1, 1, 3, 5, 7, . . .
1 and 14. This does not change parity. the net change in p apparently also is odd. explained in § 11. reversing parity. there are some extra rules. The sole exception arises when an element is interchanged with itself. The rows or columns of a matrix can be considered elements in a sequence. so in parity calculations we ignore it.7. more complicated than elementary operators but less so than arbitrary matrices. interchanging rows or columns and thus reversing parity. however. then q T[ik ↔jk ] = I. it is possible that q T[ik ↔jk ] = I k=1 k=1 if q is even. called in this book the quasielementary operators. With respect to interchange and scaling.7. However. even q implies even p. which means even (positive) parity. Since each reversal alters p by ±1. It seems that regardless of how distant the pair. any sequences of elementaries of the respective kinds are allowed.41) are always invertible. We discuss parity in this. as explained in footnote 19.1. If so.260
CHAPTER 11. With respect to addition. scaling and addition—to match the three kinds of elementary. but it does not change anything else. so one finds it convenient to define an intermediate class of operators. We shall have more to say about parity in §§ 11.
The three subsections which follow respectively introduce the three kinds of quasielementary operator. Such complicated operators are not trivial to analyze. which means odd (negative) parity.3. acts precisely in the manner described. either. A quasielementary operator is composed of elementaries only of a single kind.4. THE MATRIX
The number of pairs reversed is odd. It follows that if ik = jk and q is odd. a chapter on matrices. then the interchange operator T[i↔j] .
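The parity argument lends itself to a direct computational check (an aside of mine; the function name is hypothetical): counting the pairs in incorrect order and confirming that interchanging any two elements, however distant, reverses parity.

```python
from itertools import combinations

def incorrect_pairs(perm):
    """p: the number of pairs appearing in incorrect order,
    i.e. with the larger element standing to the left."""
    return sum(1 for (i, j) in combinations(range(len(perm)), 2)
               if perm[i] > perm[j])

perm = [3, 5, 1, 4, 2]
p = incorrect_pairs(perm)          # the text's example: p = 6, even parity

# Interchanging any pair of elements, adjacent or not, flips p
# between even and odd.
for i in range(len(perm)):
    for j in range(i + 1, len(perm)):
        q = perm[:]
        q[i], q[j] = q[j], q[i]
        assert incorrect_pairs(q) % 2 != p % 2
```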
. There are thus three kinds of quasielementary—interchange. one can form much more complicated operators.7
The quasielementary operator
Multiplying sequences of the elementary operators of § 11. because parity concerns the elementary interchange operator of § 11. i = j.
23
This is why some authors forbid self-interchanges. which per (11.
11.4.23 All other interchanges reverse parity. odd q implies odd p. In any event. interchanging any pair of elements reverses the permutation's parity.
as row operators k to C. (11. Equation (11. to build any unit triangular matrix desired.62) immediately and directly.31 then conveniently. A3 and so on. rightward.
31
.63) is most easily seen if the several L[j] and U[j] are regarded as column operators acting sequentially on I:
∞ j=−∞ ∞ j=−∞
L = (I) U = (I)
L[j] . U[j] .11. but just thinking about how L[j] adds columns leftward and U[j] . without calculation. whereas (C)( Qk Ak ) ' applies first A1 .63) follows at once.
The reader can construct an inductive proof symbolically on this basis without too much difficulty if desired.1
11.8.1  Construction

To make a unit triangular matrix is straightforward:

    L = ∏∞ j=−∞ L[j],
    U = ∏∞ j=−∞ U[j].                                        (11.62)

So long as the multiplication is done in the order indicated,31 then conveniently,

    [L]ij = [L[j]]ij,
    [U]ij = [U[j]]ij,                                        (11.63)

which is to say that the entries of L and U are respectively nothing more than the relevant entries of the several L[j] and U[j]. Equation (11.63) enables one to use (11.62) immediately and directly, without calculation, to build any unit triangular matrix desired.

31 Recall again from § 2.3 that a product ∏k Ak grows leftward, · · · A3A2A1, whereas the reversed product grows rightward, A1A2A3 · · · . The former applies first A1, then A2, A3 and so on, as row operators to C; the latter applies first A1, then A2, A3 and so on, as column operators to C. The two product notations can thus be thought of respectively as row and column sequencers.
Inasmuch as this is true.64)
is another unit triangular matrix of the same type. But this is just [L1 L2 ]ij = 0 if i < j.62).2
The product of like unit triangular matrices
The product of like unit triangular matrices.64).63) supplies the specific quasielementaries.268
CHAPTER 11. and [L2 ]mj is null when m < j. [L1 L2 ]ij = i m=j [L1 ]im [L2 ]mj if i = j. if i ≥ j.3
Inversion
Inasmuch as any unit triangular matrix can be constructed from addition quasielementaries by (11. Therefore. [L1 ]ij [L2 ]ij = [L1 ]ii [L2 ]ii = (1)(1) = 1 if i = j.59) gives the inverse of each
.8. Hence (11.
But as we have just observed. (11. U1 U2 = U. [L1 ]ij or [L2 ]ij = 1 if i = j. [L1 L2 ]ij =
∞ m=−∞
[L1 ]im [L2 ]mj . L1 L2 = L. one starts from a form of the definition of a unit lower triangular matrix: 0 if i < j.
which again is the very definition of a unit lower triangular matrix.57) or (11. THE MATRIX
11. and inasmuch as (11. In the unit lower triangular case. [L1 ]im is null when i < m. [L1 L2 ]ij = 0
i m=j [L1 ]im [L2 ]mj
if i < j. nothing prevents us from weakening the statement to read 0 if i < j. inasmuch as (11. Then.
This is called the Jacobian derivative, the Jacobian matrix, or just the Jacobian.34 Each of its columns is the derivative with respect to one element of x. The Jacobian derivative of a vector with respect to itself is

    dx/dx = I.                                              (11.78)

The derivative is not In as one might think, because, even if x has only n elements, still, one could vary xn+1 in principle, and ∂xn+1/∂xn+1 ≠ 0.

The Jacobian derivative obeys the derivative product rule (4.25) in the form35

    d(gT Af)/dx = gT A (df/dx) + [(dg/dx)T Af]T,
    d(g∗ Af)/dx = g∗ A (df/dx) + [(dg/dx)∗ Af]T,            (11.79)

valid for any constant matrix A, as is seen by applying the definition (4.19) of the derivative, which here is

    ∂(g∗ Af)/∂xj = lim ∂xj→0 [(g + ∂g/2)∗ A (f + ∂f/2) − (g − ∂g/2)∗ A (f − ∂f/2)] / ∂xj,

and simplifying.

The shift operator of § 11.9 and the Jacobian derivative of this section complete the family of matrix rudiments we shall need to begin to do increasingly interesting things with matrices in Chs. 12, 13 and 14. Before doing interesting things, however, we must treat two more foundational matrix matters. The two are the Gauss-Jordan decomposition and the matter of matrix rank, which will be the subjects of Ch. 12, next.

34 [52, "Jacobian," 00:50, 15 Sept. 2007]
35 Notice that the last term on (11.79)'s second line is transposed, not adjointed.
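Both the Jacobian's column-by-column definition and the identity dx/dx = I can be verified by finite differences. The `jacobian` helper below is an illustrative sketch of mine, not the book's notation:

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Numerical Jacobian: column j holds the central-difference
    derivative of f with respect to element j of x."""
    fx = np.asarray(f(x))
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = h
        J[:, j] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * h)
    return J

A = np.array([[1.0, 2.0], [0.0, 3.0], [4.0, 1.0]])   # constant 3x2 matrix
f = lambda x: A @ x                                   # linear map: d(Ax)/dx = A
x0 = np.array([0.7, -1.2])

assert np.allclose(jacobian(f, x0), A, atol=1e-5)
# dx/dx = I (restricted here to the finite dimensions in play):
assert np.allclose(jacobian(lambda x: x, x0), np.eye(2), atol=1e-5)
```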
Chapter 12

Matrix rank and the Gauss-Jordan decomposition

Chapter 11 has brought the matrix and its rudiments, the latter including

• the lone-element matrix E (§ 11.3.7);
• the null matrix 0 (§ 11.3.1);
• the general identity matrix I and the scalar matrix λI (§ 11.3.5);
• the rank-r identity matrix Ir (§ 11.3.5);
• the elementary operator T (§ 11.4);
• the quasielementary operator P, D, L[k] or U[k] (§ 11.7); and
• the unit triangular matrix L or U (§ 11.8).

Such rudimentary forms have useful properties, as we have seen. The general matrix A does not necessarily have any of these properties, but it turns out that one can factor any matrix whatsoever into a product of rudiments which do have the properties, and that several orderly procedures are known to do so. The simplest of these, and indeed one of the more useful, is the Gauss-Jordan decomposition. This chapter explains.

Section 11.3.3 has deëmphasized the concept of matrix dimensionality m × n, supplying in its place the new concept of matrix rank. However, that section has actually defined rank only for the rank-r identity matrix Ir. In fact all matrices have rank. This chapter introduces it.
Technically the property applies to scalars. the several ak are linearly independent iff α1 a1 + α2 a2 + α3 a3 + · · · + αn an = 0 (12. we shall find it helpful to prepare two preliminaries thereto: (i) the matter of the linear independence of vectors. the chapter demands more rigor than one likes in such a book as this. More formally. a4 . Significantly but less obviously. . by contrast. too. That is. any nonzero scalar alone is linearly independent—but there is no such thing as a linearly independent pair of scalars. a3 . there is also no such thing as a linearly independent set which includes the null vector. and (ii) the elementary similarity transformation. a2 . strictly speaking. Except in § 12. 13 and 14. Paradoxically. One
. at least one of which is nonzero (trivial αk . n = 1 set consisting only of a1 = 0 is. where "nontrivial αk " means the several αk . a5 .1
Linear independence
Linear independence is a significant possible property of a set of vectors— whether the set be the several columns of a matrix. (12. RANK AND THE GAUSS-JORDAN
Before treating the Gauss-Jordan decomposition and the matter of matrix rank as such. . We shall drive through the chapter in as few pages as can be managed. For consistency of definition. Vectors which can combine nontrivially to reach the null vector are by definition linearly dependent. However. The chapter begins with these.2.1)
for all nontrivial αk .280
CHAPTER 12. on the technical ground that the only possible linear combination of the empty set is trivial. inasmuch as a scalar resembles a one-element vector— so. because one of the pair can always be expressed as a complex multiple of the other.1) forbids it. not linearly independent.1
1
This is the kind of thinking which typically governs mathematical edge cases. the several rows. Linear independence is a property of vectors. however. would be α1 = α2 = α3 = · · · = αn = 0). n = 0 set as linearly independent. and then onward to the more interesting matrix topics of Chs. and logically the chapter cannot be omitted. the n vectors of the set {a1 . an } are linearly independent if and only if none of them can be expressed as a linear combination—a weighted sum—of the others. A vector is linearly independent if its role cannot be served by the other vectors in the set. or some other vectors—the property being defined as follows. . . it is hard to see how to avoid the rigor here. even the singlemember. we regard the empty.
12.
If a linear combination of several independent vectors ak forms a vector b, then one might ask: can there exist a different linear combination of the same vectors ak which also forms b? That is, if

    β1a1 + β2a2 + β3a3 + · · · + βnan = b,

then is

    β1′a1 + β2′a2 + β3′a3 + · · · + βn′an = b

possible? To answer the question, suppose that it were possible. The difference of the two equations then would be

    (β1′ − β1)a1 + (β2′ − β2)a2 + (β3′ − β3)a3 + · · · + (βn′ − βn)an = 0.

According to (12.1), this could only be so if the coefficients in the last equation were trivial, that is, only if β1′ − β1 = 0, β2′ − β2 = 0, β3′ − β3 = 0, . . ., βn′ − βn = 0. But this says no less than that the two linear combinations, which we had supposed to differ, were in fact one and the same. One concludes therefore that, if a vector b can be expressed as a linear combination of several linearly independent vectors ak, then it cannot be expressed as any other combination of the same vectors. The combination is unique.

Linear independence can apply in any dimensionality, but it helps to visualize the concept geometrically in three dimensions, using the three-dimensional geometrical vectors of § 3.3. Two such vectors are independent so long as they do not lie along the same line. A third such vector is independent of the first two so long as it does not lie in their common plane. A fourth such vector (unless it points off into some unvisualizable fourth dimension) cannot possibly then be independent of the three.

We discuss the linear independence of vectors in this, a chapter on matrices, because (§ 11.1) a matrix is essentially a sequence of vectors, either of column vectors or of row vectors, depending on one's point of view. As we shall see in § 12.5, the important property of matrix rank depends on the number of linearly independent columns or rows a matrix has.

1 One could define the empty set to be linearly dependent if one really wanted to, but what then of the observation that adding a vector to a linearly dependent set never renders the set independent? Surely in this light it is preferable just to define the empty set as independent in the first place. Similar thinking makes 0! = 1, and 2 not 1 the least prime, among other examples.
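Numerically, the linear independence of a set of vectors is conveniently tested through matrix rank, the very concept this chapter develops. As a sketch (mine, with a hypothetical helper name), numpy's built-in rank routine serves:

```python
import numpy as np

a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])
a3 = np.array([0.0, 0.0, 1.0])
a4 = np.array([1.0, 2.0, 3.0])   # necessarily a combination of a1, a2, a3

def independent(*vectors):
    """True iff no vector in the set can be expressed as a linear
    combination of the others."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

assert independent(a1, a2, a3)
assert not independent(a1, a2, a3, a4)   # four vectors, three dimensions
assert not independent(np.zeros(3))      # no set including 0 is independent
```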
1). without fundamentally altering the character of the quasielementary or unit triangular matrix.
. The symbols P ′ . L and U do. m × n matrix A is2 A = G> Ir G< = P DLU Ir KS. The similarity transformation has several interesting properties. typically merging D into L and omitting K and S. but they reveal little or nothing sketching the matrices does not. D.282
CHAPTER 12.3
The Gauss-Jordan decomposition
The Gauss-Jordan decomposition of an arbitrary. only not necessarily the same ones P .2
The elementary similarity transformation
Section 11. Perhaps the reader will agree that the decomposition is cleaner as presented here. RANK AND THE GAUSS-JORDAN
12.1 obtain. and sometimes without altering the matrix at all. The rules find use among other places in the Gauss-Jordan decomposition of § 12. dimension-limited. Most of the table's rules are fairly obvious if the meaning of the symbols is understood. D.3.
Most introductory linear algebra texts this writer has met call the Gauss-Jordan decomposition instead the "LU decomposition" and include fewer factors in it. Of course rigorous symbolic proofs can be constructed after the pattern of § 11.2)
G> ≡ P DLU. The symbols P . where • P and S are general interchange operators (§ 11. the several rules of Table 12.7. They also omit Ir . though to grasp some of the rules it helps to sketch the relevant matrices on a sheet of paper.8. D ′ . some of which we are now prepared to discuss.8. L and U of course represent the quasielementaries and unit triangular matrices of §§ 11. C = T .46) have introduced the similarity transformation CAC −1 or C −1 AC. since their matrices have pre-defined dimensionality.2. which arises when an operator C commutes respectively rightward or leftward past a matrix A. particularly in the case in which the operator happens to be an elementary. In this case.5 and its (11. L′ and U ′ also represent quasielementaries and unit triangular matrices.7 and 11. G< ≡ KS.
12.
2
(12. The rules permit one to commute some but not all elementaries past a quasielementary or unit triangular matrix.
RANK AND THE GAUSS-JORDAN • D is a general scaling operator (§ 11. > < S −1 K −1 = G−1 . > the Gauss-Jordan's complementary form.3)
12. We shall meet some of the others in §§ 13. The equation itself is easy enough to read.1
Motive
Equation (12. is the transpose of a parallel unit lower triangular matrix. for example). one can left-multiply the equation A = G> Ir G< by G−1 and right> multiply it by G−1 to obtain < U −1 L−1 D−1 P −1 AS −1 K −1 = G−1 AG−1 = Ir . 14.3." The letter G itself can be regarded as standing for "Gauss-Jordan.2) seems inscrutable. but just as there are many ways to factor a scalar (0xC = [4][3] = [2]2 [3] = [2][6].284
CHAPTER 12. there are likewise many ways to factor a matrix.6.8. • K=L being thus a parallel unit upper triangular matrix (§ 11. However—at least for matrices which do have one—because G> and G< are composed of invertible factors.2).7. 14. and • r is an unspecified rank.8). (12. in fact) is a matter this section addresses." but admittedly it is chosen as much because otherwise we were running out of available Roman capitals!
3
.12. The Gauss-Jordan decomposition we meet here however has both significant theoretical properties and useful practical applications.10 and 14. and in any case needs less advanced preparation to appreciate
One can pronounce G> and G< respectively as "G acting rightward" and "G acting leftward. Whether all possible matrices A have a Gauss-Jordan decomposition (they do. • G> and G< are composites3 as defined by (12. • L and U are respectively unit lower and unit upper triangular matrices (§ 11. Why choose this particular way? There are indeed many ways.10.4).
{r}T
The Gauss-Jordan decomposition is also called the Gauss-Jordan factorization.2). < U −1 L−1 D −1 P −1 = G−1 .
for which A−1 A = In . that is only for square A.49. how to determine it is not immediately obvious. where A is known and A−1 is to be determined. it is only supposed here that A−1 A = In .3. left-multiplying by In and observing that In = In . itself constitutes A−1 .
which implies that
A−1 = (In )
T . (The A−1 here is the A−1(n) of eqn. it is not yet claimed that AA−1 = In .
(In )
T (A) = In . and (at least as developed in this book) precedes the others logically. then we shall have that
T (A) = In . so for the moment let us suppose a square A for which A−1 does exist. and even if it does exist. The matrix A−1 such that A−1 A = In may or may not exist (usually it does exist if A is square. as we shall soon see). When In is finally achieved. This observation is what motivates the Gauss-Jordan decomposition. what if A is not square? In the present subsection however we are not trying to prove anything. each of which makes the matrix more nearly resemble In .6) to n × n dimensionality.
. truncated (§ 11. if one can determine A−1 . It emerges naturally when one posits a pair of square. THE GAUSS-JORDAN DECOMPOSITION
285
than the others. only to motivate.
The product of elementaries which transforms A to In .3.12.) To determine A−1 is not an entirely trivial problem. 11. A and A−1 . However. n × n matrices. And still. and let us seek A−1 by left-multiplying A by a sequence T of elementary row operators. but even then it may not.
2 or.
Intervening factors are multiplied by both T and T −1 .3. step 12. so if the reader will accept the other factors and suspend judgment on K until the actual need for it emerges in § 12. By successive elementary operations. it is (12. ˜ Each step of the transformation goes as follows. (12. the A on the right is gradually transformed into Ir . It begins with the equation A = IIIIAII. U . particularly to avoid dividing by zero when they encounter a zero in an inconvenient cell of the matrix (the reader might try reducing A = [0 1. The matrix I is left. this factor comes into play when A has broad rectangular rather than square shape. are all I. for instance. In between. elementary by elementary.
. which is (12. S and K in the first place? The answer regarding P and S is that these factors respectively gather row and column interchange elementaries. where the six I hold the places of the six Gauss-Jordan factors P . but only in the special case that r = 2 and P = S = K = I—which begs the question: why do we need the factors P .2.
12.or left-multiplied by T −1 . K and S of (12. we are not quite ready to detail yet. the equation is represented as ˜ ˜ ˜ ˜ ˜˜ ˜ A = P D LU I K S.2). D. For example. THE GAUSS-JORDAN DECOMPOSITION
287
Now. a row or column interchange is needed here).3. of which the example given has used none but which other examples sometimes need or want. but at present we are only motivating not proving.2).2
Method
The Gauss-Jordan decomposition of a matrix A is not discovered at one stroke but rather is gradually built up.3. the equation A = DLU I2 is not (12..2). Regarding K.4) ˜ ˜ ˜ where the initial value of I is A and the initial values of P .2)—or rather. The decomposition thus ends with the equation A = P DLU Ir KS. etc. To compensate. which multiplication constitutes an elementary similarity transformation as described in § 12. and also sometimes when one of the rows of A happens to be a linear combination of the others. while the several matrices are gradually being transformed.12. L. 1 0] to I2 .or right-multiplied by an elementary operator T . The last point. admittedly. ˜ ˜ A = P DT(1/α)[i] ˜ Tα[i] LT(1/α)[i] ˜ Tα[i] U T(1/α)[i] ˜ ˜˜ Tα[i] I K S. one of the six factors is right. while the six I are gradually transformed into the six Gauss-Jordan factors. then we shall proceed on this basis. D.3.
3
The algorithm
Having motivated the Gauss-Jordan decomposition in § 12.3.1 and having proposed a basic method to pursue it in § 12.3.2, we shall now establish a definite, orderly, failproof algorithm to achieve it. Broadly, the algorithm

• copies A into the variable working matrix Ĩ (step 1 below);
• reduces Ĩ by suitable row (and maybe column) operations to unit upper triangular form (steps 2 through 7);
• establishes a rank r (step 8); and
• reduces the now unit triangular Ĩ further to the rank-r identity matrix Ir (steps 9 through 13).

The algorithm decrees the following steps. (The steps as written include many parenthetical remarks—so many that some steps seem to consist more of parenthetical remarks than of actual algorithm. The remarks are unnecessary to execute the algorithm's steps as such. They are however necessary to explain and to justify the algorithm's steps to the reader.)

1. Begin by initializing

P̃ ← I, D̃ ← I, L̃ ← I, Ũ ← I, Ĩ ← A, K̃ ← I, S̃ ← I,
i ← 1,

where Ĩ holds the part of A remaining to be decomposed, where the others are the variable working matrices of (12.4), and where i is a row index. (The eventual goal will be to factor all of Ĩ away, leaving Ĩ = Ir, though the precise value of r will not be known until step 8.) Since A by definition is a dimension-limited m × n matrix, one naturally need not store A beyond the m × n active region. The other six variable working matrices each have extended-operational form, but they also confine their activity to well defined regions: m × m for P̃, D̃, L̃ and Ũ; n × n for K̃ and S̃. One need store none of the matrices beyond these bounds. What is less clear until one has read the whole algorithm, but nevertheless true, is that one also need not store the dimension-limited Ĩ beyond the m × n active region.

2. (Besides arriving at this point from step 1 above, the algorithm also reënters here from step 7 below.) Observe that neither the ith row of Ĩ nor any row below it has an entry left of the ith column, that Ĩ is all-zero below-leftward of and directly leftward of (though not directly below) the pivot element ĩii. Observe also that above the ith row, the matrix has proper unit upper triangular form (§ 11.8). Regarding the other factors, notice that L̃ enjoys the major partial unit triangular form (§ 11.8). From step 1, Ĩ = A and L̃ = I, so this step 2 though logical seems unneeded here; the need grows clear once one has read through step 7.

Footnote 5: The notation ĩii looks interesting, but this is accidental. The ˜ relates not to the doubled, subscribed index ii but to Ĩ. The notation ĩii thus means [Ĩ]ii—in other words, it means the current iith element of the variable working matrix Ĩ. See § 11.8.
12. Use the now conveniently elementarized columns of Ĩ's main body to suppress the extra columns on its right edge by

Ĩ ← Ĩ ∏_{q=r+1}^{n} ∏_{p=1}^{r} T−ĩpq[pq],

K̃ ← [ ∏_{q=r+1}^{n} ∏_{p=1}^{r} Tĩpq[pq] ] K̃.

(Actually, entering this step, it was that K̃ = I, so in fact K̃ becomes just the product above. As in steps 6 and 10, here again it is not necessary actually to apply the addition elementaries one by one. Together they easily form a parallel unit upper—not lower—triangular matrix L{r}T, § 11.8.)

13. Notice that Ĩ = Ir. Let

P ≡ P̃, D ≡ D̃, L ≡ L̃, U ≡ Ũ, K ≡ K̃, S ≡ S̃.

End. Never stalling, the algorithm cannot fail to achieve Ĩ = Ir and thus a complete Gauss-Jordan decomposition of the form (12.2), though what value the rank r might turn out to have is not normally known to us in advance. (We have not yet proven, but shall in § 12.5, that the algorithm always produces the same Ir, the same rank r ≥ 0, regardless of which pivots ĩpq ≠ 0 one happens to choose in step 3 along the way.)

12.3.4 Rank and independent rows

Observe that the Gauss-Jordan algorithm of § 12.3.3 operates always within the bounds of the original m × n matrix A. Therefore, necessarily,

r ≤ m,
r ≤ n.   (12.5)

The rank r exceeds the number neither of the matrix's rows nor of its columns. This is unsurprising. Indeed the narrative of the algorithm's step 8 has already noticed the fact.

Observe also however that the rank always fully reaches r = m if the rows of the original matrix A are linearly independent. The reason for this observation is that the rank can fall short, r < m, only if step 3 finds a null row i ≤ m; but step 3 can find such a null row only if step 6 has created one (or if there were a null row in the original matrix A; but according to § 12.1, such null rows never were linearly independent in the first place). How do we know that step 6 can never create a null row? We know this because the action of step 6 is to add multiples only of current and earlier pivot rows to rows in Ĩ which have not yet been on pivot. Such action has no power to cancel the independent rows it targets.

Footnote 8: If the truth of the sentence's assertion regarding the action of step 6 seems nonobvious, one can drown the assertion rigorously in symbols to prove it, but before going to that extreme consider: The action of steps 3 and 4 is to choose a pivot row p ≥ i and to shift it upward to the ith position. The action of step 6 then is to add multiples of the chosen pivot row downward only—that is, only to rows which have not yet been on pivot. In the second iteration, steps 3 and 4 find no unmixed rows available to choose as second pivot, but find only rows which already include multiples of the first pivot row. Step 6 in the second iteration therefore adds multiples of the second pivot row downward, which already include multiples of the first pivot row. Step 6 in the ith iteration adds multiples of the ith pivot row downward, which already include multiples of the first through (i − 1)th. So it comes to pass that multiples only of current and earlier pivot rows are added to rows which have not yet been on pivot. To no row is ever added, directly or indirectly, a multiple of itself—until step 10, which does not belong to the algorithm's main loop and has nothing to do with the availability of nonzero rows to step 3.
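Both observations can be checked on small examples. The routine below is a simplified exact-arithmetic stand-in for the thirteen-step algorithm, not a transcription of it: row interchanges play the part of P, pivot scaling of D, and the addition elementaries of L and U, with the pivot count serving as the rank r.

```python
from fractions import Fraction

def gauss_jordan_rank(A):
    """Count pivots while row-reducing an exact copy of A."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # no usable pivot in this column
        M[r], M[pivot] = M[pivot], M[r]   # row interchange (the P factor)
        p = M[r][c]
        M[r] = [x / p for x in M[r]]      # scale the pivot row to 1 (D)
        for i in range(m):
            if i != r and M[i][c] != 0:   # clear the column (L and U)
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# (12.5): the rank exceeds neither the row count nor the column count
tall = [[1, 2], [3, 4], [5, 6], [7, 8]]
assert gauss_jordan_rank(tall) <= min(len(tall), len(tall[0]))

# independent rows always give r = m ...
assert gauss_jordan_rank([[1, 0, 2], [0, 1, 3]]) == 2
# ... while a row that is a linear combination of another costs a pivot
assert gauss_jordan_rank([[1, 2, 3], [2, 4, 6], [0, 0, 1]]) == 2
```

Note that the first example also needs a row interchange when a zero lands on the pivot, just as the motivation of P anticipated.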
12.3.5 Inverting the factors
Inverting the six Gauss-Jordan factors is easy. Sections 11.7 and 11.8 have shown how. Each of the six factors—P, D, L, U, K and S—is composed of a sequence T of elementary operators. Each of the six inverse factors—P−1, D−1, L−1, U−1, K−1 and S−1—is therefore composed of the reverse sequence T−1 of inverse elementary operators. If one merely records the sequence of elementaries used to build each of the six factors—if one reverses each sequence, inverts each elementary, and multiplies—then the six inverse factors result.

One need not however go even to that much trouble. Indeed, it isn't even that hard. One actually need not record the individual elementaries; one can invert, multiply and forget them in stream. This means starting the algorithm from step 1 with six extra variable working matrices (besides the seven already there):

P̃−1 ← I; D̃−1 ← I; L̃−1 ← I; Ũ−1 ← I; K̃−1 ← I; S̃−1 ← I.

(There is no Ĩ−1, not because it would not be useful, but because its initial value would be A−1(r), unknown at algorithm's start.) Then, for each operation on any of P̃, D̃, L̃, Ũ, K̃ or S̃, one operates inversely on the corresponding inverse matrix. For example, when the algorithm does in step 5

D̃ ← D̃ Tĩii[i],
L̃ ← T(1/ĩii)[i] L̃ Tĩii[i],
Ĩ ← T(1/ĩii)[i] Ĩ,

it does also

D̃−1 ← T(1/ĩii)[i] D̃−1,
L̃−1 ← T(1/ĩii)[i] L̃−1 Tĩii[i].

With this simple extension, the algorithm yields all the factors not only of the Gauss-Jordan decomposition (12.2) but simultaneously also of the Gauss-Jordan's complementary form (12.3).

Footnote 9: Section 11.3.5 explains the notation.
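The record-reverse-invert rule is easy to check directly. The sketch below is illustrative only; the elementary constructors are invented for the example. It builds a factor F from three elementary row operators and its inverse from the same sequence of inverse elementaries accumulated on the other side, which amounts to the reversed product.

```python
from fractions import Fraction

def mm(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def eye(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def scale(n, i, a):        # T_a[i]: scale row i by a
    T = eye(n); T[i][i] = Fraction(a); return T

def add(n, i, j, a):       # T_a[ij]: add a times row j to row i
    T = eye(n); T[i][j] = Fraction(a); return T

def swap(n, i, j):         # interchange elementary
    T = eye(n)
    T[i][i] = T[j][j] = Fraction(0)
    T[i][j] = T[j][i] = Fraction(1)
    return T

n = 3
T1, T2, T3 = swap(n, 0, 2), scale(n, 1, 5), add(n, 2, 0, -7)
T1i, T2i, T3i = swap(n, 0, 2), scale(n, 1, Fraction(1, 5)), add(n, 2, 0, 7)

# the factor built in stream: F = T3 T2 T1
F = eye(n)
for T in (T1, T2, T3):
    F = mm(T, F)

# the inverse built in stream from the same sequence, multiplied on the
# other side in the same order, which yields the reversed product T1i T2i T3i
Fi = eye(n)
for Ti in (T1i, T2i, T3i):
    Fi = mm(Fi, Ti)

assert mm(F, Fi) == eye(n)
assert mm(Fi, F) == eye(n)
```

Each inverse elementary is trivial to form on the spot—negate the addition, reciprocate the scale, repeat the interchange—which is why one can "invert, multiply and forget them in stream."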
12.3.6 Truncating the factors

None of the six factors of (12.2) actually needs to retain its entire extended-operational form (§ 11.3.2). The four factors on the left, row operators, act wholly by their m × m squares; the two on the right, column operators, by their n × n. Neither Ir nor A has anything but zeros outside the m × n rectangle, so there is nothing for the six operators to act upon beyond those bounds in any event. We can truncate all six operators to dimension-limited forms (§ 11.3.1) for this reason if we want.

To truncate the six operators formally, we left-multiply (12.2) by Im and right-multiply it by In, obtaining

Im A In = Im P DLU Ir KS In.

According to § 11.3.6, the Im and In respectively truncate rows and columns, actions which have no effect on A since it is already a dimension-limited m × n matrix. By successive steps, then,

A = Im P DLU Ir KS In
  = Im⁷ P DLU Ir² KS In³;

and finally, by using (11.31) or (11.42) repeatedly,

A = (Im P Im)(Im D Im)(Im L Im)(Im U Ir)(Ir K In)(In S In),   (12.6)

where the dimensionalities of the six factors on the equation's right side are respectively m × m, m × m, m × m, m × r, r × n and n × n. Equation (12.6) expresses any dimension-limited rectangular matrix A as the product of six particularly simple, dimension-limited rectangular factors. By similar reasoning from (12.2),

A = (Im G> Ir)(Ir G< In),   (12.7)

where the dimensionalities of the two factors are m × r and r × n.

The book will seldom point it out again explicitly, but one can straightforwardly truncate not only the Gauss-Jordan factors but most other factors and operators, too, by the method of this subsection.

Footnote 10: Comes the objection, "Black, why do you make it more complicated than it needs to be? For what reason must all your matrices have infinite dimensionality, anyway? They don't do it that way in my other linear algebra book." It is a fair question. The answer is that this book is a book of applied mathematical theory; and theoretically in the author's view, infinite-dimensional matrices are significantly neater to handle. To append a null row or a null column to a dimension-limited matrix is to alter the matrix in no essential way, nor is there any real difference between T5[21] when it row-operates on a 3 × p matrix and the same elementary when it row-operates on a 4 × p. The dimensionality m × n of the matrix is a distraction; it is the rank r, if any, that counts. Anyway, a matrix displaying an infinite field of zeros resembles a shipment delivering an infinite supply of nothing; one need not be too impressed with either. The two matrix forms of § 11.3 manifest the sense that a matrix can represent a linear transformation, whose rank matters, or a reversible row or column operation, whose rank does not; the extended-operational form serves the latter case. In either case, the relevant theoretical constructs ought to reflect such insights. Hence infinite dimensionality.

12.3.7 Properties of the factors

One would expect such neatly formed operators as the factors of the Gauss-Jordan to enjoy some useful special properties. Indeed they do. Table 12.2 lists a few. The table's properties formally come from (11.52) and Table 11.5; but, if one firmly grasps the matrix forms involved and comprehends the notation (neither of which is trivial to do), and if one sketches the relevant factors schematically with a pencil, then the properties are plainly seen without reference to Ch. 11 as such.

Table 12.2: A few properties of the Gauss-Jordan factors.

P∗ = P−1 = PT        S∗ = S−1 = ST
P−∗ = P−T = P        S−∗ = S−T = S

(K + K−1)/2 = I
K − I = (K − I)(In − Ir) = Ir K(In − Ir)
K−1 − I = (K−1 − I)(In − Ir) = Ir K−1(In − Ir)
0 = (I − In)(K − I) = (K − I)(I − In)
0 = (I − In)(K−1 − I) = (K−1 − I)(I − In)

The table's properties regarding P and S express a general advantage all permutors share. The table's properties regarding K are admittedly less significant, included mostly only because § 13.3 will need them. Still, even the K properties are always true, as one sees if one understands that the operator (In − Ir) is a truncator that selects columns r + 1 through n of the matrix it operates leftward upon. They might find other uses. Further properties of the several Gauss-Jordan factors can be gleaned from the respectively relevant subsections of §§ 11.7 and 11.8.

12.3.8 Marginalizing the factor In

If A happens to be a square, n × n matrix and if it develops that the rank r = n, then one can take advantage of (11.31) to rewrite the Gauss-Jordan decomposition (12.2) in the form

P DLU KS In = A = In P DLU KS,   (12.8)

thus marginalizing the factor In. This is to express the Gauss-Jordan solely in row operations or solely in column operations. It does not change the algorithm and it does not alter the factors; it merely reorders the factors after the algorithm has determined them. It fails however if A is rectangular or r < n.
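Equation (12.7)'s two-factor form can be exhibited concretely. The sketch below uses one standard way of obtaining such a factorization—pivot columns of A for G>, nonzero reduced rows for G<—which is an illustration of the shapes involved, not the book's construction per se.

```python
from fractions import Fraction

def mm(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def reduce_with_pivots(A):
    # exact Gauss-Jordan reduction, returning the reduced rows and the
    # pivot-column indices
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    pivots = []
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        p = M[r][c]
        M[r] = [x / p for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]          # a 3 x 3 matrix of rank 2
R, pivots = reduce_with_pivots(A)
r = len(pivots)

G_gt = [[Fraction(A[i][c]) for c in pivots] for i in range(len(A))]  # m x r
G_lt = R[:r]                                                         # r x n

assert r == 2
assert mm(G_gt, G_lt) == [[Fraction(x) for x in row] for row in A]
```

The m × r and r × n shapes are exactly those (12.7) names: the rank r, not the nominal dimensionality, measures how much matrix there really is.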
12.3.9 Decomposing an extended operator
Once one has derived the Gauss-Jordan decomposition, to extend it to decompose a reversible, n × n extended operator A (where per § 11.3.2 A outside the n × n active region resembles the infinite-dimensional identity matrix I) is trivial. One merely writes

A = P DLU KS,

wherein the Ir has become an I. Or, if you like, one decomposes the n × n dimension-limited matrix In A = In A In = A In as

A In = P DLU In KS = P DLU KS In,

from which, inasmuch as all the factors present but In are n × n extended operators, the preceding equation results.

One can decompose only reversible extended operators so. The Gauss-Jordan fails on irreversible extended operators, for which the rank of the truncated form A In is r < n.

This subsection's equations remain unnumbered because they say little new. Their only point, really, is that what an operator does outside some appropriately delimited active region is seldom interesting, because the vector on which the operator ultimately acts is probably null there in any event. In such a context it may not matter whether one truncates the operator. Since what is found outside the operational domain is often uninteresting, the extended-operational matrix form is hardly more than a formality. All it says is that the extended operator unobtrusively leaves untouched anything it happens to find outside its operational domain, whereas a dimension-limited operator would have truncated whatever it found there.

Footnote 11: If "it may not matter," as the narrative says, then one might just put all matrices in dimension-limited form. Many books do. To put them all in dimension-limited form however brings at least three effects the book you are reading prefers to avoid. First, it leaves shift-and-truncate operations hard to express cleanly (refer to §§ 11.3.6 and 11.9 and, as a typical example of the usage, eqn. 13.7). Second, it confuses the otherwise natural extension of discrete vectors into continuous functions. Third, it leaves one to consider the ranks of reversible operators like T[1↔2] that naturally should have no rank. The last of the three is arguably most significant: matrix rank is such an important attribute that one prefers to impute it only to those operators about which it actually says something interesting.

Regarding the present section as a whole, the Gauss-Jordan decomposition is a significant achievement. It is not the only matrix decomposition—further interesting decompositions include the Gram-Schmidt of Ch. 13 and the diagonal, the Schur and the singular value of Ch. 14, among others—but the Gauss-Jordan nonetheless reliably factors an arbitrary m × n matrix A, which we had not known how to handle very well, into a product of unit triangular matrices and quasielementaries, which we do. We shall put the Gauss-Jordan to good use in Ch. 13. However, before closing the present chapter we should like finally, squarely to define and to establish the concept of matrix rank, not only for Ir but for all matrices. To do that, we shall first need one more preliminary: the technique of vector replacement.
12.4 Vector replacement
Consider a set of m + 1 (not necessarily independent) vectors

{u, a1, a2, . . . , am}.

As a definition, the space these vectors address consists of all linear combinations of the set's several vectors. That is, the space consists of all vectors b formable as

βo u + β1 a1 + β2 a2 + · · · + βm am = b.   (12.9)

Now consider a specific vector v in the space,

ψo u + ψ1 a1 + ψ2 a2 + · · · + ψm am = v,   (12.10)

for which ψo ≠ 0. Solving (12.10) for u, we find that

(1/ψo) v − (ψ1/ψo) a1 − (ψ2/ψo) a2 − · · · − (ψm/ψo) am = u,

which makes u a linear combination of the several vectors of the new set {v, a1, a2, . . . , am}, even as (12.10) makes v a linear combination of the several vectors of the original set {u, a1, a2, . . . , am}. Such vector replacement does not in any way alter the space addressed: the new space is exactly the same as the old.

As a corollary, if the vectors of the original set happen to be linearly independent (§ 12.1), then the vectors of the new set are linearly independent, too; for, if it were that

γo v + γ1 a1 + γ2 a2 + · · · + γm am = 0

for nontrivial γo and γk, then either γo = 0—impossible since that would make the several ak themselves linearly dependent—or γo ≠ 0, in which case v would be a linear combination of the several ak alone. But if v were a linear combination of the several ak alone, then (12.10) would still also explicitly make v a linear combination of the same ak plus a nonzero multiple of u. Yet both combinations cannot be, because according to § 12.1, two distinct combinations of independent vectors can never target the same v. The contradiction proves false the assumption which gave rise to it: that the vectors of the new set were linearly dependent. Hence the vectors of the new set are equally as independent as the vectors of the old.

12.5 Rank

Sections 11.3.5 and 11.3.6 have introduced the rank-r identity matrix Ir, where the integer r is the number of ones the matrix has along its main diagonal. Other matrices have rank, too. Commonly, an n × n matrix has rank r = n, but consider the matrix

[ 5 1 6 ]
[ 3 6 9 ]
[ 2 4 6 ]

The third column of this matrix is the sum of the first and second columns. Also, the third row is just two-thirds the second. Either way, by columns or by rows, the matrix has only two independent vectors. The rank of this 3 × 3 matrix is not r = 3 but r = 2.

This section establishes properly the important concept of matrix rank. The section demonstrates that every matrix has a definite, unambiguous rank, and shows how this rank can be calculated. To forestall potential confusion in the matter, we should immediately observe that—like the rest of this chapter but unlike some other parts of the book—this section explicitly trades in exact numbers. If a matrix element here is 5, then it is exactly 5, not 5.0 ± 0.1; if 0, then exactly 0. Many real-world matrices, of course—especially matrices populated by measured data—can never truly be exact, but that is not the point here. Here, the numbers are exact.
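The rank-2 claim for a matrix of the kind just described—third column the sum of the first two, third row two-thirds the second—is quick to confirm. The rank routine here is a simple exact-arithmetic sketch of the reduction-and-count idea, not the book's thirteen-step algorithm.

```python
from fractions import Fraction

def rank(A):
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        p = M[r][c]
        M[r] = [x / p for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[5, 1, 6],
     [3, 6, 9],
     [2, 4, 6]]

# third column is the sum of the first two ...
assert all(row[2] == row[0] + row[1] for row in A)
# ... and the third row is two-thirds the second
assert [3 * x for x in A[2]] == [2 * x for x in A[1]]

assert rank(A) == 2
```

The exact `Fraction` arithmetic matters: the section trades in exact numbers, and floating-point round-off could otherwise blur the distinction between a zero and a nearly-zero pivot.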
Footnote 12: It is false to suppose that because applied mathematics permits imprecise quantities, like 3.0 ± 0.1 inches for the length of your thumb, it also requires them. On the contrary, the length of your thumb may indeed be 3.0 ± 0.1 inches, but surely no triangle has 3.0 ± 0.1 sides! A triangle has exactly three sides. The ratio of a circle's circumference to its radius is exactly 2π. The author has exactly one brother. A construction contract might require the builder to finish within exactly 180 days (though the actual construction time might be an inexact t = 172.2 days), and so on. Exact quantities are every bit as valid in applied mathematics as imprecise ones are. Where the distinction matters, it is the applied mathematician's responsibility to distinguish between the two kinds of quantity.

12.5.1 A logical maneuver

In § 12.5.2 we shall execute a pretty logical maneuver, which one might name, "the end justifies the means." When embedded within a larger logical construct as in § 12.5.2, the maneuver if unexpected can confuse, so this subsection is to prepare the reader to expect the maneuver.

The logical maneuver follows this basic pattern. If P1 then Q. If P2 then Q. If P3 then Q. Which of P1, P2 and P3 are true is not known, but what is known is that at least one of the three is true: P1 or P2 or P3. Therefore, although one can draw no valid conclusion regarding any one of the three predicates P1, P2 or P3, one can still conclude that their common object Q is true.

One valid way to prove Q, then, would be to suppose P1 and show that it led to Q, then alternately to suppose P2 and show that it separately led to Q, then again to suppose P3 and show that it also led to Q. The final step would be to show somehow that P1, P2 and P3 could not possibly all be false at once. Herein, the means is to assert several individually suspicious claims, none of which one actually means to prove; the end which justifies the means is the conclusion Q, which thereby one can and does prove. It is a subtle maneuver. Once the reader feels that he grasps its logic, he can proceed to the next subsection where the maneuver is put to use.

Footnote 13: The maneuver's name rings a bit sinister, does it not? The author recommends no such maneuver in social or family life! Logically here however it helps the math.
12.5.2 The impossibility of identity-matrix promotion

Consider the matrix equation

A Ir B = Is.   (12.12)

If r ≥ s, then it is trivial to find matrices A and B for which (12.12) holds: A = Is = B, for instance. If r < s, however, it is not so easy. In fact it is impossible. This subsection proves the impossibility. It shows that one cannot by any row and column operations, reversible or otherwise, ever transform an identity matrix into another identity matrix of greater rank (§ 11.3.6).

Equation (12.12) can be written in the form

(A Ir)B = Is,   (12.13)

where, because Ir attacking from the right is the column truncation operator (§ 11.3.6), the product A Ir is a matrix with an unspecified number of rows but only r columns—or, more precisely, with no more than r nonzero columns. Viewed this way, B operates on the r columns of A Ir to produce the s columns of Is.

The r columns of A Ir are nothing more than the first through rth columns of A. Let the symbols a1, a2, a3, a4, a5, . . . , ar denote these columns. The s columns of Is, then, are nothing more than the elementary vectors e1, e2, e3, e4, e5, . . . , es (§ 11.3.7). The claim (12.13) makes is thus that the several vectors ak together address each of the several elementary vectors ej—that is, that a linear combination

b1j a1 + b2j a2 + b3j a3 + · · · + brj ar = ej   (12.14)

exists for each ej, 1 ≤ j ≤ s.

The claim (12.14) will turn out to be false because there are too many ej, but to prove this, we shall assume for the moment that the claim were true. The proof then is by contradiction, and it runs as follows.

Footnote 14: Observe that unlike as in § 12.1, here we have not necessarily assumed that the several ak are linearly independent.

Footnote 15: As the reader will have observed by this point in the book, the technique—also called reductio ad absurdum—is the usual mathematical technique to prove impossibility. One assumes the falsehood to be true, then reasons toward a contradiction which proves the assumption false. Section 6.1 among others has already illustrated the technique, but the technique's use here is more sophisticated.

Consider the elementary vector e1. For j = 1, (12.14) is

b11 a1 + b21 a2 + b31 a3 + · · · + br1 ar = e1,

which says that the elementary vector e1 is a linear combination of the several vectors {a1, a2, a3, a4, a5, . . . , ar}. Because e1 is a linear combination, according to § 12.4 one can safely replace any of the vectors in the set by e1 without altering the space addressed. The only restriction per § 12.4 is that e1 contain at least a little of the vector ak it replaces—that bk1 ≠ 0. Of course there is no guarantee specifically that b11 ≠ 0, so for e1 to replace a1 might not be allowed. However, inasmuch as e1 is nonzero, then according to (12.14) at least one of the several bk1 also is nonzero; and if bk1 is nonzero then e1 can replace ak. Some of the ak might indeed be forbidden, but never all; there is always at least one ak which e1 can replace. (For example, if a1 were forbidden because b11 = 0, then a3 might be available instead because b31 ≠ 0. In this case the new set would be {a1, a2, e1, a4, a5, . . . , ar}.)

Here is where the logical maneuver of § 12.5.1 comes in. The book to this point has established no general method to tell which of the several ak the elementary vector e1 actually contains (§ 13.2 gives the method, but that section depends logically on this one, so we cannot licitly appeal to it here). The vector e1 might contain some of the several ak or all of them, but surely it contains at least one of them. Therefore, even though we have no idea which of the several ak the vector e1 contains, even though it is illegal to replace an ak by an e1 which contains none of it, even though replacing the wrong ak logically invalidates any conclusion which flows from the replacement, still we can proceed with the proof, provided that the false choice and the true choice lead ultimately alike to the same identical end. If they do, then all the logic requires is an assurance that there does exist at least one true choice, even if we remain ignorant as to which choice that is. The claim (12.14) guarantees at least one true choice. Whether as the maneuver also demands, all the choices, true and false, lead ultimately alike to the same identical end remains to be determined.

Now consider the elementary vector e2. According to (12.14), e2 lies in the space addressed by the original set {a1, a2, a3, a4, a5, . . . , ar}. Therefore as we have seen, e2 also lies in the space addressed by the new set {e1, a2, a3, a4, a5, . . . , ar} (or {a1, a2, e1, a4, a5, . . . , ar}, or whatever the new set happens to be). That is, not only do coefficients bk2 exist such that

b12 a1 + b22 a2 + b32 a3 + · · · + br2 ar = e2,

but also coefficients βk2 exist such that

β12 e1 + β22 a2 + β32 a3 + · · · + βr2 ar = e2.

Again it is impossible for all the coefficients βk2 to be zero; but, moreover, it is impossible for β12 to be the sole nonzero coefficient, for (as should seem plain to the reader who grasps the concept of the elementary vector, § 11.3.7) no elementary vector can ever be a linear combination of other elementary vectors alone! The linear combination which forms e2 evidently includes a nonzero multiple of at least one of the remaining ak. At least one of the βk2 attached to an ak (not β12, which is attached to e1) must be nonzero. Therefore by the same reasoning as before, we now choose an ak with a nonzero coefficient βk2 ≠ 0 and replace it by e2, obtaining an even newer set of vectors like

{e1, a2, a3, e2, a5, . . . , ar}.

This newer set addresses precisely the same space as the previous set, thus also as the original set.

And so it goes, replacing one ak by an ej at a time, until all the ak are gone and our set has become

{e1, e2, e3, e4, e5, . . . , er},

which, as we have reasoned, addresses precisely the same space as did the original set {a1, a2, a3, a4, a5, . . . , ar}. All intermediate choices, true and false, ultimately lead to the single conclusion of this paragraph. And this is the one identical end the maneuver of § 12.5.1 has demanded.
The conclusion leaves us with a problem, however. There are more ej, 1 ≤ j ≤ s, than there are ak, 1 ≤ k ≤ r, because, as we have stipulated, r < s. Some elementary vectors ej, r < j ≤ s, are evidently left over. Back at the beginning of the section, the claim (12.14) made was that {a1, a2, a3, a4, a5, . . . , ar} together addressed each of the several elementary vectors ej, 1 ≤ j ≤ s. But as we have seen, this amounts to a claim that {e1, e2, e3, e4, e5, . . . , er} together addressed each of the several elementary vectors ej, 1 ≤ j ≤ s, too. Plainly this is impossible with respect to the left-over ej, r < j ≤ s. The contradiction proves false the claim which gave rise to it. The false claim: that the several ak, 1 ≤ k ≤ r, addressed all the ej, 1 ≤ j ≤ s, even when r < s.

Hence finally we conclude that no matrices A and B exist which satisfy (12.12) when r < s. In other words, we conclude that although row and column operations can demote identity matrices in rank, they can never promote them. The promotion of identity matrices is impossible.

12.5.3 General matrix rank and its uniqueness

Step 8 of the Gauss-Jordan algorithm (§ 12.3.3) discovers a rank r for any matrix A. One should like to think that this rank r were a definite property of the matrix itself rather than some unreliable artifact of the algorithm, but until now we have lacked the background theory to prove it. Now we have the theory. Here is the proof.

The proof begins with a formal definition of the quantity whose uniqueness we are trying to prove.

• The rank r of an identity matrix Ir is the number of ones along its main diagonal.
• The rank r of a general matrix A is the rank of an identity matrix Ir to which A can be reduced by reversible row and column operations.

A matrix A has rank r if and only if matrices B> and B< exist such that

B> A B< = Ir,
A = B>−1 Ir B<−1,
B>−1 B> = I = B> B>−1,
B<−1 B< = I = B< B<−1.   (12.15)

The question is whether in (12.15) only a single rank r is possible. To answer the question, we suppose that another rank were possible, that A had not only rank r but also rank s. Then,

A = B>−1 Ir B<−1,
A = G>−1 Is G<−1.

Combining these equations,

B>−1 Ir B<−1 = G>−1 Is G<−1.

Solving first for Ir, then for Is,

(B> G>−1) Is (G<−1 B<) = Ir,
(G> B>−1) Ir (B<−1 G<) = Is.

Were it that r ≠ s, then one of these two equations would constitute the demotion of an identity matrix and the other, a promotion. But according to § 12.5.2 and its (12.12), promotion is impossible. Therefore r ≠ s is also impossible, and

r = s

is guaranteed. No matrix has two different ranks. Matrix rank is unique.

This finding has two immediate implications:

• Reversible row and/or column operations exist to change any matrix of rank r to any other matrix of the same rank. The reason is that, according to (12.15), reversible operations exist to change both matrices to Ir and back.
• No reversible operation can change a matrix's rank.

The discovery that every matrix has a single, unambiguous rank and the establishment of a failproof algorithm—the Gauss-Jordan—to ascertain that rank have not been easy to achieve, but they are important achievements nonetheless.
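Both implications can be spot-checked numerically with a simple exact-arithmetic rank routine (an illustrative sketch, not the book's machinery): a product A Ir B never exceeds rank r—the promotion § 12.5.2 forbids—while reversible row and column operations leave rank untouched.

```python
from fractions import Fraction

def mm(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def rank(A):
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        p = M[r][c]
        M[r] = [x / p for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

s, r = 3, 2
Ir = [[Fraction(int(i == j and i < r)) for j in range(s)] for i in range(s)]
A = [[Fraction(x) for x in row] for row in ([1, 4, 2], [0, 3, 5], [7, 1, 1])]
B = [[Fraction(x) for x in row] for row in ([2, 0, 1], [1, 1, 0], [5, 2, 3])]

# no choice of A and B promotes Ir: rank(A Ir B) <= r < s, so A Ir B != Is
assert rank(mm(mm(A, Ir), B)) <= r

# reversible operations cannot change rank: add 5 times row 0 to row 2,
# then interchange columns 1 and 2; the rank-2 example keeps rank 2
C = [[Fraction(x) for x in row] for row in ([1, 2, 3], [2, 4, 6], [1, 0, 1])]
T = [[Fraction(int(i == j)) for j in range(s)] for i in range(s)]
T[2][0] = Fraction(5)
Sc = [[Fraction(int(i == j)) for j in range(s)] for i in range(s)]
Sc[1][1] = Sc[2][2] = Fraction(0)
Sc[1][2] = Sc[2][1] = Fraction(1)
assert rank(C) == 2
assert rank(mm(T, mm(C, Sc))) == 2
```

A numerical check of course proves nothing in general; the section's argument is what establishes the impossibility, and the code merely exhibits it on particular matrices.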
12.5.4 The full-rank matrix

The rank r of a matrix can exceed the number neither of the matrix's rows nor of its columns, because a tall or broad matrix cannot but include, respectively, more rows or more columns than Ir. The greatest rank possible for an m × n matrix is the lesser of m and n. A full-rank matrix, then, is defined to be an m × n matrix with maximum rank r = m or r = n—or, if m = n, both. A matrix of less than full rank is a degenerate matrix.

Consider a tall m × n matrix C, m ≥ n, one of whose n columns is a linear combination (§ 12.1) of the others. One could by definition target the dependent column with addition elementaries, using multiples of the other columns to wipe the dependent column out. Having zeroed the dependent column, one could then interchange it over to the matrix's extreme right, effectively throwing the column away, shrinking the matrix to m × (n − 1) dimensionality. Shrinking the matrix necessarily also shrinks the bound on the matrix's rank to r ≤ n − 1—which is to say, to r < n. But the shrink, done by reversible column operations, is itself reversible, by which § 12.5.3 binds the rank of the original, m × n matrix C likewise to r < n. The matrix C, one of whose columns is a linear combination of the others, is necessarily degenerate for this reason.

Now consider a tall matrix A with the same m × n dimensionality, but with a full n independent columns. The transpose AT of such a matrix has a full n independent rows. One of the conclusions of § 12.4 was that a matrix of independent rows always has rank equal to the number of rows. Since AT is such a matrix, its rank is a full r = n. But formally, what this says is that there exist operators BT< and BT> such that In = BT< AT BT>, the transpose of which equation is B> A B< = In—which in turn says that not only AT, but also A itself, has full rank r = n.

Parallel reasoning rules the rows of broad matrices, m ≤ n, of course. To square matrices, m = n, both lines of reasoning apply.

Gathering findings, we have that

• a tall m × n matrix, m ≥ n, has full rank if and only if its columns are linearly independent;
• a broad m × n matrix, m ≤ n, has full rank if and only if its rows are linearly independent;
• a square n × n matrix, m = n, has full rank if and only if its columns and/or its rows are linearly independent; and
• a square matrix has both independent columns and independent rows, or neither.

To say that a matrix has full column rank is to say that it is tall or square and has full rank r = n ≤ m. To say that a matrix has full row rank is to say that it is broad or square and has full rank r = m ≤ n. Only a square matrix can have full column rank and full row rank at the same time.

12.5.5 Underdetermined and overdetermined linear systems (introduction)

The last paragraph of § 12.5.4 provokes yet further terminology. A linear system Ax = b is underdetermined if A lacks full column rank—that is, if r < n—because inasmuch as some of A's columns then depend linearly on the others, such a system maps multiple n-element vectors x to the same m-element vector b, meaning that knowledge of b does not suffice to determine x uniquely.

Complementarily, a linear system Ax = b is overdetermined if A lacks full row rank—that is, if r < m. If A lacks both, then the system is paradoxically both underdetermined and overdetermined and is thereby degenerate. If A happily has both, then the system is exactly determined. Section 13.2 solves the exactly determined linear system. Section 13.4 solves the nonoverdetermined linear system. Section 13.6 analyzes the unsolvable overdetermined linear system among others. Further generalities await Ch. 13; but, regarding the overdetermined system specifically, the present subsection would observe at least the few following facts.
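The classification above can be mechanized. In this sketch (the function names are ours, not the book's), a system's matrix is classified by comparing its rank r against m and n:

```python
# Sketch (ours): classify the system Ax = b by comparing the rank r of the
# m x n matrix A against m and n, per the section's definitions.
from fractions import Fraction

def rank(M):
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, m) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(m):
            if i != r and A[i][col] != 0:
                f = A[i][col] / A[r][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def classify(M):
    m, n, r = len(M), len(M[0]), rank(M)
    under = r < n              # lacks full column rank
    over = r < m               # lacks full row rank
    if under and over:
        return "degenerate"            # both at once
    if under:
        return "underdetermined"
    if over:
        return "overdetermined"
    return "exactly determined"        # r = m = n

assert classify([[1, 0], [0, 1]]) == "exactly determined"
assert classify([[1, 2, 3], [4, 5, 6]]) == "underdetermined"   # broad, r = 2 < n
assert classify([[1, 0], [0, 1], [1, 1]]) == "overdetermined"  # tall, r = 2 < m
assert classify([[1, 2], [2, 4]]) == "degenerate"              # square, r = 1
```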
An overdetermined linear system Ax = b cannot have a solution for every possible m-element driving vector b. The truth of this claim can be seen by decomposing the system's matrix A by Gauss-Jordan and then left-multiplying the decomposed system by G>⁻¹ to reach the form

    Ir G< x = G>⁻¹ b.

If the m-element vector c ≡ G>⁻¹ b, then Ir G< x = c, which is impossible unless the last m − r elements of c happen to be zero. But since G> is invertible, each b corresponds to a unique c and vice versa; so, if b is an unrestricted m-element vector then so also is c, which verifies the claim.

Complementarily, a nonoverdetermined linear system Ax = b does have a solution for every possible m-element driving vector b. This is so because in this case the last m − r elements of c do happen to be zero; or, better stated, because c in this case has no nonzeros among its last m − r elements, because it has no last m − r elements, for the trivial reason that r = m.

It is an analytical error, and an easy one innocently to commit, to require that

    Ax = b

for unrestricted b when A lacks full row rank. The error is easy to commit because the equation looks right, because such an equation is indeed valid over a broad domain of b and might very well have been written correctly in that context, only not in the context of unrestricted b. Analysis including such an error can lead to subtly absurd conclusions. It is never such an analytical error however to require that

    Ax = 0

because, whatever other solutions such a system might have, it has at least the solution x = 0.

12.5.6 The full-rank factorization

One sometimes finds dimension-limited matrices of less than full rank inconvenient to handle. However, every dimension-limited, m × n matrix of rank r can be expressed as the product of two full-rank matrices, one m × r and the other r × n, both also of rank r:

    A = BC.        (12.16)

The truncated Gauss-Jordan (12.7) constitutes one such full-rank factorization: B = Im G> Ir, C = Ir G< In, good for any matrix. Other full-rank factorizations are possible, including among others the truncated Gram-Schmidt (13.54). The full-rank factorization is not unique.¹⁶

Of course, if an m × n matrix already has full rank r = m or r = n, then the full-rank factorization is trivial: A = Im A or A = A In.

Section 13.6.4 uses the full-rank factorization.

¹⁶ [2, § 3.6][37, "Moore-Penrose generalized inverse"]

12.5.7 Full column rank and the Gauss-Jordan factors K and S

The Gauss-Jordan decomposition (12.2),

    A = P D L U Ir K S,

of a tall or square m × n matrix A of full column rank r = n ≤ m always finds the factor K = I; moreover, the decomposition theoretically needs no K or S for such a matrix, regardless of the pivots one chooses during the Gauss-Jordan algorithm's step 3.

That K = I is seen by the algorithm's step 12, which creates K. Step 12 nulls the spare columns q > r that dress Ĩ's right, but in this case Ĩ has only r columns and therefore has no spare columns to null. Hence step 12 does nothing and K = I.

That S = I comes immediately of choosing q = i for pivot column during each iterative instance of the algorithm's step 3. But, one must ask, can one choose so? What if column q = i were unusable? That is, what if the only nonzero elements remaining in Ĩ's ith column stood above the main diagonal, unavailable for step 4 to bring to pivot? Well, were it so, then one would indeed have to choose q ≠ i to swap the unusable column away rightward, but see: nothing in the algorithm later fills such a column's zeros with anything else—they remain zeros—so swapping the column away rightward could only delay the crisis. The column would remain unusable. Eventually the column would reappear on pivot when no usable column rightward remained available to swap it with, which contrary to our assumption would mean precisely that r < n. Such contradiction can only imply that if r = n then no unusable column can ever appear. One need not swap. We conclude that though one might voluntarily choose q ≠ i during the algorithm's step 3, the algorithm cannot force one to do so if r = n. If one happens always to choose q = i as pivot column, as the full-column-rank matrix A evidently leaves one free to do, then not only K = I but S = I, too.

Theoretically, the Gauss-Jordan decomposition (12.2) includes the factors K and S precisely to handle matrices with more columns than rank. Matrices of full column rank r = n, common in applications, by definition have no such problem. Therefore, the Gauss-Jordan decomposition theoretically needs no K or S for such matrices, which fact lets us abbreviate the decomposition for such matrices to read

    A = P D L U In,        (12.17)

and, in (12.17)'s complementary form,

    U⁻¹ L⁻¹ D⁻¹ P⁻¹ A = In.        (12.18)

Observe however that just because one theoretically can set S = I does not mean that one actually should. The column permutor S exists to be used, after all—especially numerically to avoid small pivots during early invocations of the algorithm's step 5. Equation (12.17) is not mandatory but optional for a matrix A of full column rank (though still r = n and thus K = I for such a matrix, even when the unabbreviated eqn. 12.2 is used). There are however times when it is nice to know that one theoretically could, if doing exact arithmetic, set S = I if one wanted to.

Since P D L U acts as a row operator, (12.17) implies that each row of the full-rank matrix A lies in the space the rows of In address. This is obvious and boring, but interesting is the converse implication of (12.17)'s complementary form (12.18), that each row of In lies in the space the rows of A address. The rows of In and the rows of A evidently address the same space. One can moreover say the same of A's columns, since B = AT has full rank just as A does. In the whole, if a matrix A is square and has full rank r = n, then A's columns together, A's rows together, In's columns together and In's rows together each address the same, complete n-dimensional space.

12.5.8 The significance of rank uniqueness

The result of § 12.5.3, that matrix rank is unique, is an extremely important matrix theorem. It constitutes the chapter's chief result, which we have spent so many pages to attain. Without this theorem, the very concept of matrix rank must remain in doubt, along with all that attends to the concept. The theorem is the rock upon which the general theory of the matrix is built.

The concept underlying the theorem promotes the useful sensibility that a matrix's rank, much more than its mere dimensionality or the extent of its active region, represents the matrix's true size. Dimensionality can deceive, after all. For example, the honest 2 × 2 matrix

    [ 5  1 ]
    [ 3  6 ]

has two independent rows or, alternately, two independent columns, hence rank r = 2. One can easily construct a phony 3 × 3 matrix from the honest 2 × 2, simply by applying some 3 × 3 row and column elementaries:

    T(2/3)[32] [ 5  1 ] T1[13] T1[23]  =  [ 5  1  6 ]
               [ 3  6 ]                   [ 3  6  9 ]
                                          [ 2  4  6 ]

The 3 × 3 matrix on the equation's right is the one we met at the head of the section. It looks like a rank-three matrix, but really has only two independent columns and two independent rows. Its true rank is r = 2. We have here caught a matrix impostor pretending to be bigger than it really is.

An applied mathematician with some matrix experience actually probably recognizes this particular 3 × 3 matrix as a fraud on sight, but it is a very simple example. No one can just look at some arbitrary matrix and instantly perceive its true rank. Consider for instance the 5 × 5 matrix the section displays at this point in hexadecimal notation (its entries do not survive this extraction legibly). It looks like a rank-five matrix,¹⁷ but of course there is nothing mathematically improper or illegal about a matrix of less than full rank, so long as the true rank is correctly recognized. Now, admittedly, adjectives like "honest" and "phony," terms like "impostor," are a bit hyperbolic. The last paragraph has used them to convey the subjective sense of the matter.

When one models a physical phenomenon by a set of equations, one sometimes is dismayed to discover that one of the equations, thought to be independent, is really just a useless combination of the others. This can happen in matrix work, too. The rank of a matrix helps one to recognize how many truly independent vectors, dimensions or equations one actually has available to work with, rather than how many seem available at first glance. That is the sense of matrix rank.

¹⁷ As the reader can verify by the Gauss-Jordan algorithm, the matrix's rank is not r = 5 but r = 4.
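One can verify the impostor numerically. The sketch below (ours, not the book's; it takes the honest 2 × 2 to be [[5, 1], [3, 6]]) rebuilds the phony 3 × 3 by the same elementary operations, encoded here as explicit matrices, and confirms that its rank is only 2:

```python
# Sketch (ours): the "phony" 3 x 3 matrix built from the honest 2 x 2 by
# elementary row and column operations has rank only 2.
from fractions import Fraction

def rank(M):
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, m) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(m):
            if i != r and A[i][col] != 0:
                f = A[i][col] / A[r][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Honest 2 x 2, embedded in a 3 x 3 frame of zeros.
A2 = [[5, 1, 0], [3, 6, 0], [0, 0, 0]]
T1_13 = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]   # right-mult: column 3 += column 1
T1_23 = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]   # right-mult: column 3 += column 2
T23_32 = [[1, 0, 0], [0, 1, 0],
          [0, Fraction(2, 3), 1]]           # left-mult: row 3 += (2/3) row 2

phony = matmul(T23_32, matmul(matmul(A2, T1_13), T1_23))
assert phony == [[5, 1, 6], [3, 6, 9], [2, 4, 6]]
assert rank(phony) == 2     # looks 3 x 3, but only two independent rows
```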
318
CHAPTER 12. RANK AND THE GAUSS-JORDAN
.
Chapter 13

Inversion and orthonormalization

The undeniably tedious Chs. 11 and 12 have piled the matrix theory deep while affording scant practical reward. Building upon the two tedious chapters, this chapter brings the first rewarding matrix work.

One might be forgiven for forgetting after so many pages of abstract theory that the matrix afforded any reward or had any use at all. Sections 11.1.1 and 12.5.5 have already broached¹ the matrix's most basic use, the primary subject of this chapter: to represent a system of m linear scalar equations in n unknowns neatly as Ax = b and to solve the whole system at once by inverting the matrix A that characterizes it. Now, before we go on, we want to confess that such a use alone, on the surface of it—though interesting—might not have justified the whole uncomfortable bulk of Chs. 11 and 12. We already knew how to solve a simultaneous system of linear scalar equations in principle without recourse to the formality of a matrix, as in the last step to derive (3.9) as far back as Ch. 3. Why should we have suffered two bulky chapters, if only to prepare to do here something we already knew how to do? The question is a fair one, but admits at least four answers. First, the matrix neatly solves a linear system not only for a particular driving vector b but for all possible driving vectors b at one stroke, as this chapter explains. Second and yet more impressively, the matrix allows § 13.6 to introduce the pseudoinverse to approximate the solution to an unsolvable linear system and, moreover, to do so both optimally and efficiently, whereas such overdetermined systems arise commonly in applications. Third, to solve the linear system neatly is only the primary and most straightforward use of the matrix, not its only use: the even more interesting eigenvalue and its incidents await Ch. 14. Fourth, specific applications aside, one should never underestimate the blunt practical benefit of reducing an arbitrarily large grid of scalars to a single symbol A, which one can then manipulate by known algebraic rules. Most students first learning the matrix have wondered at this stage whether it were worth all the tedium; if the reader now wonders, then he stands in good company.

The chapter opens in § 13.1 by inverting the square matrix to solve the exactly determined, n × n linear system in § 13.2. It continues in § 13.3 by computing the rectangular matrix's kernel to solve the nonoverdetermined, m × n linear system in § 13.4. In § 13.6, it brings forth the aforementioned pseudoinverse, which rightly approximates the solution to the unsolvable overdetermined linear system. After briefly revisiting the Newton-Raphson iteration in § 13.7, it concludes by introducing the concept and practice of vector orthonormalization in §§ 13.8 through 13.11. The matrix finally begins to show its worth here.

¹ The reader who has skipped Ch. 12 might at least review § 12.5.
Inverting the square matrix
Consider an n×n square matrix A of full rank r = n.4. it concludes by introducing the concept and practice of vector orthonormalization in §§ 13. G< . it brings forth the aforementioned pseudoinverse.6. The matrix finally begins to show its worth here.2. m × n linear system in § 13. n × n linear system in § 13. whereas such overdetermined systems arise commonly in applications. The chapter opens in § 13. so. if the reader now wonders. moreover. Third. specific applications aside. INVERSION AND ORTHONORMALIZATION
explains.7.6 to introduce the pseudoinverse to approximate the solution to an unsolvable linear system and. not its only use: the even more interesting eigenvalue and its incidents await Ch.3 by computing the rectangular matrix's kernel to solve the nonoverdetermined. one should never underestimate the blunt practical benefit of reducing an arbitrarily large grid of scalars to a single symbol A. which rightly approximates the solution to the unsolvable overdetermined linear system.11.
13. Second and yet more impressively.320
CHAPTER 13. Most students first learning the matrix have wondered at this stage whether it were worth all the tedium.1 by inverting the square matrix to solve the exactly determined. Suppose that extended operators G> . which one can then manipulate by known algebraic rules. each with an n × n active > <
. In § 13.
The definitions do however necessarily.31) that In A −1 −1 In G< G> = = A G−1 In G−1 < > = AIn .13. = In .
2 The symbology and associated terminology might disorient a reader who had skipped Chs. None of this is complicated. = G−1 G−1 In .49. In this book. because all matrices are theoretically ∞ × ∞. which means that the operators affect respectively only the first m rows or n columns of the thing they operate on. but more often it does not—which naturally is why the other books tend to forbid such multiplication). the symbol I theoretically represents an ∞ × ∞ identity matrix. 13. INVERTING THE SQUARE MATRIX region (§ 11. the matrix A of the m × n system Ax = b has nonzero content only within the m × n rectangle. really. though it too can be viewed as an ∞ × ∞ matrix with zeroes in the unused regions. as in eqns. anyway (though whether it makes any sense in a given circumstance to multiply mismatched matrices is another question. > > G−1 G< = I = G< G−1 .1. If interpreted as an ∞ × ∞ matrix.3. one can legally multiply any two matrices. In A −1 −1 G< G> In A −1 (G< In G−1 )(A) > = G> G< In . Observing from (11. such that2 G−1 G> = I = G> G−1 .
. especially § 11. Its purpose is merely to separate the essential features of a reversible operation like G> or G< from the dimensionality of the vector or matrix on which the operation happens to operate. In this book. 11 and 12.2). Outside the m × m or n × n square. (In the present section it happens that m = n because the matrix A of interest is square.24 and 14.3. the operators G> and G< each resemble the ∞ × ∞ identity matrix I. < >
321
(13. = In . the reader might briefly review the earlier chapters. but this footnote uses both symbols because generally m = n. < < A = G> In G< . slightly diverge from definitions the reader may have been used to seeing in other books.1)
we find by successive steps that A = G> In G< .) The symbol Ir contrarily represents an identity matrix of only r ones. sometimes it does make sense. To the extent that the definitions confuse.
< > (13. not I. The rank-n inverse exists. but the rank-n inverse from (11. That is.2) = In G> G< .2) include the following. (12.2)." because the rank is implied by the size of the square active region of the matrix inverted. • If B = A−1 then B −1 = A. A is itself the rank-n inverse of A−1 . reciprocal pair. INVERSION AND ORTHONORMALIZATION
or alternately that A = G> In G< . unique inverse A−1 .2) features the important matrix A−1 . its columns are linearly independent. = In . • Like A.5. G< .4). According > < to (13. which might seem a practical hurdle. No other rank-n inverse of A exists.2). AIn −1 −1 AIn G< G> (A)(G−1 In G−1 ) < > Either way.49) is not quite the infinitedimensional inverse from (11. • On the other hand. When naming the rank-n inverse in words one usually says simply. there is no trouble here. the product of A−1 and A—or. written more fully. but we have defined it in (11. inasmuch as A is square and has full rank.3) and the body of § 12. "the inverse. be known > < and honor n × n active regions. separately.
Of course.
. so. we have that A−1 A = In = AA−1 . and indeed we know how to find them. G< .3 have shown exactly how to find just such a G> . However. A−1 ≡ G−1 In G−1 .45). it does per (13. The factors do exist. G> . Properties that emerge from (13. G−1 and G−1 for any square matrix A of full rank. the rank-n inverse of A.49). for this to work. the rank-n inverse A−1 (more fully written A−1(n) ) too is an n × n square matrix of full rank r = n. which is what G−1 and G−1 are. so it has only the one. Equation (13. the product of A−1(n) and A—is. but In . = In . We have not yet much studied the rank-n inverse.322
CHAPTER 13. nonstandard notation A−1(n) . its rows and. • Since A is square and has full rank (§ 12. G−1 and G−1 must exist. (12.2) indeed have an inverse A−1 . > < without exception. where we gave it the fuller. The matrices A and A−1 thus form an exclusive.
Mathematical convention owns a special name for a square matrix which is degenerate and thus has no inverse.13. • Only a square. From this beginning and the fact that In = AA−1 .
. it is hardly the only means.3. A−1 is unique. which has no reciprocal.2).4's observation that the columns (like the rows) of A are linearly independent because A is square and has full rank. n × n matrix of full rank r = n has a rank-n inverse. What are unique are not the factors but the A and A−1 they produce. A matrix A′ which is not square. so in the sense of (13. incidentally. inasmuch as the Gauss-Jordan decomposition plus (13.1). B = A−1 . it follows that [A−1 ]∗1 represents3 the one and only possible combination of A's columns which achieves e1 . and any proper G> and G< found by any means will serve so long as they satisfy (13. that [A−1 ]∗2 represents the one and only possible combination of A's columns which achieves e2 .5. and so on through en . or whose rank falls short of a full r = n. of rank r < n? Rank promotion is impossible as §§ 12. One could observe likewise respecting the independent rows of A.2) reliably calculate it.5.2)—that BA = In or that AB = In —much less both. It is not claimed that the matrix factors G> and G< themselves are unique. What of the degenerate n × n square matrix A′ . One can have neither equality without the other. Indeed. then A′−1 would by definition represent a row or column operation which impossibly promoted A′ to the full rank r = n of In .3's finding that reversible operations like G−1 and G−1 cannot change In 's rank. it calls it a singular matrix." Refer to § 11. thus.2) such a matrix has no inverse. On the contrary.2) of A−1 plus § 12. Either way. then both equalities in fact hold. in that it has no inverse such a degenerate matrix closely resembles the scalar 0.2 and 12. That A−1 is an n × n square matrix of full rank and that A is itself the inverse of A−1 proceed from the definition (13. INVERTING THE SQUARE MATRIX
323
• If B is an n × n square matrix and either BA = In or AB = In . > < That the inverse exists is plain. Though the GaussJordan decomposition is a convenient means to G> and G< . many different pairs of matrix factors G> and G< can yield A = G> In G< .3 have shown.5. no other n × n matrix B = A−1 satisfies either requirement of (13.1.1. is not invertible in the rank-n sense of (13.5.
3
The notation [A−1 ]∗j means "the jth column of A−1 . no less than that many different pairs of scalar factors γ> and γ< can yield α = γ> 1γ< . for. That the inverse is unique begins from § 12. Moreover. if it had.
See also §§ 12. (13.4) with less effort.2
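A sketch of (13.2) in code (ours, not the book's): compute A⁻¹ by Gauss-Jordan reduction of the augmented matrix [A | In] in exact arithmetic, then verify the defining property A⁻¹A = In = AA⁻¹:

```python
# Sketch (ours): rank-n inverse of a full-rank square matrix by Gauss-Jordan
# reduction of [A | I] in exact rational arithmetic.
from fractions import Fraction

def inverse(M):
    n = len(M)
    # Augment A with the n x n identity.
    A = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = next(i for i in range(col, n) if A[i][col] != 0)  # full rank assumed
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [x / d for x in A[col]]           # normalize the pivot row
        for i in range(n):
            if i != col:                           # clear the rest of the column
                f = A[i][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[col])]
    return [row[n:] for row in A]                  # right half is A^-1

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[5, 1], [3, 6]]
Ainv = inverse(A)
I2 = [[1, 0], [0, 1]]
assert matmul(Ainv, A) == I2       # A^-1 A = I_n
assert matmul(A, Ainv) == I2       # A A^-1 = I_n
```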
13.2 The exactly determined linear system

Section 11.1.1 has shown how the single matrix equation

    Ax = b        (13.3)

concisely represents an entire simultaneous system of linear scalar equations. If the system has n scalar equations and n scalar unknowns, then the matrix A has square, n × n dimensionality. Furthermore, if the n scalar equations are independent of one another, then the rows of A are similarly independent, which gives A full rank and makes it invertible. Under these conditions, one can solve (13.3) and the corresponding system of linear scalar equations by left-multiplying (13.3) by the A⁻¹ of (13.2) and (13.1) to reach the famous formula

    x = A⁻¹ b.        (13.4)

Inverting the square matrix A of scalar coefficients, (13.4) concisely solves a simultaneous system of n linear scalar equations in n scalar unknowns. It is the classic motivational result of matrix theory.⁴

It has taken the book two long chapters to reach (13.4), but (13.4) is only the first fruit of the effort: we shall soon meet additional interesting applications of the matrix which in any case require the theoretical ground to have been prepared.

Where the inverse does not exist, where the square matrix A is singular, the rows of the matrix are linearly dependent, meaning that the corresponding system actually contains fewer than n useful scalar equations. Depending on the value of the driving vector b, the superfluous equations either merely reproduce or flatly contradict information the other equations already supply. Either way, no unique solution to a linear system described by a singular square matrix is possible—though a good approximate solution is given by the pseudoinverse of § 13.6. In the language of § 12.5.5, the singular square matrix characterizes a system that is both underdetermined and overdetermined, thus degenerate.

⁴ For motivational reasons, tutorial, introductory linear algebra textbooks like [23] and [31] rightly yet invariably invert the general square matrix of full rank much earlier, reaching (13.4) with less effort, instead learning the less elegant but easier to grasp "row echelon form" of "Gaussian elimination" [23, Ch. 1][31, § 1.2]—which makes good matrix-arithmetic drill but leaves the student imperfectly prepared when the time comes to study kernels and eigensolutions or to read and write matrix-handling computer code. What the tutorials do is pedagogically necessary—it is how the writer first learned the matrix and probably how the reader first learned it, too—but it is appropriate to a tutorial, not to a study reference like this book. In this book, where derivations prevail, the proper place to invert the general square matrix of full rank is here. The deferred price the student pays for the simpler-seeming approach of the tutorials is twofold. First, the student fails to develop the Gauss-Jordan decomposition properly. Second, in the long run the tutorials save no effort, because the student still must at some point develop the theory underlying matrix rank and supporting each of the several coincident properties of § 14.2.
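The famous formula invites a one-example check. This sketch (ours, not the book's) solves a small exactly determined system by exact elimination, which is equivalent to forming x = A⁻¹b:

```python
# Sketch (ours): solve an exactly determined system Ax = b, per (13.4),
# by exact Gaussian elimination on the augmented matrix [A | b].
from fractions import Fraction

def solve(M, b):
    n = len(M)
    A = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(M, b)]
    for col in range(n):
        piv = next(i for i in range(col, n) if A[i][col] != 0)  # full rank assumed
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [x / d for x in A[col]]
        for i in range(n):
            if i != col:
                f = A[i][col]
                A[i] = [a - c * f for a, c in zip(A[i], A[col])]
    return [row[n] for row in A]

# 5x + y = 11 and 3x + 6y = 12 give x = 2, y = 1.
x = solve([[5, 1], [3, 6]], [11, 12])
assert x == [2, 1]
```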
13.3 The kernel
If a matrix A has full column rank (§ 12.5.5), then the columns of A are linearly independent and

    Ax = 0        (13.5)

is impossible if In x ≠ 0. If the matrix however lacks full column rank, then (13.5) is possible even if In x ≠ 0. In either case, any n-element x (including x = 0) that satisfies (13.5) belongs to the kernel of A.⁵

Let A be an m × n matrix of rank r. A second matrix, AK, minimally represents the kernel of A if and only if

• AK has n × (n − r) dimensionality (which gives AK tall rectangular form unless r = 0),
• AK has full rank n − r (that is, the columns of AK are linearly independent, which gives AK full column rank), and
• AK satisfies the equation

    A AK = 0.        (13.6)

The n − r independent columns of the kernel matrix AK address the complete space x = AK a of vectors in the kernel, where the (n − r)-element vector a can have any value. In symbols,

    Ax = A(AK a) = (A AK) a = 0.

The definition does not pretend that the kernel matrix AK is unique. Except when A has full column rank the kernel matrix is not unique; there are infinitely many kernel matrices AK to choose from for a given matrix A. What is unique is not the kernel matrix but rather the space its columns address, and it is the latter space rather than AK as such that is technically the kernel (if you forget and call AK "a kernel," though, you'll be all right).

The Gauss-Jordan kernel formula⁶

    AK = S⁻¹ K⁻¹ Hr In−r = G<⁻¹ Hr In−r        (13.7)

gives a complete kernel AK of A, where S⁻¹, K⁻¹ and G<⁻¹ are the factors their respective symbols indicate of the Gauss-Jordan decomposition's complementary form (12.3) and Hr is the shift operator of § 11.9. Section 13.3.1 derives the formula.

⁵ The conventional mathematical notation for the kernel of A is ker{A}, null{A} or something nearly resembling one of the two—the notation seems to vary from editor to editor—which technically represent the kernel space itself, as opposed to the notation AK which represents a matrix whose columns address the kernel space. This book deëmphasizes the distinction and prefers the kernel matrix notation AK. If we were really precise, we might write not AK but AK(n) to match the A⁻¹⁽ʳ⁾ of (11.49).

⁶ The name Gauss-Jordan kernel formula is not standard as far as the writer is aware, but we would like a name for (13.7). This name seems as fitting as any.
CHAPTER 13. but we would like a name for (13. where b = 0 or r = m. The definition does not pretend that the kernel matrix AK is unique.and n-element vectors and A is an m × n matrix of rank r.1
The Gauss-Jordan kernel formula
To derive (13. > Ir (K − I)Sx + Ir Sx = G−1 b. It begins from the statement of the linear system Ax = b.
9) is interesting. we have (probably after some trial and error not recorded here) planned the steps leading to (13. which is the remaining n − r elements.9) implies that one can choose the last n − r elements of Sx freely. THE KERNEL Applying an identity from Table 12. b) = f (a. > Rearranging terms. It has Sx on both sides. b) + Hr a. This makes f and thereby also x functions of the free parameter a and the driving vector b: f (a. and on the right only (In − Ir )Sx. The equation has however on the left only Ir Sx.3. The flexibility to reassociate operators in such a way is one of many good reasons Chs. b) a = f (a.
.10)
a ≡ H−r (In − Ir )Sx. The implication is significant. (13.9) in the improved form f = G−1 b − Ir KHr a. then f (a. Ir Sx = G−1 b − Ir K(In − Ir )Sx. Sx(a. 0) = −Ir KHr a.
If b = 0 as (13. 0) a
= f (a. Equation (13.11)
f ≡ Ir Sx. where a represents the n − r free elements of Sx and f represents the r dependent elements. (13. b) = G−1 b − Ir KHr a. where Sx is the vector x with elements reordered in some particular way. Naturally this is no accident. 11 and 12 have gone to such considerable trouble to develop the basic theory of the matrix.9)
Equation (13. 0) =
7
f (a. but that the choice then determines the first r elements. To express the implication more clearly we can rewrite (13. > Sx(a. Ir K(In − Ir )Sx + Ir Sx = G−1 b.13.7 No element of Sx appears on both sides. > Sx = f a = f + Hr a.5) requires.9) to achieve precisely this effect.
Notice how we now associate the factor (In − Ir ) rightward as a row truncator. >
327
(13. which is the first r elements of Sx.2 on page 300. though it had first entered acting leftward as a column truncator. 0) + Hr a.
14)
The alternate kernel formula (13. this seems trivial. 0) = (I − Ir K)Hr In−r .15)
But if all the ej at once, in aggregate, exactly address the domain of a, then the columns of x(In−r, 0) likewise exactly address the range of x(a, 0). Equation (13.6) has already named this range AK, by which8

    SAK = (I − Ir K)Hr In−r.

By the identity (11.76), (In − Ir)Hr = Hr In−r, so

    SAK = (I − Ir K)(In − Ir)Hr
        = [(In − Ir) − Ir K(In − Ir)]Hr
        = [(In − Ir) − (K − I)]Hr,    (13.15)

where we have used Table 12.2 again in the last step. Left-multiplying by S−1 = S∗ = ST would already produce a kernel formula, AK = S−1(I − Ir K)Hr In−r, which is correct but not as simple as it could be. How to proceed symbolically from (13.15) is not obvious, but if one sketches the matrices of (13.15) schematically with a pencil, and if one remembers that K−1 is just K with elements off the main diagonal negated, then it appears that

    SAK = K−1 Hr In−r.    (13.16)

The appearance is not entirely convincing,9 but (13.16) though unproven still helps because it posits a hypothesis toward which to target the analysis.

Two variations on the identities of Table 12.2 also help. First, from the identity that K + K−1 = 2I, we have that

    K − I = I − K−1.    (13.17)

Second, right-multiplying by Ir the identity that Ir K−1(In − Ir) = K−1 − I and canceling terms, we have that

    K−1 Ir = Ir    (13.18)

(which actually is pretty obvious if you think about it, since all of K's interesting content lies by construction right of its rth column). Now we have enough to go on with. Substituting (13.17) and (13.18) into (13.15) yields

    SAK = [(In − K−1 Ir) − (I − K−1)]Hr.

Adding 0 = K−1 In Hr − K−1 In Hr and rearranging terms,

    SAK = K−1(In − Ir)Hr + [K−1 − K−1 In − I + In]Hr.

Factoring,

    SAK = K−1(In − Ir)Hr + [(K−1 − I)(I − In)]Hr.

The quantity in square brackets is zero (we know this because, per the construction of K in Ch. 12, K−1 − I has no content right of its nth column, which is all the factor I − In would retain), so

    SAK = K−1(In − Ir)Hr = K−1 Hr In−r,

where the identity (11.76) is used again in the last step. This proves (13.16). The final step is to left-multiply (13.16) by S−1 = S∗ = ST, reaching (13.7), which was to be derived.

The Gauss-Jordan kernel formula is a significant achievement, worth pausing briefly to appreciate. One would like to feel sure that the columns of (13.7)'s AK actually addressed the whole kernel space of A rather than only part, and that AK had no redundant columns — that is, that it had full rank. In general such features would be hard to establish, but here the factors conveniently are Gauss-Jordan factors. Regarding the whole kernel space, AK addresses it because AK comes from all a. Regarding redundancy, AK lacks it because SAK lacks it, and SAK lacks it because according to (13.16) the last rows of SAK are those of Hr In−r.

8 These are difficult steps. How does one justify replacing a by ej, then ej by In−r, then x by AK? One justifies them in that the columns of In−r are the several ej, of which any (n − r)-element vector a can be constructed as the linear combination

    a = In−r a = [ e1 e2 e3 · · · en−r ] a = Σ_{j=1}^{n−r} aj ej,

weighted by the elements of a. Seen from one perspective, this seems trivial; seen from another perspective, baffling; until one grasps what is really going on here. The idea is that if we can solve the problem for each elementary vector ej — that is, if we can solve the problem for the identity matrix In−r — then we shall implicitly have solved it for every a, because a is a weighted combination of the ej and the whole problem is linear. The solution x = AK a for a given choice of a becomes a weighted combination of the solutions for each ej, with the elements of a again as the weights. And what are the solutions for each ej? Answer: the corresponding columns of AK, which by definition are the independent values of x that cause b = 0.
9 Well, actually, the appearance pretty much is entirely convincing, but let us finish the proof symbolically nonetheless.
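The kernel formula above is exact and symbolic. Numerically, one more often extracts an orthonormal kernel basis from the singular-value decomposition rather than from Gauss-Jordan factors; the following sketch (the test matrix and tolerance are illustrative assumptions, not from the text) confirms the defining property A AK = 0:

```python
import numpy as np

def kernel_basis(A, tol=1e-12):
    """Return an orthonormal basis A_K for the kernel (null space) of A,
    so that A @ A_K = 0, using the singular-value decomposition."""
    A = np.asarray(A, dtype=float)
    _, s, Vt = np.linalg.svd(A)
    # Numerical rank r: count singular values above a relative tolerance.
    r = int(np.sum(s > tol * (s[0] if s.size else 1.0)))
    return Vt[r:].T  # the last n - r right singular vectors span the kernel

# A 2x3 matrix of rank r = 2 leaves an (n - r) = 1-dimensional kernel.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
AK = kernel_basis(A)
```

Any weighted combination AK a of the returned columns then satisfies A(AK a) = 0, mirroring the section's x = AK a.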
13.3.2  Converting between kernel matrices

If C is a reversible (n − r) × (n − r) operator by which we right-multiply (13.6), then the matrix

    A′K = AK C    (13.19)

like AK evidently represents the kernel of A:

    AA′K = A(AK C) = (AAK)C = 0.

Indeed this makes sense: because the columns of AK C address the same space the columns of AK address, the two matrices necessarily represent the same underlying kernel. Moreover, some C exists to convert AK into every alternate kernel matrix A′K of A — one can replace the columns of AK with those of A′K, reversibly, one column at a time, without altering the space addressed. (It might not be possible to replace the columns in sequence, but if out of sequence then a reversible permutation at the end corrects the order.) The orthonormalizing column operator R−1 of (13.52) below incidentally tends to make a good choice for C.
The degree of freedom
A slightly vague but extraordinarily useful concept has emerged in this section. The concept is the concept of the degree of freedom.

A degree of freedom is a parameter one remains free to determine within some continuous domain. For example, Napoleon's artillerist10 might have enjoyed as many as six degrees of freedom in firing a cannonball: two in where he chose to set up his cannon (one degree in north-south position, one in east-west), two in aim (azimuth and elevation), one in muzzle velocity (as governed by the quantity of gunpowder used to propel the ball), and one in time. A seventh potential degree of freedom, the height from which the artillerist fires, is of course restricted by the lay of the land: the artillerist can fire from a high place only if the place he has chosen to fire from happens to be up on a hill, for Napoleon had no flying cannon.

Some apparent degrees of freedom are not real. For example, muzzle velocity gives the artillerist little control firing elevation does not also give. Other degrees of freedom are nonlinear in effect: a certain firing elevation gives maximum range; nearer targets can be hit by firing either higher or lower at the artillerist's discretion.

Yet even among the six degrees of freedom, the artillerist might find some impractical to exercise. The artillerist probably preloads the cannon always with a standard charge of gunpowder because, when he finds his target in the field, he cannot spare the time to unload the cannon and alter the charge (besides which too much gunpowder might break the cannon): this costs one degree of freedom. Likewise, to shift to better ground the artillerist must limber up the cannon and hitch it to a horse, and for this too he cannot spare time in the heat of battle: this costs two degrees. And Napoleon might yell, "Fire!" canceling the time degree as well. Two degrees of freedom remain to the artillerist, but, since exactly two degrees are needed to hit some particular target on the battlefield, the two are enough.

Now consider what happens if the artillerist loses one of his last two remaining degrees of freedom. Maybe the cannon's carriage wheel is broken and the artillerist can no longer turn the cannon: he can still choose firing elevation but no longer azimuth. In such a strait, to hit some particular target on the battlefield, the artillerist needs somehow to recover another degree of freedom, for he needs two but has only one. If he disregards Napoleon's order, "Fire!" (maybe not a wise thing to do, but, anyway . . . ) and waits for the target to traverse the cannon's fixed line of fire, then he can still hope to hit even with the broken carriage wheel, for could he choose neither azimuth nor the moment to fire, then he would almost surely miss.

In geometry, a point brings no degree of freedom; a line brings a single degree of freedom; a plane brings two; and so on. If the line bends and turns like a mountain road, it still brings a single degree of freedom. And if the road reaches an intersection? Answer: still one degree. The driver on the mountain road cannot claim a second degree of freedom at the mountain intersection (he can indeed claim a choice, but the choice being discrete lacks the proper character of a degree of freedom — a degree of freedom has some continuous nature, not merely a discrete choice to turn left or right). He might plausibly claim a second degree of freedom upon reaching the city, however, where the web or grid of streets is dense enough to approximate access to any point on the city's surface. Just how many streets it takes to turn the driver's "line" experience into a "plane" experience is a matter for the mathematician's discretion. On the other hand, a swimmer in a swimming pool enjoys three degrees of freedom (up-down, north-south, east-west) even though his domain in any of the three is limited to the small volume of the pool.

All of this is hard to generalize in unambiguous mathematical terms, but the count of the degrees of freedom in a system is of high conceptual importance to the engineer nonetheless. Basically, the count captures the idea that to control n output variables of some system takes at least n independent input variables. Engineers of all kinds think in this way: an aeronautical engineer knows in advance that an airplane needs at least n ailerons, rudders and other control surfaces for the pilot adequately to control the airplane; an electrical engineer knows in advance that a circuit needs at least n potentiometers for the technician adequately to tune the circuit; and so on. The n may possibly for various reasons still not suffice — it might be wise in some cases to allow n + 1 or n + 2 — but in no event will fewer than n do.

Reviewing (13.11), we find n − r degrees of freedom in the general underdetermined linear system, represented by the n − r free elements of a. If the underdetermined system is not also overdetermined — if it is nondegenerate such that r = m — then it is guaranteed to have a family of solutions x. This family is the topic of the next section.

10 The author, who has never fired an artillery piece (unless an arrow from a Boy Scout bow counts), invites any real artillerist among the readership to write in to improve the example.

13.4  The nonoverdetermined linear system

The exactly determined linear system of § 13.2 is common, but also common is the more general, nonoverdetermined linear system

    Ax = b,    (13.20)
in which b is a known, m-element vector, x is an unknown, n-element vector, and A is a square or broad, m × n matrix of full row rank (§ 12.5.4):

    r = m ≤ n.    (13.21)

Except in the exactly determined edge case r = m = n of § 13.2, the nonoverdetermined linear system has no unique solution but rather a family of solutions. This section delineates the family.

13.4.1  Particular and homogeneous solutions

The nonoverdetermined linear system (13.20) by definition admits more than one solution x for a given driving vector b. Such a system is hard to solve all at once, however, so we prefer to split the system as

    Ax1 = b,
    A(AK a) = 0,
    x = x1 + AK a,    (13.22)

which, when the second line is added to the first and the third is substituted, makes the whole form (13.20). Splitting the system does not change it, but it does let us treat the system's first and second lines in (13.22) separately. In the split form, the symbol x1 represents any one n-element vector that happens to satisfy the form's first line — many are possible; the mathematician just picks one — and is called a particular solution of (13.20). The (n − r)-element vector a remains unspecified, whereupon AK a represents the complete family of n-element vectors that satisfy the form's second line. The family of vectors expressible as AK a is called the homogeneous solution of (13.20). Notice the italicized articles a and the.

The Gauss-Jordan kernel formula (13.7) has given us AK and thereby the homogeneous solution, which renders the analysis of (13.20) already half done. To complete the analysis, it remains in § 13.4.2 to find a particular solution.

13.4.2  A particular solution

Any particular solution will do. Equation (13.12) has that

    f(a, b) = G−1 b − Ir K Hr a,
    Sx1(a, b) = f(a, b) + Hr a,

where we are not free to choose the driving vector b, but the free parameter a can be anything we want. Why not a = 0? Then

    f(0, b) = G−1 b,
    Sx1(0, b) = f(0, b),

and accordingly

    x1 = S−1 G−1 b.    (13.23)
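The split into particular and homogeneous solutions can be exercised numerically. The text's particular solution x1 = S−1 G−1 b comes from the Gauss-Jordan factors; the sketch below instead takes the pseudoinverse's particular solution, which is equally valid since any particular solution will do (the example system is an assumption for illustration):

```python
import numpy as np

# A broad system Ax = b of full row rank r = m = 2, n = 3.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

x1 = np.linalg.pinv(A) @ b     # one particular solution: A x1 = b exactly
_, s, Vt = np.linalg.svd(A)
AK = Vt[2:].T                  # kernel matrix: A (AK a) = 0 for every a

# Any member x = x1 + AK a of the family solves the system.
a = np.array([7.0])            # an arbitrary choice of the free parameter
x = x1 + AK @ a
```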
13.4.3  The general solution

Assembling (13.7), (13.22) and (13.23) yields the general solution

    x = S−1(G−1 b + K−1 Hr In−r a)    (13.24)

to the nonoverdetermined linear system (13.20), where we have substituted the last line of (13.22) for x.

In exact arithmetic (13.24) solves the nonoverdetermined linear system in theory exactly. Of course, practical calculations are usually done in limited precision, in which compounded rounding error in the last bit eventually disrupts (13.24) for matrices larger than some moderately large size. Avoiding unduly small pivots early in the Gauss-Jordan extends (13.24)'s reach to larger matrices, and for yet larger matrices a bewildering variety of more sophisticated techniques exists to mitigate the problem, which can be vexing because the problem arises even when the matrix A is exactly known. Equation (13.24) is useful and correct, but one should at least be aware that it can in practice lose floating-point accuracy when the matrix it attacks grows too large. (It can also lose accuracy when the matrix's rows are almost dependent, but that is more the fault of the matrix than of the formula. See § 14.8, which addresses a related problem.)

13.5  The residual

Equations (13.2) and (13.4) solve the exactly determined linear system Ax = b. Equation (13.24) broadens the solution to include the nonoverdetermined linear system. None of those equations however can handle the overdetermined linear system, because for general b the overdetermined linear system

    Ax ≈ b    (13.25)

has no exact solution. (See § 12.5 for the definitions of underdetermined, overdetermined, etc.)

One is tempted to declare the overdetermined system uninteresting because it has no solution and to leave the matter there, but this would be a serious mistake. In fact the overdetermined system is especially interesting, and the more so because it arises so frequently in applications. One seldom trusts a minimal set of data for important measurements, yet extra data imply an overdetermined system. We need to develop the mathematics to handle the overdetermined system properly.

The quantity11,12

    r(x) ≡ Ax − b    (13.26)

measures how nearly some candidate solution x solves the system (13.25). We call this quantity the residual, and the smaller, the better. More precisely, the smaller the nonnegative real scalar

    [r(x)]∗[r(x)] = Σi |ri(x)|²,    (13.27)

called the squared residual norm, the more favorably we regard the candidate solution x.

11 Alas, the alphabet has only so many letters (see Appendix B). The r here is unrelated to matrix rank r.
12 Some books [49] prefer to define r(x) ≡ b − Ax, instead.
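The residual and its squared norm translate directly into code; a minimal sketch (the candidate x and the data are assumptions for illustration):

```python
import numpy as np

def residual(A, x, b):
    """r(x) = A x - b, the text's residual."""
    return A @ x - b

def squared_residual_norm(A, x, b):
    """[r(x)]* [r(x)] = sum_i |r_i(x)|^2, a nonnegative real scalar."""
    r = residual(A, x, b)
    return float(np.real(np.vdot(r, r)))   # vdot conjugates its first factor

# An overdetermined 3x2 system and one candidate solution:
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [3.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
x_candidate = np.array([0.5, 0.5])
```

The smaller the returned scalar, the more favorably the candidate is regarded; zero would mean an exact solution.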
The Moore-Penrose pseudoinverse and the least-squares problem
A typical problem is to fit a straight line to some data. For example, suppose that we are building-construction contractors with a unionized work force, whose labor union can supply additional, fully trained labor on demand. Suppose further that we are contracted to build a long freeway and have been adding workers to the job in recent weeks to speed construction. On Saturday morning at the end of the second week, we gather and plot the production data on the left of Fig. 13.1. If ui and bi respectively represent the number of workers and the length of freeway completed during week i, then we can fit a straight line b = σu + γ to the measured production data such that

    [ u1 1 ; u2 1 ][ σ ; γ ] = [ b1 ; b2 ],

inverting the matrix per §§ 13.1 and 13.2 to solve for x ≡ [σ γ]T, in the hope that the resulting line will predict future production accurately.

(Figure 13.1: Fitting a line to measured data. Each of the two panels plots the length newly completed during the week against the number of workers.)

That is all mathematically irreproachable. By the fifth Saturday however we shall have gathered more production data, plotted on the figure's right, to which we should like to fit a better line to predict production more accurately. The added data present a problem. Statistically, the added data are welcome, but geometrically we need only two points to specify a line; what are we to do with the other three? The five points together overdetermine the linear system

    [ u1 1 ; u2 1 ; u3 1 ; u4 1 ; u5 1 ][ σ ; γ ] = [ b1 ; b2 ; b3 ; b4 ; b5 ].

There is no way to draw a single straight line b = σu + γ exactly through all five, for in placing the line we enjoy only two degrees of freedom.13

The proper approach is to draw among the data points a single straight line that misses the points as narrowly as possible. More precisely, the proper approach chooses parameters σ and γ to minimize the squared residual norm [r(x)]∗[r(x)] of § 13.5, given that

    A = [ u1 1 ; u2 1 ; u3 1 ; u4 1 ; u5 1 ],   x = [ σ ; γ ],   b = [ b1 ; b2 ; b3 ; b4 ; b5 ].

Such parameters constitute a least-squares solution.

The matrix A in the example has two columns, data marching on the left, all ones on the right. This is a typical structure for A, but in general any matrix A with any number of columns of any content might arise (because there were more than two relevant variables or because some data merited heavier weight than others, among many further reasons). Whatever matrix A might arise from whatever source, this section attacks the difficult but important problem of approximating optimally a solution to the general, possibly unsolvable linear system (13.25), Ax ≈ b.

13 Why now two? The answer is that § 13.3 characterized a line as enjoying only one degree of freedom. Do not let this confuse you: § 13.3 discussed travel along a line rather than placement of a line as here. Though both involve lines, they differ as driving an automobile differs from washing one.
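The line-fitting setup above can be exercised numerically; the worker/length figures below are made-up illustrative data, not the text's:

```python
import numpy as np

# Hypothetical production data: workers u_i and length completed b_i.
u = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
b = np.array([4.1, 6.0, 8.2, 9.9, 12.1])

# Five equations, two unknowns sigma and gamma: an overdetermined system.
A = np.column_stack([u, np.ones_like(u)])

# numpy's lstsq minimizes the squared residual norm |A x - b|^2.
(sigma, gamma), *_ = np.linalg.lstsq(A, b, rcond=None)
predicted = sigma * u + gamma
```

No choice of parameters yields a smaller squared residual norm than the returned (sigma, gamma) pair does.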
13.6.1  Least squares in the real domain

The least-squares problem is simplest when the matrix A enjoys full column rank and no complex numbers are involved. In this case, we seek to minimize the squared residual norm

    [r(x)]T[r(x)] = (Ax − b)T(Ax − b)
        = xT AT Ax + bT b − (xT AT b + bT Ax)
        = xT AT Ax + bT b − 2xT AT b
        = xT AT (Ax − 2b) + bT b,

in which the transpose is used interchangeably for the adjoint because all the numbers involved happen to be real. The norm is minimized where

    (d/dx)(rT r) = 0

(in which d/dx is the Jacobian operator of § 11.10). A requirement that

    (d/dx)[xT AT (Ax − 2b) + bT b] = 0

comes of combining the last two equations. Differentiating by the Jacobian product rule (11.79) yields the equation

    xT AT A + [AT (Ax − 2b)]T = 0;

or, after transposing the equation, rearranging terms and dividing by 2, the simplified equation

    AT Ax = AT b.

Assuming (as warranted by § 13.6.2, next) that the n × n square matrix AT A is invertible, the simplified equation implies the approximate but optimal least-squares solution

    x = (AT A)−1 AT b    (13.28)

to the unsolvable linear system (13.25), in the restricted but quite typical case that A and b are real and A has full column rank.

Equation (13.28) plots the line on Fig. 13.1's right. As the reader can see, the line does not pass through all the points, for no line can; but it does pass pretty convincingly nearly among them. In fact it passes optimally nearly among them. No line can pass more nearly, in the squared-residual norm sense of (13.27).14

14 Here is a nice example of the use of the mathematical adjective optimal in its adverbial form. "Optimal" means "best." Many problems in applied mathematics involve discovering the best of something. What constitutes the best however can be a matter of judgment, even of dispute. We shall leave to the philosopher and the theologian the important question of what constitutes objective good, for applied mathematics is a poor guide to such mysteries. The role of applied mathematics is to construct suitable models to calculate quantities needed to achieve some definite good; its role is not, usually, to identify the good as good in the first place.
  One generally establishes mathematical optimality by some suitable, nonnegative, real cost function or metric, and the less the metric, the better. In general however the mathematical question is: what does one mean by "better"? Better by which metric? Each metric is better according to itself. Where no other consideration prevails, the applied mathematician tends to choose the metric that best simplifies the mathematics at hand — and, usually, that is about as good a way to choose a metric as any. The metric (13.27) is so chosen. "But," comes the objection, "what if some more complicated metric is better?" Well, if the other metric really, really, objectively is better, then one should probably use it; but the mathematics cannot tell us which metric to use. This is where the mathematician's experience, taste and judgment come in.
  In the present section's example, too much labor on the freeway job might actually slow construction rather than speed it. One could therefore seek to fit not a line but some downward-turning curve to the data. Mathematics offers many downward-turning curves. A circle, maybe? Not likely. An experienced mathematician would probably reject the circle on the aesthetic yet practical ground that the parabola b = αu² + σu + γ lends itself to easier analysis. Yet even fitting a mere straight line offers choices: one might fit the line to the points (bi, ui) or (ln ui, ln bi) rather than to the points (ui, bi). The three resulting lines differ subtly; they predict production differently. The adjective "optimal" alone evidently does not always tell us all we need to know. Section 6.3 offers a choice between averages that resembles in spirit this footnote's choice between metrics.
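Equation (13.28) can be checked directly against a library least-squares solver; the random data are an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2))   # tall, full column rank (almost surely)
b = rng.standard_normal(6)

# The text's least-squares solution x = (A^T A)^(-1) A^T b,
# computed by solving the normal equations A^T A x = A^T b:
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# The same minimizer obtained from numpy's least-squares routine:
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Both computations minimize the same squared residual norm, so they agree (though for ill-conditioned A the explicit normal equations lose more floating-point accuracy).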
13.6.2  The invertibility of A∗A

Section 13.6.1 has assumed correctly but unwarrantedly that the product AT A were invertible for real A of full column rank. For real A, it happens that AT = A∗, so it only broadens the same assumption to suppose that the product A∗A were invertible for complex A of full column rank.15 This subsection warrants the latter assumption, thereby incidentally also warranting the former.

Let A be a complex, m × n matrix of full column rank r = n ≤ m. Suppose falsely that A∗A were not invertible but singular. Since the product A∗A is a square, n × n matrix, this is to suppose (§ 13.1) that the product's rank r′ < n were less than full, implying (§ 12.5.4) that its columns (as its rows) depended on one another. This would mean that there existed a nonzero, n-element vector u for which

    A∗Au = 0,   In u ≠ 0.

Left-multiplying by u∗ would give that

    u∗A∗Au = 0,   In u ≠ 0,

or in other words that

    (Au)∗(Au) = Σi |[Au]i|² = 0,   In u ≠ 0.

But this could only be so if

    Au = 0,   In u ≠ 0,

impossible when the columns of A are independent. The contradiction proves false the assumption which gave rise to it. The false assumption: that A∗A were singular.

Thus, the n × n product A∗A is invertible for any tall or square, m × n matrix A of full column rank r = n ≤ m.

15 Notice that if A is tall, then A∗A is a compact, n × n square, whereas AA∗ is a big, m × m square. It is the compact square that concerns this section. The big square is not very interesting and in any case is not invertible.
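The subsection's conclusion — that A∗A is invertible whenever A has full column rank — is easy to probe numerically; the random complex test matrix is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# A random tall complex matrix has full column rank with probability 1.
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

G = A.conj().T @ A            # the compact n x n product A*A
G_inv = np.linalg.inv(G)      # exists because A has full column rank
```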
13.6.3  Positive definiteness

An n × n matrix C is positive definite if and only if

    ℑ(u∗Cu) = 0 and ℜ(u∗Cu) > 0 for all In u ≠ 0.

An n × n matrix C is nonnegative definite if and only if

    ℑ(u∗Cu) = 0 and ℜ(u∗Cu) ≥ 0 for all u.    (13.29)

Such definitions might seem opaque, but their sense is that a positive definite operator never reverses the thing it operates on — that is, that the product Cu points more in the direction of u than of −u. A positive definite operator resembles a positive scalar in this sense. Section 13.8 explains further.

As in § 13.6.2, here also when a matrix A has full column rank r = n ≤ m the product u∗A∗Au = (Au)∗(Au) is real and positive for all nonzero, n-element vectors u. Thus per (13.29) the product A∗A is positive definite for any matrix A of full column rank. By reasoning like the last paragraph's, the product A∗A is nonnegative definite for any matrix A whatsoever, even if A lacks full rank.

13.6.4  The Moore-Penrose pseudoinverse

Not every m × n matrix A enjoys full rank. Nevertheless, every m × n matrix A of rank r can be factored into a product16

    A = BC

of an m × r tall or square matrix B and an r × n broad or square matrix C, both of which factors themselves enjoy full rank r. (If A happens to have full row or column rank, then one can just choose B = Im or C = In; but even if A lacks full rank, the Gauss-Jordan decomposition of eqn. 12.2 finds at least the full-rank factorization B = G> Ir, C = Ir G<.) This being so, a conjecture seems warranted. Suppose that, inspired by (13.28), we manipulated (13.25) by the successive steps

    Ax ≈ b,
    BCx ≈ b,
    (B∗B)−1B∗BCx ≈ (B∗B)−1B∗b,
    Cx ≈ (B∗B)−1B∗b.    (13.30)

16 This subsection uses the symbols B and b for unrelated purposes, which is unfortunate but conventional. See footnote 11.
Then suppose that we changed

    C∗u ← x,

thus restricting x to the space addressed by the independent columns of C∗. Continuing,

    CC∗u ≈ (B∗B)−1B∗b,
    u ≈ (CC∗)−1(B∗B)−1B∗b.

Changing the variable back and (because we are conjecturing and can do as we like) altering the "≈" sign to "=",

    x = C∗(CC∗)−1(B∗B)−1B∗b.    (13.31)

Equation (13.31) has a pleasingly symmetrical form, and we know from § 13.6.2 at least that the two matrices it tries to invert are invertible. So here is our conjecture:

• no x enjoys a smaller squared residual norm r∗r than the x of (13.31) does; and
• among all x that enjoy the same, minimal squared residual norm, the x of (13.31) is strictly least in magnitude.

The conjecture is bold, but if you think about it in the right way it is not unwarranted under the circumstance. After all, (13.31) does resemble (13.28), the latter of which admittedly requires real A of full column rank but does minimize the residual when its requirements are met; and, even if there were more than one x which minimized the residual, one of them might be smaller than the others: why not the x of (13.31)? One can but investigate.

The first point of the conjecture is symbolized

    r∗(x) r(x) ≤ r∗(x + ∆x) r(x + ∆x),

where ∆x represents the deviation, whether small, moderate or large, of some alternate x from the x of (13.31). According to (13.26), this is

    [Ax − b]∗[Ax − b] ≤ [(A)(x + ∆x) − b]∗[(A)(x + ∆x) − b].

Reorganizing,

    [Ax − b]∗[Ax − b] ≤ [(Ax − b) + A ∆x]∗[(Ax − b) + A ∆x].

Distributing factors and canceling like terms,

    0 ≤ (Ax − b)∗A ∆x + ∆x∗A∗(Ax − b) + ∆x∗A∗A ∆x.

But according to (13.31) and the full-rank factorization A = BC,

    A∗(Ax − b) = A∗Ax − A∗b
        = [C∗B∗][BC][C∗(CC∗)−1(B∗B)−1B∗b] − [C∗B∗][b]
        = C∗(B∗B)(CC∗)(CC∗)−1(B∗B)−1B∗b − C∗B∗b
        = C∗B∗b − C∗B∗b = 0,

which reveals two of the inequality's remaining three terms to be zero, leaving an assertion that

    0 ≤ ∆x∗A∗A ∆x,

which naturally for ∆x = 0 is true, and which for ∆x ≠ 0 is true because the product of any matrix and its adjoint according to § 13.6.3 is a nonnegative definite operator. Each step in the present paragraph is reversible,17 so the assertion in the last form is logically equivalent to the conjecture's first point, with which the paragraph began, thus establishing the conjecture's first point.

The conjecture's first point, now established, has it that no x + ∆x enjoys a smaller squared residual norm than the x of (13.31) does. It does not claim that no x + ∆x enjoys the same, minimal squared residual norm. The latter case is symbolized

    r∗(x) r(x) = r∗(x + ∆x) r(x + ∆x),

or equivalently by the last paragraph's logic,

    0 = ∆x∗A∗A ∆x;

or in other words,

    A ∆x = 0.

But A = BC, so this is to claim that

    B(C ∆x) = 0,

which since B has full column rank is possible only if

    C ∆x = 0.

Considering the product ∆x∗x in light of (13.31) and the last equation, we observe that

    ∆x∗x = ∆x∗[C∗(CC∗)−1(B∗B)−1B∗b]
         = [C ∆x]∗[(CC∗)−1(B∗B)−1B∗b],

which is to observe that

    ∆x∗x = 0

for any ∆x for which x + ∆x achieves minimal squared residual norm.

Returning attention to the conjecture, its second point is symbolized

    x∗x < (x + ∆x)∗(x + ∆x)

for any ∆x ≠ 0 for which x + ∆x achieves minimal squared residual norm (note that it's "<" this time, not "≤" as in the conjecture's first point). Distributing factors and canceling like terms,

    0 < x∗∆x + ∆x∗x + ∆x∗∆x.

But the last paragraph has found that ∆x∗x = 0 for precisely such ∆x as we are considering here, so the last inequality reduces to read

    0 < ∆x∗∆x,

which is true for any nonzero ∆x. Since each step in the paragraph is reversible, reverse logic establishes the conjecture's second point.

With both its points established, the conjecture is true.

If A = BC is a full-rank factorization, then the matrix18

    A† ≡ C∗(CC∗)−1(B∗B)−1B∗    (13.32)

of (13.31) is called the Moore-Penrose pseudoinverse of A, more briefly the pseudoinverse of A. Whether underdetermined, exactly determined, overdetermined or even degenerate, every matrix has a Moore-Penrose pseudoinverse, yielding the optimal approximation

    x = A†b.    (13.33)

17 The paragraph might inscrutably but logically instead have ordered the steps in reverse as in §§ 6.3.2 and 9.5. See Ch. 6's footnote 14.
18 Some books print A† as A+.
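Equation (13.32) can be exercised with a full-rank factorization obtained numerically (here from the SVD rather than from the Gauss-Jordan factors; the rank-deficient example matrix is an assumption for illustration):

```python
import numpy as np

def pseudoinverse(A, tol=1e-12):
    """A† = C*(CC*)^(-1)(B*B)^(-1)B*, from a full-rank factorization A = BC."""
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol * s[0]))   # numerical rank
    B = U[:, :r] * s[:r]              # m x r, full column rank
    C = Vt[:r]                        # r x n, full row rank; A = B @ C
    Bs, Cs = B.conj().T, C.conj().T
    return Cs @ np.linalg.inv(C @ Cs) @ np.linalg.inv(Bs @ B) @ Bs

# A degenerate 3x3 matrix of rank 2 (row 3 = row 1 + row 2):
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
Adag = pseudoinverse(A)
```

Because the Moore-Penrose pseudoinverse is unique, the result matches numpy's built-in `np.linalg.pinv` whatever full-rank factorization is used.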
If A is square and invertible, then the Moore-Penrose A† = A−1 is just the inverse, and then of course (13.33) solves the system uniquely and exactly. Nothing can solve the system uniquely if A has broad shape, but the Moore-Penrose still solves the system exactly in that case as long as A has full row rank, moreover minimizing the solution's squared magnitude x∗x (which the solution of eqn. 13.23 fails to do). If A lacks full row rank, then the Moore-Penrose solves the linear system (13.25) as well as the system can be solved — exactly if possible, with minimal squared residual norm if impossible — and as a side-benefit also minimizes x∗x. The Moore-Penrose is thus a general-purpose solver and approximator for linear systems. It is a significant discovery.19

13.7  The multivariate Newton-Raphson iteration

When we first met the Newton-Raphson iteration in § 4.8 we lacked the matrix notation and algebra to express and handle vector-valued functions adeptly. Now that we have the notation and algebra we can write down the multivariate Newton-Raphson iteration almost at once.

The iteration approximates the nonlinear vector function f(x) by its tangent

    f̃k(x) = f(xk) + [(d/dx) f(x)]|x=xk (x − xk),

where df/dx is the Jacobian derivative of § 11.10. It then approximates the root xk+1 as the point at which f̃k(xk+1) = 0:

    f̃k(xk+1) = 0 = f(xk) + [(d/dx) f(x)]|x=xk (xk+1 − xk).

Solving for xk+1 (approximately if necessary), we have that

    xk+1 = [ x − [df/dx]† f(x) ]|x=xk,    (13.34)

where [·]† is the Moore-Penrose pseudoinverse of § 13.6 — which is just the ordinary inverse [·]−1 of § 13.1 if f and x happen each to have the same number of elements. Refer to § 4.8.

Despite the Moore-Penrose notation of (13.34), the Newton-Raphson iteration is not normally meant to be applied at a value of x for which the Jacobian is degenerate. The iteration intends rather in light of (13.32) that

    [df/dx]† = [df/dx]∗([df/dx][df/dx]∗)−1    if r = m ≤ n,
    [df/dx]† = [df/dx]−1                      if r = m = n,
    [df/dx]† = ([df/dx]∗[df/dx])−1[df/dx]∗    if r = n ≤ m,    (13.35)

where B = Im in the first case and C = In in the last. It does not intend to use the full (13.32). If both r < m and r < n — which is to say, if the Jacobian is degenerate — then (13.35) fails, as though the curve of Fig. 4.5 ran horizontally at the test point — whereupon one quits, restarting the iteration from another point.

19 [2, § 3.3][37, "Moore-Penrose generalized inverse"][40]
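A minimal sketch of iteration (13.34) for a square system (the example function, Jacobian and starting point are assumptions for illustration; since the Jacobian here is square and invertible near the root, the pseudoinverse reduces to the ordinary inverse):

```python
import numpy as np

def f(x):
    # Solve x0^2 + x1^2 = 4 and x0 * x1 = 1 simultaneously.
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0]*x[1] - 1.0])

def jacobian(x):
    # The Jacobian derivative df/dx, here a 2x2 matrix.
    return np.array([[2.0*x[0], 2.0*x[1]],
                     [x[1],     x[0]]])

x = np.array([2.0, 0.3])                             # starting guess
for _ in range(20):
    x = x - np.linalg.pinv(jacobian(x)) @ f(x)       # eqn (13.34)
```

A handful of iterations suffices from a reasonable starting guess; were the Jacobian to go degenerate at the test point, one would quit and restart from another point, as the text advises.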
Despite the Moore-Penrose notation of (13.34), the Newton-Raphson iteration is not normally meant to be applied at a value of x for which the Jacobian is degenerate. The iteration intends rather in light of (13.32) that

  [df/dx]† = [df/dx]∗([df/dx][df/dx]∗)−1   if r = m ≤ n,
  [df/dx]† = [df/dx]−1                     if r = m = n,
  [df/dx]† = ([df/dx]∗[df/dx])−1[df/dx]∗   if r = n ≤ m,   (13.35)

where B = Im in the first case and C = In in the last. It does not intend to use the full (13.32). If both r < m and r < n—which is to say, if the Jacobian is degenerate—then (13.35) fails, as though the curve of Fig. 4.5 ran horizontally at the test point—whereupon one quits, restarting the iteration from another point.

13.8 The dot product

The dot product of two vectors, also called the inner product,21 is the product of the two vectors to the extent that they run in the same direction. It is written a · b. In general,

  a · b = (a1e1 + a2e2 + · · · + anen) · (b1e1 + b2e2 + · · · + bnen).

But if the dot product is to mean anything, it must be that ei · ej = δij. Therefore,

  a · b = a1b1 + a2b2 + · · · + anbn;

or, more concisely,

  a · b ≡ aT b = Σj aj bj.   (13.36)

The dot notation does not worry whether its arguments are column or row vectors, incidentally:

  a · b = a · bT = aT · b = aT · bT = aT b.

That is, if either vector is wrongly oriented, the notation implicitly reorients it before using it. (The more orderly notation aT b by contrast assumes that both are proper column vectors.)

Where vectors may have complex elements, usually one is not interested in a · b so much as in

  a∗ · b ≡ a∗b = Σj a∗j bj.   (13.37)

The reason is that

  ℜ(a∗ · b) = ℜ(a) · ℜ(b) + ℑ(a) · ℑ(b),   (13.38)

with the product of the imaginary parts added not subtracted, thus honoring the right Argand sense of "the product of the two vectors to the extent that they run in the same direction."

By the Pythagorean theorem, the dot product

  |a|2 = a∗ · a   (13.39)

gives the square of a vector's magnitude, always real, never negative. The unit vector in a's direction then is

  â ≡ a/|a| = a/√(a∗ · a),   (13.40)

from which

  |â|2 = â∗ · â = 1.   (13.41)

When two vectors do not run in the same direction at all, such that

  a∗ · b = 0,   (13.42)

the two vectors are said to lie orthogonal to one another. Geometrically this puts them at right angles. For other angles θ between two vectors,

  â∗ · b̂ = cos θ,   (13.43)

which formally defines the angle θ even when a and b have more than three elements each.

21 The term inner product is often used to indicate a broader class of products than the one defined here, especially in some of the older literature. Where used, the notation usually resembles ⟨a, b⟩ or (b, a), both of which mean a∗ · b (or, more broadly, some similar product), except that which of a and b is conjugated depends on the author. Most recently, at least in the author's country, the usage ⟨a, b⟩ ≡ a∗ · b seems to be emerging as standard where the dot is not used [2, § 3.1][15, Ch. 4]. This book prefers the dot.
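The relations (13.37) through (13.43) are cheap to check numerically. A brief illustrative sketch follows (assuming numpy; note that `np.vdot` conjugates its first argument, matching the book's a∗ · b):

```python
import numpy as np

a = np.array([1.0 + 2.0j, 3.0 - 1.0j, 0.5j])
b = np.array([2.0 - 1.0j, 1.0 + 1.0j, 4.0 + 0.0j])

inner = np.vdot(a, b)            # a* . b, per (13.37)
mag_sq = np.vdot(a, a).real      # |a|^2 = a* . a, always real (13.39)
a_hat = a / np.sqrt(mag_sq)      # unit vector per (13.40)

# |a_hat|^2 = 1, per (13.41)
assert np.isclose(np.vdot(a_hat, a_hat).real, 1.0)

# Re(a* . b) = Re a . Re b + Im a . Im b, per (13.38):
# the imaginary parts' product is ADDED, not subtracted
assert np.isclose(inner.real,
                  np.dot(a.real, b.real) + np.dot(a.imag, b.imag))

# angle between two real vectors via (13.43)
u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])
cos_theta = np.dot(u / np.linalg.norm(u), v / np.linalg.norm(v))
```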
13.9 The orthogonal complement

The m × (m − r) kernel (§ 13.3)22

  A⊥ ≡ A∗K   (13.44)

is an interesting matrix. By definition of the kernel, the columns of A∗K are the independent vectors uj for which A∗uj = 0, which—inasmuch as the rows of A∗ are the adjoints of the columns of A—is possible only when each uj lies orthogonal to every column of A. This says that the columns of A⊥ ≡ A∗K address the complete space of vectors that lie orthogonal to A's columns, such that

  A⊥∗A = 0 = A∗A⊥.   (13.45)

The matrix A⊥ is called the orthogonal complement23 or perpendicular matrix to A. Among other uses, the orthogonal complement A⊥ supplies the columns A lacks to reach full row rank. Properties include that

  A∗K = A⊥,
  A∗⊥ = AK.   (13.46)

13.10 Gram-Schmidt orthonormalization

If a vector x = AKa belongs to a kernel space AK (§ 13.3), then so does any αx. If the vectors x1 = AKa1 and x2 = AKa2 both belong, then so equally does α1x1 + α2x2. Kernel vectors have no inherent scale. If I claim AK = [3 4 5; −1 1 0]T to represent a kernel, then you are not mistaken arbitrarily to rescale each column of my AK by a separate nonzero factor, instead for instance representing the same kernel as AK = [6 8 0xA; −1 1 0]T. Style generally asks the applied mathematician to remove the false appearance of scale by using (13.40) to normalize the columns of a kernel matrix to unit magnitude before reporting them. The same goes for the eigenvectors of Ch. 14 to come. Where a kernel matrix AK has two or more columns (or a repeated eigenvalue has two or more eigenvectors), style generally asks the applied mathematician not only to normalize but also to orthogonalize the columns before reporting them.

22 The symbol A⊥ [23][2][31] can be pronounced "A perp," short for "A perpendicular," since by (13.45) A⊥ is in some sense perpendicular to A. If we were really precise, we might write not A⊥ but A⊥(m). Refer to footnote 5.
23 [23, § 3.VI.3]
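As a concrete numerical sketch (illustrative code, not the book's method): the orthogonal complement A⊥ of (13.44) can be computed as the kernel of A∗, here via the singular value decomposition rather than by this chapter's algorithms.

```python
import numpy as np

def orthogonal_complement(A, tol=1e-12):
    """Columns spanning the space orthogonal to A's columns,
    i.e. the kernel of A*, per A_perp = A*K of (13.44).
    Computed from the SVD of A*: the right-singular vectors
    belonging to vanishing singular values."""
    U, s, Vh = np.linalg.svd(A.conj().T)
    rank = int(np.sum(s > tol))
    return Vh[rank:].conj().T        # shape m x (m - r)

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])           # rank 2, m = 3
A_perp = orthogonal_complement(A)    # 3 x 1
```

One may verify (13.45) directly: A∗A⊥ evaluates to the zero matrix to machine precision.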
One orthogonalizes a vector b with respect to a vector a by subtracting from b a multiple of a such that

  a∗ · b⊥ = 0,
  b⊥ ≡ b − βa,

where the symbol b⊥ represents the orthogonalized vector. Substituting the second of these equations into the first and solving for β yields

  β = (a∗ · b)/(a∗ · a).

Hence,

  a∗ · b⊥ = 0,
  b⊥ ≡ b − [(a∗ · b)/(a∗ · a)] a.   (13.47)

But according to (13.40), a = â√(a∗ · a); and according to (13.41), â∗ · â = 1; so,

  b⊥ = b − â(â∗ · b);   (13.48)

or, in matrix notation,

  b⊥ = b − â(â∗)(b).

This is arguably better written,

  b⊥ = [I − (â)(â∗)] b   (13.49)

(observe that it's [â][â∗], a matrix, rather than the scalar [â∗][â]).

One orthonormalizes a set of vectors by orthogonalizing them with respect to one another, then by normalizing each of them to unit magnitude. The procedure to orthonormalize several vectors

  {x1, x2, x3, . . . , xn}

therefore is as follows. First, normalize x1 by (13.40); call the result x̂1⊥. Second, orthogonalize x2 with respect to x̂1⊥ by (13.48) or (13.49), then normalize it; call the result x̂2⊥. Third, orthogonalize x3 with respect to x̂1⊥ then to x̂2⊥, then normalize it; call the result x̂3⊥. Proceed in this manner through the several xj. Symbolically,

  x̂j⊥ = xj⊥/√(x∗j⊥ xj⊥),
  xj⊥ ≡ [ Π(i=1..j−1) (I − x̂i⊥ x̂∗i⊥) ] xj.   (13.50)

By the vector replacement principle of § 12.4 in light of (13.47), the resulting orthonormal set of vectors

  {x̂1⊥, x̂2⊥, x̂3⊥, . . . , x̂n⊥}

addresses the same space as did the original set.
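The procedure above can be sketched in code as follows (an illustrative rendering, not the book's; it subtracts scalar projections per (13.48) rather than forming the projector matrices of (13.50)):

```python
import numpy as np

def orthonormalize(vectors):
    """Orthonormalize a linearly independent set per the procedure
    of (13.50): each new vector has its projection onto every
    previously finished unit vector subtracted, then is normalized."""
    basis = []
    for x in vectors:
        x = np.array(x, dtype=complex)
        for q in basis:
            x -= np.vdot(q, x) * q   # subtract scalar projection (13.48)
        basis.append(x / np.sqrt(np.vdot(x, x).real))
    return np.column_stack(basis)

Q = orthonormalize([[1.0, 1.0, 0.0],
                    [1.0, 0.0, 1.0],
                    [0.0, 1.0, 1.0]])
```

The resulting columns are mutually orthonormal, so Q∗Q comes out as the identity.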
Orthonormalization naturally works equally for any linearly independent set of vectors, not only for kernel vectors or eigenvectors.

13.10.1 Efficient implementation

To turn an equation like the latter line of (13.50) into an efficient numerical algorithm sometimes demands some extra thought, in perspective of whatever it happens to be that one is trying to accomplish. If all one wants is some vectors orthonormalized, then the equation as written is neat but is overkill, because the product x̂i⊥x̂∗i⊥ is a matrix, whereas the product x̂∗i⊥ · xj implied by (13.48) is just a scalar. Fortunately, one need not apply the latter line of (13.50) exactly as written. One can instead introduce intermediate vectors xji, representing the multiplication in the admittedly messier form

  xj1 ≡ xj,
  xj(i+1) ≡ xji − (x̂∗i⊥ · xji) x̂i⊥,
  xj⊥ = xjj.   (13.51)

Besides obviating the matrix I − x̂i⊥x̂∗i⊥ and the associated matrix multiplication, the messier form (13.51) has the significant additional practical virtue that it lets one forget each intermediate vector xji immediately after using it. (A well written orthonormalizing computer program reserves memory for one intermediate vector only, which memory it repeatedly overwrites—and, actually, probably does not even reserve that much, working rather in the memory space it has already reserved for x̂j⊥.)24 Other equations one algorithmizes can likewise benefit from thoughtful rendering.

13.10.2 The Gram-Schmidt decomposition

The orthonormalization technique this section has developed is named the Gram-Schmidt process. One can turn it into the Gram-Schmidt decomposition

  A = QR = QUDS,
  R ≡ UDS,   (13.52)

also called the orthonormalizing or QR decomposition, by an algorithm that somewhat resembles the Gauss-Jordan algorithm of § 12.3.3—except that the working equation here becomes

  A = Q̃ŨD̃S̃   (13.53)

and initially Q̃ ← A. By elementary column operations based on (13.50) and (13.51), the algorithm gradually transforms Q̃ into a dimension-limited, m × r matrix Q of orthonormal columns, distributing the inverse elementaries to Ũ, D̃ and S̃ according to Table 12.1—where the latter three working matrices ultimately become the extended-operational factors U, D and S of (13.52).

Borrowing the language of computer science we observe that the indices i and j of (13.50) and (13.51) imply a two-level nested loop, one level looping over j and the other over i. The equations suggest j-major nesting, with the loop over j at the outer level and the loop over i at the inner, such that the several (i, j) index pairs occur in the sequence (reading left to right then top to bottom)

  (1, 2)
  (1, 3) (2, 3)
  (1, 4) (2, 4) (3, 4)
  · · ·

In reality, however, (13.51)'s middle line requires only that no x̂i⊥ be used before it is fully calculated; otherwise that line does not care which (i, j) pair follows which. The i-major nesting

  (1, 2) (1, 3) (1, 4) · · ·
         (2, 3) (2, 4) · · ·
                (3, 4) · · ·
                       · · ·

bringing the very same index pairs in a different sequence, is just as valid. We choose i-major nesting on the subtle ground that it affords better information to the choice of column index p during the algorithm's step 3.

The algorithm, in detail:

1. Begin by initializing

  Q̃ ← A, Ũ ← I, D̃ ← I, S̃ ← I, i ← 1.

Observe that Ũ enjoys the major partial unit triangular form L{i−1}T (§ 11.8.5), that D̃ is a general scaling operator (§ 11.7.2) with d̃jj = 1 for all j ≥ i, that S̃ is a permutor (§ 11.7.1), and that the first through (i − 1)th columns of Q̃ consist of mutually orthonormal unit vectors.

2. (Besides arriving at this point from step 1 above, the algorithm also reënters here from step 9 below.) If Q̃ is null in and rightward of its ith column, such that no column p ≥ i remains available to choose, then skip directly to step 10.

3. Choose a column p ≥ i of Q̃ containing at least one nonzero element. (The simplest choice is perhaps p = i as long as the ith column does not happen to be null, but one might instead prefer to choose the column of greatest magnitude, or to choose randomly, among other heuristics.) Then, observing that (13.53) can be expanded to read

  A = (Q̃T[i↔p])(T[i↔p]ŨT[i↔p])(T[i↔p]D̃T[i↔p])(T[i↔p]S̃)
    = (Q̃T[i↔p])(T[i↔p]ŨT[i↔p])D̃(T[i↔p]S̃),

where the latter line has applied a rule from Table 12.1, interchange the chosen pth column to the ith position by

  Q̃ ← Q̃T[i↔p],
  Ũ ← T[i↔p]ŨT[i↔p],
  S̃ ← T[i↔p]S̃.

4. Observing that (13.53) can be expanded to read

  A = (Q̃T(1/α)[i])(Tα[i]ŨT(1/α)[i])(Tα[i]D̃)S̃,

24 [52, "Gram-Schmidt process," 04:48, 11 August 2007]
End.

Though the Gram-Schmidt algorithm broadly resembles the Gauss-Jordan, at least two significant differences stand out: (i) the Gram-Schmidt is one-sided because it operates only on the columns of Q̃, never on the rows; (ii) since Q is itself dimension-limited, the Gram-Schmidt decomposition (13.52) needs and has no explicit factor Ir.

As in § 12.3.3, here also one sometimes prefers that S = I. The algorithm optionally supports this preference if the m × n matrix A has full column rank r = n, when null columns cannot arise, if one always chooses p = i during the algorithm's step 3. Such optional discipline maintains S = I when desired.

Whether S = I or not, the matrix Q = QIr has only r columns, whose orthonormal columns address the same space the columns of A themselves address, so one can write (13.52) as A = (QIr)(R). Reassociating factors, this is

  A = (Q)(IrR),   (13.54)

which is a proper full-rank factorization with which one can compute the pseudoinverse A† of A (see eqn. 13.32, above).

If Q reaches the maximum possible rank r = m, achieving square, m × m shape, then it becomes a unitary matrix—the subject of § 13.11. Before treating the unitary matrix, however, let us pause to extract a kernel from the Gram-Schmidt decomposition in § 13.10.3, next.

13.10.3 The Gram-Schmidt kernel formula

If the Gram-Schmidt decomposition (13.52) looks useful, it is even more useful than it looks. Like the Gauss-Jordan decomposition in (13.7), the Gram-Schmidt decomposition too brings a kernel formula. To develop and apply it, one decomposes an m × n matrix

  A = QR   (13.55)

per the Gram-Schmidt (13.52) and its algorithm in § 13.10.2. Observing that the r independent columns of the m × r matrix Q address the same space the columns of A address, one then constructs the m × (r + m) matrix

  A′ ≡ Q + ImH−r = [Q  Im]   (13.56)

and decomposes it too,

  A′ = Q′R′,   (13.57)

again by Gram-Schmidt—with the differences that, this time, one chooses p = 1, 2, 3, . . . , r during the first r instances of the algorithm's step 3, and that one skips the unnecessary step 7 for all j ≤ r, on the ground that the earlier Gram-Schmidt application of (13.55) has already orthonormalized the first r columns of A′, which columns, after all, are just Q. The resulting m × m, full-rank square matrix

  Q′ = Q + A⊥H−r = [Q  A⊥]   (13.58)

consists of

• r columns on the left that address the same space the columns of A address and

• m − r columns on the right that give a complete orthogonal complement (§ 13.9) A⊥ of A.

Each column has unit magnitude and conveniently lies orthogonal to every other column. The equation

  A∗K = A⊥ = Q′HrIm−r   (13.59)

extracts the rightward columns that express the kernel—not of A, but of A∗. To compute the kernel of a matrix B by Gram-Schmidt one sets A = B∗ and applies (13.55) through (13.59). Refer to (13.46).

In either the form (13.58) or the form (13.59), the Gram-Schmidt kernel formula does everything the Gauss-Jordan kernel formula (13.7) does and in at least one sense does it better; for, if one wants a Gauss-Jordan kernel orthonormalized, then one must orthonormalize it as an extra step, whereas the Gram-Schmidt kernel comes already orthonormalized. Equation (13.58) is probably the more useful form. Being square, the m × m matrix Q′ is a unitary matrix, as the last paragraph of § 13.10.2 has alluded.

13.11 The unitary matrix
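The kernel formula just developed admits a short numerical sketch (illustrative only; numpy's `qr` stands in for the section's algorithm, so in effect S = I and no column pivoting is performed):

```python
import numpy as np

def gram_schmidt_kernel(B, tol=1e-12):
    """Orthonormal kernel of B in the spirit of (13.55)-(13.59):
    set A = B*, orthonormalize A's columns to obtain Q, then
    orthonormalize A' = [Q  Im]; the rightward m - r columns come
    out orthogonal to A's columns, i.e. they span B's kernel."""
    A = B.conj().T                          # m x n
    m = A.shape[0]
    Qfull, R = np.linalg.qr(A)              # A = QR, per (13.55)
    r = int(np.sum(np.abs(np.diag(R)) > tol))
    Q = Qfull[:, :r]
    # A' = [Q  Im], per (13.56); re-orthonormalize, per (13.57)
    Qp, _ = np.linalg.qr(np.hstack([Q, np.eye(m)]))
    return Qp[:, r:]                        # rightward columns, per (13.59)

B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])             # 2 x 3, rank 2
K = gram_schmidt_kernel(B)                  # 3 x 1, already orthonormal
```

As the text promises, the kernel arrives already orthonormalized, with no extra normalization step.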
When the orthonormalized matrix Q of the Gram-Schmidt decomposition (13.52) is square, having the maximum possible rank r = m, it brings one property so interesting that the property merits a section of its own. The property is that

  Q∗Q = Im = QQ∗.   (13.60)

The reason that Q∗Q = Im is that Q's columns are orthonormal, and that the very definition of orthonormality demands that the dot product [Q]∗∗i · [Q]∗j of orthonormal columns be zero unless i = j, when the dot product of a unit vector with itself is unity. That Im = QQ∗ is unexpected, however, until one realizes25 that the equation Q∗Q = Im characterizes Q∗ to be the rank-m inverse of Q, and that § 13.1 lets any rank-m inverse (orthonormal or otherwise) attack just as well from the right as from the left. Thus,

  Q−1 = Q∗,   (13.61)

a very useful property. A matrix Q that satisfies (13.60), whether derived from the Gram-Schmidt or from elsewhere, is called a unitary matrix. (Note that the permutor of § 11.7.1 enjoys the property of eqn. 13.61 precisely because it is unitary.)

One immediate consequence of (13.60) is that a square matrix with either orthonormal columns or orthonormal rows is unitary and has both.

The product of two or more unitary matrices is itself unitary if the matrices are of the same dimensionality. To prove it, consider the product

  Q = QaQb   (13.62)

of m × m unitary matrices Qa and Qb. Let the symbols qj, qaj and qbj respectively represent the jth columns of Q, Qa and Qb, and let the symbol qbij represent the ith element of qbj. By the columnwise interpretation of matrix multiplication,

  qj = Σi qbij qai.

The adjoint dot product of any two of Q's columns then is

  q∗j′ · qj = Σi,i′ q∗bi′j′ qbij (q∗ai′ · qai).

But q∗ai′ · qai = δi′i because Qa is unitary,26 so

  q∗j′ · qj = Σi q∗bij′ qbij = q∗bj′ · qbj = δj′j,

which says neither more nor less than that the columns of Q are orthonormal, which is to say that Q is unitary, as was to be demonstrated.

25 [15, § 4.4]
26 This is true only for 1 ≤ i ≤ m, but you knew that already.
Unitary operations preserve length. To prove it, consider the system

  Qx = b.

Multiplying the system by its own adjoint yields

  x∗Q∗Qx = b∗b.

But according to (13.60), Q∗Q = Im; so,

  x∗x = b∗b,   (13.63)

as was to be demonstrated. That is, operating on an m-element vector by an m × m unitary matrix does not alter the vector's magnitude. Equation (13.61) moreover lets one use the Gram-Schmidt decomposition (13.52) to invert a square matrix as

  A−1 = R−1Q∗ = S∗D−1U−1Q∗.

Unitary extended operators are certainly possible, for if Q is an m × m dimension-limited matrix, then the extended operator

  Q∞ = Q + (I − Im),

which is just Q with ones running out the main diagonal from its active region, itself meets the unitary criterion (13.60) for m = ∞.

Unitary matrices are so easy to handle that they can sometimes justify significant effort to convert a model to work in terms of them if possible. We shall meet the unitary matrix again in Ch. 14.

The chapter as a whole has demonstrated at least in theory (and usually in practice) techniques to solve any linear system characterized by a matrix of finite dimensionality, whatever the matrix's rank or shape. It has explained how to orthonormalize a set of vectors and has derived from the explanation the useful Gram-Schmidt decomposition. As the chapter's introduction had promised, the matrix has shown its worth here, for without the matrix's notation, arithmetic and algebra most of the chapter's findings would have lain beyond practical reach. And even so, the single most interesting agent of matrix arithmetic remains yet to be treated. This last is the eigenvalue. It is the subject of Ch. 14, next.
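The unitary properties (13.60), (13.61) and (13.63) are easy to verify numerically. An illustrative sketch (numpy's `qr` does the orthonormalizing work of building a unitary Q from an arbitrary full-rank square matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)           # Q is unitary

I4 = np.eye(4)
# Q*Q = Im = QQ*, per (13.60); hence Q^-1 = Q*, per (13.61)
ok_left = np.allclose(Q.conj().T @ Q, I4)
ok_right = np.allclose(Q @ Q.conj().T, I4)

# length preservation, per (13.63): x*x = b*b where b = Qx
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
b = Q @ x
ok_norm = np.isclose(np.vdot(x, x).real, np.vdot(b, b).real)

# a product of same-dimensioned unitary matrices is itself unitary
Q2 = Q @ Q
ok_product = np.allclose(Q2.conj().T @ Q2, I4)
```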
Chapter 14

The eigenvalue

The eigenvalue is a scalar by which a square matrix scales a vector without otherwise changing it, such that

  Av = λv.

This chapter analyzes the eigenvalue and the associated eigenvector it scales.

Before treating the eigenvalue proper, the chapter gathers from across Chs. 11 through 14 several properties all invertible square matrices share, assembling them in § 14.2 for reference. One of these regards the determinant, which opens the chapter.

14.1 The determinant

Through Chs. 11, 12 and 13 the theory of the matrix has developed slowly but pretty straightforwardly. Here comes the first unexpected turn. It begins with an arbitrary-seeming definition. The determinant of an n × n square matrix A is the sum of n! terms, each term the product of n elements, no two elements from the same row or column, terms of positive parity adding to and terms of negative parity subtracting from the sum—a term's parity (§ 11.6) being the parity of the permutor (§ 11.7.1) marking the positions of the term's elements.

Unless you already know about determinants, the definition alone might seem hard to parse, so try this. The inverse of the general 2 × 2 square matrix

  A2 = [a11 a12; a21 a22],
by the Gauss-Jordan method or any other convenient technique, is found to be

  A2−1 = [a22 −a12; −a21 a11] / (a11a22 − a12a21).

The quantity1

  det A2 = a11a22 − a12a21

in the denominator is defined to be the determinant of A2. Each of the determinant's terms includes one element from each column of the matrix and one from each row, with parity giving the term its ± sign. The determinant of the general 3 × 3 square matrix by the same rule is

  det A3 = (a11a22a33 + a12a23a31 + a13a21a32)
         − (a13a22a31 + a12a21a33 + a11a23a32);

and indeed if we tediously invert such a matrix symbolically, we do find that quantity in the denominator there.

The parity rule merits a more careful description. The parity of a term like a12a23a31 is positive because the parity of the permutor, or interchange quasielementary (§ 11.7.1),

  P = [0 1 0; 0 0 1; 1 0 0]

marking the positions of the term's elements is positive. The parity of a term like a13a22a31 is negative for the same reason. The determinant comprehends all possible such terms, n! in number, half of positive parity and half of negative. (How do we know that exactly half are of positive and half, negative? Answer: by pairing the terms. For every term like a12a23a31 whose marking permutor is P, there is a corresponding a13a22a31 whose marking permutor is T[1↔2]P, necessarily of opposite parity. The sole exception to the rule is the 1 × 1 square matrix, which has no second term to pair.)

1 The determinant det A used to be written |A|, an appropriately terse notation for which the author confesses some nostalgia. The older notation |A| however unluckily suggests "the magnitude of A," which though not quite the wrong idea is not quite the right idea. The magnitude |z| of a scalar or |u| of a vector is a real-valued, nonnegative, nonanalytic function of the elements of the quantity in question, whereas the determinant det A is a complex-valued, analytic function. The book follows convention by denoting the determinant as det A for this reason among others.
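The n!-term definition renders directly into code—hopelessly slow beyond small n, but useful as a check against a library determinant (an illustrative sketch, not a practical algorithm):

```python
import itertools
import numpy as np

def det_by_permutations(A):
    """Determinant per the definition of § 14.1: a sum of n! terms,
    one element from each row and each column, each term signed by
    the parity of its marking permutor."""
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # parity of the permutation = parity of its inversion count
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1.0 if inversions % 2 else 1.0
        term = sign
        for row, col in enumerate(perm):
            term *= A[row, col]
        total += term
    return total

A2 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
# a11 a22 - a12 a21 = 4 - 6 = -2
```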
Historically the determinant probably emerged not from abstract considerations but for the mundane reason that the quantity it represents occurred frequently in practice (as in the A2−1 example above).2 Nothing however logically prevents one from simply defining some quantity which, at first, one merely suspects will later prove useful. So we do here.3 It is admitted4 that we have not, as yet, actually shown the determinant to be a generally useful quantity; we have merely motivated and defined it.

Normally the context implies a determinant's rank n, but the nonstandard notation det(n) A is available especially to call the rank out, stating explicitly that the determinant has exactly n! terms.

14.1.1 Basic properties

The determinant det A enjoys several useful basic properties.

• If

  ci∗ = { ai′′∗ when i = i′;  ai′∗ when i = i′′;  ai∗ otherwise },

or if

  c∗j = { a∗j′′ when j = j′;  a∗j′ when j = j′′;  a∗j otherwise },

where i′′ ≠ i′ and j′′ ≠ j′, then

  det C = − det A.   (14.1)

Interchanging rows or columns negates the determinant. (See also §§ 11.5 and 11.3.)

• If

  ci∗ = { αai∗ when i = i′;  ai∗ otherwise },

or if

  c∗j = { αa∗j when j = j′;  a∗j otherwise },

then

  det C = α det A.   (14.2)

Scaling a single row or column of a matrix scales the matrix's determinant by the same factor. (Equation 14.2 tracks the linear scaling property of § 7.3.3 and of eqn. 11.2.)

• If

  ci∗ = { ai∗ + bi∗ when i = i′;  ai∗ = bi∗ otherwise },

or if

  c∗j = { a∗j + b∗j when j = j′;  a∗j = b∗j otherwise },

then

  det C = det A + det B.   (14.3)

If one row or column of a matrix C is the sum of the corresponding rows or columns of two other matrices A and B, while the three matrices remain otherwise identical, then the determinant of the one matrix is the sum of the determinants of the other two. (Equation 14.3 tracks the linear superposition property of § 7.3.3 and of eqn. 11.2.)

• If

  ci′∗ = 0,

or if

  c∗j′ = 0,

then

  det C = 0.   (14.4)

A matrix with a null row or column also has a null determinant.

• If

  ci′′∗ = γci′∗,

or if

  c∗j′′ = γc∗j′,

where i′′ ≠ i′ and j′′ ≠ j′, then

  det C = 0.   (14.5)

The determinant is zero if one row or column of the matrix is a multiple of another.

• The determinant of the adjoint is just the determinant's conjugate, and the determinant of the transpose is just the determinant itself:

  det C∗ = (det C)∗;
  det CT = det C.   (14.6)

2 [15, Ch. 1]
3 And further Ch.
4 [15, § 1.2]
These basic properties are all fairly easy to see if the definition of the determinant is clearly understood. Equations (14.2), (14.3) and (14.4) come because each of the n! terms in the determinant's expansion has exactly one element from row i′ or column j′. Equation (14.1) comes because a row or column interchange reverses parity. Equation (14.5) comes because the following procedure does not alter the matrix: (i) scale row i′′ or column j′′ by 1/γ; (ii) scale row i′ or column j′ by γ; (iii) interchange rows i′ ↔ i′′ or columns j′ ↔ j′′. Not altering the matrix, the procedure does not alter the determinant either; but, according to (14.1), step (iii) negates the determinant, whereas according to (14.2), step (ii)'s effect on the determinant cancels that of step (i). Hence the net effect of the procedure is to negate the determinant—to negate the very determinant the procedure is not permitted to alter. The apparent contradiction can be reconciled only if the determinant is zero to begin with. Or, more formally: in this case, every term in the determinant's expansion finds an equal term of opposite parity to offset it. Finally, equation (14.6) comes because according to § 11.7.1 the permutors P and P∗ always have the same parity, and because the adjoint operation individually conjugates each element of C.

From the foregoing properties the following further property can be deduced.

• If

  ci∗ = { ai∗ + αai′∗ when i = i′′;  ai∗ otherwise },

or if

  c∗j = { a∗j + αa∗j′ when j = j′′;  a∗j otherwise },

where i′′ ≠ i′ and j′′ ≠ j′, then

  det C = det A.   (14.7)

Adding to a row or column of a matrix a multiple of another row or column does not change the matrix's determinant.

To derive (14.7) for rows (the column proof is similar), one defines a matrix B such that

  bi∗ ≡ { αai′∗ when i = i′′;  ai∗ otherwise }.

From this definition, bi′′∗ = αai′∗ whereas bi′∗ = ai′∗, so bi′′∗ = αbi′∗, which by (14.5) guarantees that det B = 0. On the other hand, the three matrices A, B and C differ only in the (i′′)th row, where [C]i′′∗ = [A]i′′∗ + [B]i′′∗; so, according to (14.3), det C = det A + det B. Equation (14.7) results from combining the last two equations.

14.1.2 The determinant and the elementary operator

Section 14.1.1 has it that interchanging, scaling or adding rows or columns of a matrix respectively negates, scales or does not alter the matrix's determinant. But the three operations named are precisely the operations of the three elementaries of § 11.4. Therefore,

  det T[i↔j]A = − det A = det A T[i↔j],
  det Tα[i]A = α det A = det A Tα[j],
  det Tα[ij]A = det A = det A Tα[ij],
  1 ≤ (i, j) ≤ n,  i ≠ j,   (14.8)

for any n × n square matrix A. Obviously also,

  det IA = det A = det AI,
  det InA = det A = det AIn,
  det I = 1 = det In.   (14.9)
If A is taken to represent an arbitrary product of identity matrices (In and/or I) and elementary operators, then a significant consequence of (14.8) and (14.9) is that the determinant of a product is the product of the determinants, at least where identity matrices and elementary operators are concerned. In symbols,5

  det Πk Mk = Πk det Mk,
  Mk ∈ {In, I, T[i↔j], Tα[i], Tα[ij]},
  1 ≤ (i, j) ≤ n.   (14.10)

This matters because, as the Gauss-Jordan decomposition of § 12.3 has shown, one can build up any square matrix of full rank by applying elementary operators to In. Section 14.1.4 will put the rule (14.10) to good use.

5 The notation Mk ∈ {. . .} restricts Mk to be any of the things between the braces. Notation like "∈" can be too fancy for applied mathematics, but it does help here.

14.1.3 The determinant of a singular matrix

Equation (14.8) gives elementary operators the power to alter a matrix's determinant almost arbitrarily—almost arbitrarily, but not quite. What an n × n elementary operator6 cannot do is to change an n × n matrix's determinant to or from zero. Once zero, a determinant remains zero under the action of elementary operators. Once nonzero, always nonzero. Elementary operators being reversible have no power to breach this barrier.

Another thing n × n elementaries cannot do, according to § 12.5.3, is to change an n × n matrix's rank. Nevertheless, such elementaries can reduce any n × n matrix reversibly to Ir, where r ≤ n is the matrix's rank, by the Gauss-Jordan algorithm of § 12.3. Equation (14.4) has that the n × n determinant of Ir is zero if r < n, so it follows that the n × n determinant of every rank-r matrix is similarly zero if r < n, and complementarily that the n × n determinant of a rank-n matrix is never zero. Singular matrices always have zero determinants; full-rank square matrices never do. One can evidently tell the singularity or invertibility of a square matrix from its determinant alone.

6 That is, an elementary operator which honors an n × n active region. See § 11.3.2.
14.1.4 The determinant of a matrix product

Sections 14.1.2 and 14.1.3 suggest the useful rule that

  det AB = det A det B.   (14.11)

To prove the rule, we consider three distinct cases.

The first case is that A is singular. In this case, B acts as a column operator on A, whereas according to § 12.5.2 no operator has the power to promote A in rank. Hence the product AB is no higher in rank than A, which says that AB is no less singular than A, which implies that AB like A has a null determinant. Evidently (14.11) holds in the first case.

The second case is that B is singular. The proof here resembles that of the first case.

The third case is that neither matrix is singular. Here, we use Gauss-Jordan to decompose both matrices into sequences of elementary operators and rank-n identity matrices, for which

  det AB = det {[A][B]}
         = det (ΠT In ΠT · ΠT In ΠT)
         = Π det T det In Π det T · Π det T det In Π det T
         = det (ΠT In ΠT) det (ΠT In ΠT)
         = det A det B,

which is a schematic way of pointing out in light of (14.10) merely that, since A and B are products of identity matrices and elementaries, the determinant of the product is the product of the determinants. So it is that (14.11) holds in all three cases, as was to be demonstrated. The determinant of a matrix product is the product of the matrix determinants.

14.1.5 Determinants of inverse and unitary matrices

From (14.11) it follows that

  det A−1 = 1/det A,   (14.12)

because A−1A = In and det In = 1.
14) by det A.16) still holds in theory for yet larger matrices. too—especially with the help of a computer to do the arithmetic. so this book does not treat Cramer's rule as such. One does
This is a bit subtle.7 In the case that i = j. In practice. but rather of A with the jth row replaced by a copy of the ith. through about 4 × 4 dimensionality (the A−1 equation 2 at the head of the section is just eqn.16 for n = 2).15) on them. even the Gauss-Jordan grows impractical. for concrete matrices much bigger than 4 × 4 to 6 × 6 or so its several determinants begin to grow too great and too many for practical calculation. (14. CT .16) inverts a matrix by determinant. Cramer's rule is really nothing more than (14. Iterative techniques ([chapter not yet written]) serve to invert such matrices approximately.16) to (13.16) A−1 = det A Equation (14. where aiℓ provides the ith row and Riℓ provides the other rows. being a determinant. of which the reader may have heard.
7
. plus this chapter up to the present point.9
14. Similar equations can be written for [C T A]ij in both cases.
wherein ℓ aiℓ det Rjℓ is the determinant. THE EIGENVALUE
wherein the equation ℓ aiℓ det Riℓ = det A states that det A. n × n matrix concisely whose entries remain unspecified. 12 and 13. due to compound floating-point rounding error and the maybe large but nonetheless limited quantity of available computer memory. not of A itself. it inverts small matrices nicely. results from applying (14. hence also (14. each term including one factor from each row of A. Though (14. [AC T ]ij =
ℓ
aiℓ cjℓ =
ℓ
aiℓ det Rjℓ = 0 = (det A)(0) = (det A)δij . Dividing (14.5) evaluates to zero. consists of several terms.366
CHAPTER 14.6]. 8 Cramer's rule [15.2
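As a small numeric check of (14.16), the sketch below builds the cofactor matrix C explicitly and forms C^T / det A. The use of numpy and the particular 3 × 3 example matrix are my own assumptions; the book itself presents no code.

```python
import numpy as np

def cofactor_inverse(A):
    """Invert A per (14.16): A^-1 = C^T / det A, C the matrix of cofactors."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # R_ij: A with its ith row and jth column deleted
            R = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(R)
    return C.T / np.linalg.det(A)

A = np.array([[2.0, 0.0, 1.0],
              [3.0, -1.0, 4.0],
              [0.0, 2.0, 1.0]])
assert np.allclose(cofactor_inverse(A) @ A, np.eye(3))
```

Note how each entry of C costs a determinant of its own: this is exactly why the formula grows impractical much beyond 6 × 6.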
14.2 Coincident properties

Chapters 11, 12 and 13, plus this chapter up to the present point, have discovered several coincident properties of the invertible n × n square matrix. One does not feel the full impact of the coincidence when these properties are left scattered across the long chapters, so let us gather and summarize the properties here. A square, n × n matrix evidently has either all of the following properties or none of them, never some but not others.

• The matrix is invertible.
• Its columns are linearly independent.
• Its rows are linearly independent.
• Its columns address the same space the columns of In address, and its rows address the same space the rows of In address.
• The Gauss-Jordan algorithm reduces it to In. (In this, the choice of pivots does not matter.)
• Decomposing it, the Gram-Schmidt algorithm achieves a fully square, unitary, n × n factor Q.
• It has full rank r = n.
• The linear system Ax = b it represents has a unique n-element solution x, given any specific n-element driving vector b.
• The determinant det A ≠ 0.
• None of its eigenvalues is zero (§ 14.3, below).

The square matrix which has one of these properties, has all of them; the square matrix which lacks one, lacks all. Assuming exact arithmetic, a square matrix is either invertible, with all that that implies, or singular—never both. The distinction between invertible and singular matrices is theoretically as absolute as (and indeed is analogous to) the distinction between nonzero and zero scalars.

Whether the distinction is always useful is another matter. Usually the distinction is indeed useful, but a matrix can be almost singular just as a scalar can be almost zero. Such a matrix is known, among other ways, by its unexpectedly small determinant. Now it is true: in exact arithmetic, a nonzero determinant, no matter how small, implies a theoretically invertible matrix. Practical matrices however often have entries whose values are imprecisely known; and even when they don't, the computers which invert them tend to do arithmetic imprecisely in floating-point. Matrices which live on the hazy frontier between invertibility and singularity resemble the infinitesimals of § 4.1. They are called ill conditioned matrices. The applied mathematician handles such a matrix with due skepticism. Section 14.8 develops the topic.
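The coincidence of the listed properties can be observed numerically. In the sketch below (numpy assumed; the example matrices are mine, not the book's), an invertible matrix passes every test and a singular one fails every test—never a mixture:

```python
import numpy as np

A = np.array([[2.0, 0.0], [3.0, -1.0]])   # invertible
S = np.array([[1.0, 2.0], [2.0, 4.0]])    # singular: second row twice the first

for M, invertible in ((A, True), (S, False)):
    assert (abs(np.linalg.det(M)) > 1e-12) == invertible            # det != 0
    assert (np.linalg.matrix_rank(M) == M.shape[0]) == invertible   # full rank
    # no zero eigenvalue
    assert bool(np.all(np.abs(np.linalg.eigvals(M)) > 1e-12)) == invertible
```

The tolerances (1e-12) stand in for the exact-arithmetic dichotomy the text describes; in floating point one can only ask whether a quantity is *nearly* zero.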
14.3 The eigenvalue itself

We stand ready at last to approach the final major agent of matrix arithmetic, the eigenvalue. Suppose a square, n × n matrix A, a nonzero n-element vector

    v = In v ≠ 0,    (14.17)

and a scalar λ, together such that

    Av = λv,    (14.18)

or in other words such that Av = λIn v. If so, then

    [A − λIn]v = 0.    (14.19)

Since In v is nonzero, the last equation is true if and only if the matrix [A − λIn] is singular—which in light of § 14.1.3 is to demand that

    det(A − λIn) = 0.    (14.20)

The left side of (14.20) is an nth-order polynomial in λ, the characteristic polynomial, whose n roots are the eigenvalues of the matrix A. (An example:

    A = [ 2   0 ]
        [ 3  −1 ],

    det(A − λIn) = det [ 2−λ    0  ]
                       [  3   −1−λ ]
                 = (2 − λ)(−1 − λ) − (0)(3)
                 = λ² − λ − 2 = 0,

whose roots are λ = −1 or 2.)

What is an eigenvalue, really? An eigenvalue is a scalar a matrix resembles under certain conditions. When a matrix happens to operate on the right eigenvector v, it is all the same whether one applies the entire matrix or just the eigenvalue to the vector: the matrix scales the eigenvector by the eigenvalue without otherwise altering the vector, changing the vector's magnitude but not its direction. The eigenvalue alone takes the place of the whole, hulking matrix. This is what (14.18) means. Of course it works only when v happens to be the right eigenvector, which § 14.4 discusses.

When λ = 0, (14.20) makes det A = 0, which as we have said is the sign of a singular matrix. Zero eigenvalues and singular matrices always travel together: singular matrices each have at least one zero eigenvalue; nonsingular matrices never do. The eigenvalues of a matrix's inverse are the inverses of the matrix's eigenvalues. That is,

    λ′_j λ_j = 1 for all 1 ≤ j ≤ n if A′A = In = AA′.    (14.21)

The reason behind (14.21) comes from answering the question: if Av_j scales v_j by the factor λ_j, then what does A′Av_j = Iv_j do to v_j?

Naturally one must solve (14.20)'s nth-order polynomial to locate the actual eigenvalues. One solves it by the same techniques by which one solves any polynomial: the quadratic formula, the cubic and quartic methods of Ch. 10, the Newton-Raphson iteration (4.30). On the other hand, the determinant (14.20) can be impractical to expand for a large matrix; here iterative techniques help: see [chapter not yet written]. (The inexpensive [15] also covers the topic competently and readably.)
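The worked example above can be replayed numerically. This sketch (numpy assumed; not from the book) recovers the characteristic polynomial λ² − λ − 2 of the section's example matrix and confirms its roots −1 and 2:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [3.0, -1.0]])            # the example matrix of this section
coeffs = np.poly(A)                    # coefficients of det(A - lambda*In)
assert np.allclose(coeffs, [1.0, -1.0, -2.0])    # lambda^2 - lambda - 2
assert np.allclose(np.sort(np.roots(coeffs)), [-1.0, 2.0])
# the polynomial roots agree with the eigenvalues computed directly
assert np.allclose(np.sort(np.linalg.eigvals(A)), [-1.0, 2.0])
```

For large matrices one would not expand the determinant at all, as the text warns; library eigenvalue routines use iterative techniques instead.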
14.4 The eigenvector

It is an odd fact that (14.19) and (14.20) reveal the eigenvalues λ of a square matrix A while obscuring the associated eigenvectors v. Once one has calculated an eigenvalue, though, one can feed it back to calculate the associated eigenvector. According to (14.19), the eigenvectors are the n-element vectors for which

    [A − λIn]v = 0,

which is to say that the eigenvectors are the vectors of the kernel space of the degenerate matrix [A − λIn]—which one can calculate, among other ways, by the Gauss-Jordan kernel formula or the Gram-Schmidt kernel formula of Ch. 13.

An eigenvalue and its associated eigenvector, taken together, are sometimes called an eigensolution.
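Feeding an eigenvalue back to find its eigenvector means computing the kernel of the degenerate matrix A − λIn. The sketch below does so via the SVD's null-space vector rather than by the book's Gauss-Jordan kernel formula (numpy and the substitution of method are my own assumptions):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [3.0, -1.0]])
lam = 2.0                              # an eigenvalue found in section 14.3

# the eigenvector spans the kernel of the degenerate matrix A - lambda*In;
# the right singular vector of the zero singular value computes that kernel
_, s, Vt = np.linalg.svd(A - lam * np.eye(2))
v = Vt[-1]
assert s[-1] < 1e-12                   # A - lambda*In is indeed singular
assert np.allclose(A @ v, lam * v)     # (lambda, v) is an eigensolution
```

Any nonzero scalar multiple of v is an equally good eigenvector, which is why the routine returns it only up to sign and normalization.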
14.5 Eigensolution facts

Many useful or interesting mathematical facts concern the eigensolution, among them the following.

• Eigenvectors corresponding to distinct eigenvalues are always linearly independent of one another. To prove this fact, consider several independent eigenvectors v1, v2, . . . , vk−1, respectively with distinct eigenvalues λ1, λ2, . . . , λk−1; and further consider another eigenvector vk which might or might not be independent but which too has a distinct eigenvalue λk. Were vk dependent—which is to say, did nontrivial coefficients cj exist such that

      vk = Σ_{j=1}^{k−1} cj vj,

  then left-multiplying the equation by A − λkIn would yield

      0 = Σ_{j=1}^{k−1} (λj − λk) cj vj,

  impossible since the k − 1 eigenvectors are independent and the several factors λj − λk are nonzero. Thus vk too is independent, whereupon by induction from a start case of k = 1 we conclude that there exists no dependent eigenvector with a distinct eigenvalue.

• A matrix and its inverse share the same eigenvectors with inverted eigenvalues. Refer to (14.21) and its explanation in § 14.3.

• If the eigensolutions of A are (λj, vj), then the eigensolutions of A + αIn are (λj + α, vj): the eigenvalues move over by α while the eigenvectors remain fixed. This is seen by adding αvj to both sides of the definition Avj = λj vj.

• If an n × n square matrix A has n independent eigenvectors (which is always so if the matrix has n distinct eigenvalues, and often so even otherwise), then any n-element vector can be expressed as a unique linear combination of the eigenvectors. This is a simple consequence of the fact that the n × n matrix V whose columns are the several eigenvectors vj has full rank r = n. Unfortunately some matrices with repeated eigenvalues also have repeated eigenvectors—as, for example, curiously [22], [1 0; 1 1]^T, whose double eigenvalue λ = 1 has the single eigenvector [1 0]^T. Section 14.10.2 speaks of matrices of the last kind.

• An n × n square matrix whose eigenvectors are linearly independent of one another cannot share all eigensolutions with any other n × n square matrix. This fact proceeds from the last point, that every n-element vector x is a unique linear combination of independent eigenvectors: neither of two proposed matrices A1 and A2 could scale any of the eigenvector components of x differently than the other matrix did, so A1x − A2x = (A1 − A2)x = 0 for all x, which in turn is possible only if A1 = A2.

• A positive definite matrix has only real, positive eigenvalues; a nonnegative definite matrix, only real, nonnegative eigenvalues. Were it not so, then v∗Av = λv∗v (in which v∗v naturally is a positive real scalar) would violate the criterion for positive or nonnegative definiteness. See § 13.6.

• Every n × n square matrix has at least one eigensolution if n > 0, because according to the fundamental theorem of algebra the matrix's characteristic polynomial (14.20) has at least one root, an eigenvalue—which by definition would be no eigenvalue if it had no eigenvector to scale—and for which (14.19) necessarily admits at least one nonzero solution v, because its matrix A − λIn is degenerate.
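Two of the listed facts are easy to confirm numerically: the eigenvalue shift under A + αIn and the inverted eigenvalues of A^-1 per (14.21). The sketch below assumes numpy and an example matrix of my own choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
alpha = 0.7
lam, V = np.linalg.eig(A)

# A + alpha*In has eigensolutions (lambda_j + alpha, v_j):
# the eigenvalues move over by alpha, the eigenvectors stay fixed
lam2, _ = np.linalg.eig(A + alpha * np.eye(2))
assert np.allclose(np.sort(lam2), np.sort(lam + alpha))

# A^-1 shares A's eigenvectors with inverted eigenvalues, per (14.21)
Ainv = np.linalg.inv(A)
for j in range(2):
    assert np.allclose(Ainv @ V[:, j], V[:, j] / lam[j])
```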
14.6 Diagonalization

Any n × n matrix with n independent eigenvectors (which class, per § 14.5, includes, but is not limited to, every n × n matrix with n distinct eigenvalues) can be diagonalized as

    A = V ΛV^-1,    (14.22)

where

    Λ = [ λ1   0  ···   0    0   ]
        [  0  λ2  ···   0    0   ]
        [  :   :   ·    :    :   ]
        [  0   0  ···  λn−1  0   ]
        [  0   0  ···   0   λn   ]

is an otherwise empty n × n matrix with the eigenvalues of A set along its main diagonal, and

    V = [ v1  v2  ···  vn−1  vn ]

is an n × n matrix whose columns are the eigenvectors of A. This is so because the identity Avj = vj λj holds for all 1 ≤ j ≤ n; or, expressed more concisely, because the identity AV = V Λ holds. (If this seems confusing, then consider that the jth column of the product AV is Avj, whereas the jth column of Λ, having just the one element, acts to scale V's jth column only.) The matrix V is invertible because its columns, the eigenvectors, are independent, from which (14.22) follows. (One might object that we had shown only how to compose some matrix V ΛV^-1 with the correct eigenvalues and independent eigenvectors, but had failed to show that the matrix was actually A. However, we need not show this, because § 14.5 has already demonstrated that two matrices with the same eigenvalues and independent eigenvectors are in fact the same matrix, whereby the product V ΛV^-1 can be nothing other than A.)

An n × n matrix with n independent eigenvectors (which class, again, includes every n × n matrix with n distinct eigenvalues and also includes many matrices with fewer) is called a diagonalizable matrix. Equation (14.22) is called the eigenvalue decomposition, the diagonal decomposition or the diagonalization of the square matrix A. The diagonal matrix diag{x} of (11.55) is trivially diagonalizable as diag{x} = In diag{x} In. Besides factoring a diagonalizable matrix by (14.22), one can apply the same formula to compose a diagonalizable matrix with desired eigensolutions.

It is a curious and useful fact that

    A² = (V ΛV^-1)(V ΛV^-1) = V Λ²V^-1

and, by extension, that

    A^k = V Λ^k V^-1    (14.23)

for any diagonalizable matrix A. The diagonal matrix Λ^k is nothing more than the diagonal matrix Λ with each element individually raised to the kth power, such that [Λ^k]_ij = δij λj^k. Changing z ← k implies the generalization

    A^z = V Λ^z V^-1,  [Λ^z]_ij = δij λj^z,    (14.24)

good for any diagonalizable A and complex z. (It may not be clear, however, according to (5.12), which branch of λj^z one should choose at each index j, especially if A has negative or complex eigenvalues.)
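Equations (14.22) and (14.23) can be verified directly on a small diagonalizable matrix. The sketch below assumes numpy; the example matrix is mine, chosen with distinct eigenvalues so that diagonalizability is guaranteed:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, -1.0]])            # distinct eigenvalues -> diagonalizable
lam, V = np.linalg.eig(A)
Lam = np.diag(lam)

# A = V Lam V^-1, eqn (14.22)
assert np.allclose(V @ Lam @ np.linalg.inv(V), A)

# A^k = V Lam^k V^-1, eqn (14.23): raise only the diagonal to the kth power
k = 5
assert np.allclose(V @ np.diag(lam ** k) @ np.linalg.inv(V),
                   np.linalg.matrix_power(A, k))
```

Raising Λ element-by-element is far cheaper than multiplying A by itself k times, which is the practical appeal of (14.23).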
14.7 Remarks on the eigenvalue

Eigenvalues and their associated eigenvectors stand among the principal reasons one goes to the considerable trouble to develop matrix theory as we have done in recent chapters. The idea that a matrix resembles a humble scalar in the right circumstance is powerful. Among the reasons for this is that a matrix can represent an iterative process, operating repeatedly on a vector v to change it first to Av, then to A²v, A³v and so on. The dominant eigenvalue of A, largest in magnitude, tends then to transform v into the associated eigenvector, gradually but relatively eliminating all other components of v. Should the dominant eigenvalue have greater than unit magnitude, it destabilizes the iteration; thus one can sometimes judge the stability of a physical process indirectly by examining the eigenvalues of the matrix which describes it. All this is fairly deep mathematics. It brings an appreciation of the matrix for reasons which were anything but apparent from the outset of Ch. 11.

Then there is the edge case of the nondiagonalizable matrix, which matrix surprisingly covers only part of its domain with eigenvectors. What a nondiagonalizable matrix is in essence is a matrix with a repeated eigensolution: the same eigenvalue with the same eigenvector, twice or more. More formally, a nondiagonalizable matrix is a matrix with an n-fold eigenvalue whose corresponding eigenvector space fewer than n eigenvectors fully characterize. The nondiagonalizable matrix vaguely resembles the singular matrix in that both represent edge cases and can be hard to handle numerically, but the resemblance ends there, and a matrix can be either without being the other: the n × n null matrix, for example, is singular but still diagonalizable. Nondiagonalizable matrices are troublesome and interesting. Section 14.10.2 will have more to say about the nondiagonalizable matrix.

Remarks continue in §§ 14.10.2 and 14.13.
14.8 Matrix condition

The largest in magnitude of the several eigenvalues of a diagonalizable operator A, denoted here λmax, tends to dominate the iteration A^k x; § 14.7 has named λmax the dominant eigenvalue for this reason. One sometimes finds it convenient to normalize a dominant eigenvalue by defining a new operator A′ ≡ A/|λmax|, whose own dominant eigenvalue λmax/|λmax| has unit magnitude. In terms of the new operator the iteration becomes A^k x = |λmax|^k A′^k x, leaving one free to carry the magnifying effect |λmax|^k separately if one prefers to do so. The scale factor 1/|λmax| scales all eigenvalues equally, so, if A's eigenvalue of smallest magnitude is denoted λmin, then the corresponding eigenvalue of A′ is λmin/|λmax|. If the latter is zero, the operator is singular; if nearly zero, the operator is nearly singular.

Such considerations lead us to define the condition of a diagonalizable matrix quantitatively as

    κ ≡ |λmax/λmin|,    (14.25)

which is always a real number of no less than unit magnitude, by which

    κ ≥ 1.    (14.26)

For best invertibility, κ = 1 would be ideal (it would mean that all eigenvalues had the same magnitude), though in practice quite a broad range of κ is usually acceptable. An ill conditioned matrix by definition is a matrix of large κ ≫ 1. (There is of course no definite boundary, no particular edge value of κ, less than which a matrix is well conditioned, at and beyond which it turns ill conditioned. If I tried to claim that a matrix with a fine κ = 3 were ill conditioned, or that one with a wretched κ = 2^0x18 were well conditioned, then you might not credit me—but the mathematics nevertheless can only give the number; it remains to the mathematician to interpret it [9].) Could we always work in exact arithmetic, the value of κ might not interest us much as long as it stayed finite; but in computer floating point, infinite κ tends to emerge imprecisely rather as large κ ≫ 1.

Matrix condition so defined turns out to have another useful application. Suppose that a diagonalizable matrix A is precisely known but that the corresponding driving vector b is not, or that the elements of A are known only within some tolerance. If

    A(x + δx) = b + δb,

then, subtracting Ax = b, A δx = δb: an error in b drives an error in x, and the worse conditioned A is—the greater its κ—the more the solution x can suffer for a given error in b.

14.9 The similarity transformation

We have already met the similarity transformation in §§ 11.5 and 12.2. Now we have the theory to appreciate it properly. Basically, the idea is that one can build the same vector from alternate building blocks—not only from the standard building blocks e1, e2, e3, etc.—except that the right word for the relevant "building block" is basis vector. The basis provides the units from which other vectors can be built. If x = Bu, then u represents x in the basis B: u might be (5, 1) in the basis B, five times the first basis vector plus once the second. Particularly interesting is the n × n, invertible complete basis B, in which the n basis vectors are independent and address the same full space the columns of In address. (The reader may need to ponder the basis concept a while to grasp it, but the concept is simple once grasped and little purpose would be served by dwelling on it here. The books [23] and [31] introduce the basis more gently; one might consult one of those if needed.)

Left-multiplication by B evidently converts out of the basis; left-multiplication by B^-1, into it. One can therefore convert any operator A to work within a complete basis B by the successive steps

    Ax = b,
    ABu = b,
    [B^-1 AB]u = B^-1 b,

by which the operator B^-1 AB is seen to be the operator A, only transformed to work within the basis B. (The professional matrix literature sometimes distinguishes by typeface between the matrix B and the basis B its columns represent. Such semantical distinctions seem a little too fine for applied use, though; this book just uses B.)

The conversion from A into B^-1 AB is called a similarity transformation. If B happens to be unitary (§ 13.11), then the conversion is also called a unitary transformation. The matrix B^-1 AB the transformation produces is said to be similar (or, if B is unitary, unitarily similar) to the matrix A.

Probably the most important property of the similarity transformation is that it alters no eigenvalues. That is, if Ax = λx, then, by successive steps,

    B^-1 A(BB^-1)x = λB^-1 x,
    [B^-1 AB]u = λu,  u = B^-1 x.    (14.28)
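Three claims of §§ 14.7–14.9 lend themselves to a quick numerical check: the dominant eigenvalue steering the iteration A^k x, the condition κ of (14.25), and a similarity transformation leaving eigenvalues untouched. The sketch assumes numpy; the example matrices are mine:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Repeated application A^k x turns x toward the dominant eigenvector (14.7);
# renormalizing each step carries the |lambda_max|^k magnification separately.
x = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x
    x /= np.linalg.norm(x)
lam_dom = x @ A @ x                      # Rayleigh quotient -> dominant eigenvalue
assert np.allclose(A @ x, lam_dom * x, atol=1e-6)

def eig_condition(M):
    """kappa = |lambda_max / lambda_min| per (14.25); kappa >= 1 per (14.26)."""
    mags = np.abs(np.linalg.eigvals(M))
    return mags.max() / mags.min()

assert eig_condition(A) >= 1.0
assert eig_condition(np.array([[1.0, 0.0], [0.0, 1e-9]])) > 1e8  # ill conditioned

# A similarity transformation B^-1 A B alters no eigenvalues (14.9).
B = np.array([[1.0, 1.0], [0.0, 1.0]])   # an invertible basis
sim = np.linalg.inv(B) @ A @ B
assert np.allclose(np.sort(np.linalg.eigvals(sim)), np.sort(np.linalg.eigvals(A)))
```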
Moreover, letting B|i=0 = A, the form (14.30) of B at i = i′ is nothing other than the form (14.31) of C at i = i′ − 1, so it follows by induction that

    B|i=n = US,    (14.39)

where per (14.30) the matrix US has the general upper triangular form the Schur decomposition (14.29) requires. Gathering the several unitary factors into the single matrix

    Q = Π_{i′=0}^{n−1} (W|i=i′),    (14.38)

itself unitary because the product of unitary matrices according to (13.62) is itself a unitary matrix, we have that

    A = QUS Q∗,    (14.40)

which along with (14.39) accomplishes the Schur decomposition.
14.10.2 The nondiagonalizable matrix

The characteristic equation (14.20) of the general upper triangular matrix US is det(US − λIn) = 0. Unlike most determinants, this determinant brings only the one term

    det(US − λIn) = Π_{i=1}^{n} (uSii − λ) = 0,    (14.41)

whose factors run straight down the main diagonal, where the determinant's n! − 1 other terms are all zero because each of them includes at least one zero factor from below the main diagonal. (The determinant's definition in § 14.1 makes the following two propositions equivalent: (i) that a determinant's term which includes one or more factors above the main diagonal also includes one or more factors below; (ii) that the only permutor that marks no position below the main diagonal is the one which also marks no position above. In either form, the proposition's truth might seem less than obvious until viewed from the proper angle. Consider a permutor P. If P marked no position below the main diagonal, then it would necessarily have pnn = 1, else the permutor's bottom row would be empty, which is not allowed. In the next-to-bottom row, p(n−1)(n−1) = 1, because the nth column is already occupied. In the next row up, p(n−2)(n−2) = 1, and so on—thus affirming the proposition.) Hence no element above the main diagonal of US even influences the eigenvalues, which apparently are λi = uSii.

The Schur decomposition (14.29) is in fact a similarity transformation; and, as we have seen, similarity transformations preserve eigenvalues. If therefore A = QUS Q∗, then the eigenvalues of A are just the values along the main diagonal of US. (An unusually careful reader might worry that A and US had the same eigenvalues with different multiplicities. It would be surprising if it actually were so; but, still, one would like to give a sounder reason than the participle "surprising." Consider however that

    A − λIn = QUS Q∗ − λIn
            = Q[US − Q∗(λIn)Q]Q∗
            = Q[US − λ(Q∗In Q)]Q∗
            = Q[US − λIn]Q∗.

According to (14.11), this equation's determinant is

    det[A − λIn] = det{Q[US − λIn]Q∗}
                 = det Q det[US − λIn] det Q∗
                 = det[US − λIn],

which says that A and US have not only the same eigenvalues but also the same characteristic polynomials, thus further the same eigenvalue multiplicities.)

One might think that the Schur decomposition offered an easy way to calculate eigenvalues, but it is less easy than it first appears, because one must calculate eigenvalues to reach the Schur decomposition in the first place. Whatever practical merit the Schur decomposition might have or lack, it brings at least the theoretical benefit of (14.41): every square matrix without exception has a Schur decomposition, whose triangular factor US openly lists all eigenvalues along its main diagonal.

This theoretical benefit pays when some of the n eigenvalues of an n × n square matrix A repeat. By the Schur decomposition, one can construct a second square matrix A′, as near as desired to A but having n distinct eigenvalues, simply by perturbing the main diagonal of US to

    U′S ≡ US + ǫ diag{u},  u_i′ ≠ u_i if λ_i′ = λ_i,    (14.42)

where |ǫ| ≪ 1 and where u is an arbitrary vector that meets the criterion given ((11.55) defines the diag{·} notation). Though infinitesimally near A, the modified matrix A′ = QU′S Q∗ unlike A has n (maybe infinitesimally) distinct eigenvalues. With sufficient toil, one can analyze such perturbed eigenvalues and their associated eigenvectors similarly as § 9.6.2 has analyzed perturbed poles.

Equation (14.42) brings us to the nondiagonalizable matrix of the subsection's title. Section 14.6 and its diagonalization formula (14.22) diagonalize any matrix with distinct eigenvalues, and even any matrix with repeated eigenvalues but distinct eigenvectors, but fail where eigenvectors repeat. Equation (14.42) separates eigenvalues, thus also eigenvectors—for according to § 14.5 eigenvectors of distinct eigenvalues never depend on one another—permitting a nonunique but still sometimes usable form of diagonalization in the limit ǫ → 0 even when the matrix in question is strictly nondiagonalizable.

The finding that every matrix is arbitrarily nearly diagonalizable—and, better, is arbitrarily nearly diagonalizable with distinct eigenvalues—illuminates a question the chapter has evaded up to the present point. The question: does a p-fold root in the characteristic polynomial (14.20) necessarily imply a p-fold eigenvalue in the corresponding matrix? The existence of the nondiagonalizable matrix casts a shadow of doubt until one recalls the finding just stated. If you claim that a matrix has a triple eigenvalue and someone disputes the claim, then you can show him a nearly identical matrix with three infinitesimally distinct eigenvalues. That is the essence of the idea. We shall leave the answer in that form.

Generalizing the nondiagonalizability concept leads one eventually to the ideas of the generalized eigenvector [17, Ch. 7] (which solves the higher-order linear system [A − λI]^k v = 0) and the Jordan canonical form [15, Ch. 5], which together roughly track the sophisticated conventional pole-separation technique of § 9.6.5. One could profitably propose and prove any number of useful theorems concerning the nondiagonalizable matrix and its generalized eigenvectors, or concerning the eigenvalue problem [53] more broadly, in more and less rigorous ways; but for the time being we shall let the matter rest there.
14.11 The Hermitian matrix

An m × m square matrix A that is its own adjoint,

    A∗ = A,    (14.43)

is called a Hermitian or self-adjoint matrix. Properties of the Hermitian matrix include that

• its eigenvalues are real,
• its eigenvectors corresponding to distinct eigenvalues lie orthogonal to one another, and
• it is unitarily diagonalizable (§§ 13.11 and 14.6) such that

      A = V ΛV∗.    (14.44)

That the eigenvalues are real is proved by letting (λ, v) represent an eigensolution of A and constructing the product v∗Av, for which

    λ∗v∗v = (Av)∗v = v∗Av = v∗(Av) = λv∗v.

That is, λ∗ = λ, which naturally is possible only if λ is real.

That eigenvectors corresponding to distinct eigenvalues lie orthogonal to one another is proved [31, § 8.1] by letting (λ1, v1) and (λ2, v2) represent eigensolutions of A and constructing the product v2∗Av1, for which

    λ2∗ v2∗v1 = (Av2)∗v1 = v2∗Av1 = v2∗(Av1) = λ1 v2∗v1.

That is, λ2∗ = λ1 or v2∗v1 = 0. But according to the last paragraph all eigenvalues are real; the eigenvalues λ1 and λ2 are no exceptions. Hence, λ2 = λ1 or v2∗v1 = 0—and since the two eigenvalues are distinct, the eigenvectors lie orthogonal.

To prove the last hypothesis of the three needs first some definitions, as follows. Given an m × m matrix A, let the s columns of the m × s matrix Vo represent the s independent eigenvectors of A such that (i) each column has unit magnitude and (ii) columns whose eigenvectors share the same eigenvalue lie orthogonal to one another. Let the s × s diagonal matrix Λo carry the eigenvalues on its main diagonal, such that AVo = VoΛo. Let the m − s columns of the m × (m − s) matrix Vo⊥ represent the complete orthogonal complement (§ 13.9) to Vo—perpendicular to all eigenvectors, each column of unit magnitude—such that Vo⊥∗Vo = 0 and Vo⊥∗Vo⊥ = Im−s. Recall from § 14.5 that s ≠ 0 but rather 0 < s ≤ m, because every square matrix has at least one eigensolution; recall from § 14.6 that s = m if and only if A is diagonalizable.

(A concrete example: the invertible but nondiagonalizable matrix

    A = [ −1   0    0     0  ]
        [ −6   5   5/2  −5/2 ]
        [  0   0    5     0  ]
        [  0   0    0     5  ]

has a single eigenvalue at λ = −1 and a triple eigenvalue at λ = 5, the latter of whose eigenvector space is fully characterized by two eigenvectors rather than three, so that m = 4 and s = 3; here Λo = diag{−1, 5, 5}, and the single column of Vo⊥ supplies the missing vector—not an eigenvector, but perpendicular to them all. All vectors in the example are reported with unit magnitude, and the two λ = 5 eigenvectors are reported in mutually orthogonal form; but notice that eigenvectors corresponding to distinct eigenvalues need not be orthogonal when A is not Hermitian.)

With these definitions in hand, we can now prove by contradiction that all Hermitian matrices are diagonalizable. The assumption: that a nondiagonalizable Hermitian A existed—implying that s < m and thus that Vo⊥ had at least one column. For such a matrix A, s × (m − s) and (m − s) × (m − s) auxiliary matrices F and G necessarily would exist such that

    AVo⊥ = VoF + Vo⊥G,

not due to any unusual property of the product AVo⊥, but for the mundane reason that the columns of Vo and Vo⊥ together by definition addressed the space of all m-element vectors, including the columns of AVo⊥. Left-multiplying by Vo∗, we would have by successive steps that

    Vo∗AVo⊥ = Vo∗VoF + Vo∗Vo⊥G,
    (AVo)∗Vo⊥ = IsF + Vo∗Vo⊥G,
    (VoΛo)∗Vo⊥ = F + Vo∗Vo⊥G,
    Λo∗Vo∗Vo⊥ = F + Vo∗Vo⊥G,
    Λo∗(0) = F + (0)G,
    0 = F,

where we had relied on the assumption that A were Hermitian and thus that, as proved above, its distinctly eigenvalued eigenvectors lay orthogonal to one another—in consequence of which A∗ = A and Vo∗Vo = Is. The finding that F = 0 reduces the AVo⊥ equation above to read

    AVo⊥ = Vo⊥G.

In the reduced equation the matrix G would have at least one eigensolution, not due to any unusual property of G, but because according to § 14.5 every square matrix, 1 × 1 or larger, has at least one eigensolution. Let (µ, w) represent such an eigensolution of G. Right-multiplying by the (m − s)-element vector w ≠ 0, we would have by successive steps that

    AVo⊥w = Vo⊥Gw,
    A(Vo⊥w) = µ(Vo⊥w).

The last equation claims that (µ, Vo⊥w) were an eigensolution of A, when we had supposed that all of A's eigenvectors lay in the space addressed by the columns of Vo—whereas the vector Vo⊥w by construction does not lie in that space. The contradiction proves false the assumption that gave rise to it. We conclude that all Hermitian matrices are diagonalizable—and conclude further that they are unitarily diagonalizable, on the ground that their eigenvectors lie orthogonal to one another—as was to be demonstrated.

Having proven that all Hermitian matrices are diagonalizable and have real eigenvalues and orthogonal eigenvectors, one wonders whether the converse holds: are all diagonalizable matrices with real eigenvalues and orthogonal eigenvectors Hermitian? To show that they are, one can construct the matrix described by the diagonalization formula (14.22),

    A = V ΛV∗,

where V^-1 = V∗ because this V is unitary (§ 13.11), and where the eigenvalues—the diagonal elements of Λ—are real.
2. 18 Oct. The equation's adjoint is A∗ = V Λ∗ V ∗ . but it works in theory at least. (14.3. thus is unitarily diagonalizable according to (14. 30 [52. which means that Λ∗ = Λ and the right sides of the two equations are the same. then the following idea is one you might discover. All diagonalizable matrices with real eigenvalues and orthogonal eigenvectors are Hermitian. is clearly Hermitian according to § 14. 2007]
29
. That the eigenvalues.45) in which A = C ∗ C and b = C ∗ d.5 does require all of its eigenvalues to be real and moreover positive— which means among other things that the eigenvalue matrix Λ has full rank.46) Here." 14:29. almost in plain sight. the n × n matrices Λ and V represent respectively the eigenvalues and eigenvectors not of A but of the product A∗ A.6.11). (14.11. overlooked. since (A∗ A)∗ = A∗ A. The properties demand a Hermitian matrix.12
14.12 The singular value decomposition

Occasionally an elegant idea awaits discovery, almost in plain sight, overlooked. If the unlikely thought occurred to you to take the square root of a matrix, then the following idea is one you might discover.

Consider the n × n product A∗ A of a tall or square, m × n matrix A of full column rank

  r = n ≤ m

and its adjoint A∗. The product A∗ A is invertible according to § 13.6; is positive definite; and, since (A∗ A)∗ = A∗ A, is clearly Hermitian according to § 14.11; thus is unitarily diagonalizable as

  A∗ A = V Λ V∗.   (14.46)

Here, the n × n matrices Λ and V represent respectively the eigenvalues and eigenvectors not of A but of the product A∗ A. Though nothing requires the product's eigenvectors to be real, the fact that the product is positive definite does require all of its eigenvalues to be real and moreover positive—which means among other things that the eigenvalue matrix Λ has full rank.

The restriction to full column rank might seem a severe and unfortunate one—except that one can left-multiply any exactly determined linear system Cx = d by C∗ to get the equivalent Hermitian system

  [C∗ C] x = [C∗ d],   (14.45)

in which A = C∗ C and b = C∗ d. (The device (14.45) worsens a matrix's condition and may be undesirable for this reason, but it works in theory at least.)

[Footnote: [52, "Singular value decomposition," 14:29, 18 Oct. 2007]]
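The device (14.45) is easy to try numerically. The sketch below is not from the book and assumes NumPy; it left-multiplies an ordinary square system Cx = d by C∗ (here, by Cᵀ, the matrices being real) and checks that the resulting Hermitian system has the same solution.

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((3, 3))   # an exactly determined system's matrix
d = rng.standard_normal(3)

# Left-multiplying Cx = d by C* yields the equivalent Hermitian
# (here real-symmetric) system [C*C] x = [C*d], per (14.45).
A = C.T @ C
b = C.T @ d

x = np.linalg.solve(A, b)

assert np.allclose(A, A.T)    # A = C*C is Hermitian
assert np.allclose(C @ x, d)  # x also solves the original system
```

As the section's footnote warns, the product C∗C squares the condition number, so in floating-point practice one prefers to solve Cx = d directly.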
That the eigenvalues, the diagonal elements of Λ, are real and positive is a useful fact; for just as a real, positive scalar has a real, positive square root, so equally has Λ a real, positive square root under these conditions. Let the symbol Σ = √Λ represent the n × n real, positive square root of the eigenvalue matrix Λ such that

  Λ = Σ∗ Σ,   (14.47)

  Σ∗ = Σ = [ +√λ1    0   ···    0
               0   +√λ2  ···    0
               ⋮      ⋮    ⋱     ⋮
               0      0   ···  +√λn ],

where the singular values of A populate Σ's diagonal. Applying (14.47) to (14.46) then yields

  A∗ A = V Σ∗ Σ V∗,
  V∗ A∗ A V = Σ∗ Σ.   (14.48)

Now consider the m × m matrix U such that

  A V Σ⁻¹ = U In,
  A V = U Σ,
  A = U Σ V∗.   (14.49)

Substituting (14.49)'s second line into (14.48)'s second line gives the equation

  Σ∗ U∗ U Σ = Σ∗ Σ;

so left- and right-multiplying respectively by Σ⁻∗ and Σ⁻¹ leaves that

  In U∗ U In = In,

which says neither more nor less than that the first n columns of U are orthonormal. Equation (14.49) does not constrain the last m − n columns of U, leaving us free to make them anything we want. Why not use Gram-Schmidt to make them orthonormal, too, thus making U a unitary matrix? If we do this, then the surprisingly simple (14.49) constitutes the singular value decomposition of A.

If A happens to have broad shape then we can decompose A∗, instead, so this case poses no special trouble. Apparently every full-rank matrix has a singular value decomposition.
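The construction just derived can be carried out directly in code. The sketch below is not part of the book; it assumes NumPy and follows the section's recipe: diagonalize A∗A per (14.46), take the positive square root per (14.47), form the first n columns of U per (14.49), and complete U to a unitary matrix (QR factorization standing in for Gram-Schmidt).

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3
A = rng.standard_normal((m, n))   # tall, full column rank (with probability 1)

# (14.46)-(14.47): diagonalize A*A and take the positive square root.
lam, V = np.linalg.eigh(A.T @ A)  # eigenvalues come out real and positive
Sigma = np.diag(np.sqrt(lam))

# (14.49): the first n columns of U come from A V Sigma^-1 ...
U_n = A @ V @ np.linalg.inv(Sigma)

# ... and are orthonormal, exactly as the derivation promises.
assert np.allclose(U_n.T @ U_n, np.eye(n))

# The remaining m - n columns are unconstrained; orthonormalize a padded
# matrix to complete U, then keep the first n columns exactly as built
# (QR may flip their signs, which (14.49) does not permit).
U, _ = np.linalg.qr(np.hstack([U_n, rng.standard_normal((m, m - n))]))
U[:, :n] = U_n

assert np.allclose(U.T @ U, np.eye(m))          # U is unitary
assert np.allclose(U[:, :n] @ Sigma @ V.T, A)   # A = U Sigma V*, per (14.49)
```

In practice one calls a library routine such as `np.linalg.svd` rather than building the decomposition this way, since forming A∗A squares the condition number; the sketch only mirrors the derivation.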
But what of the matrix of less than full rank r < n? In this case the product A∗ A is singular and has only s < n nonzero eigenvalues (it may be that s = r, but this is irrelevant to the proof at hand). However, if the s nonzero eigenvalues are arranged first in Λ, then (14.49) becomes

  A V Σ⁻¹ = U Is,
  A V = U Σ,
  A = U Σ V∗.   (14.50)

The product A∗ A is nonnegative definite in this case and Σ Σ⁻¹ = Is, but the reasoning is otherwise the same as before. Apparently every matrix of less than full rank has a singular value decomposition, too.

If A happens to be an invertible square matrix, then the singular value decomposition evidently inverts it as

  A⁻¹ = V Σ⁻¹ U∗.   (14.51)

14.13 General remarks on the matrix

Chapters 11 through 14 have derived the uncomfortably bulky but—incredibly—approximately minimal knot of theory one needs to grasp the matrix properly and to use it with moderate versatility. As far as the writer knows, no one has yet discovered a satisfactory way to untangle the knot. The choice to learn the basic theory of the matrix is almost an all-or-nothing choice; and how many scientists and engineers would rightly choose the "nothing" if the matrix did not serve so very many applications as it does? Since it does serve so very many, the "all" it must be. Applied mathematics brings nothing else quite like it.

[Footnote: Of course, one might avoid true understanding and instead work by memorized rules. That is not always a bad plan, really; but if that were your plan then it seems spectacularly unlikely that you would be reading a footnote buried beneath the further regions of the hinterland of Chapter 14 in such a book as this.]
These several matrix chapters have not covered every topic they might. The topics they omit fall roughly into two classes. One is the class of more advanced and more specialized matrix theory, about which we shall have more to say in a moment. The other is the class of basic matrix theory these chapters do not happen to use. The essential agents of matrix analysis—multiplicative associativity, rank, inversion, pseudoinversion, the kernel, the orthogonal complement, orthonormalization, the eigenvalue, diagonalization and so on—are the same in practically all books on the subject, but the way the agents are developed differs. This book has chosen a way that needs some tools like truncators other books omit, but does not need other tools like projectors other books include. [Footnote: Such as [23, § 3.3], a lengthy but well-knit tutorial this writer recommends.] Tools like the projector tend to be omitted here or deferred to later chapters, not because they are altogether useless but because they are not used here and because the present chapters are already too long. What has given these chapters their hefty bulk is not so much the immediate development of the essential agents as the preparatory development of theoretical tools used to construct the essential agents; yet most of the tools are of limited interest in themselves. It is the agents that matter.

[Footnote: Well, since we have brought it up (though only as an example of tools these chapters have avoided bringing up), briefly: a projector is a matrix that flattens an arbitrary vector b into its nearest shadow b̃ within some restricted subspace. If the columns of A represent the subspace, then x represents b̃ in the subspace basis iff Ax = b̃, which is to say that Ax ≈ b, whereupon x = A† b. That is, per (13.32),

  b̃ = Ax = A A† b = [BC][C∗(CC∗)⁻¹(B∗B)⁻¹B∗] b = B(B∗B)⁻¹B∗ b,

in which the matrix B(B∗B)⁻¹B∗ is the projector. Thence it is readily shown that the deviation b − b̃ lies orthogonal to the shadow b̃. More broadly defined, any matrix M for which M² = M is a projector. One can approach the projector in other ways, too, but there are two ways at least. The reader who understands the Moore-Penrose pseudoinverse and/or the Gram-Schmidt process reasonably well can after all pretty easily figure out how to construct a projector without explicit instructions thereto, should the need arise.]

Paradoxically and thankfully, more advanced and more specialized matrix theory though often harder tends to come in smaller, more manageable increments: the Cholesky decomposition, for instance; or the conjugate-gradient algorithm. The theory develops endlessly. From the present pause one could proceed directly to such topics. However, since the book is Derivations of Applied Mathematics rather than Derivations of Applied Matrices, and since this is the first proper pause these several matrix chapters have afforded, maybe we ought to take advantage to change the subject.
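The projector described in the footnote can be checked in a few lines. The sketch below is not from the book; it assumes NumPy and verifies the two properties the footnote names: that M² = M, and that the deviation b − b̃ lies orthogonal to the subspace (hence to the shadow b̃).

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 2))   # columns of B span the restricted subspace
b = rng.standard_normal(5)

# The projector that flattens b into its nearest shadow within
# the column space of B:
M = B @ np.linalg.inv(B.T @ B) @ B.T
shadow = M @ b

assert np.allclose(M @ M, M)                  # M^2 = M, the defining property
assert np.allclose(B.T @ (b - shadow), 0.0)   # deviation orthogonal to subspace
assert np.isclose(shadow @ (b - shadow), 0.0) # hence orthogonal to the shadow
```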
Chapter 15

Vector analysis

Leaving the matrix, this chapter turns to a curiously underappreciated agent of applied mathematics: the three-dimensional geometrical vector, first met in §§ 3.3, 3.4 and 3.9. Seen from one perspective, the three-dimensional geometrical vector is the n = 3 special case of the general, n-dimensional vector of Chs. 11 through 14; but, because its three elements represent the three dimensions of the physical world, the three-dimensional geometrical vector merits closer attention and special treatment. This chapter details it. [Footnote: [7, Ch. 2]]

A three-dimensional geometrical vector—or, in this chapter and in other appropriate contexts, simply a vector—consists per § 3.3 of an amplitude of some kind plus a direction. Section 3.9 has told that three scalars called coordinates suffice to specify a vector: these three can be rectangular (x, y, z), cylindrical (ρ, φ, z) or spherical (r, θ, φ) coordinates, as Fig. 3.4 illustrates and Table 3.4 on page 63 interrelates, among other, more exotic possibilities.

Applied convention finds the full name "three-dimensional geometrical vector" too bulky for common use, so in this chapter we shall usually, conventionally call it just a "vector." The short name "vector" serves, since the three-dimensional geometrical vector brings all the matrix vector's properties and disrupts none of its operations: it adds semantics to but does not subtract them from it. A name like "matrix vector" can disambiguate the vector of Chs. 11 through 14 if and where required, but the need to disambiguate seems seldom to arise in practice.

The vector brings an elegant notation. The notation's principal advantage lies in that it frees a model's geometry from reliance on any particular coordinate system. An example measures this advantage: when, for instance, one rotates the coordinates of a surface's aspect coefficient about the y axis, then, without the notation, one writes an expression like

  (z − z′) − [∂z′/∂x]_{x=x′,y=y′} (x − x′) − [∂z′/∂y]_{x=x′,y=y′} (y − y′)
  ─────────────────────────────────────────────────────────────────────────
  √{ [1 + (∂z′/∂x)² + (∂z′/∂y)²]_{x=x′,y=y′} [(x − x′)² + (y − y′)² + (z − z′)²] }

for the aspect coefficient relative to a local surface normal (and if the sentence's words do not make sense to you yet, don't worry; just look the symbols over), whose spaghetti-like result is as unsuggestive as it is unwieldy. The same coefficient in standard vector notation is

  n̂ · Δr̂.

Without the notation, such rotation is a lengthy, error-prone exercise; with the notation, the rotation is almost trivial. [Footnote: To prefer the expression without to the expression with the notation would be as to prefer the pseudo-Roman "DCXCIII/M + CXLVII/(M)(M)" to "ln 2," though for vectors some students seem to insist on preferring the expression without anyway.]

Three-dimensional geometrical vectors naturally are not the only interesting kind. Two-dimensional geometrical vectors arise in practical modeling about as often. They are important, too; but the two-dimensional needs little or no additional special treatment, for it is just the three-dimensional with z = 0 or θ = 2π/4.

This chapter therefore begins with the reorientation of axes in § 15.1 and vector multiplication in § 15.2, after which it develops the calculus of vector fields. Vector addition will already be familiar to the reader from Ch. 3 or (more likely) from earlier work outside this book.

15.1 Reorientation

Matrix notation expresses the rotation of axes (3.5) as

  [ x̂′ ]   [  cos ψ   sin ψ   0 ] [ x̂ ]
  [ ŷ′ ] = [ −sin ψ   cos ψ   0 ] [ ŷ ].
  [ ẑ′ ]   [    0       0     1 ] [ ẑ ]

In three dimensions however one can do more than just to rotate the x and y axes about the z: with two rotations to point the x axis in the desired direction, and a third rotation about that axis, one can reorient the three axes generally.
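As a small numerical aside (not part of the book's text; NumPy assumed), the rotation matrix above is orthogonal, so reorienting the axes changes a vector's components without changing any geometric relationship between vectors:

```python
import numpy as np

psi = 0.3
R = np.array([[ np.cos(psi), np.sin(psi), 0.0],
              [-np.sin(psi), np.cos(psi), 0.0],
              [ 0.0,         0.0,         1.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

assert np.allclose(R @ R.T, np.eye(3))       # R is orthogonal
assert np.isclose((R @ a) @ (R @ b), a @ b)  # dot product is invariant
assert np.isclose(np.linalg.norm(R @ a), np.linalg.norm(a))  # lengths too
```

This invariance is exactly what the next section relies on when it asks whether the dot product varies under a reorientation of axes.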
15.2.1 The dot product

We first met the dot product in § 13.8. It works similarly for the geometrical vectors of this chapter as for the matrix vectors of Ch. 13:

  v1 · v2 = x1 x2 + y1 y2 + z1 z2,   (15.8)

which is the product of the vectors v1 and v2 to the extent that they run in the same direction. Naturally, to be valid, the dot product must not vary under a reorientation of axes; and indeed if we write (15.8) in matrix notation,

  v1 · v2 = [ x1 y1 z1 ] [ x2
                           y2
                           z2 ],   (15.9)

and then expand each of the two factors on the right according to (15.6), we see that the dot product does not in fact vary. As in (13.43) of § 13.8, here too the relationship

  v̂1∗ · v̂2 = cos θ,
  v1∗ · v2 = v1 v2 cos θ,   (15.10)

gives the angle θ between two vectors according to Fig. 3.1's cosine if the vectors are real, by definition hereby if complex. Fig. 15.2 illustrates the dot product.

[Figure 15.2: The dot product: a · b = ab cos θ.]

That the dot product is commutative,

  v2 · v1 = v1 · v2,   (15.11)

is obvious from (15.8).
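A quick numerical check (not part of the book; NumPy assumed) confirms (15.8), (15.10) and (15.11) on a concrete pair of real vectors:

```python
import numpy as np

a = np.array([3.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

# (15.8): the componentwise definition of the dot product.
assert np.isclose(a @ b, np.sum(a * b))

# (15.10): a . b = ab cos(theta) recovers the angle between the vectors.
cos_theta = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)
assert np.isclose(theta, np.pi / 4)   # b makes a 45-degree angle with a

# (15.11): commutativity.
assert np.isclose(a @ b, b @ a)
```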
15.2.2 The cross product

The dot product of two vectors according to § 15.2.1 is a scalar. One can also multiply two vectors to obtain a vector, however, and it is often useful to do so. As the dot product is the product of two vectors to the extent that they run in the same direction, the cross product is the product of two vectors to the extent that they run in different directions. Unlike the dot product the cross product is a vector, defined in rectangular coordinates as

  v1 × v2 = | x̂  ŷ  ẑ  |
            | x1 y1 z1 |
            | x2 y2 z2 |
          ≡ x̂(y1 z2 − z1 y2) + ŷ(z1 x2 − x1 z2) + ẑ(x1 y2 − y1 x2),   (15.12)

where the |·| notation is a mnemonic (actually a pleasant old determinant notation § 14.1 could have but did not happen to use) whose semantics are as shown.

Several facets of the cross product draw attention to themselves.

• The cyclic progression

  ··· → x → y → z → x → y → z → x → y → ···   (15.13)

of (15.12) arises again and again in vector analysis. Where the progression is honored, as in ẑ x1 y2, the associated term bears a + sign; otherwise, a − sign. This is a consequence of § 11.6's parity principle and the right-hand rule.

• The cross product is not commutative. In fact,

  v2 × v1 = −v1 × v2,   (15.14)

which is a direct consequence of the previous point regarding parity, or which can be seen more prosaically in (15.12) by swapping the places of v1 and v2.

• As the dot product, the cross product too is invariant under reorientation. One could demonstrate this fact by multiplying out (15.2) and (15.6) then substituting the results into (15.12): a lengthy, unpleasant exercise. Fortunately, it is also an unnecessary exercise, for, inasmuch as a reorientation consists of three rotations in sequence, it suffices merely that rotation about one axis not alter the cross product. One proves the proposition in the latter form by setting any two of φ, θ and η to zero before multiplying out and substituting.

• The cross product is not associative. That is,

  (v1 × v2) × v3 ≠ v1 × (v2 × v3),

as is proved by a suitable counterexample like v1 = v2 = x̂, v3 = ŷ.

• The cross product runs perpendicularly to each of its two factors. That is,

  v1 · (v1 × v2) = 0 = v2 · (v1 × v2),

as is seen by substituting (15.12) into (15.8) with an appropriate change of variables and simplifying.

• Unlike the dot product, the cross product is closely tied to three-dimensional space. Two-dimensional space (a plane) can have a cross product so long as one does not mind that the product points off into the third dimension, but to speak of a cross product in four-dimensional space would require arcane definitions and would otherwise make little sense. Fortunately, the physical world is three-dimensional (or, at least, the space in which we model all but a few, exotic physical phenomena is three-dimensional), so the cross product's limitation as defined here to three dimensions will seldom if ever disturb us.

• Section 15.2.1 has related the cosine of the angle between vectors to the dot product. One can similarly relate the angle's sine to the cross product if the vectors involved are real, as

  |v̂1 × v̂2| = sin θ,
  |v1 × v2| = v1 v2 sin θ,   (15.15)

demonstrated by reorienting axes such that v̂1 = x̂′, that v̂2 has no component in the ẑ′ direction, and that v̂2 has only a nonnegative component in the ŷ′ direction; by remembering that reorientation cannot alter a cross product; and finally by applying (15.12) and comparing the result against Fig. 3.1's sine. (If the vectors involved are complex then nothing prevents the operation |v1∗ × v2| by analogy with (15.10)—in fact the operation v1∗ × v2 without the magnitude sign is used routinely to calculate electromagnetic power flow [Footnote: [21, eqn. 1-51]]—but each of the cross product's three rectangular components has its own complex phase which the magnitude operation flattens, so the result's relationship to the sine of an angle is not immediately clear.)
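The facets listed above lend themselves to a direct numerical check (not part of the book; NumPy assumed):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 1.0])
c = np.array([0.0, 1.0, -1.0])

# Anticommutativity (15.14), and perpendicularity to both factors.
assert np.allclose(np.cross(b, a), -np.cross(a, b))
assert np.isclose(a @ np.cross(a, b), 0.0)
assert np.isclose(b @ np.cross(a, b), 0.0)

# The cross product is NOT associative in general.
assert not np.allclose(np.cross(np.cross(a, b), c),
                       np.cross(a, np.cross(b, c)))

# (15.15): |a x b| = ab sin(theta), with theta taken from the dot product.
theta = np.arccos((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
assert np.isclose(np.linalg.norm(np.cross(a, b)),
                  np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta))
```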
[Figure 15.3: The cross product: c = a × b = ĉ ab sin θ.]

Fig. 15.3 illustrates the cross product.

15.3 Orthogonal bases

A vector exists independently of the components by which one expresses it, for, whether q = x̂x + ŷy + ẑz or q = x̂′x′ + ŷ′y′ + ẑ′z′, it remains the same vector q. However, where a model involves a circle, a cylinder or a sphere, or where a model involves a contour or a curved surface of some kind, to choose x̂′, ŷ′ and ẑ′ wisely can immensely simplify the model's analysis. Normally one requires that x̂′, ŷ′ and ẑ′ each retain unit length, run perpendicularly to one another, and obey the right-hand rule (§ 3.3), but otherwise any x̂′, ŷ′ and ẑ′ can serve—for nothing requires the three various basis vectors to be constant. Recalling the constants and variables of § 2.7, such a concept is flexible enough to confuse the uninitiated severely and soon; as in § 2.7, here too an example affords perspective.

Imagine driving your automobile down a winding road, where q represented your speed and ℓ̂ represented the direction the road ran—not generally, but just at the spot along the road at which your automobile momentarily happened to be. [Footnote: Conventionally one would prefer the letter v to represent speed, with velocity as v, which in the present example would happen to be v = ℓ̂v; however, this section will require the letter v for an unrelated purpose.] That your velocity were ℓ̂q meant that you kept skilfully to your lane.
That your velocity were (q)(ℓ̂ cos ψ + v̂ sin ψ)—where v̂, at right angles to ℓ̂, represented the direction right-to-left across the road—would have you drifting out of your lane at an angle ψ. A car a mile ahead of you had velocity (q2)(ℓ̂ cos β + v̂ sin β), where β represented the difference (assuming that the other driver kept skilfully to his own lane) between the direction of the road a mile ahead and the direction at your spot. A headwind had velocity −ℓ̂q_wind; a crosswind, ±v̂ q_wind. For all these purposes the unit vector ℓ̂ would remain constant. However, fifteen seconds later, after you had rounded a bend in the road, the symbols ℓ̂ and v̂ would by definition represent different vectors than before, with respect to which one would express your new velocity as ℓ̂q but would no longer express the headwind's velocity as −ℓ̂q_wind because, since the road had turned while the wind had not, the wind would no longer be a headwind. And this is where confusion can arise: your own velocity had changed while the expression representing it had not, whereas the wind's velocity had not changed while the expression representing it had. This is not because ℓ̂ differs from place to place at a given moment, for like any other vector the vector ℓ̂ (as defined in this particular example) is the same vector everywhere. Rather, it is because ℓ̂ is defined relative to the road at your automobile's location, which location changes as you drive.

If a third unit vector ŵ were defined, perpendicular both to ℓ̂ and to v̂ such that [ℓ̂ v̂ ŵ] obeyed the right-hand rule, then the three together would constitute an orthogonal basis. Any right-handedly mutually perpendicular [x̂′ ŷ′ ẑ′] in three dimensions, whether constant or variable, for which

  x̂′ · ŷ′ = 0,    ŷ′ × ẑ′ = x̂′,
  ŷ′ · ẑ′ = 0,    ẑ′ × x̂′ = ŷ′,
  ẑ′ · x̂′ = 0,    x̂′ × ŷ′ = ẑ′,   (15.16)

constitutes such an orthogonal basis, from which other vectors can be built. The geometries of some models suggest no particular basis, when one usually just uses a constant [x̂ ŷ ẑ]. The geometries of other models however do suggest a particular basis, often a variable one.

• Where the model features a contour like the example's winding road, an [ℓ̂ v̂ ŵ] basis (or a [û v̂ ℓ̂] basis or even a [û ℓ̂ ŵ] basis) can be used, where ℓ̂ locally follows the contour. The variable unit vectors v̂ and ŵ (or û and v̂, etc.) can be defined in any convenient way so long as they remain perpendicular to one another and to ℓ̂—such that (ẑ × ℓ̂) · ŵ = 0 for instance (that is, such that ŵ lay in the plane of ẑ and ℓ̂)—but if the geometry suggests a particular v̂ or ŵ (or û), like the direction right-to-left across the example's road, then that v̂ or ŵ should probably be used. The letter ℓ here stands for "longitudinal."

• Where the model features a curved surface like the surface of a wavy sea, a [û v̂ n̂] basis (or a [û n̂ ŵ] basis, etc.) can be used, where n̂ points locally perpendicularly to the surface. The letter n here stands for "normal," a synonym for "perpendicular." Observe, incidentally but significantly, that such a unit normal n̂ tells one everything one needs to know about its surface's local orientation. [Footnote: The assertion wants a citation, which the author lacks.]

• Combining the last two, where the model features a contour along a curved surface, an [ℓ̂ v̂ n̂] basis can be used. One need not worry about choosing a direction for v̂ in this case, since necessarily v̂ = n̂ × ℓ̂.

• Where the model features a circle or cylinder, a [ρ̂ φ̂ ẑ] basis can be used, where ẑ is constant and runs along the cylinder's axis (or perpendicularly through the circle's center), ρ̂ is variable and points locally away from the axis, and φ̂ is variable and runs locally along the circle's perimeter in the direction of increasing azimuth φ. Refer to § 3.9 and Fig. 15.4.

• Where the model features a sphere, an [r̂ θ̂ φ̂] basis can be used, where r̂ is variable and points locally away from the sphere's center; θ̂ is variable and runs locally tangentially to the sphere's surface in the direction of increasing elevation θ (that is, as nearly as possible to the −ẑ direction without departing from the sphere's surface, though not usually in the −ẑ direction itself); and φ̂ is variable and runs locally tangentially to the sphere's surface in the direction of increasing azimuth φ (that is, along the sphere's surface perpendicularly to ẑ). Standing on the earth's surface, with the earth as the sphere, r̂ would be up, θ̂ south, and φ̂ east. Refer to § 3.9 and Fig. 15.5.

• Occasionally a model arises with two circles that lie neither in the same plane nor in parallel planes but whose axes stand perpendicular to one another. In such a model one conventionally establishes ẑ as the direction of the principal circle's axis but then is left with x̂ or ŷ as the direction of the secondary circle's axis, upon which an [x̂ ρ̂ˣ φ̂ˣ] or [ŷ ρ̂ʸ φ̂ʸ] basis—or a corresponding locally spherical [θ̂ φ̂ r̂]-style basis—can be used locally as appropriate.
[Figure 15.4: The cylindrical basis, showing the variable basis vectors ρ̂ and φ̂ at azimuth φ. The conventional dot and cross symbols respectively represent vectors pointing out of the page toward the reader and into the page away from the reader—the dot is supposed to look like the tip of an arrowhead. Thus, this figure shows the constant basis vector ẑ pointing out of the page toward the reader.]

[Figure 15.5: The spherical basis r̂, θ̂, φ̂ (see also Fig. 15.1).]
The trouble with vector work is that one has to learn to abbreviate or the expressions involved grow repetitive and unreadably long. Eventually one accepts the need and takes the trouble to master the conventional vector abbreviation this section presents; indeed, the abbreviated notation really is the proper notation, and the abbreviation is rather elegant once one becomes used to it. If the reader feels that the notation begins to confuse more than it describes, study closely and take heart! The notation is not actually as impenetrable as it at first will seem. The confusion is subjective; the writer empathizes, but regrets to inform the reader that the rest of the section, far from granting the reader a comfortable chance to absorb the elaborated notation as it stands, shall not delay to elaborate the notation yet further.

Because all those prime marks burden the notation and for professional mathematical reasons, the general forms (15.20) and (15.21) are sometimes rendered

  a · b = a1 b1 + a2 b2 + a3 b3,

  a × b = | ê1 ê2 ê3 |
          | a1 a2 a3 |
          | b1 b2 b3 |,

with the implied understanding that x, y and z could just as well be ρ, φ and z or the coordinates of any other orthogonal, right-handed, three-dimensional basis—but you have to be careful about that in applied usage because people are not always sure whether a symbol like a3 means "the third component of the vector a" (as it does here) or "the third vector's component in the â direction" (as it would elsewhere). [Footnote: Some professional mathematicians now write a superscript aⁱ in certain cases in place of a subscript aᵢ, where the superscript bears some additional semantics [52, "Einstein notation," 05:36, 10 February 2008]. Scientists and engineers however tend to prefer Einstein's original, subscript-only notation.]

15.4.2 Einstein's summation convention

Einstein's summation convention is this: that repeated indices are implicitly summed over. [Footnote: The book might not present the convention at all were it not so widely used in more advanced applied circles—and, besides, who does not want to learn a thing to which the famous name of Albert Einstein is attached? [50, "Einstein summation"]] For instance, where the convention is in force, the equation

  a · b = ai bi   (15.22)

means that

  a · b = Σi ai bi,

or more fully that

  a · b = Σ_{i=x′,y′,z′} ai bi = ax′ bx′ + ay′ by′ + az′ bz′,

which is (15.20), except that Einstein's form (15.22) expresses it more succinctly. Likewise,

  a × b = ı̂ (ai+1 bi−1 − bi+1 ai−1)   (15.23)

is (15.21)—although an experienced applied mathematician, judging the cyclic form somewhat elegant but not worth devoting a subsection to, would probably apply the Levi-Civita epsilon of § 15.4.3, below, to further abbreviate this last equation to the form of (15.24) before presenting it. Typically, applied mathematicians will write in the manner of (15.18) and (15.19) with the implied understanding that they really mean (15.20) and (15.21), but prefer not to burden the notation with extra little strokes; that is, where the convention is in force, the summational operator Σi is implied, not written—but the operator is still there.

What is important to understand about Einstein's summation convention is that, in and of itself, it brings no new mathematics. It is rather a notational convenience. [Footnote: [38]] It asks a reader to regard a repeated index like the i in "ai bi" as a dummy index (§ 2.3) and thus to read "ai bi" as "Σi ai bi." It does not magically create a summation where none existed; it just hides the summation sign to keep it from cluttering the page. It is the kind of notational trick an accountant might appreciate. Admittedly confusing on first encounter, the convention's utility and charm are felt after only a little practice.

Nothing requires you to invoke Einstein's summation convention everywhere and for all purposes. You can waive the convention, writing the summation symbol out explicitly whenever you like. In contexts other than vector work, to invoke the convention at all may make little sense. Nevertheless, you should indeed learn the convention—if only because you must learn it to understand the rest of this chapter—but once having learned it you should naturally use it only where it actually serves to clarify. Fortunately, in vector work, it often does just that.
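Both abbreviated forms above can be tried numerically. The sketch below is not from the book; it assumes NumPy, whose `einsum` routine implements exactly this convention (a repeated index letter is summed over), and also checks the cyclic form (15.23) with indices wrapping modulo 3.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])

# a_i b_i: the repeated index i is implicitly summed over, per (15.22).
dot = np.einsum('i,i->', a, b)
assert np.isclose(dot, a @ b)

# The cyclic cross-product form (15.23): the i-th component is
# a_{i+1} b_{i-1} - b_{i+1} a_{i-1}, indices wrapping modulo 3.
cross = np.array([a[(i + 1) % 3] * b[(i - 1) % 3]
                  - b[(i + 1) % 3] * a[(i - 1) % 3] for i in range(3)])
assert np.allclose(cross, np.cross(a, b))
```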
Quiz: if δij is the Kronecker delta of § 11.2, then what does the symbol δii represent where Einstein's summation convention is in force?

15.4.3 The Kronecker delta and the Levi-Civita epsilon

Einstein's summation convention expresses the dot product (15.22) neatly but, as we have seen in (15.23), does not by itself wholly avoid unseemly repetition in the cross product. The Levi-Civita epsilon ǫijk mends this, rendering the cross product as

  a × b = ǫijk ı̂ aj bk,   (15.24)

where

          { +1 if (i, j, k) = (x′, y′, z′), (y′, z′, x′) or (z′, x′, y′);
  ǫijk ≡ { −1 if (i, j, k) = (x′, z′, y′), (y′, x′, z′) or (z′, y′, x′);
          {  0 otherwise [for instance if (i, j, k) = (x′, x′, y′)].   (15.25)

[Footnote: Also called the Levi-Civita symbol, tensor, or permutor [37, "Levi-Civita permutation symbol"]. For native English speakers who do not speak Italian, the "ci" in Levi-Civita's name is pronounced as the "chi" in "children."]

In the language of § 11.6, the Levi-Civita epsilon quantifies parity. Chapters 11 and 14 did not use it, but the Levi-Civita notation applies in any number of dimensions, not only three as in the present chapter; for instance, in the four-dimensional, 4 × 4 case, ǫ1234 = 1 whereas ǫ1243 = −1: refer to §§ 11.6 and 14.1. In this more general sense the Levi-Civita is the determinant of the permutor whose ones hold the indicated positions—which is a formal way of saying that it's a + sign for even parity and a − sign for odd.

Technically, the Levi-Civita epsilon and Einstein's summation convention are two separate, independent things, but a canny reader takes the Levi-Civita's appearance as a hint that Einstein's convention is probably in force, as it is in (15.24). The two tend to go together. [Footnote: The writer has heard the apocryphal belief expressed that the letter ǫ, a Greek e, stood in this context for "Einstein"; technically, ǫ is merely the letter after δ, which represents the name of Paul Dirac—though the writer does not claim his indirected story to be any less apocryphal than the other one (the capital letter ∆ has a point on top that suggests the pointy nature of the Dirac delta of Fig. 7.10, which makes for yet another plausible story). In any event, one sometimes hears Einstein's summation convention, the Kronecker delta and the Levi-Civita epsilon together referred to as "the Einstein notation," a term sometimes taken loosely, which though maybe not quite terminologically correct is hardly incorrect enough to argue over and is clear enough in practice.]
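The epsilon of (15.25) is concrete enough to build and test in code. The sketch below is not from the book; it assumes NumPy, constructs the rank-3 epsilon over numeric indices 0, 1, 2 standing in for x′, y′, z′, and verifies both the cross-product form (15.24) and the contraction identity ǫimn ǫijk = δmj δnk − δmk δnj.

```python
import numpy as np
from itertools import permutations

# Build the rank-3 Levi-Civita epsilon per (15.25).
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    cyclic = [i, j, k] in ([0, 1, 2], [1, 2, 0], [2, 0, 1])
    eps[i, j, k] = 1.0 if cyclic else -1.0   # even vs. odd parity

a = np.array([1.0, -2.0, 0.5])
b = np.array([3.0, 1.0, 2.0])

# (15.24): (a x b)_i = eps_ijk a_j b_k, summing over j and k.
cross = np.einsum('ijk,j,k->i', eps, a, b)
assert np.allclose(cross, np.cross(a, b))

# The contraction identity eps_imn eps_ijk = d_mj d_nk - d_mk d_nj.
delta = np.eye(3)
lhs = np.einsum('imn,ijk->mnjk', eps, eps)
rhs = (np.einsum('mj,nk->mnjk', delta, delta)
       - np.einsum('mk,nj->mnjk', delta, delta))
assert np.allclose(lhs, rhs)
```

Entries with any repeated index never appear in the loop over permutations, so they correctly remain zero.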
2's quiz. y ′ ). and also either (m. n) or (j. n) = (y ′ .15. k) = (z ′ .1 lists several relevant properties. which makes for yet another plausible story). k) = (n. in the case that i = x′ . thus contributing nothing to the sum). z ′ ) or (m. in each case the several indices can take any values. This implies that either (j. Table 15. In any event. to zero. with Einstein's summation convention in force. NOTATION
409
Table 15. y ′ ).10. Both delta and epsilon find use in vector work.1: Properties of the Kronecker delta and the Levi-Civita epsilon. one sometimes hears Einstein's summation convention. and similarly in the cases that i = y ′ and i = z ′ (more precisely. the Kronecker delta and the Levi-Civita epsilon together referred to as "the Einstein notation. one can write (15. k) = (m. n) = (z ′ .6 approximately as the cross product relates to the dot product.
. but combinations other than the ones listed drive ǫimn or ǫijk . the property that ǫimn ǫijk = δmj δnk − δmk δnj is proved by observing that.22) alternately in the form a · b = δij ai bj . or both.14 each as with Einstein's summation convention in force. δjk = δkj δij δjk = δik δii = 3 δjk ǫijk = 0 δnk ǫijk = ǫijn ǫijk = ǫjki = ǫkij = −ǫikj = −ǫjik = −ǫkji ǫijk ǫijk = 6 ǫijn ǫijk = 2δnk ǫimn ǫijk = δmj δnk − δmk δnj The Levi-Civita epsilon ǫijk relates to the Kronecker delta δij of § 11." which though maybe not quite terminologically correct is hardly incorrect enough to argue over and is clear enough in practice. k) = (y ′ .15 Of the table's several properties. z ′ ) or (j. when one takes parity into
after δ. 14 [38] 15 The table incidentally answers § 15.4. either (j. which represents the name of Paul Dirac—though the writer does not claim his indirected story to be any less apocryphal than the other one (the capital letter ∆ has a point on top that suggests the pointy nature of the Dirac delta of Fig.4. For example. m)—which. 7.
(Notice that the compound Kronecker operator δmj δnk includes nonzero terms for the case that j = k = m = n = x′, for the case that j = k = m = n = y′ and for the case that j = k = m = n = z′, whereas the compound Levi-Civita operator ǫimn ǫijk does not; but the compound Kronecker operator −δmk δnj includes canceling terms for these same three cases. This is why the table's claim that ǫimn ǫijk = δmj δnk − δmk δnj is valid as written.)

To belabor the topic further here would serve little purpose. The reader who does not feel entirely sure that he understands what is going on might work out the table's several properties with his own pencil, in something like the style of the example, until he is satisfied that he adequately understands the several properties and their correct use. It is precisely to encapsulate such interminable detail that we use the Kronecker delta.

15.5  Algebraic identities

Vector algebra is not in principle very much harder than scalar algebra is, but with three distinct types of product it has more rules controlling the way its products and sums are combined. Table 15.2 lists several of these.^17

Table 15.2: Algebraic vector identities.

    ψa = ˆı ψai        a · b = ai bi        a × b = ǫijk ˆı aj bk        a∗ · a = |a|²
    (ψ)(a + b) = ψa + ψb
    b · a = a · b
    b × a = −a × b
    a · (b + c) = a · b + a · c
    a × (b + c) = a × b + a × c
    a · (ψb) = (ψ)(a · b)
    a × (ψb) = (ψ)(a × b)
    a · (b × c) = b · (c × a) = c · (a × b)
    a × (b × c) = b(a · c) − c(a · b)

Most of the table's identities are plain by the formulas (15.9), (15.18) and (15.19) respectively for the scalar, dot and cross products, and one was proved as (15.7).

^17 [47, Appendix II][21, Appendix A]
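The table's two least obvious identities, the cyclic triple-product rule and the expansion a × (b × c) = b(a · c) − c(a · b), admit a quick numerical spot-check. The Python sketch below (an added illustration, using arbitrarily chosen sample vectors rather than anything from the narrative) confirms both:

```python
# Spot-check two identities of Table 15.2 on arbitrary sample vectors.

def dot(a, b):
    return sum(ai*bi for ai, bi in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a = (1.0, -2.0, 3.0)
b = (4.0, 0.5, -1.0)
c = (-2.0, 1.5, 2.0)

# a . (b x c) = b . (c x a) = c . (a x b)
t1 = dot(a, cross(b, c))
t2 = dot(b, cross(c, a))
t3 = dot(c, cross(a, b))
assert abs(t1 - t2) < 1e-12 and abs(t2 - t3) < 1e-12

# a x (b x c) = b (a . c) - c (a . b)
lhs = cross(a, cross(b, c))
rhs = tuple(bi*dot(a, c) - ci*dot(a, b) for bi, ci in zip(b, c))
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))

print("triple-product identities verified")
```

Such a check proves nothing in general, of course, but it catches sign and index errors immediately when one works the identities out by hand.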
The remaining identity is proved in the notation of § 15.4 as

    a · (b × c) = a · (ǫijk ˆı bj ck)
                = ǫijk ai bj ck
                = ǫkij ak bi cj = ǫjki aj bk ci
                = ǫijk bi cj ak = ǫijk ci aj bk
                = b · (c × a) = c · (a × b).        (15.27)

Besides the several vector identities, the table also includes the three vector products in Einstein notation.

15.6  Fields and their derivatives

We call a scalar quantity φ(t) or vector quantity f(t) whose value varies over time "a function of time t." We can likewise call a scalar quantity ψ(r) or vector quantity a(r) whose value varies over space "a function of position r," but there is a special, alternate name for such a quantity. We call it a field.

A field is a quantity distributed over space or, if you prefer, a function in which spatial position serves as independent variable. Air pressure p(r) is an example of a scalar field, whose value at a given location r has amplitude but no direction. Wind velocity^18 q(r) is an example of a vector field, whose value at a given location r has both amplitude and direction. These are typical examples. A vector field can be thought of as composed of three scalar fields,

    q(r) = x̂ qx(r) + ŷ qy(r) + ẑ qz(r);

but, since

    q(r) = x̂′ qx′(r) + ŷ′ qy′(r) + ẑ′ qz′(r)

for any orthogonal basis [x̂′ ŷ′ ẑ′] as well, the specific scalar fields qx(r), qy(r) and qz(r) are no more essential to the vector field q(r) than the specific scalars bx, by and bz are to the vector b. Scalar and vector fields are of utmost use in the modeling of physical phenomena.

As one can take the derivative dσ/dt or df/dt with respect to time t of a function σ(t) or f(t), one can likewise take the derivative with respect to position r of a field ψ(r) or a(r). However, derivatives with respect to position create a notational problem, for it is not obvious what notation like dψ/dr or da/dr would mean. The notation dσ/dt means "the rate of σ as time t advances," but if the notation dψ/dr likewise meant "the rate of ψ as position r advances" then it would prompt one to ask, "advances in which direction?"

^18 This section, too, uses the letter q for velocity in place of the conventional v [3, § 18.4].
The notation offers no hint. In fact, dψ/dr and da/dr mean nothing very distinct in most contexts and we shall avoid such notation. If we will speak of a field's derivative with respect to position r, then we shall be more precise.

This section gives the field no fewer than four distinct kinds of derivative: the directional derivative, the gradient, the divergence and the curl.^19 So many derivatives bring the student a conceptual difficulty akin to a caveman's difficulty in conceiving of a building with more than one floor. Once one has grasped the concept of upstairs and downstairs, a building with three floors or thirty is not especially hard to envision; but were you to try to explain such a building to a caveman used to thinking of the ground and the floor as more or less the same thing, he might be conflicted by partly false images of sitting in a tree or of clambering onto (and breaking) the roof of his hut. "There are many trees and antelopes but only one sky and floor. How can one speak of many skies or many floors?" The student's principal conceptual difficulty with the several vector derivatives is of this kind.

^19 Vector veterans may notice that the Laplacian is not listed. This is not because the Laplacian were uninteresting but rather because the Laplacian is actually a second-order derivative, a derivative of a derivative. We shall address the Laplacian in [section not yet written].

15.6.1  The ∇ operator

Section 15.2 has given the vector three distinct kinds of product. Consider a vector

    a = x̂ ax + ŷ ay + ẑ az .

Then consider a "vector"

    c = x̂ [Tuesday] + ŷ [Wednesday] + ẑ [Thursday].

If you think that the latter does not look very much like a vector, then the writer thinks as you do; but consider:

    c · a = [Tuesday] ax + [Wednesday] ay + [Thursday] az .

The writer does not know how to interpret a nonsense term like "[Tuesday] ax" any more than the reader does, but the point is that c behaves as though it were a vector insofar as vector operations like the dot product are concerned. What matters in this context is not that c have amplitude and direction (it has neither) but rather that it have the three orthonormal components it needs to participate formally in relevant vector operations.
Now consider a "vector"

    ∇ = x̂ ∂/∂x + ŷ ∂/∂y + ẑ ∂/∂z .        (15.28)

This ∇ is not a true vector any more than c is, but if we treat it as one then we have that

    ∇ · a = ∂ax/∂x + ∂ay/∂y + ∂az/∂z .

Such a dot product might or might not prove useful but, unlike the terms in the earlier dot product, at least we know what this one's terms mean. (Well, maybe there exists a model in which "[Tuesday]" knows how to operate on a scalar like ax. Operate on? Yes. Nothing in the dot product's definition requires the component amplitudes of c to multiply those of a. Multiplication is what the component amplitudes of true vectors do, but c is not a true vector, so "[Tuesday]" might do something to ax other than to multiply it. The model might allow it; the dot and cross products in and of themselves do not forbid it. If there did exist such a model, then the dot product c · a could be licit in that model and, as if this were not enough, the cross product c × a, composed according to the usual rule for cross products, too could be licit in that model.)

What's the point? The answer is that there wouldn't be any point if the only nonvector "vectors" in question were of c's nonsense kind. The operator ∇ however shares more in common with a true vector than merely having x, y and z components. It has these; but, like a true vector, the operator ∇ is also amenable to having its axes reoriented by (15.1) and (15.2). Section 15.6.2 elaborates the point. This is easier to see at first with respect to the true vector a. Consider rotating the x and y axes through an angle φ about the z axis. There follows

    a = x̂ ax + ŷ ay + ẑ az
      = (x̂′ cos φ − ŷ′ sin φ)(ax′ cos φ − ay′ sin φ)
          + (x̂′ sin φ + ŷ′ cos φ)(ax′ sin φ + ay′ cos φ) + ẑ′ az′
      = x̂′ [ax′ cos² φ − ay′ cos φ sin φ + ax′ sin² φ + ay′ cos φ sin φ]
          + ŷ′ [−ax′ cos φ sin φ + ay′ sin² φ + ax′ cos φ sin φ + ay′ cos² φ] + ẑ′ az′
      = x̂′ ax′ + ŷ′ ay′ + ẑ′ az′ ,
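The rotation algebra is easy to confirm numerically. The short Python sketch below (an added illustration with an arbitrarily chosen vector and angle, not from the narrative) expresses a vector's components on axes rotated about z and then recombines them, recovering the original vector and its amplitude:

```python
import math

def rotate_z(v, phi):
    """Components of vector v expressed on axes rotated by phi about z."""
    x, y, z = v
    return (x*math.cos(phi) + y*math.sin(phi),
            -x*math.sin(phi) + y*math.cos(phi),
            z)

def recombine(vp, phi):
    """Rebuild the unprimed components from the primed ones (inverse rotation)."""
    xp, yp, zp = vp
    return (xp*math.cos(phi) - yp*math.sin(phi),
            xp*math.sin(phi) + yp*math.cos(phi),
            zp)

a = (3.0, -1.0, 2.0)
phi = 0.7
ap = rotate_z(a, phi)          # a_x', a_y', a_z'
back = recombine(ap, phi)      # the vector rebuilt from its primed parts
assert all(abs(u - w) < 1e-12 for u, w in zip(a, back))

# The amplitude, too, is unchanged by the reorientation.
amp = math.sqrt(sum(u*u for u in a))
amp_p = math.sqrt(sum(u*u for u in ap))
assert abs(amp - amp_p) < 1e-12
print("same vector on either basis")
```

The components change, but the vector itself, and everything built from rotationally invariant combinations of components, does not. That is the invariance the narrative is after.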
where the final expression has different axes than the original but, relative to those axes, exactly the same form. Further rotation about other axes would further reorient but naturally also would not alter the form. Now consider ∇. The partial differential operators ∂/∂x, ∂/∂y and ∂/∂z change no differently under reorientation than the component amplitudes ax, ay and az do. Hence,

    ∇ = ˆı ∂/∂i = x̂′ ∂/∂x′ + ŷ′ ∂/∂y′ + ẑ′ ∂/∂z′ ,        (15.29)

evidently the same operator regardless of the choice of basis [x̂′ ŷ′ ẑ′]. It is this invariance under reorientation that makes the ∇ operator useful.

If ∇ takes the place of the ambiguous d/dr, then what takes the place of the ambiguous d/dr̃, d/dro, d/dr† and so on? Answer: ∇̃, ∇o, ∇† and so on. Whatever mark distinguishes the special r, the same mark distinguishes the corresponding special ∇; for example, where ro = ˆı io, there ∇o = ˆı ∂/∂io. That is the convention.

Introduced by Oliver Heaviside [6], the vector differential operator ∇, informally pronounced "del" (in the author's country at least), finds extensive use in the modeling of physical phenomena. After a brief digression to discuss operator notation, the subsections that follow will use the operator to develop and present the four basic kinds of vector derivative.

15.6.2  Operator notation

Section 15.6.1 has introduced operator notation without explaining what it is. This subsection digresses to explain.

Operator notation concerns the representation of unary operators and the operations they specify. Section 7.3 has already broadly introduced the notion of the operator. A unary operator is a mathematical agent that transforms a single discrete quantity, a single distributed quantity, a single field, a single function or another single mathematical object in some definite way. For example, J ≡ ∫₀ᵗ dt is a unary operator^20 whose effect is such that, for instance, Jt = t²/2 and J cos ωt = (sin ωt)/ω. Any letter might serve as well as the example's J, but what distinguishes operator notation is that, like the matrix row operator A in matrix notation's product Ax (§ 11.1), the operator J in operator notation's operation Jt attacks from the left. In this example it is clear enough what J does, but where otherwise unclear one can write an operator's definition in the applied style of J ≡ ∫₀ᵗ · dt, where the "·" holds the place of the thing operated upon.

The matrix actually is a type of unary operator and matrix notation is a specialization of operator notation, so we have met operator notation before; and, in fact, we have met it much earlier than that. The product 5t can if you like be regarded as the unary operator "5 times," operating on t. Perhaps you did not know that 5 was an operator; and, indeed, the scalar 5 itself is no operator but just a number; but where no other operation is defined operator notation implies scalar multiplication by default. It is true that 5t = t5, though naturally only in the specific case of scalar multiplication, which happens to be commutative. Generally, Jt ≠ tJ if J is a unary operator, though the notation Jt usually formally resembles multiplication in other respects, as we shall see.

The operators J and A above are examples of linear unary operators, unary operations that scrupulously honor § 7.3.3's rule of linearity. Whether operator notation can licitly represent any unary operation whatsoever is a definitional question we will leave for the professional mathematicians to answer, but in normal usage operator notation represents only linear unary operations. The operator K ≡ · + 3, for instance, is not linear and almost certainly should never be represented in operator notation as here, lest an expression like Kt mislead an understandably unsuspecting audience.

Linear unary operators obey a definite algebra, the same algebra matrices obey; it is precisely this algebra that makes operator notation so useful. Linear unary operators generally are not commutative, so J1 J2 ≠ J2 J1 in general, but otherwise they follow familiar rules of multiplication like (J2 + J3)J1 = J2 J1 + J3 J1; and a linear unary operator enjoys the same associativity (11.5) a matrix enjoys. In operator notation, operators associate from right to left (§ 2.1) except where parentheses group otherwise. One can leave an operation unresolved; one can speak of a unary operator like J, A or Ω without giving it anything in particular to operate upon. The a · in the dot product a · b and the a × in the cross product a × b can profitably be regarded as unary operators.

Observe, however, that an expression like ωt is notationally ambiguous as to whether it bears the semantics of "the product of ω and t" or those of "the unary operator 'ω times,' operating on t." Both the confusion and any perceived need for parentheses would vanish if Ω ≡ (ω)(·) were unambiguously an operator, in which case (JΩ)t = JΩt = J(Ωt); but, rather than go to the trouble of defining extra symbols like Ω, it is usually easier to rely on the right-to-left convention that Jωt = J(ωt), not (Jω)t, or, when it matters, just to write the parentheses, which take little space on the page and are universally understood. One can compare the distinction in § 11.3.2 between λ and λI against the distinction between ω and Ω here. Modern conventional applied mathematical notation, though generally excellent, remains imperfect.

^20 tJ is itself a unary operator (it is the operator t ∫₀ᵗ · dt), though one can assign no particular value to it until it actually operates on something.
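In code, a linear unary operator like J can be modeled as a function that consumes a function and returns a value or another function. The sketch below (an added illustration; the operator J and its two example results come from the narrative, the numerical implementation does not) approximates J ≡ ∫₀ᵗ · dt by the midpoint rule and checks Jt = t²/2, J cos ωt = (sin ωt)/ω, and linearity:

```python
import math

def J(f, t, n=100000):
    """Approximate the definite integral of f from 0 to t (midpoint rule)."""
    h = t / n
    return sum(f((i + 0.5)*h) for i in range(n)) * h

t = 1.3
omega = 2.0

# J t = t^2 / 2
assert abs(J(lambda u: u, t) - t*t/2) < 1e-6

# J cos(omega t) = sin(omega t) / omega
assert abs(J(lambda u: math.cos(omega*u), t) - math.sin(omega*t)/omega) < 1e-6

# Linearity: J(2f + 3g) = 2 Jf + 3 Jg
f = lambda u: u*u
g = lambda u: math.cos(u)
lhs = J(lambda u: 2*f(u) + 3*g(u), t)
rhs = 2*J(f, t) + 3*J(g, t)
assert abs(lhs - rhs) < 1e-9

print("J behaves as a linear unary operator")
```

Note that `J` attacks from the left exactly as the notation Jt suggests: the function being operated upon is handed to the operator, not the reverse.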
15.6.3  The directional derivative and the gradient

In the calculus of vector fields, the derivative notation d/dr is ambiguous because, as the section's introduction has observed, the notation gives r no specific direction in which to advance. In operator notation, however, given (15.29) and accorded a reference vector b to supply a direction and a scale, one can compose the directional derivative operator

    (b · ∇) = bi ∂/∂i        (15.30)

to express the derivative unambiguously. This operator applies equally to the scalar field, (b · ∇)ψ(r), as to the vector field,

    (b · ∇) a(r) = bi ∂a/∂i = ˆȷ bi ∂aj/∂i .        (15.31)

For the scalar field the parentheses are unnecessary and conventionally are omitted, as

    b · ∇ψ(r) = bi ∂ψ/∂i .        (15.32)

In the case (15.31) of the vector field, however, ∇a(r) itself means nothing coherent^21 so the parentheses usually are retained. Equations (15.31) and (15.32) define the directional derivative.

Note that the directional derivative is the derivative not of the reference vector b but only of the field ψ(r) or a(r). The vector b only directs the derivative; it is not the object of it. Nothing requires b to be constant; it can be a vector field b(r) that varies from place to place. The directional derivative does not care.

Within (15.32), the quantity

    ∇ψ(r) = ˆı ∂ψ/∂i        (15.33)

is called the gradient of the scalar field ψ(r). The gradient represents the amplitude and direction of a scalar field's locally steepest rate.

Formally a dot product, the directional derivative operator b · ∇ is invariant under reorientation of axes, wherefore the directional derivative is invariant, too; and the gradient ∇ψ(r) is likewise invariant. Though both scalar and vector fields have directional derivatives, only scalar fields have gradients.

^21 Well, it does mean something coherent in dyadic analysis, but this book won't treat that.

15.6.4  Divergence

There exist other vector derivatives than the directional derivative and gradient of § 15.6.3. One of these is divergence. It is not easy to motivate divergence directly, so we shall approach it indirectly, through the concept of flux as follows.

Flux is flow through a surface: in this case, net flow outward from the region in question. The flux of a vector field a(r) outward from a region in space is

    Φ = ∮S a(r) · ds ,        (15.34)

where

    ds ≡ n̂ ds        (15.35)

is a vector infinitesimal of amplitude ds, directed normally outward from the closed surface bounding the region, ds being the area of an infinitesimal element of the surface, the area of a tiny patch; in (15.34)'s dot product the vector ds represents the area and orientation of a patch of the region's enclosing surface. (The paragraph says much in relatively few words. If it seems opaque then try to visualize the dot product a(r) · ds.) A region of positive flux is a source; of negative flux, a sink. Think of a region of air: wind blowing through the region, entering and leaving, would constitute zero net flux; but a positive net flux would have barometric pressure falling and air leaving the region, maybe because a storm is coming; and you've got the idea.

When something like air flows through any surface, not necessarily a physical barrier but an imaginary surface like the goal line's vertical plane in a football game,^22 what matters is not the surface's area as such but rather

^22 The author has American football in mind, but other football games have goal lines and goal posts, too. Pick your favorite brand.
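The directional derivative and the gradient lend themselves to a quick finite-difference check. In the Python sketch below (an added illustration; the field ψ and the point chosen are arbitrary examples, not from the text), b · ∇ψ computed from the analytic gradient agrees with the rate of change of ψ along b:

```python
import math

def psi(x, y, z):
    """An arbitrary sample scalar field."""
    return x*x*y + math.sin(z)

def grad_psi(x, y, z):
    """Analytic gradient of the sample field."""
    return (2*x*y, x*x, math.cos(z))

def directional(b, r, h=1e-6):
    """(b . del) psi at r, by central difference along b."""
    x, y, z = r
    bx, by, bz = b
    return (psi(x + h*bx, y + h*by, z + h*bz)
            - psi(x - h*bx, y - h*by, z - h*bz)) / (2*h)

r = (1.2, -0.7, 0.4)
b = (0.5, 2.0, -1.0)

# b . grad(psi), from the analytic gradient (15.33)
g = grad_psi(*r)
analytic = sum(bi*gi for bi, gi in zip(b, g))

assert abs(directional(b, r) - analytic) < 1e-6
print("(b . del) psi agrees with b . grad(psi)")
```

Note how b only directs and scales the derivative; the difference quotient differentiates ψ alone, never b, just as the narrative says.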
the area the surface presents to the flow. The surface presents its full area to a perpendicular flow, but otherwise the flow sees a foreshortened surface, as though the surface were projected onto a plane perpendicular to the flow. Refer to Fig. 15.2. One can contemplate the flux Φopen = ∫S a(r) · ds through an open surface as well as through a closed, but it is the outward flux (15.34) through a closed surface that will concern us here.

The outward flux Φ of a vector field a(r) from some definite region in space is evidently

    Φ = ∫∫ ∆ax(y, z) dy dz + ∫∫ ∆ay(z, x) dz dx + ∫∫ ∆az(x, y) dx dy,

where

    ∆ax(y, z) = ∫ from xmin(y,z) to xmax(y,z) of (∂ax/∂x) dx,
    ∆ay(z, x) = ∫ from ymin(z,x) to ymax(z,x) of (∂ay/∂y) dy,
    ∆az(x, y) = ∫ from zmin(x,y) to zmax(x,y) of (∂az/∂z) dz

represent the increase across the region respectively of ax, ay or az along an x̂-, ŷ- or ẑ-directed line.^23 If the field has constant derivatives ∂a/∂i, or equivalently if the region in question is small enough that the derivatives are practically constant through it, then these increases are simply

    ∆ax(y, z) = (∂ax/∂x) ∆x(y, z),
    ∆ay(z, x) = (∂ay/∂y) ∆y(z, x),
    ∆az(x, y) = (∂az/∂z) ∆z(x, y),

upon which

    Φ = (∂ax/∂x) ∫∫ ∆x(y, z) dy dz + (∂ay/∂y) ∫∫ ∆y(z, x) dz dx + (∂az/∂z) ∫∫ ∆z(x, y) dx dy.

But each of the last equation's three integrals represents the region's volume V, so

    Φ = (V)(∂ax/∂x + ∂ay/∂y + ∂az/∂z);

or, dividing through by the volume,

    Φ/V = ∂ax/∂x + ∂ay/∂y + ∂az/∂z = ∂ai/∂i = ∇ · a(r).        (15.36)

We give this ratio of outward flux to volume,

    ∇ · a(r) = ∂ai/∂i ,        (15.37)

the name divergence, representing the intensity of a source or sink. Formally a dot product, divergence is invariant under reorientation of axes.

^23 If the region's boundary happens to be concave, then some lines might enter and exit the region more than once, but this merely elaborates the limits of integration along those lines. It changes the problem in no essential way.

15.6.5  Curl

Curl is to divergence as the cross product is to the dot product. Curl is a little trickier to visualize, though, so it needs first the concept of circulation as follows.

The circulation of a vector field a(r) about a closed contour in space is

    Γ = ∮ a(r) · dℓ ,        (15.38)

where, unlike the ∮S of (15.34) which represented a double integration over a surface, the ∮ here represents only a single integration. One can in general contemplate circulation about any closed contour, but it suits our purpose here to consider specifically a closed contour that happens not to depart from a single, flat plane in space.

Let [û v̂ n̂] be an orthogonal basis with n̂ normal to the contour's plane, such that travel positively along the contour tends from û toward v̂ rather than the reverse. Developing the circulation about an infinitesimal contour in that plane in the manner of the divergence's development above, and dividing by the area the contour encloses, leads through (15.39) to the ratio (15.40) of circulation to area, to which ratio we give the name directional curl, representing the intensity of circulation, the degree of twist so to speak, about a specified axis.

Directional curl (15.40), with respect to a contour in some specified plane, is a scalar, a property of the field and the plane. The cross product in the quantity

    ∇ × a(r) = ǫijk ˆı ∂ak/∂j ,        (15.41)

which we call curl, turns out by contrast to be altogether independent of any particular plane: curl, a vector, unexpectedly is a property of the field only. We might have chosen another plane and, though n̂ would then be different, the same (15.41) would necessarily result. Directional curl evidently cannot exceed curl in magnitude, but will equal it when n̂ points in its direction; so it may be said that curl is the locally greatest directional curl, oriented normally to the locally greatest directional curl's plane.

We have needed n̂ and (15.40) to motivate and develop the concept (15.41) of curl. Once developed, however, the concept of curl stands on its own, whereupon one can return to define directional curl more generally than (15.40), as

    b · [∇ × a(r)] = b · ǫijk ˆı ∂ak/∂j = ǫijk bi ∂ak/∂j .        (15.42)

This would be the actual definition of directional curl. As in (15.31), here too any reference vector b or vector field b(r) can serve to direct the curl, not only n̂. Note however that directional curl so defined is not a distinct kind of derivative but rather is just curl, dot-multiplied by a reference vector.

Formally a cross product, curl is invariant under reorientation of axes; an ordinary dot product, directional curl is likewise invariant.

15.7  Integral forms

The vector's distinctive maneuver is the shift between integral forms, which we are now prepared to treat. This shift comes in two kinds. The two subsections that follow explain.

15.7.1  The divergence theorem

Section 15.6.4 contemplated the flux of a vector field a(r) from a volume small enough that the divergence ∇ · a was practically constant through it.
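Both local definitions are numerical statements: divergence is flux per unit volume over a shrinking region, and directional curl is circulation per unit area about a shrinking contour. The Python sketch below (an added illustration with an arbitrarily chosen sample field) checks both against the partial-derivative formulas:

```python
# Sample field a(r) = (x*y, y*z, x*z) and its analytic derivatives.
def a(x, y, z):
    return (x*y, y*z, x*z)

def div_a(x, y, z):
    # d(xy)/dx + d(yz)/dy + d(xz)/dz = y + z + x
    return y + z + x

def curl_a_z(x, y, z):
    # z component: d(a_y)/dx - d(a_x)/dy = 0 - x
    return -x

x0, y0, z0, h = 0.8, -0.3, 1.1, 1e-4

# Flux of a out of a small cube of side h centered on (x0, y0, z0),
# sampling the field at the six face centers.
flux = ((a(x0 + h/2, y0, z0)[0] - a(x0 - h/2, y0, z0)[0])
        + (a(x0, y0 + h/2, z0)[1] - a(x0, y0 - h/2, z0)[1])
        + (a(x0, y0, z0 + h/2)[2] - a(x0, y0, z0 - h/2)[2])) * h * h

assert abs(flux/h**3 - div_a(x0, y0, z0)) < 1e-6   # flux/volume ~ divergence

# Circulation of a about a small square of side h in the x-y plane
# (normal +z), sampled at the four edge midpoints, counterclockwise.
circ = (a(x0, y0 - h/2, z0)[0]*h        # bottom edge, +x direction
        + a(x0 + h/2, y0, z0)[1]*h      # right edge, +y
        - a(x0, y0 + h/2, z0)[0]*h      # top edge, -x
        - a(x0 - h/2, y0, z0)[1]*h)     # left edge, -y

assert abs(circ/h**2 - curl_a_z(x0, y0, z0)) < 1e-6  # circulation/area ~ curl_z
print("flux/volume and circulation/area agree with div and curl")
```

Shrinking h only sharpens the agreement, which is the limiting statement the definitions make.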
One would like to treat larger volumes as well. One can treat a larger volume by subdividing the volume into infinitesimal volume elements dv, each element small enough that the constant-divergence assumption holds across it.^24 According to (15.36), the flux from any one of these volume elements is

    ∇ · a dv = ∮ over Selement of a · ds.

Even a single volume element, however, can have two distinct kinds of surface area: inner surface area shared with another element; and outer surface area shared with no other element, because it belongs to the surface of the larger, overall volume. Interior elements naturally have only the former kind, but boundary elements have both kinds, so we can elaborate the last equation to read

    ∇ · a dv = ∫ over Sinner of a · ds + ∫ over Souter of a · ds

for a single element, where Selement = Sinner + Souter. Adding all the elements together, we have that

    Σ over elements of ∫ ∇ · a dv
        = Σ over elements of ∫ over Sinner of a · ds + Σ over elements of ∫ over Souter of a · ds;

but the inner sum is null, because it includes every interior surface twice, since each interior surface is shared by two elements such that ds2 = −ds1 (in other words, the one volume element's ds on the surface the two elements share points oppositely to the other volume element's ds on the same surface). Hence,

    Σ over elements of ∫ ∇ · a dv = Σ over elements of ∫ over Souter of a · ds.

That is,

    ∫V ∇ · a dv = ∮S a · ds.        (15.43)

Equation (15.43) is the divergence theorem.^25 The divergence theorem, neatly relating the divergence within a volume to the flux from it, is the vector's version of the fundamental theorem of calculus (7.2). The integral on the equation's left and the one on its right each arise in vector analysis more often than one might expect; when they do, (15.43) swaps the one integral for the other, often a profitable maneuver, so it is an important result.

^24 Where waves propagate through a material interface, the fields that describe the waves can be discontinuous, whereas the narrative depends on fields not only to be continuous but even to have continuous derivatives. Fortunately, the apparent difficulty is dismissed by imagining an appropriately graded, thin transition layer in which the fields and their derivatives vary smoothly rather than jumping sharply. A professional mathematician might develop this footnote into a formidable discursion, but to us it is admittedly a distraction, so we shall let the matter rest in that form.
^25 [47]
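The divergence theorem can be checked numerically on a concrete region. The Python sketch below (an added illustration; the unit cube and the field are arbitrary choices, not from the text) integrates ∇ · a over the cube and compares against the total outward flux through the six faces:

```python
# Verify the divergence theorem on the unit cube [0,1]^3
# for the sample field a = (x^2, y*z, -x*z).

def a(x, y, z):
    return (x*x, y*z, -x*z)

def div_a(x, y, z):
    return 2*x + z - x            # d(x^2)/dx + d(yz)/dy + d(-xz)/dz

n = 40                             # grid points per axis (midpoint rule)
h = 1.0 / n
mids = [(i + 0.5)*h for i in range(n)]

volume_integral = sum(div_a(x, y, z)
                      for x in mids for y in mids for z in mids) * h**3

# Outward flux through the six faces of the cube.
flux = 0.0
for u in mids:
    for v in mids:
        flux += (a(1.0, u, v)[0] - a(0.0, u, v)[0]) * h*h   # x = 1 and x = 0 faces
        flux += (a(u, 1.0, v)[1] - a(u, 0.0, v)[1]) * h*h   # y = 1 and y = 0 faces
        flux += (a(u, v, 1.0)[2] - a(u, v, 0.0)[2]) * h*h   # z = 1 and z = 0 faces

assert abs(volume_integral - flux) < 1e-3
print("volume integral of div a matches outward flux")
```

The subdivision argument of the proof is visible in the code: every interior face would be counted twice with opposite signs, so only the outer faces contribute to the flux.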
elements elements outer
25
[47. developed as follows. so a · dℓ. eqn.7.
15. where element = inner + outer . then the circulation about any one of these elements according to (15.38) and (15.1 is a second.39) is (∇ × a) · ds = a · dℓ. each element small enough to be regarded as planar and to experience essentially constant curl.43) swaps the one integral for the other. related theorem for directional curl. (15.2
Stokes' theorem
Corresponding to the divergence theorem of § 15.7.1 is a second, related theorem for directional curl, developed as follows. If an open surface, whether the surface be confined to a plane or be warped (as for example the surface of a bowl) in three dimensions, is subdivided into infinitesimal surface elements ds, each element small enough to be regarded as planar and to experience essentially constant curl, then the circulation about any one of these elements according to (15.38) and (15.39) is

    (∇ × a) · ds = ∮ over ℓelement of a · dℓ.

Even a single surface element, however, can have two distinct kinds of edge: inner edge shared with another element; and outer edge shared with no other element, because it belongs to the edge of the larger, overall surface. Interior elements naturally have only the former kind, but boundary elements have both kinds of edge, so we can elaborate the last equation to read

    (∇ × a) · ds = ∫ over ℓinner of a · dℓ + ∫ over ℓouter of a · dℓ

for a single element, where ℓelement = ℓinner + ℓouter. Adding all the elements together, we have that

    Σ over elements of ∫ (∇ × a) · ds
        = Σ over elements of ∫ over ℓinner of a · dℓ + Σ over elements of ∫ over ℓouter of a · dℓ;

but the inner sum is null, because it includes every interior edge twice, since each interior edge is shared by two elements such that dℓ2 = −dℓ1. Hence,

    Σ over elements of ∫ (∇ × a) · ds = Σ over elements of ∫ over ℓouter of a · dℓ;

that is,

    ∫S (∇ × a) · ds = ∮ a · dℓ,

which is Stokes' theorem,^25 neatly relating the directional curl over a (possibly nonplanar) surface to the circulation about it.

^25 [47]
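Stokes' theorem admits the same kind of numerical check as the divergence theorem, here on the unit square in the x-y plane with normal ẑ (an added illustration with an arbitrarily chosen sample field):

```python
# Verify Stokes' theorem on the unit square 0 <= x, y <= 1, z = 0,
# normal +z, for the planar sample field a = (-y^2, x*y).

def a(x, y):
    return (-y*y, x*y)             # (a_x, a_y) in the plane z = 0

def curl_z(x, y):
    return 3*y                     # d(a_y)/dx - d(a_x)/dy = y + 2y

n = 200
h = 1.0 / n
mids = [(i + 0.5)*h for i in range(n)]

# Surface integral of (curl a) . z^ over the square (midpoint rule).
surface = sum(curl_z(x, y) for x in mids for y in mids) * h*h

# Counterclockwise circulation about the square's boundary.
circ = 0.0
for t in mids:
    circ += a(t, 0.0)[0] * h       # bottom edge, travelling +x
    circ += a(1.0, t)[1] * h       # right edge, +y
    circ -= a(t, 1.0)[0] * h       # top edge, -x
    circ -= a(0.0, t)[1] * h       # left edge, -y

assert abs(surface - circ) < 1e-9
print("surface integral of curl matches boundary circulation")
```

Here, too, the proof's mechanism shows through: were the square cut into smaller tiles, each interior edge would be traversed once in each direction and cancel, leaving only the outer boundary.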
Plan

The following chapters are tentatively planned to complete the book.

    16. Fourier and Laplace transforms
    17. Probability
    18. Transformations to speed series convergence
    19. Basic special functions
    20. The wave equation
    21. Bessel functions
    22. Orthogonal polynomials
    23. Spatial Fourier transforms^1
    24. Numerical integration
    25. Iterative techniques^2

The following chapters are further tentatively planned but may have to await the book's second edition.

    26. Feedback control
    27. Energy conservation
    28. Stochastics
    29. Statistical mechanics

The following chapters are yet more remotely planned.

^1 Weyl and Sommerfeld forms are to be included here.
^2 The conjugate-gradient algorithm is to be treated here.
    30. Rotational dynamics
    31. Differential geometry and Kepler's laws
    32. Maxwell's equations^3
    33. Information theory

Remarks

Tentatively planned new appendices include "Additional derivations" (to prove a few obscure results) and maybe "A summary of pure complex-variable theory,"^4 either or both to precede the existing appendix "Manuscript history" in a future edition of the book.

Several of the tentatively planned chapters from Ch. 25 onward represent deep fields of study, each wanting full books of their own. If written according to plan, few if any of these chapters would treat much more than a few general results from their respective fields. The book is a book, not a library: if it tried to treat all physical topics it would never end, and the author would rather not see a four-digit page number here.

^3 The inclusion of such a chapter in the plan prompts one to ask why chapters on any number of other physical topics are not also included, on the Navier-Stokes equation, for example. The answer is threefold. First, Maxwell's equations happen to lie squarely within the author's own professional area of expertise, allowing him to deliver a better chapter on this particular topic than on some others, and a good chapter is not a bad thing. Second, Maxwell's equations are the equations of radio waves and light; any reader who does not find such things inherently interesting holds a different notion of "interesting" than the author does. Third, Maxwell's equations happen to provide a beautiful instance of a vector wave equation's scalarization. It is not planned to develop electromagnetics generally, but it is planned to derive the vector Helmholtz equation from the differential forms of Ampère's and Faraday's laws, to scalarize the result, and to develop the Poynting vector that quantifies the electromagnetic delivery of energy. As for the Navier-Stokes equation, it is not impossible that the author would yet amend the plan to include it, but even if so it is not yet clear to the author just where in the book to put it or how extensively to treat it.

^4 Any reader who has read Chs. 2 through 9 will have noticed that the author loves complex numbers but lacks commensurate enthusiasm for the standard development of the pure theory of the complex variable. Nevertheless, the standard development of the pure theory does have its proper place. The pure theory, though maybe unnecessarily abstract, is pretty, and it does provide another way to think about complex numbers. Even to the applied mathematician, scientists and engineers with advanced credentials occasionally expect you to be acquainted with the pure theory for technical-social reasons: to earn a technical degree you are not unlikely to have to obtain a passing grade from a professional mathematician teaching the pure theory, regardless of its practical use. For these reasons, a future edition might tersely outline the pure theory's main line of argument in an appendix.
A personal note to the reader

Derivations of Applied Mathematics belongs to the open-source tradition, which means that you as reader have a stake in it if you wish. If you have read the book, or a substantial fraction of it, as far as it has yet gone, then you can help to improve it. Check derivations.org/ for the latest revision, then write me at thb@derivations.org. Write as appropriate.

I would most expressly solicit your feedback on typos, misprints, false or missing symbols and the like; such errors only mar the manuscript, so no such correction is too small. On a higher plane, if you have found any part of the book unnecessarily confusing, please tell how so. On the other hand, if you find a part of the book insufficiently rigorous, then my response is likely to be that there is already a surfeit of fine professional mathematics books in print; this just isn't that kind of book. Nevertheless, the book does intend to derive every one of its results adequately from an applied perspective; if it fails to do so in your view, then maybe you and I should discuss the matter, and I do not discourage such criticism and would be glad to hear it. If you want to detail Hölder spaces and Galois theory, or whatever, then that is another matter. Finding the right balance is not always easy (the book might compromise by including a footnote that briefly suggests the outline of a more rigorous proof, but it tries not to distract the narrative by formalities that do not serve applications).

On no particular plane, if you would tell me what you have done with your copy of the book, what you have learned from it, or how you have cited it, then write at your discretion. At the time of this writing, readers are downloading the book at the rate of about four thousand copies per year directly through derivations.org, plus others who have installed the book as a Debian package or have acquired the book through secondary channels. Some fraction of those actually have read it, or a substantial fraction of it; now you stand among them.

More to come.
THB
Appendix A

Hexadecimal and other notational matters

The importance of conventional mathematical notation is hard to overstate. Such notation serves two distinct purposes: it conveys mathematical ideas from writer to reader, and it concisely summarizes complex ideas on paper to the writer himself. Without the notation, one would find it difficult even to think clearly about the math.

The right notation is not always found at hand, of course. New mathematical ideas occasionally find no adequate preëstablished notation, when it falls to the discoverer and his colleagues to establish new notation to meet the need. A more difficult problem arises when old notation exists but is inelegant in modern use.

Convention is a hard hill to climb. Nevertheless, slavish devotion to convention does not serve the literature well; for how else can notation improve over time, if writers will not incrementally improve it? Consider the notation of the algebraist Girolamo Cardano in his 1539 letter to Tartaglia:

    [T]he cube of one-third of the coefficient of the unknown is greater in value than the square of one-half of the number. [35]

If Cardano lived today, surely he would express the same thought in the form

    (a/3)^3 > (x/2)^2.

Good notation matters. Although this book has no brief to overhaul applied mathematical notation generally, it does seek to aid the honorable cause of notational evolution
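As a small aside (added here for illustration and not part of the original text; the function name and sample values are invented), Cardano's verbal rule and its modernized symbolic form express the same executable comparison:

```python
# Cardano's rule rendered in modern notation: (a/3)**3 > (x/2)**2,
# with a the coefficient of the unknown and x "the number".
def cardano_condition(a, x):
    cube_of_one_third_of_coefficient = (a / 3) ** 3
    square_of_one_half_of_number = (x / 2) ** 2
    return cube_of_one_third_of_coefficient > square_of_one_half_of_number

assert cardano_condition(6, 2)        # (6/3)**3 = 8  >  (2/2)**2 = 1
assert not cardano_condition(3, 10)   # (3/3)**3 = 1  >  (10/2)**2 = 25 is false
```

The point of the modern form is exactly that the verbal prescription collapses into a single line a reader can evaluate at sight.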
in a few specifics.

For example, the book sometimes treats 2π implicitly as a single symbol, so that (for instance) the quarter revolution or right angle is expressed as 2π/4 rather than as the less evocative π/2. As a single symbol, of course, 2π remains a bit awkward; one wants to introduce some new symbol ξ = 2π thereto. However, it is neither necessary nor practical nor desirable to leap straight to notational Utopia in one great bound. It suffices in print to improve the notation incrementally. If this book treats 2π sometimes as a single symbol—if such treatment meets the approval of slowly evolving convention—then further steps, the introduction of new symbols ξ and such, can safely be left incrementally to future writers.

A.1 Hexadecimal numerals

Treating 2π as a single symbol is a small step, unlikely to trouble readers much. A bolder step is to adopt from the computer science literature the important notational improvement of the hexadecimal numeral. No incremental step is possible here: either we leap the ditch or we remain on the wrong side. In this book, we choose to leap.

Consider for instance the decimal numeral 127, whose number suggests a significant idea to the computer scientist, but whose decimal notation does nothing to convey the notion of the largest signed integer storable in a byte. Much better is the base-sixteen hexadecimal notation 0x7F, which clearly expresses the idea of 2^7 − 1. To the reader who is not a computer scientist, the aesthetic advantage may not seem immediately clear from the one example, but consider the decimal number 2,147,483,647, which is the largest signed integer storable in a standard thirty-two bit word. In hexadecimal notation this is 0x7FFF FFFF, or in other words 2^0x1F − 1. The question is: which notation more clearly captures the idea?

To readers unfamiliar with the hexadecimal notation, to explain very briefly: hexadecimal represents numbers not in tens but rather in sixteens. The rightmost place in a hexadecimal numeral represents ones; the next place leftward, sixteens; the next place leftward, sixteens squared; the next, sixteens cubed; and so on. For example, the hexadecimal numeral 0x1357 means "seven, plus five times sixteen, plus thrice sixteen times sixteen, plus once sixteen times sixteen times sixteen." The sixteen symbols 0123456789ABCDEF respectively represent the numbers zero through fifteen, with sixteen being written 0x10.

All this raises the sensible question: why sixteen?¹ The answer is that sixteen is 2^4, so each of the sixteen hexadecimal digits represents a unique sequence of exactly four bits (binary digits). Binary, the smallest possible base, is the fundamental and is inherently theoretically interesting, but direct binary notation is unwieldy (the hexadecimal number 0x1357 is binary 0001 0011 0101 0111), so hexadecimal (base sixteen) is found to offer a convenient shorthand for binary; hexadecimal is written, so to speak, in proxy for binary.

The conventional hexadecimal notation is admittedly a bit bulky, and it unfortunately overloads the letters A through F, letters which when set in italics usually represent coefficients not digits. However, the real problem with the hexadecimal notation is not in the notation itself but rather in the unfamiliarity with it. The reason it is unfamiliar is that it is not often encountered outside the computer science literature; but it is not encountered because it is not used, and it is not used because it is not familiar, and so on in a cycle. It seems to this writer, on aesthetic grounds, that this particular cycle is worth breaking, so, for the sake of elegance and at the risk of challenging entrenched convention, this book employs hexadecimal throughout, using the hexadecimal for integers larger than 9. If you have never yet used the hexadecimal system, it is worth your while to learn it.

Observe that traditional decimal notation is unobjectionable for measured quantities like 63.7 miles, $1.32 million or 9.81 m/s², but its iterative tenfold structure meets little or no aesthetic support in mathematical theory. Specific numbers with physical dimensions attached appear seldom in this book, but where they do, naturally decimal not hexadecimal is used: vsound = 331 m/s rather than the silly-looking vsound = 0x14B m/s. Also, in some cases, such as where hexadecimal numbers are arrayed in matrices, this book may omit the cumbersome hexadecimal prefix "0x".

Combining the hexadecimal and 2π ideas, we note here for interest's sake that 2π ≈ 0x6.487F.

¹ An alternative advocated by some eighteenth-century writers was twelve. In base twelve, one quarter, one third and one half are respectively written 0.3, 0.4 and 0.6, and the hour angles (§ 3.6) come in neat increments of (0.06)(2π), so there are some real advantages to that base. Hexadecimal, however, besides having momentum from the computer science literature, is preferred for its straightforward proxy of binary.

A.2 Avoiding notational clutter

Good applied mathematical notation is not cluttered. Good notation does not necessarily include every possible limit, qualification, superscript and subscript. For example, the sum

    S = \sum_{i=1}^{M} \sum_{j=1}^{N} a_{ij}^2

might be written less thoroughly but more readably as

    S = \sum_{i,j} a_{ij}^2
if the meaning of the latter were clear from the context. When to omit subscripts and such is naturally a matter of style and subjective judgment, but in practice such judgment is often not hard to render. The balance is between showing few enough symbols that the interesting parts of an equation are not obscured visually in a tangle and a haze of redundant little letters, strokes and squiggles, on the one hand; and on the other hand showing enough detail that the reader who opens the book directly to the page has a fair chance to understand what is written there without studying the whole book carefully up to that point. Where appropriate, this book often condenses notation and omits redundant symbols.
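To close the appendix: the notational claims above lend themselves to quick mechanical checks. The following Python sketch is an illustrative addition, not part of the original text; it verifies the hexadecimal arithmetic of § A.1 and the sum abbreviation of § A.2 (the array a is arbitrary):

```python
import math

# -- A.1: hexadecimal place values and the quoted integer limits --
assert 0x1357 == 7 + 5*16 + 3*16**2 + 1*16**3   # "seven, plus five times sixteen, ..."
assert 0x7F == 2**7 - 1 == 127                   # largest signed byte value
assert 0x7FFFFFFF == 2**0x1F - 1 == 2147483647   # largest signed 32-bit value

# Each hexadecimal digit is one four-bit nibble: 0x1357 is 0001 0011 0101 0111.
bits = format(0x1357, '016b')
assert [bits[i:i+4] for i in range(0, 16, 4)] == ['0001', '0011', '0101', '0111']

# 2*pi to four hexadecimal places rounds to 0x6.487F.
assert round(2 * math.pi * 16**4) == 0x6487F

# -- A.2: the abbreviated sum equals the fully limited sum --
M, N = 3, 4
a = [[i + 2*j for j in range(N)] for i in range(M)]   # arbitrary a_ij
assert sum(a[i][j]**2 for i in range(M) for j in range(N)) == \
       sum(aij**2 for row in a for aij in row)
```

All of the assertions pass under any recent CPython; the abbreviated sum is literally the same computation with the redundant limit symbols dropped.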
Appendix B
The Greek alphabet
Mathematical experience finds the Roman alphabet to lack sufficient symbols to write higher mathematics clearly. Although not completely solving the problem, the addition of the Greek alphabet helps. See Table B.1.

When first seen in mathematical writing, the Greek letters take on a wise, mysterious aura. Well, the aura is fine—the Greek letters are pretty—but don't let the Greek letters throw you. They're just letters. We use them not because we want to be wise and mysterious¹ but rather because we simply do not have enough Roman letters. An equation like α² + β² = γ² says no more than does an equation like a² + b² = c², after all. The letters are just different.

Applied as well as professional mathematicians tend to use Roman and Greek letters in certain long-established conventional sets: abcd; fgh; ijkℓ; mn (sometimes followed by pqrst as necessary); pqr; st; uvw; xyzw. For the Greek: αβγ; δǫ; κλµνξ; ρστ; φψωθχ (the letters ζηθξ can also form a set, but these oftener serve individually). Greek letters are frequently paired with their Roman congeners as appropriate: aα; bβ; cgγ; dδ; eǫ; fφ; kκ; ℓλ; mµ; nν; pπ; rρ; sσ; tτ; xχ; zζ.² Some applications (even in this book) group the letters slightly differently, but most tend to group them approximately as shown. Even in Western languages other than English, mathematical convention seems to group the letters in about the same way. Naturally, you can use whatever symbols you like in your own private papers, whether the symbols are letters or not; but other people will find your work easier to read if you respect the convention when you can.

It is a special stylistic error to let the Roman letters ijkℓmn, which typically represent integers, follow the Roman letters abcdefgh directly when identifying mathematical quantities. The Roman letter i follows the Roman letter h only in the alphabet, almost never in mathematics. If necessary, p can follow h (curiously, p can alternatively follow n, but never did convention claim to be logical). Mathematicians usually avoid letters like the Greek capital Η (eta), which looks just like the Roman capital H, even though Η (eta) is an entirely proper member of the Greek alphabet. The Greek minuscule υ (upsilon) is avoided for like reason, for mathematical symbols are useful only insofar as we can visually tell them apart.³

¹ Well, you can use them to be wise and mysterious if you want to. It's kind of fun, actually, when you're dealing with someone who doesn't understand math—if what you want is for him to go away and leave you alone. Otherwise, we tend to use Roman and Greek letters in various conventional ways: Greek minuscules (lower-case letters) for angles; Roman capitals for matrices; e for the natural logarithmic base; f and g for unspecified functions; i, j, k, m, n, M and N for integers; P and Q for metasyntactic elements (the mathematical equivalents of foo and bar); t, T and τ for time; d, δ and ∆ for change; A, B and C for unknown coefficients; J, Y and H for Bessel functions; etc. Even with the Greek, there still are not enough letters, so each letter serves multiple conventional roles: for example, i as an integer, an a-c electric current, or—most commonly—the imaginary unit, depending on the context. Cases even arise in which a quantity falls back to an alternate traditional letter because its primary traditional letter is already in use: for example, the imaginary unit falls back from i to j where the former represents an a-c electric current. This is not to say that any letter goes. If someone wrote e² + π² = O² for some reason instead of the traditional a² + b² = c² for the Pythagorean theorem, you would not find that person's version so easy to read, would you? Mathematically, maybe it doesn't matter, but the choice of letters is not a matter of arbitrary utility only but also of convention, tradition and style: one of the early writers in a field has chosen some letter—who knows why?—then the rest of us follow. This is how it usually works. When writing notes for your own personal use, of course, you can use whichever letter you want. Probably you will find yourself using the letter you are used to seeing in print, but a letter is a letter; any letter will serve. Excepting the rule, for some obscure reason almost never broken, that i not follow h, the mathematical letter convention consists of vague stylistic guidelines rather than clear, hard rules. You can bend the convention at need. When unsure of which letter to use, just pick one; you can always go back and change the letter in your notes later if you need to.

² The capital pair Y Υ is occasionally seen but is awkward both because the Greek minuscule υ is visually almost indistinguishable from the unrelated (or distantly related) Roman minuscule v; and because the ancient Romans regarded the letter Y not as a congener but as the Greek letter itself, seldom used but to spell Greek words in the Roman alphabet. To use Y and Υ as separate symbols is to display an indifference to, easily misinterpreted as an ignorance of, the Graeco-Roman sense of the thing—which is silly, really, if you think about it, since no one objects when you differentiate j from i, or u and w from v—but, anyway, one is probably the wiser to tend to limit the mathematical use of the symbol Υ to the very few instances in which established convention decrees it. (In English particularly, there is also an old typographical ambiguity between Y and a Germanic, non-Roman letter named "thorn" that has practically vanished from English today, to the point that the typeface in which you are reading these words lacks a glyph for it—but which sufficiently literate writers are still expected to recognize on sight. This is one more reason to tend to avoid Υ when you can, a Greek letter that makes you look ignorant when you use it wrong and pretentious when you use it right. You can't win.) The history of the alphabets is extremely interesting. Unfortunately, a footnote in an appendix to a book on derivations of applied mathematics is probably not the right place for an essay on the topic, so we'll let the matter rest there.

³ No citation supports this appendix, whose contents (besides the Roman and Greek alphabets themselves, which are what they are) are inferred from the author's subjective observation of seemingly consistent practice in English-language applied mathematical publishing, plus some German and a little French, dating back to the 1930s. From the thousands of readers of drafts of the book, the author has yet to receive a single serious suggestion that the appendix were wrong—a lack that constitutes a sort of negative (though admittedly unverifiable) citation if you like. If a mathematical style guide exists that formally studies the letter convention's origins, the author is unaware of it (and if a graduate student in history or the languages who, for some reason, happens to be reading these words seeks an esoteric, maybe untouched topic on which to write a master's thesis, why, there is one).
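The visual ambiguity described above is concrete at the character level: Greek capital eta and Latin capital aitch render almost identically yet are distinct characters, and likewise Greek upsilon and Latin v. A brief Python illustration (an added aside, not from the original text):

```python
import unicodedata

# Greek capital eta looks just like Latin capital H, but the two
# are different characters with different codepoints.
eta = chr(0x0397)               # GREEK CAPITAL LETTER ETA
aitch = 'H'
assert eta != aitch
assert unicodedata.name(eta) == 'GREEK CAPITAL LETTER ETA'
assert unicodedata.name(aitch) == 'LATIN CAPITAL LETTER H'

# Likewise Greek minuscule upsilon versus Latin minuscule v.
upsilon = chr(0x03C5)           # GREEK SMALL LETTER UPSILON
assert unicodedata.name(upsilon) == 'GREEK SMALL LETTER UPSILON'
assert unicodedata.name('v') == 'LATIN SMALL LETTER V'
```

The convention of avoiding Η and υ in mathematical symbols amounts to avoiding pairs a reader (or a typesetter) cannot reliably tell apart on the page.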
Appendix C
Manuscript history
The book in its present form is based on various unpublished drafts and notes of mine, plus some of my wife Kristie's (née Hancock), going back to 1983 when I was fifteen years of age. What prompted the contest I can no longer remember, but the notes began one day when I challenged a high-school classmate to prove the quadratic formula. The classmate responded that he didn't need to prove the quadratic formula because the proof was in the class math textbook, then counterchallenged me to prove the Pythagorean theorem. Admittedly obnoxious (I was fifteen, after all) but not to be outdone, I whipped out a pencil and paper on the spot and started working. But I found that I could not prove the theorem that day. The next day I did find a proof in the school library,¹ writing it down, adding to it the proof of the quadratic formula plus a rather inefficient proof of my own invention to the law of cosines. Soon thereafter the school's chemistry instructor happened to mention that the angle between the tetrahedrally arranged four carbon-hydrogen bonds in a methane molecule was 109°, so from a symmetry argument I proved that result to myself, too, adding it to my little collection of proofs. That is how it started.² The book actually has earlier roots than these. In 1979, when I was twelve years old, my father bought our family's first eight-bit computer. The computer's built-in BASIC programming-language interpreter exposed
A better proof is found in § 2.10. Fellow gear-heads who lived through that era at about the same age might want to date me against the disappearance of the slide rule. Answer: in my country, or at least at my high school, I was three years too young to use a slide rule. The kids born in 1964 learned the slide rule; those born in 1965 did not. I wasn't born till 1967, so for better or for worse I always had a pocket calculator in high school. My family had an eight-bit computer at home, too, as we shall see.
functions for calculating sines and cosines of angles. The interpreter's manual included a diagram much like Fig. 3.1 showing what sines and cosines were, but it never explained how the computer went about calculating such quantities. This bothered me at the time. Many hours with a pencil I spent trying to figure it out, yet the computer's trigonometric functions remained mysterious to me. When later in high school I learned of the use of the Taylor series to calculate trigonometrics, into my growing collection of proofs the series went. Five years after the Pythagorean incident I was serving the U.S. Army as an enlisted troop in the former West Germany. Although those were the last days of the Cold War, there was no shooting war at the time, so the duty was peacetime duty. My duty was in military signal intelligence, frequently in the middle of the German night when there often wasn't much to do. The platoon sergeant wisely condoned neither novels nor cards on duty, but he did let the troops read the newspaper after midnight when things were quiet enough. Sometimes I used the time to study my German—the platoon sergeant allowed this, too—but I owned a copy of Richard P. Feynman's Lectures on Physics [13] which I would sometimes read instead. Late one night the battalion commander, a lieutenant colonel and West Point graduate, inspected my platoon's duty post by surprise. A lieutenant colonel was a highly uncommon apparition at that hour of a quiet night, so when that old man appeared suddenly with the sergeant major, the company commander and the first sergeant in tow—the last two just routed from their sleep, perhaps—surprise indeed it was. The colonel may possibly have caught some of my unlucky fellows playing cards that night—I am not sure— but me, he caught with my boots unpolished, reading the Lectures. I snapped to attention. 
The colonel took a long look at my boots without saying anything, as stormclouds gathered on the first sergeant's brow at his left shoulder, then asked me what I had been reading. "Feynman's Lectures on Physics, sir." "Why?" "I am going to attend the university when my three-year enlistment is up, sir." "I see." Maybe the old man was thinking that I would do better as a scientist than as a soldier? Maybe he was remembering when he had had to read some of the Lectures himself at West Point. Or maybe it was just the singularity of the sight in the man's eyes, as though he were a medieval knight at bivouac who had caught one of the peasant levies, thought to be illiterate, reading Cicero in the original Latin. The truth of this, we shall never know. What the old man actually said was, "Good work, son. Keep
it up." The stormclouds dissipated from the first sergeant's face. No one ever said anything to me about my boots (in fact as far as I remember, the first sergeant—who saw me seldom in any case—never spoke to me again). The platoon sergeant thereafter explicitly permitted me to read the Lectures on duty after midnight on nights when there was nothing else to do, so in the last several months of my military service I did read a number of them. It is fair to say that I also kept my boots better polished. In Volume I, Chapter 6, of the Lectures there is a lovely introduction to probability theory. It discusses the classic problem of the "random walk" in some detail, then states without proof that the generalization of the random walk leads to the Gaussian distribution

    p(x) = \frac{\exp(-x^2/2\sigma^2)}{\sigma\sqrt{2\pi}}.
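That the quoted density integrates to unity (the role of the 1/(σ√(2π)) factor whose derivation the narrative goes on to recount) can be confirmed numerically. The following Python sketch is an illustrative aside, not part of the original memoir:

```python
import math

# The Gaussian density p(x) = exp(-x**2/(2*sigma**2)) / (sigma*sqrt(2*pi)).
def p(x, sigma=1.0):
    return math.exp(-x**2 / (2.0 * sigma**2)) / (sigma * math.sqrt(2.0 * math.pi))

# Trapezoidal rule over a wide interval; the neglected tails beyond
# ten standard deviations are vanishingly small, so the total
# probability should come out to 1.
n, lo, hi = 4000, -10.0, 10.0
h = (hi - lo) / n
total = h * (sum(p(lo + k*h) for k in range(1, n)) + 0.5 * (p(lo) + p(hi)))
assert abs(total - 1.0) < 1e-9
```

Dropping the 1/(σ√(2π)) factor would scale the integral to σ√(2π) instead of 1, which is precisely why that factor must appear.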
For the derivation of this remarkable theorem, I scanned the book in vain. One had no Internet access in those days, but besides a well equipped gym the Army post also had a tiny library, and in one yellowed volume in the library—who knows how such a book got there?—I did find a derivation of the 1/(σ√(2π)) factor.³ The exponential factor, the volume did not derive. Several days later, I chanced to find myself in Munich with an hour or two to spare, which I spent in the university library seeking the missing part of the proof, but lack of time and unfamiliarity with such a German site defeated me. Back at the Army post, I had to sweat the proof out on my own over the ensuing weeks. Nevertheless, eventually I did obtain a proof which made sense to me. Writing the proof down carefully, I pulled the old high-school math notes out of my military footlocker (for some reason I had kept the notes and even brought them to Germany), dusted them off, and added to them the new Gaussian proof. That is how it has gone. To the old notes, I have added new proofs from time to time; and although somehow I have misplaced the original high-school leaves I took to Germany with me, the notes have nevertheless grown with the passing years. After I had left the Army, married, taken my degree at the university, begun work as a building construction engineer, and started a family—when the latest proof to join my notes was a mathematical justification of the standard industrial construction technique for measuring the resistance-to-ground of a new building's electrical grounding system—I was delighted to discover that Eric W. Weisstein had compiled
³ The citation is now unfortunately long lost.
and published [51] a wide-ranging collection of mathematical results in a spirit not entirely dissimilar to that of my own notes. A significant difference remained, however, between Weisstein's work and my own. The difference was and is fourfold: 1. Number theory, mathematical recreations and odd mathematical names interest Weisstein much more than they interest me; my own tastes run toward math directly useful in known physical applications. The selection of topics in each body of work reflects this difference. 2. Weisstein often includes results without proof. This is fine, but for my own part I happen to like proofs. 3. Weisstein lists results encyclopedically, alphabetically by name. I organize results more traditionally by topic, leaving alphabetization to the book's index, that readers who wish to do so can coherently read the book from front to back.4 4. I have eventually developed an interest in the free-software movement, joining it as a Debian Developer [10]; and by these lights and by the standard of the Debian Free Software Guidelines (DFSG) [11], Weisstein's work is not free. No objection to non-free work as such is raised here, but the book you are reading is free in the DFSG sense. A different mathematical reference, even better in some ways than Weisstein's and (if I understand correctly) indeed free in the DFSG sense, is emerging at the time of this writing in the on-line pages of the generalpurpose encyclopedia Wikipedia [52]. Although Wikipedia reads unevenly, remains generally uncitable,5 forms no coherent whole, and seems to suffer a certain competition among some of its mathematical editors as to which
4 There is an ironic personal story in this. As children in the 1970s, my brother and I had a 1959 World Book encyclopedia in our bedroom, about twenty volumes. It was then a bit outdated (in fact the world had changed tremendously in the fifteen or twenty years following 1959, so the encyclopedia was more than a bit outdated) but the two of us still used it sometimes. Only years later did I learn that my father, who in 1959 was fourteen years old, had bought the encyclopedia with money he had earned delivering newspapers daily before dawn, and then had read the entire encyclopedia, front to back. My father played linebacker on the football team and worked a job after school, too, so where he found the time or the inclination to read an entire encyclopedia, I'll never know. Nonetheless, it does prove that even an encyclopedia can be read from front to back. 5 Some "Wikipedians" do seem actively to be working on making Wikipedia authoritatively citable. The underlying philosophy and basic plan of Wikipedia admittedly tend to thwart their efforts, but their efforts nevertheless seem to continue to progress. We shall see. Wikipedia is a remarkable, monumental creation.
of them can explain a thing most reconditely, it is laden with mathematical knowledge, including many proofs, which I have referred to more than a few times in the preparation of this text.

A book can follow convention or depart from it; and though occasional departure might render a book original, frequent departure seldom renders a book good. Convention is a peculiar thing: at its best, it evolves or accumulates only gradually, patiently storing up the long, hidden wisdom of generations past; yet convention, in all its richness, in all its profundity, can, sometimes, even in such an inherently unconservative discipline as mathematics, stagnate at a local maximum, a hillock whence higher ground is achievable not by gradual ascent but only by descent first, or by a leap. Herein arises the ancient dilemma: descent risks a bog; a leap risks a fall. One ought not run such risks without cause, and crass popularity can be only one consideration, to be balanced against other factors. (The book might gain even more readers, after all, had it no formulas, and painted landscapes in place of geometric diagrams! I like landscapes, too, but anyway you can see where that line of logic leads.) In the meantime, though, the book has a mission; so the book does risk. It risks one leap at least: it employs hexadecimal numerals.

This book is bound to lose at least a few readers for its unorthodox use of hexadecimal notation ("The first primes are 2, 3, 5, 7, 0xB, . . ."). Perhaps it will gain a few readers for the same reason; time will tell. I started keeping my own theoretical math notes in hex a long time ago, at first to prove to myself that I could do hexadecimal arithmetic routinely and accurately with a pencil, later from aesthetic conviction that it was the right thing to do. Like other applied mathematicians, I've several own private notations, and in general these are not permitted to burden the published text. The hex notation is not my own, though. It existed before I arrived on the scene; and since I know of no math book better positioned to risk its use, I have with hesitation and no little trepidation resolved to let this book use it. Some readers will approve; some will tolerate; undoubtedly some will do neither. The views of the last group must be respected, but in any case the book does both follow convention and depart from it. Whether this particular book is original or good, neither or both, is for the reader to tell.

More substantively: despite the book's title, adverse criticism from some quarters for lack of rigor is probably inevitable; nor is such criticism necessarily improper from my point of view. Still, serious books by professional mathematicians tend to be for professional mathematicians, which is understandable but does not always help the scientist or engineer who wants to use the math to model something. The ideal author of such a book as this would probably hold two doctorates: one in mathematics and the other in engineering or the like. The ideal author lacking, I have written the book.

So here you have my old high-school notes, extended over twenty years and through the course of two-and-a-half university degrees, now partly typed and revised for the first time as a LaTeX manuscript. Where this manuscript will go in the future is hard to guess. Perhaps the revision you are reading is the last; who can say? The manuscript met an uncommonly enthusiastic reception at Debconf 6 [10] May 2006 at Oaxtepec, Mexico; and in August of the same year it warmly welcomed Karl Sarnow and Xplora Knoppix [54] aboard as the second official distributor of the book. Such developments augur well for the book's future at least. But in the meantime, if anyone should challenge you to prove the Pythagorean theorem on the spot, why, whip this book out and turn to § 2.10. That should confound 'em.

THB
This book is a guide through a playlist of Calculus instructional videos. The format, level of details and rigor, and progression of topics are consistent with a semester long college level second Calculus course, or equivalently, together with the
This book is the exercise companion to A youtube Calculus Workbook (part II). Its structures in modules mirrors that of the workbook. The book includes, for 31 topics, a worksheet of exercises without solutions, which...
A reference for students in basic Algebra covering topics on counting numbers and phrases, equalities and inequalities, addition, subtraction, signed number phrases, co-multiplication and values among others.
This book starts with a very brief development of signals and systems. It then develops the characteristics and the design of finite impulse response (FIR) digital filters. That is followed by developing the characteristics and the design of infinite impulse response (IIR) digital Filters.
Digital Signal Processing: A User's Guide is intended both for the practicing engineer with a basic knowledge of DSP and for a second course in signal processing at the senior or first-year postgraduate level. FFTs, digital filter design, adaptive filters, and multirate signal processing are... | 677.169 | 1 |
How does mathematics impact everyday events? The purpose of this book is to show a range of examples where mathematics can be seen at work in everyday life. From money (APR, mortgage repayments, personal finance), simple first and second order ODEs, sport and games (tennis, rugby, athletics, darts, tournament design, soccer, snooker), business (stock control, linear programming, check digits, promotion policies, investment), the social sciences (voting methods, Simpson's Paradox, drug testing, measurements of inequality) to TV game shows and even gambling (lotteries, roulette, poker, horse racing), the mathematics behind commonplace events is explored.
Spectrum as the word itself signifies is the coverage of the concepts, characteristics, etc of any subject or topic in the widest possible manner. And Arihant is back with its popular Mathematics Spectrum to provide wide coverage of the concepts covered under the JEE Main and Advanced Mathematics syllabi.This monthly magazine will ensure thorough understanding of the concepts covered under JEE Main and Advanced syllabi through various sections like Master the NCERT, Board Exam Corner, School Practice, Olympiad Practice and Revision through Concept MapsThe first edition of this book sold more than 100,000 copies—and this new edition will show you why! Schaum's Outline of Discrete Mathematics shows you step by step how to solve the kind of problems you're going to find on your exams. And this new edition features all the latest applications of discrete mathematics to computer science! | 677.169 | 1 |
An Introduction to Fundamental Computer Algorithms for Spatial Analysis
This work critically re-examines decades of work on the problem of Space from the traditional point of view, and offers a new perspective, based on the author's own extensive research on spatial data structures, spatial models of perception and adjacency, Geo-informatics applications, and…
Virtual and Classical
The Only Undergraduate Textbook to Teach Both Classical and Virtual Knot Theory
An Invitation to Knot Theory: Virtual and Classical gives advanced undergraduate students a gentle introduction to the field of virtual knot theory and mathematical research. It provides the foundation for students to…
Submanifolds and Holonomy, Second Edition explores recent progress in the submanifold geometry of space forms, including new methods based on the holonomy of the normal connection. This second edition reflects many developments that have occurred since the publication of its popular predecessor.…
Recursion Theory and Descriptive Complexity
This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories…
Differential Geometry of Curves and Surfaces, Second Edition takes both an analytical/theoretical approach and a visual/intuitive approach to the local and global properties of curves and surfaces. Requiring only multivariable calculus and linear algebra, it develops students' geometric intuition…
Cremona Groups and the Icosahedron focuses on the Cremona groups of ranks 2 and 3 and describes the beautiful appearances of the icosahedral group A5 in them. The book surveys known facts about surfaces with an action of A5, explores A5-equivariant geometry of the quintic del Pezzo threefold V5,…
A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry…
An Illustrated Introduction to Topology and Homotopy explores the beauty of topology and homotopy theory in a direct and engaging manner while illustrating the power of the theory through many, often surprising, applications. This self-contained book takes a visual and rigorous approach that…
An Introduction
The concept of the Euclidean simplex is important in the study of n-dimensional Euclidean geometry. This book introduces for the first time the concept of hyperbolic simplex as an important concept in n-dimensional hyperbolic geometry.
Following the emergence of his gyroalgebra in 1988, the…
This volume, first published in 2000, presents a classical approach to the foundations and development of the geometry of vector fields, describing vector fields in three-dimensional Euclidean space, triply-orthogonal systems and applications in mechanics. Topics covered include Pfaffian forms,…
Designed for a one-semester course at the junior undergraduate level, Transformational Plane Geometry takes a hands-on, interactive approach to teaching plane geometry. The book is self-contained, defining basic concepts from linear and abstract algebra gradually as needed.
The text adheres to the… | 677.169 | 1 |
Details about Mathematics:
For instructors of liberal arts mathematics classes who focus on problem-solving, Harold Jacobs's remarkable textbook has long been the answer, helping teachers connect with math-anxious students. Drawing on over thirty years of classroom experience, Jacobs shows students how to make observations, discover relationships, and solve problems in the context of ordinary experience.
Rent Mathematics 3rd edition today, or search our site for other textbooks by Harold R. Jacobs. Every textbook comes with a 21-day "Any Reason" guarantee. Published by W. H. Freeman. | 677.169 | 1 |
Description: CK-12 Foundation's Algebra FlexBook is an introduction to algebraic concepts for the high school student. Topics include: Equations and Functions, Real Numbers, Equations of Lines, Solving Systems of Equations and Quadratic Equations. | 677.169 | 1 |
About the book:
Treating numerical methods as a tool for use by the engineer or applied scientist, this introductory text is concerned with the application of such methods to the solution of algebraic, transcendental and differential equations. With a minimum of mathematical theory needed for understanding, the book concentrates on the methods likely to be needed by students in training and later in their careers. The emphasis, as far as differential equations are concerned, is towards finite difference methods, since they form the basis of most introductory courses on numerical techniques; however, an introduction to integral methods is also given. For the same reason, the depth of coverage given to ordinary differential equations is rather greater than that given to partial differential equations (especially hyperbolic equations). Nevertheless, the material included on PDEs would be suitable for leading on to more advanced courses, such as one in computational fluid dynamics. Worked examples and problems are provided: many of the methods can be used with a simple electronic calculator, others involve so much computation that a programmable device is required, and some will need a digital computer. This text is intended for undergraduate engineers and applied scientists. It will serve a standard introductory course of approximately 40 hours' duration, and equip students to tackle more advanced courses on specialized topics.
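The finite-difference emphasis described above can be made concrete with a small sketch. The forward Euler scheme below, with the test equation dy/dx = -2y and step size h = 0.001, is my own minimal illustration and is not taken from the book:

```python
# Forward Euler: the simplest finite-difference method for y' = f(x, y).
# The test problem y' = -2y, y(0) = 1 (exact solution e^{-2x}) is chosen
# here only for illustration; it does not come from the book.
import math

def euler(f, x0, y0, h, n):
    """Advance y' = f(x, y) from (x0, y0) in n steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)   # replace dy/dx by the forward difference (y_{k+1} - y_k)/h
        x += h
    return y

approx = euler(lambda x, y: -2.0 * y, 0.0, 1.0, 0.001, 1000)  # integrate to x = 1
exact = math.exp(-2.0)
print(approx, exact)  # the two values differ by less than 1e-3
```

Halving h roughly halves the error, which is the expected first-order behaviour of the scheme.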
I received an A+ in MAT 342 at ASU in Tempe and ever since have been tutoring students in Linear Algebra. The basics are the understanding of matrices and the Gauss-Jordan method. Later you get into inverses, proofs of a vector space (zero, scalar, addition), eigenvalues, the dot product, and much more.
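As a minimal illustration of the Gauss-Jordan method mentioned above, the toy solver below reduces an augmented matrix [A | b] to solve a linear system. The 2×2 example system is invented for illustration, and no pivoting safeguards are attempted:

```python
# A toy Gauss-Jordan elimination on an augmented matrix [A | b].
# The example system (x + 2y = 5, 3x + 4y = 6) is made up for
# illustration; a real solver would also do partial pivoting.
def gauss_jordan(m):
    rows = len(m)
    for i in range(rows):
        pivot = m[i][i]
        m[i] = [v / pivot for v in m[i]]          # scale pivot row so the pivot is 1
        for r in range(rows):
            if r != i:
                factor = m[r][i]                   # eliminate column i from the other rows
                m[r] = [a - factor * b for a, b in zip(m[r], m[i])]
    return [row[-1] for row in m]                  # the solution column

x, y = gauss_jordan([[1.0, 2.0, 5.0], [3.0, 4.0, 6.0]])
print(x, y)  # -> -4.0 4.5
```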
Algebra is the Arabic al gebr (the equalisation), "the supplementing and equalising (process);" so called because the problems are solved by equations, and the equations are made by supplementary terms. Fancifully identified with the Arabian chemist Gebir.
en.wikipedia.org/wiki/Differential_equation
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical ...

tutorial.math.lamar.edu/Classes/DE/DE.aspx
Cheat Sheets & Tables: Algebra, Trigonometry and Calculus cheat sheets and a variety of tables. Class Notes: Each class has notes available. Most of the classes have ...

en.wikipedia.org/wiki/Ordinary_differential_equation
In mathematics, an ordinary differential equation or ODE is a differential equation containing a function or functions of one independent variable and its derivatives.

ocw.mit.edu/courses/mathematics/.../index.htm
The laws of nature are expressed as differential equations. Scientists and engineers must know how to model the world in terms of differential equations, and how to ...

How to Solve Differential Equations. A full course in differential equations involves applications of derivatives to be studied after two or three semester courses in ...

hyperphysics.phy-astr.gsu.edu/hbase/diff.html
Differential Equations. A differential equation is an equation which contains the derivatives of a variable, such as the equation. Here x is the variable and the ...
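As a concrete instance of the definition in the last snippet, the sketch below numerically checks that y = eˣ satisfies the differential equation y' = y. The central-difference step size is an arbitrary choice of mine, not taken from the linked pages:

```python
# Numerically check that y(x) = exp(x) satisfies y' = y, the standard
# first example of a differential equation. The derivative is
# approximated by a central difference with a small step h.
import math

def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)   # central-difference approximation

for x in [0.0, 0.5, 1.0, 2.0]:
    lhs = deriv(math.exp, x)   # y'(x)
    rhs = math.exp(x)          # y(x)
    assert abs(lhs - rhs) < 1e-4, (x, lhs, rhs)
print("y = exp(x) satisfies y' = y at the sampled points")
```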
Book is in Good Condition. Good copy with light amount of wear. Pages may include limited notes and highlighting. Fast shipping from Amazon! Qualifies for Prime Shipping and FREE standard shipping for orders over

Course 2 consists of a structured approach to a variety of topics such as ratios, percents, equations, inequalities, geometry, graphing and probability.
Test Taking Strategies provide a guide to problem solving approaches that are necessary for success on standardized tests. Checkpoint Quizzes assess student understanding after every few lessons. Daily Guided Problem Solving in the text is supported by the Guided Problem Solving worksheet expanding the problem, guiding the student through the problem solving process and providing extra practice.
Top Customer Reviews
Each year we are required to purchase a very specific book for school so that all students have the same copy - why the school doesn't just order in bulk I haven't a clue but - I was thankful this seller took the time to put correct ISBN since many of the books look alike but are not exact.
A' Level Mathematics (Full AS + A2), Distance Learning Course
Important information
A Level
Distance Learning
Duration: 2 Years
When: Flexible
Description
This course aims to develop a deeper understanding of some of the most important topics in mathematics which play a crucial role in the world around us, in everything from technology to our understanding of the universe. The course builds upon existing knowledge as well as introducing completely new and challenging topics.
Objectives of the A' Level Mathematics(Full AS+A2)
The objective of the course is to:
help students to gain knowledge and understanding of Mathematics through direct study of the original sources
encourage and develop your enthusiasm for maths as well as give you the chance to form your own personal responses to the set texts chosen for study.
help you to further enhance your mathematical and evaluative skills, such as geometry and algebra, together with additional tools like the kinematic equations for mechanics-based questions.
develop a deeper understanding of some of the most important topics in mathematics, which play a crucial role in the world around us.
Important information
Price for Emagister users: We are offering a 20% discount this month for all courses that are paid for in full and online.
Venues
Where and when
Starts
Location
Flexible
Distance Learning
Frequent Asked Questions
· What are the objectives of this course?
Key Topics
There are many modules that make up A-Level Mathematics. The main modules are as follows:
AS Level
Core 1
Core 2
Mechanics 1
A2 Level
Core 3
Core 4
Mechanics 2
Opinions
What I would highlight: My tutor was very helpful and was always there if I needed help or was stuck on something; she always helped and got back to me on time. I learned a lot from this course and enjoyed it very much.
Would you recommend this course? Yes.
What you'll learn on the course
GCSE Mathematics
Mathematics
IT
Maths
Teachers and trainers (1)
Support Advisor
Support Advisor
Course programme
A Level Mathematics (Full AS + A2) Course.
How is the course structured?
The A' Level Mathematics (AS/A2) course is divided into six comprehensive modules: | 677.169 | 1 |
Many students worry about starting algebra. Pre-Algebra Essentials For Dummies provides an overview of critical pre-algebra concepts to help new algebra students (and their parents) take the next step without fear. Free of ramp-up material, Pre-Algebra Essentials For Dummies contains content focused on key topics only. It provides discrete explanations... more...
Boost academic achievement for all students in your mathematics classroom! This timely resource leads the way in applying RTI to mathematics instruction. The authors describe how the three tiers can be implemented in specific math areas and illustrate RTI procedures through case studies. Aligned with the NMAP final report and IES practice guide,... more...
Ramsey theory is a fast-growing area of combinatorics with deep connections to other fields of mathematics such as topological dynamics, ergodic theory, mathematical logic, and algebra. The area of Ramsey theory dealing with Ramsey-type phenomena in higher dimensions is particularly useful. Introduction to Ramsey Spaces presents in a systematic... more...
In the past 15 years, the theory of crossed products has enjoyed a period of vigorous development. The foundations have been strengthened and reorganized from new points of view, especially from the viewpoint of graded rings. The purpose of this monograph is to give, in a self-contained manner, an up-to-date account of various aspects of this development,... more...
Applied mathematics connects the mathematical theory to the reality by solving real world problems and shows the power of the science of mathematics, greatly improving our lives. Therefore it plays a very active and central role in the scientific world.This volume contains 14 high quality survey articles — incorporating original results and... more... | 677.169 | 1 |
Details about A Problem Solving Approach to Mathematics for Elementary School Teachers plus MyMathLab:
This best-selling text continues as a comprehensive, skills-based resource for future teachers. In this edition, students will benefit from additional emphasis on active and collaborative learning. Revised and updated contents will better prepare your students for the day when they will be teachers with students of their own.
Rent A Problem Solving Approach to Mathematics for Elementary School Teachers plus MyMathLab 9th edition today, or search our site for other textbooks by Rick Billstein. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Addison Wesley. | 677.169 | 1 |
Precise Calculator has arbitrary precision and can calculate with complex numbers, fractions, vectors and matrices. Has more than 150 mathematical functions and statistical functions and is programmable (if, goto, print, return, for). | 677.169 | 1 |
Details about Elements of Number Theory:
This book is a concise introduction to number theory and some related algebra, with an emphasis on solving equations in integers. Finding integer solutions led to two fundamental ideas of number theory in ancient times - the Euclidean algorithm and unique prime factorization - and in modern times to two fundamental ideas of algebra - rings and ideals. The development of these ideas, and the transition from ancient to modern, is the main theme of the book. The historical development has been followed where it helps to motivate the introduction of new concepts, but modern proofs have been used where they are simpler, more natural, or more interesting. These include some that have not yet appeared in textbooks, such as a treatment of the Pell equation using Conway's theory of quadratic forms. Also, this is the only elementary number theory book that includes significant applications of ideal theory. It is clearly written, well illustrated, and supplied with carefully designed exercises, making it a pleasure to use as an undergraduate textbook or for independent study. John Stillwell is Professor of Mathematics at the University of San Francisco. He is the author of several highly regarded books published by Springer-Verlag, including Mathematics and Its History (Second Edition 2001), Numbers and Geometry (1997) and Elements of Algebra (1994).
Rent Elements of Number Theory 1st edition today, or search our site for other textbooks by John Stillwell. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Springer. | 677.169 | 1 |
LifeLongLearning.com - Peterson's
An online database of undergraduate and graduate-level college distance learning courses. Search by institution name or browse by subjects such as mathematics, mathematical statistics, and mathematics education. You may also search categories by keyword, choose courses for undergraduate or graduate credit, and specify a maximum cost.
more>>
S.O.S. Mathematics - Dept. of Mathematical Sciences, Univ. of Texas at El Paso
Internet courses/tutorials to help students do homework, prepare for a test, or get ready for a class. The material presented here reviews the most important results, techniques and formulas in college and pre-college mathematics. The learning units are presented in worksheet format and require the student's active participation. Subjects covered include Algebra, Trigonometry, Calculus, Differential Equations, Complex Variables, and Matrix Algebra.
more>>
Academic OR/MS Courses on the Web - Armann Ingolfsson
Links to Web pages of courses in Operations Research/Management Science and related fields offered at universities or colleges all over the world. Also short courses and tutorials on OR/MS-related topics. By Armann Ingolfsson for the INFORMS Forum on
...more>>
AdventureOnline.com - Learning Outfitters, Inc.
Interdisciplinary curricula that use the Web to connect students to expeditions from around the world, creating motivating, real-time materials for K-12 classrooms. Thematic content includes traditional and online lessons, games, quizzes, research stations,
...more>>
Algebra PowerPoint Lessons - James Wenk
PowerPoint lessons to purchase, designed to help teachers integrate technology and improve student performance and interest in algebra. Plans meet state standards and can be modified to suit teachers' needs.
...more>>
The Algebra Project - The Algebra Project, Inc.
The Algebra Project seeks to impact the struggle for citizenship and equality by assisting students in inner city and rural areas to achieve mathematics literacy. Read about the project's teacher training and support programs such as MUSIC (Multi-User
...more>>
AP Computer Science - College Board Online
General information about exams and courses (A and AB), links to archives of free-response questions in C++, the AP CS mailing list, development committee access, related websites, and an on-line store for College Board books. Maintained by the College
...more>>
AP Statistics - BB&N Upper School
A course from the Buckingham Browne & Nichols School. Advanced Placement Statistics acquaints students with the major concepts and tools for collecting, analyzing, and drawing conclusions from data, featuring work on projects involving the hands-on
...more>>
The Art of Problem Solving - Richard Rusczyk
An online school and community, with a message board and widely used textbooks, e.g., The Art of Problem Solving by Sandor Lehoczky and Richard Rusczyk. See, in particular, the AoPS's Math Jams, guided improvisational problem solving sessions. Also, learn
...more>>
Ask Dr. Callahan - Dale Callahan
For the homeschooling parent who needs high school math and science training material: textbooks, videos for the parent and child, teaching guidelines, tests and test grading guides as well as support for the duration of the course. Courses in geometry,
...more>>
BizWorld: Real World Math
BizWorld is a real world math program in which students apply their schoolwork to the world of business. Hands-on activities correlated to National math, economic and social studies standards, lead students through the steps of starting and running businesses.
...more>>
Carnegie Learning
A suite of Cognitive Tutors for Algebra I, Geometry, Algebra II and College Algebra. Each course combines computers and paper-based components with professional development, assisting students while ensuring an important role for the teacher. These products
...more>> | 677.169 | 1 |
This book presents the Riemann Hypothesis, connected problems, and a taste of the body of theory developed towards its solution. It is targeted at the educated non-expert. Almost all the material is accessible to any senior mathematics student, and much is accessible to anyone with some university mathematics. more...
The book presents the theory of multiple trigonometric sums constructed by the authors. Following a unified approach, the authors obtain estimates for these sums similar to the classical I. M. Vinogradov's estimates and use them to solve several problems in analytic number theory. They investigate trigonometric integrals, which are often... more...
For hundreds of years, the study of elliptic curves has played a central role in mathematics. The past century in particular has seen huge progress in this study, from Mordell's theorem in 1922 to the work of Wiles and Taylor-Wiles in 1994. Nonetheless, there remain many fundamental questions where we do not even know what sort of answers to expect.... more...
This book introduces advanced numerical-functional analysis to beginning computer science researchers. The reader is assumed to have had basic courses in numerical analysis, computer programming, computational linear algebra, and an introduction to real, complex, and functional analysis. Although the book is of a theoretical nature, each chapter contains... more...
Sums of Squares of Integers covers topics in combinatorial number theory as they relate to counting representations of integers as sums of a certain number of squares. The book introduces a stimulating area of number theory where research continues to proliferate. It is a book of "firsts" - namely it is the first book to combine Liouville's elementary... more... | 677.169 | 1 |
The Humongous Book of Calculus Problems
Overview
Now students have nothing to fear: solutions here skip no steps and leave nothing unsimplified. Finally, everything is made perfectly clear. Students will be prepared to solve those obscure problems that were never discussed in class but always seem to find their way onto exams.

- Includes 1,000 problems with comprehensive solutions
- Annotated notes throughout the text clarify what's being asked in each problem and fill in missing steps
- Kelley is a former award-winning calculus teacher
Related Subjects
Meet the Author
W. Michael Kelley is a former award-winning calculus teacher and author of The Complete Idiot's Guide to Calculus, The Complete Idiot's Guide to Precalculus, and The Complete Idiot's Guide to Algebra. He is also the founder and editor of calculus-help.com, which helps thousands of students conquer their math anxiety every month.
Customer Reviews
Most Helpful Customer Reviews
Humongous Book of Calculus Problems: 4.2 out of 5, based on 5 reviews.
Countrylife1
More than 1 year ago
The detail inside is written with side chats and notes that help you follow through. This book is better for those that like more details and insight on how a problem was done, like when teachers actually worked through several problems in detail during class before the student would have to go home and struggle like they do today.
Guest
More than 1 year ago
If you found yourself having a hard time reading or understanding your calculus textbook, this book would help you a lot in terms of the language and practice problems! 5 stars!
Cynthia A. Harris
Resources
I am an instructor of mathematics at Triton College in River Grove, Illinois. This Website is developed to enhance communication with my students in Math 116 (Mathematics for Elementary Teachers I), Math 117 ( Mathematics for Elementary Teachers II), Mat 110 (College Algebra) and Mat 111 (College Algebra/Trig). In the boxes below, I have listed many resources that can help you, the student succeed in my courses.
I have included links to my syllabi, math (tutoring) zone and the Triton Mathematics Department website.
Letter to the online student
Welcome to my website! In order to succeed in my Mat 110, Mat 116 and Mat 117 online courses, you will need to read carefully, be disciplined and take advantage of the resources available to you. Read everything you have received in the letter I sent home (most of which is also posted on the MyMathLab website). Read once, twice, even three times, especially when you are reading your textbook.
I have provided you with dates for completion of homework, projects and exams. You must adhere to those deadlines. There are a myriad of resources available through MyMathLab (link at left); use them. Good luck this semester!
Letter to students in my "live" classes:
There are many resources available to you: my office hours, e-mail or telephone, Triton's Math Tutoring Zone, MyMathLab, the AT center for computer use, the Educational Technology Resource Center and this website. I have described these resources below, and there are links to them on this page. Also check out "Our Advice to You" at the Triton Mathematics Department site (link at the left).
Math Tutoring Zone: The Math Tutoring Zone is part of Triton's Academic Success Center, which is located in A100. When you go to the Math Zone for the first time, you will sign in. Each time you go, you will sign in and out. The Math Zone has computers to practice problems and tutors who will answer questions or sit down with you and help with homework. The tutors can also help you with MyMathLab or the graphing calculator.
The Educational Technology Resource Center (ETRC): The Educational Technology Resource Center, located in A100, is the place you go to pick up materials, turn in projects (if applicable) and take the midterm (if applicable) and final exam. In order to take a test, you must make an appointment and show a schedule and a Triton ID. The hours of the ETRC are listed at their webpage (click on ETRC).
Contacting Me You may contact me by calling me, e-mailing me (home or work) or by stopping by during my office hours. I will also be happy to meet with you by appointment at a time outside my office hours, just call me to set it up. You can access all of my contact information by clicking on the link at the bottom of the page.
Videos, powerpoint There are videos and powerpoint lectures for all of my classes. Check them out at MyMathLab. | 677.169 | 1 |
Before you even consider getting some book, I urge you to point your
browser to the Learning Center...
(1) View some of the introductory video screencasts (you'll want
sound!). In the Learning Center, click the top item, "Looking to get
started? Watch a video screencast". On the Screencast & Video Gallery
page, go to Tutorials > Getting Started. I recommend Cliff Hastings'
"Hands-on Start to Mathematica-Part 1" and Jeff Todd's much longer "An
Introduction to Mathematica".
You can certainly profit from watching these before your Mathematica
arrives, but they are good to review, too, with a copy of Mathematica
installed.
(2) Perhaps browse in the free PDF tutorial files (which are just
print-formatted versions of what's included in the Documentation Center
you get with Mathematica). In the Learning Center, these are in the
section "Want more detailed information? Read one of our tutorials."
(3) Once you have your copy of Mathematica, open the Documentation
Center from the main menu Help and then in its menu click the icon that
looks like a book in order to view documentation in an organized
"Virtual Book". You'll see a separate window with a table of contents
listing virtual chapters. You can expand each such chapter to show its
sections.
This virtual book can provide a framework for learning. Sooner or later,
you'll doubtless want to use the Documentation Center proper, jumping
around among tutorials, how-tos, individual function reference pages, etc.
On 6/20/2010 3:46 AM, Olive wrote:
> I am new to mathematica and I am looking at a book to learn
> mathematica (in fact I have not yet mathematica but I am considering
> purchasing a home license). I would say no more than 300-400 pages that
> explain how the system works, basic of programming etc... Any thought?
>
> Olive
>
>
>
--
Murray Eisenberg murrayeisenberg at gmail.com
80 Fearing Street phone 413 549-1020 (H)
Amherst, MA 01002-1912 | 677.169 | 1 |
The new feature, the Euclidean Fountain, allows users to perform an action on both sides of an equation. The fountain facilitates problem solving for users who wish to simplify a problem by applying the same operation to both sides.
Algebra Touch features:
* Appropriate for learning or reviewing of algebra
* For students of any age
* Drag to rearrange, click to simplify, and draw lines to eliminate identical terms
* Distribute by clicking and sliding, Factor Out by dropping terms on one another
* Easily switch between lessons and randomly generated practice problems
* Users may create their own sets of problems
* Topics include: Simplification, Like Terms, Commutativity, and Order of Operations
* Additional topics: Factorization, Prime Numbers, Elimination, and Isolation
* Advanced topics: Variables, Solving Equations, Distribution, Factoring Out, and Substitution
Other iOS apps by Regular Berry Software include Long Division Touch, an educational app for learning and practicing long division problems. The app can supplement classroom lessons by adding a guided method of practicing and exploring long division principles. By providing instant feedback the app simplifies and reinforces how to solve long division problems.
Regular Berry Software is an educational app software company. Their goal is to reveal to students that math problems are just puzzles, and can be fun if you know the rules. Copyright (C) 2012 Regular Berry Software LLC | 677.169 | 1 |
1 Introduction

Graph theory may be said to have its beginning in 1736 when Euler considered the (general case of the) Königsberg bridge problem: Does there exist a walk crossing each of the seven bridges of Königsberg exactly once? (Solutio Problematis ad geometriam situs pertinentis, Commentarii Academiae Scientiarum Imperialis Petropolitanae 8 (1736), pp. 128-140.)

It took 200 years before the first book on graph theory was written. This was "Theorie der endlichen und unendlichen Graphen" (Teubner, Leipzig, 1936) by König in 1936. Since then graph theory has developed into an extensive and popular branch of mathematics, which has been applied to many problems in mathematics, computer science, and other scientific and not-so-scientific areas. For the history of early graph theory, see

N.L. Biggs, R.J. Lloyd and R.J. Wilson, "Graph Theory 1736 – 1936", Clarendon Press, 1986.

There are no standard notations for graph theoretical objects. This is natural, because the names one uses for the objects reflect the applications. Thus, for instance, if we consider a communications network (say, for email) as a graph, then the computers taking part in this network are called nodes rather than vertices or points. On the other hand, other names are used for molecular structures in chemistry, flow charts in programming, human relations in social sciences, and so on.

These lectures study finite graphs and the majority of the topics is included in

J.A. Bondy, U.S.R. Murty, "Graph Theory with Applications", Macmillan, 1978.
R. Diestel, "Graph Theory", Springer-Verlag, 1997.
F. Harary, "Graph Theory", Addison-Wesley, 1969.
D.B. West, "Introduction to Graph Theory", Prentice Hall, 1996.
R.J. Wilson, "Introduction to Graph Theory", Longman, (3rd ed.) 1985.

In these lectures we study combinatorial aspects of graphs. For more algebraic topics and methods, see

N. Biggs, "Algebraic Graph Theory", Cambridge University Press, (2nd ed.) 1993.
C. Godsil, G.F. Royle, "Algebraic Graph Theory", Springer, 2001.

and for computational aspects, see

S. Even, "Graph Algorithms", Computer Science Press, 1979.
In these lecture notes we mention several open problems that have gained respect among the researchers. Indeed, graph theory has the advantage that it contains easily formulated open problems that can be stated early in the theory. Finding a solution to any one of these problems is another matter.

Sections with a star (∗) in their heading are optional.

Notations and notions

• For a finite set X, |X| denotes its size (cardinality, the number of its elements).
• Let [1, n] = {1, 2, . . . , n}, and in general, [i, n] = {i, i + 1, . . . , n} for integers i ≤ n.
• A family {X1, X2, . . . , Xk} of subsets Xi ⊆ X of a set X is a partition of X, if X = X1 ∪ X2 ∪ · · · ∪ Xk and Xi ∩ Xj = ∅ for all different i and j.
• For two sets X and Y, X × Y = {(x, y) | x ∈ X, y ∈ Y} is their Cartesian product, and X △ Y = (X \ Y) ∪ (Y \ X) is their symmetric difference. Here X \ Y = {x | x ∈ X, x ∉ Y}.
• Two integers n, k ∈ N (often n = |X| and k = |Y| for sets X and Y) have the same parity, if both are even, or both are odd, that is, if n ≡ k (mod 2). Otherwise, they have opposite parity.

Graph theory has abundant examples of NP-complete problems. Intuitively, a problem is in P [1] if there is an efficient (practical) algorithm to find a solution to it. On the other hand, a problem is in NP [2], if it is first efficient to guess a solution and then efficient to check that this solution is correct. It is conjectured (and not known) that P ≠ NP. This is one of the great problems in modern mathematics and theoretical computer science. If the guessing in NP-problems can be replaced by an efficient systematic search for a solution, then P = NP. For any one NP-complete problem, if it is in P, then necessarily P = NP.

[1] Solvable – by an algorithm – in polynomially many steps on the size of the problem instances.
[2] Solvable nondeterministically in polynomially many steps on the size of the problem instances.
1.1 Graphs and their plane figures 41.1 Graphs and their plane figuresLet V be a finite set, and denote by E(V ) = {{u, v} | u, v ∈ V, u = v} .the 2-sets of V, i.e., subsets of two distinct elements.D EFINITION . A pair G = (V, E) with E ⊆ E(V ) is called a graph (on V). The elementsof V are the vertices of G, and those of E the edges of G. The vertex set of a graph Gis denoted by VG and its edge set by EG . Therefore G = (VG , EG ). In literature, graphs are also called simple graphs; vertices are called nodes or points;edges are called lines or links. The list of alternatives is long (but still finite). A pair {u, v} is usually written simply as uv. Notice that then uv = vu. In order tosimplify notations, we also write v ∈ G and e ∈ G instead of v ∈ VG and e ∈ EG .D EFINITION . For a graph G, we denote νG = |VG | and ε G = | EG | .The number νG of the vertices is called the order of G, and ε G is the size of G. For anedge e = uv ∈ G, the vertices u and v are its ends. Vertices u and v are adjacent orneighbours, if uv ∈ G. Two edges e1 = uv and e2 = uw having a common end, areadjacent with each other.A graph G can be represented as a plane figure bydrawing a line (or a curve) between the points u and v1 v3 v6v (representing vertices) if e = uv is an edge of G.The figure on the right is a geometric representationof the graph G with VG = {v1 , v2 , v3 , v4 , v5 , v6 } and v2 v4 v5E G = { v1 v2 , v1 v3 , v2 v3 , v2 v4 , v5 v6 }. Often we shall omit the identities (names v) of the vertices in our figures, in whichcase the vertices are drawn as anonymous circles. Graphs can be generalized by allowing loops vv and parallel (or multiple) edgesbetween vertices to obtain a multigraph G = (V, E, ψ), where E = {e1 , e2 , . . . , em } isa set (of symbols), and ψ : E → E(V ) ∪ {vv | v ∈ V } is a function that attaches anunordered pair of vertices to each e ∈ E: ψ(e) = uv.Note that we can have ψ(e1 ) = ψ(e2 ). 
This is drawn in the figure of G by placing two (parallel) edges that connect the common ends. On the right there is (a drawing of) a multigraph G with vertices V = {a, b, c} and edges ψ(e1) = aa, ψ(e2) = ab, ψ(e3) = bc, and ψ(e4) = bc.
Later we concentrate on (simple) graphs.

DEFINITION. We also study directed graphs or digraphs D = (V, E), where the edges have a direction, that is, the edges are ordered: E ⊆ V × V. In this case, uv ≠ vu. The directed graphs have representations, where the edges are drawn as arrows. A digraph can contain edges uv and vu of opposite directions.

Graphs and digraphs can also be coloured, labelled, and weighted:

DEFINITION. A function α : VG → K is a vertex colouring of G by a set K of colours. A function α : EG → K is an edge colouring of G. Usually, K = [1, k] for some k ≥ 1. If K ⊆ R (often K ⊆ N), then α is a weight function or a distance function.

Isomorphism of graphs

DEFINITION. Two graphs G and H are isomorphic, denoted by G ≅ H, if there exists a bijection α : VG → VH such that

      uv ∈ EG ⇐⇒ α(u)α(v) ∈ EH

for all u, v ∈ G.

Hence G and H are isomorphic if the vertices of H are renamings of those of G. Two isomorphic graphs enjoy the same graph theoretical properties, and they are often identified. In particular, all isomorphic graphs have the same plane figures (excepting the identities of the vertices). This shows in the figures, where we tend to replace the vertices by small circles, and talk of 'the graph' although there are, in fact, infinitely many such graphs.

Example 1.1. The following graphs are isomorphic. Indeed, the required isomorphism is given by v1 → 1, v2 → 3, v3 → 4, v4 → 2, v5 → 5.

Isomorphism Problem. Does there exist an efficient algorithm to check whether any two given graphs are isomorphic or not?

The following table lists the number 2^(n(n−1)/2) of all graphs on a given set of n vertices, and the number of all nonisomorphic graphs on n vertices. It tells that at least for computational purposes an efficient algorithm for checking whether two graphs are isomorphic or not would be greatly appreciated.
      n               1   2   3    4      5        6           7             8                 9
      graphs          1   2   8   64   1024   32 768   2 097 152   268 435 456   2^36 > 6 · 10^10
      nonisomorphic   1   2   4   11     34      156        1044        12 346           274 668
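The first columns of this table can be reproduced by brute force. The sketch below (not from the notes; feasible only for tiny n) enumerates all graphs on a labelled n-set and counts isomorphism classes by trying every vertex permutation, a crude canonical-form approach.

```python
# Brute-force check of the table's first columns: there are 2^C(n,2)
# graphs on a labelled n-set; isomorphism classes are counted by taking
# the lexicographically smallest relabelling of each graph.
from itertools import combinations, permutations

def all_graphs(n):
    """Yield each graph on {0,...,n-1} as a frozenset of 2-element frozensets."""
    pairs = list(combinations(range(n), 2))
    for mask in range(2 ** len(pairs)):
        yield frozenset(frozenset(p) for i, p in enumerate(pairs) if mask >> i & 1)

def canonical(edges, n):
    """Smallest relabelling of the edge set -- a crude canonical form."""
    best = None
    for perm in permutations(range(n)):
        relabeled = tuple(sorted(tuple(sorted((perm[u], perm[v]))) for u, v in edges))
        if best is None or relabeled < best:
            best = relabeled
    return best

for n in range(1, 5):
    graphs = list(all_graphs(n))
    classes = {canonical(g, n) for g in graphs}
    print(n, len(graphs), len(classes))
```

For n = 4 this reports 64 labelled graphs in 11 isomorphism classes, matching the table; the factorial cost of the permutation search is precisely why an efficient isomorphism algorithm would be appreciated.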
Other representations

Plane figures catch graphs for our eyes, but if a problem on graphs is to be programmed, then these figures are, to say the least, unsuitable. Integer matrices are ideal for computers, since every respectable programming language has array structures for these, and computers are good in crunching numbers.

Let VG = {v1, . . . , vn} be ordered. The adjacency matrix of G is the n × n-matrix M with entries Mij = 1 or Mij = 0 according to whether vivj ∈ G or vivj ∉ G. For instance, the graph in Example 1.1 has the adjacency matrix

      0 1 1 0 1
      1 0 0 1 1
      1 0 0 1 0
      0 1 1 0 0
      1 1 0 0 0

Notice that the adjacency matrix is always symmetric (with respect to its diagonal consisting of zeros).

A graph has usually many different adjacency matrices, one for each ordering of its set VG of vertices. The following result is obvious from the definitions.

Theorem 1.1. Two graphs G and H are isomorphic if and only if they have a common adjacency matrix. Moreover, two isomorphic graphs have exactly the same set of adjacency matrices.

Graphs can also be represented by sets. For this, let 𝒳 = {X1, X2, . . . , Xn} be a family of subsets of a set X, and define the intersection graph G𝒳 as the graph with vertices X1, . . . , Xn, and edges XiXj for all i and j (i ≠ j) with Xi ∩ Xj ≠ ∅.

Theorem 1.2. Every graph is an intersection graph of some family of subsets.

Proof. Let G be a graph, and define, for all v ∈ G, a set Xv = {{v, u} | vu ∈ G}. Then Xu ∩ Xv ≠ ∅ if and only if uv ∈ G. ⊔⊓

Let s(G) be the smallest size of a base set X such that G can be represented as an intersection graph of a family of subsets of X, that is,

      s(G) = min{|X| | G ≅ G𝒳 for some 𝒳 ⊆ 2^X}.

How small can s(G) be compared to the order νG (or the size εG) of the graph? It was shown by Kou, Stockmeyer and Wong (1976) that it is algorithmically difficult to determine the number s(G) – the problem is NP-complete.

Example 1.2.
As yet another example, let A ⊆ N be a finite set of natural numbers, and let GA = (A, E) be the graph with rs ∈ E if and only if r and s (for r ≠ s) have a common divisor > 1. As an exercise, we state: All graphs can be represented in the form GA for some set A of natural numbers.
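The construction in the proof of Theorem 1.2 is easy to carry out explicitly. The sketch below (illustrative, not part of the notes) represents each vertex v by the set Xv of edges incident to v, on the graph of Example 1.1, and checks that Xu and Xv intersect exactly when uv is an edge.

```python
# Theorem 1.2 made explicit: X_v = set of edges having v as an end.
# Then X_u and X_v share an element iff that element is the edge uv.
edges = {("v1", "v2"), ("v1", "v3"), ("v2", "v3"), ("v2", "v4"), ("v5", "v6")}
vertices = {v for e in edges for v in e}
edge_set = {frozenset(e) for e in edges}

X = {v: {e for e in edge_set if v in e} for v in vertices}

for u in vertices:
    for v in vertices:
        if u != v:
            adjacent = frozenset((u, v)) in edge_set
            # nonempty intersection of X_u and X_v  <=>  uv is an edge
            assert (len(X[u] & X[v]) > 0) == adjacent
print("intersection-graph representation verified")
```

The base set used here has size εG, so it also witnesses that s(G) ≤ εG for graphs without isolated vertices.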
1.2 Subgraphs

Ideally, given a nice problem the local properties of a graph determine a solution. In these situations we deal with (small) parts of the graph (subgraphs), and a solution can be found to the problem by combining the information determined by the parts. For instance, as we shall later see, the existence of an Euler tour is very local, it depends only on the number of the neighbours of the vertices.

Degrees of vertices

DEFINITION. Let v ∈ G be a vertex of a graph G. The neighbourhood of v is the set

      NG(v) = {u ∈ G | vu ∈ G}.

The degree of v is the number of its neighbours: dG(v) = |NG(v)|. If dG(v) = 0, then v is said to be isolated in G, and if dG(v) = 1, then v is a leaf of the graph. The minimum degree and the maximum degree of G are defined as

      δ(G) = min{dG(v) | v ∈ G}   and   ∆(G) = max{dG(v) | v ∈ G}.

The following lemma, due to Euler (1736), tells that if several people shake hands, then the number of hands shaken is even.

Lemma 1.1 (Handshaking lemma). For each graph G,

      ∑_{v ∈ G} dG(v) = 2 · εG.

Moreover, the number of vertices of odd degree is even.

Proof. Every edge e ∈ EG has two ends. The second claim follows immediately from the first one. ⊔⊓

Lemma 1.1 holds equally well for multigraphs, when dG(v) is defined as the number of edges that have v as an end, and when each loop vv is counted twice.

Note that the degrees of a graph G do not determine G. Indeed, there are graphs G = (V, EG) and H = (V, EH) on the same set of vertices that are not isomorphic, but for which dG(v) = dH(v) for all v ∈ V.
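Both claims of the handshaking lemma are easy to confirm numerically. A minimal sketch (any graph would do; here a pseudo-random one):

```python
# Check that the degree sum equals 2 * (number of edges), and that the
# number of odd-degree vertices is even.
import random

random.seed(1)
n = 12
edges = {(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < 0.4}

deg = {v: sum(v in e for e in edges) for v in range(n)}
assert sum(deg.values()) == 2 * len(edges)          # each edge has two ends
assert sum(1 for v in range(n) if deg[v] % 2) % 2 == 0
print("handshaking lemma holds:", sum(deg.values()), "=", 2 * len(edges))
```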
1.2 Subgraphs 8SubgraphsD EFINITION . A graph H is a subgraph of a graph G, denoted by H ⊆ G, if VH ⊆ VGand E H ⊆ EG . A subgraph H ⊆ G spans G (and H is a spanning subgraph of G), ifevery vertex of G is in H, i.e., VH = VG . Also, a subgraph H ⊆ G is an induced subgraph, if E H = EG ∩ E(VH ). In thiscase, H is induced by its set VH of vertices. In an induced subgraph H ⊆ G, the set E H of edges consists of all e ∈ EG such thate ∈ E(VH ). To each nonempty subset A ⊆ VG , there corresponds a unique inducedsubgraph G [ A] = ( A, EG ∩ E( A)) .To each subset F ⊆ EG of edges there corresponds a unique spanning subgraph of G, G [ F ] = (VG , F ) . G subgraph spanning induced For a set F ⊆ EG of edges, let G − F = G [ EG F ]be the subgraph of G obtained by removing (only) the edges e ∈ F from G. In partic-ular, G −e is obtained from G by removing e ∈ G. Similarly, we write G + F, if each e ∈ F (for F ⊆ E(VG )) is added to G. For a subset A ⊆ VG of vertices, we let G − A ⊆ G be the subgraph induced byVG A, that is, G − A = G [VG A] ,and, e.g., G −v is obtained from G by removing the vertex v together with the edgesthat have v as their end.Reconstruction Problem. The famous open problem, Kelly-Ulam problem or the Re-construction Conjecture, states that a graph of order at least 3 is determined up to isomor-phism by its vertex deleted subgraphs G −v (v ∈ G): if there exists a bijection α : VG → VHsuch that G −v ∼ H −α(v) for all v, then G ∼ H. = =
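The vertex-deleted subgraphs G−v that the Reconstruction Conjecture speaks of form the "deck" of G, and computing them is a direct application of the G−A operation above. A small illustrative sketch (not from the notes):

```python
# The deck of vertex-deleted subgraphs G-v: removing v deletes v
# together with every edge that has v as an end.
def delete_vertex(vertices, edges, v):
    return (vertices - {v}, {e for e in edges if v not in e})

V = {1, 2, 3, 4}
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}   # the cycle C4

deck = [delete_vertex(V, E, v) for v in sorted(V)]
# Each card of the C4 deck is a path on 3 vertices with 2 edges.
assert all(len(Vd) == 3 and len(Ed) == 2 for Vd, Ed in deck)
```

Note that the cards are unlabelled in the conjecture: it asks whether the isomorphism types of the cards determine G, which is why the problem is hard.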
1.3 Paths and cycles 11 A discrete graph is 0-regular, and a complete graph Kn is (n − 1)-regular. In par-ticular, ε Kn = n(n − 1)/2, and therefore ε G ≤ n(n − 1)/2 for all graphs G that haveorder n. Many problems concerning (induced) subgraphs are algorithmically difficult. Forinstance, to find a maximal complete subgraph (a subgraph Km of maximum order)of a graph is unlikely to be even in NP.Example 1.4. The graph on the right is the Petersengraph that we will meet several times (drawn differ-ently). It is a 3-regular graph of order 10.Example 1.5. Let k ≥ 1 be an integer, and consider the set B k of all binary stringsof length k. For instance, B3 = {000, 001, 010, 100, 011, 101, 110, 111}. Let Qk be thegraph, called the k-cube, with VQk = B k , where uv ∈ Qk if and only if the strings uand v differ in exactly one place. 110 111The order of Qk is νQk = 2k ,the number of binary 100 101strings of length k. Also, Qk is k-regular, and so, by the 010 011handshaking lemma, ε Qk = k · 2k−1 . On the right wehave the 3-cube, or simply the cube. 000 001Example 1.6. Let n ≥ 4 be any even number. We show by induction that there existsa 3-regular graph G with νG = n. Notice that all 3-regular graphs have even order bythe handshaking lemma. x yIf n = 4, then K4 is 3-regular. Let G be a 3-regulargraph of order 2m − 2, and suppose that uv, uw ∈ EG . w vLet VH = VG ∪ { x, y}, and E H = ( EG {uv, uw}) ∪ u{ux, xv, uy, yw, xy}. Then H is 3-regular of order 2m.1.3 Paths and cyclesThe most fundamental notions in graph theory are practically oriented. Indeed, manygraph theoretical questions ask for optimal solutions to problems such as: find ashortest path (in a complex network) from a given point to another. This kind ofproblems can be difficult, or at least nontrivial, because there are usually choices whatbranch to choose when leaving an intermediate point.WalksD EFINITION . Let ei = ui ui+1 ∈ G be edges of G for i ∈ [1, k]. The sequence W =e1 e2 . . . 
ek is a walk of length k from u1 to uk+1 . Here ei and ei+1 are compatible in thesense that ei is adjacent to ei+1 for all i ∈ [1, k − 1].
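The claims of Example 1.5 about the k-cube Qk can be verified computationally. A minimal sketch (illustrative, not part of the notes): build Qk from the binary strings and check its order, regularity, and edge count.

```python
# The k-cube Q_k: vertices are binary strings of length k, edges join
# strings differing in exactly one place. Checks: |V| = 2^k, Q_k is
# k-regular, and hence (handshaking lemma) it has k * 2^(k-1) edges.
from itertools import product

def cube(k):
    V = ["".join(bits) for bits in product("01", repeat=k)]
    E = {frozenset((u, v)) for u in V for v in V
         if sum(a != b for a, b in zip(u, v)) == 1}
    return V, E

for k in range(1, 5):
    V, E = cube(k)
    deg = {v: sum(v in e for e in E) for v in V}
    assert len(V) == 2 ** k
    assert all(d == k for d in deg.values())     # k-regular
    assert len(E) == k * 2 ** (k - 1)
```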
1.3 Paths and cycles 12 We write, more informally, k W : u1 − u2 − . . . − u k − u k+1 → → → → or W : u1 − u k+1 . →Write u − v to say that there is a walk of some length from u to v. Here we under- →⋆stand that W : u − v is always a specific walk, W = e1 e2 . . . ek , although we sometimes ⋆ →do not care to mention the edges ei on it. The length of a walk W is denoted by |W |.D EFINITION . Let W = e1 e2 . . . ek (ei = ui ui+1 ) be a walk. W is closed, if u1 = uk+1 . W is a path, if ui = u j for all i = j. W is a cycle, if it is closed, and ui = u j for i = j except that u1 = uk+1 . W is a trivial path, if its length is 0. A trivial path has no edges. For a walk W : u = u1 − . . . − uk+1 = v, also → → W −1 : v = u k+1 − . . . − u1 = u → →is a walk in G, called the inverse walk of W. A vertex u is an end of a path P, if P starts or ends in u. The join of two walks W1 : u − v and W2 : v − w is the walk W1 W2 : u − w. →⋆ ⋆ → ⋆ →(Here the end v must be common to the walks.) Paths P and Q are disjoint, if they have no vertices in common, and they areindependent, if they can share only their ends. Clearly, the inverse walk P−1 of a path P is a path (the inverse path of P). The joinof two paths need not be a path.A (sub)graph, which is a path (cycle) of lengthk − 1 (k, resp.) having k vertices is denoted byPk (Ck , resp.). If k is even (odd), we say that thepath or cycle is even (odd). Clearly, all paths oflength k are isomorphic. The same holds for cy- P5 C6cles of fixed length.Lemma 1.3. Each walk W : u − v with u = v contains a path P : u − v, that is, there is a ⋆ → ⋆ →path P : u − v that is obtained from W by removing edges and vertices. ⋆ →Proof. Let W : u = u1 − . . . − uk+1 = v. Let i < j be indices such that ui = u j . → →If no such i and j exist, then W, itself, is a path. Otherwise, in W = W1 W2 W3 : u − ⋆ →ui − u j − v the portion U1 = W1 W3 : u − ui = u j − v is a shorter walk. By ⋆ → →⋆ ⋆ → →⋆repeating this argument, we obtain a sequence U1 , U2 , . . . 
, Um of walks u − v with ⋆ →|W | > |U1 | > · · · > |Um |. When the procedure stops, we have a path as required.(Notice that in the above it may very well be that W1 or W3 is a trivial walk.) ⊔ ⊓
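The proof of Lemma 1.3 is effectively an algorithm: while the walk repeats a vertex, cut out the portion between the two occurrences. A minimal sketch operating on a walk given as a vertex sequence (an illustrative reading of the proof, not code from the notes):

```python
# Extract a path from a walk, as in the proof of Lemma 1.3.
def walk_to_path(walk):
    while True:
        seen = {}
        cut = None
        for idx, v in enumerate(walk):
            if v in seen:
                cut = (seen[v], idx)   # u_i = u_j with i < j
                break
            seen[v] = idx
        if cut is None:
            return walk                # no repeated vertex: it is a path
        i, j = cut
        walk = walk[:i] + walk[j:]     # remove the closed detour W2

# A walk a - b - c - b - d revisits b; the extracted path is a, b, d.
assert walk_to_path(["a", "b", "c", "b", "d"]) == ["a", "b", "d"]
```

Each cut strictly shortens the walk, which mirrors the decreasing sequence |W| > |U1| > · · · > |Um| in the proof, so termination is guaranteed.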
1.3 Paths and cycles 13D EFINITION . If there exists a walk (and hence a path) from u to v in G, let k dG (u, v) = min{k | u − v} →be the distance between u and v. If there are no walks u − v, let dG (u, v) = ∞ by ⋆ →convention. A graph G is connected, if dG (u, v) < ∞ for all u, v ∈ G; otherwise, itis disconnected. The maximal connected subgraphs of G are its connected compo-nents. Denote c( G ) = the number of connected components of G .If c( G ) = 1, then G is, of course, connected. The maximality condition means that a subgraph H ⊆ G is a connected compo-nent if and only if H is connected and there are no edges leaving H, i.e., for every ver-tex v ∈ H, the subgraph G [VH ∪ {v}] is disconnected. Apparently, every connected /component is an induced subgraph, and ∗ NG (v) = {u | dG (v, u) < ∞}is the connected component of G that contains v ∈ G. In particular, the connectedcomponents form a partition of G.Shortest pathsD EFINITION . Let G α be an edge weighted graph, that is, G α is a graph G togetherwith a weight function α : EG → R on its edges. For H ⊆ G, let α( H ) = ∑ α( e) e∈ Hbe the (total) weight of H. In particular, if P = e1 e2 . . . ek is a path, then its weight isα( P) = ∑k=1 α(ei ). The minimum weighted distance between two vertices is i dα (u, v) = min{α( P) | P : u − v} . G ⋆ → In extremal problems we seek for optimal subgraphs H ⊆ G satisfying specificconditions. In practice we encounter situations where G might represent• a distribution or transportation network (say, for mail), where the weights on edges are distances, travel expenses, or rates of flow in the network;• a system of channels in (tele)communication or computer architecture, where the weights present the rate of unreliability or frequency of action of the connections;• a model of chemical bonds, where the weights measure molecular attraction.
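The distance dG(u, v) of the definition is computed in practice by breadth-first search; unreachable vertices get distance ∞, and the connected component of u is exactly the set of vertices at finite distance. A minimal sketch (illustrative; for weighted distances dα one would use Dijkstra's algorithm instead):

```python
# d_G(u, v) by breadth-first search over an adjacency-set representation.
from collections import deque
from math import inf

def distances(adj, u):
    dist = {v: inf for v in adj}
    dist[u] = 0
    queue = deque([u])
    while queue:
        w = queue.popleft()
        for x in adj[w]:
            if dist[x] == inf:
                dist[x] = dist[w] + 1
                queue.append(x)
    return dist

adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: {5}, 5: {4}}   # two components
d = distances(adj, 1)
assert d[3] == 2 and d[4] == inf
component = {v for v in adj if d[v] < inf}           # N*_G(1)
assert component == {1, 2, 3}
```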
2Connectivity of Graphs2.1 Bipartite graphs and treesIn problems such as the shortest path problem we look for minimum solutions thatsatisfy the given requirements. The solutions in these cases are usually subgraphswithout cycles. Such connected graphs will be called trees, and they are used, e.g., insearch algorithms for databases. For concrete applications in this respect, seeT.H. C ORMEN , C.E. L EISERSON AND R.L. R IVEST, "Introduction to Algorithms",MIT Press, 1993.Certain structures with operations are representable +as trees. These trees are sometimes called constructiontrees, decomposition trees, factorization trees or grammatical · ytrees. Grammatical trees occur especially in linguistics,where syntactic structures of sentences are analyzed. x +On the right there is a tree of operations for the arith- y zmetic formula x · (y + z) + y.Bipartite graphsD EFINITION . A graph G is called bipartite, if VG has a partition to two subsets X andY such that each edge uv ∈ G connects a vertex of X and a vertex of Y. In this case,( X, Y ) is a bipartition of G, and G is ( X, Y )-bipartite.A bipartite graph G (as in the above) is complete (m, k)-bipartite, if | X | = m, |Y | = k, and uv ∈ G for all u ∈ Xand v ∈ Y.All complete (m, k)-bipartite graphs are isomorphic. LetKm,k denote such a graph.A subset X ⊆ VG is stable, if G [ X ] is a discrete graph. K2,3 The following result is clear from the definitions.Theorem 2.1. A graph G is bipartite if and only if VG has a partition to two stable subsets.Example 2.1. The k-cube Qk of Example 1.5 is bipartite for all k. Indeed, considerA = {u | u has an even number of 1′ s} and B = {u | u has an odd number of 1′ s}.Clearly, these sets partition B k , and they are stable in Qk .
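Example 2.1's bipartition of the k-cube can be checked directly: colouring each vertex by the parity of its ones, every edge joins the two colour classes. A minimal sketch (illustrative, for k = 4):

```python
# Q_k is bipartite: 2-colour by parity of ones and verify that no edge
# stays inside a colour class (i.e. the classes are stable sets).
from itertools import product

k = 4
V = ["".join(b) for b in product("01", repeat=k)]
E = [(u, v) for u in V for v in V if sum(a != b for a, b in zip(u, v)) == 1]

colour = {v: v.count("1") % 2 for v in V}          # parity of ones
assert all(colour[u] != colour[v] for u, v in E)   # a valid bipartition
```

The check succeeds because flipping one bit always changes the parity of the number of ones.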
2.1 Bipartite graphs and trees 18BridgesD EFINITION . An edge e ∈ G is a bridge of the graph G,if G −e has more connected components than G, that is,if c( G −e) > c( G ). In particular, and most importantly,an edge e in a connected G is a bridge if and only if G −eis disconnected.On the right (only) the two horizontal lines are bridges. We note that, for each edge e ∈ G, e = uv is a bridge ⇐⇒ u, v in different connected components of G −e .Theorem 2.3. An edge e ∈ G is a bridge if and only if e is not in any cycle of G.Proof. (⇒) If there is a cycle in G containing e, say C = PeQ, then QP : v − u is a ⋆ →path in G −e, and so e is not a bridge. (⇐) If e = uv is not a bridge, then u and v are in the same connected componentof G −e, and there is a path P : v − u in G −e. Now, eP : u − v − u is a cycle in G ⋆ → → → ⋆containing e. ⊔ ⊓Lemma 2.1. Let e be a bridge in a connected graph G. (i) Then c( G −e) = 2.(ii) Let H be a connected component of G −e. If f ∈ H is a bridge of H, then f is a bridge of G.Proof. For (i), let e = uv. Since e is a bridge, the ends u and v are not connected inG −e. Let w ∈ G. Since G is connected, there exists a path P : w − v in G. This is a ⋆ →path of G −e, unless P : w − u → v contains e = uv, in which case the part w − u is ⋆ → →⋆a path in G −e. For (ii), if f ∈ H belongs to a cycle C of G, then C does not contain e (since e is inno cycle), and therefore C is inside H, and f is not a bridge of H. ⊔ ⊓TreesD EFINITION . A graph is called acyclic, if it has no cycles. An acyclic graph is alsocalled a forest. A tree is a connected acyclic graph. By Theorem 2.3 and the definition of a tree, we haveCorollary 2.1. A connected graph is a tree if and only if all its edges are bridges.Example 2.3. The following enumeration result for trees has many different proofs,the first of which was given by C AYLEY in 1889: There are nn−2 trees on a vertex set V ofn elements. We omit the proof.
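The definition of a bridge turns directly into a test: e is a bridge exactly when G−e has more connected components than G (and by Theorem 2.3, exactly when e lies on no cycle). A brute-force sketch (illustrative, not from the notes):

```python
# Find the bridges of a graph by deleting each edge in turn and
# counting connected components before and after.
def components(vertices, edges):
    count, seen = 0, set()
    for s in vertices:
        if s in seen:
            continue
        count += 1
        stack = [s]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack += [w for e in edges for w in e if v in e and w != v]
    return count

V = {1, 2, 3, 4}
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 1), (3, 4)]}  # triangle + pendant

bridges = {e for e in E if components(V, E - {e}) > components(V, E)}
assert bridges == {frozenset((3, 4))}   # only the pendant edge is a bridge
```

The triangle edges all lie on a cycle, so deleting any one of them leaves the graph connected, in line with Theorem 2.3.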
2.1 Bipartite graphs and trees 19 On the other hand, there are only a few trees up to isomorphism: n 1 2 3 4 5 6 7 8 trees 1 1 1 2 3 6 11 23 n 9 10 11 12 13 14 15 16 trees 47 106 235 551 1301 3159 7741 19 320 The nonisomorphic trees of order 6 are: We say that a path P : u − v is maximal in a graph G, if there are no edges e ∈ G →⋆for which Pe or eP is a path. Such paths exist, because νG is finite.Lemma 2.2. Let P : u − v be a maximal path in a graph G. Then NG (v) ⊆ P. Moreover, if →⋆G is acyclic, then dG (v) = 1.Proof. If e = vw ∈ EG with w ∈ P, then also Pe is a path, which contradicts the /maximality assumption for P. Hence NG (v) ⊆ P. For acyclic graphs, if wv ∈ G, thenw belongs to P, and wv is necessarily the last edge of P in order to avoid cycles. ⊔ ⊓Corollary 2.2. Each tree T with νT ≥ 2 has at least two leaves.Proof. Since T is acyclic, both ends of a maximal path have degree one. ⊔ ⊓Theorem 2.4. The following are equivalent for a graph T. (i) T is a tree. (ii) Any two vertices are connected in T by a unique path.(iii) T is acyclic and ε T = νT − 1.Proof. Let νT = n. If n = 1, then the claim is trivial. Suppose thus that n ≥ 2. (i)⇒(ii) Let T be a tree. Assume the claim does not hold, and let P, Q : u − v →⋆be two different paths between the same vertices u and v. Suppose that | P| ≥ | Q|.Since P = Q, there exists an edge e which belongs to P but not to Q. Each edge of Tis a bridge, and therefore u and v belong to different connected components of T −e.Hence e must also belong to Q; a contradiction. (ii)⇒(iii) We prove the claim by induction on n. Clearly, the claim holds for n = 2,and suppose it holds for graphs of order less than n. Let T be any graph of order nsatisfying (ii). In particular, T is connected, and it is clearly acyclic.
Let P : u →⋆ v be a maximal path in T. By Lemma 2.2, we have dT(v) = 1. In this case, P : u →⋆ w → v, where vw is the unique edge having an end v. The subgraph T−v is connected, and it satisfies the condition (ii). By induction hypothesis, εT−v = n − 2, and so εT = εT−v + 1 = n − 1, and the claim follows.

(iii)⇒(i) Assume (iii) holds for T. We need to show that T is connected. Indeed, let the connected components of T be Ti = (Vi, Ei), for i ∈ [1, k]. Since T is acyclic, so are the connected graphs Ti, and hence they are trees, for which we have proved that |Ei| = |Vi| − 1. Now, νT = ∑_{i=1}^{k} |Vi|, and εT = ∑_{i=1}^{k} |Ei|. Therefore,

      n − 1 = εT = ∑_{i=1}^{k} (|Vi| − 1) = ∑_{i=1}^{k} |Vi| − k = n − k,

which gives that k = 1, that is, T is connected. ⊔⊓

Example 2.4. Consider a cup tournament of n teams. If during a round there are k teams left in the tournament, then these are divided into ⌊k/2⌋ pairs, and from each pair only the winner continues. If k is odd, then one of the teams goes to the next round without having to play. How many plays are needed to determine the winner?

So if there are 14 teams, after the first round 7 teams continue, and after the second round 4 teams continue, then 2. So 13 plays are needed in this example.

The answer to our problem is n − 1, since the cup tournament is a tree, where a play corresponds to an edge of the tree.

Spanning trees

Theorem 2.5. Each connected graph has a spanning tree, that is, a spanning graph that is a tree.

Proof. Let T ⊆ G be a maximum order subtree of G (i.e., a subgraph that is a tree). If VT ≠ VG, there exists an edge uv ∈ EG such that u ∈ T and v ∉ T. But then T is not maximal; a contradiction. ⊔⊓

Corollary 2.3. For each connected graph G, εG ≥ νG − 1. Moreover, a connected graph G is a tree if and only if εG = νG − 1.

Proof. Let T be a spanning tree of G. Then εG ≥ εT = νT − 1 = νG − 1. The second claim is also clear. ⊔⊓

Example 2.5.
In Shannon's switching game a positive player P and a negative playerN play on a graph G with two special vertices: a source s and a sink r. P and N al-ternate turns so that P designates an edge by +, and N by −. Each edge can be des-ignated at most once. It is P's purpose to designate a path s − r (that is, to designate →⋆all edges in one such path), and N tries to block all paths s − r (that is, to designate ⋆ →at least one edge in each such path). We say that a game ( G, s, r) is
2.1 Bipartite graphs and trees 21• positive, if P has a winning strategy no matter who begins the game,• negative, if N has a winning strategy no matter who begins the game,• neutral, if the winner depends on who begins the game. rThe game on the right is neutral. s L EHMAN proved in 1964 that Shannon's switching game ( G, s, r) is positive if and onlyif there exists H ⊆ G such that H contains s and r and H has two spanning trees with noedges in common. In the other direction the claim can be proved along the following lines. Assumethat there exists a subgraph H containing s and r and that has two spanning treeswith no edges in common. Then P plays as follows. If N marks by − an edge fromone of the two trees, then P marks by + an edge in the other tree such that thisedge reconnects the broken tree. In this way, P always has two spanning trees for thesubgraph H with only edges marked by + in common. In converse the claim is considerably more difficult to prove. There remains the problem to characterize those Shannon's switching games( G, s, r) that are neutral (negative, respectively).The connector problemTo build a network connecting n nodes (towns, computers, chips in a computer) itis desirable to decrease the cost of construction of the links to the minimum. This isthe connector problem. In graph theoretical terms we wish to find an optimal span-ning subgraph of a weighted graph. Such an optimal subgraph is clearly a spanningtree, for, otherwise a deletion of any nonbridge will reduce the total weight of thesubgraph. Let then G α be a graph G together with a weight function α : EG → R + (posi-tive reals) on the edges. Kruskal's algorithm (also known as the greedy algorithm)provides a solution to the connector problem.Kruskal's algorithm: For a connected and weighted graph G α of order n:(i) Let e1 be an edge of smallest weight, and set E1 = {e1 }.(ii) For each i = 2, 3, . . . 
, n − 1 in this order, choose an edge ei ∉ Ei−1 of smallest possible weight such that ei does not produce a cycle when added to G[Ei−1], and let Ei = Ei−1 ∪ {ei}. The final outcome is T = (VG, En−1).
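Kruskal's algorithm as stated can be sketched as follows (an illustrative implementation, not from the notes). The usual union-find bookkeeping stands in for the cycle test: adding e to G[Ei−1] closes a cycle exactly when the ends of e are already in the same connected component.

```python
# Kruskal's greedy algorithm for a minimum-weight spanning tree.
def kruskal(vertices, weighted_edges):
    parent = {v: v for v in vertices}

    def find(v):                      # root of v's current component
        while parent[v] != v:
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(weighted_edges):   # edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # different components: no cycle created
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

edges = [(3, "a", "b"), (1, "b", "d"), (2, "b", "c"),
         (2, "c", "d"), (4, "c", "e"), (2, "d", "e")]
T = kruskal({"a", "b", "c", "d", "e"}, edges)
assert len(T) == 4                    # n - 1 edges for n = 5
assert sum(w for w, _, _ in T) == 8   # minimum total weight
```

As the text notes below, the outcome need not be unique when several edges tie in weight, but every outcome has the same minimum total weight.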
By the construction, T = (VG, En−1) is a spanning tree of G, because it contains no cycles, it is connected and has n − 1 edges. We now show that T has the minimum total weight among the spanning trees of G.

Suppose T1 is any spanning tree of G. Let ek be the first edge produced by the algorithm that is not in T1. If we add ek to T1, then a cycle C containing ek is created. Also, C must contain an edge e that is not in T. When we replace e by ek in T1, we still have a spanning tree, say T2. However, by the construction, α(ek) ≤ α(e), and therefore α(T2) ≤ α(T1). Note that T2 has more edges in common with T than T1.

Repeating the above procedure, we can transform T1 to T by replacing edges, one by one, such that the total weight does not increase. We deduce that α(T) ≤ α(T1).

The outcome of Kruskal's algorithm need not be unique. Indeed, there may exist several optimal spanning trees (with the same weight, of course) for a graph.

Example 2.6. When applied to the weighted graph on the right, the algorithm produces the sequence: e1 = v2v4, e2 = v4v5, e3 = v3v6, e4 = v2v3 and e5 = v1v2. The total weight of the spanning tree is thus 9. Also, the selection e1 = v2v5, e2 = v4v5, e3 = v5v6, e4 = v3v6, e5 = v1v2 gives another optimal solution (of weight 9).

Problem. Consider trees T with weight functions α : ET → N. Each tree T of order n has exactly n(n − 1)/2 paths. (Why is this so?) Does there exist a weighted tree Tα of order n such that the (total) weights of its paths are 1, 2, . . . , n(n − 1)/2? In such a weighted tree Tα different paths have different weights, and each i ∈ [1, n(n − 1)/2] is a weight of one path. Also, α must be injective. No solutions are known for any n ≥ 7.

Taylor (1977) proved: if Tα of order n exists, then necessarily n = k² or n = k² + 2 for some k ≥ 1.

Example 2.7.
A computer network can be presented as a graph G, where the verticesare the node computers, and the edges indicate the direct links. Each computer v hasan address a(v), a bit string (of zeros and ones). The length of an address is the numberof its bits. A message that is sent to v is preceded by the address a(v). The Hammingdistance h( a(v), a(u)) of two addresses of the same length is the number of places,where a(v) and a(u) differ; e.g., h(00010, 01100) = 3 and h(10000, 00000) = 1. It would be a good way to address the vertices so that the Hamming distanceof two vertices is the same as their distance in G. In particular, if two vertices wereadjacent, their addresses should differ by one symbol. This would make it easier fora node computer to forward a message.
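The Hamming distance of the text, and the leaf-by-leaf addressing from the induction sketch that follows, are easy to try out. A minimal illustration (the 3-vertex path and its addresses are my own small example, built exactly as the induction step prescribes: prefix the old addresses with 0 and give the new leaf v1 the address 1·a(v2)):

```python
# Hamming distance of equal-length bit strings, plus the inductive tree
# addressing checked on the path v1 - v2 - v3.
def h(a, b):
    return sum(x != y for x, y in zip(a, b))

assert h("00010", "01100") == 3
assert h("10000", "00000") == 1

addr = {"v2": "0", "v3": "1"}            # addressing of T - v1 (a single edge)
addr["v1"] = "1" + addr["v2"]            # new leaf: 1 followed by old a(v2)
addr = {v: a if v == "v1" else "0" + a for v, a in addr.items()}

assert h(addr["v1"], addr["v2"]) == 1    # adjacent vertices: distance 1
assert h(addr["v1"], addr["v3"]) == 2    # d_T(v1, v3) = 2
```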
2.2 Connectivity 23 010A graph G is said to be addressable, if it has an 000 110 111addressing a such that dG (u, v) = h( a(u), a(v)). 100 We prove that every tree T is addressable. Moreover, the addresses of the vertices of T canbe chosen to be of length νT − 1. The proof goes by induction. If νT ≤ 2, then the claim is obvious. In the caseνT = 2, the addresses of the vertices are simply 0 and 1. Let then VT = {v1 , . . . , vk+1 }, and assume that dT (v1 ) = 1 (a leaf) and v1 v2 ∈ T. Bythe induction hypothesis, we can address the tree T −v1 by addresses of length k − 1.We change this addressing: let ai be the address of vi in T −v1 , and change it to 0ai .Set the address of v1 to 1a2 . It is now easy to see that we have obtained an addressingfor T as required. The triangle K3 is not addressable. In order to gain more generality, we modifythe addressing for general graphs by introducing a special symbol ∗ in addition to0 and 1. A star address will be a sequence of these three symbols. The Hammingdistance remains as it was, that is, h(u, v) is the number of places, where u and vhave a different symbol 0 or 1. The special symbol ∗ does not affect h(u, v). So, h(10 ∗∗01, 0 ∗ ∗101) = 1 and h(1 ∗ ∗ ∗ ∗∗, ∗00 ∗ ∗∗) = 0. We still want to have h(u, v) =dG (u, v).We star address this graph as follows: v3 a(v1 ) = 0000 , a(v2 ) = 10 ∗ 0 , a(v3 ) = 1 ∗ 01 , a(v4 ) = ∗ ∗ 11 . v1 v2These addresses have length 4. Can you design a v4star addressing with addresses of length 3? W INKLER proved in 1983 a rather unexpected result: The minimum star addresslength of a graph G is at most νG − 1. For the proof of this, see VAN L INT AND W ILSON, "A Course in Combinatorics".2.2 ConnectivitySpanning trees are often optimal solutions to problems, where cost is the criterion.We may also wish to construct graphs that are as simple as possible, but where twovertices are always connected by at least two independent paths. 
These problems oc-cur especially in different aspects of fault tolerance and reliability of networks, whereone has to make sure that a break-down of one connection does not affect the func-tionality of the network. Similarly, in a reliable network we require that a break-downof a node (computer) should not result in the inactivity of the whole network.
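The reliability requirement can be made concrete: a network survives a single node failure exactly when no vertex is a cut vertex, i.e. G−v stays connected for every v. A brute-force sketch (illustrative, comparing a cycle with a path):

```python
# Cut vertices by brute force: v is a cut vertex iff G-v is disconnected.
def connected(vertices, edges):
    if not vertices:
        return True
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack += [w for e in edges for w in e if v in e and w != v]
    return seen == set(vertices)

def cut_vertices(vertices, edges):
    return {v for v in vertices
            if not connected(vertices - {v}, {e for e in edges if v not in e})}

C4 = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
P4 = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}
assert cut_vertices({1, 2, 3, 4}, C4) == set()     # cycle: no single failure hurts
assert cut_vertices({1, 2, 3, 4}, P4) == {2, 3}    # path: inner vertices are cut
```

The cycle provides two independent paths between any two vertices, which is exactly the 2-connectivity studied in this section.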
2.2 Connectivity

Conversely, we use induction on m = ν_G + ε_G to show that if S = {w_1, w_2, ..., w_k} is a (u,v)-separating set of the smallest size, then G has at least (and thus exactly) k independent paths u ⋆→ v.

The case k = 1 is clear, and this takes care of the small values of m required for the induction.

(1) Assume first that u and v have a common neighbour w ∈ N_G(u) ∩ N_G(v). Then necessarily w ∈ S. In the smaller graph G−w, the set S \ {w} is a minimum (u,v)-separating set, and the induction hypothesis yields that there are k−1 independent paths u ⋆→ v in G−w. Together with the path u → w → v, there are k independent paths u ⋆→ v in G, as required.

(2) Assume then that N_G(u) ∩ N_G(v) = ∅, and denote by H_u = N*_{G−S}(u) and H_v = N*_{G−S}(v) the connected components of G−S containing u and v.

(2.1) Suppose next that S ⊈ N_G(u) and S ⊈ N_G(v). Let v′ be a new vertex, and define G_u to be the graph on H_u ∪ S ∪ {v′} having the edges of G[H_u ∪ S] together with v′w_i for all i ∈ [1, k]. (Figure: the graph G_u, with the new vertex v′ joined to w_1, ..., w_k.) The graph G_u is connected, and it is smaller than G. Indeed, in order for S to be a minimum separating set, every w_i ∈ S has to be adjacent to some vertex in H_v. This shows that ε_{G_u} ≤ ε_G, and, moreover, the assumption (2.1) rules out the case H_v = {v}. So |H_v| ≥ 2 and ν_{G_u} < ν_G.

If S′ is any (u, v′)-separating set of G_u, then S′ separates u from all w_i ∈ S \ S′ in G. This means that S′ separates u and v in G. Since k is the size of a minimum (u,v)-separating set, we have |S′| ≥ k. We noted that G_u is smaller than G, and thus by the induction hypothesis there are k independent paths u ⋆→ v′ in G_u. This is possible only if there exist k paths u ⋆→ w_i, one for each i ∈ [1, k], that have only the end u in common.

By the present assumption, also u is nonadjacent to some vertex of S. A symmetric argument applies to the graph G_v (with a new vertex u′), which is defined similarly to G_u. This yields that there are k paths w_i ⋆→ v that have only the end v in common. When we combine these with the above paths u ⋆→ w_i, we obtain k independent paths u ⋆→ w_i ⋆→ v in G.

(2.2) There remains the case where, for all (u,v)-separating sets S of k elements, either S ⊆ N_G(u) or S ⊆ N_G(v). (Note that then, by (2), S ∩ N_G(v) = ∅ or S ∩ N_G(u) = ∅.)

Let P = efQ be a shortest path u ⋆→ v in G, where e = ux, f = xy, and Q : y ⋆→ v. Notice that, by the assumption (2), |P| ≥ 3, and so y ≠ v. In the smaller graph G−f, let S′ be a minimum set that separates u and v.

If |S′| ≥ k, then, by the induction hypothesis, there are k independent paths u ⋆→ v in G−f. But these are paths of G, and the claim is clear in this case.
If, on the other hand, |S′| < k, then u and v are still connected in G−S′. Every path u ⋆→ v in G−S′ necessarily travels along the edge f = xy, and so x, y ∉ S′. Let S_x = S′ ∪ {x} and S_y = S′ ∪ {y}. These sets separate u and v in G (by the above fact), and they have size k. By our current assumption, the vertices of S_y are adjacent to v, since the path P is shortest and so uy ∉ G (meaning that u is not adjacent to all of S_y). The assumption (2) yields that u is adjacent to all of S_x, since ux ∈ G. But now both u and v are adjacent to the vertices of S′, which contradicts the assumption (2). ⊓⊔

Theorem 2.8 (Menger (1927)). A graph G is k-connected if and only if every two vertices are connected by at least k independent paths.

Proof. If any two vertices are connected by k independent paths, then it is clear that κ(G) ≥ k.

Conversely, suppose that κ(G) = k, but that G has vertices u and v connected by at most k−1 independent paths. By Theorem 2.7, it must be that e = uv ∈ G. Consider the graph G−e. Now u and v are connected by at most k−2 independent paths in G−e, and by Theorem 2.7, u and v can be separated in G−e by a set S with |S| = k−2. Since ν_G > k (because κ(G) = k), there exists a w ∈ G that is not in S ∪ {u, v}. The vertex w is separated in G−e by S from u or from v; otherwise there would be a path u ⋆→ v in (G−e)−S. Say this vertex is u. The set S ∪ {v} has k−1 elements, and it separates u from w in G, which contradicts the assumption that κ(G) = k. This proves the claim. ⊓⊔

We state without proof the corresponding separation property for edge connectivity.

Definition. Let G be a graph. A uv-disconnecting set is a set F ⊆ E_G such that every path u ⋆→ v contains an edge from F.

Theorem 2.9. Let u, v ∈ G with u ≠ v in a graph G. Then the maximum number of edge-disjoint paths u ⋆→ v equals the minimum number k of edges in a uv-disconnecting set.

Corollary 2.4. A graph G is k-edge connected if and only if every two vertices are connected by at least k edge-disjoint paths.

Example 2.9. Recall the definition of the cube Q_k from Example 1.5. We show that κ(Q_k) = k. First of all, κ(Q_k) ≤ δ(Q_k) = k. Conversely, we show the claim by induction. Extract from Q_k the disjoint subgraphs: G_0 induced by {0u | u ∈ B^{k−1}} and G_1 induced by {1u | u ∈ B^{k−1}}. These are (isomorphic to) Q_{k−1}, and Q_k is obtained from the union of G_0 and G_1 by adding the 2^{k−1} edges (0u, 1u) for all u ∈ B^{k−1}.
Let S be a separating set of Q_k with |S| ≤ k. If both G_0−S and G_1−S were connected, also Q_k−S would be connected, since one pair (0u, 1u) necessarily remains in Q_k−S. So we can assume that G_0−S is disconnected. (The case for G_1−S is symmetric.) By the induction hypothesis, κ(G_0) = k−1, and hence S contains at least k−1 vertices of G_0 (and so |S| ≥ k−1). If there were no vertices from G_1 in S, then, of course, G_1−S is connected, and the edges (0u, 1u) of Q_k would guarantee that Q_k−S is connected; a contradiction. Hence |S| ≥ k.

Example 2.10. We have κ′(Q_k) = k for the k-cube. Indeed, by Whitney's theorem, κ(G) ≤ κ′(G) ≤ δ(G). Since κ(Q_k) = k = δ(Q_k), also κ′(Q_k) = k.

Algorithmic Problem. The connectivity problems tend to be algorithmically difficult. In the disjoint paths problem we are given a set of pairs (u_i, v_i) of vertices for i = 1, 2, ..., k, and it is asked whether there exist paths P_i : u_i ⋆→ v_i that have no vertices in common. This problem was shown to be NP-complete by Knuth in 1975. (However, for fixed k, the problem has a fast algorithm due to Robertson and Seymour (1986).)

Dirac's fans

Definition. Let v ∈ G and S ⊆ V_G such that v ∉ S in a graph G. A set of paths from v to a vertex in S is called a (v, S)-fan if they have only v in common. (Figure: a (v, S)-fan.)

Theorem 2.10 (Dirac (1960)). A graph G is k-connected if and only if ν_G > k and for every v ∈ G and S ⊆ V_G with |S| ≥ k and v ∉ S, there exists a (v, S)-fan of k paths.

Proof. Exercise. ⊓⊔

Theorem 2.11 (Dirac (1960)). Let G be a k-connected graph for k ≥ 2. Then for any k vertices, there exists a cycle of G containing them.

Proof. First of all, since κ(G) ≥ 2, G has no cut vertices, and thus no bridges. It follows that every edge, and thus every vertex, of G belongs to a cycle.

Let S ⊆ V_G be such that |S| = k, and let C be a cycle of G that contains the maximum number of vertices of S. Let the vertices of S ∩ V_C be v_1, ..., v_r, listed in order around C so that each pair (v_i, v_{i+1}) (with indices modulo r) defines a path along C (except in the special case where r = 1). Such a path is referred to as a segment of C.

If C contains all vertices of S, then we are done; otherwise, suppose v ∈ S is not on C. It follows from Theorem 2.10 that there is a (v, V_C)-fan of at least min{k, |V_C|} paths. Therefore there are two paths P : v ⋆→ u and Q : v ⋆→ w in such a fan that end in the same segment (v_i, v_{i+1}) of C. Then the path W : u ⋆→ w (or w ⋆→ u) along C contains all vertices of S ∩ V_C. But now PWQ⁻¹ is a cycle of G that contains v and all v_i for i ∈ [1, r]. This contradicts the choice of C, and proves the claim. ⊓⊔
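Theorem 2.9 and Corollary 2.4 can be illustrated computationally: with unit capacities, repeatedly augmenting along shortest residual paths counts the edge-disjoint u–v paths, and on the cube Q_3 this count should equal κ′(Q_3) = 3 (Example 2.10). The sketch below is not from the text; the adjacency-set representation and the Edmonds–Karp-style search are our own choices.

```python
from collections import deque

def edge_disjoint_paths(adj, u, v):
    """Maximum number of edge-disjoint u-v paths (Menger, edge version).

    Models each undirected edge as two directed unit-capacity arcs and
    augments along BFS paths in the residual graph.
    adj: dict mapping each vertex to the set of its neighbours.
    """
    # residual capacity of every directed arc starts at 1
    cap = {(a, b): 1 for a in adj for b in adj[a]}
    paths = 0
    while True:
        # breadth-first search for an augmenting path u -> v
        parent = {u: None}
        queue = deque([u])
        while queue and v not in parent:
            a = queue.popleft()
            for b in adj[a]:
                if b not in parent and cap[(a, b)] > 0:
                    parent[b] = a
                    queue.append(b)
        if v not in parent:
            return paths
        # push one unit of flow back along the path found
        b = v
        while parent[b] is not None:
            a = parent[b]
            cap[(a, b)] -= 1
            cap[(b, a)] += 1
            b = a
        paths += 1
```

For Q_3 (vertices 0–7, two vertices adjacent when their binary labels differ in one bit), the function returns 3 for every pair of distinct vertices, in agreement with Menger's theorem.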
3 Tours and Matchings

3.1 Eulerian graphs

The first proper problem in graph theory was the Königsberg bridge problem. In general, this problem concerns travelling in a graph while trying to avoid using any edge twice. In practice these eulerian problems occur, for instance, in optimizing distribution networks – such as delivering mail, where in order to save time each street should be travelled only once. The same problem occurs in mechanical graph plotting, where one avoids lifting the pen off the paper while drawing the lines.

Euler tours

Definition. A walk W = e_1 e_2 ... e_n is a trail if e_i ≠ e_j for all i ≠ j. An Euler trail of a graph G is a trail that visits every edge once. A connected graph G is eulerian if it has a closed trail containing every edge of G. Such a trail is called an Euler tour.

Notice that if W = e_1 e_2 ... e_n is an Euler tour (and so E_G = {e_1, e_2, ..., e_n}), also e_i e_{i+1} ... e_n e_1 ... e_{i−1} is an Euler tour for all i ∈ [1, n]. A complete proof of the following Euler's Theorem was first given by Hierholzer in 1873.

Theorem 3.1 (Euler (1736), Hierholzer (1873)). A connected graph G is eulerian if and only if every vertex has an even degree.

Proof. (⇒) Suppose W : u ⋆→ u is an Euler tour. Let v (≠ u) be a vertex that occurs k times in W. Every time an edge arrives at v, another edge departs from v, and therefore d_G(v) = 2k. Also, d_G(u) is even, since W starts and ends at u.

(⇐) Assume G is a nontrivial connected graph such that d_G(v) is even for all v ∈ G. Let W = e_1 e_2 ... e_n : v_0 ⋆→ v_n with e_i = v_{i−1}v_i be a longest trail in G. It follows that all edges e = v_n w ∈ G are among the edges of W, for, otherwise, W could be prolonged to We. In particular, v_0 = v_n, that is, W is a closed trail. (Indeed, if it were v_n ≠ v_0 and v_n occurred k times in W, then d_G(v_n) = 2(k−1) + 1, and that would be odd.)

If W is not an Euler tour, then, since G is connected, there exists an edge f = v_i u ∈ G for some i which is not in W. However, now e_{i+1} ... e_n e_1 ... e_i f is a trail in G, and it is longer than W. This contradiction to the choice of W proves the claim. ⊓⊔
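Theorem 3.1 translates directly into a test: check that every degree is even and that the graph is connected. A Python sketch (the adjacency-list representation of a graph without isolated vertices is our own assumption, not from the text):

```python
from collections import deque

def is_eulerian(adj):
    """Euler's condition (Theorem 3.1): a graph, given as a dict mapping
    each vertex to a list of its neighbours, is eulerian iff it is
    connected and every vertex has even degree."""
    if any(len(nbrs) % 2 for nbrs in adj.values()):
        return False
    # connectivity check: BFS from an arbitrary vertex must reach all
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        a = queue.popleft()
        for b in adj[a]:
            if b not in seen:
                seen.add(b)
                queue.append(b)
    return len(seen) == len(adj)
```

K_5 is 4-regular and connected, hence eulerian; a path on three vertices has two odd-degree vertices; a disjoint union of two triangles has all degrees even but fails connectivity.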
Example 3.1. The k-cube Q_k is eulerian for even integers k, because Q_k is k-regular.

Theorem 3.2. A connected graph has an Euler trail if and only if it has at most two vertices of odd degree.

Proof. If G has an Euler trail u ⋆→ v, then, as in the proof of Theorem 3.1, each vertex w ∉ {u, v} has an even degree.

Assume then that G is connected and has at most two vertices of odd degree. If G has no vertices of odd degree, then, by Theorem 3.1, G has an Euler trail. Otherwise, by the handshaking lemma, every graph has an even number of vertices of odd degree, and therefore G has exactly two such vertices, say u and v. Let H be the graph obtained from G by adding a new vertex w and the edges uw and vw. In H every vertex has an even degree, and hence H has an Euler tour, say u ⋆→ v → w → u. Here the beginning part u ⋆→ v is an Euler trail of G. ⊓⊔

The Chinese postman

The following problem is due to Guan Meigu (1962). Consider a village where a postman wishes to plan his route to save his legs, but still every street has to be walked through. This problem is akin to Euler's problem and to the shortest path problem.

Let G be a graph with a weight function α : E_G → R₊. The Chinese postman problem is to find a minimum weighted tour in G (starting from a given vertex, the post office).

If G is eulerian, then any Euler tour will do as a solution, because such a tour traverses each edge exactly once, and this is the best one can do. In this case the weight of the optimal tour is the total weight of the graph G, and there is a good algorithm for finding such a tour:

Fleury's algorithm:
• Let v_0 ∈ G be a chosen vertex, and let W_0 be the trivial path on v_0.
• Repeat the following procedure for i = 1, 2, ... as long as possible: supposing a trail W_i = e_1 e_2 ... e_i has been constructed, where e_j = v_{j−1}v_j, choose an edge e_{i+1} (≠ e_j for j ∈ [1, i]) so that
  (i) e_{i+1} has an end v_i, and
  (ii) e_{i+1} is not a bridge of G_i = G − {e_1, ..., e_i}, unless there is no alternative.

Notice that, as is natural, the weights α(e) play no role in the eulerian case.

Theorem 3.3. If G is eulerian, then any trail of G constructed by Fleury's algorithm is an Euler tour of G.

Proof. Exercise. ⊓⊔
If G is not eulerian, the poor postman has to walk at least one street twice. This happens, e.g., if one of the streets is a dead end, and in general if there is a street corner with an odd number of streets. We can attack this case by reducing it to the eulerian case as follows. An edge e = uv will be duplicated if it is added to G parallel to an existing edge e′ = uv with the same weight, α(e′) = α(e).

(Figure: three weighted graphs; in the second and third, edges have been duplicated.) Above we have duplicated two edges. The rightmost multigraph is eulerian.

There is a good algorithm by Edmonds and Johnson (1973) for the construction of an optimal eulerian supergraph by duplications. Unfortunately, this algorithm is somewhat complicated, and we shall skip it.

3.2 Hamiltonian graphs

In the connector problem we reduced the cost of a spanning graph to its minimum. There are different problems, where the cost is measured by an active user of the graph. For instance, in the travelling salesman problem a person is supposed to visit each town in his district, and this he should do in such a way that saves time and money. Obviously, he should plan the travel so as to visit each town once, and so that the overall flight time is as short as possible. In terms of graphs, he is looking for a minimum weighted Hamilton cycle of a graph, the vertices of which are the towns and the weights on the edges are the flight times. Unlike for the shortest path and the connector problems, no efficient reliable algorithm is known for the travelling salesman problem. Indeed, it is widely believed that no practical algorithm exists for this problem.

Hamilton cycles

Definition. A path P of a graph G is a Hamilton path if P visits every vertex of G once. Similarly, a cycle C is a Hamilton cycle if it visits each vertex once. A graph is hamiltonian if it has a Hamilton cycle.

Note that if C : u_1 → u_2 → ··· → u_n is a Hamilton cycle, then so is u_i → ... → u_n → u_1 → ... → u_{i−1} for each i ∈ [1, n], and thus we can choose where to start the cycle.

Example 3.2. It is obvious that each K_n is hamiltonian whenever n ≥ 3. Also, as is easily seen, K_{n,m} is hamiltonian if and only if n = m ≥ 2. Indeed, let K_{n,m} have a
bipartition (X, Y), where |X| = n and |Y| = m. Now, each cycle in K_{n,m} has even length, as the graph is bipartite, and thus the cycle visits the sets X and Y equally many times, since X and Y are stable subsets. But then necessarily |X| = |Y|.

Unlike for eulerian graphs (Theorem 3.1), no good characterization is known for hamiltonian graphs. Indeed, the problem of determining whether G is hamiltonian is NP-complete. There are, however, some interesting general conditions.

Lemma 3.1. If G is hamiltonian, then for every nonempty subset S ⊆ V_G, c(G−S) ≤ |S|.

Proof. Let ∅ ≠ S ⊆ V_G, u ∈ S, and let C : u ⋆→ u be a Hamilton cycle of G. Assume G−S has k connected components G_i, i ∈ [1, k]. The case k = 1 is trivial, and hence suppose that k > 1. Let u_i be the last vertex of C that belongs to G_i, and let v_i be the vertex that follows u_i in C. Now v_i ∈ S for each i by the choice of u_i, and v_j ≠ v_t for all j ≠ t, because C is a cycle and u_i v_i ∈ G for all i. Thus |S| ≥ k, as required. ⊓⊔

Example 3.3. Consider the graph on the right. In G, c(G−S) = 3 > 2 = |S| for the set S of black vertices. Therefore G does not satisfy the condition of Lemma 3.1, and hence it is not hamiltonian. Interestingly, this graph is (X, Y)-bipartite of even order with |X| = |Y|. It is also 3-regular.

Example 3.4. Consider the Petersen graph on the right, which appears in many places in graph theory as a counterexample for various conditions. This graph is not hamiltonian, but it does satisfy the condition c(G−S) ≤ |S| for all S ≠ ∅. Therefore the condition of Lemma 3.1 is not sufficient to ensure that a graph is hamiltonian.

The following theorem, due to Ore, generalizes an earlier result by Dirac (1952).

Theorem 3.4 (Ore (1962)). Let G be a graph of order ν_G ≥ 3, and let u, v ∈ G be such that d_G(u) + d_G(v) ≥ ν_G. Then G is hamiltonian if and only if G + uv is hamiltonian.

Proof. Denote n = ν_G. Let u, v ∈ G be such that d_G(u) + d_G(v) ≥ n. If uv ∈ G, then there is nothing to prove. Assume thus that uv ∉ G.

(⇒) This is trivial, since if G has a Hamilton cycle C, then C is also a Hamilton cycle of G + uv.

(⇐) Denote e = uv and suppose that G + e has a Hamilton cycle C. If C does not use the edge e, then it is a Hamilton cycle of G. Suppose thus that e is on C. We may then assume that C : u ⋆→ v → u. Now u = v_1 → v_2 → ... → v_n = v is a Hamilton
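The necessary condition of Lemma 3.1 can be verified exhaustively on small graphs, and Example 3.4's claim about the Petersen graph can be confirmed this way. A brute-force Python sketch follows (exponential in the number of vertices, so only practical for small ν_G; the representation is our own choice):

```python
from itertools import combinations
from collections import deque

def components(adj, removed):
    """Number of connected components of G - removed (vertex deletion)."""
    left = set(adj) - removed
    count = 0
    while left:
        count += 1
        queue = deque([left.pop()])
        while queue:
            a = queue.popleft()
            for b in adj[a]:
                if b in left:
                    left.discard(b)
                    queue.append(b)
    return count

def satisfies_hamiltonian_necessary(adj):
    """Check Lemma 3.1's condition c(G - S) <= |S| over every nonempty
    proper subset S of the vertices (brute force)."""
    vertices = list(adj)
    for size in range(1, len(vertices)):
        for subset in combinations(vertices, size):
            if components(adj, set(subset)) > size:
                return False
    return True
```

The Petersen graph passes the check (as Example 3.4 asserts), while a star K_{1,3} fails it: removing the centre leaves three components but |S| = 1.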
George Fleischer
Marianopolis College
Mathematics & Computer Science
Some of the course files shown here are written using the
Maple symbolic mathematics program. You do not need to know Maple
to understand them. Just concentrate on the Math. You can of
course pick up the basics of Maple syntax by studying the
examples. For further Maple illustrations in a variety of our
courses, see Prof. John
Osborne's Home Page. | 677.169 | 1 |
Mathematical Ideas
9780321168085
ISBN:
0321168089
Edition: 10th Pub Date: 2003 Publisher: Addison Wesley
Summary: Covering a variety of mathematical ideas, this text includes information on: the art of problem solving; the basic concepts of set theory; logic; numeration and mathematical systems; and number theory.
Charles D. Miller is the author of Mathematical Ideas, published 2003 under ISBN 9780321168085 and 0321168089. One hundred twenty five Mathematical Ideas textbooks are available for sale on ValoreBooks.com, twenty four used from the cheapest price of $0.06, or buy new starting at $34
Find a West Boxford
These can be overcome by embedding feature, depending on the format. The rules for algebra form an elegant language from which we can develop amazingly useful counting relationships for a broad range of applications, from leaving a tip to calculating the impact of multiple influences on a complex system.
...The lectures were two to three classes per term and 6 times per year. Each class had up to 60 students. Topics lectured on were interviewing, creating your career network, navigating social media, resume writing, creating your on-line business image, selling your skills on an interview and creating your 30 second elevator speech to best be able to present yourself and your skills. | 677.169 | 1 |
Elementary Algebra for College Students (8th Edition)
9780321620934
ISBN:
0321620933
Edition: 8 Pub Date: 2010 Publisher: Pearson
Summary: Allen R. Angel is the author of Elementary Algebra for College Students (8th Edition), published 2010 under ISBN 9780321620934 and 0321620933. Two hundred sixty Elementary Algebra for College Students (8th Edition) textbooks are available for sale on ValoreBooks.com, ninety five used from the cheapest price of $1.19, or buy new starting at $25.
The class that required me to use this book was Math 101 at Rockland Community College. The class was very effective, especially with the professor, who taught us each topic. It was a very cooperative class.
There is nothing I would change about this book. It offered problems to do and even showed exactly how to do them, with examples provided.
9780495392767
ISBN:
0495392766
Edition: 5 Pub Date: 2007 Publisher: Cengage Learning
Summary: This best selling author team explains concepts simply and clearly, without glossing over difficult points. Problem solving and mathematical modeling are introduced early and reinforced throughout, so that when students finish the course, they have a solid foundation in the principles of mathematical thinking. This comprehensive, evenly paced book provides complete coverage of the function concept and integrates subs...tantial graphing calculator materials that help students develop insight into mathematical ideas. The authors' attention to detail and clarity, as in James Stewart's market-leading Calculus text, is what makes this text the market leader.
James Stewart is the author of Precalculus: Mathematics for Calculus, Enhanced Review Edition, 5th Edition, published 2007 under ISBN 9780495392767 and 0495392766. Ninety one Precalculus: Mathematics for Calculus, Enhanced Review Edition, 5th Edition textbooks are available for sale on ValoreBooks.com, seventy one used from the cheapest price of $1.29, or buy new starting at $269.55.
Catch the Waves to Calculus
Integrated Calculus and Analytical Geometry
Target Audience
Undergraduates with at least college algebra and trigonometry
Course Description
A two-semester calculus sequence with undergraduate research projects in a wide variety of disciplines. The focus is on the mathematical foundation of the Fourier series, the Fourier and the inverse Fourier Transform, and the theoretical background of the Fast Fourier Transform.
Fourier series and Fourier and inverse Fourier Transforms are used in many higher-level mathematics, science and engineering classes, such as
partial and ordinary differential equations,
physics, chemistry, biology,
medicine, communication science, photography, linguistics
The Fast Fourier Transform method is used in many areas of research and in industry:
In engineering: devices engineered on the basis of Fast Fourier Transforms are used in the design and construction of smooth-running cars, trains, and airplanes. These devices use Fast Fourier Transforms to eliminate noises and vibrations.
In individual projects, the students conduct research on actual applications from their own area of studies. They are supervised jointly by the mathematics instructor and an instructor from the other discipline. | 677.169 | 1 |
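The Fast Fourier Transform mentioned above reduces the O(n²) discrete Fourier transform to O(n log n) by recursively splitting the input into even- and odd-indexed subsequences. A minimal radix-2 Cooley–Tukey sketch in Python (power-of-two lengths only; an illustration of ours, not taken from the course materials):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # combine step: butterfly with the twiddle factor e^(-2*pi*i*k/n)
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

def dft(x):
    """Naive O(n^2) discrete Fourier transform, for comparison."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]
```

Both functions agree to floating-point accuracy; the recursive version is the one that scales to the long signals used in audio and image processing.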
Find a Talmo Algebra 2
...Calculus is so often seen as utterly mystifying, but as usual, the problem is quite frequently algebra. A firm foundation must be built for an understanding of the derivative: what is the difference quotient and average rate of change, and how do we use these to derive the instantaneous rate of ... | 677.169 | 1 |
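The difference quotient mentioned here is just the slope of a secant line; as the step h shrinks it approaches the instantaneous rate of change. A small illustration (the example function is our own choice):

```python
def average_rate_of_change(f, a, h):
    """The difference quotient (f(a+h) - f(a)) / h: the slope of the
    secant line through (a, f(a)) and (a+h, f(a+h))."""
    return (f(a + h) - f(a)) / h
```

For f(x) = x² at a = 3, the quotient with h = 0.1 is 6.1, and it approaches the derivative f′(3) = 6 as h shrinks.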
National Curriculum: Mathematics
The National Curriculum for Mathematics was introduced into England, Wales and Northern Ireland as a nationwide curriculum for primary and secondary state schools following the Education Reform Act 1988. The basis of the curriculum and its associated testing was to standardise the content taught across schools in order to raise standards of attainment in mathematics. The National Curriculum (NC) went hand-in-hand with the development of national tests (SATs) at the end of the Key Stages. The NC introduced Programmes of Study (PoS), Attainment Targets (AT) levels and Statements of Attainment (SoA).
Following the Cockcroft committee recommendations ([i]Mathematics Counts[/i]), Using and Applying Mathematics was a significant inclusion in the curriculum through ATs 1 and 9 which included using mathematics in practical tasks, in real life problems and to investigating within mathematics itself.
The National Curriculum required all schools to address the issue of teaching solely for the acquisition of knowledge and skills in isolation from the application of mathematics, and to develop a teaching and learning approach in which the uses and applications of mathematics permeate and influence all work in mathematics. This was a major undertaking for schools, and perhaps the single most significant challenge for the teaching of mathematics required by the National Curriculum in its aim of raising standards for all students.
The National Curriculum required students to develop a range of methods for calculating - from mental methods through to the use of electronic calculators. In order to progress through the levels, students at every stage were to be encouraged to develop their own methods for doing calculations, a feature which was developed further through the Numeracy project and the [i]Framework for Teaching Mathematics[/i].
Although [i]Mathematics in the National Curriculum[/i] underwent a number of revisions, the mathematical content changed very little and kept assessment as a major constituent. To enable teachers to make sense of the new curriculum, non-statutory guidance and training materials were published to go alongside training for all teachers.
Resources
These resources from the National Curriculum Council were published in 1991 following the recently introduced National Curriculum. They were designed to help schools in their own INSET programmes, and were written with local or departmental groups of teachers in mind. The introduction of the National Curriculum posed...
The Education Act 2002 implemented the legislative commitments set out in the White paper Schools Achieving success. It was a substantial and important piece of legislation intended to raise standards, promote innovation in schools and reform education law.The Act added the Foundation Stage as a statutory part of the...
A resource from the National Curriculum Council (NCC). One of the first acts of the new Labour government was to announce national targets for literacy and numeracy, these were for 75 per cent of 11 year olds to achieve the standards expected for their age in mathematics by 2002.
The Numeracy Task Force was...
A report from the National Curriculum Council (NCC). In January 1991 the Secretary of State for Education and Science announced an urgent review of the attainment targets in mathematics because:• the structure of the 14 targets was proving an obstacle to manageable and sound testing, and to intelligible reporting to... | 677.169 | 1 |
en.wikipedia.org/wiki/Special_functions
Special functions are particular mathematical functions which have more or less established names and notations due to their importance in mathematical analysis ...
math lessons and math homework help from basic math to algebra, geometry and beyond. Students, teachers, parents, and everyone can find solutions to their math ...
mathworld.wolfram.com/SpecialFunction.html
Special Function. A function (usually named after an early investigator of its properties) having a particular use in mathematical physics or some other branch of ...
math.tutorcircle.com/algebra/how-to-graph-special-functions.html
A graph is actually a diagrammatic representation of a function which relates two or more variables. For instance y = f(x) showing a function 'y' which depends and ...
people.sc.fsu.edu/~jburkardt/f_src/special_functions/special_functions.html
SPECIAL_FUNCTIONS is a FORTRAN90 library which computes the value of various special functions, by Shanjie Zhang, Jianming Jin. Jianming Jin makes the text ...
en.wikipedia.org/wiki/List_of_mathematical_functions
In mathematics, many functions or groups of functions are important enough to deserve their own names. This is a listing of articles which explain some of these ...
reference.wolfram.com/mathematica/tutorial/SpecialFunctions.html
The Wolfram System includes all the common special functions of mathematical physics found in standard handbooks. Each of the various classes of functions is ...
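A few of the special functions named in these references are available directly in Python's standard math module (dedicated libraries such as scipy.special cover many more). A short sketch of some classical identities:

```python
import math

def gamma_interpolates_factorial(n):
    """Gamma(n + 1) = n!: the gamma function extends the factorial."""
    return math.isclose(math.gamma(n + 1), math.factorial(n))

# Gamma(1/2) = sqrt(pi), a classical special value
half_integer_value = math.gamma(0.5)

def erf_is_odd(x):
    """The error function satisfies erf(-x) = -erf(x)."""
    return math.isclose(math.erf(-x), -math.erf(x))
```

These one-line checks are a quick way to get a feel for why such functions earn "more or less established names and notations."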
You are here
A Transition to Mathematics with Proofs
Publisher:
Jones and Bartlett Learning
Number of Pages:
354
Price:
111.95
ISBN:
9781449627782
Cullinane's book, geared at undergraduates making the transition from calculus courses to proof-intensive courses such as abstract algebra, has a suitable, if not particularly unsurprising, structure. He begins with a conversational chapter introducing and motivating the ideas of proof and of precise mathematical writing. He follows this with core chapters on elementary logic and on the planning and writing of proofs. Along the way, he introduces basic set theory objects and axioms, as well as exploring properties of the real numbers, in order to provide a context in which students can write proofs. These are followed by topical chapters introducing other mathematical playgrounds in which students can practice their proof-writing skills.
I enjoyed reading the book. It has an accessible style; I can picture some undergraduates actually reading this book, rather than just skimming through it for definitions and examples. It covers the basics of proof-writing, and draws attention to most of the common pitfalls into which students tumble, such as assuming what you want to prove or neglecting to formally choose an arbitrary element in a predicate's domain when attempting to prove a universal statement.
I like the book's emphasis on the distinction between proof-planning and proof-writing. Throughout the book, Cullinane provides informal discussions or sketches of arguments preceding their rigorous distillations as proofs. The difference between the linear progression of logic in proofs versus its often non-linear progression in the thought-processes leading up to the proofs is something that is often elided in proof-instruction. After reading Cullinane's book I recognize that this is something I need to emphasize more in my own classes.
Cullinane discusses many useful proof techniques, such as the uses of contradiction, contraposition, and induction; the proving of if-then statements; and the demonstratation of the equality of sets. (Though I must admit, I have always thought of what he terms "contradiction" and "contraposition" proof techniques to be more or less the same thing, and wonder if distinguishing between them might be more confusing for students than necessary). Finally, each section of the book contains practice problems, embedded in appropriate places within the text, whose solutions appear at the end of the section. I believe students will find these very helpful.
There are some aspects of the book of which I was less fond. Each section begins with questions "to guide the reading of the section." While this is something I approve of in theory, in practice I forgot about the questions immediately after reading them. I think keeping them in mind might be even trickier for students, since many of the guiding-question words will be as yet unfamiliar to them. That said, the inclusion of these questions doesn't detract from the quality of the book.
More significant to me was the use of the word undefined to describe terms such as "set" which are fundamental mathematical concepts Cullinane is — for obvious reasons — avoiding formally defining. While I completely agree that one should simply appeal to intuition, rather than attempting to formally define such notions, "undefined" seems to me to be a very poor choice of a word for the classification of such terms. For instance, it leads to the peculiar phrase "A formal version of definition adheres… to this requirement by using only undefined terms, previously defined terms, and allowable logical phrases…" (27). Here, by "undefined" Cullinane means the specific terms such as "set" and "membership" which he has identified as fundamental but not easily definable, but really, there are myriad undefined words which definitions should not use, such as Curious George's "blimlimlim." Simply choosing a different word, such as "fundamental" or "core" to describe these so-called "undefined" terms would have provided ample clarification.
After his proof-writing chapter, Cullinane provides three topical chapters. I see good reasons for including the first two of these. Chapter 5 focuses on relations and functions; while it is not necessary to discuss these in a proof-writing course, the notions of one-to-one/onto maps and equivalence relations are central to linear algebra, abstract algebra, and other courses for which a proof-writing course is often a prerequisite: an introduction to these concepts prior to these classes is, in my mind, highly useful. Chapter 6 focuses on number theory and combinatorics, and uses the study of natural numbers as a vehicle for introducing the crucial proof concept of induction. Chapter 7, however, feels a bit arbitrary, introducing the areas of graph theory, group theory, and set cardinality (this third topic could have been nicely tucked into Chapter 6). That said, it does provide further topics whose studies involve proofs; this may be of use in courses which can get through the previous material sufficiently quickly (and also may allow its adoption as a text in a discrete math course).
In short, this is a good, if not particularly groundbreaking, text. It would work very well, I think, for a one-semester sophomore or junior-level introduction to proofs course (though it would be insufficient for an honors or high-level version of such a course). Its exercises are plentiful and well-organized into subsections pertaining to certain goals or topics (e.g., "Proving an Or Statement" or "The Binomial Theorem"), and it is eminently readable. I would definitely consider using it if I teach a proofs class in the near future. | 677.169 | 1 |
Bacliff Statistics
Steve O.
...Finite math is often taught as mathematical models with applications. In this course, students use algebraic, graphical, and geometric reasoning to recognize patterns and structure, to model information, and to solve problems from various disciplines. The course typically provides a survey of mathematical techniques used in the working world. | 677.169 | 1 |
The Math Plague
By Dr. Sherry Mantyka
While there are barriers to learning mathematics, Dr. Sherry Mantyka says that none of these are insurmountable. The Math Plague looks at a myriad of stumbling blocks and provides a good way around them.
Dr. Mantyka, an associate professor in the Department of Mathematics and Statistics, and the director of the Mathematics Learning Centre, has years of experience working successfully with thousands of post-secondary under-achievers in mathematics. It all began in 1988 when province-wide data on the math skill levels of high school graduates entering Memorial University was collected. This data showed that 44 per cent of these students graduating from an academic math stream in high school had a math skill level of Grade six or below, and 38 per cent had a junior high math skill level.
Furthermore, in follow-up studies of students' performance in a university pre-calculus course, it was discovered that the pass rates for these two groups were only 20 and 50 per cent, respectively.
In 1988, in response to this situation, Dr. Mantyka founded the Mathematics Learning Centre (MLC) at Memorial University to investigate why this was so and how to correct it. Her work focused on the students with the lowest skill levels because initially the university allowed students to self-select to participate in the program at the MLC.
The Math Plague is an amalgamation of the things Dr. Mantyka learned at the MLC. The book is divided into 38 small sections, each containing a confirmed principle for the effective learning of mathematics.
The Math Plague is published by MayT Consulting and can be ordered by calling (709) 579-5879. | 677.169 | 1 |
Session Overview
In this session we will start our study of linear equations, which is probably the most important class of differential equations. We will introduce the ideas and terminology of superposition, systems, input and response, which will be used for the rest of the course.
Disclaimer: This blog post has been contributed by Dr. Nicola Wilkin, Head of Teaching Innovation (Science), College of Engineering and Physical Sciences and Jonathan Watkins from the University of Birmingham Maple T.A. user group*.
We all know the problem. During the course of a degree, students become experts at solving problems when they are given the sets of equations that they need to solve. As anyone will tell you, the skill they often lack is the ability to produce these sets of equations in the first place. With Maple T.A. it is a fairly trivial task to ask a student to enter the solution to a system of equations and have the system check if they have entered it correctly. I speak with many lecturers who tell me they want to be able to challenge their students, to think further about the concepts. They want them to be able to test if they can provide the governing equations and boundary conditions to a specific problem.
With Maple T.A. we now have access to a math engine that enables us to test whether a student is able to form this system of equations for themselves as well as solve it.
In this post we are going to explore how we can use Maple T.A. to set up this type of question. The example I have chosen is 2D Couette flow. For those of you unfamiliar with this, have a look at this wikipedia page explaining the important details.
In most cases I prefer to use the question designer to create questions. This gives a uniform interface for question design and the most flexibility over layout of the question text presented to the student.
On the Questions tab, click New question link and then choose the question designer.
For the question title enter "System of equations for Couette Flow".
For the question text enter the text
The image below shows laminar flow of a viscous incompressible liquid between two parallel plates.
What is the system of equations that specifies this system? You can enter them as a comma-separated list.
e.g. diff(u(y),y,y)+diff(u(y),y)=0,u(-1)=U,u(h)=0
You then want to insert a Maple graded answer box but we'll do that in a minute after we have discussed the algorithm.
When using the question designer, you often find answers are longer than the width of the answer box. One workaround is to change the width of all input boxes in a question using a style tag. Click the Source button on the editor and enter the following at the start of the question:
In the algorithm, I always set this variable to $TA for consistency across my questions. To check that there is a solution to the system, I use a Maple call to the dsolve function; this returns the solution to the provided system of equations. Pressing the refresh button next to the algorithm performs these operations and checks the teacher's answer.
The key part of this question is the grading code in the Maple-graded answer box. Let's go ahead and add the answer box to the question text; I add it at the end of the text we added in step 3. Click Insert Response area and choose the Maple-graded answer box in the left-hand menu. For the Answer, enter the $sys variable that we defined in the algorithm. For the grading code, enter: a:=dsolve({$RESPONSE}): evalb({$sol}={a})
This code checks that the student's system of equations produces the same solution as the teacher's. Asking the question in this way allows a more open-ended response from the student.
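Because grading compares the solutions of the two systems rather than the equations themselves, any algebraically equivalent formulation is accepted. The same idea can be cross-checked outside Maple T.A.; here is a SymPy sketch, where the fixed plate at y = 0 and the unit-speed plate at y = 1 are my assumed boundary values, not the question's:

```python
from sympy import Function, Eq, dsolve, symbols

y = symbols('y')
u = Function('u')

def solve_system(ode, ics):
    """Solve an ODE with boundary conditions and return u(y)."""
    return dsolve(ode, u(y), ics=ics).rhs

# "Teacher's" system: plane Couette flow, lower plate fixed,
# upper plate (at y = 1) moving at unit speed -- assumed values.
teacher = solve_system(Eq(u(y).diff(y, 2), 0), {u(0): 0, u(1): 1})

# A student's equivalent formulation of the same physics.
student = solve_system(Eq(u(y).diff(y, y), 0), {u(0): 0, u(1): 1})

# Grade by comparing solutions, not the raw equations.
correct = (teacher - student).simplify() == 0
print(correct)  # -> True
```

Grading by solution rather than by syntactic form is what makes the open-ended question style workable.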
To finish off, make sure the expression type is Maple syntax and that Text entry only is selected.
Press OK and then Finish on the Question designer screen.
That is the question completed. To preview a working copy of the question, have a look here at the live preview of this question. Enter the system of equations and click How did I do?
I have included a downloadable version of the question that contains the .xml file and image for this question. Click this link to download the file. The question can also be found on the Maple T.A. cloud under "System of equations for Couette Flow".
* Any views or opinions presented are solely those of the author(s) and do not necessarily represent those of the University of Birmingham unless explicitly stated otherwise.
Disclaimer: This blog post has been contributed by Dr. Nicola Wilkin, Head of Teaching Innovation (Science), College of Engineering and Physical Sciences and Jonathan Watkins from the University of Birmingham Maple T.A. user group*.
If you have arrived at this post you are likely to have a STEM background. You may have heard of or had experience with Maple T.A. or similar products in the past. For the uninitiated, Maple T.A. is a powerful system for learning and assessment designed for STEM courses, backed by the power of the Maple computer algebra engine. If that sounds interesting enough to continue reading, let us introduce this series of blog posts for the MaplePrimes website, contributed by the Maple T.A. user group at the University of Birmingham (UoB), UK.
These posts mirror conversations we have had amongst the development team and with colleagues at UoB and as such are likely of interest to the wider Maple T.A. community and potential adopters. The implementation of Maple T.A. over the last couple of years at UoB has resulted in a strong and enthusiastic knowledge base which spans the STEM subjects and includes academics, postgraduates, undergraduates both as users and developers, and the essential IT support in embedding it within our Virtual Learning Environment (VLE), CANVAS at UoB.
By effectively extending our VLE such that it is able to understand mathematics we are able to deliver much wider and more robust learning and assessment in mathematics based courses. This first post demonstrates that by comparing the learning experience between a standard multiple choice question, and the same material delivered in a Maple TA context.
To answer this, let's compare how we might test whether a student can solve a quadratic equation, and what we can actually test for if we are not restricted to multiple choice. So that we all have a good understanding of the solution method, let's run through a typical paper-based example and see the steps to solving this sort of problem.
Here is an example of a quadratic:

x^2 - 4x - 5 = 0

To find the roots of this quadratic means to find what values of x make this equation equal to zero. Clearly we can just guess values. For example, guessing 0 would give 0^2 - 4(0) - 5 = -5, while guessing -1 gives (-1)^2 - 4(-1) - 5 = 0.

So 0 is not a root but -1 is.
There are a few standard methods that can be used to find the roots. The point, though, is that the answer to this sort of question takes the form of a list of numbers; i.e., the above example has the roots -1, 5. For quadratics there are always two roots. In some cases the two roots could be the same number, and they are called repeated roots. So a student may want to answer this question as a pair of different numbers 3, -5, the same number repeated 2, 2, or a single number 2. In the last case they may list a repeated root only once, or perhaps they could find only one root from a pair. Either way, there is quite a range of answer forms for this type of question.
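The arithmetic above is easy to verify in any language. Here is a small Python sketch of the quadratic formula, using x^2 - 4x - 5, a quadratic consistent with the roots -1 and 5 discussed here:

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []            # no real roots
    r = math.sqrt(disc)
    roots = {(-b - r) / (2 * a), (-b + r) / (2 * a)}
    return sorted(roots)     # a repeated root is listed once

# x^2 - 4x - 5 = (x + 1)(x - 5): roots -1 and 5
print(quadratic_roots(1, -4, -5))   # -> [-1.0, 5.0]
# x^2 - 4x + 4 = (x - 2)^2: one repeated root
print(quadratic_roots(1, -4, 4))    # -> [2.0]
```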
With the basics covered let us see how we might tackle this question in a standard VLE. Most are not designed to deal with lists of variable length and so we would have to ask this as a multiple choice question. Fig. 1, shows how this might look.
Fig 1: Multiple choice question from a standard VLE
Unfortunately asking the question in this way gives the student a lot of implicit help with the answer and students are able to play a process of elimination game to solve this problem rather than understand or use the key concepts.
They can just put the numbers in and see which work...
Let's now see how we may ask this question in Maple T.A. Fig. 2 shows how the question would look. Clearly this is not multiple choice: the student is encouraged to answer the question using a simple list of numbers separated by commas. Students are not helped by a list of possible answers and are left to genuinely evaluate the problem. They are able to provide a single root, or both if they can find them, and moreover the question is not fussy about the way students provide repeated roots. After attempting the question in formative mode, a student is able to review their answer and the teacher's answer, as well as question-specific feedback (Fig. 3). We'll return to the power of the feedback that can be incorporated in a later post.
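The tolerant answer-handling described above can be sketched in a few lines. This is not Maple T.A.'s actual grading code, just an illustration of the policy: order and repetition are ignored, and (as a design choice here) a single correct root of a pair is accepted:

```python
def grade_roots(answer: str, true_roots) -> bool:
    """Accept a comma-separated list of roots, ignoring order and
    repetition, as long as every entry given is a genuine root and
    at least one root is given."""
    try:
        given = {float(tok) for tok in answer.split(',')}
    except ValueError:
        return False         # not a list of numbers
    return bool(given) and given <= set(true_roots)

print(grade_roots("-1, 5", [-1.0, 5.0]))  # -> True
print(grade_roots("2, 2", [2.0]))         # -> True  (repeated root)
print(grade_roots("5", [-1.0, 5.0]))      # -> True  (one root of a pair)
print(grade_roots("0", [-1.0, 5.0]))      # -> False
```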
Fig. 2: Free response question in Maple T.A.
Fig. 3: Grading response from Maple T.A.
The demo of this question and others presented in this blog, are available as live previews through the UoB Maple T.A. user group site.
The question can be downloaded from here and imported as a course module to your Maple T.A. instance. It can also be found on the Maple TA cloud by searching for "Find the roots of a quadratic". Simply click on the Clone into my class button to get your own version of the question to explore and modify.
* Any views or opinions presented are solely those of the author(s) and do not necessarily represent those of the University of Birmingham unless explicitly stated otherwise.
This January 28th, we will be hosting another full-production, live streaming webinar featuring an all-star cast of Maplesoft employees: Andrew Rourke (Director of Teaching Solutions), Jonny Zivku (Maple T.A. Product Manager), and Daniel Skoog (Maple Product Manager). Attend the webinar to learn how educators all around the world are using Maple and Maple T.A. in their own classrooms.
Any STEM educator, administrator, or curriculum coordinator who is interested in learning how Maple and Maple T.A. can help improve student grades, reduce drop-out rates, and save money on administration costs will benefit from attending this webinar.
The Joint Mathematics Meetings are taking place this week (January 6 – 9) in Seattle, Washington, U.S.A. This will be the 99th annual winter meeting of the Mathematical Association of America (MAA) and the 122nd annual meeting of the American Mathematical Society (AMS).
Maplesoft will be exhibiting at booth #203 as well as in the networking area. Please stop by our booth or the networking area to chat with me and other members of the Maplesoft team, as well as to pick up some free Maplesoft swag or win some prizes.
Given the size of the Joint Math Meetings, it can be challenging to pick which events to attend. Hopefully we can help by suggesting a few Maple-related talks and events:
Maplesoft is hosting a catered reception and presentation 'Challenges of Modern Education: Bringing Math Instruction Online' on Thursday, January 7th at 18:00 in the Cedar Room at the Seattle Sheraton. You can find more details and registration information here:
Another not to miss Maple event is "30 Years of Digitizing Mathematical Knowledge with Maple", presented by Edgardo Cheb-Terrab, on Thursday, January 7 at 10:00 in Room 603 of the Convention Center.
I have a grudging respect for Victorian engineers. Isambard Kingdom Brunel, for example, designed bridges, steam ships and railway stations with nothing but intellectual flair, hand-calculations and painstakingly crafted schematics. His notebooks are digitally preserved, and make for fascinating reading for anyone with an interest in the history of engineering.
If computational support is needed, engineers often choose spreadsheets. They're ubiquitous, and the barrier to entry is low. It's just too easy to fire-up a spreadsheet and do a few simple design calculations.
Spreadsheets are difficult to debug, validate and extend.
Spreadsheets are great at manipulating tabular data. I use them for tracking expenses and budgeting.
However, the very design of spreadsheets encourages the propagation of errors in equation-oriented engineering calculations
Results are difficult to validate because equations are hidden and written in programming notation
You're often jumping about from one cell to another in a different part of the worksheet, with no clear visual roadmap to signpost the flow of a calculation
For these limitations alone, I doubt if Brunel would have used a spreadsheet.
Technology has now evolved to the point where an engineer can reproduce the design metaphor of Brunel's paper notebooks in software – a freeform mix of calculations, text, drawings and equations in an electronic notebook. A number of these tools are available (including Maple, available via the APA website).
Modern calculation tools reproduce the design metaphor of hand calculations.
Additionally, these modern software tools can do math that is improbably difficult to do by hand (for example, FFTs, matrix computation and optimization) and connect to CAD packages.
For example, Brunel could have designed the chain links on the Clifton Suspension Bridge, and updated the dimensions of a CAD diagram, while still maintaining the readability of hand calculations, all from the same electronic notebook.
That seems like a smarter choice.
Would I go back to the physical notebooks that Brunel diligently filled with hand calculations? Given the scrawl that I call my handwriting, probably not.
Since we're almost at the end of the year, I thought it would be interesting to look back at our most popular webinars for academics in 2015. I found that they fell into one of two categories: live streaming webinars featuring Dr. Robert Lopez and Maple how-to tutorials. (If you missed the live presentation, you can watch the recordings of all these webinars below.)
The first and second most popular webinar were, unsurprisingly, both of the live streaming webinars that featured Dr. Robert Lopez (Emeritus Professor at Rose Hulman Institute of Technology and Maple Fellow at Maplesoft). These webinars were streamed live to an audience and allowed many people to get their first glimpse of the man behind the Clickable Calculus series and Teaching Concepts with Maple:
1. Eigenpairs Enlivened
In this webinar, Dr. Robert Lopez demonstrates how Maple can enhance the task of teaching the eigenpair concept, and shows how Maple bridges the gap between the concept and the algorithms by which students are expected to practice finding eigenpairs.
2. Resequencing Concepts and Skills via Maple's Clickable
In this webinar, Dr. Lopez presents examples of what "resequencing" looks like when implemented with Maple's point-and-click syntax-free paradigm. Not only can Maple be used to elucidate the concept, but in addition, it can be used to illustrate and implement the manipulations that ultimately the student must master.
The next three were all brief webinars on how to complete specific tasks in Maple 2015. Just under a dozen of these were created in 2015 and they were all quite popular, but these three stood out above the rest:
3. Working with Data Sets in Maple
This video walks through examples of working with several types of data in Maple, including visualizing stock and commodity data, forecasting future temperatures using weather data, and analyzing macroeconomic data, such as employment statistics, GDP and other economic indicators.
4. Custom Color Schemes in Maple
This webinar provides an overview of the colorscheme option for coloring surfaces, curves and collections of points in Maple, including how to color with gradients, according to function value or point position. Examples of how the colorscheme option is used with various commands from the Maple library are also demonstrated.
5. Working with Units in Maple
Maple 2015 allows for more fluid and natural interaction with units. This webinar provides an overview of the new unit formatting controls and new Temperature object, and demonstrates how to compute with units and tolerances.
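Several of these webinars cover computations that are easy to replicate outside Maple as well. For instance, the eigenpair computation behind the first webinar can be sketched in SymPy (the 2x2 matrix is my own example, not one from the webinar):

```python
import sympy as sp

M = sp.Matrix([[2, 1],
               [1, 2]])

# eigenvects() returns (eigenvalue, algebraic multiplicity,
# basis of eigenvectors) for each eigenvalue.
for val, mult, vecs in M.eigenvects():
    for v in vecs:
        # Verify the defining property of an eigenpair: M v = lambda v
        assert sp.simplify(M * v - val * v) == sp.zeros(2, 1)
        print(val, list(v))
```

For this matrix the eigenpairs are 1 with eigenvector (-1, 1) and 3 with eigenvector (1, 1).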
Are there any topics you'd like to see Robert cover in upcoming webinars? Or any Maple how-to videos you think would be a helpful addition to our library? Let us know in the comments below.
Here at Maplesoft, we like to foster innovation in technological development. Whether that is finding solutions to global warming, making medical discoveries that save millions, or introducing society to very advanced functional robots, Maplesoft is happy to contribute, support and encourage innovative people and organizations researching these complex topics. This year, we are delighted to have sponsored two contests in the robotics field that provide opportunities to think big and make an impact: Create the Future Design Contest and the International Space Apps Challenge.
Create the Future Design Contest
Established in 2002, and organized by TechBriefs, the goal of the Create the Future Design Contest is to help engineers bring their product design ideas to life. The overall 'mission of the contest is to benefit humanity, the environment, and the economy.' This year, there were a record 1,159 new product ideas submitted by students, engineers, and entrepreneurs from all over the globe. In the machinery/automation/robotics category, which Maplesoft sponsored, the project with the top votes was designed by two engineers who chose to name their innovation CAP Exoskeleton, a type of assistive robotic machine designed to aid the user in walking, squatting, and carrying heavy loads over considerable distances. It can either be used to enhance physical endurance for military purposes or to help the physically impaired perform daily tasks. A contest like Create the Future is a perfect opportunity, for engineers in particular, to learn, explore, and create.
The exploration of space has always been unique in its search for knowledge. The International Space Apps Challenge, a NASA incubator innovation program, is an 'international mass collaboration focused on space exploration that takes place over 48-hours in cities around the world'. It is a unique global competition where people rally together to find solutions to real world problems, bringing humanity closer to understanding the Earth, the universe, the human race, and robotics. These goals, the organizers believe, can be reached much faster if we combine the power of the seven billion or so brains that occupy the planet, not forgetting the six that are currently orbiting above us aboard the International Space Station. The competition is open to people of all ages and in all fields, including engineers, technologists, scientists, designers, artists, educators, students, entrepreneurs, and so on. With an astounding 13,846 participants from all over the world, several highly innovative solutions were presented.
Maplesoft sponsored the University of York location in the UK where the winning team of five modeled an app called CropOp, a communication tool that connects the government to local farmers with the goal of providing instantaneous, crucial information regarding pest breakout warnings, extreme weather, and other important updates. This UK-based team believes the quality and quantity of food produced will be improved, especially benefiting the undernourished communities in Africa. Maplesoft supports the Space Apps Challenge because it proves that collaboration makes for bigger and better discoveries that can save millions of people.
Donating Maplesoft software for contestants to use is part of the sponsorship. The real delight is to wait and see what innovative concepts they come up with. When we sponsor contests like these, we find it benefits our software as much as it does the participants. Plus, if the contestants can provide solutions to real world issues, well, that benefits everyone!
I have two linear algebra texts [1, 2] with examples of the process of constructing the transition matrix that brings a matrix to its Jordan form. In each, the authors make what seem to be arbitrary selections of basis vectors via processes that do not seem algorithmic. So recently, while looking at some other calculations in linear algebra, I decided to revisit these calculations in as orderly a way as possible.
First, I needed a matrix A with a prescribed Jordan form J. Actually, I started with the Jordan form J, and then constructed A via a similarity transform A = P J P^(-1). To avoid introducing fractions, I sought transition matrices P with determinant 1.
The eigenvalue has algebraic multiplicity 6. There are sub-blocks of size 3×3, 2×2, and 1×1. Consequently, there will be three eigenvectors, supporting chains of generalized eigenvectors having total lengths 3, 2, and 1. Before delving further into structural theory, we next find a transition matrix P with which to fabricate A.
The following code generates random 6×6 matrices of determinant 1, with integer entries in a small interval. For each such P, the matrix A = P J P^(-1) is computed. From these candidates, one is then chosen.
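Sampling random matrices and keeping those with determinant 1 works, but a determinant-1 matrix can also be built by construction: adding a multiple of one row to another leaves the determinant unchanged, so random row additions applied to the identity always produce a unimodular matrix. A SymPy sketch of this alternative (my own code, not the post's Maple):

```python
import random
import sympy as sp

def random_unimodular(n, steps=30, seed=0):
    """Build an integer matrix of determinant 1 by applying random
    row additions (determinant-preserving) to the identity."""
    rng = random.Random(seed)
    P = sp.eye(n)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)          # two distinct rows
        P[i, :] = P[i, :] + rng.choice([-1, 1]) * P[j, :]
    return P

P = random_unimodular(6)
print(P.det())  # -> 1
```

More steps mix the entries more thoroughly, at the cost of larger entries.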
After several such trials, the matrix was chosen as
for which the characteristic and minimal polynomials are
So, if we had started with just A, we'd now know that the algebraic multiplicity of its one eigenvalue is 6, and that there is at least one 3×3 sub-block in the Jordan form. We would not know if the other sub-blocks were all 1×1, or a 1×1 and a 2×2, or another 3×3. Here is where some additional theory must be invoked.
The null spaces Nk of the matrices (A − λI)^k are nested: N1 ⊆ N2 ⊆ N3, as depicted in Figure 1, where the vectors v1, …, v6 are basis vectors.
Figure 1 The nesting of the null spaces
The vectors v1, v2, v3 are eigenvectors, and form a basis for the eigenspace N1. The vectors v1, …, v5 form a basis for the subspace N2, and the vectors v1, …, v6 form a basis for the space N3, but these vectors are not yet the generalized eigenvectors. The vector v6 must be replaced with a vector w that lies in N3 but is not in N2. Once such a vector is found, two further basis vectors can be replaced with the generalized eigenvector (A − λI)w and with the eigenvector (A − λI)^2 w. The vectors w, (A − λI)w, and (A − λI)^2 w are then said to form a chain, with (A − λI)^2 w being the eigenvector, and (A − λI)w and w being the generalized eigenvectors.
If we could carry out these steps, we'd be in the state depicted in Figure 2.
Figure 2 The null spaces with the longest chain determined
Next, a basis vector is to be replaced with u, a vector in N2 but not in N1, and linearly independent of (A − λI)w. If such a u is found, then another basis vector is replaced with the generalized eigenvector u. The vectors (A − λI)u and u would form a second chain, with (A − λI)u as the eigenvector, and u as the generalized eigenvector.
Define the matrix by the Maple calculation
and note
The dimension of N1 is 3, and of N2, 5. However, the basis vectors Maple has chosen for N2 do not include the exact basis vectors chosen for N1.
We now come to the crucial step, finding w, a vector in N3 that is not in N2 (and consequently, not in N1 either). The examples in [1, 2] are simple enough that the authors can "guess" at the vector to be taken as w. What we will do is take an arbitrary vector, project it onto the 5-dimensional subspace N2, and take the orthogonal complement as w.
A general vector Z is
A matrix that projects onto N2 is
The orthogonal complement of the projection of Z onto N2 is then computed. This vector can be simplified by choosing the parameters in Z appropriately. The result is taken as w.
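The projection step is standard: if the columns of B form a basis for the subspace, the matrix B (B^T B)^(-1) B^T projects onto it, and subtracting the projection from a vector leaves the orthogonal complement. A toy SymPy example (my own, much smaller than the post's 6-dimensional setting):

```python
import sympy as sp

def complement_in(basis_cols, z):
    """Part of z orthogonal to the column space of the basis vectors:
    z minus its projection onto that subspace."""
    B = sp.Matrix.hstack(*basis_cols)
    proj = B * (B.T * B).inv() * B.T     # projection matrix onto col(B)
    return z - proj * z

# Toy example: the xy-plane inside R^3
e1 = sp.Matrix([1, 0, 0])
e2 = sp.Matrix([0, 1, 0])
z = sp.Matrix([1, 2, 3])
print(complement_in([e1, e2], z).T)  # -> Matrix([[0, 0, 3]])
```

By construction the result lies outside the subspace whenever z does, which is exactly what the search for w requires.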
The other two members of this chain are then
A general vector ZZ in N2 is a linear combination of the five vectors that span the null space of (A − λI)^2. We obtain this vector as
A vector in N2 that is not in N1 is the orthogonal complement of the projection of ZZ onto the space spanned by the eigenvectors spanning N1 and the vector (A − λI)w. This projection matrix is
The orthogonal complement of the projection of ZZ, taken as u, is then
Replace the corresponding basis vector with (A − λI)u, obtained as
The columns of the transition matrix P can be taken as the chain vectors and the remaining eigenvector. Hence, P is the matrix
Proof that this matrix indeed sends A to its Jordan form consists in the calculation P^(-1) A P = J.
The bases for the null spaces Nk are not unique. The columns of the matrix computed above provide one set of basis vectors, but the columns of the transition matrix generated by Maple, shown below, provide another.
I've therefore added to my to-do list the investigation into Maple's algorithm for determining an appropriate set of basis vectors that will support the Jordan form of a matrix.
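For comparison, other computer algebra systems expose a Jordan-form computation directly; here is a SymPy sketch on an assumed 3×3 example (not the 6×6 matrix of this post):

```python
import sympy as sp

# Build A from a known Jordan form J and a determinant-1 transition
# matrix P, then ask SymPy to recover the Jordan decomposition.
J = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
P = sp.Matrix([[1, 0, 1],
               [0, 1, 0],
               [1, 1, 2]])      # det(P) = 1
A = P * J * P.inv()

# jordan_form() returns a transition matrix and the Jordan form.
Pj, Jj = A.jordan_form()
print(sp.simplify(Pj * Jj * Pj.inv() - A))  # -> zero matrix
```

The transition matrix SymPy returns need not equal the P we started from, which echoes the non-uniqueness of the bases discussed above.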
Like most companies today, Maplesoft monitors its website traffic, including the traffic coming to MaplePrimes. This allows us to view statistical data such as how many total visits MaplePrimes gets, how many unique visitors it gets, what countries these visitors come from, how many questions are asked and answered, how many people read but never respond to posts, etc.
Recently one of our regular MaplePrimes users made the comment that MaplePrimes does not reach new Maple users. We found this comment interesting because our data and traffic numbers show a different trend. MaplePrimes gets unique visitors in the hundreds of thousands each year, and since its inception, it has welcomed unique visitors in the many millions. Based on these unique visitor numbers and the thousands of common searches specifically about Maple that people are doing, we can see that many of these unique visitors are in fact new Maple users looking for resources and support as they begin using Maple. Other visitors to MaplePrimes include people who use Google (or other search engines) to find an answer to a particular mathematics or engineering question, regardless of what mathematics software they choose to use, and Google points them to MaplePrimes. There are some popular posts that were written months, even years ago, that are still getting high visitor views today, showing the longevity of the information on MaplePrimes.
MaplePrimes gets the majority of its visible activity from a small number of extremely active members. In public user forums around the world, these types of members are given many names – power users, friendlies, evangelists. Every active public user forum has them. On MaplePrimes, it's this small number of active members that are highly visible. But, what our traffic data reveals is the silent majority. These people, many of them repeat visitors, are quietly reviewing the questions and answers that our evangelists are posting. The silent majority of MaplePrimes visitors are the readers; they are the quiet consumers of information. For every person that writes, comments on, or likes a post, there are thousands more that read it.
Here are a few more MaplePrimes traffic data points for your reference:
MaplePrimes is very international and draws people from all around the world. Here are the top 10 countries where the most MaplePrimes visitors come from:
USA
India
Canada
Germany
China
United Kingdom
Brazil
Australia
France
Denmark
Here are the top 5 keywords people are using in their searches on MaplePrimes:
Data from plot
Physics
Sprintf
Size of plot
Fractal
MaplePrimes is growing at a very fast rate: Traffic (visitors to the site) and membership size is growing at nearly double the pace it was last year. The total number of posts and questions this year is also much higher compared to the same timeframe last year.
Our top 5 MaplePrimes members have each visited MaplePrimes more than 1200 times and viewed a combined total of more than 10,000 pages (that is total page views, not unique page views). Our top 25 MaplePrimes members have visited at least 250 times each (many of them nearly 1000 times each) and our top 50 MaplePrimes members have visited a combined total of over 23,000 times, visiting nearly 200,000 pages. Thank you! We're glad you like it. :-)
The engineering design process involves numerous steps that allow the engineer to reach his/her final design objectives to the best of his/her ability. This process is akin to creating a fine sculpture or a great painting where different approaches are explored and tested, then either adopted or abandoned in favor of better or more developed and fine-tuned ones. Consider the x-ray of an oil painting. X-rays of the works of master artists reveal the thought and creative processes of their minds as they complete the work. I am sure that some colleagues may disagree with the comparison of our modern engineering designs to art masterpieces, but let me ask you to explore the innovations and their brilliant forms, and maybe you will agree with me even a little bit.
Design Process
Successful design engineers must have the very best craft, knowledge and experience to generate work that is truly worthy of being incorporated in products that sell in the tens, or even hundreds, of millions. This is presently achieved by having cross-functional teams of engineers work on a design, allowing cross-checking and several rounds of reviews, followed by multiple prototypes and exhaustive preproduction testing until the team reaches a collective conclusion that "we have a design." This is then followed by the final design review and release of the product. This necessary and vital approach is clearly a time-consuming and costly process. Over the years I have asked myself several times, "Did I explore every single detail of the design fully?" "Am I sure that this is the very best I can do?" And more importantly, "Does every component have the most fine-tuned value to render the best performance possible?" And invariably I am left with a bit of doubt. That brings me to a tool that has helped me in this regard.
A Great New Tool
I have used Maple for over 25 years to dig deeply into my designs and understand the interplay between a given set of parameters and the performance of the particular circuit I am working on. This has always given me a complete view of the problem at hand and solidly pointed me in the direction of the best possible solutions.
In recent years, a new feature called "Explore" has been added to Maple. This amazing feature allows the engineer/researcher to peer very deeply into any formula and explore the interaction of EVERY variable in the formula.
Take for example the losses in the control MOSFET in a synchronous buck converter. In order to minimize these losses and maximize the power conversion efficiency, the most suitable MOSFET must be selected. With thousands of these devices being available in the market, a dozen of them are considered very close to the best at any given time. The real question then is, which one is really the very best amongst all of them?
There are two possible approaches: one, build an application prototype, test a random sample of each, and choose the one that gives you the best efficiency; or two, use an accurate mathematical model to calculate the losses of each and choose the best. The first approach lacks the variability of each parameter due to the six-sigma statistical distribution, where it is next to impossible to get a device lying on the outer limits of the distribution. That leaves the mathematical model approach. If you take this route, you can have built-in tolerances in the equations to accommodate all the variabilities and use a simplified equation for the control MOSFET losses (clearly you can use a very detailed model should you choose to) to explore these losses. Luckily, you can explore the losses using the Explore function in Maple.
The figure below shows a three-dimensional plot, plus five other variables in the formula that the user can change using sliders that cover the range of values of interest, including minima and maxima, while observing in real time the effects of the change on the power loss.
This means that by changing the values of any set of variables, you can observe their effect on the function. To put it simply, this single feature helps you replace dozens of plots with just one, saving you precious time and cost in fine-tuning your design. In my opinion, this is equivalent to an eight-dimensional/axes plot.
I have used this amazing feature over the last few weeks and was delighted at how simple it is to use and how much it simplifies the study of my approach and my component selection, in record time!
This October 21st, Maplesoft will be hosting a full-production, live streaming webinar featuring Dr. Robert Lopez, Emeritus Professor and Maple Fellow. You might have caught Dr. Lopez's Clickable Calculus webinar series before, but this webinar is your chance to meet the man behind the voice and watch him use Clickable Math techniques live!
In this webinar, Dr. Lopez will present examples of what "resequencing concepts and skills" looks like when implemented with Maple's point-and-click, syntax-free paradigm. He will demonstrate how Maple can be used not only to elucidate a concept, but also to illustrate and implement the manipulations that the student must ultimately master.
Before I begin, a quick note that the content below was primarily created by one of our summer interns, Pia, with guidance and advice from me.
On the other hand, Carl Love answered this enquiry using more direct and simpler code:
simplify(x1=a-y1-d*y2, {a-y2-d*y1= x2, 1-d^2= b, a-a*d= c});
Let's talk more about the expand, algsubs, subs, and simplify commands
First let's take a look at the method nm used to solve the problem using the commands expand, subs, solve and algsubs.
The expand command, expand(expr, expr1, expr2, ..., exprn), distributes products over sums. This is done for all polynomials. For quotients of polynomials, only sums in the numerator are expanded; products and powers are left alone.
The solve command, solve(equations, variables), solves one or more equations or inequalities for their unknowns.
The subs command, subs(x=a,expr), substitutes a for x in the expression expr.
The algsubs command, algsubs(a = b, f), performs an algebraic substitution, replacing occurrences of a with b in the expression f. It is a generalization of the subs command, which only handles syntactic substitution.
Let's tackle the Maple code written by nm step by step:
1) restart; The restart command is used to clear Maple's internal memory
2) eq1:=x1=a-y1-d*y2: eq2:=x2=a-y2-d*y1: The names eq1 and eq2 were assigned to the equations SY G provided.
3) z:=expand(subs(y2=solve(eq2,y2),eq1)): A new variable, z, was created, which will end up being x1 written in the terms SY G wanted.
solve(eq2,y2)
the solve command was used to solve the expression eq2 for the variable y2.
subs(y2=solve(eq2,y2),eq1)
The subs command was used to replace y2 in the expression eq1 with the value determined by the solve step.
expand(subs(y2=solve(eq2,y2),eq1))
The expand command was used to distribute products over sums. Note: this step served to ensure that the final output looked exactly how SY G wanted.
4) z:=algsubs((a-a*d)=c,z): First, nm equated a-a*d to c, so later the algsubs command could be applied to substitute the new variable c into the expression z.
5) algsubs((1-d^2)=b,z); Again, nm equated 1-d^2 to b, so later the algsubs command could be applied to substitute the new variable b into the expression z.
An alternate approach
Now let us check out Carl Love's approach. Carl Love uses the simplify command in conjunction with side relations.
The simplify command has many calling sequences, and one of them is simplify(expr, eqns), known as simplify/siderels. A simplification of expr with respect to the side relations eqns is performed. The result is an expression that is mathematically equivalent to expr but is in normal form with respect to the specified side relations. Basically, you are telling Maple to simplify the expression (expr) using the side relations (eqns) you gave it.
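Either route ends with the same rewritten equation, x1 = c + d*x2 - b*y1. As a quick sanity check, here is a dependency-free numeric verification of that identity, written in Python rather than Maple so it stands alone (the random test values are arbitrary):

```python
import random

# Numeric spot-check of the rewrite derived above: eliminating y2 from
#   x1 = a - y1 - d*y2   and   x2 = a - y2 - d*y1
# gives
#   x1 = c + d*x2 - b*y1,  with  c = a - a*d  and  b = 1 - d^2.
random.seed(1)
for _ in range(5):
    a, d, y1, y2 = (random.uniform(-10, 10) for _ in range(4))
    x1 = a - y1 - d * y2
    x2 = a - y2 - d * y1
    b, c = 1 - d ** 2, a - a * d
    assert abs(x1 - (c + d * x2 - b * y1)) < 1e-9
print("identity verified")
```

Since x1 and x2 are computed directly from the original equations, any failure of the assert would mean the elimination was wrong; passing on several random trials confirms the algebra behind both the expand/algsubs route and the simplify/siderels route.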
I hope that you find this useful. If there is a particular question on MaplePrimes that you would like further explained, please let me know.
We are happy to announce that Maple T.A. now supports the Learning Tools Interoperability® (LTI) standard, which means that Maple T.A. can be easily integrated with course management systems that support LTI. Maplesoft officially supports LTI connectivity with Canvas, Blackboard Learn™, Brightspace™, Moodle™, and Sakai.
Using the LTI standard, you can integrate Maple T.A. directly into your existing course management or learning management platforms. This allows for single-sign on in one central location and Maple T.A. assignment delivery and grade pushing right inside of your existing solutions.
If you would like to use the LTI connectivity feature, please contact Maplesoft Technical Support at support@maplesoft.com. They will provide the instructions and files you need to set up your connection, and answer any questions you may have about how the integration works on your platform.
Before I begin, a quick note that the content below was primarily created by one of our summer interns, Pia, with guidance and advice from me.
Eberch, a new Maple user, was interested in learning how to build his own Math Apps by looking at the source code of some of the already existing Math Apps that Maple offers.
Acer helpfully suggested that he look into the Startup Code of a Math App, in order to see definitions of procedures, modules, etc. He also recommended Eberch take a look at the "action code" that most of the Math Apps have, which consists of function calls to procedures or modules defined in the Startup Code. The Startup Code can be accessed from the Edit menu. The function calls can be seen by right-clicking on the relevant component and selecting Edit Click Action.
Acer's answer is correct and helpful. But for those just learning Maple, I wanted to provide some additional explanation.
Let's talk more about building your own Math Apps
Building your own Math Apps can seem like something that involves complicated code and rare commands, but Daniel Skoog demonstrates an easy and straightforward method in his latest webinar. He provides a clear definition of a Math App, a step-by-step approach to creating a Math App using the explore and quiz commands, and ways to share your applications with the Maple community. It is highly recommended that you watch the entire webinar if you would like to learn more about the core concepts of working with Maple, but you can find the Math App information starting at the 33:00 mark.
I hope that you find this useful. If there is a particular question on MaplePrimes that you would like further explained, please let me know. | 677.169 | 1 |
Mathematical Structures for Computer Science
ISBN: 9780716768647 / 071676864X
Edition: Sixth Edition
Pub Date: 2006
Publisher: W. H. Freeman
Summary: This edition offers a pedagogically rich and intuitive introduction to discrete mathematics structures. It meets the needs of computer science majors by being both comprehensive and accessible.
Judith L. Gersting is the author of Mathematical Structures for Computer Science, published 2006 under ISBN 9780716768647 and 071676864X. Two hundred twenty Mathematical Structures for Computer Science textbooks are available for sale on ValoreBooks.com, fifty-nine used from the cheapest price of $9.44, or buy new starting at $39.
The second part of the three-part Calculus series. Transcendental functions, techniques of integration, improper integrals, infinite series and power series, parametrized curves and polar coordinates. Prerequisite: MATH 201 with grade of C or higher.
Prerequisite(s) / Corequisite(s):
MATH 201 with grade of C or higher.
Course Rotation for Day Program:
Offered Fall and Spring.
Text(s):
Most current editions of the following:
Calculus
By Stewart (Brooks/Cole). Recommended.
Course Learning Outcomes
Demonstrate understanding of the calculus of logarithmic and exponential functions.
Demonstrate understanding of the calculus of inverse trigonometric functions.
Analyze indeterminate forms and apply L'Hospital's rule to evaluate limits of such forms.
Explore geometric applications of integration, such as arc length and the area of a surface of revolution, as well as applications to physics, engineering, economics, and biology.
Apply basic calculus ideas to parametric and polar curves to determine the arc length, surface area of revolution, and other geometric characteristics.
Use polar coordinates to plot points and regions in the plane.
Apply various tests for convergence to distinguish between absolutely and conditionally convergent and divergent numeric series.
Find the radius and the interval of convergence of power series.
Find Taylor and Maclaurin series for certain classes of functions.
Explore applications of Taylor series and polynomials to approximate functions and definite integrals, to evaluate limits, and solve initial value problems | 677.169 | 1 |
Introductory Algebra: A Just-in-Time Approach
Introductory Algebra: Everyday Explorations
Summary
Kaseberg/Cripe/Wildman's respected INTRODUCTORY ALGEBRA is known for an informal, interactive style that makes algebra more accessible to students while maintaining a high level of mathematical accuracy. This new edition introduces two new co-authors, Greg Cripe and Peter Wildman. The three authors have created a new textbook that introduces new pedagogy to teach students how to be better prepared to succeed in math and then life by strengthening their ability to solve critical-thinking problems. This text's popularity is attributable to the author's use of guided discovery, explorations, and problem solving, all of which help students learn new concepts and strengthen their skill retention. | 677.169 | 1 |
Trigonometry (with BCA Tutorial and InfoTrac) / Edition 1
Overview: This trigonometry text has been designed specifically to help students learn to think mathematically and to develop true problem-solving skills. Patient, clear, and accurate, this text consistently illustrates how useful and applicable trigonometry is to real life.
What is a volume? The word usually refers to the amount of three-dimensional space that an object occupies. It is commonly measured in cubic centimetres (cm3) or cubic metres (m3).
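Because the units are cubic, converting between cm3 and m3 scales by the cube of the length conversion factor. A one-line check (shown in Python purely for illustration):

```python
# 1 m = 100 cm, so one cubic metre holds 100**3 cubic centimetres.
cm_per_m = 100
cm3_per_m3 = cm_per_m ** 3
print(cm3_per_m3)  # 1000000
```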
A closely related idea is capacity; this is used to specify the volume of liquid or gas that a container can actually hold. You might refer to the volume of a brick and the capacity of a jug – but not vice versa. Note that a container with a particular volume will not nec
Drawing circles freehand often produces very uncircle-like shapes! If you need a reasonable circle, you could draw round a circular object, but if you need to draw an accurate circle with a particular radius, you will need a pair of compasses and a ruler. Using the ruler, set the distance between the point of the compasses and the tip of the pencil at the desired radius; place the point on the paper at the position where you want the centre of the circle to be and carefully rotate the compass
The term learning file is used to mean a record of your work in some sort of filing system. This may consist of a file, a box, note books, a filing cabinet, files on your computer or something else that suits you. Whatever the content, you will certainly need some way of organizing your written notes so that they stay together and in order.
Section 3 is an audio section. We begin by defining the terms group, Abelian group and order of a group. We then demonstrate how to check the group axioms, and we extend the examples of groups that we use to include groups of numbers – the modular arithmetics, the integers and the real numbers.
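For a finite example such as the modular arithmetics mentioned above, the group axioms can even be checked exhaustively. A brute-force sketch in Python (the choice n = 5 is arbitrary):

```python
# Check the group axioms for (Z_n, addition mod n) by brute force.
n = 5
G = range(n)
op = lambda x, y: (x + y) % n

closed = all(op(x, y) in G for x in G for y in G)
assoc = all(op(op(x, y), z) == op(x, op(y, z))
            for x in G for y in G for z in G)
identity = next(e for e in G if all(op(e, x) == x == op(x, e) for x in G))
inverses = all(any(op(x, y) == identity for y in G) for x in G)
abelian = all(op(x, y) == op(y, x) for x in G for y in G)

print(closed, assoc, identity, inverses, abelian)  # True True 0 True True
```

The final check shows the group is Abelian; swapping in a different operation (say, multiplication mod n on all of Z_n, where 0 has no inverse) makes the inverse check fail, which is exactly the kind of axiom-checking the section describes.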
In Section 3 we examine the language used to express mathematical statements and proofs, and discuss various techniques for proving that a mathematical statement is true. These techniques include direct proof, proof by mathematical induction, proof by contradiction and proof by contraposition. We also illustrate the use of counter-examples to show that a statement is false. | 677.169 | 1 |
Guesstimation is a book that unlocks the power of approximation--it's popular mathematics rounded to the nearest power of ten! The ability to estimate is an important skill in daily life. More and more leading businesses today use estimation questions in interviews to test applicants' abilities to think on their feet. Guesstimation...
Too often math gets a bad rap, characterized as dry and difficult. But, Alex Bellos says, "math can be inspiring and brilliantly creative. Mathematical thought is one of the great achievements of the human race, and arguably the foundation of all human progress. The world of mathematics is a remarkable place." Bellos has traveled all around the globe...
Written for the 2012 syllabus, this stretching, comprehensive text will challenge your HL students and prepare them to achieve strong exam results. Fully supporting the new Exploration, digital support includes an eBook, interactive worked solutions, GDC support, extension opportunities and practice exam-style papers.
Computational science is an exciting new field at the intersection of the sciences, computer science, and mathematics because much scientific investigation now involves computing as well as theory and experiment. This textbook provides students with a versatile and accessible introduction to the subject. It assumes only a background in high school...
A book that explains how negative attitudes toward math get established in the brain and what teachers can do to turn those attitudes around. Includes more than 50 strategies (suitable for any grade level) that give students a math attitude makeover, reduce mistake anxiety, and relate math to students' interests and goals.
Blackline master math activity book jam packed with adventures
Developer information
Description
1. Solves equations with variables
2. Very simple to use
3. Stores your final calculation, so you can always return and calculate
4. Custom built keyboard enables you to insert data in the most convenient way
INTRODUCTION
Who is this website for?
Abstractmath.org is designed for people who are beginning the study of some part of abstract math. This includes:
University math majors or beginning grad students taking math courses that require working with abstract definitions and understanding and creating proofs.
Teachers of university courses like those just described.
Professionals who need to learn math (in any one of many fields) that is described in terms of mathematical properties with no reference to applications.
Anyone who is curious about advanced math!
Abstract math is my name for what is often called "higher math" or "pure math".
Abstract math provides the conceptual background and theory that justifies the way math is used in applications.
Abstract math requires conceptual reasoning about abstract ideas (as well as manipulating symbols), in particular on understanding and constructing proofs.
Abstract math is mathematics for its own sake. In doing abstract math, you state theorems and prove them mostly in the context of mathematical ideas rather than applications or ideas from other fields.
When you first meet up with abstract math, you may find it hard to understand or even bizarre. If you need to know some piece of abstract math, you may find that texts in the subject appear unmotivated and full of mysterious chains of reasoning. This happens to many people who are quite good at solving trig, derivative and integral problems.
Overview of the site
This website is a multiple-entry site with many cross-links. This overview will give you a start on finding out what is on it.
Diagnostic examples: These examples illustrate some of the many kinds of difficulty people meet with when studying and doing abstract math. Each example gives links to the relevant sections of the website.
Gyre&Gimble: A blog that discusses new ideas I have about abstract math and language, some specifically related to abstractmath.org.
Find a Westchester Statistics
With the right perspective, it can be a mind-broadening experience. Calculus is the mathematics of change and variation, which is fundamental to Physics, Chemistry, Biology, and Engineering. It is also fundamental to a deeper understanding of finance, marketing, and economics.
ALEX Lesson Plans
Title: Predict the Future?
Description:
Students will use data collected and a "best-fit line" to make predictions for the future. The example the students will be working on for this lesson will demonstrate an exponential regression.
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
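For context on what an exponential-regression lesson computes: a common approach fits y = a*b^x by running an ordinary least-squares line through ln(y), which is essentially what a graphing calculator's ExpReg does. A self-contained sketch in Python (the data points are made up, roughly following y = 2*3^x):

```python
import math

# Fit y = a * b**x by linear least squares on ln(y).
xs = [0, 1, 2, 3, 4]
ys = [2.0, 6.1, 17.9, 54.2, 162.0]      # made-up data, roughly 2 * 3**x

lys = [math.log(y) for y in ys]
n = len(xs)
mx, my = sum(xs) / n, sum(lys) / n
slope = (sum((x - mx) * (ly - my) for x, ly in zip(xs, lys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

a, b = math.exp(intercept), math.exp(slope)
print(round(a, 2), round(b, 2))          # close to 2 and 3

# Use the fitted model to "predict the future":
predict = lambda x: a * b ** x
print(round(predict(5), 1))
```

The log transform turns the exponential model into a straight line, so any line-of-best-fit tool the students already know can be reused; exponentiating the slope and intercept recovers b and a.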
Title: Investigating Parabolas in Standard Form
Description:
Students...
Standard(s): [MA2015] ALT (9-12) 34 [F-BF3]
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: I'm Lovin' It: Finding Areas Between Curves
Description:
Students...
Subject: Mathematics (9 - 12)
Title: "Woody Sine"
Description:
The...
Standard(s): [MA2015] PRE (9-12) 30: (+) Use the unit circle to explain symmetry (odd and even) and periodicity of trigonometric functions. [F-TF4]
Subject: Mathematics (9 - 12)
Title: Technology for Displaying Trigonometric Graph Behavior: Sine and Cosine
Description:
After...
Standard(s): [MA2015] PRE (9-12) 29: (+) Use special triangles to determine geometrically the values of sine, cosine, and tangent for π/3, π/4, and π/6, and use the unit circle to express the values of sine, cosine, and tangent for π - x, π + x, and 2π - x in terms of their values for x, where x is any real number. [F-TF3]
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: Exponential Growth and Decay
Description:
This...
Standard(s): [MA2015] PRE (9-12) 25: Compare effects of parameter changes on graphs of transcendental functions. (Alabama)
Subject: Mathematics (9 - 12)
Title: You Mean ANYTHING To The Zero Power Is One?
Description:
This lesson is a technology-based project to reinforce concepts related to the Exponential Function. It can be used in conjunction with any textbook practice set. Construction of computer models of several Exponential Functions will promote meaningful learning rather than memorization.
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Thinkfinity Lesson Plans
Title: Exact Ratio
Description:
This...
Standard(s): [MA2015] AL1 (9-12) 35: Write arithmetic and geometric sequences both recursively and with an explicit formula, use them to model situations, and translate between the two forms.* [F-BF2]
Subject: Mathematics
Title: Whelk-Come to Mathematics
Description:
In...
Standard(s): [MA2015] AM1 (9-12) 12: Calculate the limit of a sequence, of a function, and of an infinite series. (Alabama)
Subject: Mathematics, Science
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Title: Northwestern Crows
Standard(s): [MA2015] PRE (9-12) 50: (+) Define a random variable for a quantity of interest by assigning a numerical value to each event in a sample space; graph the corresponding probability distribution using the same graphical displays as for data distributions. [S-MD1]
Subject: Mathematics, Science
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Title: Conduct an Experiment
Standard(s): [MA2015] AM1 (9-12) 12: Calculate the limit of a sequence, of a function, and of an infinite series. (Alabama)
Subject: Mathematics, Science
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Title: Do You Hear What I Hear?
Description:
In this lesson, from Illuminations, students explore the dynamics of a sound wave. Students use an interactive Java applet to view the effects of changing the initial string displacement and the initial tension.
Standard(s): [MA2015] PRE (9-12) 26: Determine the amplitude, period, phase shift, domain, and range of trigonometric functions and their inverses. (Alabama)
Subject: Mathematics, Science
Thinkfinity Partner: Illuminations
Title: The Effects of Outliers
Description:
This...
Standard(s): [MA2015] PRE (9-12) 44: Understand statistics as a process for making inferences about population parameters based on a random sample from that population. [S-IC1]
Subject: Mathematics
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Title: Traveling Distances
Description:
In...
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
ALEX Learning Assets
Title: Graphing Periodic Functions
Digital Tool:
Height of Waist - Graphing Stories in 15 seconds Web Address URL:
Standard(s):
Digital Tool Description: Graphing Stories is a website that provides short videos of real world activities that can be translated to graphs. The Height of Waist video models a person in a swing. The graph should be a periodic function.
Web Resources
Interactives/Games
Learning Activities
Thinkfinity Learning Activities
Title: Isosceles Triangle Investigation
Description:
This student interactive, from an Illuminations lesson, allows students to investigate the relationship between the area of the triangle and the length of its base.
Standard(s): [MA2015] ALT (9-12) 33: Write a function that describes a relationship between two quantities.* [F-BF1]
Subject: Mathematics
Thinkfinity Partner: Illuminations
Grade Span: 9
This book is the leading title in a series targeted at the average A Level mathematics student which aims to tackle the basic ideas and misconceptions associated with this subject. The inclusion of stretch and challenge material caters for the most able students, and lots of regular exercises and exam questions provide plenty of practice. These books have an innovative 'flexispread' structure devised to motivate students and include real life links to inspire students. A background knowledge chapter at the beginning of each book helps to bridge the gap between GCSE and A level study, tackling the retention issue of AS students. This Core book contains a free students' CD-ROM.
Up until recently I used to use the Heinemann offerings to teach the C1 and C2 modules. While these were OK and the results obtained were good, they do not come close in comparison to this book, which covers both C1 and C2.
The explanations are so much better. More time is taken to explain concepts and there are more worked examples. The format for each chapter is an introductory section stating what you are expected to know. This is followed by a number of sections that include the required theory, worked examples, exercises and open-ended investigations. At the end of the chapter there is a review of all the work studied and a concise summary.
The pathetically easy material contained in the first chapter of the Heinemann C1 book has been removed. Instead a CD-ROM is included with the book. This contains a bridging course (pdf) containing elementary material, an e-book (pdf) containing the entire contents of the book and some very useful powerpoint presentations. These materials are excellent and seem to be specifically designed for the teacher. Each pdf page is in fact a double-page spread of a book or bridging course. These could be used in conjunction with an overhead projector or interactive whiteboard. If the teacher wants something interactive, then the powerpoint slides could be used in the same manner. For a student buying the book, they can of course be used at home as well.
All-in-all this represents the best preparation for the C1 and C2 exams. There is incremental development of topics in very easy stages, leading up to exam-level questions. There are also two specimen exam papers and a complete set of answers to all questions at the back of the book. With the addition of materials on the CD-ROM it represents very good value for money.
I highly recommend this book.
I have been using this book with my students. The presentation is good, the explanations effective, and the questions excellent. All in all, it has the making of a five star book. BUT there is a hiccough, and it's one which could cause a lot of annoyance and wasted time for students. The answers in the back contain far too many errors for comfort.
I remember reading this criticism before my purchase, and I can now confirm that in my copy (purchased in 2011) there are around 60 errors in the answers - and I mean actual errors, not simple formatting or precision deficiencies. Now if OUP were to publish an errata list, this would not be such a major problem. But they don't, and judging from my correspondence with them, they have no intention of doing so. For this reason, their book only justifies 4 stars at best.
Pearson's Edexcel Maths series directed by Keith Pledger is also excellent - certainly more colourful and glossy, and with a CD containing not just a copy of every page but a detailed analytical answer to every question. It's a good idea to cover both series, starting with this one, as the Pledger series is a little more demanding.
Having taught for nearly two decades, I was asked to teach a friend's son. Having the freedom to choose a textbook that I had never used before, I purchased Peter Hinds' book. It contains excellent explanations, good questions linked directly to Edexcel exam questions, and a few more than I have seen in the usual school textbooks. All in all, an excellent aid to learning C1 and C2.
I credit this book alone in helping me get an A in the C1 and C2 unit exams. It has tons of clearly laid out examples and many practice questions, which makes the content easy to grasp. Get this book; you will not regret it. The CD-ROM is also excellent; this is a fantastic series that is sure to help those doing an A level in Maths.
This book is filled with questions and has the answers in the back. I don't feel it explains how to do some things properly; it sometimes shows an example, which can help. Use this book mainly for practice.
This book may be as good as the other reviews have said. I do, however, object strongly when the Amazon website says that it has a hardcover, because the book I was sent by Amazon definitely did not have a hardcover.
A Modern Introduction to Differential Equations, Second Edition, provides an introduction to the basic concepts of differential equations. The book begins by introducing the basic concepts of differential equations, focusing on the analytical, graphical, and numerical aspects of first-order equations, including slope fields and phase lines. The discussions...
Non-Linear Differential Equations covers the general theorems, principles, solutions, and applications of non-linear differential equations. This book is divided into nine chapters. The first chapters contain detailed analysis of the phase portrait of two-dimensional autonomous systems. The succeeding chapters deal with the qualitative methods for...
The papers in this book originate from lectures which were held at the "Vienna Workshop on Nonlinear Models and Analysis", May 20-24, 2002. They represent a cross-section of the research field Applied Nonlinear Analysis, with emphasis on free boundaries, fully nonlinear partial differential equations, variational methods, quasilinear partial differential...
Nonlinear Differential Equations: Invariance, Stability, and Bifurcation presents the developments in the qualitative theory of nonlinear differential equations. This book discusses the exchange of mathematical ideas in stability and bifurcation theory. Organized into 26 chapters, this book begins with an overview of the initial value problem for...
Many problems in partial differential equations which arise from physical models can be considered as ordinary differential equations in appropriate infinite dimensional spaces, for which elegant theories and powerful techniques have recently been developed. This book gives a detailed account of the current state of the theory of nonlinear differential...
An ideal companion to the student textbook Nonlinear Ordinary Differential Equations, 4th Edition (OUP, 2007), this text contains over 500 problems and solutions in nonlinear differential equations, many of which can be adapted for independent coursework and self-study.
Nonlinear Systems and Applications: An International Conference contains the proceedings of an International Conference on Nonlinear Systems and Applications held at the University of Texas at Arlington, on July 19-23, 1976. The conference provided a forum for reviewing advances in nonlinear systems and their applications and tackled a wide array of...
Olympiad mathematics is not a collection of techniques of solving mathematical problems but a system for advancing mathematical education. This book is based on the lecture notes of the mathematical Olympiad training courses conducted by the author in Singapore. Its scope and depth not only covers and exceeds the usual syllabus, but introduces a variety... more...
This affordable reprint of a classic graduate textbook, originally published in 1971, places emphasis on applications to theoretical mechanics, mathematical physics, and applied mathematics and presents a variety of techniques with extensive examples. more...
Linear Integral Equations: Theory and Technique is an 11-chapter text that covers the theoretical and methodological aspects of linear integral equations. After a brief overview of the fundamentals of the equations, this book goes on dealing with specific integral equations with separable kernels and a method of successive approximations. The next...
The purpose of the volume is to provide a support textbook for a second lecture course on Mathematical Analysis. The contents are organised to suit, in particular, students of Engineering, Computer Science and Physics, all areas in which mathematical tools play a crucial role. The basic notions and methods concerning integral and differential calculus...
The International Mathematical Olympiad (IMO) is a competition for high school students. China has taken part in the IMO 21 times since 1985 and has won the top ranking for countries 14 times, with a multitude of golds for individual students. The six students China has sent every year were selected from 20 to 30 students among approximately 130 students...
...Mathematical Olympiad has been held in Poland every year since...
Students and research workers in mathematics, physics, engineering and other sciences will find this compilation invaluable. All the information included is practical; rarely used results are excluded. Great care has been taken to present all results concisely and clearly. Excellent to keep as a handy reference! If you don't have a lot of time...
A stimulating and rigorous approach to Mathematics that goes beyond the requirements of the National Curriculum for Year 6 pupils (aged 10 and above) and lays the foundation for success at Common Entrance and other independent entrance exams at 11+. - Plenty of worked examples to demonstrate method. - Develops key skills with clear explanations and...
This collection of activities is intended to provide middle and high school Algebra I students with a set of data collection investigations that integrate mathematics and science and promote mathemati...
This mathlet allows you to solve simple linear equations through the use of a balance beam. Unit blocks (representing 1s) and X-boxes (for the unknown, X), are placed on the pans of a balance beam.
Algebra Concepts is a tool for introducing many of the difficult concepts that are necessary for success in higher level math courses. This program includes a special feature, the Algebra Tool Kit, wh...
Algebra Concepts is an interactive learning system designed to provide instruction in mathematics at the 7th grade enrichment through adult levels. The instructional goals for Algebra Concepts include...
Students play a generalized version of connect four, gaining the chance to place a piece on the board by solving an algebraic equation. Parameters: Level of difficulty of equations to solve and type o...
An algebra practice program for anyone working on simplifying expressions and solving equations. Create your own sets of problems to work through in the equation editor, and have them appear on all of...
This program introduces students to fundamental concepts in algebra. Using two or three weighing scales, students must determine the weight of a specific object or group of objects. The program provid...
This applet allows students to play with the concept of force and mass, and provides them with a great introduction to algebra as they attempt to formulate rules for balancing a teeter-totter and for...
A TI-Nspire file that students can use to reflect on the "Doing it Wrong" Activity from the Math Forum's new Problem Solving and Communication Activity Series. This is designed to be used with the...
A TI-Nspire file that students can use to reflect on the "Noticings/Wonderings" Activity from the Math Forum's new Problem Solving and Communication Activity Series. This is designed to be used wit...
Scope of use
The first textbook, Saxon Algebra, was published in 1979 by John Saxon for junior college students. In 1980, a high school version, Saxon Algebra 1, was published. By 1993, Saxon Publishers
had developed programs for kindergarten through high school; Saxon joined Harcourt in 2004. School districts in all 50 states use Saxon products.
Teaching
The Saxon Math curriculum for each grade level or course consists of at least 120 daily lessons and 12 activity-based investigations.
A daily lesson consists of learning a new mathematical concept, working on practice problems relating to that lesson, and solving a number of problems that include the current and previous
material. This daily cycle is interrupted for tests and additional topics. Some versions of the curriculum include a teacher's edition with support and options for differentiated instruction.
Cost
Individual copies of the student and teacher editions of the Saxon Algebra 1 textbook cost $70.95 and $103.75, respectively. Other available products include practice guides, manipulatives, and teaching materials, ranging from $8.75 for a student edition practice workbook to $534.40 for a manipulatives kit. | 677.169 | 1 |
The Content Graph helps explain how content behaves in an online environment. Understanding that behavior is essential for creating effective social marketing strategies. Knowingly or not, we are all acting like media companies, broadcasting our ideas, attitudes, beliefs, and opinions, and sharing intimate details of our lives with an ever growing circle of contacts. Six degrees of separation are condensing down as our personal reputations become knowable in a global village that is getting smaller every day, as our own communities expand beyond the ordinary boundaries that used to define family and friends. This means that we all need media training, and lots of it. The distinction between media professional and amateur continues to blur. Communications technologies have forever changed how human beings collect and share knowledge and information. Understanding the implications of these changes is crucial to successfully living and working in the twenty-first century.
Tired of teaching coordinate graphing the same old way? Students make pictures while practicing their coordinate graphing skills. Students will know when they make a mistake and will be able to self-correct. This resource book consists of differentiated coordinate graphs of holidays and the four seasons, graphing paper, and full-size pictures that can be used as an overlay so that the teacher can check a student's work easily and quickly.
This book was created from over 20 years of experience teaching algebra to kids who had trouble getting it. It is filled with exercises that teach the California Algebra standards. To help kids grasp the material more easily, a lot of graphing is done and interpreted to build familiarity.
This ebook is available for the following devices:
iPad
Windows
Mac
Sony Reader
Cool-er Reader
Nook
Kobo Reader
iRiver Story
One of the landmarks in the history of mathematics is the proof of the nonexistence of algorithms based solely on radicals and elementary arithmetic operations (addition, subtraction, multiplication, and division) for solutions of general algebraic equations of degrees higher than four. This proof by the French mathematician Evariste Galois in the early nineteenth century used the then novel concept of the permutation symmetry of the roots of algebraic equations and led to the invention of group theory, an area of mathematics now nearly two centuries old that has had extensive applications in the physical sciences in recent decades. The radical-based algorithms for solutions of general algebraic equations of degrees 2 (quadratic equations), 3 (cubic equations), and 4 (quartic equations) have been well known for a number of centuries. The quadratic equation algorithm uses a single square root, the cubic equation algorithm uses a square root inside a cube root, and the quartic equation algorithm combines the cubic and quadratic equation algorithms with no new features. The details of the formulas for these equations of degree d (d = 2, 3, 4) relate to the properties of the corresponding symmetric groups S_d, which are isomorphic to the symmetries of the equilateral triangle for d = 3 and the regular tetrahedron for d = 4.
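The degree-2 and degree-3 radical formulas described above can be written out explicitly; the following standard forms are our illustration, not reproduced from the book:

```latex
% Quadratic ax^2 + bx + c = 0: a single square root suffices.
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}

% Depressed cubic t^3 + pt + q = 0 (Cardano's formula): a square root
% nested inside cube roots, matching the structure noted in the text.
t = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}
  + \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}
```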
Business Math, Tenth Edition unlocks the world of math by showing how it is used in the business world. Written in a conversational style, the book covers essential topics such as banking, interest, insurance, taxes, depreciation, inventory, and financial statements. It carefully explains common business practices such as markup, markdown, and cash discounts, showing students how these tools work in small business or personal finance. The authors encourage self-starters from the beginning, with a review of basic math, annotated examples, stop-and-check exercises, skill builders, and application exercises. This edition includes updated problem sets, new trends and laws, a revised financial statements chapter, and the one-of-a-kind MyMathLab website.
Diablo ACT Math - Violet C.
...Integral calculus finds a quantity when the rate of change is known. Differential calculus determines the rate of change of a quantity. I tutored all of the above concepts and their applications to college students. - Fassil B.
Algebra 1 deals mostly with linear functions. Algebra 2 is a more advanced, more complex version of algebra 1. Here we get more involved with non-linear functions as well as imaginary and complex numbers.
The Beginner's Guide to Mathematica, Version 2 / Edition 1
Paperback
Temporarily out of stock online.
Overview
In 40 concise chapters, authors Gray and Glynn answer the most commonly asked questions about Mathematica quickly and easily. The presentation is informal, with just enough detail to smooth your way. Each chapter is self-contained, so that the book may be read from beginning to end as a tutorial, or be used later as a reference.
Written by a Mathematica insider and a prominent mathematics educator, this guide should be the first place to look when learning how to use a fascinating and powerful computational aid. | 677.169 | 1 |
The Post-Calculus Series Mathematics Exam
The B student in a Bachelor's Program
24% Rationality and 19% Creativity!
You are a solid student. You work hard and know a good deal about the foundations of math. You would likely be a great high school teacher or financial analyst. You are still unsure of some of the more abstract topics in math, though.
Mathematicians Before 1700 - MAT-912
The stories behind mathematical discoveries are fascinating but rarely told. When students learn how persons like themselves have discovered and shaped mathematics, their interest and motivation grows. This course examines the lives and work of great mathematicians who lived before 1700. It is designed to help teachers of grades 3-12 show the human dimension of mathematics.
Testimonial
"The instructors respond promptly and personally. The courses are challenging but engaging and beneficial. The lesson plans are helpful in providing new ideas for classroom teaching and different purposes for the classes." | 677.169 | 1 |
Fundamentals of Mathematics
A textbook that covers the traditional topics studied in a modern prealgebra course, as well as topics of estimation, elementary analytic geometry, and introductory algebra. It is intended for students who (1) have had a previous course in prealgebra, (2) wish to meet the prerequisite of a higher level course such as elementary algebra, and (3) need to review fundamental mathematical concepts and techniques.
Geometry
This is a free online course offered by the Saylor Foundation. "Everything is numbers." This phrase was uttered by the lead character, Dr. Charlie Epps, on the hit television show "NUMB3RS." If everything has a mathematical underpinning, then it follows that everything is somehow mathematically connected, even if it is only in some odd, "six degrees of separation (or Kevin Bacon)" kind of way. Geometry is the study of space (for now, mainly two-dimensional, with some three-dimensional thrown in) and the relationships of objects contained inside. It is one of the more relatable math courses, because it often answers that age-old question, "When am I ever going to use this in real life?" Look around you right now. Do you see any triangles? Can you spot any circles? Do you see any books that look like they are twice the size of other books? Does your wall have paint on it? In geometry, you will explore the objects that make up our universe. Most people never give a second thought to how things are constructed, but there are geometric rules at play. Most people never think twice about a rocket launch, but if that rocket is not launched at an exact angle, it will miss its target. A football field has to be measured out to be a rectangle; if you used another shape, such as a trapezoid, that would give an unfair advantage to one team, because that one team would have more space to work with. In this course, you will study the relationships between lines and angles. Have you ever looked at a street map? Believe it or not, there is a lot of geometry on a map, as you will see from this course. You will learn to calculate how much space an object covers, which is useful if you ever have to, say, buy some paint. You will learn to determine how much space is inside of a three-dimensional object, which is useful for those times you are trying to fit four suitcases, three kids, two adults, and a dog into the back of your vehicle. These are just some of the topics you will be learning. As you will quickly see, everything is not just numbers; it is also relationships. Even nature itself knows this. What did the little acorn say when it grew up? "Gee, I'm a tree!"
How to Think Like a Computer Scientist - Learning with Python
"When I teach computer science courses, I want to cover important concepts in addition to making the material interesting and engaging to students. Unfortunately, there is a tendency for introductory programming courses to focus far too much attention on mathematical abstraction and for students to become frustrated with annoying problems related to low-level details of syntax, compilation, and the enforcement of seemingly arcane rules. Although such abstraction and formalism is important to professional software engineers and students who plan to continue their study of computer science, taking such an approach in an introductory course mostly succeeds in making computer science boring. When I teach a course, I don't want to have a room of uninspired students. I would much rather see them trying to solve interesting problems by exploring different ideas, taking unconventional approaches, breaking the rules, and learning from their mistakes. In doing so, I don't want to waste half of the semester trying to sort out obscure syntax problems, unintelligible compiler error messages, or the several hundred ways that a program might generate a general protection fault. One of the reasons why I like Python is that it provides a really nice balance between the practical and the conceptual. Since Python is interpreted, beginners can pick up the language and start doing neat things almost immediately without getting lost in the problems of compilation and linking. Furthermore, Python comes with a large library of modules that can be used to do all sorts of tasks ranging from web programming to graphics. Having such a practical focus is a great way to engage students, and it allows them to complete significant projects. However, Python can also serve as an excellent foundation for introducing important computer science concepts. Since Python fully supports procedures and classes, students can be gradually introduced to topics such as procedural abstraction, data structures, and object-oriented programming, all of which are applicable to later courses on Java or C++. Python even borrows a number of features from functional programming languages and can be used to introduce concepts that would be covered in more detail in courses on Scheme and Lisp."
Introduction to Pitch Systems in Tonal Music
This series is one part of UC Irvine's Musicianship 15 ABC sequence for music majors. An understanding of music notation and basic musical terms is helpful but not required for these presentations. The math involved is basic. Pitch systems use mathematics to organize audible phenomena for creative expression. The cognitive processes we develop through exposure to music comprise a kind of applied mathematics; our emotional responses to musical nuance grow out of a largely unconscious mastery of the patterns and structures in music. This series of presentations covers the basic mathematics and cognitive phenomena found in the tonal system used in Western music and much of the music of the world. Over the course of several presentations we will explore basic concepts of pitch and frequency, the organizing rules of tonal systems, and the mathematical construction of basic scales and chords. The reasoning and purpose of equal temperament, the standard tuning system for tonal music, will be explored in this context. Presentations will include graphics and computer applications designed specifically to illustrate these concepts.
Introductory Statistics: Concepts, Models, and Applications
Abstract: The book, Introductory Statistics: Concepts, Models, and Applications, presented in the following pages represents over twenty years of experience in teaching the material contained therein. The high price of textbooks and a desire to customize course material for my own needs caused me to write this material. This Web text and its associated exercises are a continuing project. Check back often for updates.
Linear Methods of Applied Mathematics
This is a WWW textbook written by Evans M. Harrell II and James V. Herod, both of Georgia Tech. It is suitable for a first course on partial differential equations, Fourier series and special functions, and integral equations. Students are expected to have completed two years of calculus and an introduction to ordinary differential equations and vector spaces. For recommended 10-week and 15-week syllabuses, read the preface.
The National Council of Teachers of Mathematics
This is a great site that includes everything: information on the standards, lesson plans, interactive games and practice, and other activities. It is broken up by grade levels (elementary, middle and high school). You can also view their journals.
World Images Kiosk
The WorldImages database contains almost 50,000 images, is global in coverage and is not limited to art. WorldImages is accessible anywhere and its images can be freely used for non-profit educational purposes. The images can be located using many search techniques, and for convenience they are organized into some 440 portfolios. Portfolio categories include the following WorldImages sets: Art & Art History; Cultural & Social Interactions; History, Politics & Warfare; Science, Technology & Mathematics; Music, Drama & Literature; Natural World; People & Portraits; Religion, Myth & Magic; and Material Culture & Daily Life.
As a 30-year-old going back to school, remembering all this is hard... My 12-year-old loves it also; the new FL math books for her grade level even confuse the teacher. She and I are understanding at a better pace, and the easier explanations of how to do the problems are great. If you're looking to beef up your math skills, going back for the GED, or just trying to keep up with your kids in math, this book is great...
If you are in need of assistance with basic algebra, this is the book for you! The explanations are always detailed and presented in a step-by-step fashion. If you are a visual learner and need a little help, try this book. Each book also contains a pre-test, many problems for each type of problem, problems using everyday situations, and a post-test. Don't forget: the key to math is to practice the proper methods.
Number Power books are really well organized - sequential, small steps, good examples, and sufficient practice for most people. The answers in back allow users to be sure that they understand how to do the problems. I highly recommend all the books in this series. | 677.169 | 1 |
Alumni of our math program have been very successful. Our alumni profiles feature some of their positions, including Actuary, Medical Doctor, Lawyer, High School Teacher and Vice President of Information Management.
The five professors in our math department have a wide variety of mathematical interests, including game theory, mathematical modeling, statistics, chaos theory, geometry, knot theory, graph theory, and differential equations. They are also interested in interdisciplinary applications of math in fields such as political science, economics, education, biology, chemistry and physics.
Understanding Elementary Algebra for College Students / Edition 1
Hardcover
Temporarily out of stock online.
Overview
The text contains the same chapters as Understanding Intermediate Algebra, Third Edition, plus an additional chapter on sequences and series to meet specific state requirements or for those courses requesting additional material on sequences and series. | 677.169 | 1 |
Distance Learning Mathematics (Numeracy)
Course Description
If you want to develop your knowledge of English and Basic Maths for your own purposes or to improve your career prospects, then this is the course for you.
Many employers ask for job applicants to have English and maths.

This course will provide students with a greater knowledge and understanding of Advanced Mathematics and help to enhance your current skills. Learners can expect to explore a number of areas such as understanding numbers and formulae, using decimals, fractions, percentages and much more!
It is the ideal course for adults or young people who would like to develop their skills.
The Intermediate Mathematics Skills programme is ideal for adults or young people who would like to develop and enhance their skills. This course strives to help learners work towards Mathematics Functional Skills at Entry Level 1.
This course deals with a number of basic mathematical concepts which will be used during studies or at work. The course content includes: numbers, algebra, equations and coefficients, and using a calculator.
MathScore EduFighter is one of the best math games on the Internet today. You can start playing for free!
Ohio Math Standards - 12th Grade
MathScore aligns to the Ohio Math Standards for 12th Grade.
The standards appear below along with the MathScore topics that match. If you
click on a topic name, you will see sample problems at varying degrees of
difficulty that MathScore generated. When students use our program, the
difficulty of the problems will automatically adapt based on individual
performance, resulting in not only true differentiated instruction, but a
challenging game-like experience.
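The adaptive behavior described above — problem difficulty rising and falling with individual performance — can be sketched roughly as follows. This is our illustration of the general idea, not MathScore's actual algorithm:

```python
def next_difficulty(level, correct, min_level=1, max_level=10):
    """Step difficulty up after a correct answer, down after a miss,
    clamped to the allowed range."""
    step = 1 if correct else -1
    return max(min_level, min(max_level, level + step))

# A student who answers correctly climbs; a miss eases the problems back down.
level = 3
for correct in (True, True, False, True):
    level = next_difficulty(level, correct)
print(level)  # 5
```

Real systems weigh response time and streaks as well, but the clamp-and-step pattern captures how a session stays challenging without running away from the student.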
Number and Number Systems
Use Measurement Techniques and Tools
* Apply informal concepts of successive approximation, upper and lower bounds, and limits in measurement situations; e.g., measurement of some quantities, such as volume of a cone, can be determined by sequences of increasingly accurate approximations.
* Solve problems involving derived measurements; e.g., acceleration and pressure.
* Use radian measures in the solution of problems involving angular velocity and acceleration.
Use Patterns, Relations and Functions
* Analyze the behavior of arithmetic and geometric sequences and series as the number of terms increases.
* Translate between the numeric and symbolic form of a sequence or series.
* Describe and compare the characteristics of transcendental and periodic functions; e.g., general shape, number of roots, domain and range, asymptotic behavior, extrema, local and global behavior.
* Represent the inverse of a transcendental function symbolically.
Use Algebraic Representations
* Make arguments about mathematical properties using mathematical induction.
* Make mathematical arguments using the concepts of limit.
* Translate freely between polar and Cartesian coordinate systems.
* Compare estimates of the area under a curve over a bounded interval by partitioning the region with rectangles; e.g., make successive estimates using progressively smaller rectangles.
* Set up and solve systems of equations using matrices and graphs, with and without technology.
Analyze Change
* Use the concept of limit to find instantaneous rate of change for a point on a graph as the slope of a tangent at a point.
Statistical Methods
* Transform bivariate data so it can be modeled by a function; e.g., use logarithms to allow a nonlinear relationship to be modeled by a linear function.
* Apply the concept of a random variable to generate and interpret probability distributions, including binomial, normal and uniform.
* Describe the shape and find all summary statistics for a set of univariate data, and describe how a linear transformation affects shape, center and spread.
* Use sampling distributions as the basis for informal inference. | 677.169 | 1 |
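One standard above asks students to estimate the area under a curve by partitioning the region with progressively smaller rectangles. A brief sketch of that computation (our illustration, not part of the Ohio standards document):

```python
def riemann_area(f, a, b, n):
    """Estimate the area under f on [a, b] with n left-endpoint rectangles."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

# Successive estimates with smaller rectangles approach the exact area
# under f(x) = x**2 on [0, 1], which is 1/3.
for n in (10, 100, 1000):
    print(n, riemann_area(lambda x: x * x, 0.0, 1.0, n))
```

Each refinement shrinks the gap between the rectangle estimate and the true area, which is exactly the successive-approximation idea the standard describes.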
Free Trial
Trusted by more than 2,400 students
Thanks for helping me earn an A in physics! Physics has never been my strong point. In high school I failed both my physics finals, and in college I am forced to take physics as a requirement for my biology degree, so I really dreaded starting the physics sequence. I couldn't understand my professor during lecture or the answer keys that he posted, so I really needed supplementary homework help. I can say that your videos are 10X more effective than the professor's and TA's office hours that I have attended. Your videos are the best...even more so than Khan Academy's physics videos!
The videos were extremely helpful. You can play them over and over and pause them to review. The narrator explained the steps beautifully and provided details on performing the algebraic manipulations. Different colored pens made it easy to differentiate steps. My Physics teacher often moved through the material very fast in class. The videos allowed me to review areas I found difficult as many times as I needed. Giancoli Answers was a wonderful learning tool for understanding Physics.
Features
1,930 video solutions for all regular problems in Giancoli's 7th Edition and 1,681 solutions for most regular problems in the 6th Edition.
Final answer provided in text form for quick reference above each video, and formatted nicely as an equation, like $E=mc^2$. This is useful if you are in the library or have a slow internet connection.
Pen colors make the step-by-step solutions clear. Red is used to illustrate algebra steps and to substitute numeric values in the final step of a solution. When a solution switches to a new train of thought, a different pen color emphasizes the switch, so that solutions are very methodical and organized.
Solutions are classroom tested, and created by an experienced physics teacher.
Videos are delivered with a high performance content delivery network. No waiting for videos to load or buffer.
This package consists of the textbook plus an access kit for MyMathLab/MyStatLab. Elayn Martin-Gay firmly believes that every student can succeed, and her developmental math textbooks and video resources are motivated by this belief. Basic College Mathematics with Early Integers, Second Edition, was written to help students effectively make the transition from arithmetic to algebra. The new edition offers new resources like the Student Organizer and now includes Student Resources in the back of the book to help students on their quest for success. MyMathLab provides a wide range of homework, tutorial, and assessment tools that make it easy to manage your course online.
1. The Whole Numbers
1.1 Tips for Success in Mathematics
1.2 Place Value, Names for Numbers, and Reading Tables
1.3 Adding Whole Numbers and Perimeter
1.4 Subtracting Whole Numbers
1.5 Rounding and Estimating
1.6 Multiplying Whole Numbers and Area
1.7 Dividing Whole Numbers
Integrated Review-Operations on Whole Numbers
1.8 An Introduction to Problem Solving
1.9 Exponents, Square Roots, and Order of Operations
Group Activity
Vocabulary Check
Highlights
Review
Test
2. Integers and Introduction to Variables
2.1 Introduction to Variables and Algebraic Expressions
2.2 Introduction to Integers
2.3 Adding Integers
2.4 Subtracting Integers
Integrated Review-Integers
2.5 Multiplying and Dividing Integers
2.6 Order of Operations
Group Activity
Vocabulary Check
Chapter Highlights
Chapter Review
Chapter Test
Cumulative Review
3. Fractions
3.1 Introduction to Fractions and Mixed Numbers
3.2 Factors and Simplest Form
3.3 Multiplying and Dividing Fractions
3.4 Adding and Subtracting Like Fractions and Least Common Denominator
Basic College Mathematics: Student Support Edition: An Applied Approach
With its interactive, objective-based approach, Basic College Mathematics provides comprehensive, mathematically sound coverage of topics essential to the basic college math course. The Eighth Edition features chapter-opening Prep Tests, real-world applications, and a fresh design--all of which engage students and help them succeed in the course. The Aufmann Interactive Method (AIM) is incorporated throughout the text, ensuring that students interact with and master concepts as they are presented.
Details about E-Z Business Math:
This self-teaching manual reviews arithmetic skills as they apply to business records and functions. Topics reviewed include fractions, decimals, calculating percentages, the fundamentals of statistics and business graphics, measurements in the English and metric systems, and applications of mathematics to banking, investing, loans, and setting up a business. Barron's continues its ongoing project of updating, improving, and giving handsome new designs to its popular list of Easy Way titles, now re-named Barron's E-Z Series. The new cover designs reflect the books' brand-new page layouts, which feature extensive two-color treatment, a fresh, modern typeface, and more graphic material than ever. Charts, graphs, diagrams, instructive line illustrations, and where appropriate, amusing cartoons help to make learning E-Z. Barron's E-Z books are self-teaching manuals focused on improving students' grades across a wide array of academic and practical subjects. For most subjects, the skill level ranges between senior high school and college-101 standards. In addition to their self-teaching value, these books are also widely used as textbooks or textbook supplements in classroom settings. E-Z books review their subjects in detail, using both short quizzes and longer tests to help students gauge their learning progress. All exercises and tests come with answers. Subject heads and key phrases are set in a second color as an easy reference aid.
Back to top
Rent E-Z Business Math 4th edition today, or search our site for other textbooks by Thomas P. Walsh. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Barron's Educational Series, Incorporated.
Editorial Reviews
Review
"A college-level math text for serious mathematicians and fans of recreational mathematics. This book proves that turtle graphics is not just kid stuff." Popular Computing
"Reading this book with the help of a good graphics computer system, you are sure to discover new and interesting math... an excellent textbook or self-study guide." W. Lloyd Milligan , Byte
About the Author
Hal Abelson is Class of 1922 Professor of Computer Science and Engineering at Massachusetts Institute of Technology and a fellow of the IEEE. He is a founding director of Creative Commons, Public Knowledge, and the Free Software Foundation. Additionally, he serves as co-chair for the MIT Council on Educational Technology.
Andrea diSessa is Chancellor's Professor in the Graduate School of Education at the University of California, Berkeley, and a member of the National Academy of Education. He is the coauthor of Turtle Geometry: The Computer as a Medium for Exploring Mathematics (MIT Press, 1981).

I discovered this little gem of a book while exploring the stacks in the library when I was attending a local junior college back in the 80's. The author uses Logo's turtle graphics as a way of exploring the properties of geometric space. From very simple beginnings drawing regular polygons and other simple shapes, the book gradually works its way to more and more complicated scenarios. After exploring the properties of ordinary turtle graphics, turtle graphics are tried on the surfaces of spheres and cubes, then on more complicated surfaces. Little by little, concepts of non-Euclidean geometry are introduced, until the final chapters in which the turtle is used to demonstrate the geometric nature of gravity in Einstein's general theory of relativity. I strongly recommend this book to anyone with interests in computer programming, geometry and physics. The unusual approach this book takes to the understanding of curved space is deceptively simple and surprisingly powerful.
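The turtle graphics the reviewer describes (forward steps and fixed turns tracing regular polygons) can be mimicked without any graphics at all. Here is a minimal Python sketch; the function name `poly` echoes the book's classic POLY procedure, but the code itself is mine, not the book's:

```python
import math

def poly(sides, step):
    """Simulated turtle POLY: repeat sides times [forward step; left 360/sides].

    Returns the list of points the turtle visits, starting at the origin.
    """
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    turn = 2 * math.pi / sides          # exterior angle, in radians
    for _ in range(sides):
        x += step * math.cos(heading)   # forward
        y += step * math.sin(heading)
        heading += turn                 # left turn
        points.append((x, y))
    return points

# Total turning is a full 360 degrees, so the path closes up on the start.
end = poly(5, 10)[-1]
assert abs(end[0]) < 1e-9 and abs(end[1]) < 1e-9
```

The closure property checked at the end (total turning of 360 degrees brings the turtle home) is exactly the kind of theorem the book develops from such experiments.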
Turtle Geometry teaches mathematics and physics via the computer and the Logo programming language. The mathematics covered is pretty advanced, including topology and general relativity. Yet, through the use of turtle geometry this advanced math and physics becomes accessible to the layperson. Although all of the examples are in the Logo programming language, there are listings of Basic routines in the back of the book. With the help of the Basic routines I was able to easily translate the Logo/Basic code to the Python programming language, which I chose to use for reading this book. The reviewers of this book mention it as the beginning of a revolution in mathematics education. It seems, though, that this revolution did not come about, as computers are still not used very effectively in the classroom. I think this is very sad, as the teaching approach used in Turtle Geometry could be very successful in the classroom.
Anyone interested in Logo, from beginners to advanced users, will benefit from reading this book. It has very easy and simple to understand examples, along with a review and questions at the end of every chapter. Some solutions are provided at the end of the book (and they're even correct, as opposed to many other textbooks I've read). The pace of the book gets gradually more difficult, yet more interesting, as you reach the climax at the end. A must read for anyone interested in mathematics.
I began working with this book in 1981 at the age of 15, using a Logo disk for the Apple II given to me by my sister's friend who worked in the MIT AI lab. It is a gem of a book. The mathematical subjects are explained in a clear, easy, and entertaining way. I loved it at the time. No one told me to read it or to create the programs in the book. I did it out of curiosity inspired by the many interesting topics. Along the way I got a good foundation in vector algebra, 2d and 3d geometry, programming, and other things, all without effort.
It is good for children or young adults who may later work in physics or vector graphics. I wish it was updated to use a modern language or a modern version of Logo. There is no other book that collects such a mixture of different subjects together. I still open the book to remember basic concepts and just for the joy of reading it again.
As an adult I created several different 2d vector graphics systems for other programmers to use. I credit this book for my interest in that area.
Everything you always wanted to know about Turtle Graphics. More, actually, than you thought there was to know. Sample code for algorithms. Also a section on implementing Turtle Graphics in other computer languages with source code.
Find a Cumming, GA Algebra 2 tutor. This branch of math has applications in a myriad of life and work situations, like insurance, financial decisions, business strategies, and even gambling - they all use probability calculations to make them viable. Phonics is the art of translating vowel sequences into long or short sounds. Long sounds can be equated to the sounds of the alphabetic vowels, a, e, i, o, u.
Search Results (47)
Number systems and the rules for combining numbers can be daunting. This unit will help you to understand the detail of rational and real numbers, complex numbers and integers. You will also be introduced to modular arithmetic and the concept of a relation between elements of a set.
The students will develop an algebraic expression from geometric representations and ultimately graph quadratic equations with understanding. The students will also develop a better understanding of algebraic expressions by comparing with geometric, tabular, and graphical representations.
This applet is designed to allow students to explore how the coefficients of a quadratic equation affect the shape and location of its graph. The applet can be used to enable students to discover a formula for the axis of symmetry equation, or to use the
This lesson aims to help students with quadratic functions y = ax^2 + bx + c. This is the next step after linear functions y = bx + c. The lesson begins with three quadratics and their graphs (three parabolas): y = x^2 - 2x + (0 or 1 or 2). The prerequisite or co-requisite is some working experience with algebra, like factoring x^2 - 2x into x(x - 2). The objective is to connect four things: the formula for y, the graph of y (a parabola), the roots of y, and the minimum or maximum of y. The particular example y = x^2 - 2x could be repeated by the teacher, for emphasis. The lesson will take more than one class period (and this is deserved!). The breaks allow time to consider parabolas starting with -x^2 and opening downward. A physical path would be one (dangerous?) activity.
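The four things the lesson connects can be computed side by side. Below is a hedged sketch (the helper name `roots_and_vertex` is mine; the three c values come from the lesson):

```python
import math

def roots_and_vertex(a, b, c):
    """Real roots (if any) and vertex of y = a*x^2 + b*x + c."""
    disc = b * b - 4 * a * c                      # discriminant
    roots = []
    if disc >= 0:
        roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                        (-b + math.sqrt(disc)) / (2 * a)])
    vx = -b / (2 * a)                             # axis of symmetry
    return roots, (vx, a * vx * vx + b * vx + c)

for c in (0, 1, 2):   # the lesson's three parabolas y = x^2 - 2x + c
    print("c =", c, roots_and_vertex(1, -2, c))
```

For c = 0 the roots are 0 and 2 with minimum at (1, -1); for c = 1 the parabola touches the x-axis at its vertex (1, 0); for c = 2 there are no real roots, which is exactly the progression the graphs show.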
This lesson unit is intended to help teachers assess how well students are able to solve quadratics in one variable. In particular, the lesson will help teachers identify and help students who have the following difficulties: making sense of a real life situation and deciding on the math to apply to the problem; solving quadratic equations by taking square roots, completing the square, using the quadratic formula, and factoring; and interpreting results in the context of a real life situation.
With this book, students can use the computational and pedagogical power of computer algebra to do calculations and to generalize about the theory and applications of linear algebra. This book allows students to focus their energies on the concepts and theory of linear algebra instead of time-consuming routine computations. The book is designed as a companion to a standard text in linear algebra or as a linear algebra/matrix methods supplement in engineering, mathematics, and computer science courses when linear algebra and/or matrix methods are studied. This supplement will work with mainstream texts using Mathematica versions on Macintosh, 386, or NeXT computers.
Top Customer Reviews
I received a copy of the 10th printing of the first edition, copyrighted in 1995. If you are using Mathematica 3 or 4.+, this book is out of date. Also, you cannot use this book without understanding linear algebra; that is, you must be taking Lin Alg concurrently. So it is not useful for self-improvement, or fun. If you know Linear Algebra or Mathematica already, this book will not help you. There are some major problems in the examples. For instance, Johnson writes about For loops in Mathematica, but one example simply produced a runaway calculation (p. 29). Also, (p. 37) the Mathematica "manual" row reduction example needs a special warning: if you make an error, you cannot simply go back and correct it by overwriting; you have to return to the beginning. Otherwise, your results will have compound errors embedded. Problems like these float elsewhere in the book. Too bad. For students beginning Linear Algebra, a better book (and one we hope gets revised soon) is C-K Cheung et al., Getting Started With Mathematica. I am sorry that Brooks/Cole and the Wolfram website still market Johnson's out-of-date and error-prone book. If Johnson revises the book, hopefully incorporating Mathematica 4.2, I hope these errors get corrected. At that time I hope I can change this review to a 5*.
Sherlock Holmes delighted in saying 'It's elementary, my dear Watson'. This lesson provides a brief overview of how Boolean...
Boolean Algebra is Elementary
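The laws of Boolean algebra that a lesson like this surveys can be verified exhaustively, since there are only finitely many truth assignments. A small sketch (my own illustration, not part of the lesson) checking De Morgan's law:

```python
from itertools import product

# De Morgan: not (a and b) == (not a) or (not b), for every assignment.
assert all((not (a and b)) == ((not a) or (not b))
           for a, b in product([False, True], repeat=2))
print("De Morgan's law holds for all truth assignments")
```

The same `all(... for ... in product(...))` pattern checks any two-variable Boolean identity in one line.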
Editorial Reviews
About the Author
EDWARD H. JULIUS, CPA, is an award-winning professor of business administration at California Lutheran University. He conducts frequent lectures and workshops on rapid math and has written several accounting study guides, as well as a series of crossword puzzle books. He is also the author of the original Rapid Math Tricks and Tips and Arithmetricks, both available from Wiley.
Top Customer Reviews
I am a long retired teacher and supervisor of mathematics. In addition to having taught math for many years, I have also written and published materials covering the same subject. I am now using this book with an eight-year-old math whiz and find it valuable to introduce him to rapid calculation and then an analysis of why it works (largely based on our place-value system of numbers). The various methods are interesting but most of them are not very practical for everyday calculation since most are special cases rather than general applications. For successful use they also require instant recall of the basic number facts, and there is no hint as to how this can be accomplished. Barring that prior requirement, not many math phobes will become highly skilled arithmeticians in 30 days using this or any other book.
I read this book about once a year. It turns arithmetic inside-out, showing quick tricks to do calculations in your head, or ways to transpose number problems to view them in new ways that make them easier to solve. For example, using the power of reciprocals, multiplying by 5 is equivalent to dividing by 2 and moving the decimal point to the right spot through a test of reasonableness. I apply many of the tips in this book regularly in my job during the day, and the more times I review the book the more they've become intuitive. I highly recommend it for anyone interested in math or numbers, or who deals regularly with simple math problems.
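The reciprocal trick the reviewer quotes is easy to sanity-check in code. A quick sketch (the function name and test values are mine):

```python
def times_five(x):
    # Multiplying by 5 is the same as dividing by 2 and then shifting
    # the decimal point one place (i.e., multiplying by 10).
    return (x / 2) * 10

for x in (42, 86, 13.4):
    assert times_five(x) == x * 5
print("halve-and-shift matches multiplication by 5")
```

Halving is often easier mentally than multiplying, which is the whole point of the trick.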
This book has some very cool tricks. It really helps out a lot on tests. It is fun to show my dad them (he is really interested in math). In math class we are reading this book and doing the problems. I don't see how Mr. Julius figured out these tricks. You should get this book!
This book enables one to calculate many difficult arithmetic problems using only your brain and a repertoire of handy tricks. Training is over thirty days (one topic per day), although multiple tricks can be done per day. Proofs of methodology are not the focus (e.g. finding a fifth root in seconds); memorising and applying the tricks is.
This really is practical, and the positive feedback after each set of exercises, along with encouraging comments from the author, will spur the reader to finish every trick. One weakness would be a rote memory approach, so when confronted with a different problem a student may be left floundering. To get the most out of the book the reader should prove each of the techniques using basic algebra.
If you master this book, you will realize that these are not tricks. You use this logic all the time but never realize you are doing it. Using this book, you verify what you are doing and maybe we can teach an old... O.K. I mean it is possible to pick up some new skills. There are examples, drills and, most important, thought processes.
The only drawback to this book is that with the advent of cheap, prolific calculators, there is no need for archaic fractions in our decimal-oriented world.
This may not be the be-all-end-all on the subject but I found it helpful from the grocery store to the IRS.
This book is so great for all ages, not just for students still at school. Faster mental calculation, simple tricks but very useful in daily life, and the way writer Julius presents it really leads readers to the wonderful world of math in a sense. My kids (8th grader and 3rd grader) devour the book at once and take turns doing ALL exercises; this fact alone proves that Mr. Julius has already succeeded in challenging the readers to explore, to see not only the beauty of math but its application in life. Needless to say, I went through the whole book cover to cover with excitement like a student still in school. Kudos to this great book!
I love this book! Well... At least parts of it. But those parts are enough to give it a five star rating. You will learn quite a few tricks that will speed up your head calculations enormously. However, they only apply to certain calculations, so while you sometimes will impress your friends and colleagues, you will still find problems that take as long as before.
M.Ed. in Mathematics Education
Enhance your students' understanding of math.
Help students in grades 1-8 learn mathematics with the online M.Ed. in Mathematics Education from Lesley University. Our program covers operations of arithmetic, number theory, algebra, geometry, and probability, as well as mathematics instruction. You'll be able to connect your advanced mathematics knowledge and apply it to classroom practice. And you'll study in a convenient, fully online environment with experienced faculty and engaged peers.
This program may lead to professional licensure in Massachusetts, depending on the option selected, or it may prepare you to seek licensure in other states.
OPTION A: ADVANCED CONTENT STUDY
This track is intended for master's degree candidates interested in deepening their content knowledge in mathematics. Coursework focuses primarily on matters related to mathematics and mathematics instruction. This program option has been approved by the Massachusetts Department of Elementary and Secondary Education for Professional Teaching Licensure in Elementary Education (1-6), Mathematics (1-6), or Mathematics (5-8) in the Commonwealth of Massachusetts.
Course Number and Course Title:
* CMATH 6110 - Functions and Algebra II: Broadening the Base
* CMATH 6112 - Geometry and Measurement: Circles, Symmetry, and Solids
* CMATH 6114 - Statistics and Data Analysis
* CMATH 6115 - Concepts and Calculus: Change and Infinity
* EEDUC 6154 - Meeting Diverse Needs in the Mathematics Classroom
* EEDUC 7121 - Assessment Issues in Mathematics: Summative and Formative
TOTAL CREDITS: 33
OPTION B: ADVANCED TEACHING PRACTICES
This track is intended for master's degree candidates interested in deepening their teaching practice, applying advanced approaches to pedagogy and instruction to complement their content knowledge. Coursework and clinical practice focus primarily on elective teaching techniques. This program option may prepare you to seek licensure in Massachusetts or other states.
SpeQ Mathematics 3.3
Advanced yet easy-to-use math calculator that immediately and precisely computes the result as you type a math expression. It allows multiple math expressions at the same time and also supports fractions. Free download.
The ALMMF, or Adaptive Learning Module for Mathematical Functions,
is a concept that emerged from the idea of trying to combine the
intensely practical orientation of classical handbooks of mathematics
such as [1,2]
with the pedagogical potential of the Internet, which opens the door to
exciting new ways of helping people acquire useful mathematical
knowledge.
The concept is influenced also by encyclopedias and surveys such as
[3].
The ALMMF project leaders plan to invite comments and advice on the
ALMMF concept from teachers, university professors, and other
interested individuals, and to write a detailed prospectus for the
design and implementation of an ALMMF.
However, before embarking on this task, we felt that a search should
be made to determine whether Web sites, CD-ROMs, software packages,
or other kinds of interactive resources could be found that provide
capabilities similar to what we envision for the ALMMF.
This report summarizes our findings.
The next section, §2, introduces the ALMMF and notes
its connection to a related project, now in progress at NIST, to
construct a Digital Library of Mathematical Functions.
In §3 we describe our methodology for gathering relevant data,
and we catalog the resources found.
In §4 we examine a few of the resources that are most
similar to the envisioned ALMMF.
Finally, §5 presents our summary and conclusions. | 677.169 | 1 |
provides students with decision making, critical thinking, skill building and fun-filled hands-on projects. All the mathematics projects included in the book are classroom tested and focus on concept development through creativity. The step-by-step easy projects explained in this book help to remove the mathematics phobia commonly present... more...
Autism is a complex developmental disability. Generally, Autism presents itself during the first three years of a person's life. The condition has an effect on normal brain function characterized by social impairments, communication difficulties, and restricted, repetitive, and stereotyped patterns of behaviour. Males are five times more likely to... more...
Architects, development practitioners and designers are working in a global environment and issues such as environmental and cultural sustainability matter more than ever. Past interactions and interventions between developed and developing countries have often been unequal and inappropriate. We now need to embrace fresh design practices based on... more...
Learning Mathematics the Fun Way caters to those students who deserve to have their individual learning needs satisfied. This book emphasises teaching with activities, drawing on real-life models from children's point of view, and promotes expectations for success. The book nurtures the interest of the student by bringing up the fun-quotient in... more...
Faced with the conundrum of ever changing life, all of us yearn for a single formula which can solve the problem at hand. Recollecting the exact formula that would help in navigating the labyrinth of a problem is the perpetual problem. This really stares at us in the face when trying for competitive examinations and mostly in Mathematics where quantitative... more...
A vision for progress in the North East through peaceful means. On 4 July 1997, Sanjoy Ghase, head of the non-governmental organization AVARD in the North East, was abducted. The United Liberation Front of Assam (ULFA) claimed responsibility for this act. Sanjoy never returned, and mystery still shrouds his disappearance. This exceptional collection of... more...
Vedic Mathematics unfolds a new method of approach to mathematics. It is based on sixteen one-line formulae known as 'sutras' and thirteen one-line corollaries known as 'upa sutra'. These simple, straightforward formulae and techniques help in solving complex mathematical problems in quick and easy steps. The step-by-step easy methods explained in... more...
This is a concise, practical guide that will help you learn Generics in .NET, with lots of real world and fun-to-build examples and clear explanations. It is packed with screenshots to aid your understanding of the process. This book is aimed at beginners in Generics. It assumes some working knowledge of C#, but it isn't mandatory. The following... more...
The arrival of the young boy in an upper middle class Bengali household triggers a gripping story of love, desire and renunciation. Set in two different cities, New Delhi and Varanasi, Across The Mystic Shore explores the entwining lives of four women forced to confront their past decisions in order to understand their present delusions and insecurities.... more... | 677.169 | 1 |
Bacliff Precalculus
Carol N.
Steve O.
...The student also needs to learn to analyze system behavior to identify the type of differential equation needed to model it. As an experienced engineer and computer scientist, I use differential equations regularly and can help students master the formulation, interpretation, and solution of dif...
Mia Shores, FL ACT Math
Augusto R.
Andrea G.
EDINA B.
...Calculus is a branch of mathematics focused on limits, functions, derivatives, integrals, and infinite series, with two major branches - differential and integral calculus. Remember the equation for the slope of a straight line: m = (change in y)/(change in x), usually a constant. Well calculus...
Math 425 (Introduction to Probability)
Section 006
Fall 2008
Probability provides formal approaches to problems
that involve uncertainty.
The goal of this course is to enable students to solve problems in
a variety of different contexts. We
will develop a central set of concepts
(including the
notion of a random variable), and we will practice
interpreting and solving problems with these concepts at hand.
Calculus and combinatorics will be part of the process.
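As a small taste of the kind of problem such a course treats, the expected value of a random variable can be estimated by simulation. This is a sketch, not course material; the trial count and seed are arbitrary:

```python
import random

def estimate_die_mean(trials=100_000, seed=0):
    """Monte Carlo estimate of E[X] for one roll of a fair six-sided die."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(trials)) / trials

# The exact expectation is (1 + 2 + ... + 6) / 6 = 3.5; the estimate
# should land close to it, and closer as the trial count grows.
print(round(estimate_die_mean(), 3))
```

Comparing such simulated averages against exact calculations is a standard way to build intuition for expectation.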
The textbook for the course is A First Course in
Probability
by Sheldon Ross (7th edition). Please bring the textbook and a
calculator to
class every day.
Homework:
Regular homework assignments determine part of the
grade in this class. Homework sets will
usually be due at the start of class on Mondays.
(I don't accept late homework.)
Everyone will
have their lowest homework grade dropped at the end of the semester.
Exams:
The midterms for this course will be held during
our regular class period.
Here is the exam schedule:
Midterm 1: Wednesday, October 8th, 11:10am-12:00pm, 455 Dennison
Midterm 2: Wednesday, November 12th, 11:10am-12:00pm, 221 Dennison
Final: Friday, December 12th, 4:00pm-6:00pm, 296 Dennison
You will be allowed to use a calculator and one page of notes
during the exams.
The Method of Coordinates…
Overview
The book examines geometry as an aid to calculation and the necessity and peculiarities of four-dimensional space. Written for systematic study, it features a helpful series of "road signs" in the margins, alerting students to passages requiring particular attention, and an abundance of ingenious problems — with solutions, answers, and hints — that promote habits of independent work.
Editorial Reviews
From the Publisher
"All through both volumes ['Functions & Graphs' and 'The Methods of Coordinates' ever again... High school students (or teachers) reading through these two books would learn an enormous amount of good mathematics. More importantly, they would also get a glimpse of how mathematics is done."
—- H. Wu, The Mathematical Intelligencer
"This book is a concise and compact treatment of the essential ideas of coordinate geometry. The authors demonstrate powerfully how geomtric ides may be communicated and studied effectively without the aid of pictures. Graphics are of course of vital importance int he methods of Euclidean geometry. However, the methods of coordinate geometry are able to transform pure geometric ideas into algebraic manipulations where the meaning is very clear once the formalism is learnt. In particular the book demonstrates the value of conveying information in the form of images embedded in formulas. This is very useful in the transmission of information by electronic means. . . This book is a valuable tool for teaching the redimentary concepts of analytical geometry. It contains a number of excellent examples and exercises which go further than a mere introductory programme. the exercises, while not numerous, are very thought-provoking and are bound to pose a serious challenge to the interested student." | 677.169 | 1 |
...At an Algebra 2 level, it is important for the student to understand why a certain question is being answered a certain way, not just knowing how to answer. I spend a good deal of time looking at mathematical theories with students, going to the root of things. This is critical because when the student might be coming short in calculation, reasoning will be key.
I taught freshman and sophomore calculus and differential equations at a major engineering college. It was a comprehensive two year curriculum. My approach is to try different examples to explain the material.
Schaum's Outline of Advanced Calculus, Second Edition / Edition 2
Paperback
Overview
Confusing Textbooks
Practice problems with full explanations that reinforce knowledge
Coverage of the most up-to-date developments in your course field
In-depth review of practices and applications
Fully compatible with your classroom text, Schaum's highlights all the important facts you need to know. Use Schaum's to shorten your study time—and get your best test scores!
Customer Reviews
Most Helpful Customer Reviews
Schaum's Outline of Advanced Calculus, Second Edition: rated 2.7 out of 5, based on 3 reviews.
Guest
More than 1 year ago
This textbook supplement for advanced calculus provides a better explanation than the assigned text, with diagrams and solved problems. The text lacks continuity of diagrams in most explanations related to theory. There just is not enough analogy, clarification, or expressive use of the English language to provide a useful treatment of the subject content in most (or all) advanced mathematics textbooks; therefore a supplement like this Outline should provide adequate explanation.
Mathematics Glossary
Addition and subtraction within 5, 10, 20, 100, or 1000. Addition or subtraction of two whole numbers with whole number answers, and with sum or minuend in the range 0-5, 0-10, 0-20, or 0-100, respectively. Example: 8 + 2 = 10 is an addition within 10, 14 - 5 = 9 is a subtraction within 20, and 55 - 18 = 37 is a subtraction within 100.
Bivariate data. Pairs of linked numerical observations. Example: a list of heights and weights for each player on a football team.
Box plot. A method of visually displaying a distribution of data values by using the median, quartiles, and extremes of the data set. A box shows the middle 50% of the data.1
Commutative property. See Table 3 in this Glossary.
Complex fraction. A fraction A/B where A and/or B are fractions (B nonzero).
Computation algorithm. A set of predefined steps applicable to a class of problems that gives the correct result in every case when the steps are carried out correctly. See also: computation strategy.
Computation strategy. Purposeful manipulations that may be chosen for specific problems, may not have a fixed order, and may be aimed at converting one problem into another. See also: computation algorithm.
Congruent. Two plane or solid figures are congruent if one can be obtained from the other by rigid motion (a sequence of rotations, reflections, and translations).
Counting on. A strategy for finding the number of objects in a group without having to count every member of the group. For example, if a stack of books is known to have 8 books and 3 more books are added to the top, it is not necessary to count the stack all over again. One can find the total by counting on—pointing to the top book and saying "eight," following this with "nine, ten, eleven. There are eleven books now."
Dot plot. See: line plot.
Dilation. A transformation that moves each point along the ray through the point emanating from a fixed center, and multiplies distances from the center by a common scale factor.
Expanded form. A multi-digit number is expressed in expanded form when it is written as a sum of single-digit multiples of powers of ten. For example, 643 = 600 + 40 + 3.
Expected value. For a random variable, the weighted average of its possible values, with weights given by their respective probabilities.
First quartile. For a data set with median M, the first quartile is the median of the data values less than M. Example: For the data set {1, 3, 6, 7, 10, 12, 14, 15, 22, 120}, the first quartile is 6.2 See also: median, third quartile, interquartile range.
Fraction. A number expressible in the form a/b where a is a whole number and b is a positive whole number. (The word fraction in these standards always refers to a non-negative number.) See also: rational number.
Identity property of 0. See Table 3 in this Glossary.
Independently combined probability models. Two probability models are said to be combined independently if the probability of each ordered pair in the combined model equals the product of the original probabilities of the two individual outcomes in the ordered pair.
Integer. A number expressible in the form a or -a for some whole number a.
Interquartile Range. A measure of variation in a set of numerical data, the interquartile range is the distance between the first and third quartiles of the data set. Example: For the data set {1, 3, 6, 7, 10, 12, 14, 15, 22, 120}, the interquartile range is 15 - 6 = 9. See also: first quartile, third quartile.
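The Moore and McCabe method mentioned in the footnotes can be checked directly. The following Python sketch (the helper names `median` and `quartiles` are ours, not part of the glossary) splits the sorted data at the overall median and takes the median of each half:

```python
def median(values):
    """Middle value of the list, or the mean of the two middle values."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def quartiles(values):
    """First and third quartiles by the Moore and McCabe method:
    medians of the lower and upper halves, excluding the overall
    median itself when the list has an odd length."""
    s = sorted(values)
    n = len(s)
    lower, upper = s[: n // 2], s[(n + 1) // 2 :]
    return median(lower), median(upper)

data = [1, 3, 6, 7, 10, 12, 14, 15, 22, 120]
q1, q3 = quartiles(data)
print(q1, q3, q3 - q1)  # 6 15 9
```

Running it on the glossary's data set reproduces the stated first quartile (6), third quartile (15), and interquartile range (9).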
Line plot. A method of visually displaying a distribution of data values where each data value is shown as a dot or mark above a number line. Also known as a dot plot.3
Mean. A measure of center in a set of numerical data, computed by adding the values in a list and then dividing by the number of values in the list.4 Example: For the data set {1, 3, 6, 7, 10, 12, 14, 15, 22, 120}, the mean is 21.
Mean absolute deviation. A measure of variation in a set of numerical data, computed by adding the distances between each data value and the mean, then dividing by the number of data values. Example: For the data set {2, 3, 6, 7, 10, 12, 14, 15, 22, 120}, the mean absolute deviation is 20.
Median. A measure of center in a set of numerical data. The median of a list of values is the value appearing at the center of a sorted version of the list—or the mean of the two central values, if the list contains an even number of values. Example: For the data set {2, 3, 6, 7, 10, 12, 14, 15, 22, 90}, the median is 11.
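These three measures of center and variation are easy to verify with a few lines of Python (a sketch we are adding; the variable names are our own). Note that the exact mean absolute deviation for the glossary's data set is 19.96, which the glossary rounds to 20:

```python
data_mean = [1, 3, 6, 7, 10, 12, 14, 15, 22, 120]  # the glossary's mean example
data_mad  = [2, 3, 6, 7, 10, 12, 14, 15, 22, 120]  # the glossary's MAD example
data_med  = [2, 3, 6, 7, 10, 12, 14, 15, 22, 90]   # the glossary's median example

mean = sum(data_mean) / len(data_mean)             # add the values, divide by the count

m = sum(data_mad) / len(data_mad)
mad = sum(abs(x - m) for x in data_mad) / len(data_mad)

s = sorted(data_med)                               # even-length list: average the two middle values
median = (s[len(s) // 2 - 1] + s[len(s) // 2]) / 2

print(mean)    # 21.0, as stated in the glossary
print(mad)     # 19.96 (the glossary rounds this to 20)
print(median)  # 11.0
```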
Midline. In the graph of a trigonometric function, the horizontal line halfway between its maximum and minimum values.
Multiplication and division within 100. Multiplication or division of two whole numbers with whole number answers, and with product or dividend in the range 0-100. Example: 72 ÷ 8 = 9.
Multiplicative inverses. Two numbers whose product is 1 are multiplicative inverses of one another. Example: 3/4 and 4/3 are multiplicative inverses of one another because 3/4 × 4/3 = 4/3 × 3/4 = 1.
Number line diagram. A diagram of the number line used to represent numbers and support reasoning about them. In a number line diagram for measurement quantities, the interval from 0 to 1 on the diagram represents the unit of measure for the quantity.
Percent rate of change. A rate of change expressed as a percent. Example: if a population grows from 50 to 55 in a year, it grows by 5/50 = 10% per year.
Probability distribution. The set of possible values of a random variable with a probability assigned to each.
Properties of operations. See Table 3 in this Glossary.
Properties of equality. See Table 4 in this Glossary.
Properties of inequality. See Table 5 in this Glossary.
Probability. A number between 0 and 1 used to quantify likelihood for processes that have uncertain outcomes (such as tossing a coin, selecting a person at random from a group of people, tossing a ball at a target, or testing for a medical condition).
Probability model. A probability model is used to assign probabilities to outcomes of a chance process by examining the nature of the process. The set of all outcomes is called the sample space, and their probabilities sum to 1. See also: uniform probability model.
Random variable. An assignment of a numerical value to each outcome in a sample space.
Rational expression. A quotient of two polynomials with a non-zero denominator.
Rational number. A number expressible in the form a/b or - a/b for some fraction a/b. The rational numbers include the integers.
Rectilinear figure. A polygon all angles of which are right angles.
Rigid motion. A transformation of points in space consisting of a sequence of one or more translations, reflections, and/or rotations. Rigid motions are here assumed to preserve distances and angle measures.
Repeating decimal. The decimal form of a rational number. See also: terminating decimal.
Sample space. In a probability model for a random process, a list of the individual outcomes that are to be considered.
Scatter plot. A graph in the coordinate plane representing a set of bivariate data. For example, the heights and weights of a group of people could be displayed on a scatter plot.5
Similarity transformation. A rigid motion followed by a dilation.
Tape diagram. A drawing that looks like a segment of tape, used to illustrate number relationships. Also known as a strip diagram, bar model, fraction strip, or length model.
Terminating decimal. A decimal is called terminating if its repeating digit is 0.
Third quartile. For a data set with median M, the third quartile is the median of the data values greater than M. Example: For the data set {2, 3, 6, 7, 10, 12, 14, 15, 22, 120}, the third quartile is 15. See also: median, first quartile, interquartile range.
Transitivity principle for indirect measurement. If the length of object A is greater than the length of object B, and the length of object B is greater than the length of object C, then the length of object A is greater than the length of object C. This principle applies to measurement of other quantities as well.
Vector. A quantity with magnitude and direction in the plane or in space, defined by an ordered pair or triple of real numbers.
Visual fraction model. A tape diagram, number line diagram, or area model.
Whole numbers. The numbers 0, 1, 2, 3, ...
1Adapted from Wisconsin Department of Public Instruction, accessed March 2, 2010.
2Many different methods for computing quartiles are in use. The method defined here is sometimes called the Moore and McCabe method. See Langford, E., "Quartiles in Elementary Statistics," Journal of Statistics Education Volume 14, Number 3 (2006).
3Adapted from Wisconsin Department of Public Instruction, op. cit.
4To be more precise, this defines the arithmetic mean.
5Adapted from Wisconsin Department of Public Instruction, op. cit.
Table 1
Common addition and subtraction.1
Result Unknown
Change Unknown
Start Unknown
Add to
Two bunnies sat on the grass. Three more bunnies hopped there. How many bunnies are on the grass now? 2 + 3 = ?
Two bunnies were sitting on the grass. Some more bunnies hopped there. Then there were five bunnies. How many bunnies hopped over to the first two? 2 + ? = 5
Some bunnies were sitting on the grass. Three more bunnies hopped there. Then there were five bunnies. How many bunnies were on the grass before? ? + 3 =5
Take from
Five apples were on the table. I ate two apples. How many apples are on the table now?5-2 = ?
Five apples were on the table. I ate some apples. Then there were three apples. How many apples did I eat?5 - ? = 3
Some apples were on the table. I ate two apples. Then there were three apples. How many apples were on the table before?? -2 = 3
Total Unknown
Addend Unknown
Both Addends Unknown2
Put Together / Take Apart3
Three red apples and two green apples are on the table. How many apples are on the table? 3 + 2 = ?
Five apples are on the table. Three are red and the rest are green. How many apples are green? 3 + ? = 5, 5-3 = ?
Grandma has five flowers. How many can she put in her red vase and how many in her blue vase? 5 = 0 + 5, 5 = 5 + 0, 5 = 1 + 4, 5 = 4 + 1, 5 = 2 + 3, 5 = 3 + 2
Difference Unknown
Bigger Unknown
Smaller Unknown
Compare4
("How many more?" version): Lucy has two apples. Julie has five apples. How many more apples does Julie have than Lucy? ("How many fewer?" version): Lucy has two apples. Julie has five apples. How many fewer apples does Lucy have than Julie? 2 + ? = 5, 5 - 2 = ?
(Version with "more"): Julie has three more apples than Lucy. Lucy has two apples. How many apples does Julie have? (Version with "fewer"): Lucy has 3 fewer apples than Julie. Lucy has two apples. How many apples does Julie have? 2 + 3 = ?, 3 + 2 = ?
(Version with "more"):Julie has three more apples than Lucy. Julie has five apples. How many apples does Lucy have?(Version with "fewer"):
Lucy has 3 fewer apples than Julie. Julie has five apples. How many apples does Lucy have? 5 – 3 = ?, ? + 3 = 5
1 Adapted from Box 2-4 of Mathematics Learning in Early Childhood, National Research Council (2009, pp. 32, 33).
2 These take apart situations can be used to show all the decompositions of a given number. The associated equations, which have the total on the left of the equal sign, help children understand that the = sign does not always mean, makes or results in but always does mean is the same number as.
3 Either addend can be unknown, so there are three variations of these problem situations. Both addends Unknown is a productive extension of the basic situation, especially for small numbers less than or equal to 10.
4 For the Bigger Unknown or Smaller Unknown situations, one version directs the correct operation (the version using more for the bigger unknown and using less for the smaller unknown). The other versions are more difficult.
Table 2
Common multiplication and division situations.1
Unknown Product
Group Size Unknown ("How many in each group?" Division)
Number of Groups Unknown ("How many groups?" Division)
3 x 6 = ?
3 x ? = 18, and
18 ÷ 3 = ?
? x 6 = 18, and
18 ÷ 6 = ?
Equal Groups
There are 3 bags with 6 plums in each bag. How many plums are there in all?
Measurement example. You need 3 lengths of string, each 6 inches long. How much string will you need altogether?
If 18 plums are shared equally into 3 bags, then how many plums will be in each bag?
Measurement example. You have 18 inches of string, which you will cut into 3 equal pieces. How long will each piece of string be?
If 18 plums are to be packed 6 to a bag, then how many bags are needed?
Measurement example. You have 18 inches of string, which you will cut into pieces that are 6 inches long. How many pieces of string will you have?
Arrays2, Area3
There are 3 rows of apples with 6 apples in each row. How many apples are there?
Area example. What is the area of a 3 cm by 6 cm rectangle?
If 18 apples are arranged into 3 equal rows, how many apples will be in each row?
Area example. A rectangle has area 18 square centimeters. If one side is 3 cm long, how long is a side next to it?
If 18 apples are arranged into equal rows of 6 apples, how many rows will there be?
Area example. A rectangle has area 18 square centimeters. If one side is 6 cm long, how long is a side next to it?
Compare
A blue hat costs $6. A red hat costs 3 times as much as the blue hat. How much does the red hat cost?
Measurement example. A rubber band is 6 cm long. How long will the rubber band be when it is stretched to be 3 times as long?
A red hat costs $18 and that is 3 times as much as a blue hat costs. How much does a blue hat cost?
Measurement example. A rubber band is stretched to be 18 cm long and that is 3 times as long as it was at first. How long was the rubber band at first?
A red hat costs $18 and a blue hat costs $6. How many times as much does the red hat cost as the blue hat?
Measurement example. A rubber band was 6 cm long at first. Now it is stretched to be 18 cm long. How many times as long is the rubber band now as it was at first?
General
a x b = ?
a x ? = p and
p ÷ a = ?
? x b = p, and
p ÷ b = ?
1The first examples in each cell are examples of discrete things. These are easier for students and should be given before the measurement examples.
2The language in the array examples shows the easiest form of array problems. A harder form is to use the terms rows and columns: The apples in the grocery window are in 3 rows and 6 columns. How many apples are in there? Both forms are valuable.
3Area involves arrays of squares that have been pushed together so that there are no gaps or overlaps, so array problems include these especially important measurement situations.
Table 3
The properties of operations. Here a, b and c stand for arbitrary numbers in a given number system. The properties of operations apply to the rational number system, the real number system, and the complex number system.
Associative property of addition
(a +b) + c = a + (b+c)
Commutative property of addition
a + b = b + a
Additive identity property of 0
a + 0 = 0 + a = a
Existence of additive inverses
For every a there exists -a so that a + (-a) = (-a) + a = 0
Associative property of multiplication
(a x b) x c = a x (b x c)
Commutative property of multiplication
a x b = b x a
Multiplicative identity property of 1
a x 1 = 1 x a = a
Existence of multiplicative inverses
For every a ≠ 0 there exists 1/a so that a x 1/a = 1/a x a = 1
Distributive property of multiplication over addition
a x (b + c) = a x b + a x c
Table 4
The properties of equality. Here a, b and c stand for arbitrary numbers in the rational, real, or complex number systems.
Reflexive property of equality
a = a.
Symmetric property of equality
If a = b, then b = a.
Transitive property of equality
If a = b and b = c, then a = c.
Addition property of equality
If a = b, then a +c = b + c.
Subtraction property of equality
If a = b, then a - c = b - c.
Multiplication property of equality
If a = b, then a x c = b x c.
Division property of equality
If a = b and c ≠ 0, then a ÷ c = b ÷ c.
Substitution property of equality
If a = b, then b may be substituted for a in any expression containing a.
Table 5
The properties of inequality. Here a, b, and c stand for arbitrary numbers in the rational or real number systems.
Exactly one of the following is true: a < b, a = b, a > b.
If a > b and b > c, then a > c.
If a > b, then b < a.
If a > b, then -a < -b.
If a > b, then a ± c > b ± c.
If a > b and c > 0, then a x c > b x c.
If a > b and c < 0, then a x c < b x c.
If a > b and c > 0, then a ÷ c > b ÷ c.
If a > b and c < 0, then a ÷ c < b ÷ c.
In its most common usage, a "set" is any collection of objects and a "function" is simply a rule that assigns members of one set to members of another set. As an everyday non-mathematical example, consider the situation in which a teacher assigns grades to students. In this case, the teacher is the function that assigns to each member in the set of students a letter in the set whose members are A, B, C, D and F. While the study of sets and functions is important in all computational mathematics courses, it is the study of limits that distinguishes the study of calculus from the study of precalculus. What this means is the topic of Part I of this course.
Elementary Number Theory
ISBN-13: 9780073051888
ISBN-10: 0073051888
Edition: 6 Pub Date: 2005 Publisher: McGraw-Hill College
Summary: Elementary Number Theory, Sixth Edition, is written for the one-semester undergraduate number theory course taken by math majors, secondary education majors, and computer science students. This contemporary text provides a simple account of classical number theory, set against a historical background that shows the subject's evolution from antiquity to recent research. Written in David Burton's engaging style, Elementary Number Theory reveals the attraction that has drawn leading mathematicians and amateurs alike to number theory over the course of history.
Burton, David M. is the author of Elementary Number Theory, published 2005 under ISBN 9780073051888 and 0073051888. Nine Elementary Number Theory textbooks are available for sale on ValoreBooks.com, seven used from the cheapest price of $9.79, or buy new starting at $93.12 | 677.169 | 1 |
Linear Algebra/Vectors
Vectors are commonly used in physics and other fields to express quantities that cannot be accurately described by a scalar. Scalars are simply the value of something in a single dimension - a real number. For example, one might say that they have driven 5 kilometers, that an hour has elapsed, or that something's mass is 20 kilograms. In every one of these cases, there has been exactly one value stated.
However, we might have more information we wish to give. Take the example of driving 5 kilometers. In this case, it may be useful to know how far you drove, but it might also be equally important which direction you drove, such as 5 kilometers due east. Now, given your starting point, exactly where you drove can be determined.
We can define a vector to be an ordered pair consisting of a magnitude and a direction. In this diagram, r is the magnitude of this vector and θ is the direction. Notice, now, that we have moved horizontally r cos(θ) and vertically r sin(θ). These are called the x-component and the y-component, respectively.
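These component formulas can be verified numerically. Here is a small Python sketch (Python, the `math` module, and the function name `components` are our additions; θ is taken in radians):

```python
import math

def components(r, theta):
    """Resolve a vector of magnitude r and direction theta (radians)
    into its x- and y-components."""
    return r * math.cos(theta), r * math.sin(theta)

x, y = components(5, math.pi / 6)                   # r = 5, direction 30 degrees
assert math.isclose(math.hypot(x, y), 5)            # the components recover the magnitude
assert math.isclose(math.atan2(y, x), math.pi / 6)  # ...and the direction
```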
We can also write a vector conveniently in terms of the x- and y-components, as a column with the x-component on top and the y-component below. In some texts, you may see the vector written sideways, like (x, y), but when you write by hand it will help greatly to write them downwards in columns. In print we commonly set vectors in bold, but since you probably don't have a pen that writes in bold print, underline your vectors, i.e. write v, or put a tilde underneath them. Occasionally in physics, you may see vectors written with an arrow over them.
Notice that vectors need not have two components. We can have 2 or 3 or n or an infinite number of components.
We write the set of all vectors with 2 real number components as R2; likewise for 3, n, or an infinite number of components. For vectors with complex number components, we write Cn. Polynomials are "vectors" too; we'll look at notation for the set of polynomials later. For a reason why we do this, see Set theory for an explanation.
We can define some actions on vectors. What will happen if we extend the vector? Or what will happen if we shrink the vector? The vector's direction doesn't change, only its length -- its magnitude. The action we perform to stretch or shrink a vector is that we multiply its magnitude by some amount. We refer to doing this as scalar multiplication: we multiply the vector by a scalar real number.
The operation of subtraction on two vectors, a and b, a-b, can also be written as a+(-1)b. Therefore, we can use scalar multiplication to find the value of (-1)b, then use vector addition to find our solution.
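Representing a vector as a list of components, scalar multiplication and vector addition are one line each. A Python sketch (the helper names `scale` and `add` are ours) shows that a + (-1)b and direct component-wise subtraction agree:

```python
def scale(k, v):
    """Multiply every component of v by the scalar k."""
    return [k * vi for vi in v]

def add(u, v):
    """Add two vectors component by component."""
    return [ui + vi for ui, vi in zip(u, v)]

a, b = [4, 1], [1, 3]
assert add(a, scale(-1, b)) == [3, -2]               # a - b written as a + (-1)b
assert [ai - bi for ai, bi in zip(a, b)] == [3, -2]  # direct subtraction agrees
```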
^ Complex numbers can be represented in the form a + bi, or equivalently r(cos θ + i sin θ), in other words a vector with magnitude r and direction θ. On the complex plane, this vector has a real x-component and an imaginary y-component. See Complex numbers for more information.
Now consider a plane. If we have two nonparallel vectors lying on the plane and we add them, we can add a linear combination (that is, add the two vectors, which are multiplied only by scalars) to choose some other vector. The set of all vectors under linear combinations of these two vectors form a plane.
More simply, if we have two nonparallel vectors a and b, we can form any other vector in the plane of a and b by: c = αa + βb, where α and β are scalars.
If we pick a vector c=a-b to form a triangle, we can show that these two forms are indeed equivalent by trigonometry.
The angle θ is then important, as the dot product of two vectors is related to the angle between them by a·b = |a||b| cos θ. In particular, we can calculate the dot product of two vectors; if the dot product is zero we can then say that the two vectors are perpendicular.
For example, consider two vectors whose dot product works out to zero. Plot the vectors on the plane and verify for yourself that they are perpendicular.
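As a concrete check (this example pair and the Python helpers are our own additions, not the original's figures), the vectors (2, 1) and (-1, 2) have dot product 2·(-1) + 1·2 = 0, and the angle formula a·b = |a||b| cos θ confirms they sit at a right angle:

```python
import math

def dot(u, v):
    """Dot product of two vectors given as lists of components."""
    return sum(ui * vi for ui, vi in zip(u, v))

def angle(u, v):
    """Angle between u and v, from u·v = |u||v| cos(theta)."""
    return math.acos(dot(u, v) / (math.hypot(*u) * math.hypot(*v)))

u, v = [2, 1], [-1, 2]
assert dot(u, v) == 0                          # zero dot product...
assert math.isclose(angle(u, v), math.pi / 2)  # ...means a right angle
```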
The cross product is a more complicated product to define, but it has a nice geometric property. We will only look at the cross product in three dimensions, where it is most commonly used; it is difficult to define in higher dimensions.
For vectors a = (a1, a2, a3) and b = (b1, b2, b3) with three components each, the cross product is defined as
a × b = (a2b3 - a3b2, a3b1 - a1b3, a1b2 - a2b1),
where the subscripts label the components of each vector. This formula is often remembered as the determinant of the 3 × 3 matrix whose first row holds the unit vectors i, j, k and whose remaining rows hold the components of a and b. If you have not done matrices before, the component formula above can be used directly.
If a and b are two vectors, a×b is a vector perpendicular to both. There are two choices of vector perpendicular to a and b; if we switch the order of the cross product we obtain the other vector.
The magnitude of the cross product of two vectors is the area of the parallelogram formed by these two vectors.
The scalar triple product, a·(b×c), is the volume of the parallelepiped formed by these three vectors.
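A short Python sketch ties these facts together (the function names and the example vectors are our choices): a×b is perpendicular to both factors, swapping the order flips its sign, |a×b| gives the parallelogram area, and a·(b×c) gives the box volume:

```python
def cross(a, b):
    """Cross product of two 3-component vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(u, v):
    """Dot product of two vectors given as lists of components."""
    return sum(ui * vi for ui, vi in zip(u, v))

a, b, c = [1, 0, 0], [0, 2, 0], [0, 0, 3]

n = cross(a, b)                             # n = [0, 0, 2]
assert dot(n, a) == 0 and dot(n, b) == 0    # perpendicular to both factors
assert cross(b, a) == [0, 0, -2]            # swapping the order flips the sign
assert sum(x * x for x in n) ** 0.5 == 2.0  # |a x b| = area of the 1-by-2 parallelogram
assert dot(a, cross(b, c)) == 6             # a·(b x c) = volume of the 1 x 2 x 3 box
```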