700
https://physics.stackexchange.com/questions/735190/why-does-beta-plus-decay-positron-decay-have-a-1-022-mev-energy-requirement-ra
Why does beta-plus decay (positron decay) have a 1.022 MeV energy requirement rather than 0.511 MeV + the binding energy of an electron?

Asked Nov 4, 2022 · Viewed 2k times

From my understanding, beta-plus decay is only possible when certain energy requirements are met, because the atom must emit a positron, which has a rest-mass energy of 511 keV. Therefore the difference in rest-mass energy between the parent atom and the daughter atom (in other words, their difference in binding energy) has to be at least 511 keV in order to (in layman's terms) 'produce' a positron. However, most physics textbooks go a step further and state that the energy requirement is actually double that (1.022 MeV). They explain that positron decay converts a proton into a neutron, decreasing the atomic number by 1, so the daughter atom has an excess electron that must also be ejected. I can't understand how this orbital electron can influence the decay properties of its nucleus. Here is an imaginary counter-example illustrating what I don't understand: let the parent nucleus have atomic number Z_1, neutron number N_1, and charge 0 (Z_1 electrons). Let the theoretical lower-energy daughter state have atomic number Z_2 = Z_1 - 1, neutron number N_2 = N_1 + 1, and charge -1 (Z_2 + 1 electrons).
Finally, assume the transition energy between those two states is lower than 1022 keV — say, 612 keV — and that the binding energy of the relevant electron in the daughter atom is 1 keV. Why couldn't this nucleus go from the first (Z_1, N_1) state to the (Z_2, N_2) state, use 611 keV to emit a positron (511 keV as the positron's rest-mass energy and 100 keV shared as kinetic energy between the positron and the neutrino), and then use 1 keV to eject one of the valence electrons from the daughter atom, for a total of 612 keV of transition energy?

nuclear-physics · atomic-physics · radiation

asked Nov 4, 2022, edited Nov 5, 2022, by Aubert

Comment (Jon Custer, Nov 4, 2022): Re "I can't understand how this orbital electron can influence the decay properties of its nucleus" — there are a variety of reasons that orbiting electrons affect decay properties. One, of course, is electron capture (a bare Be-7 ion cannot decay by EC with no electrons to capture). There are also low-energy beta decays that can only occur in a highly stripped ion, because the low-energy electron can't escape and has no available energy level to occupy in a neutral atom. Nature is weird...

1 Answer (score 5):

The additional energy requirement is because the daughter atom has an extra electron and therefore extra mass-energy. The masses we look up in tables of nuclides are the masses of neutral atoms. But when the parent decays to a daughter via positron emission, the daughter ends up with an "extra" electron, which means it has 511 keV more mass-energy than it would if it were neutral.
So for this decay to occur, the mass of the neutral parent has to exceed the mass of the neutral daughter by (roughly) 1.022 MeV: 0.511 MeV for the ejected positron and 0.511 MeV to account for the extra mass of the daughter (before the extra electron is ejected). By similar logic, in "regular" beta decay the threshold mass-energy difference is pretty negligible: while the decay has to create 0.511 MeV of mass-energy in the form of an electron, the daughter is missing an electron and is therefore 0.511 MeV lighter than its neutral mass. The two effects cancel out, so beta decay can happen as long as the neutral daughter mass is less than the neutral parent mass.

answered Nov 4, 2022, edited Nov 4, 2022, by Michael Seifert

Comment (JEB, Nov 5, 2022): Hmmm, I have a PhD in nuclear physics and have never considered this. Tell Caltech I want my $600,000 back.
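The neutral-atom bookkeeping in this answer can be sketched numerically. A minimal Python check of the threshold, using approximate neutral atomic masses for the well-known Na-22 → Ne-22 positron decay (the numeric mass values below are quoted for illustration and should be taken from a current nuclide table for real work):

```python
M_E_C2_MEV = 0.510999   # electron rest-mass energy, MeV
U_TO_MEV = 931.494      # atomic mass unit, MeV

def beta_plus_q_value(m_parent_u, m_daughter_u):
    """Q-value of beta-plus decay computed from NEUTRAL atomic masses (in u).

    The neutral-atom mass difference must pay for both the positron's rest
    mass and the daughter's surplus electron, hence the 2*m_e*c^2 term.
    """
    return (m_parent_u - m_daughter_u) * U_TO_MEV - 2.0 * M_E_C2_MEV

# Approximate neutral atomic masses (u) for Na-22 and Ne-22:
q = beta_plus_q_value(21.9944364, 21.9913851)
print(f"Q(beta+) = {q:.3f} MeV, decay allowed = {q > 0}")
```

For ordinary beta-minus decay the same function would omit the `2*M_E_C2_MEV` term entirely, exactly as the answer's cancellation argument says.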
701
https://www.wyzant.com/resources/answers/850446/compute-the-root-mean-square-speed
Compute the root-mean-square speed | Wyzant Ask An Expert

Chemistry — Julia P. asked, 05/22/21: Compute the root-mean-square speed of Ar molecules in a sample of argon gas at a temperature of 112°C.

1 Expert Answer — Sidney P., answered 05/24/21:

The equation is v_rms = √(3RT/M), where R is the gas constant 8.314 J/(mol·K), T is the Kelvin temperature, and M is the molar mass in kg/mol. For argon, M = (39.95 g/mol)/1000 = 0.03995 kg/mol. The rms speed = [3 × 8.314 × (112 + 273 K) / 0.03995]^(1/2) = √240,400 ≈ 490. m/s.
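The tutor's formula is a one-liner in code; a quick Python sketch of the same calculation:

```python
from math import sqrt

R = 8.314  # gas constant, J/(mol*K)

def v_rms(temp_c, molar_mass_g_mol):
    """Root-mean-square speed (m/s) from a Celsius temperature and molar mass in g/mol."""
    t_kelvin = temp_c + 273.15
    m_kg_per_mol = molar_mass_g_mol / 1000.0
    return sqrt(3.0 * R * t_kelvin / m_kg_per_mol)

print(f"{v_rms(112, 39.95):.0f} m/s")  # argon at 112 C
```

Using 273.15 rather than the tutor's rounded 273 shifts the answer by about 0.1 m/s, well inside the three significant figures quoted.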
702
https://www.quora.com/What-is-the-difference-between-x-y-and-x-y-3
What is the difference between 'x^y' and '(x)^y'? — Quora

Yogesh Shahi (2y ago):

The difference between 'x^y' and '(x)^y' lies in the way the exponentiation operation is applied. In mathematics, the caret symbol (^) commonly denotes exponentiation, where the base is raised to the power of the exponent; the interpretation can vary with context and with surrounding parentheses.

'x^y': here 'x' is the base and 'y' is the exponent, i.e. 'x' raised to the power 'y'. This is the standard notation for exponentiation. For example, if x = 2 and y = 3, then x^y = 2^3 = 8.

'(x)^y': here the parentheses explicitly group the base 'x', and the expression inside the parentheses is raised to the power 'y'. The purpose of the parentheses is to clarify the order of operations.
For example, if x = 2 and y = 3, then (x)^y = (2)^3, which also equals 8. When 'x' is a single variable or a simple number, the parentheses in '(x)^y' make no difference; they become crucial with more complex expressions, to ensure the correct order of operations. To summarize, 'x^y' and '(x)^y' represent the same operation of raising a base 'x' to an exponent 'y'; the parentheses in '(x)^y' clarify grouping when larger expressions are involved. If it is helpful, please follow me for more information.

Елизавета Воронова (2y ago): If x and y are just two numbers (or single variables), there is absolutely no difference, just as (a)(b) = ab. If x or y is an expression rather than a single number, the difference is significant because of the order of operations. The standard order is: power-related operations (logarithms, roots, powers), then multiplication/division, then addition/subtraction. So, for example, a+b^c+d is different from (a+b)^(c+d).

Philip Lloyd (2y ago), answering the related question "What is the difference between e^(x+y) and e^(x-y)?":
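The order-of-operations point made above is easy to verify directly; a quick sketch in Python, where `**` plays the role of the caret:

```python
a, b, c, d = 1, 2, 3, 4

# Without grouping, the power binds tighter than addition:
no_parens = a + b**c + d          # a + (b^c) + d = 1 + 8 + 4 = 13
with_parens = (a + b) ** (c + d)  # (a+b)^(c+d) = 3^7 = 2187

print(no_parens, with_parens)
```

For a bare variable the parentheses are redundant, exactly as both answers say: `(a)**d == a**d` for any values.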
I will need a 3rd variable, so I will draw the 3D graphs. [Graphs not reproduced; the axes were rotated to compare the two surfaces.]

Devvrat Hans (2y ago), on the related question "If x^y = y^x, what are x and y?":

To find values of x and y with x^y = y^x, we can solve the equation algebraically.

Case 1: x and y are different. Take the natural logarithm of both sides: ln(x^y) = ln(y^x). Using the property ln(a^b) = b·ln(a), this simplifies to y·ln(x) = x·ln(y), i.e. ln(x)/x = ln(y)/y = k (say). Looking at this closely, x and y are two inputs at which the function f(t) = ln(t)/t takes the same value; graphically, they are the horizontal coordinates where the line at height k intersects the graph of f. In the example graphed, k = 0.3: the line cuts the graph of ln(t)/t at the two points (1.631, 0.300) and (5.938, 0.300), so x ≈ 1.631 and y ≈ 5.938 (or vice versa). By varying k we get infinitely many pairs (x, y) satisfying x^y = y^x.

Case 2: x and y are the same. If x = y, then x^y = y^x is trivially true.

Happy learning and exploring! 😃

Philip Lloyd (2y ago), on "What is the difference between x^(n+1) and (x)^n?": When you ask for the "difference" between a and b, this means a − b.
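The two-branch picture in the x^y = y^x discussion above can be reproduced numerically: ln(t)/t rises on (1, e) and falls on (e, ∞), so a simple bisection on each branch recovers the two intersection points for k = 0.3.

```python
from math import log, exp

def solve_ln_over_t(k, lo, hi, iters=80):
    """Bisection for ln(t)/t == k on [lo, hi]; assumes a sign change on the bracket."""
    f = lambda t: log(t) / t - k
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid      # root is in the lower half
        else:
            lo = mid      # root is in the upper half
    return (lo + hi) / 2.0

k = 0.3
x = solve_ln_over_t(k, 1.000001, exp(1))  # rising branch, below t = e
y = solve_ln_over_t(k, exp(1), 50.0)      # falling branch, above t = e
print(round(x, 3), round(y, 3))           # the answer's 1.631 and 5.938
print(abs(x**y - y**x) < 1e-6)            # the pair really satisfies x^y = y^x
```

The bracket endpoints (1.000001 and 50.0) are arbitrary choices that happen to straddle the roots for k = 0.3; any 0 < k < 1/e gives exactly one root on each branch.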
Aaron Briseno (4y ago), on "What is the difference between y = x and f(x) = x? Are they exactly the same thing, usable interchangeably, or is there an actual difference?":

Same thing really, but the advantage of using f(x) = x is that you can then have MORE functions without confusing the reader. Imagine I ask you to graph y = x, y = x² − 4, and y = 2^x, and then ask you to find the value of y when x = 4: you'd have three different y values and wouldn't actually know whether I meant you to find them all. But if I ask you to graph f(x) = x, g(x) = x² − 4, and h(x) = 2^x, and then ask for f(4), you know explicitly which function to use. The f(x) notation gives your graph a name, "f", so you can say things like "go to the f graph and follow it along" — whereas "go to the graph of y" forces the question "which y?". Also, technically f(x) is a function, which passes the vertical line test, while something like y = ±√x is not a function but a system of functions.

Max Gretinski (Aug 11), on the related question "Why is f(x) = a^x when f(xy) = f(x)f(y)?":

The property f(xy) = f(x)f(y) is called being multiplicative, and f(x) = a^x is not multiplicative. Here's a simple counterexample: let a = 2 (the base), x = 5, y = 1. Then xy = 5·1 = 5, so f(xy) = 2^5 = 32 and f(x) = 2^5 = 32, while f(y) = 2^1 = 2. Since 32 ≠ 32·2 = 64, this function is not multiplicative. The property that is true for f(x) = a^x is f(x + y) = f(x)f(y), which is equivalent to the product rule for exponentials. What sorts of functions are multiplicative? f(x) = 1 (the constant function 1),
f(x) = 0 (the constant function 0), and f(x) = x^n for nonzero real n. The last case includes fractional n: for n = 1/2, f(x) = √x, and for any non-negative reals x and y we have √(xy) = √x·√y — the power-product rule for radicals. There are also quite a few multiplicative functions in number theory, such as d(n), the number of positive divisors of a positive integer n.

Andy Baker (Aug 11), on "Why is f(x) = a^x when f(xy) = f(x)f(y)?":

I think what you are asking is a question of the following form: suppose f: R → R is a non-constant continuous function that satisfies f(xy) = f(x)f(y) for all x, y ∈ R; then f has the form f(x) = a^x for some a > 0. It is easy to see that f(1) ≠ 0, so let's assume f(1) = a. Then for any x, f(x) = f(x·1) = f(x)f(1) = a·f(x), and in particular a = a², giving a = 1. I suspect that you have mistyped/misunderstood something and it should read f(x + y) = f(x)f(y). In that case f(0) = 1, and f does indeed have that form if we assume continuity and non-constancy; in fact a = f(1).
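The distinction drawn in these two answers — exponentials satisfy the additive rule f(x+y) = f(x)f(y), while power functions are the ones that are genuinely multiplicative — is easy to spot-check numerically. A small Python sketch:

```python
# f(x) = 2^x obeys the additive-to-multiplicative rule, not multiplicativity:
f = lambda x: 2.0 ** x
assert f(3 + 4) == f(3) * f(4)   # 2^7 == 2^3 * 2^4 (product rule for exponentials)
assert f(5 * 1) != f(5) * f(1)   # 32 != 64: Max's counterexample

# g(x) = x^2 is multiplicative (a power function, n = 2):
g = lambda x: x ** 2
assert g(3 * 4) == g(3) * g(4)   # 144 == 9 * 16
print("all checks pass")
```

All of these are exact in floating point because only small powers of two and small integers are involved.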
Conor Sheehan (8y ago), on "What numbers x, y satisfy both x² + x = y⁴ + y³ + y² + y and x⁴ + (x+1)⁴ = y² + (y+1)²?": By graphing these two equations we can see fairly easily that there are 4 solutions: (x, y) = (0, 0), (−1, 0), (−1, −1), and (0, −1).

Nikos Mantzakouras (6y ago), on "In x + y = 3 and x^y + y^x = 11, what are x and y?": This system has no real roots. Plot the function f(x) = (3 − x)^x + x^(3 − x) − 11: its maximum is at x = 3/2, where f(x_max) = −7.32577 < 0. Only complex roots.

Roach Ramos (6y ago), on "What is (x+y)(x+y)?": Instead of just giving the answer with no explanation, I'll walk you through it. Use the word FOIL — F for First, O for Outer, I for Inner, L for Last — and multiply those pairs of terms in turn. Marking each term to reduce confusion: (x₁ + y₁)(x₂ + y₂) = (x₁x₂) + (x₁y₂) + (y₁x₂) + (y₁y₂), which ends up as x² + xy + xy + y². Combine like terms: x² + 2xy + y². Hope this helps.

Alex Jones (3y ago), on "Basically, what is the difference between (x,y) and {x,y}?": The difference is order.
In the first case, (x, y) is an ordered pair: x comes first and y second, and switching the order gives a different thing, i.e. (x, y) ≠ (y, x). In the second case, {x, y} is a set, which is unordered: it just tells us that x and y are in there, but nothing about which comes first or second. Switching the order gives the same thing, i.e. {x, y} = {y, x}.

Andy Heilveil (6y ago), on "Why is x == (x = y) not the same as (x = y) == x?":

In C/C++ you run into the esoteric topic of 'sequence points'. When an lvalue (something with an address) is part of an assignment operation and is also referenced again in the same expression, the result is undefined in C or C++. When a variable name appears where an expression is allowed, it is an expression evaluated by accessing the memory associated with that variable, and C/C++ has chosen not to define the order in which the operands of a binary operator are evaluated. Sequence points are the places in the code where such ordering can be determined.
In this case the compiler could legitimately load the value of x into a register for the 'x' term, then load y into a register, then compare the two registers, and only some time later store the value in that second register to the memory allocated for x. Giving the compiler the freedom to reorder expression evaluation according to what is best for the actual processor was deemed more valuable than having defined behavior in such situations. You can coerce the order by assigning the expression to be evaluated earlier to a named variable, then using that named variable in the comparison. If both sides of the binary operator modify x, there is no rule other than strict ordering by textual appearance that could order them, and such a modification could be hidden behind many layers of function calls — not worth even attempting to analyze.
703
https://physics.nist.gov/PhysRefData/XrayMassCoef/ComTab/pyrex.html
NIST: X-Ray Mass Attenuation Coefficients - Glass, Borosilicate ("Pyrex")

Table 4: Glass, Borosilicate ("Pyrex")

Edge   Energy (MeV)   μ/ρ (cm²/g)   μ_en/ρ (cm²/g)
       1.00000E-03    3.164E+03     3.155E+03
       1.03542E-03    2.887E+03     2.879E+03
       1.07210E-03    2.634E+03     2.627E+03
11 K   1.07210E-03    2.800E+03     2.790E+03
       1.50000E-03    1.152E+03     1.148E+03
       1.55960E-03    1.037E+03     1.033E+03
13 K   1.55960E-03    1.079E+03     1.073E+03
       1.69350E-03    8.629E+02     8.583E+02
       1.83890E-03    6.883E+02     6.845E+02
14 K   1.83890E-03    1.776E+03     1.723E+03
       2.00000E-03    1.500E+03     1.457E+03
       3.00000E-03    5.123E+02     5.011E+02
       3.60740E-03    3.098E+02     3.036E+02
19 K   3.60740E-03    3.133E+02     3.067E+02
       4.00000E-03    2.355E+02     2.307E+02
       5.00000E-03    1.260E+02     1.235E+02
       6.00000E-03    7.500E+01     7.337E+01
       8.00000E-03    3.269E+01     3.178E+01
       1.00000E-02    1.705E+01     1.642E+01
       1.50000E-02    5.217E+00     4.828E+00
       2.00000E-02    2.297E+00     1.995E+00
       3.00000E-02    7.987E-01     5.684E-01
       4.00000E-02    4.341E-01     2.361E-01
       5.00000E-02    3.022E-01     1.235E-01
       6.00000E-02    2.417E-01     7.648E-02
       8.00000E-02    1.890E-01     4.230E-02
       1.00000E-01    1.657E-01     3.209E-02
       1.50000E-01    1.389E-01     2.727E-02
       2.00000E-01    1.246E-01     2.757E-02
       3.00000E-01    1.069E-01     2.885E-02
       4.00000E-01    9.540E-02     2.946E-02
       5.00000E-01    8.696E-02     2.957E-02
       6.00000E-01    8.035E-02     2.941E-02
       8.00000E-01    7.052E-02     2.868E-02
       1.00000E+00    6.337E-02     2.774E-02
       1.25000E+00    5.667E-02     2.650E-02
       1.50000E+00    5.160E-02     2.533E-02
       2.00000E+00    4.447E-02     2.337E-02
       3.00000E+00    3.611E-02     2.069E-02
       4.00000E+00    3.140E-02     1.904E-02
       5.00000E+00    2.838E-02     1.795E-02
       6.00000E+00    2.632E-02     1.721E-02
       8.00000E+00    2.373E-02     1.629E-02
       1.00000E+01    2.223E-02     1.579E-02
       1.50000E+01    2.045E-02     1.522E-02
       2.00000E+01    1.982E-02     1.503E-02

(Rows labelled "11 K", "13 K", "14 K" and "19 K" mark K-shell absorption edges of constituent elements with atomic numbers Z = 11, 13, 14 and 19; the coefficients are tabulated just below and just above each edge.)
704
https://personalpages.manchester.ac.uk/staff/david.d.apsley/lectures/hydraulics2/t3.pdf
Hydraulics 2 T3-1 David Apsley

TOPIC T3: DIMENSIONAL ANALYSIS    AUTUMN 2024

Objectives
(1) Be able to determine the dimensions of physical quantities in terms of fundamental dimensions.
(2) Understand the Principle of Dimensional Homogeneity and its use in checking equations and reducing physical problems.
(3) Be able to carry out a formal dimensional analysis using Buckingham's Pi Theorem.
(4) Understand the requirements of physical modelling and its limitations.

1. What is dimensional analysis?
2. Dimensions
   2.1 Dimensions and units
   2.2 Primary dimensions
   2.3 Dimensions of derived quantities
   2.4 Working out dimensions
   2.5 Alternative choices for primary dimensions
3. Formal procedure for dimensional analysis
   3.1 Dimensional homogeneity
   3.2 Buckingham's Pi theorem
   3.3 Applications
4. Physical modelling
   4.1 Method
   4.2 Incomplete similarity ("scale effects")
   4.3 Froude-number scaling
5. Non-dimensional groups in fluid mechanics

1. WHAT IS DIMENSIONAL ANALYSIS?

Dimensional analysis is a means of simplifying a physical problem by appealing to dimensional homogeneity to reduce the number of relevant variables. It is particularly useful for:
• checking equations;
• presenting and interpreting experimental data;
• attacking problems not amenable to a direct theoretical solution;
• establishing the relative importance of particular physical phenomena;
• physical modelling.

Example. The drag force, F, on a sphere is a function of approach-flow speed, U, sphere diameter, D, fluid density, ρ, and viscosity, μ.
However, instead of having to draw hundreds of graphs portraying its variation with all combinations of these parameters, dimensional analysis will tell us that the problem can be reduced to a dimensionless relationship between just two independent variables:
    c_D = f(Re)
where c_D is the drag coefficient:
    c_D ≡ F / (½ρU²A),    A = πD²/4
and Re is the Reynolds number:
    Re ≡ ρUD/μ
In this instance dimensional analysis has reduced the number of relevant variables from 5 to 2 and the experimental data to a single graph of c_D against Re.

2. DIMENSIONS

2.1 Dimensions and Units

A dimension is the type of physical quantity. A unit is a means of assigning a numerical value to that quantity. SI units are preferred in scientific work.

2.2 Primary Dimensions

In fluid mechanics the primary or fundamental dimensions, together with their SI units, are:
    mass         M (kilogram, kg)
    length       L (metre, m)
    time         T (second, s)
    temperature  Θ (kelvin, K)
In other areas of physics additional dimensions may be necessary. The complete set specified by the SI system consists of the above plus:
    electric current     I (ampere, A)
    luminous intensity   C (candela, cd)
    amount of substance  n (mole, mol)

2.3 Dimensions of Derived Quantities

    Quantity (common symbol)         Dimensions
    Geometry:
      Area (A)                       L²
      Volume (V)                     L³
      Second moment of area (I)      L⁴
    Kinematics:
      Velocity (U)                   LT⁻¹
      Acceleration (a)               LT⁻²
      Angle (θ)                      1 (i.e. dimensionless)
      Angular velocity (ω)           T⁻¹
      Quantity of flow (Q)           L³T⁻¹
      Mass flow rate (ṁ)             MT⁻¹
    Dynamics:
      Force (F)                      MLT⁻²
      Moment, torque (T)             ML²T⁻²
      Energy, work, heat (E, W)      ML²T⁻²
      Power (P)                      ML²T⁻³
      Pressure, stress (p, τ)        ML⁻¹T⁻²
    Fluid properties:
      Density (ρ)                    ML⁻³
      Viscosity (μ)                  ML⁻¹T⁻¹
      Kinematic viscosity (ν)        L²T⁻¹
      Surface tension (σ)            MT⁻²
      Thermal conductivity (k)       MLT⁻³Θ⁻¹
      Specific heat (c_p, c_v)       L²T⁻²Θ⁻¹
      Bulk modulus (K)               ML⁻¹T⁻²

2.4 Working Out Dimensions

In the following, [ ] means "dimensions of".

Example. Use the definition τ = μ du/dy to determine the dimensions of viscosity.
From the definition,
    μ = τ / (du/dy) = (force/area) / (velocity/length)
Hence,
    [μ] = (MLT⁻²/L²) / (LT⁻¹/L) = ML⁻¹T⁻¹
Alternatively, dimensions may be deduced indirectly from any known formula involving that quantity.

Example. Since Re ≡ ρUL/μ is known to be dimensionless, the dimensions of μ must be the same as those of ρUL; i.e.
    [μ] = [ρ][U][L] = (ML⁻³)(LT⁻¹)(L) = ML⁻¹T⁻¹

2.5 Alternative Choices For Primary Dimensions

The choice of primary dimensions is not unique. It is not uncommon, and it may sometimes be more convenient, to choose force F as a primary dimension rather than mass, and have an {FLT} rather than an {MLT} system.

Example. Find the dimensions of viscosity μ in the {FLT} rather than the {MLT} system.
From its definition,
    μ = τ / (du/dy) = (force/area) / (velocity/length) = (F/L²) / (LT⁻¹/L) = FL⁻²T

3. FORMAL PROCEDURE FOR DIMENSIONAL ANALYSIS

3.1 Dimensional Homogeneity

The Principle of Dimensional Homogeneity: all additive terms in a physical equation must have the same dimensions.

Examples:
    s = ut + ½at²            all terms have the dimensions of length (L)
    p/ρg + V²/2g + z = H     all terms have the dimensions of length (L)

Dimensional homogeneity is a useful tool for checking formulae. For this reason it is useful when analysing a physical problem to retain algebraic symbols for as long as possible, only substituting numbers right at the end. However, dimensional analysis cannot determine numerical factors; e.g. it cannot distinguish between ½at² and at² in the first formula above.

Dimensional homogeneity is the basis of the formal dimensional analysis that follows.

3.2 Buckingham's Pi Theorem

Experienced practitioners can do dimensional analysis by inspection. However, the formal tool which they are unconsciously using is Buckingham's Pi Theorem¹:

Buckingham's Pi Theorem
(1) If a problem involves
        n relevant variables and
        m independent dimensions
    then it can be reduced to a relationship between n − m non-dimensional parameters Π₁, …, Π_(n−m).
(2) To construct these non-dimensional Π groups:
    (i) Choose m dimensionally-distinct scaling variables (aka repeating variables).
    (ii) For each of the n − m remaining variables construct a non-dimensional Π of the form
            Π = (variable)(scale₁)^a (scale₂)^b (scale₃)^c ⋯
        where a, b, c, ... are chosen so as to make each Π non-dimensional.

Note. In order to ensure dimensional independence in {MLT} systems it is common, but not obligatory, to choose the scaling variables as: a purely geometric quantity (e.g. a length), a kinematic (time-, but not mass-containing) quantity (e.g. frequency, velocity or acceleration) and a dynamic (mass-, or force-containing) quantity (e.g. density).

¹ Buckingham, E., 1914. The use of Π comes from its use as the mathematical symbol for a product.

3.3 Applications

Example. Obtain an expression in non-dimensional form for the pressure gradient in a horizontal pipe of circular cross-section. Show how this relates to the expression for frictional head loss.

Step 1. Identify the relevant variables.
    dp/dx, ρ, V, D, k_s, μ

Step 2. Write down dimensions.
    dp/dx   (force/area)/length = (MLT⁻² × L⁻²)/L = ML⁻²T⁻²
    ρ       ML⁻³
    V       LT⁻¹
    D       L
    k_s     L
    μ       ML⁻¹T⁻¹

Step 3. Establish the number of independent dimensions and non-dimensional groups.
    Number of relevant variables: n = 6
    Number of independent dimensions: m = 3 (M, L and T)
    Number of non-dimensional groups (Πs): n − m = 3

Step 4. Choose m (= 3) dimensionally-independent scaling variables.
    e.g. geometric (D), kinematic/time-dependent (V), dynamic/mass-dependent (ρ).

Step 5. Create the Πs by non-dimensionalising the remaining variables: dp/dx, k_s and μ.
    Π₁ = (dp/dx) D^a V^b ρ^c
Considering the dimensions of both sides:
    M⁰L⁰T⁰ = (ML⁻²T⁻²)(L)^a (LT⁻¹)^b (ML⁻³)^c = M^(1+c) L^(−2+a+b−3c) T^(−2−b)
Equate powers of primary dimensions.
Since M only appears in [ρ] and T only appears in [V] it is easiest to deal with these first:
    M: 0 = 1 + c             so c = −1
    T: 0 = −2 − b            so b = −2
    L: 0 = −2 + a + b − 3c   so a = 2 − b + 3c = 1
Hence,
    Π₁ = (dp/dx) D¹ V⁻² ρ⁻¹ = D(dp/dx) / (ρV²)    (OK: the ratio of two pressures)

k_s can be non-dimensionalised by inspection, since it already has the same dimensions (L) as one of the scaling variables:
    Π₂ = k_s / D

Finally,
    Π₃ = μ D^a V^b ρ^c
Considering the dimensions of both sides:
    M⁰L⁰T⁰ = (ML⁻¹T⁻¹)(L)^a (LT⁻¹)^b (ML⁻³)^c = M^(1+c) L^(−1+a+b−3c) T^(−1−b)
Again, as M only appears in [ρ] and T only appears in [V], deal with these first:
    M: 0 = 1 + c             so c = −1
    T: 0 = −1 − b            so b = −1
    L: 0 = −1 + a + b − 3c   so a = 1 − b + 3c = −1
Hence,
    Π₃ = μ D⁻¹ V⁻¹ ρ⁻¹ = μ / (ρVD)    (OK: the reciprocal of the Reynolds number)

Step 6. Set out the non-dimensional relationship.
    Π₁ = f(Π₂, Π₃)
or
    D(dp/dx) / (ρV²) = f(k_s/D, μ/(ρVD))    (*)

Step 7. Rearrange (if required) for convenience. We may replace any Π by a power of that Π, or by a product with the other Πs, provided that we retain the same number of independent dimensionless groups. Here, we recognise Π₃ as the reciprocal of the Reynolds number, so it is more natural to use Π₃′ = (Π₃)⁻¹ = Re as the third non-dimensional group. We can also write the pressure gradient in terms of head loss: dp/dx = ρg(h_f/L). With these two modifications the non-dimensional relationship (*) then becomes
    g h_f D / (LV²) = f(k_s/D, Re)
or
    h_f = (L/D) × (V²/g) × f(k_s/D, Re)
Since numerical factors (here, ½) can be absorbed into the non-specified function, this can easily be identified with the Darcy-Weisbach equation
    h_f = λ (L/D) (V²/2g)
where λ is a function of relative roughness k_s/D and Reynolds number Re, a function given (Topic 2) by the Colebrook-White equation.

Example. The drag force F on a body in a fluid flow is a function of the body size (expressed via a characteristic length, L) and the fluid velocity, V, density, ρ, and viscosity, μ.
Perform a dimensional analysis to reduce this to a single functional dependence
    c_D = f(Re)
where c_D is a drag coefficient and Re is the Reynolds number. What additional non-dimensional groups might appear in practice?

List variables and their dimensions:
    F   MLT⁻²
    L   L
    V   LT⁻¹
    ρ   ML⁻³
    μ   ML⁻¹T⁻¹
Number of variables: n = 5. Number of independent dimensions: m = 3 (M, L and T). Number of non-dimensional groups: n − m = 2.

Choose m (= 3) independent scaling variables, not including the subject, F: geometric (L), kinematic/time-dependent (V), dynamic/mass-dependent (ρ).

Form non-dimensional groups by non-dimensionalising the other variables (F and μ) by balancing powers of M, L and T.
    Π₁ = F L^a V^b ρ^c
so  M⁰L⁰T⁰ = (MLT⁻²)(L)^a (LT⁻¹)^b (ML⁻³)^c = M^(1+c) L^(1+a+b−3c) T^(−2−b)
    M: 0 = 1 + c             so c = −1
    T: 0 = −2 − b            so b = −2
    L: 0 = 1 + a + b − 3c    so a = −1 + 3c − b = −2
giving
    Π₁ = F L⁻² V⁻² ρ⁻¹ = F / (ρV²L²)

    Π₂ = μ L^a V^b ρ^c
so  M⁰L⁰T⁰ = (ML⁻¹T⁻¹)(L)^a (LT⁻¹)^b (ML⁻³)^c = M^(1+c) L^(−1+a+b−3c) T^(−1−b)
    M: 0 = 1 + c             so c = −1
    T: 0 = −1 − b            so b = −1
    L: 0 = −1 + a + b − 3c   so a = 1 + 3c − b = −1
giving
    Π₂ = μ L⁻¹ V⁻¹ ρ⁻¹ = μ / (ρVL)

Hence, dimensional analysis gives Π₁ = f(Π₂), i.e.
    F / (ρV²L²) = f(μ / (ρVL))
Recognition of the drag coefficient and Reynolds number suggests that we replace Π₁ and Π₂ respectively by
    Π₁′ = constant × Π₁ = F / (½ρV²A)
    Π₂′ = (Π₂)⁻¹ = ρVL/μ
where the representative area A ∝ L². Hence,
    c_D = f(Re),  where  c_D = F / (½ρV²A),  Re = ρVL/μ
Other potential groups include relative roughness (k_s/L), blockage ratio and, in high-speed flow, Mach number (V/c).

Notes.
(1) Dimensional analysis simply says that there is a relationship; it doesn't say what the relationship is. For the specific relationship one must appeal to other theory, simulation, or experimental data.
(2) If there is only one Π group … then it can't be a function of anything else … so it must be a constant.
(3) If Π₁, Π₂, Π₃, … are suitable non-dimensional groups then we are at liberty to replace some or all of them by any powers or products with the other Πs, provided that we retain the same number of independent non-dimensional groups; e.g. Π₁⁻¹, Π₁/Π₃², etc.
(4) It is very common in fluid mechanics to find (often after the rearrangement mentioned in (3)) certain combinations which can be recognised as familiar key parameters; e.g. Reynolds number (Re = ρUL/μ) or Froude number (Fr = U/√(gL)).
(5) Often the hardest part of the dimensional analysis is determining which are the relevant variables. For example, surface tension is always present in free-surface flows, but can be neglected if the Weber number We = ρU²L/σ is large. Similarly, all fluids are compressible, but compressibility effects on the flow can be ignored if the Mach number (Ma = U/c) is small; i.e. the velocity is much less than the speed of sound.
(6) Although certain primary dimensions (e.g. M, L, T) appear when the variables are listed, they may not do so independently; in this case there will be fewer independent dimensions.

As an example of (6), the following example illustrates a case where M and T always appear in the combination MT⁻², giving only one independent dimension.

Example. The tip deflection, δ, of a cantilever beam is a function of tip load, W, beam length, l, second moment of area, I, and Young's modulus, E. Perform a dimensional analysis of this problem.

Step 1. Identify the relevant variables.
    δ, W, l, I, E

Step 2. Write down dimensions.
    δ   L
    W   MLT⁻²
    l   L
    I   L⁴
    E   ML⁻¹T⁻²

Step 3. Establish the number of independent dimensions and non-dimensional groups.
    Number of relevant variables: n = 5
    Number of independent dimensions: m = 2 (L and MT⁻², as noted)
    Number of non-dimensional groups (Πs): n − m = 3

Step 4. Choose m (= 2) dimensionally-independent scaling variables.
    e.g. geometric (l) and dynamic/mass-containing (E).

Step 5.
Create the Πs by non-dimensionalising the remaining variables: δ, I and W. These give (after some algebra, omitted here):
    Π₁ = δ/l
    Π₂ = I/l⁴
    Π₃ = W/(El²)

Step 6. Set out the non-dimensional relationship.
    Π₁ = f(Π₂, Π₃)
or
    δ/l = f(I/l⁴, W/(El²))

Note 1. This is as far as dimensional analysis will get us. Detailed theory shows that, for small elastic deflections,
    δ = (1/3) Wl³/(EI)
or
    δ/l = (1/3) (W/(El²)) × (I/l⁴)⁻¹

Note 2. Although three primary dimensions (M, L, T) appear here, they only do so in two independent groups (L and MT⁻²), so that the number of independent dimensions is m = 2. This would have been more obvious in the alternative {FLT} system, where the variables have the following dimensions:
    δ   L
    W   F
    l   L
    I   L⁴
    E   FL⁻²
Here, only F and L appear.

4. PHYSICAL MODELLING

4.1 Method

If a dimensional analysis indicates that a problem is described by a functional relationship between non-dimensional parameters Π₁, Π₂, Π₃, … then complete similarity requires that these parameters be the same at both full ("prototype") scale and model scale; i.e.
    (Π₁)_m = (Π₁)_p,  (Π₂)_m = (Π₂)_p,  etc.

Example. A prototype gate valve which will control the flow in a conduit conveying paraffin is to be studied in a model. List the significant variables on which the pressure drop across the valve would depend. Perform dimensional analysis to obtain the relevant non-dimensional groups. A 1/5-scale model is built to determine the pressure drop across the valve with water as the working fluid.
(a) For a particular opening, when the velocity of paraffin in the prototype is 3.0 m s⁻¹, what should be the velocity of water in the model for dynamic similarity?
(b) What is the ratio of the quantities of flow in prototype and model?
(c) Find the pressure drop in the prototype if it is 60 kPa in the model.
(The density and viscosity of paraffin are 800 kg m⁻³ and 0.002 kg m⁻¹ s⁻¹ respectively. Take the kinematic viscosity of water as 1.0×10⁻⁶ m² s⁻¹.)
The pressure drop Δp is expected to depend upon the gate opening h, the overall depth d, the velocity V, density ρ and viscosity μ.

List the relevant variables:
    Δp, h, d, V, ρ, μ
Write down dimensions:
    Δp  ML⁻¹T⁻²
    h   L
    d   L
    V   LT⁻¹
    ρ   ML⁻³
    μ   ML⁻¹T⁻¹
Number of relevant variables: n = 6
Number of independent dimensions: m = 3 (M, L and T)
Number of non-dimensional groups (Πs): n − m = 3

Choose m (= 3) scaling variables: geometric (d); kinematic/time-dependent (V); dynamic/mass-dependent (ρ).

Form dimensionless groups by non-dimensionalising the remaining variables: Δp, h and μ.

For Δp:
    Π₁ = Δp d^a V^b ρ^c
Considering the dimensions of both sides:
    M⁰L⁰T⁰ = (ML⁻¹T⁻²)(L)^a (LT⁻¹)^b (ML⁻³)^c = M^(1+c) L^(−1+a+b−3c) T^(−2−b)
Equate powers of primary dimensions:
    M: 0 = 1 + c             so c = −1
    T: 0 = −2 − b            so b = −2
    L: 0 = −1 + a + b − 3c   so a = 1 − b + 3c = 0
Hence,
    Π₁ = Δp V⁻² ρ⁻¹ = Δp/(ρV²)

h can be done by inspection, since it has the same dimension as the scale d:
    Π₂ = h/d

For μ:
    Π₃ = μ d^a V^b ρ^c
Considering the dimensions of both sides:
    M⁰L⁰T⁰ = (ML⁻¹T⁻¹)(L)^a (LT⁻¹)^b (ML⁻³)^c = M^(1+c) L^(−1+a+b−3c) T^(−1−b)
Equate powers of primary dimensions:
    M: 0 = 1 + c             so c = −1
    T: 0 = −1 − b            so b = −1
    L: 0 = −1 + a + b − 3c   so a = 1 − b + 3c = −1
Hence,
    Π₃ = μ d⁻¹ V⁻¹ ρ⁻¹ = μ/(ρVd)
Recognition of the Reynolds number suggests that we replace Π₃ by
    Π₃′ = (Π₃)⁻¹ = ρVd/μ
Hence, dimensional analysis yields Π₁ = f(Π₂, Π₃′), i.e.
    Δp/(ρV²) = f(h/d, ρVd/μ)

(a) Dynamic similarity requires that all non-dimensional groups be the same in model and prototype; i.e.
    Π₁: (Δp/(ρV²))_p = (Δp/(ρV²))_m
    Π₂: (h/d)_p = (h/d)_m  (automatic if similar in shape, i.e. "geometric similarity")
    Π₃′: (ρVd/μ)_p = (ρVd/μ)_m
From the last, we have a velocity ratio
    V_p/V_m = [(μ/ρ)_p / (μ/ρ)_m] × (d_m/d_p) = [(0.002/800) / (1.0×10⁻⁶)] × (1/5) = 0.5
Hence,
    V_m = V_p/0.5 = 3.0/0.5 = 6.0 m s⁻¹

(b) The ratio of the quantities of flow is
    Q_p/Q_m = (velocity × area)_p / (velocity × area)_m = (V_p/V_m)(d_p/d_m)² = 0.5 × 5² = 12.5

(c) Finally, for the pressure drop,
    (Δp/(ρV²))_p = (Δp/(ρV²))_m
so
    (Δp)_p/(Δp)_m = (ρ_p/ρ_m)(V_p/V_m)² = (800/1000) × 0.5² = 0.2
Hence,
    Δp_p = 0.2 × Δp_m = 0.2 × (60 kPa) = 12.0 kPa

4.2 Incomplete Similarity ("Scale Effects")

For a multi-parameter problem it is often not possible to achieve full similarity. In particular, it is rare to be able to achieve full Reynolds-number scaling when other dimensionless parameters are also involved. For hydraulic modelling of flows with a free surface the most important requirement is Froude-number scaling (Section 4.3).

It is common to distinguish three levels of similarity.
Geometric similarity: the ratios of all corresponding lengths in model and prototype are the same (i.e. they have the same shape).
Kinematic similarity: the ratios of all corresponding lengths and times (and hence the ratios of all corresponding velocities) in model and prototype are the same.
Dynamic similarity: the ratios of all forces in model and prototype are the same; e.g. Re = (inertial force)/(viscous force) is the same in both. ("Inertial force" means "mass × acceleration", i.e. the sum of all forces.)

Geometric similarity is almost always assumed. However, in some applications, notably river modelling, it is necessary to distort vertical scales to prevent undue influence of, for example, surface tension or bed roughness.

Achieving full similarity is particularly a problem with the Reynolds number Re = UL/ν.
• Using the same working fluid would require a velocity ratio inversely proportional to the length-scale ratio and hence impractically large velocities in the scale model.
• A velocity scale fixed by, for example, the Froude number (see Section 4.3) means that the only way to maintain the same Reynolds number is to adjust the kinematic viscosity (substantially).

In practice, Reynolds-number similarity is unimportant if the flows in both model and prototype are fully turbulent; then momentum transport by viscous stresses is much less than that by turbulent eddies, and so the precise value of the molecular viscosity μ is unimportant. In some cases this may mean deliberately triggering transition to turbulence in boundary layers (for example by the use of tripping wires or roughness strips).

Surface effects. Full geometric similarity requires that not only the main dimensions of objects but also the surface roughness and, for mobile beds, the sediment size be in proportion. This would put impossible requirements on surface finish or grain size. In practice, it is sufficient that the surface be aerodynamically rough: u_τ k_s/ν ≥ 5, where u_τ = √(τ_w/ρ) is the friction velocity and k_s a typical height of the surface irregularities. This imposes a minimum velocity in model tests.

Other fluid phenomena. When scaled down in size, fluid phenomena which were negligible at full scale may become important in laboratory models. A common example is surface tension.

4.3 Froude-Number Scaling

The most important parameter to preserve in hydraulic modelling of free-surface flows driven by gravity is the Froude number, Fr = U/√(gL). Preserving this parameter between model (m) and prototype (p) dictates the scaling of other variables in terms of the length-scale ratio.

Velocity: (Fr)_m = (Fr)_p, i.e.
    (U/√(gL))_m = (U/√(gL))_p   so   U_m/U_p = (L_m/L_p)^(1/2)
i.e. the velocity ratio is the square root of the length-scale ratio.
Quantity of flow: Q ~ velocity × area,  so  Q_m/Q_p = (L_m/L_p)^(5/2)

Force: F ~ pressure × area,  so  F_m/F_p = (L_m/L_p)³
This arises since the pressure ratio is equal to the length-scale ratio; this can be seen from either hydrostatics (pressure ∝ height) or from the dynamic pressure (proportional to (velocity)², which, from the Froude number, is proportional to length).

Time: t ~ length/velocity,  so  t_m/t_p = (L_m/L_p)^(1/2)

Hence the quantity of flow scales as the length-scale ratio to the 5/2 power, whilst the time-scale ratio is the square root of the length-scale ratio. For example, at 1:100 geometric scale, a full-scale tidal period of 12.4 hours becomes 1.24 hours.

Example. The force exerted on a bridge pier in a river is to be tested in a 1:10 scale model using water as the working fluid. In the prototype the depth of water is 2.0 m, the velocity of flow is 1.5 m s⁻¹ and the width of the river is 20 m.
(a) List the variables affecting the force on the pier and perform dimensional analysis. Can you satisfy all the conditions for complete similarity? What is the most important parameter to choose for dynamic similarity?
(b) What are the depth, velocity and quantity of flow in the model?
(c) If the hydrodynamic force on the model bridge pier is 5 N, what would it be on the prototype?

(a) A reasonable list, acknowledging forces due to dynamic and hydrostatic pressure and viscous drag, yields
    F = f(D, h, W, V, g, ρ, μ)
where F is force, D is the diameter of the pier, h is the depth of water, W is the width of the river, V is the velocity of flow, ρ is the density of water and μ is the viscosity of water. A full dimensional analysis (together with familiarity with the typical fluid groups c_D and Re, used for rearrangement) yields
    F / (½ρV²hD) = f(V/√(gh), ρVD/μ, D/h, W/h)
i.e.
    c_D = f(Fr, Re, length ratios)
If we assume geometric similarity then this list can be reduced to c_D = f(Fr, Re), since the length ratios D/h and W/h are automatically preserved between model and prototype.
To maintain complete similarity between model and prototype means maintaining the same Froude and Reynolds numbers, which is impossible in the same fluid since it would require simultaneously V ∝ √L and V ∝ 1/L, where L is a length scale. In practice, it is more important to preserve the Froude number since, in fully turbulent flow, the dynamic viscosity μ, and hence the Reynolds number, are not important.

(b) The geometric scale is given, so
    h_m/h_p = W_m/W_p = 1/10
whence
    h_m = (1/10) × 2 m = 0.2 m
    W_m = (1/10) × 20 m = 2 m
Froude scaling implies
    V_m = V_p × √(h_m/h_p) = 1.5 × √(1/10) = 0.4743 m s⁻¹
and so the quantity of flow through the cross-section of the river is
    Q_m = velocity × area = V_m × W_m h_m = 0.4743 × 2 × 0.2 = 0.1897 m³ s⁻¹

Answer: 0.2 m; 0.474 m s⁻¹; 0.190 m³ s⁻¹

(c) Under Froude scaling, and considering the drag coefficient,
    force ratio = (velocity ratio)² × (area ratio) = (length ratio)³
Hence,
    F_p/F_m = (L_p/L_m)³ = 1000
whence
    F_p = 1000 × 5 = 5000 N

Answer: 5 kN

5. NON-DIMENSIONAL GROUPS IN FLUID MECHANICS

Dynamic similarity requires that the ratio of all forces be the same. The ratio of different forces produces many of the key non-dimensional parameters in fluid mechanics. (Note that "inertial force" means "mass × acceleration", i.e. the total force. Each non-dimensional group then involves the ratio of a particular force to the total force. This reflects the fraction of the total that this particular force is responsible for, so you can see whether its effect is likely to be small or large.)
Reynolds number:  Re = ρUL/μ = inertial force / viscous force              (viscous flows)
Froude number:    Fr = U/√(gL) = (inertial force / gravitational force)^(1/2)  (free-surface flows)
Weber number:     We = ρU²L/σ = inertial force / surface tension           (near-surface flows)
Rossby number:    Ro = U/(ΩL) = inertial force / Coriolis force            (rotating flows)
Mach number:      Ma = U/c = (inertial force / compressibility force)^(1/2)    (compressible flows)

These groups occur regularly when dimensional analysis is applied to fluid-dynamical problems. They can be derived by considering forces on a small volume of fluid. They can also be derived by non-dimensionalising the differential equations of fluid flow (see White, 2021), or the online notes for the 4th-year Computational Hydraulics unit.
705
https://www.ahajournals.org/doi/10.1161/STROKEAHA.117.020544
Beneficial Role of Neutrophils Through Function of Lactoferrin After Intracerebral Hemorrhage | Stroke
Research Article · Originally Published 10 April 2018 · Free Access

Beneficial Role of Neutrophils Through Function of Lactoferrin After Intracerebral Hemorrhage

Xiurong Zhao, MD; Shun-Ming Ting, MS; Guanghua Sun, MD; Meaghan Roy-O'Reilly, MS; Alexis S. Mobley, MS; Jesus Bautista Garrido, MA; Xueping Zheng, MD; Lidiya Obertas, MS; Joo Eun Jung, PhD; Marian Kruzel, PhD; and Jaroslaw Aronowski, MD, PhD

Stroke, Volume 49, Number 5

Abstract

Background and Purpose— Intracerebral hemorrhage (ICH) is a devastating disease with a 30-day mortality of ~50%. There are no effective therapies for ICH. ICH results in brain damage in 2 major ways: first through the mechanical forces of extravasated blood, and then through the toxicity of intraparenchymal blood components, including hemoglobin/iron. LTF (lactoferrin) is an iron-binding protein, uniquely abundant in polymorphonuclear neutrophils (PMNs). After ICH, circulating blood PMNs enter the ICH-afflicted brain, where they release LTF. By virtue of sequestering iron, LTF may contribute to hematoma detoxification.

Methods— ICH in mice was produced using intrastriatal autologous blood injection. PMNs were depleted with intraperitoneal administration of anti-Ly-6G antibody. Treatment of mouse brain cell cultures with lysed RBC or iron was used as an in vitro model of ICH.

Results— LTF mRNA was undetectable in the mouse brain, even after ICH. Unlike mRNA, LTF protein increased in ICH-affected hemispheres by 6 hours, peaked at 24 to 72 hours, and remained elevated for at least a week after ICH.
At the single-cell level, LTF was detected in PMNs in the hematoma-affected brain at all time points after ICH. We also found elevated LTF in the plasma after ICH, with a temporal profile similar to the LTF changes in the brain. Importantly, mrLTF (recombinant mouse LTF) reduced the cytotoxicity of lysed RBC and FeCl3 to brain cells in culture. Ultimately, in an ICH model, systemic administration of mrLTF (at 3, 24, and 48 hours after ICH) reduced brain edema and ameliorated neurological deficits caused by ICH. mrLTF retained its benefit in reducing behavioral deficit even with a 24-hour treatment delay. Interestingly, systemic depletion of PMNs at 24 hours after ICH worsened neurological deficits, suggesting that PMN infiltration into the brain at later stages after ICH could be a beneficial response.

Conclusions— LTF delivered to the ICH-affected brain by infiltrating PMNs may assist in hematoma detoxification and represent a powerful potential target for the treatment of ICH.

Introduction

Intracerebral hemorrhage (ICH) is a devastating form of stroke with 30% to 67% mortality and poor prognosis, for which no effective therapy is available.1,2 Rapid deposition of blood within the brain parenchyma causes increased intracranial pressure, resulting in primary mechanical brain damage. Subsequently, various toxic blood components and hemolysis products within the hematoma trigger irreversible injury to the brain.
Iron toxicity associated with oxidative stress,3–5 or ferroptosis,6 or HIF-1α (hypoxia-inducible factor 1 alpha),7 could play a central role in this damage and in ICH-associated inflammation.8–12 After ICH, masses of polymorphonuclear neutrophils (PMNs) infiltrate the ICH-affected brain parenchyma, where they release various granule components during degranulation.13–15 LTF (lactoferrin) is an iron-binding protein found in the secondary granules of PMNs.16 Normally, LTF released by PMNs acts as a first-line defense protein against microbial pathogens through the chelation of iron. Structurally, LTF is a single-chain glycoprotein that folds into 2 lobes, and each lobe can tightly bind and sequester 1 ferric iron with high affinity (Kd ≈ 10^−20 mol/L).17,18 Importantly, unlike other iron-binding proteins, LTF can retain ferric ions (Fe3+) even at the low pH associated with inflamed tissue. This property can effectively prevent ferrous ions (Fe2+) from engaging in Fenton’s reaction, a source of oxidative stress and inflammation.19,20 This is particularly important in ICH pathogenesis because after ICH, the erythrocytes (RBCs) in the hematoma undergo progressive hemolysis, generating large quantities of hemoglobin, heme, and iron and leading to damage to brain cells and the neurovasculature.3,10,21–24 Thus, LTF delivered from infiltrated PMNs could mediate neutralization of toxic iron and heme, which could be of particular clinical relevance to ICH.15 Here, we measured the temporal and spatial changes of LTF protein in the ICH-affected brain and explored its potential therapeutic role in rodents after ICH.

Materials and Methods

All animal studies followed the guidelines outlined in the Guide for the Care and Use of Laboratory Animals from the National Institutes of Health and the ARRIVE guidelines (Animal Research: Reporting In Vivo Experiments) and were approved by the Animal Welfare Committee of the University of Texas Health Science Center at Houston.
All studies were performed using a randomization (coin toss) approach. All analyses were performed by investigators blinded to treatment assignments.

ICH in Mouse

ICH in male 3-month-old mice was induced by intrastriatal injection of 15 μL of autologous blood, as described previously.24–26

Animal Perfusion and Tissue Collection

Animals were anesthetized with chloral hydrate (0.5 g/kg IP) and intracardially perfused with ice-cold phosphate-buffered saline. For histology/biochemical analyses, the brains or the hematoma-affected striata were frozen in −80°C 2-methylbutane and stored at −80°C before cryosectioning, RNA isolation, or protein extraction.

LTF ELISA

Blood plasma or PMN culture medium LTF was measured with a mouse LTF ELISA kit (LS-F4352; LSBio).

Blood Neutrophil Isolation

PMNs were purified from heart puncture-drawn blood using a Ficoll-Paque gradient.15 PMNs were further purified using a Neutrophil Enrichment kit (STEMCELL Technologies).

LTF Degranulation Assay

Fresh PMNs in RPMI1640/10% mouse serum at 2×10^5 cells/mL were cultured in a CO2 incubator for 2 hours before being subjected to 15 minutes of oxygen-glucose deprivation.27 At 2 hours after reperfusion, or after incubation with lysed RBC or 1 μmol/L A23187 (calcium ionophore), the culture medium was assessed for LTF with ELISA.

LTF Administration

mrLTF (recombinant mouse LTF) was injected intravenously at 10 mg/kg in 250 μL of saline (or saline alone/control), starting 3 hours after ICH, plus 5 mg/kg PO at 24 and 48 hours after ICH. For the cell culture studies, mrLTF was added to the culture medium 15 minutes before the insult.
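As a sanity check on the weight-based regimen above (10 mg/kg IV in 250 μL of saline), a minimal arithmetic sketch; the 25 g body weight is a hypothetical value for a 3-month-old mouse, not a figure from the study:

```python
# Illustrative helper (not from the paper): absolute dose implied by a
# weight-based regimen, and the resulting concentration for a fixed
# injection volume.

def iv_dose_mg(dose_mg_per_kg: float, body_weight_g: float) -> float:
    """Absolute dose in mg for a weight-based regimen: mg/kg x kg."""
    return dose_mg_per_kg * body_weight_g / 1000.0

dose = iv_dose_mg(10.0, 25.0)   # hypothetical 25 g mouse at 10 mg/kg
print(dose)                      # 0.25 mg total
print(dose / 0.250)              # 1.0 mg/mL if delivered in 250 uL
```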
Neurological/Functional Deficits and Edema Measurement

An individual test score and a combination test score (grand neurological deficit score) assumed an equal weight for each of the tests (Footfault, Forelimb Placing, Postural Flexing, Wire, and Corner tests).24,28 Brain edema was assessed with the wet/dry method.29

RNA Isolation and Reverse Transcription-Polymerase Chain Reaction

The dissected hematoma-affected striata were processed with Trizol Reagents for mRNA extraction. LTF primers (5′-cgaagcacgaatgacaaaga/3′-atcacacttgcgcttctcct) and GAPDH primers (5′-tgttcctacccccaatgtgt/3′-tgtgagggagatgctcagtg) were used for reverse transcription-polymerase chain reaction.24,30

Western Blot

LTF and GAPDH in the naive and ICH-affected striatum were determined as described.24 Rabbit anti-LTF (bs-5810R; Bioss) or chicken anti-GAPDH (AB2302; Millipore) immunopositive bands were visualized using goat anti-rabbit IgG-horseradish peroxidase (Invitrogen) or goat anti-chicken IgG-horseradish peroxidase (Invitrogen) and a chemiluminescence substrate (Pierce).

Immunofluorescence and Cell Counting

Immunohistochemistry for LTF and LTF/neutrophil double labeling on 10-μm-thick cryosections was performed as described earlier.24 Rabbit anti-LTF (L3262; Sigma) and rat anti-mouse neutrophil antibody (ab53457; Abcam) were used to visualize LTF and PMNs. The nuclei were visualized with DAPI. For PMN counting, cryosections were generated at the level of needle insertion (site of blood injection). The total number of LTF+ PMNs on digitized images representing the whole striatum, or of PMNs on smears (from tail blood), was counted with CellSens (Olympus).

PMN Depletion in Mice After ICH

PMNs were depleted with rat monoclonal anti-Ly-6G (lymphocyte antigen 6 complex locus G6D; BioXCell/BE0075) at 500 μg/mouse IP at 24 and 48 hours after ICH, according to a validated protocol.15,31 A rat IgG2a isotype antibody (BioXCell/BE00892) served as the control.
PMN depletion was verified with flow cytometry and microscopic morphology. For flow cytometry, blood was collected via cardiac puncture. We stained cells with the following fluorophores (Tonbo-Bioscience): CD45-vf450, CD11b-APC-Cy7, Ly6G-Pe-Cy7, and Ly6C-APC. Data were acquired on a Cytoflex S cytometer (Beckman Coulter) and analyzed with FlowJo Software (Tree Star). CD45+/CD11b+/Ly6C-intermediate/SSC-high was used to identify neutrophils. The gating strategy is described in Figure I in the online-only Data Supplement. We additionally performed direct blood leukocyte classification and counting on Giemsa-stained tail blood smears using microscopic morphological features.32

Primary Cortical Neuron–Glial and Cortical Neuron Cultures

Primary neuron–glial cocultures from embryonic day 18-20 embryos from C57/BJ6 mice were prepared as we described.29 Cortical neuron cultures from embryonic day 18 mouse embryos in Neurobasal medium with B27 were prepared as we described.27

ICH-Like Injury In Vitro

We exposed 12-day-old cortical neuron–glial cocultures to RBC lysate (1 μL lysate/100 μL medium) in the presence/absence of mrLTF. Twenty hours later, cell viability was assessed with MTT reagent (G4000; Promega). To simulate iron-induced neurotoxicity, we added FeCl3 (1–100 μmol/L) to 12-day-old cortical neurons in culture. This produced dose-dependent neuronal death, with 50 μmol/L FeCl3 causing 95% neuronal death within 24 hours. To determine the role of mrLTF in detoxifying iron, we incubated neurons with 10 μmol/L FeCl3 for 6 hours with or without 20 μg/mL of mrLTF. A lactate dehydrogenase assay kit (G1780; Promega) was used to measure injury.

Statistical Analyses

We used the GraphPad and InStat programs for statistical analyses. One-way ANOVA followed by the Newman–Keuls post-test was used for multiple comparisons. The paired t test was used when 2 groups were compared. Sample size was established based on our previous experience with this model.
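The omnibus test named above (one-way ANOVA) can be written out in plain Python to show the arithmetic behind it. The three groups below are hypothetical viability values, not data from the study, and the Newman–Keuls post-test the authors applied in GraphPad/InStat is not reproduced here:

```python
# Minimal one-way ANOVA sketch: F statistic computed from scratch.
# Groups are illustrative only.

def one_way_anova_F(groups):
    """F statistic: between-group mean square over within-group mean square."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

control = [100.2, 98.7, 101.5]          # hypothetical viability, % of control
lysate = [22.1, 25.4, 19.8]             # hypothetical RBC-lysate group
lysate_plus_ltf = [61.3, 58.9, 64.2]    # hypothetical lysate + LTF group
F = one_way_anova_F([control, lysate, lysate_plus_ltf])
print(round(F, 1))  # large F -> group means differ well beyond within-group noise
```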
Results

LTF Increases in Brain After ICH

Using immunofluorescence to label LTF+ cells, we detected only rare LTF+ cells in the naive mouse brain or in the contralateral hemisphere of the ICH-affected brain up to 14 days after ICH (data not shown). However, we detected a robust increase in the number of LTF+ cells in the perihematoma/hematoma territory after ICH (Figure 1A and 1A′). Using double immunofluorescence staining for LTF and PMNs, we found that almost all of the LTF+ cells found in the brain are PMNs (Figure 1B). The increase in LTF+ cells became apparent at 6 hours after ICH, peaked at day 2, and slowly subsided over 14 days (Figure 2A and 2B). Corresponding to these immunohistochemical findings, Western blot showed that LTF levels in the ICH-affected brain hemisphere increase robustly, with a many-fold increase in brain LTF at day 2 after ICH compared with naive (Figure 2C).

Figure 1. LTF+ polymorphonuclear neutrophils (PMNs) in brain after intracerebral hemorrhage (ICH) in mouse. A, LTF (lactoferrin) immunofluorescence (red) in a brain coronal section at 48 hours after ICH. A′ is a highlighted area in A. The arrowheads in A′ indicate the LTF+ cells. B, Double immunofluorescence of LTF (red) and neutrophil (green) in the brain at 24 hours after ICH. The nuclei are stained with DAPI (blue). A combination (Comb) of LTF and PMN staining shows the 100% colocalization.

Figure 2. Intracerebral hemorrhage (ICH) transiently increases LTF (lactoferrin) in the ICH-affected brain. A, Representative images of LTF immunofluorescence in the striatum of a naive mouse and of mice at 1 hour to 14 days after ICH in the hematoma-affected subcortical striatum. The scale bar=50 μm. B, Bar graph quantitating the presence of LTF+ cells in brain at 1 to 14 days after ICH. Data are expressed as mean±SEM (n=5). P≤0.05 vs naive group. C, Representative LTF and GAPDH Western blot in naive striatum and in hematoma-affected striatum at 1 hour to 14 days after ICH.
Data are expressed as mean±SEM (n=3). P≤0.05 vs naive group. OD indicates optical density.

We emphasize that there is little detectable LTF mRNA in the naive mouse brain or in the ICH-affected brains at any time point after ICH, despite the ICH-induced increase in LTF protein (data not included). Furthermore, there is no detectable LTF mRNA found in purified peripheral blood PMNs (data not included). This is in agreement with the existing data that, in contrast to bone marrow–derived developing PMNs, mature blood neutrophils primarily transport but do not transcribe LTF.33–37 Taken together, these results suggest that the abundant amount of LTF found in the brain after ICH is delivered by mature, bone marrow–derived PMNs.

LTF Is Present in Peripheral PMNs and Is Released by PMNs on Exposure to RBC

Using fresh blood-purified mature PMNs, we demonstrated that LTF (normally packaged in the specific granules of PMNs) can be quickly released into the media on PMN activation with RBCs or brief sublethal exposure to oxygen-glucose deprivation (Figure 3A). Exposure to A23187, a calcium ionophore (capable of inducing degranulation), served as a positive control. These data suggest that RBCs and hypoperfusion (which often occurs secondary to ICH) can trigger degranulation and the release of LTF. However, the precise mechanism underlying this process is unknown.

Figure 3. LTF (lactoferrin) in peripheral blood polymorphonuclear neutrophils (PMNs). A, Quantification (using ELISA) of LTF released into the culture medium by purified blood PMNs exposed to 15 minutes of oxygen-glucose deprivation (OGD), lysed RBC, or calcium ionophore (A23187; 1 μmol/L). The data are mean±SEM (n=3). P≤0.05 vs control. B, LTF immunofluorescence (green) in purified mouse blood PMNs. The LTF+ cell has a lobular nucleus, which is the typical morphology of a mature neutrophil. Bar=20 μm. C, X–Y plot of blood plasma LTF levels at the indicated time points after intracerebral hemorrhage (ICH).
The data are mean±SEM (n=5). P≤0.05 vs day 0 before ICH.

It is intriguing to note that, as a molecule that is normally stored in blood PMNs (Figure 3B), LTF after ICH was increased not only in the ICH-affected brain (the site of PMN infiltration) but also in the blood plasma (Figure 3C), and that the temporal profile of the LTF increase in the blood and brain was very similar (Figure 2B and 2C). This suggests that ICH may also induce degranulation of circulating PMNs.

PMN Depletion After ICH Aggravates Neurological Deficits

Although earlier studies reported that PMN depletion before ICH is protective against ICH-mediated damage,38 the role of PMNs entering the brain at later stages after ICH is not known. In light of our recent studies demonstrating that the LTF levels in PMNs gradually increase after ICH,15 we hypothesized that PMNs may exert beneficial effects in the later stages of ICH. To test this, we used a well-validated technique of systemic injection of anti-mouse Ly-6G neutralizing antibody to deplete systemic PMNs in mice. An isotype-matched antibody served as a control.15,39 Mice that received this neutralizing antibody at 24 and 48 hours after ICH became neutropenic, showing a robust 50% to 90% (depending on the method) depletion of peripheral PMNs (Figure 4A and 4B), as measured at 72 hours after ICH. These neutropenic mice showed a 62% reduction in the circulating LTF level. Ultimately, we demonstrated that this delayed post-ICH depletion of PMNs worsens functional/neurological deficits as measured at day 3 after ICH (Figure 4C), suggesting that PMNs at later stages could play some beneficial role, for example, through delivery of LTF.

Figure 4. Polymorphonuclear neutrophil (PMN) depletion aggravates neurological deficits. To deplete PMNs, mice were injected with anti-Ly-6G, a neutrophil neutralizing antibody, or isotype IgG at 500 μg/mouse at 24 and 48 hours after ICH (intraperitoneal; n=10 mice per group).
The Ly-6G–mediated PMN depletion at day 3 resulted in a 50% to 70% reduction of PMNs, while having a negligible effect on lymphocytes (Lymph) or monocytes (Mono), as determined on tail blood smears using microscopic morphological features (A), and about 90% depletion as measured with flow cytometry using CD45+/CD11b+/Ly6C-intermediate/SSC-high as markers for gating on cardiac puncture-collected blood (B). The neurological deficit scores (NDS) were quantified with postural flexing and forward placing at 3 days after intracerebral hemorrhage (ICH; C). The data are mean±SEM. P≤0.05 vs control. WBC indicates white blood cells.

LTF Is Cytoprotective in an ICH-Like Injury Model In Vitro

The neurotoxicity of the products of hemolysis, including hemoglobin, heme, and iron, is an important component of ICH-mediated brain damage.40,41 To clarify the role of LTF in RBC-induced cerebral toxicity, we added RBC lysate to primary neuronal–glial cocultures29 (an in vitro ICH-like injury model) containing mrLTF. As anticipated, we observed that RBC lysates caused severe neuronal damage, as shown using the MTT assay, and that exogenously added mrLTF dose-dependently preserved the viability of cells exposed to RBC lysates (Figure 5A).

Figure 5. mrLTF (recombinant mouse lactoferrin) protects brain cells from intracerebral hemorrhage (ICH)–like injury. A, MTT (viability index) in the mouse neuron–glia coculture at 24 hours after exposure to lysed RBC in the presence of 0 to 100 μg/mL of mrLTF. The data are mean±SEM (n=3). P≤0.05 vs control without mrLTF (0). Con is the naive control. B, Lactate dehydrogenase in cortical neuron culture at 6 hours after exposure to 10 μmol/L FeCl3, with or without 20 μg/mL of mrLTF. The data are mean±SEM (n=3). P≤0.05; P≤0.01 vs the other 2 groups.

To further explore the protective nature of LTF, we tested primary cultured neurons that were exposed to FeCl3.
First, we established that FeCl3 at 5 to 100 μmol/L induced dose-dependent neuronal injury, as assessed by lactate dehydrogenase release assay (data not included). We then incubated cortical neurons with 10 μmol/L FeCl3 (a dose that produces about 80% neuronal loss) in the presence or absence of 20 μg/mL mrLTF, demonstrating that LTF can robustly protect neurons from iron-induced injury (Figure 5B).

rLTF Protects Mice From Damage Caused by ICH

Encouraged by the neuroprotective capacities of mrLTF in the in vitro testing, we next examined whether mrLTF treatment conferred therapeutic benefit in a mouse model of ICH. We subjected mice to ICH and treated them with mrLTF (10 mg/kg IV at 3 hours after ICH, plus 5 mg/kg orally on days 1 and 2). We found that rLTF provided a robust benefit after ICH, reducing brain edema (Figure 6A) and the grand neurological deficit score (a composite score from 5 individual behavioral tests: Postural Flexing, Forward Placing, Footfault, Wire, and Circling) by 35.2% (11.4 versus 17.6; P≤0.05) on day 3 after ICH (Figure 6A′). Ultimately, we showed that 24-hour-delayed administration of mrLTF (10 mg/kg, orally on days 1–7) was also effective in reducing the neurological deficit (12.2±0.7 versus 8.4±0.5; P<0.05), as assessed on day 7 after ICH (Figure 6B).

Figure 6. mrLTF (recombinant mouse lactoferrin) protects mouse from intracerebral hemorrhage (ICH) injury. A, Brain edema (water content in the ipsilateral and contralateral striatum) on day 3 after ICH. The data are mean±SEM (n=7). P≤0.05 vs saline control. A′, Grand neurological deficit score (NDS; a combination score of sensory-motor behavioral tests) on day 3 after ICH. The data are mean±SEM (n=12 per group). P≤0.05 vs saline control. B, Grand NDS on day 7 after ICH in animals treated with mrLTF starting 24 hours after ICH (n=5 per group). P≤0.05 vs saline control.
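The 35.2% figure quoted above follows directly from the two grand NDS values in the text (17.6 for saline versus 11.4 for mrLTF); a quick check of that arithmetic, with the helper itself being purely illustrative:

```python
# Percent reduction of a deficit score relative to the control score.
# Score values (17.6 vs 11.4) are taken from the text.

def percent_reduction(control_score: float, treated_score: float) -> float:
    """Percent reduction: (control - treated) / control * 100."""
    return (control_score - treated_score) / control_score * 100.0

print(round(percent_reduction(17.6, 11.4), 1))  # 35.2
```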
Discussion

In this study, we have demonstrated that the iron-binding protein LTF is normally present in the brain at very low levels and that ICH results in a robust and transient increase in brain LTF content. LTF detected in the brain was primarily identified in infiltrating PMNs seen in the brain areas directly affected by the ICH. Using in vitro models, we have demonstrated that ICH-like environments could induce PMNs to release LTF and that LTF added exogenously to the culture medium could protect brain cells in culture from toxicity induced by the products of hemolysis and iron. We also showed that the systemic depletion of PMNs using Ly-6G antibody 24 hours after ICH could exacerbate ICH outcome, suggesting that PMNs at later stages of ICH could contribute to a positive outcome, possibly in part by delivering cytoprotective LTF. Finally, we demonstrated that mrLTF used as therapy for ICH was effective in reducing ICH-mediated damage when injected 3 hours after the ictus.

LTF is a glycoprotein and a member of the transferrin family. LTF is normally synthesized and released by mucosal tissues and PMNs, and PMNs specifically are known to be the key contributor to plasma LTF levels.42 The initial objective of this study was to examine the time course and source of LTF in the brain after ICH. We found that LTF increases robustly within the initial 24 hours after ICH, reaching its highest level on day 2 and slowly declining over the next 2 weeks. Interestingly, this time profile is identical to the time profile of neutrophil infiltration into the brain after ICH, as we reported previously in a similar ICH model.14 Thus, in conjunction with the present findings showing that all of the LTF-positive cells in the ICH-affected brain are neutrophils, we think that the primary source of LTF in the brain after ICH is neutrophils.
However, as LTF is also increased in the peripheral blood after ICH, it is likely that some of the LTF present in the ICH-affected brain originates from the blood circulation. The origin of the increased levels of LTF in peripheral blood after ICH was not investigated in this study; nevertheless, it is known that LTF can be readily released from circulating blood PMNs during any environmental insult, including trauma.43 Indeed, experimentally induced neutropenia in this study reduced the ICH-induced LTF increase. It has to be emphasized that in this study and in our earlier work,15 LTF mRNA was very low both in the naive brain and in the brain after ICH, even when PMN counts peaked in the brain. This suggests that local brain cells and mature infiltrating neutrophils have limited abilities to transcribe LTF and that the LTF found in the brain is synthesized at other, remote locations. This is in agreement with the notion that LTF synthesis and packaging into granules occur during PMN maturation in bone marrow.33–37

It is well accepted that LTF, besides its effect on iron sequestration, acts as a pleiotropic agent playing the important role of an immune sensor, which directs specific immune responses and maintains immune homeostasis.43 The level of LTF in blood is normally low (0.2–0.6 μg/mL), with a transient 100-fold increase on insult-induced activation of neutrophils.44 After ICH, PMN infiltration into the ICH-affected brain has been proposed to augment local damage.13,31,45 Indeed, the systemic depletion of neutrophils before ICH was shown to ameliorate ICH-mediated damage,31 suggesting that the early infiltration of PMNs is deleterious. In contrast to the pre-ICH depletion paradigm, we found that late (24 hours) post-ICH PMN depletion is detrimental, suggesting the presence of some beneficial function of PMNs at this subacute stage of injury.
One possibility, suggested by this study and our earlier findings, is that PMNs can deliver cytoprotective LTF to the ICH-affected brain, neutralizing iron and blocking its toxicity. We have recently found that, in response to ICH, microglia activated by this insult produce and release interleukin-27. Furthermore, interleukin-27 is elevated in the cerebrospinal fluid and in peripheral blood after ICH, which could amplify production of LTF and haptoglobin and reduce expression of inducible nitric oxide synthase and matrix metalloproteinases in the maturing PMNs in bone marrow.15 Thus, ICH can alter the phenotype of PMNs such that PMNs entering the brain at later stages could have acquired augmented beneficial properties, including increased blood detoxification efficacy.

LTF is stored in the specific granules of PMNs and is normally released into the extracellular environment via degranulation to sequester hematoma-derived iron. Using PMNs harvested from blood, we demonstrated that PMNs release LTF on in vitro exposure to RBCs (the main component of an ICH hematoma). In addition, we showed that low, subinjurious oxygen-glucose deprivation (mimicking the moderate deficit in blood perfusion seen in the perihematoma area after ICH) can also augment LTF release from PMNs. Taken together, these results suggest that PMNs entering the ICH-affected brain may detect similar stimuli (blood products and mildly ischemic tissue), causing them to degranulate and release LTF. Guided by the fact that PMN-delivered LTF could be beneficial in ICH pathobiology and by our recent studies showing that LTF limits ICH-mediated injury,15 we have now shown that rLTF blocks the toxicity of hemolytic products in mixed glia-neuron culture and the toxicity of iron in primary neuronal culture. Similar to the human rLTF used in our earlier studies, mrLTF injected 3 hours after ICH could effectively limit ICH-mediated damage and specifically reduce brain edema and neurological deficit.
In conclusion, our present study demonstrates that LTF normally has a limited presence in the brain but is delivered and released to the site of ICH injury by neutrophils. Moreover, LTF has a potent capacity to limit iron-mediated cytotoxicity and to protect the brain from injury caused by ICH.

Supplemental Material

str_stroke-2017-020544_supp1.pdf

References

1. Mayer SA, Rincon F. Treatment of intracerebral haemorrhage. Lancet Neurol. 2005;4:662–672. doi: 10.1016/S1474-4422(05)70195-2.
2. Qureshi AI, Mendelow AD, Hanley DF. Intracerebral haemorrhage. Lancet. 2009;373:1632–1644. doi: 10.1016/S0140-6736(09)60371-8.
3. Wagner KR, Sharp FR, Ardizzone TD, Lu A, Clark JF. Heme and iron metabolism: role in cerebral hemorrhage. J Cereb Blood Flow Metab. 2003;23:629–652. doi: 10.1097/01.WCB.0000073905.87928.6D.
4. Nakamura T, Keep RF, Hua Y, Hoff JT, Xi G. Oxidative DNA injury after experimental intracerebral hemorrhage. Brain Res. 2005;1039:30–36. doi: 10.1016/j.brainres.2005.01.036.
5. Núñez MT, Urrutia P, Mena N, Aguirre P, Tapia V, Salazar J. Iron toxicity in neurodegeneration. Biometals. 2012;25:761–776. doi: 10.1007/s10534-012-9523-0.
6. Zille M, Karuppagounder SS, Chen Y, Gough PJ, Bertin J, Finger J, et al. Neuronal death after hemorrhagic stroke in vitro and in vivo shares features of ferroptosis and necroptosis. Stroke. 2017;48:1033–1043. doi: 10.1161/STROKEAHA.116.015609.
7. Karuppagounder SS, Alim I, Khim SJ, Bourassa MW, Sleiman SF, John R, et al. Therapeutic targeting of oxygen-sensing prolyl hydroxylases abrogates ATF4-dependent neuronal death and improves outcomes after brain hemorrhage in several rodent models. Sci Transl Med. 2016;8:328ra29. doi: 10.1126/scitranslmed.aac6008.
8. Aronowski J, Hall CE. New horizons for primary intracerebral hemorrhage treatment: experience from preclinical studies. Neurol Res. 2005;27:268–279. doi: 10.1179/016164105X25225.
9. Wang J, Doré S. Inflammation after intracerebral hemorrhage. J Cereb Blood Flow Metab. 2007;27:894–908. doi: 10.1038/sj.jcbfm.9600403.
10. Aronowski J, Zhao X. Molecular pathophysiology of cerebral hemorrhage: secondary brain injury. Stroke. 2011;42:1781–1786. doi: 10.1161/STROKEAHA.110.596718.
11. Zhao X, Sun G, Ting SM, Song S, Zhang J, Edwards NJ, et al. Cleaning up after ICH: the role of Nrf2 in modulating microglia function and hematoma clearance. J Neurochem. 2015;133:144–152. doi: 10.1111/jnc.12974.
12. Hickenbottom SL, Grotta JC, Strong R, Denner LA, Aronowski J. Nuclear factor-kappaB and cell death after experimental intracerebral hemorrhage in rats. Stroke. 1999;30:2472–2477.
13. Gong C, Hoff JT, Keep RF. Acute inflammatory reaction following experimental intracerebral hemorrhage in rat. Brain Res. 2000;871:57–65.
14. Zhao X, Sun G, Zhang H, Ting SM, Song S, Gonzales N, et al. Polymorphonuclear neutrophil in brain parenchyma after experimental intracerebral hemorrhage. Transl Stroke Res. 2014;5:554–561. doi: 10.1007/s12975-014-0341-2.
15. Zhao X, Ting SM, Liu CH, Sun G, Kruzel M, Roy-O’Reilly M, et al. Neutrophil polarization by IL-27 as a therapeutic target for intracerebral hemorrhage. Nat Commun. 2017;8:602. doi: 10.1038/s41467-017-00770-7.
16. Baggiolini M, De Duve C, Masson PL, Heremans JF. Association of lactoferrin with specific granules in rabbit heterophil leukocytes. J Exp Med. 1970;131:559–570.
17. Actor JK, Hwang SA, Kruzel ML. Lactoferrin as a natural immune modulator. Curr Pharm Des. 2009;15:1956–1973.
18. Lönnerdal B, Iyer S. Lactoferrin: molecular structure and biological function. Annu Rev Nutr. 1995;15:93–110. doi: 10.1146/annurev.nu.15.070195.000521.
19. Baker HM, Baker EN. Lactoferrin and iron: structural and dynamic aspects of binding and release. Biometals. 2004;17:209–216.
20. Haney EF, Nazmi K, Bolscher JG, Vogel HJ. Structural and biophysical characterization of an antimicrobial peptide chimera comprised of lactoferricin and lactoferrampin. Biochim Biophys Acta. 2012;1818:762–775. doi: 10.1016/j.bbamem.2011.11.023.
21. Koeppen AH. The history of iron in the brain. J Neurol Sci. 1995;134(suppl):1–9.
22. Xi G, Wagner KR, Keep RF, Hua Y, de Courten-Myers GM, Broderick JP, et al. Role of blood clot formation on early edema development after experimental intracerebral hemorrhage. Stroke. 1998;29:2580–2586.
23. Wagner KR, Dwyer BE. Hematoma removal, heme, and heme oxygenase following hemorrhagic stroke. Ann N Y Acad Sci. 2004;1012:237–251.
24. Zhao X, Sun G, Zhang J, Strong R, Song W, Gonzales N, et al. Hematoma resolution as a target for intracerebral hemorrhage treatment: role for peroxisome proliferator-activated receptor gamma in microglia/macrophages. Ann Neurol. 2007;61:352–362. doi: 10.1002/ana.21097.
25. Felberg RA, Grotta JC, Shirzadi AL, Strong R, Narayana P, Hill-Felberg SJ, et al. Cell death in experimental intracerebral hemorrhage: the “black hole” model of hemorrhagic damage. Ann Neurol. 2002;51:517–524.
26. Zhao X, Sun G, Zhang J, Strong R, Dash PK, Kan YW, et al. Transcription factor Nrf2 protects the brain from damage produced by intracerebral hemorrhage. Stroke. 2007;38:3280–3286. doi: 10.1161/STROKEAHA.107.486506.
27. Zhao X, Wang H, Sun G, Zhang J, Edwards NJ, Aronowski J. Neuronal Interleukin-4 as a modulator of microglial pathways and ischemic brain damage. J Neurosci. 2015;35:11281–11291. doi: 10.1523/JNEUROSCI.1685-15.2015.
28. Zhao X, Strong R, Zhang J, Sun G, Tsien JZ, Cui Z, et al. Neuronal PPARgamma deficiency increases susceptibility to brain damage after cerebral ischemia. J Neurosci. 2009;29:6186–6195. doi: 10.1523/JNEUROSCI.5857-08.2009.
29. Zhao X, Song S, Sun G, Strong R, Zhang J, Grotta JC, et al. Neuroprotective role of haptoglobin after intracerebral hemorrhage. J Neurosci. 2009;29:15819–15827. doi: 10.1523/JNEUROSCI.3776-09.2009.
30. Zhao X, Sun G, Zhang J, Ting SM, Gonzales N, Aronowski J. Dimethyl fumarate protects brain from damage produced by intracerebral hemorrhage by mechanism involving nrf2. Stroke. 2015;46:1923–1928.
31. Sansing LH, Harris TH, Kasner SE, Hunter CA, Kariko K. Neutrophil depletion diminishes monocyte infiltration and improves functional outcome after experimental intracerebral hemorrhage. Acta Neurochir Suppl. 2011;111:173–178. doi: 10.1007/978-3-7091-0693-8_29.
32. Gautam A, Bhadauria H. Classification of white blood cells based on morphological features. IEEE Xplore. 2014;2363–2368.
33. Rado TA, Bollekens J, St Laurent G, Parker L, Benz EJ. Lactoferrin biosynthesis during granulocytopoiesis. Blood. 1984;64:1103–1109.
34. Rado TA, Wei XP, Benz EJ. Isolation of lactoferrin cDNA from a human myeloid library and expression of mRNA during normal and leukemic myelopoiesis. Blood. 1987;70:989–993.
35. Itoh K, Okubo K, Utiyama H, Hirano T, Yoshii J, Matsubara K. Expression profile of active genes in granulocytes. Blood. 1998;92:1432–1441.
36. Nagaoka I, Hirata M, Sugimoto K, Tsutsumi-Ishii Y, Someya A, Saionji K, et al. Evaluation of the expression of human CAP18 gene during neutrophil maturation in the bone marrow. J Leukoc Biol. 1998;64:845–852.
37. Cowland JB, Borregaard N. The individual regulation of granule protein mRNA levels during neutrophil maturation explains the heterogeneity of neutrophil granules. J Leukoc Biol. 1999;66:989–995.
38. Moxon-Emre I, Schlichter LC. Neutrophil depletion reduces blood-brain barrier breakdown, axon injury, and inflammation after intracerebral hemorrhage. J Neuropathol Exp Neurol. 2011;70:218–235. doi: 10.1097/NEN.0b013e31820d94a5.
39. Johnson HL, Chen Y, Jin F, Hanson LM, Gamez JD, Pirko I, et al. CD8 T cell-initiated blood-brain barrier disruption is independent of neutrophil support. J Immunol. 2012;189:1937–1945. doi: 10.4049/jimmunol.1200658.
40. Riopelle RJ, Kennedy JC. Some aspects of porphyrin neurotoxicity in vitro. Can J Physiol Pharmacol. 1982;60:707–714.
41. Wang X, Mori T, Sumii T, Lo EH. Hemoglobin-induced cytotoxicity in rat cerebral cortical neurons: caspase activation and oxidative stress. Stroke. 2002;33:1882–1888.
42. Hansen NE, Malmquist J, Thorell J. Plasma myeloperoxidase and lactoferrin measured by radioimmunoassay: relations to neutrophil kinetics. Acta Med Scand.
Published In
Stroke. Volume 49, Number 5, May 2018. Pages 1241–1247. PubMed: 29636422.
Copyright © 2018 American Heart Association, Inc.
History
Received: 21 December 2017. Revision received: 13 February 2018. Accepted: 21 February 2018. Published online: 10 April 2018. Published in print: May 2018.

Keywords: brain edema; inflammation; lactoferrin; neutrophil; transferrin.
Subjects: Translational Studies.

Authors
Xiurong Zhao, MD; Shun-Ming Ting, MS; Guanghua Sun, MD; Meaghan Roy-O’Reilly, MS; Alexis S. Mobley, MS; Jesus Bautista Garrido, MA; Xueping Zheng, MD; Lidiya Obertas, MS; and Joo Eun Jung, PhD: Department of Neurology.
Marian Kruzel, PhD: Department of Integrative Biology and Pharmacology, McGovern Medical School, University of Texas HSC, Houston.
Jaroslaw Aronowski, MD, PhD: Department of Neurology, McGovern Medical School, University of Texas HSC, Houston.

Notes
Guest Editor for this article was Miguel A. Perez-Pinzon, PhD.
The online-only Data Supplement is available with this article.
Correspondence to Jaroslaw Aronowski, Department of Neurology, McGovern Medical School, University of Texas, 6431 Fannin St, Houston, TX 77030. E-mail j.aronowski@uth.tmc.edu

Disclosures
None.

Sources of Funding
This study was supported by National Institutes of Health–National Institute of Neurological Disorders and Stroke grants RO1NS096308 and R42NS090650.
Figures

Figure 1. LTF+ polymorphonuclear neutrophils (PMNs) in brain after intracerebral hemorrhage (ICH) in mouse. A, LTF (lactoferrin) immunofluorescence (red) in a brain coronal section at 48 hours after ICH. A’ is a highlighted area in A; the arrowheads in A’ indicate LTF+ cells. B, Double immunofluorescence of LTF (red) and neutrophils (green) in the brain at 24 hours after ICH. Nuclei are stained with DAPI (blue). The combined (Comb) LTF and PMN staining shows 100% colocalization.

Figure 2. Intracerebral hemorrhage (ICH) transiently increases LTF (lactoferrin) in the ICH-affected brain. A, Representative images of LTF immunofluorescence in the hematoma-affected subcortical striatum of a naive mouse and of mice at 1 hour to 14 days after ICH. Scale bar=50 μm.
B, Bar graph quantifying the presence of LTF+ cells in brain at 1 to 14 days after ICH. Data are expressed as mean±SEM (n=5). P≤0.05 vs naive group. C, Representative LTF and GAPDH Western blot in naive striatum and in hematoma-affected striatum at 1 hour to 14 days after ICH. Data are expressed as mean±SEM (n=3). P≤0.05 vs naive group. OD indicates optical density.

Figure 3. LTF (lactoferrin) in peripheral blood polymorphonuclear neutrophils (PMNs). A, Quantification (by ELISA) of LTF released into the culture medium by purified blood PMNs exposed to 15 minutes of oxygen-glucose deprivation (OGD), lysed RBC, or calcium ionophore (A23187; 1 μmol/L). Data are mean±SEM (n=3). P≤0.05 vs control. B, LTF immunofluorescence (green) in purified mouse blood PMNs. The LTF+ cell has a lobular nucleus, the typical morphology of a mature neutrophil. Scale bar=20 μm. C, X–Y plot of blood plasma LTF levels at the indicated time points after intracerebral hemorrhage (ICH). Data are mean±SEM (n=5). P≤0.05 vs day 0 (before ICH).

Figure 4. Polymorphonuclear neutrophil (PMN) depletion aggravates neurological deficits. To deplete PMNs, mice were injected (intraperitoneal; n=10 mice per group) with anti-Ly-6G, a neutrophil-neutralizing antibody, or with isotype IgG at 500 μg/mouse at 24 and 48 hours after ICH. The Ly-6G–mediated PMN depletion at day 3 resulted in a 50% to 70% reduction of PMNs, with negligible effect on lymphocytes (Lymph) or monocytes (Mono), as determined on tail blood smears using microscopic morphological features (A), and in about 90% depletion as measured with flow cytometry using CD45+/CD11b+/Ly6C-intermediate/SSC-high as markers for gating on cardiac puncture–collected blood (B). Neurological deficit scores (NDS) were quantified with postural flexing and forward placing tests at 3 days after intracerebral hemorrhage (ICH) (C). Data are mean±SEM. P≤0.05 vs control. WBC indicates white blood cells.
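The figure legends above report group data as mean±SEM. As a reference for how those summary values are computed, here is a minimal Python sketch; the cell counts below are invented for illustration and are not data from the study.

```python
# Hypothetical example of the mean±SEM summary reported in the figure
# legends (e.g., "mean±SEM, n=5"). The counts are made-up values.
import math

def mean_sem(values):
    """Return (mean, standard error of the mean) for a list of numbers."""
    n = len(values)
    mean = sum(values) / n
    # Sample standard deviation (n - 1 in the denominator)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return mean, sd / math.sqrt(n)

# Five hypothetical LTF+ cell counts per field of view
counts = [12, 15, 11, 14, 13]
m, sem = mean_sem(counts)
print(f"{m:.1f} ± {sem:.2f} (n={len(counts)})")  # prints "13.0 ± 0.71 (n=5)"
```

The SEM (standard deviation divided by the square root of n) quantifies the precision of the group mean, which is why it shrinks as group size grows.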
Figure 5. mrLTF (recombinant mouse lactoferrin) protects brain cells from intracerebral hemorrhage (ICH)–like injury. A, MTT (viability index) in mouse neuron–glia coculture at 24 hours after exposure to lysed RBC in the presence of 0 to 100 μg/mL of mrLTF. Data are mean±SEM (n=3). P≤0.05 vs control without mrLTF (0). Con indicates naive control. B, Lactate dehydrogenase in cortical neuron culture at 6 hours after exposure to 10 μmol/L FeCl3, with or without 20 μg/mL of mrLTF. Data are mean±SEM (n=3). P≤0.05; P≤0.01 vs other 2 groups.

Figure 6. mrLTF (recombinant mouse lactoferrin) protects mice from intracerebral hemorrhage (ICH) injury. A, Brain edema (water content in the ipsilateral and contralateral striatum) on day 3 after ICH. Data are mean±SEM (n=7). P≤0.05 vs saline control. A’, Grand neurological deficit score (NDS; a combination score of sensory-motor behavioral tests) on day 3 after ICH. Data are mean±SEM (n=12 per group). P≤0.05 vs saline control. B, Grand NDS on day 7 after ICH in animals treated with mrLTF starting 24 hours after ICH (n=5 per group). P≤0.05 vs saline control.
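Figure 6A quantifies brain edema as tissue water content via the wet/dry weight method. A minimal sketch of the standard calculation follows; the tissue weights are hypothetical values chosen for illustration, not measurements from this study.

```python
# Hedged sketch of the wet/dry brain-edema calculation referenced in
# Figure 6A. The standard formula is:
#   water content (%) = (wet weight - dry weight) / wet weight * 100
# The weights below are invented, for illustration only.
def water_content_percent(wet_mg: float, dry_mg: float) -> float:
    """Percent tissue water content from wet and dry tissue weights."""
    return (wet_mg - dry_mg) / wet_mg * 100.0

ipsi = water_content_percent(wet_mg=102.0, dry_mg=20.4)    # hematoma-affected striatum
contra = water_content_percent(wet_mg=100.0, dry_mg=21.0)  # contralateral striatum
print(f"ipsilateral: {ipsi:.1f}%  contralateral: {contra:.1f}%")
# Edema shows up as a higher water percentage on the ipsilateral side.
```

In this framing, a treatment effect (such as the mrLTF effect in Figure 6A) appears as a smaller ipsilateral-versus-contralateral difference in water content.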
Article Title: Beneficial Role of Neutrophils Through Function of Lactoferrin After Intracerebral Hemorrhage
706
https://people.maths.ox.ac.uk/trefethen/8all.pdf
CHAPTER 8. TREFETHEN

Chapter 8. Chebyshev spectral methods

8.1 Polynomial interpolation
8.2 Chebyshev differentiation matrices
8.3 Chebyshev differentiation by the FFT
8.4 Boundary conditions
8.5 Stability
8.6 Legendre points and other alternatives
8.7 Implicit methods and matrix iterations
8.8 Notes and references

This chapter discusses spectral methods for domains with boundaries. The effect of boundaries in spectral calculations is great, for they often introduce stability conditions that are both highly restrictive and difficult to analyze. Thus for a first-order partial differential equation solved on an N-point spatial grid by an explicit time-integration formula, a spectral method typically requires k = O(N^-2) for stability, in contrast to k = O(N^-1) for finite differences. For a second-order equation the disparity worsens to O(N^-4) vs. O(N^-2). To make matters worse, the matrices involved are usually non-normal, and often very far from normal, so they are difficult to analyze as well as troublesome in practice.

Spectral methods on bounded domains typically employ grids consisting of zeros or extrema of Chebyshev polynomials ("Chebyshev points"), zeros or extrema of Legendre polynomials ("Legendre points"), or some other set of points related to a family of orthogonal polynomials. Chebyshev grids have the advantage that the FFT is available for an O(N log N) implementation of the differentiation process, and they also have slight advantages connected with their ability to approximate functions. Legendre grids have various theoretical and practical advantages because of their connection with Gauss quadrature. At this point one cannot say which choice will win in the long run, but in this book, in keeping with our emphasis on Fourier analysis, most of the discussion is of Chebyshev grids.

Since explicit spectral methods are sometimes troublesome, implicit spectral calculations are increasingly popular. Spectral differentiation matrices are dense and ill-conditioned, however, so solving the associated systems of equations is not a trivial matter, even in one space dimension. Currently popular methods for solving these systems include preconditioned iterative methods and multigrid methods. These techniques are discussed briefly in Sec. 8.7.

8.1. Polynomial interpolation

Spectral methods arise from the fundamental problem of approximation of a function by interpolation on an interval. Multidimensional domains of a rectilinear shape are treated as products of simple intervals, and more complicated geometries are sometimes divided into rectilinear pieces.* In this section, therefore, we restrict our attention to the fundamental interval [-1,1]. The question to be considered is: what kinds of interpolants, in what sets of points, are likely to be effective?

Let N >= 0 be an integer, even or odd, and let x_0, ..., x_N (or sometimes x_1, ..., x_N) be a set of distinct points in [-1,1]. For definiteness, let the numbering be in reverse order,

    1 >= x_0 > x_1 > ... > x_{N-1} > x_N >= -1.

The following are some grids that are often considered:

    Equispaced points:         x_j = 1 - 2j/N,             0 <= j <= N,
    Chebyshev zero points:     x_j = cos((2j-1)pi/(2N)),   1 <= j <= N,
    Chebyshev extreme points:  x_j = cos(j pi/N),          0 <= j <= N,
    Legendre zero points:      x_j = j-th zero of P_N,     1 <= j <= N,
    Legendre extreme points:   x_j = j-th extremum of P_N,

where P_N is the Legendre polynomial of degree N. Chebyshev zero and extreme points can also be described as zeros and extrema of the Chebyshev polynomials T_N (more on these in Sec. 8.3). Chebyshev and Legendre zero points are also called Gauss-Chebyshev and Gauss-Legendre points, respectively, and Chebyshev and Legendre extreme points are also called Gauss-Lobatto-Chebyshev and Gauss-Lobatto-Legendre points, respectively. These names originate in the field of numerical quadrature.

*Such subdivision methods have been developed independently by I. Babuska and colleagues for
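These grid definitions, and the interpolation comparisons made later in this section, are easy to try numerically. The notes use Matlab; here is an illustrative Python/NumPy sketch (the test function 1/(1 + 16x^2) is a standard Runge-type example, my choice rather than the text's) comparing polynomial interpolation on equispaced versus Chebyshev extreme points:

```python
import numpy as np

def equispaced(N):
    # x_j = 1 - 2j/N, j = 0..N (decreasing order, as in the text)
    return 1 - 2 * np.arange(N + 1) / N

def cheb_extreme(N):
    # Chebyshev extreme (Gauss-Lobatto-Chebyshev) points x_j = cos(j*pi/N)
    return np.cos(np.pi * np.arange(N + 1) / N)

def max_interp_error(f, x):
    # Fit the degree-N interpolating polynomial through (x_j, f(x_j))
    # and measure the maximum error on a fine grid in [-1, 1].
    coeffs = np.polyfit(x, f(x), len(x) - 1)
    xf = np.linspace(-1, 1, 2001)
    return np.max(np.abs(np.polyval(coeffs, xf) - f(xf)))

f = lambda t: 1.0 / (1 + 16 * t**2)
err_equi = max_interp_error(f, equispaced(16))
err_cheb = max_interp_error(f, cheb_extreme(16))
print(err_equi, err_cheb)
```

With N = 16 the equispaced interpolant already oscillates wildly near the endpoints (the Runge phenomenon discussed below), while the Chebyshev error is orders of magnitude smaller.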
structural problems, who call them "p finite element methods," and by A. Patera and colleagues for fluids problems, who call them "spectral element methods."

It is easy to remember how Chebyshev points are defined: they are the projections onto the interval [-1,1] of equally spaced points (roots of unity) along the unit circle |z| = 1 in the complex plane.

[Figure: Chebyshev extreme points.]

To the eye, Legendre points look much the same, although there is no elementary geometrical definition; a figure comparing (a) Legendre zeros with (b) Chebyshev zeros illustrates the similarity.

[Figure: Legendre vs. Chebyshev zeros.]

As N -> infinity, equispaced points are distributed with density

    rho(x) = N/2    (equally spaced),

and Legendre or Chebyshev points — either zeros or extrema — have density

    rho(x) = N / (pi sqrt(1 - x^2))    (Legendre, Chebyshev).

Indeed, this density function applies to point sets associated with any Jacobi polynomials, of which Legendre and Chebyshev polynomials are special cases.

Why is it a good idea to base spectral methods upon Chebyshev, Legendre, and other irregular grids? We shall answer this question by addressing a second, more fundamental question: why is it a good idea to interpolate a function f(x) defined on [-1,1] by a polynomial p_N(x) rather than a trigonometric polynomial, and why is it a good idea to use Chebyshev or Legendre points rather than equally spaced points? The remainder of this section is just a sketch; details to be supplied later.

PHENOMENA

Trigonometric interpolation in equispaced points suffers from the Gibbs phenomenon (due to Michelson and Gibbs at the turn of the twentieth century): ||f - p_N|| = O(1) as N -> infinity, even if f is analytic. One can try to get around the Gibbs phenomenon by various tricks, such as doubling the domain and reflecting, but the price is high.

Polynomial interpolation in equally spaced points suffers from the Runge phenomenon (due to Meray and Runge): ||f - p_N|| = O(2^N) — much worse.

[Figure: The Runge phenomenon.]

Polynomial interpolation in Legendre or Chebyshev points behaves well: ||f - p_N|| = O(C^-N) for some constant C greater than 1, if f is analytic. Even if f is quite rough, the errors will still go to zero provided f is, say, Lipschitz continuous.

FIRST EXPLANATION: EQUIPOTENTIAL CURVES

Think of the limiting point distribution rho(x) above as a charge density distribution: a charge at position x is associated with a potential log|z - x|. Look at the equipotential curves of the resulting potential function

    phi(z) = integral_{-1}^{1} rho(x) log|z - x| dx.

CONVERGENCE OF POLYNOMIAL INTERPOLANTS

Theorem. In general, ||f - p_N|| -> 0 as N -> infinity in the largest region bounded by an equipotential curve in which f is analytic. In particular: for Chebyshev or Legendre points, or any other type of Gauss-Jacobi points, convergence is guaranteed if f is analytic on [-1,1]; for equally spaced points, convergence is guaranteed if f is analytic in a particular lens-shaped region containing [-1,1].

[Figure: Equipotential curves.]

SECOND EXPLANATION: LEBESGUE CONSTANTS

Definition of the Lebesgue constant: Lambda_N = ||I_N||, where I_N is the interpolation operator I_N : f -> p_N. A small Lebesgue constant means that the interpolation process is not much worse than best approximation:

    ||f - p_N|| <= (1 + Lambda_N) ||f - p*_N||,

where p*_N is the best (minimax, equiripple) approximation.

LEBESGUE CONSTANTS

Theorem.
    Equispaced points:  Lambda_N ~ 2^N / (e N log N),
    Legendre points:    Lambda_N ~ const * sqrt(N),
    Chebyshev points:   Lambda_N ~ const * log N.

THIRD EXPLANATION: NUMBER OF POINTS PER WAVELENGTH

Consider approximation of, say, f_N(x) = cos(c N x) as N -> infinity. Thus f_N changes, but the number of points per wavelength remains constant. Will the error ||f_N - p_N|| go to zero? The answer to this question tells us something about the ability of various kinds of spectral methods to resolve data.

POINTS PER WAVELENGTH

Theorem.
    Equispaced points: convergence if there are at least 6 points per wavelength.
    Chebyshev points:  convergence if there are at least pi points per wavelength on average.

We have to say "on average" because the grid is nonuniform. In fact, it is pi/2 times less dense in the middle than the equally spaced grid with the same number of points N (see the density formulas above). Thus the second part of the theorem says that we need at least 2 points per wavelength in the center of the grid — the familiar Nyquist limit. The first part of the theorem is mathematically valid, but of little value in practice because of rounding errors.

[Figure: Error as a function of N in interpolation of f_N, with the number of points per wavelength held fixed, for (a) equally spaced points and (b) Chebyshev points.]

8.2. Chebyshev differentiation matrices

(Just a sketch.) From now on, "Chebyshev points" means Chebyshev extreme points. Multiplication by the first-order Chebyshev differentiation matrix D_N transforms a vector of data at the Chebyshev points into approximate derivatives at those points:

    D_N (v_0, ..., v_N)^T = (w_0, ..., w_N)^T.

As usual, the implicit definition of D_N is as follows.

CHEBYSHEV SPECTRAL DIFFERENTIATION BY POLYNOMIAL INTERPOLATION:
1. Interpolate v by a polynomial q(x) = q_N(x).
2. Differentiate the interpolant at the grid points: w_j = (D_N v)_j = q'(x_j).

Higher-order differentiation matrices are defined analogously. From this definition it is easy to work out the entries of D_N in special cases. For N = 1, with x = (1, -1):

    D_1 = [  1/2  -1/2 ]
          [  1/2  -1/2 ]

For N = 2, with x = (1, 0, -1):

    D_2 = [  3/2  -2    1/2 ]
          [  1/2   0   -1/2 ]
          [ -1/2   2   -3/2 ]

For N = 3, with x = (1, 1/2, -1/2, -1):

    D_3 = [ 19/6  -4     4/3  -1/2  ]
          [  1    -1/3  -1     1/3  ]
          [ -1/3   1     1/3  -1    ]
          [  1/2  -4/3   4    -19/6 ]

These three examples illustrate an important fact mentioned in the introduction to this chapter: Chebyshev spectral differentiation matrices are in general not symmetric or skew-symmetric. A more general statement is that they are not normal.* This is why stability analysis is difficult for spectral methods. The reason they are not normal is that, unlike finite-difference differentiation, spectral differentiation is not a translation-invariant process, but depends instead on the same global interpolant at all points x_j.

The general formula for D_N is as follows. First define c_i = 2 for i = 0 or N, and c_i = 1 for 1 <= i <= N-1 (and of course analogously for c_j). Then:

CHEBYSHEV SPECTRAL DIFFERENTIATION

Theorem. Let N >= 1 be any integer. The first-order spectral differentiation matrix D_N has entries

    (D_N)_{00} = (2N^2 + 1)/6,
    (D_N)_{NN} = -(2N^2 + 1)/6,
    (D_N)_{jj} = -x_j / (2(1 - x_j^2))                for 1 <= j <= N-1,
    (D_N)_{ij} = (c_i/c_j) (-1)^{i+j} / (x_i - x_j)   for i != j.

Analogous formulas for D_N^2 can be found in Peyret (Ehrenstein & Peyret, ref.) and in Zang, Streett, and Hussaini (ICASE Report). See also Canuto, Hussaini, Quarteroni & Zang.

*Recall that a normal matrix A is one that satisfies AA^* = A^*A. Equivalently, A possesses an orthogonal set of eigenvectors, which implies many desirable properties, such as ||A^n|| = ||A||^n for any n >= 0.

A note of caution: D_N is rarely used in exactly the form described in the Theorem, for boundary conditions will modify it slightly, and these depend on the problem.

EXERCISES

Prove that for any N, D_N is nilpotent: D_N^n = 0 for a sufficiently high integer n.

8.3. Chebyshev differentiation by the FFT

Polynomial interpolation in Chebyshev points is equivalent to trigonometric interpolation in equally spaced points, and hence can be carried out by the FFT. The algorithm described below has the optimal order O(N log N), but we do not worry about achieving the optimal constant factor. For more practical discussions, see Appendix B of the book by Canuto et al., and also P. N. Swarztrauber, "Symmetric FFTs," Math.
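The Theorem's entries map directly into code. A Python/NumPy sketch (the notes use Matlab; the helper name cheb_D is mine), checked on x^3, for which degree-N spectral differentiation is exact when N >= 3:

```python
import numpy as np

def cheb_D(N):
    """First-order Chebyshev differentiation matrix on x_j = cos(j*pi/N)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0                       # c_i = 2 at the endpoints, 1 inside
    D = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i, j] = (c[i] / c[j]) * (-1.0) ** (i + j) / (x[i] - x[j])
    D[0, 0] = (2 * N**2 + 1) / 6.0
    D[N, N] = -(2 * N**2 + 1) / 6.0
    jj = np.arange(1, N)
    D[jj, jj] = -x[jj] / (2 * (1 - x[jj] ** 2))
    return D, x

D, x = cheb_D(5)
w = D @ x**3                                # exact for polynomials of degree <= N
print(np.max(np.abs(w - 3 * x**2)))
```

The residual is at rounding-error level, illustrating that D_N differentiates the interpolating polynomial exactly.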
Comp. (Optimal, that is, so far as anyone knows.) Valuable additional references are the book The Chebyshev Polynomials by Rivlin, and a chapter of P. Henrici, Applied and Computational Complex Analysis.

Consider three independent variables: theta in R, x in [-1,1], and z in S, where S is the complex unit circle {z : |z| = 1}. They are related as follows:

    z = e^{i theta},    x = Re z = (z + z^{-1})/2 = cos theta,

which implies

    dx/d theta = -sin theta = -sqrt(1 - x^2).

Note that there are two conjugate values z in S for each x in [-1,1], and an infinite number of possible choices of theta.

[Figure: the variables z, x, and theta.]

In generalization of the fact that the real part of z is x, the real part of z^n (n >= 0) is T_n(x), the Chebyshev polynomial of degree n. This statement can be taken as a definition of Chebyshev polynomials:

    T_n(x) = Re z^n = (z^n + z^{-n})/2 = cos n theta,

where x, z, and theta are, as always, implicitly related as above. It is clear that this defines T_n(x) to be some function of x, but it is not obvious that the function is a polynomial. However, a calculation of the first few cases makes it clear what is going on:

    T_0(x) = 1,
    T_1(x) = (z + z^{-1})/2 = x,
    T_2(x) = (z^2 + z^{-2})/2 = 2x^2 - 1,
    T_3(x) = (z^3 + z^{-3})/2 = 4x^3 - 3x.

In general, the Chebyshev polynomials are related by the three-term recurrence relation

    T_{n+1}(x) = (z^{n+1} + z^{-(n+1)})/2
               = (z + z^{-1})(z^n + z^{-n})/2 - (z^{n-1} + z^{-(n-1)})/2
               = 2x T_n(x) - T_{n-1}(x).

By the relations above, the derivative of T_n(x) is

    T_n'(x) = -n sin n theta * (d theta/dx) = n sin n theta / sin theta.

Thus, just as x, z, and theta are equivalent, so are T_n(x), z^n, and cos n theta. By taking linear combinations, we obtain three equivalent kinds of polynomials. A trigonometric polynomial q(theta) of degree N is a 2 pi-periodic sum of complex exponentials (or equivalently, sines and cosines). Assuming that q(theta) is an even function of theta, it can be written

    q(theta) = (1/2) sum_{n=0}^{N} a_n (e^{i n theta} + e^{-i n theta}) = sum_{n=0}^{N} a_n cos n theta.

A Laurent polynomial q(z) of degree N is a sum of negative and positive powers of z up to degree N. Assuming q(z) = q(z^{-1}) for z in S, it can be written

    q(z) = (1/2) sum_{n=0}^{N} a_n (z^n + z^{-n}).

An algebraic polynomial q(x) of degree N is a polynomial in x of the usual kind, and we can express it as a linear combination of Chebyshev polynomials:**

    q(x) = sum_{n=0}^{N} a_n T_n(x).

**Equivalently, the Chebyshev polynomials can be defined as a system of polynomials orthogonal on [-1,1] with respect to the weight function (1 - x^2)^{-1/2}.

The use of the same coefficients a_n in these three formulas is no accident, for all three of the polynomials above are identical:

    q(theta) = q(z) = q(x),

where again x, z, and theta are implicitly related as above. For this reason we hope to be forgiven the sloppy use of the same letter q in all three cases. Finally, for any integer N >= 1, we define regular grids in the three variables as follows:

    theta_j = j pi/N,    z_j = e^{i theta_j},    x_j = Re z_j = (z_j + z_j^{-1})/2 = cos theta_j,    0 <= j <= N.

The points {x_j} and {z_j} were illustrated already in the earlier figure. And now we are ready to state the algorithm for Chebyshev differentiation by the FFT.

ALGORITHM FOR CHEBYSHEV DIFFERENTIATION

0. Given data {v_j} defined at the Chebyshev points {x_j}, 0 <= j <= N, think of the same data as being defined at the equally spaced points {theta_j} in [0, pi].

1. (FFT) Find the coefficients {a_n} of the trigonometric polynomial

       q(theta) = sum_{n=0}^{N} a_n cos n theta

   that interpolates {v_j} at {theta_j}.

2. (FFT) Compute the derivative

       dq/d theta = - sum_{n=0}^{N} n a_n sin n theta.

3. Change variables to obtain the derivative with respect to x:

       dq/dx = (dq/d theta)(d theta/dx)
             = sum_{n=0}^{N} n a_n sin n theta / sin theta
             = sum_{n=0}^{N} n a_n sin n theta / sqrt(1 - x^2).

   At x = +-1, i.e. theta = 0 or pi, L'Hopital's rule gives the special values

       dq/dx(+-1) = sum_{n=0}^{N} (+-1)^{n+1} n^2 a_n.

4. Evaluate the result at the Chebyshev points: w_j = dq/dx (x_j).

Note that by the equivalences above, the interpolant of Step 1 can be interpreted as a linear combination of Chebyshev polynomials, and Step 2 then produces the corresponding linear combination of derivatives. But of course the algorithmic content of the description above relates to the variable theta, for in Steps 1 and 2 we have performed Fourier spectral differentiation exactly as before: discrete Fourier transform, multiply by the wavenumber times i, inverse discrete Fourier transform.
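Steps 0-4 above can be sketched as follows in Python/NumPy (the notes use Matlab). Step 1 uses an FFT of the even extension of the data; Steps 2-3 are written as explicit O(N^2) sums for clarity, though in practice they would also use an FFT:

```python
import numpy as np

def cheb_diff_fft(v):
    """Differentiate data v_j given at Chebyshev points x_j = cos(j*pi/N)."""
    N = len(v) - 1
    theta = np.pi * np.arange(N + 1) / N
    # Step 1 (FFT): cosine coefficients a_n of the interpolant q(theta)
    V = np.concatenate([v, v[N-1:0:-1]])       # even extension, length 2N
    F = np.real(np.fft.fft(V)) / N
    a = F[:N + 1].copy()
    a[0] /= 2.0
    a[N] /= 2.0
    n = np.arange(N + 1)
    w = np.empty(N + 1)
    # Steps 2-3: dq/dx = sum_n n a_n sin(n theta) / sin(theta) at interior points
    w[1:N] = (np.sin(np.outer(theta[1:N], n)) @ (n * a)) / np.sin(theta[1:N])
    # L'Hopital at the endpoints x = +1 (theta = 0) and x = -1 (theta = pi)
    w[0] = np.sum(n**2 * a)
    w[N] = np.sum((-1.0) ** (n + 1) * n**2 * a)
    return w

N = 8
x = np.cos(np.pi * np.arange(N + 1) / N)
w = cheb_diff_fft(x**3)
print(np.max(np.abs(w - 3 * x**2)))            # exact up to rounding
```

Since x^3 has degree 3 <= N, the computed derivative agrees with 3x^2 to rounding error, matching the matrix-based route.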
Only the use of sines and cosines rather than complex exponentials, and of n instead of the Fourier wavenumber, has disguised the process somewhat. (One could equally well work with Chebyshev polynomials U_n(x) of the second kind.)

EXERCISES

Fourier and Chebyshev spectral differentiation. Write four brief, elegant Matlab programs for first-order spectral differentiation:

    FDERIVM, CDERIVM — construct differentiation matrices;
    FDERIV, CDERIV — differentiate via FFT.

In the Fourier case there are N equally spaced points (N even) and no boundary conditions. In the Chebyshev case there are N+1 Chebyshev points x_0, ..., x_N in [-1,1] (N arbitrary), with a zero boundary condition at x_0 = 1. The effect of this boundary condition is that one removes the first row and first column from D_N, leading to a square matrix of dimension N instead of N+1. You do not have to worry about computational efficiency, such as using an FFT of length N rather than 2N in the Chebyshev case, but you are welcome to worry about it if you like. Experiment with your programs to make sure they differentiate successfully. (Of course, the matrices can be used to check the FFT programs.)

(a) Turn in a plot showing the function u(x) = cos x and its derivative computed by FDERIV. Discuss the results.
(b) Turn in a plot showing the function u(x) = cos x and its derivative computed by CDERIV. Discuss the results.
(c) Plot the eigenvalues of D_N for Fourier and Chebyshev spectral differentiation for several values of N.

Stability

This section is not yet written. What follows is a copy of a paper of mine from K. W. Morton and M. J. Baines, eds., Numerical Methods for Fluid Dynamics III, Clarendon Press, Oxford. Because of stability problems like those described in this paper, more and more attention is currently being devoted to implicit time-stepping methods for spectral computations. The associated linear algebra problems are generally solved by preconditioned matrix iterations, sometimes including a multigrid iteration.

This paper was written before I was using the terminology of pseudospectra. I would now summarize it by saying that although the spectrum of the Legendre spectral differentiation matrix is of size O(N) as N -> infinity, the epsilon-pseudospectra are of size O(N^2) for any epsilon > 0. The connection of pseudospectra with the stability of the method of lines was discussed earlier.

[The pages of the reproduced paper are missing from this scan.]

Some review problems

EXERCISES

TRUE or FALSE. Give each answer together with at most two or three sentences of explanation. The best possible explanation is a proof, a counterexample, or the citation of a theorem in the text from which the answer follows. If you can't do quite that well, try at least to give a convincing reason why the answer you have chosen is the right one. In some cases a well-thought-out sketch will suffice.

(a) The Fourier transform of f(x) = exp(-x^2) has compact support.
(b) When you multiply a matrix by a vector on the right, i.e. Ax, the result is a linear combination of the columns of that matrix.
(c) If an ODE initial-value problem with a smooth solution is solved by the fourth-order Adams-Bashforth formula with step size k, and the missing starting values v_1, v_2, v_3 are obtained by taking Euler steps with some step size k', then in general we will need k' = O(k^2) to maintain
overall fourth-order accuracy.
(d) If a consistent finite-difference model of a well-posed linear initial-value problem violates the CFL condition, it must be unstable.
(e) If you Fourier transform a function u in L^2 four times in a row, you end up with u again, times a constant factor.
(f) If the function f(x) = … is interpolated by a polynomial q_N(x) in equally spaced points of [-1,1], then ||f - q_N|| -> 0 as N -> infinity.
(g) e^x = O(x e^x) as x -> infinity.
(h) If a stable finite-difference approximation to u_t = u_x with real coefficients has order of accuracy …, then the formula must be dissipative.
(i) If A = […], then ||A^n|| <= Cn for some constant C > 0.
(j) If the equation u_t = Au is solved by the fourth-order Adams-Moulton formula, where u(x,t) is a vector and A is the matrix above, then k = … is a sufficiently small time step to ensure time-stability.
(k) Let u_t = u_xx on [-1,1] with periodic boundary conditions be solved by Fourier pseudospectral differentiation in x, coupled with a fourth-order Runge-Kutta formula in t. For N = …, k = … is a sufficiently small time step to ensure time-stability.
(l) The ODE initial-value problem u_t = f(u,t) = cos u, u(0) = …, t >= 0, is well-posed.
(m) In exact arithmetic and with exact starting values, the numerical approximations computed by the linear multistep formula v^{n+…} = … (coefficients lost in the scan) are guaranteed to converge to the unique solution of a well-posed initial-value problem in the limit k -> 0.
(n) If computers did not make rounding errors, we would not need to study stability.
(o) The solution at time t = 1 to u_t = u_x + u_xx, x in R, with initial data u(x,0) = f(x), is the same as what you would get by first diffusing the data f(x) according to the equation u_t = u_xx, then translating the result leftward by one unit according to the equation u_t = u_x.
(p) The discrete Fourier transform of a three-dimensional periodic set of data on an N x N x N grid can be computed on a serial computer in O(N^3 log N) operations.
(q) The addition of numerical dissipation may sometimes increase the stability limit of a finite-difference formula without affecting the order of accuracy.
(r) For a nondissipative semidiscrete finite-difference model (i.e., discrete space but continuous time), phase velocity, as well as group velocity, is a well-defined quantity.
(s) v_0^{…} = v_1^{…} is a stable left-hand boundary condition for use with the leapfrog model of u_t = u_x with k/h = ….
(t) If a finite-difference model of a partial differential equation is stable with k/h = lambda_0 for some lambda_0 > 0, then it is stable with k/h = lambda for any lambda <= lambda_0.
(u) To solve the system of equations that results from a standard second-order discretization of Laplace's equation on an N x N x N grid in three dimensions, by the obvious method of banded Gaussian elimination without any clever tricks, requires O(N^7) operations on a serial computer.
(v) If u(x,t) is a solution to u_t = iu_xx for x in R, then the norm ||u(., t)|| is independent of t.
(w) In a method-of-lines discretization of a well-posed linear IVP, having the appropriate eigenvalues fit in the appropriate stability region is sufficient, but not necessary, for Lax-stability.
(x) Suppose a signal that is band-limited to frequencies in the range … kHz is sampled … times a second, i.e., fast enough to resolve frequencies in the range … kHz. Then, although some aliasing will occur, the information in the range … kHz remains uncorrupted.

Two final problems

EXERCISES

Equipotential curves. Write a short and elegant Matlab program to plot equipotential curves in the plane corresponding to a vector of point charges (interpolation points) x_0, ..., x_N. Your program should simply sample

    N^{-1} sum_j log|z - x_j|

on a grid, then produce a contour plot of the result (see MESHDOM and CONTOUR). Turn in beautiful plots corresponding to (a) equispaced points, (b) Chebyshev points, (c) equispaced points, and (d) Chebyshev points, for two values of N. By all means play around with 3D graphics, convergence and divergence of associated interpolation processes, or other amusements if you're in the mood.

Fun with Chebyshev spectral methods. The starting point of this problem is the Chebyshev differentiation matrix of the earlier exercise. It will be easiest to use a program like CDERIVM from that exercise, which works with an explicit matrix, rather than the FFT. Be careful with boundary conditions: you will want to square the (N+1) x (N+1) matrix first, before stripping off any rows or columns.

(a) Poisson equation in 1D. The function u(x) = (1 - x^2)e^x satisfies u(+-1) = 0 and has second derivative u''(x) = -(1 + 4x + x^2)e^x. Thus it is the solution to the boundary-value problem

    u_xx = -(1 + 4x + x^2)e^x,    x in (-1,1),    u(+-1) = 0.

Write a little Matlab program to solve this by a Chebyshev spectral method and produce a plot of the computed discrete solution values (N-1 discrete points in (-1,1)) superimposed upon the exact solution (a curve). Turn in the plot for one value of N, and a table of the errors ||u_computed - u_exact|| for several values of N. What can you say about the rate of convergence?

(b) Poisson equation in 2D. Similarly, the function u(x,y) = (1 - x^2)(1 - y^2) cos(x + y) is the solution to the boundary-value problem

    u_xx + u_yy = [right-hand side "sorry, illegible" in the original],    (x,y) in (-1,1)^2,    u = 0 on the boundary.

Write a Matlab program to solve this by a Chebyshev spectral method involving a grid of (N-1)^2 interior points. You may find that the Matlab command KRON comes in handy for this purpose. You don't have to produce a plot of the computed solution, but do turn in a table of the errors |u_computed(0,0) - u_exact(0,0)| for several values of N. How does the rate of convergence look?

(c) Heat equation in 1D. Back to 1D now. Suppose you have the problem

    u_t = u_xx,    u(+-1, t) = 0,    u(x, 0) = (1 - x^2)e^x.

At what time t_c does max_x u(x,t) first fall below a given threshold? Figure out the answer to at least several digits of relative precision. Then describe what you would do if I asked for many more digits.
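Part (a) of the final problem can be prototyped directly from the recipe above ("square first, then strip the boundary rows and columns"). A Python/NumPy sketch (the notes call for Matlab; here the diagonal of D is formed by the negative-row-sum trick, a standard equivalent of the explicit diagonal formulas, and u = (1 - x^2)e^x is my reading of the garbled exact solution):

```python
import numpy as np

def cheb_D(N):
    # First-order Chebyshev differentiation matrix on x_j = cos(j*pi/N).
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0
    ij = np.add.outer(np.arange(N + 1), np.arange(N + 1))
    dX = np.subtract.outer(x, x) + np.eye(N + 1)   # eye avoids division by zero
    D = (np.outer(c, 1.0 / c) * (-1.0) ** ij) / dX
    D -= np.diag(D.sum(axis=1))        # diagonal = negative sum of the row
    return D, x

N = 24
D, x = cheb_D(N)
D2 = (D @ D)[1:N, 1:N]                 # square FIRST, then strip boundary rows/cols
f = -(1 + 4 * x + x**2) * np.exp(x)    # u_xx = f, exact solution u = (1 - x^2)e^x
u = np.linalg.solve(D2, f[1:N])
err = np.max(np.abs(u - (1 - x[1:N] ** 2) * np.exp(x[1:N])))
print(err)                             # spectrally small
```

The row-sum trick works because every differentiation matrix annihilates the constant vector, so the true diagonal entry equals minus the sum of the off-diagonal entries in its row.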
707
https://www.quora.com/What-is-the-oxidation-state-barium-in-Bao2
What is the oxidation state of barium in BaO2?

Assistant Bot · Sep 6
Determine oxidation states using charge balance. Let the oxidation state of Ba be x. In BaO2 the peroxide ion O2^2- is present, so each oxygen has the formal oxidation state -1, and the two oxygens contribute -2 in total. The compound is neutral: x + (-2) = 0, so x = +2. Therefore the oxidation state of barium in BaO2 is +2.

Purna Chandra Sahu, M.Sc. in Pure Chemistry & Inorganic Chemistry, Calcutta University · 4y
The oxidation number of Ba in BaO2 is +2. It can be easily understood by drawing the structure of BaO2. Each Ba—O bond contributes +1 to Ba and -1 to oxygen, as barium is more electropositive than oxygen. The O—O bond contributes zero to each oxygen. So the oxidation number of Ba is +2.
Praveen, Excellent debate in politics · Author has 73 answers and 233K answer views · 6y
Finding the oxidation state of an element in a compound is a basic matter of charge balance. Ans: +2. Here oxygen exhibits the oxidation state of -1. This implies x + 2(-1) = 0, i.e. x - 2 = 0, therefore x = +2.
Shlok Patel, BE from A D Patel Institute of Technology, Vallabh Vidhyanagar (Graduated 2023) · 6y
Ans: +2. Here oxygen exhibits the oxidation state of -1. This implies x + 2(-1) = 0, so x - 2 = 0 and x = +2.
https://www.youtube.com/watch?v=7jI0Hv9h2t8
Karnaugh Map, 3 Variables Barry Brown 26700 subscribers 822 likes Description 120285 views Posted: 17 Sep 2012 This video shows you how to simplify a Boolean expression of three variables using a Karnaugh map. 78 comments Transcript: Intro In this video we're going to simplify this three-variable Boolean expression using a Karnaugh map. If you haven't done so already, you might want to check out my introductory video about Karnaugh maps, and also the one that shows you how to simplify a two-variable Drawing the grid expression. Now let's draw our grid. Since this is three variables, it's going to be a 2x4 grid with one variable down the side and the other two variables across the top. Don't forget that you need to use a Gray code to write these variables across the top; the Gray code ensures that between each column only one variable changes at a time. Now fill the boxes with dots corresponding to each one of the terms in the expression. The first one is NOT-X NOT-Y Z, which is the box down in the lower right-hand corner. The next one is the intersection of X, NOT-Y and NOT-Z, which is third from the left in the top row. The next term involves only two variables: it's Y NOT-Z, which is this entire column, so put a dot in both boxes. Similarly for the next term: this is the intersection of X, which could be anywhere in the top row, and Y, which are the two cells in the top left-hand corner; put a dot in both those boxes. There's already a dot in this one, and that's okay. And the last term is NOT-X Y Z, which is the box in the lower left-hand corner. Now we've got all our dots placed, it's time to start drawing those boxes. Remember the rules: they've got to be as big as possible, their dimensions have to be powers of two, and we want to cover all of the dots using as few boxes as possible. I see three boxes here. The first one is the one that encases all four of these on the left-hand side; this is a 2x2 box. The next box I see is the one that covers these two dots. Remember, we want to
make the boxes as big as possible, so we wouldn't just circle this one by itself; we'd want to make this box a 1x2 box, and overlap is okay. And the last box I see covers this dot here, but it goes off the right-hand side and comes back to cover this one on the left-hand Writing the expressions side. So there are our boxes. We've covered all the dots, leaving the empty cells alone; each of the boxes has dimensions that are powers of two (in other words, we don't have any here that are, say, 3x1 or 3x2); they're as large as possible; and we've used as few as possible to cover all the dots, and overlap is okay. Now all that remains is to write down the expressions for each one of the boxes. This big 2x2 box over here on the left-hand side is the Y box. You can look at the key in my first video, or you can figure it out by looking at the labels here and asking which variable all four of these dots share in common. The only one they share is Y alone; Z changes and X changes, and if you were to write out all the terms of this as a union, you'd find that the variables that change just cancel out. So this one is just the Y box. The one in the middle is the X NOT-Z box, and the one that extends off the sides is the NOT-X Z box (the two bottom-corner cells share NOT-X and Z). To finish up, just put a plus between each of the terms to make a sum-of-products form, and the answer that we get is the simplification of our original expression.
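The simplification can be sanity-checked by brute force: enumerate all eight input combinations and compare the original sum-of-products against the three boxes read off the map (a quick sketch, not part of the video; note the wrap-around box covers the two bottom-corner cells, which share NOT-X and Z, so it reads ¬X·Z):

```python
from itertools import product

# f = ¬X·¬Y·Z + X·¬Y·¬Z + Y·¬Z + X·Y + ¬X·Y·Z  (the original five terms)
def original(x, y, z):
    return ((not x and not y and z) or (x and not y and not z) or
            (y and not z) or (x and y) or (not x and y and z))

# g = Y + X·¬Z + ¬X·Z  (the three boxes read off the Karnaugh map)
def simplified(x, y, z):
    return y or (x and not z) or (not x and z)

# The two forms agree on every row of the truth table.
assert all(original(x, y, z) == simplified(x, y, z)
           for x, y, z in product([False, True], repeat=3))
```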
https://bmcmedimaging.biomedcentral.com/articles/10.1186/s12880-022-00942-y
Efficacy of extended focused assessment with sonography for trauma using a portable handheld device for detecting hemothorax in a low resource setting; a multicenter longitudinal study BMC Medical Imaging volume 22, Article number: 211 (2022) Abstract Introduction Chest trauma is one of the most important and commonest injuries requiring timely diagnosis, accounting for 25–50% of trauma related deaths globally. Although CT scan is the gold standard for detection of haemothorax, it is only useful in stable patients and remains unavailable in most hospitals in low income countries; where available, it is very expensive. Sonography has been reported to have high accuracy and sensitivity in trauma diagnosis but is rarely used in trauma patients in low income settings, in part due to lack of sonography machines and lack of expertise among trauma care providers. Chest X-ray is the most available investigation for chest injuries in low income countries; however, it is often not safe to wheel seriously injured, unstable trauma patients to X-ray rooms. This study aimed at determining the efficacy of extended focused assessment with sonography for trauma (eFAST) in detection of haemothorax, using thoracostomy findings as a surrogate gold standard in a low resource setting. Methods This was an observational longitudinal study that enrolled 104 study participants with chest trauma. Informed consent was obtained from all participants. A questionnaire was administered and eFAST, chest X-ray and tube thoracostomy were done as indicated. Data were analysed using SPSS version 22. The sensitivity, specificity, predictive values, accuracy and area under the curve were determined using thoracostomy findings as the gold standard. Ethical approval for the study was obtained from the Research and Ethics Committee of Kampala International University Western Campus, REC number KIU-2021-53.
Results eFAST was found to be superior to chest X-ray, with a sensitivity of 96.1% versus 45.1% respectively. The accuracy was also higher for eFAST (96.4% versus 49.1%) but the specificity was the same at 100.0%. The area under the curve was higher for eFAST (0.980, P = 0.001 versus 0.725, P = 0.136). Combining eFAST and X-ray increased both sensitivity and accuracy. Conclusion This study revealed that eFAST was more sensitive at detecting haemothorax among chest trauma patients compared to chest X-ray. All patients presenting with chest trauma should have bedside eFAST for diagnosis of haemothorax. Background Focused Assessment with Sonography in Trauma (FAST) is a bedside test developed in the mid-1990s for use in acute trauma patients to rapidly assess for intra-abdominal hemorrhage and to rule out clinically significant pericardial tamponade. The Extended Focused Assessment with Sonography in Trauma (eFAST) adds additional views of the hemithoraces to look for signs of pneumothorax and haemothorax. These include the right and left pleural spaces (anterior axillary line between the 6th and 9th intercostal spaces), and the left and right anterior pleural spaces (midclavicular line between the 2nd and 3rd intercostal spaces). The ability of high-resolution ultrasound to differentiate individual tissue densities, together with its being non-invasive, non-traumatic and available as a portable handheld device, makes it preferable to conventional radiography and nuclear medicine, especially in unstable trauma patients requiring prompt intervention to save life. Trauma is one of the leading causes of mortality and disability-adjusted life years lost worldwide. The trauma burden is highest in low and middle income countries (LMICs) (WHO, 2017). In Africa, trauma is the number one cause of deaths amongst individuals in their productive youthful stage, and its resulting mortality is disproportionately higher compared to other regions of the world.
Chest trauma is one of the most important and commonest injuries requiring timely diagnosis, accounting for 25–50% of trauma related mortality globally according to Eyo et al. East Africa in particular experiences a significant burden of chest injuries. According to Chalya et al., chest trauma contributed to 44% of all injuries due to road traffic accidents seen at Bugando Medical Centre in Tanzania and was responsible for up to 24.2% mortality. In Uganda, chest trauma contributed to 34.7% of road traffic injury cases seen at the country's national referral hospital in Central Uganda, with a resulting mortality rate of 17%. A similar mortality (16.9%) was reported at Mbarara regional referral hospital in Western Uganda. Diagnosis of chest injuries is a challenge in low income countries due to limited access to the computed tomographic scan that is deemed the gold standard. Furthermore, the more readily accessible chest radiographs are associated with immense costs, especially for the multiply injured, exposure to radiation, and overcrowding of the emergency department due to waiting lists. Extended focused assessment with sonography for trauma (eFAST), which can be done at the bedside, has been introduced as a potential diagnostic tool. However, there is limited published data on the accuracy and applicability of this cost-effective and radiation-free tool in the detection of traumatic hemothorax in our settings. Chest X-rays are the most available method of investigating chest injuries in low income countries and are thus assumed to be the gold standard in this context, but the X-ray machines are often malfunctioning and pose potential exposure to ionizing radiation sufficient for cancer development. Ultrasound is cheap, accessible and fast, and can be performed at the bedside without interrupting resuscitation or worsening injuries during transfer.
To date, minimal efforts have been made in Uganda to incorporate eFAST in standard operating procedures for investigating haemothorax and haemo-pneumothorax in chest trauma, despite it being cheaper than a chest X-ray, carrying no risk of radiation exposure, and the growing body of evidence for eFAST use for this purpose. To the best of our knowledge, this was the first study in low and middle income countries that assessed eFAST in detecting hemothorax by comparing it to chest X-ray, taking thoracostomy findings as the gold standard. This study was aimed at determining the efficacy of eFAST in detection of haemothorax using thoracostomy findings as a surrogate gold standard in a low resource setting. Study methods Study design This was a two center observational longitudinal study; patients were followed from admission to completion of surgical intervention, observing for findings on eFAST, X-ray and surgical intervention. Study setting The study was conducted at the accident and emergency (A&E) and the radiology departments of Kampala International University teaching hospital (KIU-TH) and Mbarara regional referral hospital (MRRH) in south-western Uganda. Study population All patients with traumatic chest injuries who attended the A&E and radiology departments of KIU-TH and MRRH during the one year period from 1st May 2021 to 30th April 2022 were considered for the study. Sample size estimation Daniel's formula for determining the sample size was used. According to the study on the epidemiology of motorcycle injuries presenting to Uganda's national referral hospital, Mulago, traumatic chest injuries contributed 34.7% of all trauma injuries. Using the formula N = Z²P(1 − P)/E², where N = the sample size, Z = the score corresponding to the 95% confidence interval, which is 1.96, P = the proportion of chest-injured patients, which is 34.7%, (1 − P) = 65.3%, and E = the margin of error set at 5%, the sample size N = 348.
To increase the internal validity of the study and cater for non-responders, the calculated sample was increased by 10%, giving an estimated sample size of 383. Adjusting the sample size to the finite population: N = ns / (1 + (ns − 1)/n), where N = the adjusted sample size, ns = the estimated sample size, and n = the population under study = 142 (based on the hospital data registry), giving N = 104. Therefore, a sample size of 104 participants with chest injuries was considered for the study duration of 12 months. Sampling technique Consecutive recruitment was used to enroll all eligible participants until the required sample size was realized. 69 (66.3% of the sample size) were recruited from MRRH and the remaining 35 (33.7%) from KIU-TH using proportionate sampling. Eligibility criteria Inclusion criteria All patients with chest injuries who consented were recruited into the study. Exclusion criteria Unstable patients who had thoracostomy before chest X-ray, those with massive hemothorax or cardiac tamponade, and patients with documented evidence of pleural effusion prior to the trauma event were excluded from the study. Training of research assistants Five surgery residents including the principal investigator participated in the study and were trained in the use of point of care ultrasound. The team was availed with the "point of care ultrasound in resource limited environments" (PURE) model manual, which consists of core concepts of emergency ultrasound such as focused assessment for trauma, with the aim of establishing competence in knowledge related to the indication of the scan, image acquisition, interpretation and integration of findings into patient management. Later the team attended a four week intensive practicum on the use of eFAST in chest and abdominal trauma evaluation and on using the data collection tools.
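The sample-size arithmetic described above (Daniel's formula, the 10% inflation for non-response, and the finite population correction) can be reproduced directly; a sketch using the figures quoted in the text:

```python
# Daniel's formula: N = Z^2 * P * (1 - P) / E^2
z, p, e = 1.96, 0.347, 0.05
n_initial = z**2 * p * (1 - p) / e**2           # ≈ 348

# Inflate by 10% to cater for non-responders
n_inflated = round(n_initial * 1.10)            # 383

# Finite population correction with n = 142 (hospital data registry)
population = 142
n_adjusted = n_inflated / (1 + (n_inflated - 1) / population)

print(round(n_initial), n_inflated, round(n_adjusted))  # 348 383 104
```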
The PURE model involves use of the electronic "point of care" ultrasonography (POCUS) manual, didactic lectures embedded with videos, followed by practical sessions and a knowledge retention assessment test. This training module has been validated in Kenya in similar settings [18] and is accredited by the African Federation for Emergency Medicine [18]. The principal trainers were qualified radiologists from Uganda who were experienced in the use of FAST and eFAST. The training was facilitated by the investigator and trainees. Because of concerns about the learning curve, the investigator and research assistants continued to work under the supervision of qualified sonographers and radiologists throughout the study period. Data collection tools Data for this study were collected using an investigator-administered checklist. The key variables of interest included demographics, injury mechanisms and patterns, presence or absence of haemothorax or haemo-pneumothorax, nature of surgical intervention, and findings on ultrasound, CXR and tube thoracostomy. The two investigative techniques, eFAST and CXR, were compared with the findings on tube thoracostomy. The findings at tube thoracostomy were used as the surrogate gold standard to confirm whether the investigations correctly detected the haemothorax. Data collection procedure After attending to and excluding life threatening airway emergencies in the primary survey, the researchers explained the study and its purpose to the participants in order to obtain an informed consent document with a signature or thumb print. However, in the event of suspected massive haemothorax or tension pneumothorax, eFAST was performed as part of the primary survey and intervention made immediately, before the administration of the questionnaire. A pretested checklist of parameters of interest was used by the investigator and his data collection team at the radiology and accident and emergency departments.
ATLS principles were used in initial assessment and management, with eFAST as an adjunct in the primary survey. A complete history, physical examination and imaging assessment of the chest were done, followed by chest X-ray, which is the surrogate diagnostic standard in our setting. The findings for both investigations were recorded on the data tool. Two portable hand held ultrasound systems (Mindray DP-6600 FL, USA) were used in this study, one for each site. The device's manufacturer has indicated eFAST as one of its uses, and the device has been validated in previous studies in addition to being suitable in rural areas where there could be electricity blackouts. Ultrasound procedure This procedure was performed in accordance with Taylor and O'Rourke. The patient was asked to remove clothing and other objects such as jewelry that could interfere with the scan. The patient was positioned on the examination bed either lying on the back or side, or sitting up with arms raised and hands clasped around the neck, depending on the level of consciousness. Ultrasound gel was placed on the area of the chest to undergo examination. Ultrasound waves were sent from the transducer into the area being examined, reflected off structures, and were analyzed by the ultrasound machine, which created an image on the screen. The images generated were stored digitally. Patients were at times asked to cough, shift position or sniff for clarity of chest structures. Chest X-ray procedure This was carried out in a radiology certified room with fixed X-ray machines. The patient was asked to undress, remove jewelry, and stand (PA view) or lie (lateral decubitus view) next to a cassette that recorded images for processing. For severely injured patients and suspected spinal injury patients, the X-ray tube and the image receptor were positioned, rather than the patient or the part, to avoid the risk of worsening the patient's condition. The patient was instructed to roll the shoulders forward, withhold breath, and stay still while the image was being taken.
The image was recorded on computer and printed on film for interpretation. Chest tube insertion procedure This was done in the accident and emergency department using the aseptic technique in the triangle of safety, under local anesthesia, according to the method described by Datta et al. All patients who had a hemothorax volume of greater than 300 ml had chest tube insertion, since this volume is associated with a retained hemothorax if not drained. Patients who had respiratory distress also underwent drainage irrespective of the volume quantified at sonography. Patients who had a volume less than 300 ml at sonography, but later deteriorated, also underwent chest tube insertion. Quality control The questionnaire was pretested at Ishaka Adventist Hospital to check whether it could extract the desired information on the variables of interest, and necessary changes were made. The investigator and trained research assistants (residents) collected the data. For every 5th patient, the eFAST was assessed by a qualified radiologist. Where two radiologists did not agree on the radiological findings, the decision of an independent third radiologist was to be considered final. Data was checked for completeness at the end of definitive surgical intervention. The data was analyzed with the guidance of a biostatistician. Data analysis and presentation Data was analyzed using SPSS version 22.0. Univariate analysis for continuous variables was summarized using mean and standard deviation, whereas proportions and percentages were computed for categorical variables and presented as frequency tables. The detection rates of haemothorax were computed individually for both CXR and eFAST with reference to the findings on chest tube drainage. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy were calculated using the cross tabulation procedure and the corresponding chi-square P values determined, taking thoracostomy findings as the gold standard.
A P value less than or equal to 0.05 was considered significant for the correlation between the detection rates of hemothorax by the investigation assessed and the tube thoracostomy findings. The receiver operator characteristic curve (ROC) with the corresponding area under the curve (AUC) was used to compare the efficacy of the two investigations, taking thoracostomy as the gold standard. Results During the study period, 139 patients presented to the study centres in total. Of these, only 110 were eligible for the study, and of those eligible, only 104 consented to participate. Figure 1 is a flow chart showing the study procedure with the corresponding number of participants. Characteristics of study participants This study enrolled 104 study participants, of whom the majority were from Mbarara regional referral hospital (66.3%), male (59.6%), and from urban areas (51.9%), with a mean age of 32 years. The commonest type of injury was blunt chest trauma, accounting for 93.3% of the injuries, and the commonest etiology was motorcycle crash (43.3%). The commonest associated injuries were found in the limbs (35.6%), and the commonest surgical intervention was tube thoracostomy, done in 55 (52.9%) of the study participants. Table 1 shows the characteristics of study participants. Comparison of detection rates of haemothorax between eFAST and CXR in relation to tube thoracostomy findings In this study, 55 (52.9%) of the study participants had thoracostomy done, and of these, haemothorax was found in 51 (49%). All study participants had both a chest X-ray and an eFAST done. eFAST found haemothorax in 47.1% of the study participants and X-ray in 22.1%. 48.1% of the participants were found to have a haemothorax by X-ray, eFAST or both. Table 2 shows the number of patients found with haemothorax by eFAST, X-ray and thoracostomy.
Taking thoracostomy findings as the gold standard, combining eFAST and X-ray had the highest sensitivity, negative predictive value and accuracy, followed by eFAST, then X-ray, which had the lowest. All tests had a specificity and positive predictive value of 100%. There was no significant relationship between the findings on X-ray and thoracostomy according to the chi-square test, but the relationship between the eFAST and thoracostomy findings was significant, P < 0.001. Table 3 shows the comparison of detection rates of haemothorax between eFAST and CXR in relation to tube thoracostomy findings. The area under the curve for eFAST was much higher than that for X-ray (0.980, P = 0.001 vs. 0.725, P = 0.136). Figures 2 and 3 show the receiver operator characteristic curves for eFAST and X-ray respectively, taking thoracostomy findings as the gold standard. The average volume of hemothorax not detected by X-ray but detected by sonography was 416.5 millilitres, and the average volume of hemothorax detected by both sonography and chest X-ray was 769.8 millilitres. This difference was significant, with a P value of < 0.001 using the independent samples t test. This big difference could have arisen because some patients could not sit upright, so the X-ray had to be done in the supine position, reducing detection rates further. Figure 4 shows a chest X-ray of a right haemothorax in a participating 32 year old male and Fig. 5 shows a sonographic image of a left haemothorax in a 30 year old male participant.
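The headline figures can be reconstructed from the reported counts: 51 haemothoraces confirmed at thoracostomy, 49 of them flagged by eFAST, 23 by X-ray, and 4 tube placements without haemothorax. A sketch of the standard 2x2 confusion-table arithmetic (the cell counts below are inferred from the percentages in the text, not taken verbatim from the paper's tables):

```python
def diagnostics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Counts inferred from the reported percentages (thoracostomy as gold standard).
efast = diagnostics(tp=49, fp=0, fn=2, tn=4)    # ≈ (0.961, 1.0, 0.964)
xray = diagnostics(tp=23, fp=0, fn=28, tn=4)    # ≈ (0.451, 1.0, 0.491)

print([round(v, 3) for v in efast], [round(v, 3) for v in xray])
```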
Discussion In this study, eFAST was more sensitive at detecting hemothorax than chest X-ray, with a sensitivity of 96.1% versus 45.1% respectively. The accuracy was also higher for eFAST (96.4% versus 49.1%) but the specificity was the same at 100.0%. Our findings are comparable to the findings by Zieleskiewicz et al. in France, who also observed that eFAST was superior to X-ray with a sensitivity of 48% versus 29%, and a specificity of 100% for both tests. The sensitivities of ultrasonography and X-ray were much lower in the French study compared to our findings, possibly because they used computed tomography as the gold standard whereas we used thoracostomy findings. The findings in our study are also comparable to those by Talari et al. in Iran, who reported that eFAST was superior to X-ray with a sensitivity of 79% versus 36.9%, accuracy of 90.2% versus 71.1%, and a specificity of 99.1% for both eFAST and X-ray. However, the values in our study are higher than those in the Iranian study, possibly because the Iranian study used CT scan as the gold standard and not thoracostomy as in our study. The study findings by Attia and Gwely in Egypt are also comparable to ours: eFAST was reported superior to chest X-ray with a sensitivity of 86.2% versus 58.6%, accuracy of 96.3% versus 89%, and a specificity of 100% for both investigations. Another study in Egypt that assessed the value of eFAST reported that eFAST was highly sensitive but more sensitive on the left side of the chest, with a sensitivity of 100% versus 93% on the right, accuracy of 97% versus 96%, and specificity of 100% on both the left and right sides of the chest; however, the explanation or theory behind the differences was not reported. The sensitivity of chest X-ray was not reported in that study.
The low sensitivity of chest X-ray is because a chest X-ray in the standing posture requires a collection of more than 400 ml of blood to obliterate the costophrenic angle, while a chest X-ray in the supine position may fail to detect up to 1 L of blood, as reported by Bhattacharyya and Brahma. In this study, the patients who were fully conscious were asked to stand (PA view) or lie (lateral decubitus view), but some patients, especially those who were unconscious, could not assume the standing position, which could have contributed to the low sensitivity of the chest X-ray. The high sensitivity of eFAST is because ultrasound can detect 100 ml of pleural fluid with 100% accuracy and can detect a haemothorax as small as 20 ml, according to one review of the literature by Zeiler et al. Study limitations CT scan was not done in this study, yet it is the currently accepted gold standard for diagnosis of haemothorax, though only in stable patients. Therefore these results should be interpreted cautiously in the context of resource limited settings where routine access to chest CT scans cannot be guaranteed. Also, the principal investigator and assistants had limited experience in eFAST, but this was mitigated by training before the study and having a qualified radiologist confirm the findings. Conclusion This study revealed that eFAST was more sensitive at detecting haemothorax among chest trauma patients compared to chest X-ray. Portable hand held ultrasound systems have a big role to play in the diagnosis of hemothorax in stable and unstable patients. Recommendations We recommend that all patients presenting with chest trauma should have an emergency eFAST for diagnosis of haemothorax, preferably at all points of care, especially with portable hand held ultrasound systems, to minimize unnecessary exposure to radiation.
Surgeons, residents and doctors involved in the initial management of trauma patients should have the skill and equipment (portable bedside or point of care ultrasound systems) to adequately and promptly use eFAST to detect and manage hemothorax in trauma patients in a timely and safe fashion. Availability of data and materials Data is available upon request. Requests should be sent to SMK via doctormbaewood@gmail.com. Abbreviations eFAST: Extended focused assessment with sonography for trauma; CXR: Chest X-ray; PPV: Positive predictive value; NPV: Negative predictive value; AUC: Area under the curve; ROC: Receiver operator characteristic curve; P: Probability value. References Zeiler J, Idell S, Norwood S, Cook A, Health R. Hemothorax: a review of the literature. Clin Pulm Med. 2021;27(1):1–12. O'Keeffe M, Clark S, Khosa F, Mohammed M, McLaughlin P. Imaging protocols for trauma patients: trauma series, extended focused assessment with sonography for trauma, and selective and whole-body computed tomography. Semin Roentgenol. 2016;51:130–42. Cheung H. Fundamental theory of ultrasonography. Korean J Radiotech. 1977;10(1):22–35. Institute for Health Metrics and Evaluation. Findings from the global burden of disease study 2017. Seattle, WA: IHME; 2018. Zaidi AA, Dixon J, Lupez K, De Vries S, Wallis LA, Ginde A, et al. The burden of trauma at a district hospital in the Western Cape Province of South Africa. Afr J Emerg Med. 2019;9:S14–20. Norouzi N, Amini A, Hatamabadi H. Comparison of diagnostic accuracy of Nexus chest and thoracic injury rule-out criteria in patients with blunt trauma; a cross-sectional study. Trauma Mon. 2019;24(3):1–6. Ekpe EE, Eyo C. Determinants of mortality in chest trauma patients. Niger J Surg Off Publ Niger Surg Res Soc. 2014;20(1):30–304. Chalya PL, Mabula JB, Dass RM, Mbelenge N, Ngayomela IH, Chandika AB, et al.
Injury characteristics and outcome of road traffic crash victims at Bugando Medical Centre in Northwestern Tanzania. J Trauma Manag Outcomes. 2012;6(1):1. Galukande M, Jombwe J, Fualal J, Gakwaya A. Boda-boda injuries a health problem and a burden of disease in Uganda: a tertiary hospital survey. East Cent Afr J Surg. 2009;14(2):33–7. Mwesigwa MM, Bitariho D, Twesigye D. Patterns and short term outcomes of chest injuries at Mbarara regional referral hospital in Uganda. East Cent Afr J Surg. 2017;21(3):28. Whiteman C, Kiefer C, D'Angelo J, Davidov D, Larrabee H, Davis S. The use of technology to reduce radiation exposure in trauma patients transferred to a level I trauma center. W V Med J. 2014;110(3):14–8. Tien HC, Tremblay LN, Rizoli SB, Gelberg J, Spencer F, Caldwell C, et al. Radiation exposure from diagnostic imaging in severely injured trauma patients. J Trauma Inj Infect Crit Care. 2007;62(1):151–6. Vafaei A, Hatamabadi HR, Heidary K, Alimohammadi H, Tarbiyat M. Diagnostic accuracy of ultrasonography and radiography in initial evaluation of chest trauma patients. Emergency. 2016;4(1):29–33. Rodriguez RM, Anglin D, Langdorf MI, Baumann BM, Hendey GW, Bradley RN, et al. NEXUS chest. JAMA Surg. 2013;148(10):940. Daniel W. Biostatistics: a foundation for analysis in the health sciences. 7th ed. New York: Wiley; 1999. Wanjiku GW, Bell G, Wachira B. Assessing a novel point-of-care ultrasound training program for rural healthcare providers in Kenya. BMC Health Serv Res. 2018. Van Den Bemt B. Point of care. Pharm Weekbl. 2013;148(50):147. Bell G, Wachira B, Denning G. A pilot training program for point-of-care ultrasound in Kenya. Afr J Emerg Med. 2016;6(3):132–7.
19. Rivas-Vasquez L, Estrada R, Benninger B. Integrating innovative handheld portable cordless ultrasound-probe-system with artificial intelligence for pre-healthcare students in human cadaver lab course. FASEB J. 2020;34(S1):1–1.
20. Megahed M, Habib T, Abdelhady M, Zaki H, Ahmed I. Validity of ultrasonography in detection of central venous catheter position and pneumothorax compared with portable chest radiography. Res Opin Anesth Intensive Care. 2018;5(2):120.
21. Bharath R, Chandrashekar D, Akkala V, Krishna D, Ponduri H, Rajalakshmi P, et al. Portable ultrasound scanner for remote diagnosis. In: 2015 17th International Conference on E-Health Networking, Application and Services (HealthCom); 2015. p. 211–6.
22. Taylor A, O'Rourke MC. Lung ultrasound. StatPearls Publishing; 2018.
23. Zieleskiewicz L, Fresco R, Duclos G, Antonini F, Mathieu C, Medam S, et al. Integrating extended focused assessment with sonography for trauma (eFAST) in the initial assessment of severe trauma: impact on the management of 756 patients. Injury. 2018;49(10):1774–80.
24. Talari HR, Mousavi N, Kalahroudi MA, Akbari H, Hossein SM. Diagnostic value of sonography in detecting hemothorax and its size in blunt trauma patients. Trauma Mon. 2021;26(5):273–9.
25. Attia SM, Gwely NN. Diagnostic accuracy of chest ultrasound versus plain chest X-ray in acute assessment of traumatic hemothorax. Egypt J Hosp Med. 2021;82:969–73.
26. Rabbih AAA, Helmyb TA, Fathy HA, Alkafafy AM, Zaki HM. A prospective comparison between bedside ultrasound versus chest radiograph and computed tomography scan for the diagnosis of traumatic hemothorax. Res Opin Anesth Intensive Care. 2021;8(1):1–5.
27. Bhattacharyya D, Brahma R. A clinical study of hemothorax following blunt thoracic trauma. 2019;5139(2):66–70.
28. Dhaliwal J.
NEXUS chest: validation of a decision instrument for selective chest imaging in blunt trauma. J Emerg Med. 2014;46(6):874.

Acknowledgements
We acknowledge all patients that accepted to participate in this study.
Guarantor: SMK
Funding: This study did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information
Authors and Affiliations
Faculty of Clinical Medicine and Dentistry, Department of Surgery, Kampala International University Western Campus, P.O. Box 70, Ishaka-Bushenyi, Uganda: Stephen Mbae Kithinji, Lauben Kyomukama & Joshua Muhumuza
Injury Epidemiology and Prevention Research Group, Division of Clinical Neuroscience, University of Turku, Turku, Finland: Herman Lule
Department of Radiology, Mbarara University of Science and Technology, Mbarara, Uganda: Moses Acan
Uganda Martyrs University, Nkozi, Uganda: Patrick Kyamanywa

Contributions
SMK was the principal investigator; he conceived and designed the study, collected data, analysed data and wrote the draft of the manuscript. JM participated in data analysis and discussion of results and revised the manuscript. PK, HL, MA, and LK supervised the work and revised the manuscript. All authors approved the final paper.

Corresponding authors
Correspondence to Stephen Mbae Kithinji or Joshua Muhumuza.

Ethics declarations
Ethics approval and consent to participate
All methods were carried out in accordance with relevant guidelines and regulations. Ethical approval was sought from the Research and Ethics Committee of Kampala International University Western Campus (REC number KIU-2021-53). Informed consent was taken for all participants.
In the event of life-threatening injuries such as massive haemothorax or tension pneumothorax, eFAST was performed as part of the primary survey and intervention was made immediately, before administering the questionnaire. Unstable patients were first stabilized and were asked to consent by themselves after stabilization. Chest X-rays were requested in the standard way, at the discretion of the attending surgeon and in accordance with the National Emergency X-ray Utilization Studies (NEXUS)-Chest injury algorithm.

Consent for publication
Not applicable.

Competing interests
The authors declare that they have no conflict of interest.

About this article
Cite this article
Kithinji, S.M., Lule, H., Acan, M. et al.
Efficacy of extended focused assessment with sonography for trauma using a portable handheld device for detecting hemothorax in a low resource setting; a multicenter longitudinal study. BMC Med Imaging 22, 211 (2022).
Received: 28 September 2022. Accepted: 28 November 2022. Published: 01 December 2022.
BMC Medical Imaging, ISSN: 1471-2342.
710
https://biomedicapk.com/files/articles/pdf/647/show
Root resorption in ameloblastoma: a radiographic analysis of 35 cases
Ruqqia Jehan 1, Hasan Mujtaba 2, Nouman Noor 3, Muhammad Shoaib 4, Asif Noor 5, Javeria Afzal 6, Mustafa Sajid 7, Muhammad Mohsin Javaid 8

ABSTRACT
Background and Objective: Odontogenic tumors are heterogeneous lesions with diverse clinical manifestations and histopathological features. Ameloblastoma is a slow-growing, sizeable benign tumor with an increased recurrence potential. Radiographically, ameloblastoma mimics other odontogenic tumors occurring in the same region but can be differentiated based on certain features. The objective of the study was to evaluate the radiographic features and presence of root resorption in ameloblastoma as a diagnostic feature in the local population.
Methods: This retrospective hospital-based study was conducted at the Oral and Maxillofacial Surgery Department of Multan Medical and Dental College Multan, Pakistan from 1st Oct 2019 to 31st March 2020. Radiographs of 35 histopathologically confirmed cases of ameloblastoma were included in the study. These radiographs were assessed for site, locularity, and root resorption. Data were processed and analyzed using SPSS version 23.0.
Results: Mean age of the patients was 35.35 ± 18.2 years with male predominance (66% vs. 34%). A total of 55% of cases presented below 35 years of age, showing increased prevalence in young adults. A multilocular appearance was seen in 24 (68.5%) cases while a unilocular pattern was seen in 12 (31.5%) cases. Root resorption was detected in 19 (54.3%) cases. Statistically, root resorption was not significantly associated with the gender or age of the patients (p > 0.05).
Conclusion: Multilocular appearance and root resorption are the key radiographic features of ameloblastoma presenting in our population.
Keywords: Ameloblastoma, multilocular, root resorption, unilocular, radiograph, odontogenic.
Received: 11 December 2021. Revised: 22 February 2022. Accepted: 05 March 2022.
Correspondence to: Hasan Mujtaba, Associate Professor, Department of Oral Pathology, School of Dentistry (SOD), Shahid Zulfiqar Ali Bhutto Medical University, Islamabad, Pakistan. Email: h_mujtaba@outlook.com
Full list of author information is available at the end of the article.

Introduction
Ectomesenchymal cells and odontogenic epithelium are involved in tooth development. 1 Any disturbance or mutation occurring in the tooth development process can lead to odontogenic tumors of diverse types, which are classified according to their radiographic and histopathological features. 1 Robinson 2 described ameloblastoma as a benign tumor that is "usually unicentric, nonfunctional, intermittent in growth, anatomically benign and clinically persistent". The World Health Organization (WHO) considers ameloblastoma the prototype of odontogenic tumors of epithelial origin, with three clinico-pathological subtypes: conventional, unicystic and peripheral ameloblastoma. 3 Ameloblastoma is a benign, slow-growing tumor with an increased recurrence potential and the ability to attain a large size. 4 Among all odontogenic tumors, it accounts for 13%–58% including its various types. 5 Ameloblastoma presents most commonly in the posterior region of the mandible, with male predominance. The tumor may be found incidentally on a routine radiograph because of its asymptomatic nature. Swelling and jaw expansion are the most common signs. 6 Patients usually present with swelling, a mobile tooth, and dull or severe pain. 7 It is more common in the mandible than in the maxilla. 8 Radiographically, this tumor appears either as a multilocular or a unilocular radiolucency. The multilocular appearance is more frequent and may be associated with root resorption.
9 Radiographically, blunt or knife-edge root resorption is pathognomonic for ameloblastoma and differentiates it from other similar lesions in this area, including nasopalatine duct cysts, odontogenic keratocysts, and simple bone cysts. Root resorption may be associated with distinct histological features of ameloblastomas, such as benign epithelial tumors without a fibrous capsule, epithelial cords and epithelial islands mimicking the dental lamina, invasion into the neighboring tissues, and release of tooth and bone resorption mediators (epidermal growth factors and interleukins). 10 Orthopantomogram (OPG) (Figure 1) and excisional biopsy are mandatory for the diagnosis of the tumor. Computed tomography (CT) or cone beam computed tomography is very useful to demarcate the extension of ameloblastoma. CT also helps to provide clear anatomic landmarks and to define the buccal and lingual curvature of the lesion, which are not recorded in two-dimensional radiographs. 11 This study was conducted to evaluate the radiographic features and root resorption associated with ameloblastoma. It will contribute to the previous literature as, to our knowledge, no such study from Pakistan has been reported. The findings will also help surgeons educate patients regarding prognosis and follow-up because of the increased tendency for recurrence.

Biomedica - Official Journal of University of Health Sciences, Lahore, Pakistan. Volume 38(1):18-22. Open access article distributed in accordance with the Creative Commons Attribution (CC BY 4.0) license. © The Author(s). ORIGINAL ARTICLE.

Methods
This retrospective, hospital-based study comprised 35 cases of ameloblastoma.
Complete radiographic and clinical data of n = 35 histopathologically diagnosed cases were retrieved from the Oral and Maxillofacial Surgery (OMFS) Department of Multan Medical and Dental College Multan, Pakistan, from 1st Oct 2019 to 31st March 2020. Institutional ethical approval was taken before the acquisition of data. Inclusion criteria were ameloblastomas, either unicystic or multicystic, presenting in the adult age group (18-55 years) of both genders. Cases with factors causing root resorption other than the tumor, such as a periapical lesion due to caries and/or an adjacent impacted tooth, were excluded. Complete radiographic details were retrieved to determine features such as site, locularity, and root resorption. Histopathological features were also recorded.

Statistical analysis
The data were analyzed using SPSS version 23.0. Frequencies and percentages were calculated for categorical variables such as gender, radiological features, and root resorption. The Pearson chi-square test was used to find the association between radiological, clinical and histopathological variables, taking a 5% level of significance.

Results
Among 35 patients, there were 23 (66%) males and 12 (34%) females. The mean age of the patients was 35.35 ± 18.2 years. Patients were stratified into four age groups, with an age range from 18 to 55 years. The majority of the patients belonged to the 26-35 years age group, indicating higher prevalence in young adults (Table 1). Radiologically, most of the cases, 25 (86.2%), were seen in the posterior region of the mandible while only one case was found in the anterior region of the maxilla (Table 2). Regarding locularity, 24 (68.5%) cases presented as multilocular while a unilocular appearance was observed in only 11 (31.5%) lesions, as shown in Figure 2. Root resorption was seen in 19 (54.3%) cases; 80% of these presented with scalloped margins. Root resorption was seen in 56.3% and 73.7% of unilocular and multilocular lesions respectively.
Statistically, no significant association of root resorption with gender or age was observed (Tables 3 and 4).

Discussion
Ameloblastoma, an odontogenic benign tumor of epithelial origin, shows locally aggressive behavior with an excessive potential for recurrence. 12 There is no overall gender predominance, as ameloblastoma affects males and females equally. 6,13 The present study observed male predominance over females, which is consistent with the study by Alves et al. 14, while studies conducted in Brazil and Chile reported preponderance in females. 15,16 In the current study, the 26-35 years age group was most commonly affected, which depicts local prevalence in the younger population. A study conducted by Ranchod 17 reported a mean age of 32 ± 11.6 years, whereas a study by Arotiba et al. 18 reported patients mostly from an even younger age group, i.e., 18 to 19 years. Another study reported higher prevalence between the third and fourth decades of age. 12 Ameloblastoma rarely occurs in children, with less than 10% of cases seen in children below 10 years of age.

Table 1. Distribution of ameloblastoma in different age groups.
Age group (years)   Frequency (n)   Percentage (%)
18-25               3               8.6
26-35               16              46
36-45               12              34
46-55               4               11.4
Total               35              100

Table 2. Distribution of ameloblastoma in mandible and maxilla.
Site        Mandible n(%)   Maxilla n(%)   Total n(%)
Anterior    4 (13.8%)       1 (16.7%)      5 (14.2%)
Posterior   25 (86.2%)      5 (83.3%)      30 (85.8%)
Total       29 (83%)        6 (17%)        35 (100%)

The unicystic variant has been found more commonly in the younger age group. 19 The treatment of ameloblastoma includes surgical resection with clear margins. Radical surgery is recommended mostly in cases of multicystic/solid and advanced unicystic patterns, with long-term follow-up.
20,21 The present study showed a predominance of the multilocular radiographic appearance over the unilocular pattern; however, no statistically significant association with gender or age was observed (p > 0.05). Most of the literature supports the finding that ameloblastoma presents with multilocular radiolucency. 22,23 A study by Kim and Jang 24 contradicts these findings, as they found a total of 59.2% unilocular lesions with a finely demarcated boundary. That study, however, included 28.5% of its population in the paediatric and adolescent age group, which may be the reason for the predominant unilocular pattern. In this study, root resorption was seen in 19 (54.3%) patients, which is much lower in frequency than reported by Bi et al. (87.9%) 22 but comparable to Au (51.9%) 25 and Kitisubkanchana et al. (66.7%) 26. Resorption may be regular or uniform, parallel to the interface with the islands of neoplastic epithelial cells closer to the roots; a set of clasts generates this regular surface, which presents as knife-edge root resorption on imaging studies. 10 Root resorption is more common among ameloblastomas than odontogenic keratocysts, simple bone cysts and nasopalatine duct cysts; therefore, this particular feature serves as an important landmark when considering the radiographic signs of ameloblastoma. 14 This radiographic finding can be a differentiating feature among other similar lesions of the jaws. 26 Several inflammatory mediators and cytokines are involved in mediating odontoclastic activity in ameloblastomas and play a significant role in the resorption of the root. 27 Therefore, radiolucent lesions of the mandible and maxilla showing root resorption can support the diagnosis of ameloblastoma in differentiation from other odontogenic cysts and tumors presenting with similar radiographic findings. Recurrence potential in these patients remains high, for which closer follow-up and patient education are pivotal. 22,25

Table 3. Association between root resorption in ameloblastoma and gender.
Gender   Absent n(%)    Present n(%)    Total n(%)    p-value
Female   6 (37.5%)      6 (31.6%)       12 (34.3%)    0.604
Male     10 (62.5%)     13 (68.4%)      23 (65.7%)
Total    16 (45.7%)     19 (54.3%)      35 (100%)
Chi-square test.

Table 4. Association between root resorption in ameloblastoma and age groups.
Age group (years)   Absent n(%)    Present n(%)    Total n(%)    p-value
16-25               2 (11.76%)     1 (5.55%)       3 (8.57%)     0.292
26-35               6 (35.3%)      10 (55.55%)     16 (45.7%)
36-45               7 (41.18%)     5 (27.8%)       12 (34.3%)
46-55               2 (11.76%)     2 (11.11%)      4 (11.43%)
Total               17 (48.58%)    18 (51.42%)     35 (100%)
Chi-square test.

Figure 1. OPG showing (arrows) multilocular (A) radiolucency on the left side of the mandible and unilocular radiolucency (B) in the anterior region of the mandible.

Conclusion
Multilocular appearance and root resorption are the key radiographic features of ameloblastoma presenting in our population. The presence of root resorption may be taken as a differential diagnostic feature for ameloblastoma in contrast to other benign odontogenic tumors of the jaw.

Limitations of the study
One limitation of this study was the small sample size; moreover, because of the retrospective nature of the study, CT data for most of the patients could not be retrieved and the authors had to rely mostly on the OPG findings in the record files. Prospective studies with larger sample sizes, generating statistical associations between the multicystic pattern, root resorption, and recurrence, may be carried out using advanced radiological techniques to validate the findings of the present study.
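As an illustration of the Pearson chi-square analysis the authors describe (performed in SPSS), the statistic can be recomputed by hand from the Table 3 contingency counts. This is a minimal sketch in pure Python, not the authors' code; the `pearson_chi2` helper is hypothetical, and the uncorrected statistic computed here need not match SPSS's printed p-value (0.604) exactly, though it is likewise non-significant at the 5% level.

```python
import math

# Table 3 contingency counts (root resorption by gender)
# rows: Female, Male; columns: Absent, Present
observed = [[6, 6], [10, 13]]

def pearson_chi2(table):
    """Pearson chi-square statistic and p-value for a 2x2 table (df = 1)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    # survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

chi2, p = pearson_chi2(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p > 0.05: not significant
```

With these counts the statistic is well below the 3.841 critical value for one degree of freedom, consistent with the paper's conclusion that root resorption is not significantly associated with gender.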
Acknowledgment
The authors would like to acknowledge the staff of the OMFS Department of Multan Medical and Dental College Multan, Pakistan for their logistic and technical support in the acquisition of data related to this study. We would also like to acknowledge all those patients whose data helped us add this scientific context to the literature.

List of Abbreviations
OPG: Orthopantomogram; WHO: World Health Organization

Conflict of interest
None to declare.

Grant support and financial disclosure
None to disclose.

Ethical approval
This study was approved by the Ethics Committee of Multan Dental College Multan, Pakistan vide ethical letter MDC#0475 dated 24-02-2020.

Authors' contribution
RJ, HM, NN: Conception of study, drafting of manuscript, critical revision with important intellectual content.
MS, AN & JA: Drafting of manuscript.
MS, MJ: Acquisition of data.
ALL AUTHORS: Approval of the final version of the manuscript to be published.

Authors' Details
1. Ruqqia Jehan: Senior Registrar, Department of Oral & Maxillofacial Surgery, CMH Multan Institute of Medical Sciences, Multan, Pakistan
2. Hasan Mujtaba: Associate Professor, Department of Oral Pathology, School of Dentistry (SOD), Shahid Zulfiqar Ali Bhutto Medical University, Islamabad, Pakistan
3. Noman Noor: Associate Professor, Department of Operative Dentistry, School of Dentistry (SOD), Shahid Zulfiqar Ali Bhutto Medical University, Islamabad, Pakistan
4. Muhammad Shoaib: Assistant Professor, Department of Maxillofacial Surgery, Multan Medical and Dental College Multan, Pakistan
5. Asif Noor: Associate Professor, Department of Community Dentistry, Multan Medical and Dental College Multan, Pakistan
6. Javeria Afzal: Assistant Professor, Department of Community Dentistry, Multan Medical and Dental College Multan, Pakistan
7. Mustafa Sajid: Associate Professor, Department of Operative Dentistry, Multan Medical and Dental College Multan, Pakistan
8. Muhammad Mohsin Javaid:
Demonstrator, Community & Preventive Dentistry Department, School of Dentistry (SOD), Shaheed Zulfiqar Ali Bhutto Medical University, Islamabad, Pakistan

Figure 2. Gender-based distribution of locularity of ameloblastoma (n = 35).

References
1. Soluk-Tekkeşin M, Wright JM. The World Health Organization classification of odontogenic lesions: a summary of the changes of the 2017 (4th) edition. Turk Patoloji Derg. 2018;34(1):1–5.
2. Robinson L, Martinez MG. Unicystic ameloblastoma: a prognostically distinct entity. Cancer. 1977;40(5):2278–85.
3. El-Naggar AK, Chan JK, Grandis JR, Takata T, Slootweg PJ. WHO classification of head and neck tumours. Lyon, France: IARC; 2017. pp 215–8.
4. Wright JM, Soluk Tekkesin M. Odontogenic tumors: where are we in 2017? J Istanb Univ Fac Dent. 2017;51(3 Suppl 1):S10–30.
5. Effiom OA, Ogundana OM, Akinshipo AO, Akintoye SO. Ameloblastoma: current etiopathological concepts and management. Oral Dis. 2018;24(3):307–16. https://doi.org/10.1111/odi.12646
6. Petrovic ID, Migliacci J, Ganly I, Patel S, Xu B, Ghossein R, et al. Ameloblastomas of the mandible and maxilla. Ear Nose Throat J. 2018;97(7):E26–32. https://doi.org/10.1177/014556131809700704
7. Adeline VL, Dimba EA, Wakoli KA, Njiru AK, Awange DO, Onyango JF, et al. Clinicopathologic features of ameloblastoma in Kenya: a 10-year audit. J Craniofac Surg. 2008;19(6):1589–93.
8. Liu L, Zhang X, Hu Y, Zhang C, Zhang Z. Clinical and pathological analysis of jaw ameloblastomas in 890 patients. Shanghai Kou Qiang Yi Xue. 2015;24(3):338–40. PMID: 26166525
9. Ogunsalu C, Daisley H, Henry K, Bedayse S, White K, Jagdeo B, et al. A new radiological classification for ameloblastoma based on analysis of 19 cases. West Indian Med J. 2006;55(6):434–9.
10. Martins GG, Oliveira IA, Consolaro A. The mechanism: how dental resorptions occur in ameloblastoma. Dental Press J Orthod. 2019;24(4):21–32.
11. Hendra FN, Van Cann EM, Helder MN, Ruslin M, de Visscher JG, Forouzanfar T, et al. Global incidence and profile of ameloblastoma: a systematic review and meta-analysis. Oral Dis. 2020;26(1):12–21. https://doi.org/10.1111/odi.13031
12. Cadavid AM, Araujo JP, Coutinho-Camillo CM, Bologna S, Junior CA, Lourenço SV. Ameloblastomas: current aspects of the new WHO classification in an analysis of 136 cases. Surg Exp Pathol. 2019;2(1):1–6. s42047-019-0041-z
13. Barnes L, Eveson J, Reichart P, Sidransky D, editors. WHO classification of tumours. Pathology and genetics of head and neck tumours. Lyon, France: IARC Press; 2005. pp 296–300.
14. Alves DBM, Tuji FM, Alves FA, Rocha AC, Santos-Silva ARD, Vargas PA, et al. Evaluation of mandibular odontogenic keratocyst and ameloblastoma by panoramic radiograph and computed tomography. Dentomaxillofac Radiol. 2018;47(7):1–7. https://doi.org/10.1259/dmfr.20170288
15. Santos JN, Pereira Pinto L, Figueredo CRLV, Souza LB. Odontogenic tumors: analysis of 127 cases. Pesqui Odontol Bras. 2001;15(4):308–13.
16. Ochsenius G, Ortega A, Godoy L, Peñafiel C, Escobar E. Odontogenic tumors in Chile: a study of 362 cases. J Oral Pathol Med. 2002;31(7):415–20.
17. Ranchod S, Titinchi F, Behardien N, Morkel J. Ameloblastoma of the mandible: analysis of radiographic and histopathological features. J Oral Med Oral Surg. 2021;27(1):6–7.
18. Arotiba GT, Ladeinde AL, Arotiba JT, Ajike SO, Ugboko VI, Ajayi O. Ameloblastoma in Nigerian children and adolescents: a review of 79 cases. J Oral Maxillofac Surg. 2005;63(6):747–51. https://doi.org/10.1016/j.joms.2004.04.037
19. Castro-Silva II, Israel MS, Lima GS, de Queiroz Chaves Lourenço S. Difficulties in the diagnosis of plexiform ameloblastoma. Oral Maxillofac Surg. 2012;16(1):115–8. https://doi.org/10.1007/s10006-011-0265-x
20. Neagu D, Escuder-de la Torre O, Vázquez-Mahía I, Carral-Roura N, Rubín-Roger G, Penedo-Vázquez Á, et al. Surgical management of ameloblastoma: review of literature. J Clin Exp Dent. 2019;11(1):e70. https://doi.org/10.4317/jced.55452
21. Effiom OA, Ogundana OM, Akinshipo AO, Akintoye SO. Ameloblastoma: current etiopathological concepts and management. Oral Dis. 2018;24(3):307–16. https://doi.org/10.1111/odi.12646
22. Bi L, Wei D, Hong D, Wang J, Qian K, Wang H, et al. A retrospective study of 158 cases on the risk factors for recurrence in ameloblastoma. Int J Med Sci. 2021;18(14):3326–32. https://doi.org/10.7150/ijms.61500
23. Chawla R, Ramalingam K, Sarkar A, Muddiah S. Ninety-one cases of ameloblastoma in an Indian population: a comprehensive review. J Nat Sci Biol Med. 2013;4(2):310–15. https://doi.org/10.4103/0976-9668.116984
24. Kim SG, Jang HS. Ameloblastoma: a clinical, radiographic, and histopathologic analysis of 71 cases. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2001;91(6):649–53. https://doi.org/10.1067/moe.2001.114160
25. Au SW, Li KY, Choi WS, Su YX. Risk factors for recurrence of ameloblastoma: a long-term follow-up retrospective study. Int J Oral Maxillofac Surg. 2019;48(10):1300–6. https://doi.org/10.1016/j.ijom.2019.04.008
26. Kitisubkanchana J, Reduwan NH, Poomsawat S, Pornprasertsuk-Damrongsri S, Wongchuensoontorn C. Odontogenic keratocyst and ameloblastoma: radiographic evaluation. Oral Radiol. 2021;37(1):55–65. https://doi.org/10.1007/s11282-020-00425-2
27. Teo KW, Shi AH, Teh LY, Lee AM. External root resorption in common odontogenic cysts and ameloblastomas of the jaw: a retrospective radiographic study in an Asian population. Oral Surgery. 2021;14(4):335–41. ors.12628
711
https://www.sketchy.com/medical-lessons/streptococcus-agalactiae-group-b-strep
Streptococcus Agalactiae (Group B Strep)
Microbiology Summary
Streptococcus agalactiae, also known as group B strep (GBS), is a gram-positive bacterium that causes serious infections in newborns. Differentiating GBS from group A strep (Strep pyogenes) is crucial in clinical practice. Group B strep is positive on the hippurate test and has a polysaccharide capsule. It is also CAMP test positive. Group B strep is beta-hemolytic and bacitracin-resistant, features that further help in its identification. This bacterium is the leading cause of meningitis in neonates and can also cause sepsis and pneumonia in newborns. To prevent GBS infections in babies, pregnant women have their vagina and rectum swabbed for the bacteria at 35 weeks, and if positive, receive intrapartum penicillin as a prophylactic measure.

FAQs
What is Streptococcus agalactiae (group B strep), and what are its common clinical manifestations?
Streptococcus agalactiae, or group B strep (GBS), is a gram-positive, bacitracin-resistant, beta-hemolytic bacterium that commonly colonizes the gastrointestinal and genitourinary tracts. It is a significant cause of neonatal infections, including sepsis, pneumonia, and meningitis, and can also cause urinary tract infections, chorioamnionitis, and postpartum infections in adults. Prenatal screening and intrapartum penicillin prophylaxis can significantly reduce the risk of GBS transmission to newborns during delivery.

How is Streptococcus agalactiae identified in the laboratory, and what is the significance of the CAMP test?
On blood agar, GBS exhibits beta-hemolysis (complete hemolysis) and is bacitracin-resistant.
The CAMP test is a crucial confirmatory test used to identify GBS. In the CAMP test, GBS is streaked perpendicular to a Staphylococcus aureus strain on blood agar. A positive CAMP test exhibits enhanced beta-hemolysis in the shape of an arrowhead at the intersection of the bacterial streaks, indicating the presence of GBS. How does group B strep cause neonatal meningitis, and what are the risk factors for newborns? GBS can cause neonatal meningitis when newborns are exposed to the bacteria during passage through the birth canal of a colonized mother. Invasive GBS infection may occur in utero or during delivery, leading to bacteremia and invasion of the central nervous system. Risk factors for neonatal meningitis caused by GBS include premature birth, prolonged rupture of membranes, maternal GBS colonization, and intra-amniotic infection or chorioamnionitis. Additionally, infants born to mothers with a previous GBS-infected infant are also at increased risk for meningitis. What is the role of intrapartum penicillin in preventing early-onset group B strep infections in newborns? Intrapartum penicillin is administered to GBS-colonized pregnant women during labor to prevent the transmission and subsequent early-onset GBS infection in newborns. Penicillin is the antibiotic of choice because of its narrow spectrum, low toxicity, and proven efficacy in eradicating GBS from the maternal genital tract. Administering intrapartum penicillin prophylaxis has significantly reduced the incidence of early-onset GBS infections in newborns, particularly those with risk factors such as prematurity and prolonged rupture of membranes. What are the recommendations surrounding prenatal care and screening for group B strep? As part of prenatal care, it is recommended that pregnant women be screened for GBS colonization between 35 and 37 weeks of gestation. The screening involves obtaining vaginal and rectal swab specimens, which are cultured to detect the presence of GBS. 
If the culture results are positive for GBS, the pregnant woman is considered colonized and should receive intrapartum antibiotic prophylaxis (such as penicillin) during labor to prevent transmission of GBS to the newborn. Prenatal GBS screening and prophylactic antibiotic treatment have been successful in substantially reducing the incidence of early-onset GBS infection in newborns.
712
https://www.ck12.org/algebra/applications-of-linear-systems/
Applications of Linear Systems: word problems using two equations and two unknowns.
713
https://www.uvu.edu/college-of-science/mathematics/docs/hs_competition_2024/2024_state_math_senior_solution_corrected.pdf
State Math Contest 2024, Senior Level Solutions

Instructions:
- Calculators, cell phones and other computational devices are not permitted (you can only use pens, pencils and paper to work on your answers, and then mark your answers with a number two pencil on the answer sheet).
- Correct answers are worth 5 points. Unanswered questions will be given 1 point. Incorrect answers will be worth 0 points. This means that it will not, on average, increase your score to guess answers randomly.
- Fill in the answers on the answer sheet using a number two pencil.
- Time limit: 120 minutes.
- When you are finished, please give the exam and any scratch paper to the test administrator.
- Good luck!

1. Solve $\log x + \log(x+15) = 2$.
A. $x = 5$  B. $x = 5$ or $x = 20$  C. $x = 5$ or $x = -20$  D. $x = 15$  E. $x = 10$
Solution: $\log x + \log(x+15) = 2 \Rightarrow \log x(x+15) = 2$; converting to exponential form yields $x^2 + 15x = 10^2$, i.e. $x^2 + 15x - 100 = 0$, which factors as $(x-5)(x+20) = 0$ with solutions $5$ and $-20$. However, $-20$ is not in the domain of $\log x$, so $x = 5$ is the only solution.

2. Given that $i^2 = -1$, find $(1 + i\sqrt{3})^{10}$.
A. $-512 - 512\sqrt{3}\,i$  B. $-256 + 256\sqrt{3}\,i$  C. $-512\sqrt{3} - 512i$  D. $-128 + 128\sqrt{6}\,i$  E. $1024 + 1024\sqrt{3}\,i$
Solution: Since $\sqrt{1+3} = 2$ and $\tan^{-1}(\sqrt{3}/1) = \pi/3$, we have
$$(1 + i\sqrt{3})^{10} = \left[2\left(\cos\tfrac{\pi}{3} + i\sin\tfrac{\pi}{3}\right)\right]^{10} = 2^{10}\left(\cos\tfrac{10\pi}{3} + i\sin\tfrac{10\pi}{3}\right) = 1024\left(-\tfrac{1}{2} - \tfrac{\sqrt{3}}{2}i\right) = -512 - 512\sqrt{3}\,i.$$

3. Let $f(x-1) + f(x+1) = f(x)$ for all integers $x$. If $f(0) = 2$ and $f(1) = 4$, find $f(2022) + f(2023) + f(2024)$.
A. 12  B. 8  C. 337  D. 0  E. 4
Solution: First, observe that for any integer $n$ (setting $n = x+1$) it is true that $f(n) = f(n-1) - f(n-2)$, so $f(n+1) = f(n) - f(n-1)$, which means that $-f(n-2) = f(n+1)$ and therefore $f(n+3) = -f(n)$ and $f(n+6) = -f(n+3) = f(n)$. Next, observe that $f(2) = 4 - 2 = 2$, $f(3) = -2$, $f(4) = -4$, $f(5) = -2$, and then the values of $f$ repeat starting with $f(6) = f(0) = 2$.
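The period-6 pattern in Problem 3 is easy to confirm numerically. The following quick check (not part of the original solutions) simply iterates the recurrence:

```python
# Problem 3: f(x-1) + f(x+1) = f(x), with f(0) = 2, f(1) = 4.
# Rearranged: f(n) = f(n-1) - f(n-2). Iterate and check the period.
vals = [2, 4]
for n in range(1, 2024):
    vals.append(vals[n] - vals[n - 1])

assert vals[:8] == [2, 4, 2, -2, -4, -2, 2, 4]  # period 6
print(vals[2022] + vals[2023] + vals[2024])     # 8, answer B
```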
The remainder when 2022 is divided by 6 is zero, which means that $f(2022) = f(0) = 2$, $f(2023) = 4$ and $f(2024) = 2$, so the sum is equal to 8.

4. Evaluate the sum $\frac{1}{(1)(2)} + \frac{1}{(2)(3)} + \frac{1}{(3)(4)} + \dots + \frac{1}{(19)(20)}$.
A. $\frac{19}{20}$  B. $\frac{19! - 1}{20!}$  C. $\frac{121}{20!}$  D. 0.96  E. 1.04
Solution: This is $\sum_{n=1}^{19} \frac{1}{n(n+1)}$, which can be written as $\sum_{n=1}^{19}\left(\frac{1}{n} - \frac{1}{n+1}\right)$. The terms in this sum cancel except the first and the last, giving $1 - \frac{1}{20} = \frac{19}{20}$.

5. Let $C$ be the circle of radius 24 centered at the origin in the plane. Let $R$ be a radius of $C$ (a line segment from the origin to a point of $C$). Let $P$ be the perpendicular bisector of $R$, which separates the plane into sides $U$ and $V$, where $U$ contains the origin. Find the length of the arc $A$ which consists of the points of $C$ which are not in $U$.
A. $8\pi$  B. 40  C. 0  D. $6\pi$  E. $16\pi$
Solution: If we let $P$ be the vertical line through $x = 12$ and $R$ be the segment from the origin to $(24, 0)$, then the portion of the circle farther from the origin than $P$ is the arc $A$ between the points on $C$ with $x$-coordinate 12. In particular, if $\theta$ is the angle between the positive $x$-axis and the radius from the origin to the point of intersection of $P$ and $C$ in the first quadrant, then $24\cos\theta = 12$, so $\theta = \frac{\pi}{3}$. This means that the angle subtended by $A$ is $\frac{2\pi}{3}$, so the length of $A$ is $24\left(\frac{2\pi}{3}\right) = 16\pi$.

6. How many integers between 1 and 10,000 are divisible by each of the numbers 2, 3, 4, 5, 6, 7, 8, 9?
A. 0  B. 1  C. 3  D. 5  E. 28
Solution: An integer which is divisible by nine and eight is also divisible by six, two, three, and four, which means that we can rephrase the problem as "how many integers between 1 and 10,000 are divisible by 8, 9, 7, and 5?" Such an integer must have the form $8^i 9^j 5^k 7^l m$ for positive integers $i, j, k, l, m$, since the prime factorizations of 8, 9, 7, and 5 have no common factors. Thus, any number divisible by 8, 9, 7 and 5 must be at least $(9)(8)(7)(5) = 2520$.
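Problem 6 is also small enough to settle by exhaustive search. This sketch (not from the original solutions) counts directly:

```python
# Problem 6: count integers in 1..10000 divisible by each of 2,...,9.
count = sum(1 for n in range(1, 10_001)
            if all(n % d == 0 for d in range(2, 10)))
print(count)  # 3, answer C (the multiples of lcm = 2520: 2520, 5040, 7560)
```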
Raising the power of any of 8, 9, 7, or 5 to two gives a product that is at least $(9)(8)(7)(5^2) = 12600 > 10000$. Hence, the numbers divisible by 8, 9, 7, and 5 which are less than 10,000 have the form $2520m$ for a positive integer $m$. We see that $2520(4) > 10{,}000$, so the only options are $2520$, $2520(2)$, and $2520(3)$.

7. Let $x^2 + y^2 - 7x + 4y + 13 = 0$. What is the smallest possible value of $3x - 2y$?
A. 4  B. $2\sqrt{3}$  C. $-4$  D. 8  E. There is no smallest value
Solution: Moving $3x - 2y$ to one side of the equation gives us that $3x - 2y = x^2 - 4x + y^2 + 2y + 13$. Completing the square tells us $3x - 2y = (x-2)^2 + (y+1)^2 + 8$. The right side is smallest when $x = 2$ and $y = -1$, taking on a value of 8. Since this is a solution to the equation (since $3(2) - 2(-1) = 8$), this must be the minimum value.

8. A survey team places a 24-foot tall vertical pole on top of a (rather short) hill. Someone standing some unknown distance horizontally from the hill measures the angle of elevation to the bottom of the pole to be $30^\circ$ and the angle of elevation to the top of the pole to be $45^\circ$. Find the height of the hill.
A. 48 feet  B. $48(\sqrt{2}-1)$ feet  C. $\frac{24(\sqrt{3}-1)}{5}$ feet  D. $12(\sqrt{3}+1)$ feet  E. 24 feet
Solution: Let $A$ be the point at which the observer is standing, let $B$ be the point at the top of the pole, and $C$ the point at the bottom of the pole. Let $D$ be the point at height zero on the line through $B$ and $C$. Then we are given that $\angle BAD = 45^\circ$, $\angle CAD = 30^\circ$, and $BC = 24$. This means that $\angle BAC = 15^\circ$ and $\angle ABD = 45^\circ$, so $\angle ACB = 120^\circ$. We also know $\angle ACD = 60^\circ$. Applying the law of sines, we see that $\frac{24}{\sin 15^\circ} = \frac{AC}{\sin 45^\circ}$, so $AC = 12\sqrt{2}\csc(15^\circ)$. The height of the hill is $CD = AC\sin(30^\circ) = \frac{1}{2}AC = 6\sqrt{2}\csc(15^\circ)$. We use the difference-of-angles formula to find
$$\sin(15^\circ) = \sin(45^\circ - 30^\circ) = \sin 45^\circ\cos 30^\circ - \cos 45^\circ\sin 30^\circ = \frac{\sqrt{6}-\sqrt{2}}{4}.$$
Thus,
$$h = 6\sqrt{2}\left(\frac{4}{\sqrt{6}-\sqrt{2}}\right) = 24\sqrt{2}\left(\frac{\sqrt{6}+\sqrt{2}}{6-2}\right) = \frac{48\sqrt{3}}{4} + \frac{48}{4} = 12(\sqrt{3}+1).$$

9.
How many different "words" can be made out of the letters of SASSAFRAS, if a "word" means any string of all nine letters? For example, FRAAASSSS would be a word.
A. 62  B. 2048  C. 2520  D. 362880  E. 15120
Solution: This is a permutation of a set of nine objects in which a set of three and a set of four are indistinguishable, so the number is $\frac{9!}{4!\,3!} = 2520$.

10. Let $\sin x - \cos x = \frac{1}{3}$. Find $\sin 2x$.
A. $\frac{5}{9}$  B. $\frac{8}{9}$  C. $\frac{7}{9}$  D. $\frac{2}{9}$  E. $\frac{4}{9}$
Solution: Squaring both sides, we get $\sin^2 x - 2\sin x\cos x + \cos^2 x = \frac{1}{9}$, so $1 - 2\sin x\cos x = 1 - \sin 2x = \frac{1}{9}$, and $\sin 2x = \frac{8}{9}$.

11. Let $r$ and $a$ be positive real numbers so that a disk of radius $r$ is small enough to fit inside an equilateral triangle $T$ of side length $a$. Find the area of the region enclosed by $T$ consisting of all points which are not enclosed by any circle of radius $r$ which is enclosed by $T$.
A. $(a^2 - r^2)(3\sqrt{3} - \pi)$  B. $\frac{a^2}{r^2}(6\sqrt{2} - 2\pi)$  C. $r^2\left(\frac{\sqrt{3}}{4} - \frac{\pi}{12}\right)$  D. $r^2(3\sqrt{3} - \pi)$  E. $r^2(\pi - \sqrt{3})$
Solution: We first verify that the side length $a$ does not affect the solution. The only points enclosed by $T$ that would not be covered by a disk of radius $r$ enclosed by $T$ are near the corners, and the area excluded near a corner is determined by the angle at the vertex (which is $\pi/3$) and the radius $r$ of the circle. Let $C$ be a circle of radius $r$ with center $O$ which is tangent to two edges of $T$ with common vertex $D$ of $T$. Let $W$ be the quadrilateral whose vertices are $O$, $D$, and the points $A$, $B$ of intersection of $C$ and $T$. The area enclosed by $W$ can be obtained by splitting $W$ into two congruent triangles $\triangle ODA$ and $\triangle ODB$. The height of such a triangle is $r$ and its base has length $\cot\left(\frac{\pi}{6}\right)r = \sqrt{3}\,r$. Hence, the area enclosed by $W$ is $\sqrt{3}\,r^2$. The interior angles of $W$ are $\angle ADB = \frac{\pi}{3}$, $\angle DBO = \angle DAO = \frac{\pi}{2}$, and $\angle AOB = \frac{2\pi}{3}$. Since $\angle AOB = \frac{2\pi}{3}$, the area of the circular sector in the circle $C$ from segment $OA$ to segment $OB$ is $\frac{\pi r^2}{3}$. Thus, the area within $W$ not enclosed by $C$ is $\frac{r^2}{3}(3\sqrt{3} - \pi)$.
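Stepping back to Problem 9, the multinomial count there can be cross-checked by brute force (a sketch, not part of the original solutions):

```python
from math import factorial
from itertools import permutations

# Problem 9: distinct arrangements of SASSAFRAS (4 S's, 3 A's, 1 F, 1 R).
formula = factorial(9) // (factorial(4) * factorial(3))
brute = len(set(permutations("SASSAFRAS")))  # deduplicate all 9! orderings
print(formula, brute)  # 2520 2520, answer C
```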
Since the same area is not enclosed by any circle of radius $r$ near each of the three vertices of the triangle, the total area not covered by disks of radius $r$ enclosed by $T$ is $r^2(3\sqrt{3} - \pi)$.

12. Let $a, b, c, x, y, z$ be non-zero complex numbers so that
$$a = \frac{b+c}{x+1}, \qquad b = \frac{c+a}{y+1}, \qquad c = \frac{a+b}{z+1}.$$
If $xy + yz + zx = 100$ and $x + y + z = 50$, find the value of $xyz$.
A. $-96$  B. $-94$  C. $-82$  D. $-86$  E. $-92$
Solution: Adding one to both sides of $\frac{b+c}{a} = x + 1$, we have $\frac{a+b+c}{a} = x + 2$, so $\frac{a}{a+b+c} = \frac{1}{x+2}$. Likewise, $\frac{b}{a+b+c} = \frac{1}{y+2}$ and $\frac{c}{a+b+c} = \frac{1}{z+2}$. Adding these gives
$$\frac{1}{x+2} + \frac{1}{y+2} + \frac{1}{z+2} = 1.$$
Thus,
$$\frac{xy + yz + zx + 4(x+y+z) + 12}{xyz + 2(xy + yz + zx) + 4(x+y+z) + 8} = 1.$$
Hence, $\frac{312}{xyz + 408} = 1$, so $xyz = -96$.

13. Which of the following is the graph of $x^2 + 2\sqrt{3}\,xy + 3y^2 + 6x + 16 = 0$?
A. A parabola  B. An ellipse  C. A hyperbola  D. A line  E. None of these
Solution: The graph of a non-degenerate conic section $Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$ is a parabola if $B^2 - 4AC = 0$, an ellipse if $B^2 - 4AC < 0$, and a hyperbola if $B^2 - 4AC > 0$. In this case, $B^2 - 4AC = 12 - 12 = 0$, so we have a parabola. We can see the conic is non-degenerate by plotting a few points or by checking that
$$\det\begin{pmatrix} 1 & \sqrt{3} & 3 \\ \sqrt{3} & 3 & 0 \\ 3 & 0 & 16 \end{pmatrix} \neq 0.$$

14. Let $S$ consist of all numbers which can be obtained by starting at 0 and then applying the functions $f(x) = x + 1$ and $g(x) = \frac{x}{2}$ finitely many times (in any possible function composition order; for example, $g(f(f(g(g(0)))))$ would be one of the points of $S$). Which of the following is true?
(I) If $n$ is in $S$ then $n^2$ is in $S$.
(II) If $n$ is in $S$ then $n - 1$ is in $S$.
(III) If $n$ is in $S$ then $n^2 - n$ is in $S$.
A. (I) only  B. (I) and (II) only  C. (I), (II), and (III)  D. (II) and (III) only  E. None of these are true
Solution: Since neither operation outputs a negative number from a non-negative input, no combination of the operations starting at zero can end in a negative number, so $0 - 1 = -1 \notin S$, so II is false.
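The set $S$ in Problem 14 can be explored by breadth-first search over exact rationals. The following sketch (not part of the original solutions) generates every value reachable in at most 12 operations, which is enough to witness the counterexamples used above:

```python
from fractions import Fraction

# Problem 14: values reachable from 0 using f(x) = x + 1 and g(x) = x / 2,
# restricted here to at most 12 operations (a finite slice of S).
S = {Fraction(0)}
frontier = {Fraction(0)}
for _ in range(12):
    frontier = {y for x in frontier for y in (x + 1, x / 2)} - S
    S |= frontier

# S contains only non-negative dyadic rationals, so 0 - 1 and 1/2 - 1
# are absent, matching the conclusion that (II) and (III) fail.
print(Fraction(-1) in S, Fraction(1, 2) - 1 in S, Fraction(3, 8) in S)
```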
Likewise, $\frac{0+1}{2} = g(f(0)) \in S$, and $\frac{1}{2} - 1 = -\frac{1}{2} \notin S$, so III is false. We show that $S$ consists of all non-negative integers divided by all non-negative powers of two. First, for any non-negative integers $n, m$: if we apply $f$ $n$ times to zero we get $n$, and if we then apply $g$ $m$ times we get an output of $\frac{n}{2^m}$, so all non-negative integers divided by non-negative powers of two are in $S$. Next, we show by induction that all elements of $S$ are of the form $\frac{n}{2^m}$ for non-negative integers $n$ and $m$. We induct on the number of operations applied to zero. If we apply one operation to 0 we get 0 or 1, which are both of the form described. Suppose all numbers obtained after performing $k$ operations of type $f$ or $g$ have the form $\frac{n}{2^m}$. Then applying $f$ an additional time gives $\frac{n}{2^m} + 1 = \frac{2^m + n}{2^m}$, which is a non-negative integer over a non-negative power of two. Alternately, applying $g$ an additional time gives $\frac{n}{2^{m+1}}$, which is again of the required form. By induction, then, we see that $S = \left\{\frac{n}{2^m} \in \mathbb{Q} \,\middle|\, n, m \in \{0\} \cup \mathbb{N}\right\}$, the non-negative dyadic rationals. In particular, if $x = \frac{n}{2^m} \in S$ then $x^2 = \frac{n^2}{2^{2m}}$, a non-negative integer over a non-negative power of two, and is therefore in $S$, which implies that I is true.

15. Let $f(x) = e^{3x+1}$. Find $\lim_{x\to 0} \frac{f(f(x)) - f(e)}{x}$.
A. $3e^{e^e}$  B. $\frac{1}{e}$  C. $9e^{3e+2}$  D. $9e^{6e^2}$  E. 0
Solution: First, we note that $f(0) = e$ and $f'(x) = 3e^{3x+1}$. For the function $f \circ f$ we have $(f \circ f)(0) = f(e)$, which means that
$$\lim_{x\to 0} \frac{f(f(x)) - f(e)}{x} = \lim_{x\to 0} \frac{f(f(x)) - f(f(0))}{x - 0}$$
is the derivative of $f \circ f$ at the point 0. Using the chain rule, this derivative is $f'(f(0))f'(0) = (3e^{3e+1})(3e^1) = 9e^{3e+2}$.

16. Find the smallest natural number $m$ so that $\sum_{n=1}^m \lfloor\log_{10}(n)\rfloor > 2024$. (The notation $\lfloor x\rfloor$ means the greatest integer less than or equal to $x$.)
A. $10^{2024}$  B. $10^{35}$  C. 899  D. 1190  E. 1044
Solution: $\lfloor\log_{10} n\rfloor = 0$ if $1 \le n \le 9$.
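The counting argument for Problem 16 can be cross-checked by direct summation. A quick sketch (not part of the original solutions), using the digit count of $m$ in place of a floating-point logarithm:

```python
# Problem 16: smallest m with sum of floor(log10(n)) for n = 1..m > 2024.
# For a positive integer m, floor(log10(m)) == len(str(m)) - 1.
total, m = 0, 0
while total <= 2024:
    m += 1
    total += len(str(m)) - 1
print(m)  # 1044, answer E
```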
For $10 \le n \le 99$, $\lfloor\log_{10} n\rfloor = 1$, so $\sum_{n=1}^{99}\lfloor\log_{10} n\rfloor = 90$. If $100 \le n \le 999$ then $\lfloor\log_{10} n\rfloor = 2$, so $\sum_{n=1}^{999}\lfloor\log_{10} n\rfloor = 90 + 2(900) = 1890$. If $1000 \le n \le 9999$, $\lfloor\log_{10} n\rfloor = 3$, and $2024 - 1890 = 134$. The first integer $k$ so that $3k > 134$ is 45, which means that the first integer $m$ so that $\sum_{n=1}^m\lfloor\log_{10} n\rfloor > 2024$ is $999 + 45 = 1044$.

17. Let $f(x) = ax^2 + bx + c$ be a quadratic function so that $f(1) + f(2) + \dots + f(n) = n^3$ for every positive integer $n$. What is $abc$?
A. $-6$  B. $-8$  C. $-9$  D. $-10$  E. $-13$
Solution: We plug in the integers 1, 2, 3 to get the equations
$$f(1) = a + b + c = 1^3 = 1$$
$$f(1) + f(2) = 5a + 3b + 2c = 2^3 = 8$$
$$f(1) + f(2) + f(3) = 14a + 6b + 3c = 3^3 = 27$$
Subtracting twice the first equation from the second and three times the first equation from the third gives us $3a + b = 6$ and $11a + 3b = 24$. Subtracting three times the first of these from the second gives $2a = 6$, so $a = 3$. Substituting gives $b = -3$ and $c = 1$. The product is therefore $(3)(-3)(1) = -9$. It is worth noting that this does actually give a polynomial satisfying the requirements in question, since
$$\sum_{i=1}^n 3i^2 - \sum_{i=1}^n 3i + \sum_{i=1}^n 1 = 3\left(\frac{n(n+1)(2n+1)}{6}\right) - 3\left(\frac{n(n+1)}{2}\right) + n = n^3.$$

18. Let $f(x)$ be a degree four polynomial so that $f(x) < 0$ for all real $x$. Which of the following must be true about $g(x) = f(x) + f'(x) + f''(x) + f'''(x) + f^{(4)}(x)$?
A. $g'''(x) = 0$ for all $x$.  B. $g(x)$ has degree five.  C. $g(x) > 0$ for some real $x$.  D. $g(x) < 0$ for all real $x$.  E. None of these need to be true.
Solution: First, observe that since $f$ is a degree four polynomial, its graph's end behavior in both directions corresponds to the coefficient of the highest power term, which must be negative. Each of the derivatives of $f$ has lower degree than $f$, so $g$ is degree four and has the same leading coefficient as $f$, which is negative. The maximum value of $g$ must occur at a point $c$ where $g'(c) = f'(c) + f''(c) + f'''(c) + f^{(4)}(c) + f^{(5)}(c) = 0$, and $f^{(5)}(c) = 0$ since $f$ is degree four. Hence, $g(c) = f(c) < 0$.
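Problem 17's polynomial can be verified directly; this sketch (not part of the original solutions) checks the summation identity for many $n$:

```python
# Problem 17: verify f(x) = 3x^2 - 3x + 1 satisfies f(1) + ... + f(n) = n^3.
def f(x):
    return 3 * x * x - 3 * x + 1

assert all(sum(f(i) for i in range(1, n + 1)) == n ** 3
           for n in range(1, 200))
print(3 * (-3) * 1)  # abc = -9, answer C
```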
Thus, $g(x) < 0$ for all real $x$, so (D) must be true. We have already noted that (B) and (C) are false, and a degree four polynomial's third derivative is degree one and can't be identically zero, so none of (A), (B) or (C) can be true.

19. What is the number of positive integer solutions $(x, y)$ of the equation $x^2 + y^2 = 4x + 4y + 5 - 2xy$?
A. 0  B. 1  C. 3  D. 4  E. Infinitely many
Solution: We write this as $x^2 + y^2 + 2xy - 4(x+y) - 5 = 0$, which is the same as $(x+y)^2 - 4(x+y) - 5 = 0$. If we treat $x + y$ as a variable, this factors as $((x+y) + 1)((x+y) - 5) = 0$, so $x + y = 5$. The possible solutions are thus $(1,4), (2,3), (3,2), (4,1)$, for a total of 4 pairs of positive integers $(x, y)$.

20. Let $f$ be a function whose domain is all real numbers and whose function values are real numbers. Consider the following statement: "For every positive real number $\epsilon$ there is a positive real number $\delta$ so that, for each real number $x$, it is true that if $0 < |x - 2| < \delta$ then $|f(x) - 5| < \epsilon$." Which of the following statements about $f$ is true if and only if the statement above is false?
A. "For every positive real number $\epsilon$ there is some positive real number $\delta$ so that for some real number $x$ it is true that $0 < |x - 2| < \delta$ but $|f(x) - 5| \ge \epsilon$."
B. "For some positive real number $\epsilon$ there is a positive real number $\delta$ so that for some real number $x$ it is true that $0 < |x - 2| < \delta$ and $|f(x) - 5| \ge \epsilon$."
C. "For every positive real number $\delta$ there is no positive real number $\epsilon$ so that $|x - 2| \ge \epsilon$ or $|x - 2| = 0$ and $|f(x) - 5| < \delta$."
D. "For each negative real number $\epsilon$ there is no negative real number $\delta$ so that $\delta < |x - 2|$ and $\epsilon < |f(x) - 5|$."
E. "There is some positive real number $\epsilon$ so that for each positive real number $\delta$ there is some real number $x$ so that $0 < |x - 2| < \delta$ and $|f(x) - 5| \ge \epsilon$."
Solution: It may be helpful to use rules for logical negation with symbols.
If $P(x)$ is the statement $0 < |x - 2| < \delta$ and $Q(x)$ is the statement $|f(x) - 5| < \epsilon$, then we could formulate the statement as
$$\forall\epsilon > 0\,\big(\exists\delta > 0\,(\forall x\,(\neg P(x) \lor Q(x)))\big),$$
the negation of which switches the "for every" symbol to "there exists" and vice versa, "or" to "and," and negates atomic statements about free variables to give
$$\exists\epsilon > 0\,\big(\forall\delta > 0\,(\exists x\,(P(x) \land \neg Q(x)))\big),$$
which is the same as E.

21. Let $\sigma(n)$ be the sum of the positive divisors of $n$ (including 1 and $n$ itself) for each positive integer $n$. Find the largest of the numbers $\sigma(1), \sigma(2), \dots, \sigma(50)$.
A. 106  B. 88  C. 93  D. 122  E. 124 (not 96 as erroneously listed on the exam)
Solution: Note: this problem did not have the correct answer choice listed and was an error on the exam. The prime numbers less than 50 are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, and 47. The positive divisors of an integer are one, the integer itself, and each product of powers of the primes in its prime factor decomposition that divides the integer. For each prime number $p$, then, $\sigma(p) = p + 1$. None of the primes listed could be numbers at which $\sigma$ achieves a maximum, since $\sigma(50) = 1 + 2 + 5 + 10 + 25 + 50 = 93$ exceeds $\sigma(p)$ for each prime $p < 50$. Likewise, a product of two primes to the first power $p, q$ has $\sigma(pq) = 1 + p + q + pq$; for an integer less than fifty of this form, $p$ and $q$ would both have to be less than 25, so the sum would be less than 93 again. Thus, we are only interested in products of powers of two primes where at least one power is higher than one, or products of three or more prime numbers, and we would, of course, want the power of each prime in the decomposition to be as high as possible without the product exceeding fifty.
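Problem 21's divisor sums can also be checked by brute force, which sidesteps the case analysis entirely. A sketch (not part of the original solutions):

```python
# Problem 21: sigma(n) = sum of positive divisors of n; maximize over 1..50.
def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

best = max(range(1, 51), key=sigma)
print(best, sigma(best))  # 48 124, confirming the corrected answer E
```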
This narrows down the list to $(2^2)(3^2)$, $(2^4)(3)$, $(2)(5^2)$, $(2^2)(7)$, $(2^2)(11)$, $(3^2)(5)$, $(2)(3)(7)$, $(2)(3)(5)$ as the only possibilities. Checking each directly, we have
$$\sigma(36) = 1+2+3+4+6+9+12+18+36 = 91, \qquad \sigma(42) = 1+2+3+6+7+14+21+42 = 96,$$
$$\sigma(28) = 1+2+4+7+14+28 = 56, \qquad \sigma(44) = 1+2+4+11+22+44 = 84,$$
$$\sigma(45) = 1+3+5+9+15+45 = 78, \qquad \sigma(30) = 1+2+3+5+6+10+15+30 = 72,$$
$$\sigma(48) = 1+2+3+4+6+8+12+16+24+48 = 124.$$
The largest of these is $\sigma(48) = 124$ (on the exam the choice listed was 96, which was incorrect, and grading adjustments to the exam results had to be made accordingly).

22. What is the ones digit of $2^{2024} + 3^{2024} + 4^{2024} + 5^{2024} + 6^{2024}$?
A. 2  B. 4  C. 6  D. 8  E. 0
Solution: We use modular arithmetic mod ten. First, note that $3^2 = 9 \equiv -1 \pmod{10}$, so $3^{2024} = (3^2)^{1012} \equiv (-1)^{1012} = 1$, and the ones digit of $3^{2024}$ is 1. Every power of five has a ones digit of 5, so the ones digit of $5^{2024}$ is 5. The ones digits of powers of 2 cycle $2, 4, 8, 6$ with period 4; since $2024$ is divisible by 4, the ones digit of $2^{2024}$ is 6. Since $4^{2024} = 2^{4048}$ and 4048 is also divisible by 4, the ones digit of $4^{2024}$ is 6 as well. Any power of six ends in a six, so the ones digit of $6^{2024}$ is 6. Adding those digits we have $6 + 1 + 6 + 5 + 6 = 24$, which is the same as 4 mod 10. Hence, the ones digit is a 4.

23. Evaluate the series $\sum_{i=1}^\infty \frac{2i-1}{2^i} = \frac{1}{2} + \frac{3}{4} + \frac{5}{8} + \frac{7}{16} + \frac{9}{32} + \dots$
A. Infinity  B. 4  C. 2  D. $\frac{19}{18}$  E. 3
Solution: We split the series into separate geometric series, writing the sum as:
$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = 1$$
$$\frac{2}{4} + \frac{2}{8} + \frac{2}{16} + \dots = 1$$
$$\frac{2}{8} + \frac{2}{16} + \frac{2}{32} + \dots = \frac{1}{2}$$
$$\frac{2}{16} + \frac{2}{32} + \frac{2}{64} + \dots = \frac{1}{4}$$
and so on.
The totals of these series, starting at the second series, form the geometric series $1 + \frac{1}{2} + \frac{1}{4} + \dots = 2$. Thus, the total (with the first series sum) is 3.

24. A car travels along a straight road. The velocity of the car is $v(t) = 3t^2 - 18t + 24$ meters per second, where $t$ is in seconds. Find the total distance traveled by the car in the time interval from $t = 0$ to $t = 4$ seconds.
A. 16 meters  B. 15 meters  C. 24 meters  D. 80 meters  E. 32 meters
Solution: The car is moving forward when the velocity is positive and backwards when the velocity is negative. To find the distance traveled, we must take the sum of the absolute values of the displacements over each maximal interval on which the velocity does not change sign. Setting $3t^2 - 18t + 24 = 0$ we have $3(t^2 - 6t + 8) = 0$, so $3(t-4)(t-2) = 0$, and the zeroes of the velocity are 2 and 4. Checking the sign on each interval tells us the velocity is non-negative on $[0, 2]$ and non-positive on $[2, 4]$. We then take an antiderivative of velocity to get $s = t^3 - 9t^2 + 24t$ (linear position if we start at position $s = 0$ when $t = 0$). The total distance traveled is then
$$\left|\int_0^2 v(t)\,dt\right| + \left|\int_2^4 v(t)\,dt\right| = \left[t^3 - 9t^2 + 24t\right]_0^2 + \left|\left[t^3 - 9t^2 + 24t\right]_2^4\right| = 20 + (20 - 16) = 24 \text{ meters}.$$

25. Two semicircles are inscribed in a square of side length 2, with the base of each semicircle equal to a side of the square, where the two sides meet at a vertex. Find the area of the intersection of the two circles. [Figure in the original: the square with the two inscribed semicircles.]
A. $1 + \frac{\pi}{4}$  B. $\frac{\pi}{2} - 1$  C. $4 - \pi$  D. $\frac{3\pi - 4}{4}$  E. $\frac{\pi + 1}{4}$
Solution: Label the vertex where the bases of the two semicircles meet $O$ and the point of intersection of the two circles $P$; label the point a distance one along the bottom edge of the square (the midpoint) $M$, and the point a distance one along the left edge $H$. Thus, $MO = MP = HP = HO = 1$. We can also see that $\angle MOH = \frac{\pi}{2}$ and $\angle OHP = \angle OMP$, from which we deduce that each angle of quadrilateral $MOHP$ is $\frac{\pi}{2}$, so $MOHP$ is a square of side length one and therefore area 1.
The quantity $\frac{\pi}{4}$ is the area of the quarter circle with vertices $M, P, O$ and of the quarter circle with vertices $H, P, O$, so $2\left(\frac{\pi}{4}\right) = S + 1$, where $S$ is the area of the shaded region (which is included twice in the sum of the areas of the quarter circles). The result is $S = \frac{\pi}{2} - 1$.

26. A drawer contains eight blue socks, eight white socks, and eight green socks. If a person randomly draws four socks from the drawer, what is the probability that the person has drawn two socks of one color and two socks of another color (two matching pairs of socks of two different colors)?
A. $\frac{56}{231}$  B. $\frac{1}{2}$  C. $\frac{112}{251}$  D. $\frac{56}{253}$  E. $\frac{1}{4}$
Solution: The number of ways to draw a pair of white socks and a pair of green socks is $\binom{8}{2}\binom{8}{2}$. The number of ways to draw a pair of white and blue and the number of ways to draw a pair of blue and green are the same, so the number of ways to draw two pairs of different colors is $3\binom{8}{2}^2 = 3(28)^2$. The number of ways to draw four socks from twenty-four is $\binom{24}{4} = (23)(22)(21)$. Thus, the probability of drawing two pairs of two different colors is $\frac{3(28)^2}{(23)(22)(21)} = \frac{56}{253}$.

27. A quadrilateral $ABCD$ is inscribed in a circle. The side lengths of the quadrilateral are $AB = 1$, $BC = 7$, $CD = 5$ and $DA = 5$. Find the area enclosed by the quadrilateral.
A. $2\sqrt{165}$  B. 12  C. 16  D. 35  E. There is not enough information to uniquely determine the area
Solution: First, note that by the Inscribed Quadrilateral Theorem, opposite angles of a quadrilateral inscribed inside a circle are supplementary, and therefore the cosines of opposite angles are negatives of each other. Let $l = AC$, and let $\theta = \angle ABC$. Then $\pi - \theta = \angle ADC$. By the law of cosines,
$$\cos\theta = \frac{7^2 + 1^2 - l^2}{2(1)(7)} = -\cos(\pi - \theta) = -\frac{5^2 + 5^2 - l^2}{2(5)(5)}.$$
This equation is true if the numerators are both zero, which happens if $l^2 = 50$, which means that $\cos\theta = \cos(\pi - \theta) = 0$, so $\theta = \frac{\pi}{2}$.
Thus, $\triangle ABC$ and $\triangle ADC$ are right triangles, so to find their areas we just multiply their leg lengths (the sides that are not $l$) and divide by two. The total area enclosed by quadrilateral $ABCD$ is therefore $\frac{7}{2} + \frac{25}{2} = 16$.

28. A total of 200 students are enrolled in the performing arts program at Heresville High School. The program has three classes: orchestra, choir and theater. There are 75 students in choir, 91 students in orchestra and 119 students in theater. There are 26 students enrolled in both choir and orchestra, 39 students in both orchestra and theater and 34 students in both theater and choir. How many students are enrolled in all three classes?
A. 7  B. 14  C. 20  D. 42  E. 66
Solution: Let $C, O, T$ denote the sets of students enrolled in choir, orchestra, and theater respectively. Then
$$|C \cup O \cup T| = |C| + |T| + |O| - |C \cap T| - |C \cap O| - |O \cap T| + |C \cap O \cap T|.$$
Substituting the known cardinalities, we have $200 = 75 + 119 + 91 - (26 + 39 + 34) + |C \cap O \cap T|$, so $|C \cap O \cap T| = 14$. This can also be solved using systems of equations.

29. How many ways can you cover a rectangular grid of squares, two squares by nine squares, with domino-shaped tiles which can each tile two adjacent squares horizontally or two adjacent squares vertically? [Figure in the original: a sample domino tiling of a 2-by-9 grid.]
A. 512  B. 60  C. 48  D. 55  E. 62
Solution: Let $S_n$ be the number of ways to tile a $2 \times n$ rectangular grid with domino-shaped tiles. Then $S_1 = 1$ and $S_2 = 2$. For $n > 2$, we can tile the upper-left square with a vertical domino, in which case there are $S_{n-1}$ ways to tile the rest of the grid, or we can tile the upper-left square with a horizontal domino, in which case the lower-left square must also be tiled with a horizontal domino, leaving $S_{n-2}$ ways to tile the rest of the grid. This means that $S_n = S_{n-1} + S_{n-2}$ for $n > 2$. Hence, the number of ways to tile a $2 \times n$ grid is the $(n+1)$st member of the Fibonacci sequence. The members of this sequence are 1, 2, 3, 5, 8, 13, 21, 34, 55 and so on, with $S_9 = 55$.
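The Fibonacci recurrence for Problem 29 can be run directly. A minimal sketch (not part of the original solutions):

```python
# Problem 29: tilings of a 2 x n grid satisfy S(n) = S(n-1) + S(n-2),
# with S(1) = 1 and S(2) = 2.
def tilings(n):
    a, b = 1, 2  # S(1), S(2)
    for _ in range(n - 2):
        a, b = b, a + b
    return b if n >= 2 else a

print(tilings(9))  # 55, answer D
```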
30. Twenty-five marbles are placed into two bags (so the total number of marbles in the two bags is twenty-five, not the number in each individual bag). The marbles are all either black or white. With this distribution of marbles, if a marble is chosen at random from each bag, then the probability that two black marbles are chosen is $\frac{17}{50}$. Let $\frac{m}{n}$ be the probability of picking two white marbles if you choose one marble at random from each bag, where $m$ and $n$ are relatively prime positive integers. What is $m + n$?
A. 13  B. 221  C. 27  D. 109  E. 5
Solution: Let $b_1, b_2$ be the numbers of black marbles in each bag, and assume $t_1 > t_2$ are the totals of the marbles in the bags with $b_1$ and $b_2$ black marbles respectively. Then $\frac{b_1}{t_1}\cdot\frac{b_2}{t_2} = \frac{17}{50}$. It follows that the total number of marbles in each bag must be divisible by five (or $t_1 t_2$ in reduced terms could not be divisible by five). Thus, the possible totals are $(15, 10)$ and $(20, 5)$, listing the larger number first. If the totals are $(20, 5)$ then $t_1 t_2 = 100$, so $b_1 b_2 = 34$, which could only happen if $b_1 = 17$ and $b_2 = 2$. If the totals are $(15, 10)$ then $t_1 t_2 = 150$, so $b_1 b_2 = 51$. This is impossible, since one of the factors would have to be divisible by 17, and neither bag contains seventeen or more marbles. Hence, the only possibility is that there are 20 marbles in the first bag, of which 3 are white, and 5 in the second, of which 3 are white. The probability of drawing a white marble from each bag is $\left(\frac{3}{20}\right)\left(\frac{3}{5}\right) = \frac{9}{100}$, so $m + n = 109$.
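The case analysis in Problem 30 can be replaced by an exhaustive search over all bag splits and black-marble counts. A sketch (not part of the original solutions), using exact rational arithmetic:

```python
from fractions import Fraction

# Problem 30: search all splits t1 + t2 = 25 and black counts (b1, b2)
# with P(both black) = 17/50, collecting the resulting P(both white).
target = Fraction(17, 50)
white_probs = set()
for t1 in range(1, 25):
    t2 = 25 - t1
    for b1 in range(t1 + 1):
        for b2 in range(t2 + 1):
            if Fraction(b1, t1) * Fraction(b2, t2) == target:
                white_probs.add(
                    Fraction(t1 - b1, t1) * Fraction(t2 - b2, t2))

print(white_probs)  # only 9/100 occurs, so m + n = 9 + 100 = 109
```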
714
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/87db5678b0405c1087b0e85181b64c1a_MIT18_S096F15_Ses12_14.pdf
4 Concentration Inequalities, Scalar and Matrix Versions

4.1 Large Deviation Inequalities

Concentration and large deviation inequalities are among the most useful tools when understanding the performance of some algorithms. In a nutshell, they control the probability of a random variable being very far from its expectation. The simplest such inequality is Markov's inequality:

Theorem 4.1 (Markov's Inequality) Let $X \ge 0$ be a non-negative random variable with $\mathbb{E}[X] < \infty$. Then,
$$\mathrm{Prob}\{X > t\} \le \frac{\mathbb{E}[X]}{t}. \quad (31)$$

Proof. Let $t > 0$. Define a random variable $Y_t$ as
$$Y_t = \begin{cases} 0 & \text{if } X \le t \\ t & \text{if } X > t. \end{cases}$$
Clearly, $Y_t \le X$, hence $\mathbb{E}[Y_t] \le \mathbb{E}[X]$, and $t\,\mathrm{Prob}\{X > t\} = \mathbb{E}[Y_t] \le \mathbb{E}[X]$, concluding the proof. $\square$

Markov's inequality can be used to obtain many more concentration inequalities. Chebyshev's inequality is a simple inequality that controls fluctuations from the mean.

Theorem 4.2 (Chebyshev's inequality) Let $X$ be a random variable with $\mathbb{E}[X^2] < \infty$. Then,
$$\mathrm{Prob}\{|X - \mathbb{E}X| > t\} \le \frac{\mathrm{Var}(X)}{t^2}.$$

Proof. Apply Markov's inequality to the random variable $(X - \mathbb{E}[X])^2$ to get:
$$\mathrm{Prob}\{|X - \mathbb{E}X| > t\} = \mathrm{Prob}\{(X - \mathbb{E}X)^2 > t^2\} \le \frac{\mathbb{E}[(X - \mathbb{E}X)^2]}{t^2} = \frac{\mathrm{Var}(X)}{t^2}. \;\square$$

4.1.1 Sums of independent random variables

In what follows we'll show two useful inequalities involving sums of independent random variables. The intuitive idea is that if we have a sum of independent random variables $X = X_1 + \dots + X_n$, where the $X_i$ are i.i.d. centered random variables, then while the value of $X$ can be of order $O(n)$, it will very likely be of order $O(\sqrt{n})$ (note that this is the order of its standard deviation). The inequalities that follow are ways of very precisely controlling the probability of $X$ being larger than $O(\sqrt{n})$. While we could use, for example, Chebyshev's inequality for this, in the inequalities that follow the probabilities will be exponentially small, rather than polynomially small, which will be crucial in many applications to come.

Theorem 4.3 (Hoeffding's Inequality) Let $X_1, X_2, \dots$
, $X_n$ be independent bounded random variables, i.e., $|X_i| \le a$ and $\mathbb{E}[X_i] = 0$. Then,
$$\mathrm{Prob}\left\{\left|\sum_{i=1}^n X_i\right| > t\right\} \le 2\exp\left(-\frac{t^2}{2na^2}\right).$$

The inequality implies that fluctuations larger than $O(\sqrt{n})$ have small probability. For example, for $t = a\sqrt{2n\log n}$ we get that the probability is at most $\frac{2}{n}$.

Proof. We first get a probability bound for the event $\sum_{i=1}^n X_i > t$. The proof, again, will follow from Markov. Since we want an exponentially small probability, we use a classical trick that involves exponentiating with any $\lambda > 0$ and then choosing the optimal $\lambda$:
$$\mathrm{Prob}\left\{\sum_{i=1}^n X_i > t\right\} = \mathrm{Prob}\left\{e^{\lambda\sum_{i=1}^n X_i} > e^{\lambda t}\right\} \quad (32)$$
$$\le \frac{\mathbb{E}\left[e^{\lambda\sum_{i=1}^n X_i}\right]}{e^{t\lambda}} = e^{-t\lambda}\prod_{i=1}^n \mathbb{E}\left[e^{\lambda X_i}\right], \quad (33)$$
where the penultimate step follows from Markov's inequality and the last equality follows from independence of the $X_i$'s. We now use the fact that $|X_i| \le a$ to bound $\mathbb{E}[e^{\lambda X_i}]$. Because the function $f(x) = e^{\lambda x}$ is convex,
$$e^{\lambda x} \le \frac{a+x}{2a}e^{\lambda a} + \frac{a-x}{2a}e^{-\lambda a}, \quad \text{for all } x \in [-a, a].$$
Since, for all $i$, $\mathbb{E}[X_i] = 0$, we get
$$\mathbb{E}\left[e^{\lambda X_i}\right] \le \mathbb{E}\left[\frac{a+X_i}{2a}e^{\lambda a} + \frac{a-X_i}{2a}e^{-\lambda a}\right] \le \frac{1}{2}\left(e^{\lambda a} + e^{-\lambda a}\right) = \cosh(\lambda a).$$
Note that¹⁵ $\cosh(x) \le e^{x^2/2}$ for all $x \in \mathbb{R}$. Hence,
$$\mathbb{E}\left[e^{\lambda X_i}\right] \le \mathbb{E}\left[e^{(\lambda X_i)^2/2}\right] \le e^{(\lambda a)^2/2}.$$
Together with (32), this gives
$$\mathrm{Prob}\left\{\sum_{i=1}^n X_i > t\right\} \le e^{-t\lambda}\prod_{i=1}^n e^{(\lambda a)^2/2} = e^{-t\lambda}e^{n(\lambda a)^2/2}.$$

[Footnote 15: This follows immediately from the Taylor expansions $\cosh(x) = \sum_{n=0}^\infty \frac{x^{2n}}{(2n)!}$, $e^{x^2/2} = \sum_{n=0}^\infty \frac{x^{2n}}{2^n n!}$, and $(2n)! \ge 2^n n!$.]

This inequality holds for any choice of $\lambda \ge 0$, so we choose the value of $\lambda$ that minimizes
$$\min_\lambda \left\{\frac{n(\lambda a)^2}{2} - t\lambda\right\}.$$
Differentiating readily shows that the minimizer is given by $\lambda = \frac{t}{na^2}$, which satisfies $\lambda > 0$. For this choice of $\lambda$,
$$\frac{n(\lambda a)^2}{2} - t\lambda = \frac{1}{n}\left(\frac{t^2}{2a^2} - \frac{t^2}{a^2}\right) = -\frac{t^2}{2na^2}.$$
Thus,
$$\mathrm{Prob}\left\{\sum_{i=1}^n X_i > t\right\} \le e^{-\frac{t^2}{2na^2}}.$$
By using the same argument on $\sum_{i=1}^n(-X_i)$, and union bounding over the two events, we get
$$\mathrm{Prob}\left\{\left|\sum_{i=1}^n X_i\right| > t\right\} \le 2e^{-\frac{t^2}{2na^2}}. \;\square$$

Remark 4.4 Let's say that we have random variables $r_1, \dots, r_n$ i.i.d.
distributed as − ri =   1 with probability p/2  0 with probability 1 −p 1 with probability p/2. Then, E(ri) = 0 and |ri| ≤1 so Hoeffding’s inequality gives: Prob ( X n ri i=1 > t ) 2 2 exp  t ≤ − . 2n  Intuitively, the smallest p is the more concentrated |Pn i=1 ri| should be, however Hoeffding’s in-equality does not capture this behavior. n A natural way to quantify this intuition is by noting that the variance of i=1 ri depends on p as Var(ri) = p. The inequality that follows, Bernstein’s inequality, uses the variance of the summands to improve over Hoeffding’s inequality. P The way this is going to be achieved is by strengthening the proof above, more specifically in step (33) we will use the bound on the variance to get a better estimate on E[eλXi] essentially by realizing that if Xi is centered, EX2 i = σ2 2 , and |Xi| ≤a then, for k ≥2, EXk i ≤σ2ak−2 =  σ a a2  k. 57 Theorem 4.5 (Bernstein’s Inequality) Let X1, X2, . . . , Xn be independent centered bounded ran-dom variables, i.e., |Xi| ≤a and E[X 2 2 i] = 0, with variance E[Xi ] = σ . Then, n t2 Prob ( X Xi > t 2 exp i=1 ) ≤ −2nσ2 + 2 3at ! . Remark 4.6 Before proving Bernstein’s Inequality, note that on the example of Remark 4.4 we get Prob ( n X i=1 ri > t ) ≤2 exp − t2 2np + 2 , t 3 ! which exhibits a dependence on p and, for small values of p is considerably smaller than what Hoeffd-ing’s inequality gives. Proof. As before, we will prove (X n 2 Prob Xi t i=1 ) ≤exp t > −2nσ2 + 2 , at 3 ! − n and then union bound with the same result for i=1 Xi, to prove the Theorem. For any λ > 0 we have P Prob (X n Xi > t i=1 ) = Prob{eλ P Xi > eλt} E[eλ ≤ P Xi] eλt n = e−λt Y E[eλXi] i=1 Now comes the source of the improvement over Hoeffding’s, E[eλXi] = E " ∞ 1 + λXi + X λmXm i m=2 m! ≤ 1 + ∞ X m=2 λmam−2σ2 m! σ2 = 1 + ( m X ∞ λa)m a2 =2 m! 
= 1 + σ2 e a2  λa −1 −λa Therefore,  Prob (X n Xi > t i=1 ) ≤e−λt  σ2 1 + a2  eλa −1 −λa n 58 We will use a few simple inequalities (that can be easily proved with calculus) such as16 1 + x ≤ ex, for all x ∈R. This means that, σ2 2 σ (eλa 1 λa) 1 + eλa 1 a  − −λa  ≤e 2 a −− , 2 which readily implies ( n ) 2 ) Prob Xi > t i=1 ≤e− nσ λt 2 (eλa e a −1−λa . As before, we try to find the value X of λ > 0 that minimizes nσ2 min λt + (eλa 1 λa) 2 λ  − a − −  Differentiation gives nσ2 −t + (aeλa −a) = 0 a2 which implies that the optimal choice of λ is given by 1 at λ∗= log 1 + a  nσ2  If we set at u = , (34) nσ2 then λ∗= 1 log(1 + u). a Now, the value of the minimum is given by nσ2 nσ2 −∗ ∗ λ t + (eλ a −1 −λ∗a) = − [(1 + u) log(1 + u) a2 a2 −u] . Which means that, Prob ( n nσ Xi > t i=1 ) 2 ≤ exp  −a2 {(1 + u) log(1 + u) −u}  The rest of the proof follo X ws by noting that, for every u > 0, u (1 + u) log(1 + u) −u ≥ , (35) 2 + 2 u 3 which implies: (X n ) nσ2 u Prob Xi > t i=1 ≤ exp −a2 2 + 2 u 3 ! = exp t2 − . 2nσ2 + 2at 3 ! 2 16In fact y = 1 + x is a tangent line to the graph of f(x) = ex. 59 4.2 Gaussian Concentration One of the most important results in concentration of measure is Gaussian concentration, although being a concentration result specific for normally distributed random variables, it will be very useful throughout these lectures. Intuitively it says that if F : Rn →R is a function that is stable in terms of its input then F(g) is very well concentrated around its mean, where g ∈N(0, I). More precisely: Theorem 4.7 (Gaussian Concentration) Let X = [X1, . . . , Xn]T be a vector with i.i.d. standard Gaussian entries and F : Rn →R a σ-Lipschitz function (i.e.: |F(x) −F(y)| ≤σ∥x −y∥, for all x, y ∈Rn). Then, for every t ≥0 ≤  t2 Prob {|F(X) −EF(X)| ≥t} 2 exp −2σ2  . For the sake of simplicity we will show the proof for a slightly weaker bound (in terms of the constant inside the exponent): Prob {|F(X) −EF(X)| ≥t} ≤2 exp  −2 π2 t2 σ2 . 
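Before turning to the proof, the statement is easy to sanity-check numerically. The following sketch (not part of the original notes) takes $F(x) = \max_i x_i$, which is 1-Lipschitz, and compares the empirical deviation probability with the bound $2e^{-t^2/2}$ from Theorem 4.7:

```python
import math
import random

# Empirical check of Gaussian concentration for the 1-Lipschitz function
# F(x) = max_i x_i: Theorem 4.7 predicts
#   Prob{|F(X) - E F(X)| >= t} <= 2 exp(-t^2 / 2).
random.seed(0)
n, trials, t = 50, 5000, 2.0

samples = [max(random.gauss(0.0, 1.0) for _ in range(n)) for _ in range(trials)]
mean_f = sum(samples) / trials

empirical = sum(abs(s - mean_f) >= t for s in samples) / trials
bound = 2.0 * math.exp(-t * t / 2.0)

print(empirical, bound)
assert empirical <= bound
```

In practice the empirical tail is far below the bound: the maximum of $n$ gaussians fluctuates on a much smaller scale than a single coordinate, which the Lipschitz constant alone cannot see.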
This exposition closely follows the proof of Theorem 2.1.12 in [Tao12]; the original argument is due to Maurey and Pisier. For a proof with the optimal constants see, for example, Theorem 3.25 in the notes [vH14]. We will also assume the function $F$ is smooth — this is actually not a restriction, as a limiting argument can generalize the result from smooth functions to general Lipschitz functions.

Proof. If $F$ is smooth, then it is easy to see that the Lipschitz property implies that, for every $x\in\mathbb{R}^n$, $\|\nabla F(x)\| \leq \sigma$. By subtracting a constant from $F$, we can assume that $\mathbb{E}F(X) = 0$. Also, it is enough to show the one-sided bound
$$\mathrm{Prob}\left\{F(X) - \mathbb{E}F(X) \geq t\right\} \leq \exp\left(-\frac{2}{\pi^2}\frac{t^2}{\sigma^2}\right),$$
since obtaining the same bound for $-F(X)$ and taking a union bound gives the result.

We start by using the same idea as in the proofs of the large deviation inequalities above: for any $\lambda > 0$, Markov's inequality implies that
$$\mathrm{Prob}\{F(X) \geq t\} = \mathrm{Prob}\{\exp(\lambda F(X)) \geq \exp(\lambda t)\} \leq \frac{\mathbb{E}[\exp(\lambda F(X))]}{\exp(\lambda t)}.$$
This means we need to upper bound $\mathbb{E}[\exp(\lambda F(X))]$ using a bound on $\|\nabla F\|$. The idea is to introduce a random independent copy $Y$ of $X$. Since $\exp(\lambda\,\cdot)$ is convex, Jensen's inequality implies that
$$\mathbb{E}[\exp(-\lambda F(Y))] \geq \exp(-\lambda\mathbb{E}F(Y)) = \exp(0) = 1.$$
Hence, since $X$ and $Y$ are independent,
$$\mathbb{E}\left[\exp\left(\lambda[F(X)-F(Y)]\right)\right] = \mathbb{E}[\exp(\lambda F(X))]\,\mathbb{E}[\exp(-\lambda F(Y))] \geq \mathbb{E}[\exp(\lambda F(X))].$$
Now we use the Fundamental Theorem of Calculus along a circular arc from $Y$ to $X$:
$$F(X) - F(Y) = \int_0^{\pi/2}\frac{\partial}{\partial\theta}F\left(Y\cos\theta + X\sin\theta\right)d\theta.$$
The advantage of using the circular arc is that, for any $\theta$, $X_\theta := Y\cos\theta + X\sin\theta$ is another random variable with the same distribution; so is its derivative with respect to $\theta$, $X_\theta' = -Y\sin\theta + X\cos\theta$. Moreover, $X_\theta$ and $X_\theta'$ are independent; in fact,
$$\mathbb{E}\left[X_\theta {X_\theta'}^T\right] = \mathbb{E}\left[\left(Y\cos\theta + X\sin\theta\right)\left(-Y\sin\theta + X\cos\theta\right)^T\right] = 0.$$
We use Jensen's inequality again (with respect to the integral now) to get
$$\exp\left(\lambda[F(X)-F(Y)]\right) = \exp\left(\lambda\frac{\pi}{2}\cdot\frac{1}{\pi/2}\int_0^{\pi/2}\frac{\partial}{\partial\theta}F(X_\theta)\,d\theta\right) \leq \frac{1}{\pi/2}\int_0^{\pi/2}\exp\left(\lambda\frac{\pi}{2}\frac{\partial F(X_\theta)}{\partial\theta}\right)d\theta.$$
Using the chain rule,
$$\exp\left(\lambda[F(X)-F(Y)]\right) \leq \frac{2}{\pi}\int_0^{\pi/2}\exp\left(\lambda\frac{\pi}{2}\nabla F(X_\theta)\cdot X_\theta'\right)d\theta,$$
and taking expectations,
$$\mathbb{E}\exp\left(\lambda[F(X)-F(Y)]\right) \leq \frac{2}{\pi}\int_0^{\pi/2}\mathbb{E}\exp\left(\lambda\frac{\pi}{2}\nabla F(X_\theta)\cdot X_\theta'\right)d\theta.$$
If we condition on $X_\theta$, since $\left\|\lambda\frac{\pi}{2}\nabla F(X_\theta)\right\| \leq \lambda\frac{\pi}{2}\sigma$, the quantity $\lambda\frac{\pi}{2}\nabla F(X_\theta)\cdot X_\theta'$ is a gaussian random variable with variance at most $\left(\lambda\frac{\pi}{2}\sigma\right)^2$. This directly implies that, for every value of $X_\theta$,
$$\mathbb{E}_{X_\theta'}\exp\left(\lambda\frac{\pi}{2}\nabla F(X_\theta)\cdot X_\theta'\right) \leq \exp\left(\frac{1}{2}\left(\lambda\frac{\pi}{2}\sigma\right)^2\right).$$
Taking the expectation now in $X_\theta$, and putting everything together, gives
$$\mathbb{E}[\exp(\lambda F(X))] \leq \exp\left(\frac{1}{2}\left(\lambda\frac{\pi}{2}\sigma\right)^2\right),$$
which means that
$$\mathrm{Prob}\{F(X)\geq t\} \leq \exp\left(\frac{1}{2}\left(\lambda\frac{\pi}{2}\sigma\right)^2 - \lambda t\right).$$
Optimizing for $\lambda$ gives
$$\lambda^* = \left(\frac{2}{\pi}\right)^2\frac{t}{\sigma^2},$$
which gives
$$\mathrm{Prob}\{F(X)\geq t\} \leq \exp\left(-\frac{2}{\pi^2}\frac{t^2}{\sigma^2}\right). \qquad \Box$$

4.2.1 Spectral norm of a Wigner matrix

We give an illustrative example of the utility of Gaussian concentration. Let $W\in\mathbb{R}^{n\times n}$ be a standard Gaussian Wigner matrix: a symmetric matrix with (otherwise) independent gaussian entries, where the off-diagonal entries have unit variance and the diagonal entries have variance $2$. Then $\|W\|$ depends on $\frac{n(n+1)}{2}$ independent (standard) gaussian random variables, and it is easy to see that it is a $\sqrt{2}$-Lipschitz function of these variables, since
$$\left|\left\|W^{(1)}\right\| - \left\|W^{(2)}\right\|\right| \leq \left\|W^{(1)} - W^{(2)}\right\| \leq \left\|W^{(1)} - W^{(2)}\right\|_F.$$
The symmetry of the matrix and the variance $2$ of the diagonal entries are responsible for the extra factor of $\sqrt{2}$. Using Gaussian concentration (Theorem 4.7) we immediately get
$$\mathrm{Prob}\left\{\|W\| \geq \mathbb{E}\|W\| + t\right\} \leq \exp\left(-\frac{t^2}{4}\right).$$
Since¹⁷ $\mathbb{E}\|W\| \leq 2\sqrt{n}$, we get

Proposition 4.8 Let $W\in\mathbb{R}^{n\times n}$ be a standard Gaussian Wigner matrix: a symmetric matrix with (otherwise) independent gaussian entries, where the off-diagonal entries have unit variance and the diagonal entries have variance $2$. Then,
$$\mathrm{Prob}\left\{\|W\| \geq 2\sqrt{n} + t\right\} \leq \exp\left(-\frac{t^2}{4}\right).$$

Note that this gives extremely precise control of the fluctuations of $\|W\|$. In fact, for $t = 2\sqrt{\log n}$ this gives
$$\mathrm{Prob}\left\{\|W\| \geq 2\sqrt{n} + 2\sqrt{\log n}\right\} \leq \exp\left(-\frac{4\log n}{4}\right) = \frac{1}{n}.$$

Footnote 17: It is an excellent exercise to prove $\mathbb{E}\|W\| \leq 2\sqrt{n}$ using Slepian's inequality.

4.2.2 Talagrand's concentration inequality

A remarkable result by Talagrand [Tal95], Talagrand's concentration inequality, provides an analogue of Gaussian concentration for bounded random variables.

Theorem 4.9 (Talagrand's concentration inequality, Theorem 2.1.13 in [Tao12]) Let $K>0$, and let $X_1,\ldots,X_n$ be independent bounded random variables, $|X_i|\leq K$ for all $1\leq i\leq n$. Let $F:\mathbb{R}^n\to\mathbb{R}$ be a $\sigma$-Lipschitz and convex function. Then, for any $t\geq 0$,
$$\mathrm{Prob}\left\{|F(X) - \mathbb{E}[F(X)]| \geq tK\right\} \leq c_1\exp\left(-c_2\frac{t^2}{\sigma^2}\right),$$
for positive constants $c_1$ and $c_2$.

Other useful similar inequalities (with explicit constants) are available in [Mas00].

4.3 Other useful large deviation inequalities

This section contains, without proof, some scalar large deviation inequalities that I have found useful.

4.3.1 Additive Chernoff bound

The additive Chernoff bound, also known as the Chernoff–Hoeffding theorem, concerns Bernoulli random variables.

Theorem 4.10 Given $0<p<1$ and $X_1,\ldots,X_n$ i.i.d. random variables distributed as Bernoulli($p$) (meaning each is $1$ with probability $p$ and $0$ with probability $1-p$), then, for any $\varepsilon > 0$:
$$\mathrm{Prob}\left\{\frac{1}{n}\sum_{i=1}^n X_i \geq p+\varepsilon\right\} \leq \left[\left(\frac{p}{p+\varepsilon}\right)^{p+\varepsilon}\left(\frac{1-p}{1-p-\varepsilon}\right)^{1-p-\varepsilon}\right]^n,$$
$$\mathrm{Prob}\left\{\frac{1}{n}\sum_{i=1}^n X_i \leq p-\varepsilon\right\} \leq \left[\left(\frac{p}{p-\varepsilon}\right)^{p-\varepsilon}\left(\frac{1-p}{1-p+\varepsilon}\right)^{1-p+\varepsilon}\right]^n.$$

4.3.2 Multiplicative Chernoff bound

There is also a multiplicative version (see, for example, Lemma 2.3.3 in [Dur06]), which is particularly useful.

Theorem 4.11 Let $X_1,\ldots,X_n$ be independent random variables taking values in $\{0,1\}$ (meaning they are Bernoulli distributed, but not necessarily identically distributed). Let $X = \sum_{i=1}^n X_i$ and $\mu = \mathbb{E}X$. Then, for any $\delta > 0$:
$$\mathrm{Prob}\left\{X > (1+\delta)\mu\right\} < \left[\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right]^\mu \quad\text{and}\quad \mathrm{Prob}\left\{X < (1-\delta)\mu\right\} < \left[\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right]^\mu.$$

4.3.3 Deviation bounds on $\chi^2$ variables

A particularly useful deviation inequality is Lemma 1 in Laurent and Massart [LM00]:

Theorem 4.12 (Lemma 1 in Laurent and Massart [LM00]) Let $X_1,\ldots,X_n$ be i.i.d. standard gaussian random variables ($\mathcal{N}(0,1)$), and $a_1,\ldots,a_n$ non-negative numbers. Let
$$Z = \sum_{k=1}^n a_k\left(X_k^2 - 1\right).$$
The following inequalities hold for any $x > 0$:
$$\mathrm{Prob}\left\{Z \geq 2\|a\|_2\sqrt{x} + 2\|a\|_\infty x\right\} \leq \exp(-x),$$
$$\mathrm{Prob}\left\{Z \leq -2\|a\|_2\sqrt{x}\right\} \leq \exp(-x),$$
where $\|a\|_2^2 = \sum_{k=1}^n a_k^2$ and $\|a\|_\infty = \max_{1\leq k\leq n}|a_k|$.

Note that if $a_k = 1$ for all $k$, then $Z$ is a centered $\chi^2$ with $n$ degrees of freedom, so this theorem immediately gives a deviation inequality for $\chi^2$ random variables.

4.4 Matrix Concentration

In many important applications, some of which we will see in the coming lectures, one needs a matrix version of the inequalities above. Given $\{X_k\}_{k=1}^n$ independent random symmetric $d\times d$ matrices, one is interested in deviation inequalities for
$$\lambda_{\max}\left(\sum_{k=1}^n X_k\right).$$
For example, a very useful adaptation of Bernstein's inequality exists for this setting.

Theorem 4.13 (Theorem 1.4 in [Tro12]) Let $\{X_k\}_{k=1}^n$ be a sequence of independent random symmetric $d\times d$ matrices. Assume that each $X_k$ satisfies $\mathbb{E}X_k = 0$ and $\lambda_{\max}(X_k) \leq R$ almost surely. Then, for all $t\geq 0$,
$$\mathrm{Prob}\left\{\lambda_{\max}\left(\sum_{k=1}^n X_k\right) \geq t\right\} \leq d\cdot\exp\left(-\frac{t^2}{2\sigma^2 + \frac{2}{3}Rt}\right), \quad\text{where } \sigma^2 = \left\|\sum_{k=1}^n \mathbb{E}X_k^2\right\|.$$
Note that $\|A\|$ denotes the spectral norm of $A$.

In what follows we will state and prove various matrix concentration results, somewhat similar to Theorem 4.13. Motivated by the derivation of Proposition 4.8, which allowed us to easily transform bounds on the expected spectral norm of a random matrix into tail bounds, we will mostly focus on bounding the expected spectral norm. Tropp's monograph [Tro15b] is a nice introduction to matrix concentration and includes a proof of Theorem 4.13 as well as many other useful inequalities.

A particularly important inequality of this type is for gaussian series; it is intimately related to the non-commutative Khintchine inequality [Pis03], and for that reason we will often refer to it as Non-commutative Khintchine (see, for example, (4.9) in [Tro12]).

Theorem 4.14 (Non-commutative Khintchine (NCK)) Let $A_1,\ldots,A_n\in\mathbb{R}^{d\times d}$ be symmetric matrices and $g_1,\ldots,g_n\sim\mathcal{N}(0,1)$ i.i.d. Then,
$$\mathbb{E}\left\|\sum_{k=1}^n g_kA_k\right\| \leq \left(2 + 2\log(2d)\right)^{\frac{1}{2}}\sigma, \quad\text{where } \sigma^2 = \left\|\sum_{k=1}^n A_k^2\right\|. \qquad (36)$$

Note that, akin to Proposition 4.8, we can also use Gaussian concentration to get a tail bound on $\left\|\sum_{k=1}^n g_kA_k\right\|$. We consider the function $F:\mathbb{R}^n\to\mathbb{R}$ given by $F(g) = \left\|\sum_{k=1}^n g_kA_k\right\|$ and estimate its Lipschitz constant: for $g, h\in\mathbb{R}^n$,
$$\left\|\sum_{k=1}^n g_kA_k\right\| - \left\|\sum_{k=1}^n h_kA_k\right\| \leq \left\|\sum_{k=1}^n(g_k - h_k)A_k\right\| = \max_{v:\|v\|=1} v^T\left(\sum_{k=1}^n(g_k-h_k)A_k\right)v = \max_{v:\|v\|=1}\sum_{k=1}^n(g_k-h_k)\,v^TA_kv$$
$$\leq \max_{v:\|v\|=1}\sqrt{\sum_{k=1}^n(g_k-h_k)^2}\sqrt{\sum_{k=1}^n\left(v^TA_kv\right)^2} = \sqrt{\max_{v:\|v\|=1}\sum_{k=1}^n\left(v^TA_kv\right)^2}\;\|g-h\|_2,$$
where the first inequality made use of the triangle inequality and the last one of the Cauchy–Schwarz inequality. This motivates us to define a new parameter, the weak variance $\sigma_*$.

Definition 4.15 (Weak Variance (see, for example, [Tro15b])) Given symmetric matrices $A_1,\ldots,A_n\in\mathbb{R}^{d\times d}$, we define the weak variance parameter as
$$\sigma_*^2 = \max_{v:\|v\|=1}\sum_{k=1}^n\left(v^TA_kv\right)^2.$$

This means that, using Gaussian concentration (and setting $t = u\sigma_*$), we have
$$\mathrm{Prob}\left\{\left\|\sum_{k=1}^n g_kA_k\right\| \geq \left(2+2\log(2d)\right)^{\frac{1}{2}}\sigma + u\sigma_*\right\} \leq \exp\left(-\frac{1}{2}u^2\right). \qquad (37)$$
So although the expected value of $\left\|\sum_{k=1}^n g_kA_k\right\|$ is controlled by the parameter $\sigma$, its fluctuations seem to be controlled by $\sigma_*$. We compare the two quantities in the following proposition.

Proposition 4.16 Given symmetric matrices $A_1,\ldots,A_n\in\mathbb{R}^{d\times d}$, recall that
$$\sigma^2 = \left\|\sum_{k=1}^n A_k^2\right\| \quad\text{and}\quad \sigma_*^2 = \max_{v:\|v\|=1}\sum_{k=1}^n\left(v^TA_kv\right)^2.$$
We have $\sigma_* \leq \sigma$.

Proof. Using the Cauchy–Schwarz inequality,
$$\sigma_*^2 = \max_{v:\|v\|=1}\sum_{k=1}^n\left(v^T[A_kv]\right)^2 \leq \max_{v:\|v\|=1}\sum_{k=1}^n\left(\|v\|\,\|A_kv\|\right)^2 = \max_{v:\|v\|=1}\sum_{k=1}^n\|A_kv\|^2 = \max_{v:\|v\|=1}\sum_{k=1}^n v^TA_k^2v = \left\|\sum_{k=1}^n A_k^2\right\| = \sigma^2. \qquad \Box$$

4.5 Optimality of matrix concentration results for gaussian series

The following simple calculation is suggestive that the parameter $\sigma$ in Theorem 4.14 is indeed the correct parameter to understand $\mathbb{E}\left\|\sum_{k=1}^n g_kA_k\right\|$:
$$\mathbb{E}\left\|\sum_{k=1}^n g_kA_k\right\|^2 = \mathbb{E}\left\|\left(\sum_{k=1}^n g_kA_k\right)^2\right\| = \mathbb{E}\max_{v:\|v\|=1} v^T\left(\sum_{k=1}^n g_kA_k\right)^2 v \geq \max_{v:\|v\|=1}\mathbb{E}\,v^T\left(\sum_{k=1}^n g_kA_k\right)^2 v = \max_{v:\|v\|=1}v^T\left(\sum_{k=1}^n A_k^2\right)v = \sigma^2. \qquad (38)$$
But a natural question is whether the logarithmic term is needed. Motivated by this question we explore a couple of examples.

Example 4.17 We can write a $d\times d$ Wigner matrix $W$ as a gaussian series by taking, for $i\leq j$, $A_{ij}$ defined as $A_{ij} = e_ie_j^T + e_je_i^T$ if $i\neq j$, and $A_{ii} = \sqrt{2}\,e_ie_i^T$. It is not difficult to see that, in this case,
$$\sum_{i\leq j}A_{ij}^2 = (d+1)I_{d\times d},$$
meaning that $\sigma = \sqrt{d+1}$. This means that Theorem 4.14 gives us $\mathbb{E}\|W\| \lesssim \sqrt{d\log d}$; however, we know that $\mathbb{E}\|W\| \asymp \sqrt{d}$, meaning that the bound given by NCK (Theorem 4.14) is, in this case, suboptimal by a logarithmic factor.¹⁸

The next example will show that the logarithmic factor is in fact needed in some examples.

Example 4.18 Consider $A_k = e_ke_k^T\in\mathbb{R}^{d\times d}$ for $k=1,\ldots,d$. The matrix $\sum_{k=1}^d g_kA_k$ corresponds to a diagonal matrix with independent standard gaussian random variables as diagonal entries, so its spectral norm is given by $\max_k|g_k|$. It is known that $\max_{1\leq k\leq d}|g_k| \asymp \sqrt{\log d}$. On the other hand, a direct calculation shows that $\sigma = 1$. This shows that the logarithmic factor cannot, in general, be removed.

This motivates the question of trying to understand when the extra dimensional factor is needed. For both these examples, the resulting matrix $X = \sum_{k=1}^n g_kA_k$ has independent entries (except for the fact that it is symmetric).
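The diagonal example can be checked numerically. The following sketch (not part of the notes) estimates $\mathbb{E}\max_k|g_k|$ by Monte Carlo for two values of $d$, illustrating that the expected spectral norm grows roughly like $\sqrt{2\log d}$ even though $\sigma = 1$ stays fixed:

```python
import random

# In Example 4.18 the matrix sum_k g_k e_k e_k^T is diagonal, so its spectral
# norm equals max_k |g_k|, while sigma = 1 for every d.
random.seed(1)

def expected_norm(d, trials=500):
    """Monte Carlo estimate of E max_{k<=d} |g_k| for i.i.d. standard gaussians."""
    total = 0.0
    for _ in range(trials):
        total += max(abs(random.gauss(0.0, 1.0)) for _ in range(d))
    return total / trials

small, large = expected_norm(10), expected_norm(1000)
print(small, large)
assert large > small   # the norm grows with d, roughly like sqrt(2 log d)
```

For $d = 1000$ the estimate comes out near $\sqrt{2\log 1000}\approx 3.7$, confirming that no bound of the form $C\sigma$ (independent of $d$) can hold.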
The case of independent entries [RS13, Seg00, Lat05, BvH15] is now somewhat understood:

Theorem 4.19 ([BvH15]) If $X$ is a $d\times d$ random symmetric matrix with gaussian independent entries (except for the symmetry constraint) whose entry $i,j$ has variance $b_{ij}^2$, then
$$\mathbb{E}\|X\| \lesssim \sqrt{\max_{1\leq i\leq d}\sum_{j=1}^d b_{ij}^2} + \max_{ij}|b_{ij}|\sqrt{\log d}.$$

Remark 4.20 The matrix $X$ in the theorem above can be written in terms of a Gaussian series by taking $A_{ij} = b_{ij}\left(e_ie_j^T + e_je_i^T\right)$ for $i<j$ and $A_{ii} = b_{ii}e_ie_i^T$. One can then compute $\sigma$ and $\sigma_*$:
$$\sigma^2 = \max_{1\leq i\leq d}\sum_{j=1}^d b_{ij}^2 \quad\text{and}\quad \sigma_*^2 \asymp \max_{ij} b_{ij}^2.$$
This means that, when the random matrix in NCK (Theorem 4.14) has independent entries (modulo symmetry), then
$$\mathbb{E}\|X\| \lesssim \sigma + \sqrt{\log d}\,\sigma_*. \qquad (39)$$

Footnote 18: By $a\asymp b$ we mean $a\lesssim b$ and $a\gtrsim b$.

Theorem 4.19, together with a recent improvement of Theorem 4.14 by Tropp [Tro15c],¹⁹ motivates the bold possibility of (39) holding in more generality.

Conjecture 4.21 Let $A_1,\ldots,A_n\in\mathbb{R}^{d\times d}$ be symmetric matrices and $g_1,\ldots,g_n\sim\mathcal{N}(0,1)$ i.i.d. Then,
$$\mathbb{E}\left\|\sum_{k=1}^n g_kA_k\right\| \lesssim \sigma + (\log d)^{\frac{1}{2}}\sigma_*.$$

While it may very well be that Conjecture 4.21 is false, no counterexample is known to date.

Open Problem 4.1 (Improvement on the Non-Commutative Khintchine Inequality) Prove or disprove Conjecture 4.21. I would also be pretty excited to see interesting examples that satisfy the bound in Conjecture 4.21 while such a bound would not trivially follow from Theorems 4.14 or 4.19.

Footnote 19: We briefly discuss this improvement in Remark 4.32.

4.5.1 An interesting observation regarding random matrices with independent entries

For the independent entries setting, Theorem 4.19 is tight (up to constants) for a wide range of variance profiles $\left\{b_{ij}^2\right\}_{i\leq j}$ — the details are available as Corollary 3.15 in [BvH15]; the basic idea is that if the largest variance is comparable to the variance of a sufficient number of entries, then the bound in Theorem 4.19 is tight up to constants. However, the situation is not as well understood when the variance profiles $\left\{b_{ij}^2\right\}_{i\leq j}$ are arbitrary. Since the spectral norm of a matrix is always at least the $\ell_2$ norm of a row, the following lower bound holds (for $X$ a symmetric random matrix with independent gaussian entries):
$$\mathbb{E}\|X\| \geq \mathbb{E}\max_k\|Xe_k\|_2.$$
Observations in the papers of Latała [Lat05] and Riemer and Schütt [RS13], together with the results in [BvH15], motivate the conjecture that this lower bound is always tight (up to constants).

Open Problem 4.2 (Latała–Riemer–Schütt) Given $X$ a symmetric random matrix with independent gaussian entries, is the following true?
$$\mathbb{E}\|X\| \lesssim \mathbb{E}\max_k\|Xe_k\|_2.$$

The results in [BvH15] answer this in the positive for a large range of variance profiles, but not in full generality. Recently, van Handel [vH15] proved this conjecture in the positive with an extra factor of $\sqrt{\log\log d}$; more precisely,
$$\mathbb{E}\|X\| \lesssim \sqrt{\log\log d}\;\mathbb{E}\max_k\|Xe_k\|_2,$$
where $d$ is the number of rows (and columns) of $X$.

4.6 A matrix concentration inequality for Rademacher series

In what follows, we closely follow [Tro15a] and present an elementary proof of a few useful matrix concentration inequalities. We start with a Master Theorem of sorts for Rademacher series (the Rademacher analogue of Theorem 4.14).

Theorem 4.22 Let $H_1,\ldots,H_n\in\mathbb{R}^{d\times d}$ be symmetric matrices and $\varepsilon_1,\ldots,\varepsilon_n$ i.i.d. Rademacher random variables (meaning $+1$ with probability $1/2$ and $-1$ with probability $1/2$). Then,
$$\mathbb{E}\left\|\sum_{k=1}^n \varepsilon_kH_k\right\| \leq \left(1 + 2\lceil\log(d)\rceil\right)^{\frac{1}{2}}\sigma, \quad\text{where } \sigma^2 = \left\|\sum_{k=1}^n H_k^2\right\|. \qquad (40)$$

Before proving this theorem, we first take a small detour into discrepancy theory, followed by derivations, using this theorem, of a couple of useful matrix concentration inequalities.

4.6.1 A small detour into discrepancy theory

The following conjecture appears in a nice blog post of Raghu Meka [Mek14].

Conjecture 4.23 (Matrix Six Deviations Suffice) There exists a universal constant $C$ such that, for any choice of $n$ symmetric matrices $H_1,\ldots,H_n\in\mathbb{R}^{n\times n}$ satisfying $\|H_k\|\leq 1$ (for all $k=1,\ldots,n$), there exist $\varepsilon_1,\ldots,\varepsilon_n\in\{\pm1\}$ such that
$$\left\|\sum_{k=1}^n\varepsilon_kH_k\right\| \leq C\sqrt{n}.$$

Open Problem 4.3 Prove or disprove Conjecture 4.23.

Note that, when the matrices $H_k$ are diagonal, this problem corresponds to Spencer's Six Standard Deviations Suffice theorem [Spe85].

Remark 4.24 Using Theorem 4.22, it is easy to show that if one picks the $\varepsilon_i$ as i.i.d. Rademacher random variables, then with positive probability (via the probabilistic method) the inequality will be satisfied with an extra $\sqrt{\log n}$ term. In fact,
$$\mathbb{E}\left\|\sum_{k=1}^n\varepsilon_kH_k\right\| \lesssim \sqrt{\log n}\sqrt{\left\|\sum_{k=1}^n H_k^2\right\|} \leq \sqrt{\log n}\sqrt{\sum_{k=1}^n\|H_k\|^2} \leq \sqrt{\log n}\,\sqrt{n}.$$

Remark 4.25 Remark 4.24 motivates asking whether Conjecture 4.23 can be strengthened to ask for $\varepsilon_1,\ldots,\varepsilon_n$ such that
$$\left\|\sum_{k=1}^n\varepsilon_kH_k\right\| \lesssim \left\|\sum_{k=1}^n H_k^2\right\|^{\frac{1}{2}}. \qquad (41)$$

4.6.2 Back to matrix concentration

Using Theorem 4.22, we will prove the following theorem.

Theorem 4.26 Let $T_1,\ldots,T_n\in\mathbb{R}^{d\times d}$ be random independent positive semidefinite matrices. Then,
$$\mathbb{E}\left\|\sum_{i=1}^n T_i\right\| \leq \left(\left\|\sum_{i=1}^n\mathbb{E}T_i\right\|^{\frac{1}{2}} + \sqrt{C(d)}\left(\mathbb{E}\max_i\|T_i\|\right)^{\frac{1}{2}}\right)^2, \quad\text{where } C(d) := 4 + 8\lceil\log d\rceil. \qquad (42)$$

A key step in the proof of Theorem 4.26 is an idea that is extremely useful in probability: the trick of symmetrization. For this reason we isolate it in a lemma.

Lemma 4.27 (Symmetrization) Let $T_1,\ldots,T_n$ be independent random matrices (note that they do not necessarily need to be positive semidefinite for the sake of this lemma) and $\varepsilon_1,\ldots,\varepsilon_n$ i.i.d. Rademacher random variables (independent also from the matrices). Then,
$$\mathbb{E}\left\|\sum_{i=1}^n T_i\right\| \leq \left\|\sum_{i=1}^n\mathbb{E}T_i\right\| + 2\,\mathbb{E}\left\|\sum_{i=1}^n\varepsilon_iT_i\right\|.$$

Proof. The triangle inequality gives
$$\mathbb{E}\left\|\sum_{i=1}^n T_i\right\| \leq \left\|\sum_{i=1}^n\mathbb{E}T_i\right\| + \mathbb{E}\left\|\sum_{i=1}^n\left(T_i - \mathbb{E}T_i\right)\right\|.$$
Let us now introduce, for each $i$, a random matrix $T_i'$ identically distributed to $T_i$ and independent (all $2n$ matrices are independent). Then
$$\mathbb{E}\left\|\sum_{i=1}^n\left(T_i-\mathbb{E}T_i\right)\right\| = \mathbb{E}_T\left\|\sum_{i=1}^n\left(T_i - \mathbb{E}T_i - \mathbb{E}_{T'}\left[T_i' - \mathbb{E}T_i'\right]\right)\right\| = \mathbb{E}_T\left\|\mathbb{E}_{T'}\sum_{i=1}^n\left(T_i - T_i'\right)\right\| \leq \mathbb{E}\left\|\sum_{i=1}^n\left(T_i-T_i'\right)\right\|,$$
where we use the notation $\mathbb{E}_a$ to mean that the expectation is taken with respect to the variable $a$, and the last step follows from Jensen's inequality with respect to $\mathbb{E}_{T'}$. Since $T_i - T_i'$ is a symmetric random variable, it is identically distributed to $\varepsilon_i\left(T_i - T_i'\right)$, which gives
$$\mathbb{E}\left\|\sum_{i=1}^n\left(T_i-T_i'\right)\right\| = \mathbb{E}\left\|\sum_{i=1}^n\varepsilon_i\left(T_i-T_i'\right)\right\| \leq \mathbb{E}\left\|\sum_{i=1}^n\varepsilon_iT_i\right\| + \mathbb{E}\left\|\sum_{i=1}^n\varepsilon_iT_i'\right\| = 2\,\mathbb{E}\left\|\sum_{i=1}^n\varepsilon_iT_i\right\|,$$
concluding the proof. $\Box$

Proof (of Theorem 4.26). Using Lemma 4.27 and Theorem 4.22 we get
$$\mathbb{E}\left\|\sum_{i=1}^n T_i\right\| \leq \left\|\sum_{i=1}^n\mathbb{E}T_i\right\| + \sqrt{C(d)}\;\mathbb{E}\left\|\sum_{i=1}^n T_i^2\right\|^{\frac{1}{2}}.$$
The trick now is to make a term like the one on the LHS appear on the RHS. For that we start by noting (see Fact 2.3 in [Tro15a] for an elementary proof) that, since $T_i\succeq 0$,
$$\left\|\sum_{i=1}^n T_i^2\right\| \leq \max_i\|T_i\|\left\|\sum_{i=1}^n T_i\right\|.$$
This means that
$$\mathbb{E}\left\|\sum_{i=1}^n T_i\right\| \leq \left\|\sum_{i=1}^n\mathbb{E}T_i\right\| + \sqrt{C(d)}\;\mathbb{E}\left[\left(\max_i\|T_i\|\right)^{\frac{1}{2}}\left\|\sum_{i=1}^n T_i\right\|^{\frac{1}{2}}\right].$$
Further applying the Cauchy–Schwarz inequality for $\mathbb{E}$ gives
$$\mathbb{E}\left\|\sum_{i=1}^n T_i\right\| \leq \left\|\sum_{i=1}^n\mathbb{E}T_i\right\| + \sqrt{C(d)}\left(\mathbb{E}\max_i\|T_i\|\right)^{\frac{1}{2}}\left(\mathbb{E}\left\|\sum_{i=1}^n T_i\right\|\right)^{\frac{1}{2}}.$$
Now that the term $\mathbb{E}\left\|\sum_{i=1}^n T_i\right\|$ appears on the RHS, the proof can be finished with a simple application of the quadratic formula (see Section 6.1 in [Tro15a] for details). $\Box$

We now show an inequality for general symmetric matrices.

Theorem 4.28 Let $Y_1,\ldots,Y_n\in\mathbb{R}^{d\times d}$ be random independent symmetric matrices with $\mathbb{E}Y_i = 0$ (centering is what the symmetrization step below uses). Then,
$$\mathbb{E}\left\|\sum_{i=1}^n Y_i\right\| \leq \sqrt{C(d)}\,\sigma + C(d)\,L, \quad\text{where } \sigma^2 = \left\|\sum_{i=1}^n\mathbb{E}Y_i^2\right\| \text{ and } L^2 = \mathbb{E}\max_i\|Y_i\|^2, \qquad (43)$$
and, as in (42), $C(d) := 4 + 8\lceil\log d\rceil$.

Proof. Using symmetrization (Lemma 4.27) and Theorem 4.22, we get
$$\mathbb{E}\left\|\sum_{i=1}^n Y_i\right\| \leq 2\,\mathbb{E}_Y\left[\mathbb{E}_\varepsilon\left\|\sum_{i=1}^n\varepsilon_iY_i\right\|\right] \leq \sqrt{C(d)}\;\mathbb{E}\left\|\sum_{i=1}^n Y_i^2\right\|^{\frac{1}{2}}.$$
Jensen's inequality gives
$$\mathbb{E}\left\|\sum_{i=1}^n Y_i^2\right\|^{\frac{1}{2}} \leq \left(\mathbb{E}\left\|\sum_{i=1}^n Y_i^2\right\|\right)^{\frac{1}{2}},$$
and the proof can be concluded by noting that $Y_i^2\succeq 0$ and using Theorem 4.26. $\Box$

Remark 4.29 (The rectangular case) One can extend Theorem 4.28 to general rectangular matrices $S_1,\ldots,S_n\in\mathbb{R}^{d_1\times d_2}$ by setting
$$Y_i = \begin{bmatrix} 0 & S_i \\ S_i^T & 0\end{bmatrix},$$
and noting that
$$Y_i^2 = \begin{bmatrix} S_iS_i^T & 0 \\ 0 & S_i^TS_i\end{bmatrix}, \qquad \left\|Y_i^2\right\| = \max\left\{\left\|S_iS_i^T\right\|, \left\|S_i^TS_i\right\|\right\}.$$
We defer the details to [Tro15a].

In order to prove Theorem 4.22, we will use an AM–GM-like inequality for matrices for which, unlike the one in Open Problem 0.2 in [Ban15d], an elementary proof is known.

Lemma 4.30 Given symmetric matrices $H, W, Y\in\mathbb{R}^{d\times d}$ and non-negative integers $r, q$ satisfying $q\leq 2r$,
$$\mathrm{Tr}\left[HW^qHY^{2r-q}\right] + \mathrm{Tr}\left[HW^{2r-q}HY^q\right] \leq \mathrm{Tr}\left[H^2\left(W^{2r}+Y^{2r}\right)\right],$$
and summing over $q$ gives
$$\sum_{q=0}^{2r}\mathrm{Tr}\left[HW^qHY^{2r-q}\right] \leq \left(r + \frac{1}{2}\right)\mathrm{Tr}\left[H^2\left(W^{2r}+Y^{2r}\right)\right].$$

We refer to Fact 2.4 in [Tro15a] for an elementary proof, but note that it is a matrix analogue of the inequality
$$\mu^\theta\lambda^{1-\theta} + \mu^{1-\theta}\lambda^\theta \leq \mu + \lambda, \quad\text{for } \mu,\lambda\geq 0 \text{ and } 0\leq\theta\leq 1,$$
which can be easily shown by adding the two AM–GM inequalities $\mu^\theta\lambda^{1-\theta}\leq\theta\mu + (1-\theta)\lambda$ and $\mu^{1-\theta}\lambda^\theta\leq(1-\theta)\mu+\theta\lambda$.

Proof (of Theorem 4.22). Let $X = \sum_{k=1}^n\varepsilon_kH_k$. For any positive integer $p$,
$$\mathbb{E}\|X\| \leq \left(\mathbb{E}\|X\|^{2p}\right)^{\frac{1}{2p}} = \left(\mathbb{E}\left\|X^{2p}\right\|\right)^{\frac{1}{2p}} \leq \left(\mathbb{E}\,\mathrm{Tr}\,X^{2p}\right)^{\frac{1}{2p}},$$
where the first inequality follows from Jensen's inequality and the last from $X^{2p}\succeq 0$ together with the observation that the trace of a positive semidefinite matrix is at least its spectral norm. In the sequel, we upper bound $\mathbb{E}\,\mathrm{Tr}\,X^{2p}$. We introduce $X_{+i}$ and $X_{-i}$ as $X$ conditioned on $\varepsilon_i$ being, respectively, $+1$ or $-1$. More precisely,
$$X_{+i} = H_i + \sum_{j\neq i}\varepsilon_jH_j \quad\text{and}\quad X_{-i} = -H_i + \sum_{j\neq i}\varepsilon_jH_j.$$
Then, we have
$$\mathbb{E}\,\mathrm{Tr}\,X^{2p} = \mathbb{E}\,\mathrm{Tr}\left[XX^{2p-1}\right] = \sum_{i=1}^n\mathbb{E}\,\mathrm{Tr}\left[\varepsilon_iH_iX^{2p-1}\right].$$
Note that
$$\mathbb{E}_{\varepsilon_i}\,\mathrm{Tr}\left[\varepsilon_iH_iX^{2p-1}\right] = \frac{1}{2}\mathrm{Tr}\left[H_i\left(X_{+i}^{2p-1} - X_{-i}^{2p-1}\right)\right],$$
which means that
$$\mathbb{E}\,\mathrm{Tr}\,X^{2p} = \sum_{i=1}^n\mathbb{E}\,\frac{1}{2}\mathrm{Tr}\left[H_i\left(X_{+i}^{2p-1} - X_{-i}^{2p-1}\right)\right],$$
where the expectation can be taken over $\varepsilon_j$ for $j\neq i$.

Now we rewrite $X_{+i}^{2p-1} - X_{-i}^{2p-1}$ as a telescopic sum:
$$X_{+i}^{2p-1} - X_{-i}^{2p-1} = \sum_{q=0}^{2p-2}X_{+i}^q\left(X_{+i} - X_{-i}\right)X_{-i}^{2p-2-q},$$
which gives
$$\mathbb{E}\,\mathrm{Tr}\,X^{2p} = \sum_{i=1}^n\sum_{q=0}^{2p-2}\mathbb{E}\,\frac{1}{2}\mathrm{Tr}\left[H_iX_{+i}^q\left(X_{+i}-X_{-i}\right)X_{-i}^{2p-2-q}\right].$$
Since $X_{+i} - X_{-i} = 2H_i$, we get
$$\mathbb{E}\,\mathrm{Tr}\,X^{2p} = \sum_{i=1}^n\sum_{q=0}^{2p-2}\mathbb{E}\,\mathrm{Tr}\left[H_iX_{+i}^qH_iX_{-i}^{2p-2-q}\right]. \qquad (44)$$
We now make use of Lemma 4.30 to get²⁰
$$\mathbb{E}\,\mathrm{Tr}\,X^{2p} \leq \sum_{i=1}^n\frac{2p-1}{2}\,\mathbb{E}\,\mathrm{Tr}\left[H_i^2\left(X_{+i}^{2p-2} + X_{-i}^{2p-2}\right)\right]. \qquad (45)$$
Hence,
$$\sum_{i=1}^n\frac{2p-1}{2}\,\mathbb{E}\,\mathrm{Tr}\left[H_i^2\left(X_{+i}^{2p-2}+X_{-i}^{2p-2}\right)\right] = (2p-1)\sum_{i=1}^n\mathbb{E}\,\mathrm{Tr}\left[H_i^2\,\frac{X_{+i}^{2p-2}+X_{-i}^{2p-2}}{2}\right] = (2p-1)\sum_{i=1}^n\mathbb{E}\,\mathrm{Tr}\left[H_i^2\,\mathbb{E}_{\varepsilon_i}X^{2p-2}\right] = (2p-1)\,\mathbb{E}\,\mathrm{Tr}\left[\left(\sum_{i=1}^n H_i^2\right)X^{2p-2}\right].$$
Since $X^{2p-2}\succeq 0$, we have
$$\mathrm{Tr}\left[\left(\sum_{i=1}^n H_i^2\right)X^{2p-2}\right] \leq \left\|\sum_{i=1}^n H_i^2\right\|\,\mathrm{Tr}\,X^{2p-2} = \sigma^2\,\mathrm{Tr}\,X^{2p-2}, \qquad (46)$$
which gives
$$\mathbb{E}\,\mathrm{Tr}\,X^{2p} \leq \sigma^2(2p-1)\,\mathbb{E}\,\mathrm{Tr}\,X^{2p-2}. \qquad (47)$$
Applying this inequality recursively, we get
$$\mathbb{E}\,\mathrm{Tr}\,X^{2p} \leq \left[(2p-1)(2p-3)\cdots 3\cdot 1\right]\sigma^{2p}\,\mathbb{E}\,\mathrm{Tr}\,X^0 = (2p-1)!!\,\sigma^{2p}\,d.$$
Hence,
$$\mathbb{E}\|X\| \leq \left(\mathbb{E}\,\mathrm{Tr}\,X^{2p}\right)^{\frac{1}{2p}} \leq \left[(2p-1)!!\right]^{\frac{1}{2p}}\sigma\,d^{\frac{1}{2p}}.$$
Taking $p = \lceil\log d\rceil$ and using the fact that $(2p-1)!! \leq \left(\frac{2p+1}{e}\right)^p$ (see [Tro15a] for an elementary proof, consisting essentially of taking logarithms and comparing the sum with an integral), we get
$$\mathbb{E}\|X\| \leq \left(\frac{2\lceil\log d\rceil + 1}{e}\right)^{\frac{1}{2}}\sigma\,d^{\frac{1}{2\lceil\log d\rceil}} \leq \left(2\lceil\log d\rceil + 1\right)^{\frac{1}{2}}\sigma. \qquad \Box$$

Footnote 20: See Remark 4.32 regarding the suboptimality of this step.

Remark 4.31 A similar argument can be used to prove Theorem 4.14 (the gaussian series case) based on gaussian integration by parts; see Section 7.2 in [Tro15c].

Remark 4.32 Note that, up until the step from (44) to (45), all steps are equalities, suggesting that this step may be the lossy one responsible for the suboptimal dimensional factor in several cases (although (46) can also potentially be lossy, it is not uncommon that $\sum_i H_i^2$ is a multiple of the identity matrix, which would render that step an equality as well). In fact, Joel Tropp [Tro15c] recently proved an improvement over the NCK inequality that, essentially, consists in replacing inequality (45) with a tighter argument. In a nutshell, the idea is that, if the $H_i$'s are non-commutative, most summands in (44) are actually expected to be smaller than the ones corresponding to $q=0$ and $q=2p-2$, which are the ones that appear in (45).
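The estimate $(2p-1)!! \leq \left(\frac{2p+1}{e}\right)^p$ used in the last step of the proof can be checked numerically; the following sketch (not part of the notes) verifies it for small $p$:

```python
import math

def double_factorial_odd(p):
    """(2p-1)!! = 1 * 3 * 5 * ... * (2p-1)."""
    result = 1
    for k in range(1, 2 * p, 2):
        result *= k
    return result

# Verify (2p-1)!! <= ((2p+1)/e)^p for p = 1, ..., 29.
for p in range(1, 30):
    assert double_factorial_odd(p) <= ((2 * p + 1) / math.e) ** p

print("inequality verified for p = 1, ..., 29")
```

The two sides are fairly close (e.g. $15 \leq (7/e)^3 \approx 17.1$ for $p=3$), which is why the final constant $(2\lceil\log d\rceil+1)^{1/2}$ is as clean as it is.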
4.7 Other Open Problems

4.7.1 Oblivious Sparse Norm-Approximating Projections

There is an interesting random matrix problem related to Oblivious Sparse Norm-Approximating Projections (OSNAPs) [NN], a form of dimension reduction useful for fast linear algebra. In a nutshell, the idea is to find random matrices $\Pi$ that achieve dimension reduction, meaning $\Pi\in\mathbb{R}^{m\times n}$ with $m\ll n$, and that preserve the norm of every point in a certain subspace [NN]; moreover, for the sake of computational efficiency, these matrices should be sparse (to allow for faster matrix–vector multiplication). In some sense, this is a generalization of the ideas of the Johnson–Lindenstrauss Lemma and Gordon's Escape through the Mesh Theorem that we will discuss in the next section.

Open Problem 4.4 (OSNAP [NN]) Let $s\leq d\leq m\leq n$.

1. Let $\Pi\in\mathbb{R}^{m\times n}$ be a random matrix with i.i.d. entries
$$\Pi_{ri} = \frac{\delta_{ri}\sigma_{ri}}{\sqrt{s}},$$
where $\sigma_{ri}$ is a Rademacher random variable and
$$\delta_{ri} = \begin{cases}1 & \text{with probability } \frac{s}{m} \\ 0 & \text{with probability } 1-\frac{s}{m}.\end{cases}$$
Prove or disprove: there exist positive universal constants $c_1$ and $c_2$ such that, for any $U\in\mathbb{R}^{n\times d}$ with $U^TU = I_{d\times d}$,
$$\mathrm{Prob}\left\{\left\|(\Pi U)^T(\Pi U) - I\right\| \geq \varepsilon\right\} < \delta,$$
for $m\geq c_1\frac{d+\log\left(\frac{1}{\delta}\right)}{\varepsilon^2}$ and $s\geq c_2\frac{\log\left(\frac{d}{\delta}\right)}{\varepsilon^2}$.

2. Same setting as in (1), but conditioning on
$$\sum_{r=1}^m\delta_{ri} = s, \quad\text{for all } i,$$
meaning that each column of $\Pi$ has exactly $s$ non-zero entries, rather than $s$ on average. The conjecture is then slightly different. Prove or disprove: there exist positive universal constants $c_1$ and $c_2$ such that, for any $U\in\mathbb{R}^{n\times d}$ with $U^TU = I_{d\times d}$,
$$\mathrm{Prob}\left\{\left\|(\Pi U)^T(\Pi U) - I\right\| \geq \varepsilon\right\} < \delta,$$
for $m\geq c_1\frac{d+\log\left(\frac{1}{\delta}\right)}{\varepsilon^2}$ and $s\geq c_2\frac{\log\left(\frac{d}{\delta}\right)}{\varepsilon}$.

3. The conjecture in (1), but for the specific choice of
$$U = \begin{bmatrix}I_{d\times d} \\ 0_{(n-d)\times d}\end{bmatrix}.$$
In this case, the object in question is a sum of rank-1 independent matrices. More precisely, $z_1,\ldots,z_m\in\mathbb{R}^d$ (corresponding to the first $d$ coordinates of each of the $m$ rows of $\Pi$) are i.i.d. random vectors with i.i.d. entries
$$(z_k)_j = \begin{cases}-\frac{1}{\sqrt{s}} & \text{with probability } \frac{s}{2m} \\ 0 & \text{with probability } 1 - \frac{s}{m} \\ \frac{1}{\sqrt{s}} & \text{with probability } \frac{s}{2m}.\end{cases}$$
Note that $\mathbb{E}\,z_kz_k^T = \frac{1}{m}I_{d\times d}$. The conjecture is then that there exist positive universal constants $c_1$ and $c_2$ such that
$$\mathrm{Prob}\left\{\left\|\sum_{k=1}^m\left(z_kz_k^T - \mathbb{E}\,z_kz_k^T\right)\right\| \geq \varepsilon\right\} < \delta,$$
for $m\geq c_1\frac{d+\log\left(\frac{1}{\delta}\right)}{\varepsilon^2}$ and $s\geq c_2\frac{\log\left(\frac{d}{\delta}\right)}{\varepsilon^2}$.

I think this is an interesting question even for fixed $\delta$, say $\delta = 0.1$, or even simply to understand the value of
$$\mathbb{E}\left\|\sum_{k=1}^m\left(z_kz_k^T - \mathbb{E}\,z_kz_k^T\right)\right\|.$$

4.7.2 k-lifts of graphs

Given a graph $G$ on $n$ nodes with max-degree $\Delta$, and an integer $k\geq 2$, a random $k$-lift $G^{\otimes k}$ of $G$ is a graph on $kn$ nodes obtained by replacing each edge of $G$ by a random $k\times k$ bipartite matching. More precisely, the adjacency matrix $A^{\otimes k}$ of $G^{\otimes k}$ is an $nk\times nk$ matrix with $k\times k$ blocks given by
$$A^{\otimes k}_{ij} = A_{ij}\Pi_{ij},$$
where $\Pi_{ij}$ is uniformly randomly drawn from the set of permutation matrices on $k$ elements, and all the edges are independent, except for the fact that $\Pi_{ji} = \Pi_{ij}^T$. In other words,
$$A^{\otimes k} = \sum_{i<j}A_{ij}\left(e_ie_j^T\otimes\Pi_{ij} + e_je_i^T\otimes\Pi_{ij}^T\right),$$
where $\otimes$ corresponds to the Kronecker product. Note that
$$\mathbb{E}A^{\otimes k} = A\otimes\left(\frac{1}{k}J\right),$$
where $J = 11^T$ is the all-ones matrix.

Open Problem 4.5 (Random k-lifts of graphs) Give a tight upper bound to
$$\mathbb{E}\left\|A^{\otimes k} - \mathbb{E}A^{\otimes k}\right\|.$$
Oliveira [Oli10] gives a bound that is essentially of the form $\sqrt{\Delta\log(nk)}$, while the results in [ABG12] suggest that one may expect more concentration for large $k$. It is worth noting that the case $k=2$ can essentially be reduced to a problem where the entries of the random matrix are independent, and the results in [BvH15] can be applied to, in some cases, remove the logarithmic factor.

4.8 Another open problem

Feige [Fei05] posed the following remarkable conjecture (see also [Sam66, Sam69, Sam68]).

Conjecture 4.33 Given $n$ independent random variables $X_1,\ldots,X_n$ such that, for all $i$, $X_i\geq 0$ and $\mathbb{E}X_i = 1$, we have
$$\mathrm{Prob}\left\{\sum_{i=1}^n X_i \geq n+1\right\} \leq 1 - e^{-1}.$$
Note that if the $X_i$ are i.i.d. with $X_i = n+1$ with probability $1/(n+1)$ and $X_i = 0$ otherwise, then
$$\mathrm{Prob}\left\{\sum_{i=1}^n X_i \geq n+1\right\} = 1 - \left(\frac{n}{n+1}\right)^n \approx 1 - e^{-1}.$$

Open Problem 4.6 Prove or disprove Conjecture 4.33.²¹

Footnote 21: We thank Francisco Unda and Philippe Rigollet for suggesting this problem.

MIT OpenCourseWare
18.S096 Topics in Mathematics of Data Science, Fall 2015
For information about citing these materials or our Terms of Use, visit:
https://www.cnblogs.com/zhgmaths/p/16952753.html
贵哥讲数学 (Guige Teaches Math) — an independent teacher, happy to discuss and share mathematics!

4.2.2 The Sum of the First n Terms of an Arithmetic Sequence

Welcome to download study materials from Xuekewang: [Fundamentals Series] synchronized lecture notes and tiered exercises for Senior-2 mathematics (People's Education Press A, 2019 edition). Learn math with Guige — so easy! Synchronized consolidation for Elective Compulsory Volume 2; difficulty: 2 stars.

Fundamentals

Sum of the first n terms

Let the arithmetic sequence $\{a_n\}$ have first term $a_1$ and common difference $d$. Then the sum of its first $n$ terms is
$$S_n=\dfrac{\left(a_1+a_n\right) n}{2}, \qquad S_n=n a_1+\dfrac{n(n-1)}{2} d.$$

Explanation

(1) Proof. Write
$$S_n=a_1+a_2+\cdots+a_{n-1}+a_n \qquad (1)$$
$$S_n=a_n+a_{n-1}+\cdots+a_2+a_1 \qquad (2)$$
Adding the two equations gives $2S_n=(a_1+a_n)+(a_2+a_{n-1})+\cdots+(a_{n-1}+a_2)+(a_n+a_1)$. By the property of arithmetic sequences that if $m+n=s+t$ then $a_m+a_n=a_s+a_t$, we get
$$2S_n=(a_1+a_n)+(a_1+a_n)+\cdots+(a_1+a_n)=n(a_1+a_n),$$
hence $S_n=\dfrac{\left(a_1+a_n\right) n}{2}$. Moreover, since $a_n=a_1+(n-1)d$,
$$S_n=\dfrac{\left[a_1+a_1+(n-1) d\right] n}{2}=\dfrac{2 n a_1+n(n-1) d}{2}=n a_1+\dfrac{n(n-1)}{2} d.$$
The method above is called reverse-order addition (倒序相加法).

(2) The sum $S_n=n a_1+\dfrac{n(n-1)}{2} d$ of an arithmetic sequence can be rewritten as
$$S_n=\dfrac{d}{2} n^2+\left(a_1-\dfrac{d}{2}\right) n,$$
so when $d\neq 0$, $S_n$ can be viewed as a quadratic function of $n$.

【Example】 In the arithmetic sequence $\{a_n\}$, $a_n=2n-1$. Then the sum of the first $n$ terms is $S_n=$ $\underline{\quad\quad}$.

Solution $S_n=\dfrac{\left(a_1+a_n\right) n}{2}=\dfrac{(1+2 n-1) n}{2}=n^2$; alternatively, since $a_1=1$ and $d=2$, $S_n=n a_1+\dfrac{n(n-1)}{2} d=n^2$.

Ways to prove that a sequence is arithmetic

① By definition: $a_{n+1}-a_n=d$ ($d$ a constant, $n\in\mathbb{N}^*$) $\Longrightarrow\{a_n\}$ is arithmetic;
② By the middle term: $2a_{n+1}=a_n+a_{n+2}$ ($n\in\mathbb{N}^*$) $\Longrightarrow\{a_n\}$ is arithmetic;
③ By the general-term formula: $a_n=kn+b$ ($k, b$ constants) $\Longrightarrow\{a_n\}$ is arithmetic;
④ By the sum formula: $S_n=An^2+Bn$ ($A, B$ constants) $\Longrightarrow\{a_n\}$ is arithmetic.
Note: methods ③ and ④ may not be used directly in a written solution.

Basic properties

If $\{a_n\}$ is an arithmetic sequence with first term $a_1$, common difference $d$, and partial sums $S_n$, it has the following properties:

(1) $S_n$, $S_{2n}-S_n$, $S_{3n}-S_{2n}$, $\ldots$ ($n\in\mathbb{N}^*$) form an arithmetic sequence.
Proof.
$$S_{2n}-S_n=a_{n+1}+a_{n+2}+\cdots+a_{2n-1}+a_{2n}=\left(a_1+nd\right)+\left(a_2+nd\right)+\cdots+\left(a_{n-1}+nd\right)+\left(a_n+nd\right)=S_n+n^2 d,$$
i.e. $\left(S_{2n}-S_n\right)-S_n=n^2 d$. Similarly, $S_{3n}-S_{2n}=\left(S_{2n}-S_n\right)+n^2 d$, so $\left(S_{3n}-S_{2n}\right)-\left(S_{2n}-S_n\right)=n^2 d$, and $\left(S_{4n}-S_{3n}\right)-\left(S_{3n}-S_{2n}\right)=n^2 d$, and so on; the claim follows by induction.

【Example】 If $S_n$ is the sum of the first $n$ terms of an arithmetic sequence, then $S_3$, $S_6-S_3$, $S_9-S_6$ form an arithmetic sequence.

(2) $S_{2n-1}=(2n-1)a_n$.
Proof. $S_{2n-1}=\dfrac{(2n-1)\left(a_1+a_{2n-1}\right)}{2}=(2n-1)\cdot\dfrac{a_1+a_{2n-1}}{2}=(2n-1)a_n$.

【Example】 If $S_n$ is the sum of the first $n$ terms of an arithmetic sequence, then $S_7=7a_4$ and $S_{11}=11a_6$.
[Example] If $S_n$ is the partial sum of an arithmetic sequence, then $S_7=7a_4$ and $S_{11}=11a_6$.

Basic methods

[Type 1] Basic computations with the partial sum

[Classic Example 1] Let $S_n$ be the partial sum of the arithmetic sequence $\{a_n\}$. If $a_4+a_5=24$ and $S_6=48$, then $S_9=\underline{\quad\quad}$.
Solution: Let the common difference be $d$. From $a_4+a_5=24$ and $S_6=48$:
$\begin{cases}2a_1+7d=24\\ 6a_1+\dfrac{6\times5}{2}d=48\end{cases}$, giving $a_1=-2$, $d=4$, so $S_9=9a_1+\dfrac{9\times8}{2}d=-18+144=126$.
Tip: this is the basic-quantities method. $a_1$ and $d$ are the basic quantities of an arithmetic sequence; for terms use $a_n=a_1+(n-1)d$, and for sums use $S_n=na_1+\dfrac{n(n-1)}{2}d$.

[Classic Example 2] The sequence $\{a_n\}$ is arithmetic with $a_1=50$ and $d=-0.6$.
(1) How many initial terms of the sequence are non-negative?
(2) Find the maximum value of the partial sum $S_n$.
Solution: (1) From $a_1=50$, $d=-0.6$: $a_n=50-0.6(n-1)=-0.6n+50.6$.
Require $\begin{cases}a_m\ge0\\ a_{m+1}<0\end{cases}$, i.e. $\begin{cases}-0.6m+50.6\ge0\\ -0.6(m+1)+50.6<0\end{cases}$, which gives $\dfrac{250}{3}<m\le\dfrac{253}{3}$.
Since $m\in\mathbb{N}^*$, $m=84$: the first 84 terms are non-negative.
(2) Method 1: by (1), $a_{84}>0$ and $a_{85}<0$, so the maximum of $S_n$ is $S_{84}=50\times84+\dfrac{84\times83}{2}\times(-0.6)=2108.4$.
Method 2: $S_n=50n+\dfrac{n(n-1)}{2}\cdot(-0.6)=-0.3n^2+50.3n=-0.3\left(n-\dfrac{503}{6}\right)^2+\dfrac{503^2}{120}$.
By the properties of quadratics, $S_n$ is maximized at the integer nearest $\dfrac{503}{6}\approx83.8$, i.e. $n=84$, with $S_{84}=2108.4$.
Tip: for the extremum of $S_n$: if the sequence is decreasing, $S_n$ has a maximum, found by locating where the terms first turn negative; if the sequence is increasing, $S_n$ has a minimum, found by locating where the terms first turn positive. Method 2 writes out $S_n$ explicitly and uses the properties of quadratic functions.

[Classic Example 3] In the arithmetic sequence $\{a_n\}$, $a_1=2020$ and its partial sums satisfy $\dfrac{S_{12}}{12}-\dfrac{S_{10}}{10}=-2$. Then $S_{2022}=\underline{\quad\quad}$.
Solution: From $S_n=na_1+\dfrac{n(n-1)}{2}d$ we get $\dfrac{S_n}{n}=a_1+\dfrac{n-1}{2}d=\dfrac{d}{2}n+\left(a_1-\dfrac{d}{2}\right)$, so $\left\{\dfrac{S_n}{n}\right\}$ is itself an arithmetic sequence. Let its common difference be $d_1$.
Since $a_1=2020$, $\dfrac{S_1}{1}=2020$. From $\dfrac{S_{12}}{12}-\dfrac{S_{10}}{10}=2d_1=-2$ we get $d_1=-1$, so $\dfrac{S_{2022}}{2022}=2020+2021\times(-1)=-1$, i.e. $S_{2022}=-2022$.
Tip: for an arithmetic sequence with partial sums $S_n$, the sequence $\left\{\dfrac{S_n}{n}\right\}$ is also arithmetic.
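Classic Example 3 can be cross-checked by brute force. In the sketch below the underlying common difference $d=2d_1=-2$ is recovered from the example's $d_1=-1$ (an inference, since $d$ itself is not stated):

```python
# Sequence of Classic Example 3: a_1 = 2020; d = -2 inferred from
# d_1 = d/2 = -1 for the auxiliary sequence {S_n / n}.
a1, d = 2020, -2

def S(n):
    return n * a1 + n * (n - 1) * d // 2

# {S_n / n} is arithmetic: consecutive differences are all d/2 = -1.
diffs = [S(n + 1) / (n + 1) - S(n) / n for n in range(1, 100)]
assert all(abs(x + 1) < 1e-9 for x in diffs)

assert S(12) / 12 - S(10) / 10 == -2   # the given condition
assert S(2022) == -2022                # the answer
```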
[Consolidation exercises]
1. $\{a_n\}$ is an arithmetic sequence; if $a_3=1$ and $S_4=0$, then $a_6=$ ( )
A. 6  B. 7  C. 8  D. 9
2. (Multiple choice) $\{a_n\}$ is arithmetic with partial sums $S_n$, $a_1>0$ and $S_6=S_9$. Then ( )
A. $d>0$  B. $a_8=0$  C. $S_7$ or $S_8$ is the maximum of $S_n$  D. $S_5>S_6$
3. In the arithmetic sequence $\{a_n\}$, $a_3=12$ and $S_{12}S_{13}<0$. Then $S_n$ is maximal when $n=\underline{\quad\quad}$.

Answers
1. B. Let the common difference be $d$. From $a_3=1$, $S_4=0$: $a_1+2d=1$ and $4a_1+6d=0$, so $a_1=-3$, $d=2$, and $a_6=-3+2\times5=7$. Choose B.
2. BC. From $a_1>0$ and $S_6=S_9$: $6a_1+\dfrac{6\times5}{2}d=9a_1+\dfrac{9\times8}{2}d$, which simplifies to $a_1+7d=0$, so $a_8=0$ and $d<0$. Hence $S_7$ or $S_8$ is the maximum of $S_n$, and $S_5<S_6$ (so D fails). Choose BC.
3. 6. Let the common difference be $d$. From $a_3=12$: $a_1+2d=12$; from $S_{12}S_{13}<0$: $\left(12a_1+\dfrac{12\times11}{2}d\right)\left(13a_1+\dfrac{13\times12}{2}d\right)<0$, which (substituting $a_1=12-2d$) reduces to $(d+3)\left(d+\dfrac{24}{7}\right)<0$, so $-\dfrac{24}{7}<d<-3$, whence $\dfrac72<-\dfrac{12}{d}<4$.
The sequence is therefore decreasing, with $S_{12}>0$ and $S_{13}<0$. From $a_n=a_1+(n-1)d=12-2d+(n-1)d=12+(n-3)d\ge0$ we get $n\le3-\dfrac{12}{d}$; since $\dfrac{13}{2}<3-\dfrac{12}{d}<7$, this gives $n\le6$. So $S_n$ is maximal at $n=6$.

[Type 2] Properties of the partial sum

[Classic Example 1] Two arithmetic sequences $\{a_n\}$, $\{b_n\}$ have partial sums $S_n$, $T_n$ with $\dfrac{S_n}{T_n}=\dfrac{n+3}{n+1}$. Find $\dfrac{a_{10}}{b_{10}}$.
Solution: Since $\dfrac{S_{19}}{T_{19}}=\dfrac{19a_{10}}{19b_{10}}=\dfrac{a_{10}}{b_{10}}$, we get $\dfrac{a_{10}}{b_{10}}=\dfrac{S_{19}}{T_{19}}=\dfrac{19+3}{19+1}=\dfrac{11}{10}$.
Tip: an application of the property $S_{2n-1}=(2n-1)a_n$.

[Classic Example 2] An arithmetic sequence has $S_{10}=100$ and $S_{100}=10$. Find $S_{110}$.
Solution: Method 1. Let the common difference be $d$; then $S_n=na_1+\dfrac{n(n-1)}{2}d$.
From the data, $\begin{cases}10a_1+\dfrac{10\times9}{2}d=100\\[1mm] 100a_1+\dfrac{100\times99}{2}d=10\end{cases}$, giving $d=-\dfrac{11}{50}$, $a_1=\dfrac{1099}{100}$. Then
$S_{110}=110a_1+\dfrac{110\times109}{2}d=110\times\dfrac{1099}{100}+\dfrac{110\times109}{2}\times\left(-\dfrac{11}{50}\right)=110\times\dfrac{1099-109\times11}{100}=-110$.
So the sum of the first 110 terms is $-110$.

Method 2. Write $S_n=an^2+bn$. From $S_{10}=100$, $S_{100}=10$: $\begin{cases}10^2a+10b=100\\ 100^2a+100b=10\end{cases}$, giving $a=-\dfrac{11}{100}$, $b=\dfrac{111}{10}$, so $S_n=-\dfrac{11}{100}n^2+\dfrac{111}{10}n$ and $S_{110}=-\dfrac{11}{100}\times110^2+\dfrac{111}{10}\times110=-110$.

Method 3. The numbers $S_{10},\ S_{20}-S_{10},\ S_{30}-S_{20},\ \ldots,\ S_{100}-S_{90},\ S_{110}-S_{100}$ form an arithmetic sequence; let its common difference be $D$. The sum of its first 10 terms is $10S_{10}+\dfrac{10\times9}{2}D=S_{100}=10$, so $D=-22$. Then $S_{110}-S_{100}=S_{10}+(11-1)D=100+10\times(-22)=-120$, and $S_{110}=-120+S_{100}=-110$.

Method 4. $S_{100}-S_{10}=a_{11}+a_{12}+\cdots+a_{100}=\dfrac{90(a_{11}+a_{100})}{2}=\dfrac{90(a_1+a_{110})}{2}$ (since $11+100=1+110$). Also $S_{100}-S_{10}=10-100=-90$, so $a_1+a_{110}=-2$ and $S_{110}=\dfrac{110(a_1+a_{110})}{2}=-110$.

Tip: compare the strengths and weaknesses of the four methods, and master the basic properties of arithmetic sequences.
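Method 1's numbers can be verified exactly by brute force; a sketch using exact rational arithmetic:

```python
from fractions import Fraction

# a_1 and d as found in Method 1.
a1 = Fraction(1099, 100)
d = Fraction(-11, 50)

def S(n):
    """Sum of the first n terms, computed term by term (no closed form)."""
    return sum(a1 + k * d for k in range(n))

assert S(10) == 100    # the given data
assert S(100) == 10
assert S(110) == -110  # the answer of all four methods
```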
[Consolidation exercises]
1. The arithmetic sequence $\{a_n\}$ has partial sums $S_n$; if $a_2+a_8+a_{11}=60$, then $S_{13}=$ ( )
A. 130  B. 260  C. 390  D. 520
2. $\{a_n\}$ is arithmetic with partial sums $S_n$; if $2+a_5=a_6+a_3$, then $S_7=$ ( )
A. 2  B. 7  C. 14  D. 28
3. The arithmetic sequence $\{a_n\}$ has partial sums $S_n$ with $S_4=3$, $S_{n-4}=12$ $(n\ge5,\ n\in\mathbb{N}^*)$ and $S_n=17$. Then $n=$ ( )
A. 8  B. 11  C. 13  D. 17
4. The arithmetic sequences $\{a_n\}$, $\{b_n\}$ have partial sums $S_n$, $T_n$; if $\dfrac{S_n}{T_n}=\dfrac{2n}{3n+1}$, then $\dfrac{a_n}{b_n}=$ ( )
A. $\dfrac23$  B. $\dfrac{2n-1}{3n-1}$  C. $\dfrac{2n+1}{3n+1}$  D. $\dfrac{2n-1}{3n+4}$
5. $S_n$ is the partial sum of the arithmetic sequence $\{a_n\}$; if $\dfrac{S_3}{S_6}=\dfrac13$, then $\dfrac{S_6}{S_{12}}=$ ( )
A. $\dfrac{3}{10}$  B. $\dfrac13$  C. $\dfrac18$  D. $\dfrac19$

Answers
1. B. With common difference $d$: $a_2+a_8+a_{11}=(a_1+d)+(a_1+7d)+(a_1+10d)=3(a_1+6d)=3a_7=60$, so $a_7=20$ and $S_{13}=\dfrac{13(a_1+a_{13})}{2}=13a_7=13\times20=260$. Choose B.
2. C. From $2+a_5=a_6+a_3$: $a_4=a_6+a_3-a_5=2$, so $S_7=\dfrac{7(a_1+a_7)}{2}=7a_4=14$. Choose C.
3. D. $S_4=a_1+a_2+a_3+a_4=3$ … ①. Since $S_{n-4}=12$ and $S_n=17$: $a_{n-3}+a_{n-2}+a_{n-1}+a_n=17-12=5$ … ②. Adding ① and ②: $(a_1+a_n)+(a_2+a_{n-1})+(a_3+a_{n-2})+(a_4+a_{n-3})=8$, so $a_1+a_n=2$. Since $S_n=\dfrac{n(a_1+a_n)}{2}=17$, we get $n=17$. Choose D.
4. B. $\dfrac{a_n}{b_n}=\dfrac{2a_n}{2b_n}=\dfrac{a_1+a_{2n-1}}{b_1+b_{2n-1}}=\dfrac{\frac12(2n-1)(a_1+a_{2n-1})}{\frac12(2n-1)(b_1+b_{2n-1})}=\dfrac{S_{2n-1}}{T_{2n-1}}=\dfrac{2(2n-1)}{3(2n-1)+1}=\dfrac{2n-1}{3n-1}$. Choose B.
5. A. Method 1: $\dfrac{S_3}{S_6}=\dfrac{3a_1+3d}{6a_1+15d}=\dfrac13$ gives $a_1=2d$, so $\dfrac{S_6}{S_{12}}=\dfrac{6a_1+15d}{12a_1+66d}=\dfrac{27d}{90d}=\dfrac{3}{10}$. Choose A.
Method 2: normalize $S_3=1$, $S_6=3$. Since $S_3$, $S_6-S_3$, $S_9-S_6$, $S_{12}-S_9$ form an arithmetic sequence, $1$, $2$, $S_9-3$, $S_{12}-S_9$ is arithmetic; its common difference is clearly 1, so $S_9-3=3$ and $S_{12}-S_9=4$, giving $S_{12}=10$ and $\dfrac{S_6}{S_{12}}=\dfrac{3}{10}$. Choose A.

[Type 3] Synthesis problems with the partial sum

[Classic Example 1] The arithmetic sequence $\{a_n\}$ satisfies $a_1=2$ and $a_5=18$.
(1) Find the general term of $\{a_n\}$.
(2) With $S_n$ the partial sum of $\{a_n\}$, find the positive integers $n$ for which $S_n>60n+800$.
Solution: (1) With common difference $d$: $4d=a_5-a_1=18-2=16$, so $d=4$ and $a_n=2+4(n-1)=4n-2$.
(2) $S_n=\dfrac{n[2+(4n-2)]}{2}=2n^2$. Setting $2n^2>60n+800$, i.e. $n^2-30n-400>0$, gives $n>40$ (discarding $n<-10$). So $S_n>60n+800$ holds exactly for the positive integers $n\ge41$.

[Classic Example 2] A Yangtze flood-control headquarters receives a forecast that a flood crest will arrive in 24 hours. To ensure safety, it decides to build a dike as a second line of defense before the crest arrives. Besides the troops and local residents already working continuously, the job needs 20 dump trucks of the same model, each working an average of 24 hours, to be completed. Only one truck is currently on site; the rest must be pulled from a nearby highway, with one truck arriving every 20 minutes, and the headquarters can muster at most 24 more trucks. Can the second line of defense be built within 24 hours?
Solution: Let the $n$-th truck work $a_n$ hours. Then $a_n-a_{n+1}=\dfrac{20}{60}=\dfrac13$ hours, so $\{a_n\}$ is arithmetic with $a_1=24$ and common difference $d=-\dfrac13$. Getting all 25 trucks in place takes $\dfrac{20}{60}\times24=8$ hours $<24$ hours, and the 25 trucks can supply
$S_{25}=a_1+a_2+\cdots+a_{25}=25a_1+\dfrac{25\times24}{2}d=25\times24+\dfrac{25\times24}{2}\times\left(-\dfrac13\right)=500$ truck-hours,
while the job requires $24\times20=480$ truck-hours. Since $500>480$, the second line of defense can be built within 24 hours.
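Both classic examples above reduce to simple numeric checks; a sketch:

```python
# Classic Example 1: a_n = 4n - 2, so S_n = 2n^2; the inequality
# S_n > 60n + 800 should first hold at n = 41.
S = lambda n: 2 * n * n
first = next(n for n in range(1, 10**4) if S(n) > 60 * n + 800)
assert first == 41

# Classic Example 2: truck k (k = 0..24) works 24 - k/3 hours, and the
# dike needs 20 trucks x 24 hours = 480 truck-hours.
hours = [24 - k / 3 for k in range(25)]
assert abs(sum(hours) - 500) < 1e-9
assert sum(hours) >= 24 * 20   # 500 >= 480: the dike can be finished
```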
[Consolidation exercises]
1. One of a scenic area's "three wonders" is a pair of iron flagpoles cast in the first year of the Daoguang reign, standing on either side of the entrance, each weighing about 12,000 jin. Each pole has five sections cast with dragon and bagua motifs, and each pole carries 24 delicate iron wind chimes. The section lengths approximately form an arithmetic sequence; the first section is about 12 chi long and the total length is about 48 chi. About how many chi long is the fifth section? ( )
A. 7  B. 7.2  C. 7.6  D. 8
2. The arithmetic sequence $\{a_n\}$ with nonzero common difference satisfies $a_3=a_5a_8$ and $a_6=1$.
(1) Find the general term of $\{a_n\}$.
(2) With $S_n$ the partial sum of $\{a_n\}$, find the largest positive integer $n$ for which $S_n<a_n$.
3. In the positive arithmetic sequence $\{a_n\}$ with partial sums $S_n$, $a_2+a_3=12$ and $a_2\cdot a_3=S_5$.
(1) Find $a_n$.
(2) Prove: $\dfrac13\le\dfrac{1}{S_1}+\dfrac{1}{S_2}+\cdots+\dfrac{1}{S_n}<\dfrac34$.

Answers
1. B. Let the section lengths form the arithmetic sequence $\{a_n\}$ with common difference $d$. From $a_1=12$ and $S_5=5a_1+10d=48$: $60+10d=48$, so $d=-1.2$ and $a_5=a_1+4d=12+4\times(-1.2)=7.2$. The fifth section is about 7.2 chi. Choose B.
2. (1) $a_n=2n-11$ $(n\in\mathbb{N}^*)$; (2) 10.
(1) Let the common difference be $d$ $(d\ne0)$. From $a_3=a_5a_8$ and $a_6=1$: $1-3d=(1-d)(1+2d)$, i.e. $2d^2-4d=0$, so $d=2$ (discarding $d=0$). Then $a_1=a_6-5d=1-10=-9$, so $a_n=-9+2(n-1)=2n-11$ $(n\in\mathbb{N}^*)$.
(2) By (1), $S_n=\dfrac n2(a_1+a_n)=\dfrac n2(-9+2n-11)=n^2-10n$. Setting $S_n<a_n$ gives $n^2-10n<2n-11$, i.e. $n^2-12n+11<0$, so $1<n<11$. Since $n\in\mathbb{N}^*$, the largest positive integer $n$ with $S_n<a_n$ is 10.
3. (1) $a_n=2n+1$; (2) proof below.
(1) From $\begin{cases}a_2+a_3=12\\ a_2\cdot a_3=S_5=5a_3\end{cases}$ (and $a_3>0$): $a_2=a_1+d=5$, $a_3=a_1+2d=7$, so $a_1=3$, $d=2$ and $a_n=2n+1$.
(2) $S_n=n(n+2)$, so $\dfrac{1}{S_n}=\dfrac{1}{n(n+2)}=\dfrac12\left(\dfrac1n-\dfrac{1}{n+2}\right)$, and the sum telescopes:
$\dfrac{1}{S_1}+\dfrac{1}{S_2}+\cdots+\dfrac{1}{S_n}=\dfrac12\left(1+\dfrac12-\dfrac{1}{n+1}-\dfrac{1}{n+2}\right)<\dfrac34$.
The sum increases with $n$, so its least value is the $n=1$ term, $\dfrac13$.
Altogether: $\dfrac13\le\dfrac{1}{S_1}+\dfrac{1}{S_2}+\cdots+\dfrac{1}{S_n}<\dfrac34$.
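Exercise 3's telescoping bound can be sanity-checked numerically (a sketch; floating point, so a tiny tolerance is used on the lower bound):

```python
# With S_n = n(n+2), verify 1/3 <= sum_{k<=n} 1/S_k < 3/4 for many n.
partial = 0.0
for n in range(1, 10_000):
    partial += 1.0 / (n * (n + 2))
    assert 1 / 3 - 1e-12 <= partial < 0.75
```

The partial sums approach but never reach $\dfrac34$, since the telescoped tail $\dfrac12\left(\dfrac{1}{n+1}+\dfrac{1}{n+2}\right)$ is always positive.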
Tiered practice

[Group A: basic]
1. The arithmetic sequence $\{a_n\}$ has common difference $d=2$ and $a_1=1$. Then ( )
A. $a_n=2n$, $S_n=n^2$  B. $a_n=n$, $S_n=n^2+n$  C. $a_n=2n-1$, $S_n=n^2$  D. $a_n=2n-1$, $S_n=n^2-n$
2. In the arithmetic sequence $\{a_n\}$, $a_{10}=2a_8-2$. The sum of its first 11 terms $S_{11}=$ ( )
A. 8  B. 16  C. 22  D. 44
3. The arithmetic sequence $\{a_n\}$ has general term $a_n=2n+1$ and partial sums $S_n$. The sum of the first 10 terms of $\left\{\dfrac{S_n}{n}\right\}$ is ( )
A. 120  B. 70  C. 75  D. 100
4. The arithmetic sequence $\{a_n\}$ has partial sums $S_n$ with $S_4=40$, $S_n=210$ and $S_{n-4}=130$. Then $n=$ ( )
A. 12  B. 14  C. 16  D. 18
5. Classical Chinese verse poses the "eight sons dividing silk" problem: 996 jin of silk are given to 8 sons as travel money, distributed in order from eldest to youngest, each younger son receiving 17 jin more than the next elder. How much silk does the 8th son receive? ( )
A. 201 jin  B. 191 jin  C. 184 jin  D. 174 jin
6. (Multiple choice) The arithmetic sequence $\{a_n\}$ has partial sums $S_n$ and $a_1+5a_3=S_8$. Which of the following must be correct? ( )
A. $a_{10}=0$  B. $S_n$ is maximal at $n=9$ or $10$  C. $|a_9|<|a_{11}|$  D. $S_6=S_{13}$
7. The arithmetic sequence $\{a_n\}$ has partial sums $S_n$ with $S_{10}=110$ and $S_{110}=10$. Then $S_{120}=\underline{\quad\quad}$.
8. The arithmetic sequence $\{a_n\}$ has partial sums $S_n$ with $a_6=6$ and $S_{15}=15$. The common difference $d=\underline{\quad\quad}$.
9. The arithmetic sequence $\{a_n\}$ has common difference $d>0$, $a_2=-11$ and $a_5^2-a_{10}^2=0$. Then $S_{15}=\underline{\quad\quad}$.
10. $S_n$ is the partial sum of the arithmetic sequence $\{a_n\}$, with $a_1=-3$ and $S_4=0$.
(1) Find the general term $a_n$ and $S_n$.
(2) Compute $a_2+a_4+\cdots+a_{10}+a_{12}$.
11. $S_n$ is the partial sum of the arithmetic sequence $\{a_n\}$, with $S_2=2$ and $S_3=-6$.
(1) Find the general term $a_n$ and $S_n$.
(2) Does there exist $n$ such that $S_n$, $S_{n+2}+2n$, $S_{n+3}$ form an arithmetic sequence? If so, find it; if not, explain why.

Answers
1. C.
2. C. From $a_{10}=2a_8-2$: $a_1+9d=2a_1+14d-2$, so $a_1+5d=a_6=2$ and $S_{11}=11a_6=22$. Choose C.
3. C. Since $a_n=2n+1$, $S_n=\dfrac{n(a_1+a_n)}{2}=n^2+2n$, so $\dfrac{S_n}{n}=n+2$: the sequence $\left\{\dfrac{S_n}{n}\right\}$ is arithmetic with general term $n+2$, first term 3 and 10th term 12, and its first 10 terms sum to $\dfrac{10(3+12)}{2}=75$. Choose C.
4. B. $S_4=40$ gives $a_1+a_2+a_3+a_4=40$; $S_n-S_{n-4}=80$ gives $a_n+a_{n-1}+a_{n-2}+a_{n-3}=80$. Adding and pairing terms: $4(a_1+a_n)=120$, i.e. $a_1+a_n=30$. Then $S_n=\dfrac{n(a_1+a_n)}{2}=210$ gives $n=14$. Choose B.
5. C. Let $a_1,a_2,\ldots,a_8$ be the amounts received from eldest to youngest. This is an arithmetic sequence with common difference 17 and total 996, so $8a_1+\dfrac{8\times7}{2}\times17=996$, giving $a_1=65$ and $a_8=65+7\times17=184$. The 8th son receives 184 jin. Choose C.
6. AD. From $a_1+5a_3=S_8$: $a_1+5(a_1+2d)=8a_1+\dfrac{8\times7}{2}d$, giving $a_1=-9d$.
Then $a_{10}=a_1+9d=0$, so A is correct.
$S_n=na_1+\dfrac{n(n-1)}{2}d=\dfrac d2n^2-\dfrac{19d}{2}n$; its extremum depends on the sign of $d$, so we cannot conclude that $S_n$ is maximal at $n=9$ or $10$: B is wrong.
$|a_9|=|a_1+8d|=|-d|=|d|$ and $|a_{11}|=|a_1+10d|=|d|$, so $|a_9|=|a_{11}|$: C is wrong.
$S_6=6a_1+\dfrac{6\times5}{2}d=-39d$ and $S_{13}=13a_1+\dfrac{13\times12}{2}d=-39d$, so $S_6=S_{13}$: D is correct.
Choose AD.
7. $-120$. Let $b_n=\dfrac{S_n}{n}$; since $\{a_n\}$ is arithmetic, $\{b_n\}$ is too. Let its common difference be $d$. Then $b_{10}=\dfrac{S_{10}}{10}=11$ and $b_{110}=\dfrac{S_{110}}{110}=\dfrac{10}{110}=\dfrac1{11}$, so $100d=b_{110}-b_{10}=\dfrac1{11}-11=-\dfrac{120}{11}$, giving $d=-\dfrac{6}{55}$. Then $b_{120}=b_{110}+10d=\dfrac1{11}-\dfrac{6}{55}\times10=-1$, so $\dfrac{S_{120}}{120}=-1$, i.e. $S_{120}=-120$.
8. $-\dfrac52$. From $a_6=6$ and $S_{15}=15$: $a_1+5d=6$ and $15a_1+\dfrac{15\times14}{2}d=15$, giving $d=-\dfrac52$.
9. 15. With $d>0$, $a_2=-11$ and $a_5^2-a_{10}^2=0$ we must have $a_5=-a_{10}<0$, so $a_1+d=-11$ and $a_1+4d=-(a_1+9d)$, giving $a_1=-13$, $d=2$.
Then $S_{15}=-13\times15+\dfrac{15\times14}{2}\times2=15$.
10. (1) $a_n=2n-5$, $S_n=n^2-4n$; (2) 54.
Let the common difference be $d$. From $a_1=-3$ and $S_4=0$: $4a_1+6d=-12+6d=0$, so $d=2$.
(1) $a_n=-3+2(n-1)=2n-5$ and $S_n=-3n+\dfrac{n(n-1)}{2}\times2=n^2-4n$.
(2) $a_2+a_4+\cdots+a_{10}+a_{12}$ is an arithmetic sum of 6 terms with first term $a_2=-1$ and common difference 4: $-6+\dfrac{6\times5}{2}\times4=54$.
11. (1) $a_n=10-6n$, $S_n=7n-3n^2$; (2) yes: $n=5$ makes $S_n$, $S_{n+2}+2n$, $S_{n+3}$ an arithmetic sequence.
(1) Let the common difference be $d$. From $S_2=2$ and $S_3=-6$: $2a_1+d=2$ and $3a_1+3d=-6$, so $a_1=4$, $d=-6$. Hence $a_n=4-6(n-1)=10-6n$ and $S_n=\dfrac{n(4+10-6n)}{2}=7n-3n^2$.
(2) Suppose $S_n$, $S_{n+2}+2n$, $S_{n+3}$ form an arithmetic sequence; then $2(S_{n+2}+2n)=S_n+S_{n+3}$, i.e.
$2[7(n+2)-3(n+2)^2+2n]=7n-3n^2+7(n+3)-3(n+3)^2$,
which gives $n=5$. So $n=5$ works.

[Group B: harder]
1. $S_n$ is the partial sum of the arithmetic sequence $\{a_n\}$, and $S_6>S_7>S_5$. Consider the five statements:
① $d<0$; ② $S_{11}>0$; ③ $S_{12}<0$; ④ the largest term of $\{S_n\}$ is $S_{11}$; ⑤ $|a_6|>|a_7|$.
How many are correct? ( )
A. 3  B. 4  C. 5  D. 1
2. In the sequence $\{a_n\}$, $a_{n+2}-a_n=2$ $(n\in\mathbb{N}^*)$, $a_1=-23$, $a_2=-19$, and $S_n$ is the partial sum of $\{a_n\}$. The minimum of $S_n$ is $\underline{\quad\quad}$.
3. The decreasing arithmetic sequence $\{a_n\}$ has partial sums $S_n$, with $a_3a_5=63$ and $a_2+a_6=16$.
(1) Find the general term of $\{a_n\}$.
(2) For which $n$ is $S_n$ maximal, and what is the maximum?
(3) Compute $|a_1|+|a_2|+|a_3|+\cdots+|a_n|$.

Answers
1. A. From $S_6>S_7>S_5$: $a_6=S_6-S_5>0$, $a_7=S_7-S_6<0$, and $a_6+a_7=S_7-S_5>0$.
① $d=a_7-a_6<0$: correct.
② $S_{11}=\dfrac{11(a_1+a_{11})}{2}=11a_6>0$: correct.
③ $S_{12}=6(a_1+a_{12})=6(a_6+a_7)>0$: ③ is wrong.
④ Since $a_6>0>a_7$, the largest term of $\{S_n\}$ is $S_6$: wrong.
⑤ $a_6>0$, $a_7<0$ and $a_6+a_7>0$, so $|a_6|>|a_7|$: correct.
So ①②⑤ are correct. Choose A.
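Group B exercise 1 can be sanity-checked on a concrete instance. The parameters below are my own choice: $a_1=17$, $d=-3$ gives $a_6=2>0>a_7=-1$ and $S_6>S_7>S_5$, matching the hypothesis.

```python
a1, d = 17, -3
a = lambda n: a1 + (n - 1) * d
S = lambda n: n * a1 + n * (n - 1) * d // 2

assert S(6) > S(7) > S(5)                         # the hypothesis
assert d < 0                                      # claim 1 holds
assert S(11) > 0                                  # claim 2 holds
assert not (S(12) < 0)                            # claim 3 fails: S_12 = 6(a_6 + a_7) > 0
assert max(S(n) for n in range(1, 40)) == S(6)    # max is S_6, not S_11: claim 4 fails
assert abs(a(6)) > abs(a(7))                      # claim 5 holds
```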
2. $-243$. From $a_{n+2}-a_n=2$ $(n\in\mathbb{N}^*)$ and $a_1=-23$, the odd-indexed terms form an arithmetic sequence with first term $-23$ and common difference 2, so for odd $n$, $a_n=-23+(n-1)=n-24$: $a_n<0$ for odd $n\le23$ and $a_n>0$ for odd $n\ge25$.
From $a_2=-19$ and $a_{n+2}-a_n=2$, the even-indexed terms form an arithmetic sequence with first term $-19$ and common difference 2, so for even $n$, $a_n=-19+(n-2)=n-21$: $a_n<0$ for even $n\le20$ and $a_n>0$ for even $n\ge22$.
Moreover $a_{22}=1$, $a_{23}=-1$, $a_{24}=3$, so the minimum of $S_n$ is
$S_{21}=S_{23}=-23\times11+\dfrac{11\times10}{2}\times2-19\times10+\dfrac{10\times9}{2}\times2=-243$.
3. (1) $a_n=12-n$; (2) $n=11$ or $12$, with maximum 66; (3) the piecewise formula below.
(1) $a_2+a_6=a_3+a_5=16$, and $a_3\cdot a_5=63$, so $a_3$ and $a_5$ are the roots of $x^2-16x+63=0$: $\begin{cases}a_3=7\\a_5=9\end{cases}$ or $\begin{cases}a_3=9\\a_5=7\end{cases}$. Since the sequence is decreasing, $a_3=9$ and $a_5=7$, so $d=\dfrac{a_5-a_3}{2}=-1$, $a_1=11$, and $a_n=11-(n-1)=12-n$.
(2) Requiring $\begin{cases}a_n\ge0\\a_{n+1}\le0\end{cases}$, i.e. $\begin{cases}12-n\ge0\\11-n\le0\end{cases}$, gives $11\le n\le12$. Since $n\in\mathbb{N}^*$, $S_n$ is maximal at $n=11$ or $12$, with $S_{11}=S_{12}=12\times11+\dfrac{12\times11}{2}\times(-1)=66$.
(3) By (2), $a_n\ge0$ for $n\le12$ and $a_n<0$ for $n>12$.
① For $n\le12$: $|a_1|+|a_2|+\cdots+|a_n|=a_1+a_2+\cdots+a_n=S_n=\dfrac{n(11+12-n)}{2}=-\dfrac12n^2+\dfrac{23}{2}n$.
② For $n>12$: $|a_1|+|a_2|+\cdots+|a_n|=(a_1+\cdots+a_{12})-(a_{13}+\cdots+a_n)=2S_{12}-S_n=\dfrac12n^2-\dfrac{23}{2}n+2\times66=\dfrac12n^2-\dfrac{23}{2}n+132$.
So $|a_1|+|a_2|+|a_3|+\cdots+|a_n|=\begin{cases}-\dfrac12n^2+\dfrac{23}{2}n, & n\le12\\[2mm] \dfrac12n^2-\dfrac{23}{2}n+132, & n>12\end{cases}$.
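The piecewise closed form in exercise 3(3) can be checked against a direct absolute-value sum; a sketch:

```python
# Exercise 3(3): a_n = 12 - n, and the piecewise closed form for
# |a_1| + |a_2| + ... + |a_n|.
def abs_sum(n):
    return sum(abs(12 - k) for k in range(1, n + 1))

def closed(n):
    if n <= 12:
        return -n * n / 2 + 23 * n / 2
    return n * n / 2 - 23 * n / 2 + 132

for n in range(1, 40):
    assert abs_sum(n) == closed(n)
```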
[Group C: challenge]
1. (Multiple choice) The arithmetic sequence $\{a_n\}$ has first term $a_1$, common difference $d$ and partial sums $S_n$, with $S_{20}<S_{18}<S_{19}$. Which of the following are correct? ( )
A. $a_1>0$  B. $d>0$  C. $|a_{18}+a_{19}|>|a_{20}+a_{21}|$  D. the smallest term of $\left\{\dfrac{S_n}{a_n}\right\}$ is $\dfrac{S_{20}}{a_{20}}$
2. The infinite arithmetic sequence $\{a_n\}$ has partial sums $S_n$.
(1) If the first term $a_1=\dfrac32$ and $d=1$, find the positive integers $k$ satisfying $S_{k^2}=(S_k)^2$.
(2) Find all infinite arithmetic sequences $\{a_n\}$ such that $S_{k^2}=(S_k)^2$ for every positive integer $k$.

Answers
1. AD. From $S_{20}<S_{18}<S_{19}$: $a_{19}+a_{20}=S_{20}-S_{18}<0$ and $a_{19}=S_{19}-S_{18}>0$, so $a_{20}<0$. Then $a_1+18d>0$ and $a_1+19d<0$, so $a_1>0$ and $d<0$: A holds, B fails.
Since $a_{21}+a_{18}=a_{19}+a_{20}<0$ and $(a_{20}+a_{21})-(a_{18}+a_{19})=4d<0$, with $a_{18}+a_{19}>0>a_{20}+a_{21}$ and $(a_{18}+a_{19})+(a_{20}+a_{21})<0$, we get $|a_{20}+a_{21}|>|a_{18}+a_{19}|$: C fails.
From the above, $a_1>a_2>\cdots>a_{19}>0>a_{20}>a_{21}>\cdots$, and
$S_{37}=\dfrac{37(a_1+a_{37})}{2}=37a_{19}>0$, $S_{38}=\dfrac{38(a_1+a_{38})}{2}=19(a_{19}+a_{20})<0$,
so $S_n>0$ for $n\le37$ and $S_n<0$ for $n\ge38$. Hence $\dfrac{S_n}{a_n}>0$ for $n\le19$ or $n\ge38$, and $\dfrac{S_n}{a_n}<0$ for $19<n\le37$.
Among the latter, $0>a_{20}>a_{21}>\cdots>a_{37}$ and $S_{20}>S_{21}>\cdots>S_{37}>0$, so the ratio is most negative at $n=20$: the smallest term of $\left\{\dfrac{S_n}{a_n}\right\}$ is $\dfrac{S_{20}}{a_{20}}$, and D holds.
In all, only A and D are correct.
2. (1) $k=4$; (2) ① $a_n=0$; ② $a_n=1$; ③ $a_n=2n-1$.
(1) With $a_1=\dfrac32$ and $d=1$: $S_n=na_1+\dfrac{n(n-1)}{2}d=\dfrac32n+\dfrac{n(n-1)}{2}=\dfrac12n^2+n$.
Then $S_{k^2}=(S_k)^2$ becomes $\dfrac12(k^2)^2+k^2=\left(\dfrac12k^2+k\right)^2$, i.e. $\dfrac14k^4-k^3=0$; since $k$ is a positive integer, $k=4$.
(2) Let the common difference be $d$. Taking $k=1$ and $k=2$ in $S_{k^2}=(S_k)^2$ gives $\begin{cases}S_1=(S_1)^2\\ S_4=(S_2)^2\end{cases}$, i.e.
$a_1=a_1^2$ … (1)
$4a_1+6d=(2a_1+d)^2$ … (2)
From (1), $a_1=0$ or $a_1=1$.
If $a_1=0$, substituting into (2) gives $6d=d^2$, so $d=0$ or $d=6$. With $a_1=0$, $d=0$ the condition holds. With $a_1=0$, $d=6$, $a_n=6(n-1)$: then $S_3=18$ and $(S_3)^2=324$, but $S_9=216\ne(S_3)^2$, so this sequence fails.
If $a_1=1$, substituting into (2) gives $4+6d=(2+d)^2$, so $d=0$ or $d=2$. With $a_1=1$, $d=0$: $a_n=1$, $S_n=n$, and $S_{k^2}=(S_k)^2$ holds. With $a_1=1$, $d=2$: $a_n=2n-1$, $S_n=n^2$, and $S_{k^2}=(S_k)^2$ holds.
In summary, exactly three infinite arithmetic sequences satisfy the condition:
① $a_n=0$; ② $a_n=1$; ③ $a_n=2n-1$.

Source: the blog 贵哥讲数学, posted 2022-12-05 16:44.
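As a final check of Group C exercise 2, the three sequences found in part (2) and the single solution $k=4$ of part (1) can be verified directly (a sketch using exact rational arithmetic for part (1)):

```python
from fractions import Fraction

# Part (2): the three sequences all satisfy S_{k^2} = (S_k)^2.
def S(a1, d, n):
    return n * a1 + n * (n - 1) * d // 2

for a1, d in [(0, 0), (1, 0), (1, 2)]:
    for k in range(1, 30):
        assert S(a1, d, k * k) == S(a1, d, k) ** 2

# Part (1): a_1 = 3/2, d = 1 gives S_n = n^2/2 + n; among k = 1..50 the
# identity S_{k^2} = (S_k)^2 holds only at k = 4.
S2 = lambda n: Fraction(n * n, 2) + n
assert [k for k in range(1, 51) if S2(k * k) == S2(k) ** 2] == [4]
```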
https://www.aljazeera.com/news/2024/7/3/how-do-hurricanes-form-and-how-do-they-differ-from-cyclones-and-typhoons
EXPLAINER | News | Weather

How do hurricanes form and how do they differ from cyclones and typhoons?

With this year's hurricane season under way, Al Jazeera visualises the differences between various storm systems.

By AJLabs. Published on 3 Jul 2024.

Hurricane Beryl, the earliest hurricane on record to reach Category 5 strength in the Atlantic season, is barrelling towards Jamaica after battering the southeastern Caribbean, killing at least six people and leaving widespread destruction.

The National Oceanic and Atmospheric Administration (NOAA) predicts an 85 percent chance that this year's hurricane season will be more active than usual, driven primarily by La Nina conditions and warmer-than-average ocean temperatures.

The World Meteorological Organization (WMO) publishes an alphabetical list of names for upcoming tropical cyclones.

What's the difference between hurricanes, cyclones and typhoons?

Hurricanes, cyclones and typhoons are all essentially the same thing: storm systems with winds exceeding 119km/h (74mph). The name differs based on where in the world the storm happens.

Hurricanes: Occur in the North Atlantic Ocean and Northeast Pacific, often affecting the United States East Coast and the Caribbean. The strength of a hurricane is measured on a wind scale from 1 to 5: a Category 1 hurricane brings sustained winds of 119-153km/h (74-95mph), whereas a Category 5 storm can exceed 252km/h (157mph).

Typhoons: Occur in the northwestern Pacific Ocean, frequently hitting the Philippines and Japan. Typhoon season is most common between May and October, but typhoons can form year-round. Typhoon strength is rated on various classification scales, with the most severe storms named "super typhoons".

Cyclones: Occur in the South Pacific and the Indian Ocean, often affecting countries from Australia all the way to Mozambique. Cyclone season is typically between November and April.
How does a tropical storm form?

Tropical storms form over warm ocean waters near the equator. As warm air rises, an area of lower air pressure forms beneath it. As the air cools down again, it is pushed aside by more warm air rising below it. This cycle causes strong winds and rain.

When this cycle gains momentum and strengthens, it creates a tropical storm. As the storm system rotates ever faster, an eye forms in the centre. The eye of the storm is very calm and clear and has very low air pressure.

When winds reach speeds of 63km/h (39mph), the storm is called a tropical storm. When wind speeds reach 119km/h (74mph), the storm becomes a tropical cyclone, typhoon or hurricane.

Source: Al Jazeera
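The thresholds in this explainer can be written up as a tiny classifier. This is an illustrative sketch, not an official NOAA/WMO tool; the article only gives the Category 1 range (119-153km/h) and the Category 5 floor (252km/h), so the intermediate boundaries 178 and 209km/h are taken from the standard Saffir-Simpson scale:

```python
def classify_storm(wind_kmh: float) -> str:
    """Rough storm label from sustained wind speed in km/h."""
    if wind_kmh < 63:
        return "tropical depression"   # below tropical-storm strength
    if wind_kmh < 119:
        return "tropical storm"
    # At >= 119 km/h the system is a hurricane, typhoon or cyclone
    # depending on the basin; hurricanes use Saffir-Simpson categories.
    for category, upper in ((1, 154), (2, 178), (3, 209), (4, 252)):
        if wind_kmh < upper:
            return f"category {category}"
    return "category 5"
```

For example, Beryl's Category 5 status corresponds to sustained winds of at least 252km/h (157mph): `classify_storm(260)` returns `"category 5"`.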
https://www.youtube.com/watch?v=mkBfsTeYi2U
Mastering Quadratic Functions: Discriminant for Positivity or Negativity
iitutor.com — posted 30 Jul 2019

Description: A guide to deciding when a quadratic function is positive definite or negative definite. The discriminant, together with the sign of the leading coefficient, tells you whether the graph of a quadratic lies entirely above the x-axis (positive definite) or entirely below it (negative definite). The video breaks down the discriminant condition step by step, with worked examples, so you can analyze quadratic functions whether you are solving real-world problems or tackling math assignments.
Summary: Positive-definite means the graph is concave up and the discriminant is negative. Negative-definite means the graph is concave down and the discriminant is negative.

Transcript:

Okay, now we're going to look at positive and negative definite functions. For the quadratic $y = ax^2 + bx + c$, the first case is when the graph sits entirely above the x-axis, floating above it without touching it, so it has no roots.

That case is called a positive definite function. The parabola is a "happy face", opening upwards, so $a$, the coefficient of the leading term $x^2$, must be positive: $a > 0$. That's the first thing you must remember for positive definite functions. Second, the discriminant must be less than zero. If the discriminant is negative, we would be taking the square root of a negative value, which is imaginary or unreal; that particular function therefore has no real roots and floats above the x-axis. Please remember both conditions for the positive definite case.

Now I want to describe to you what negative definite means. It's a "sad face": a negative parabola, entirely below the x-axis, again with no roots. In this case $a$, the coefficient of $x^2$, must be less than zero, because for a parabola to open downwards the coefficient of the leading term must be negative. But the same as the positive case, the discriminant must still be less than zero: for both positive and negative definite functions the graph has no roots, so the discriminant is negative. The only difference between positive and negative definite parabolas is the shape and the sign of $a$: the positive one is above the x-axis with $a > 0$; the negative one is below with $a < 0$. The discriminant condition is the same. Please remember these two cases, because that's what we'll be applying in the next few questions.

Question 7: find the values of $a$ for which $x^2 - x + 1 - a > 0$ for all $x$. If the whole quadratic is greater than zero for all $x$, it must float above the x-axis, so it must be positive definite; try to draw a diagram to get a good idea. First check the coefficient of $x^2$: it is 1, which is positive, so that condition holds. Now apply the discriminant: $b^2 - 4ac < 0$, because a positive definite function has no roots. Here the quadratic's coefficients are $1$, $-1$ and $1 - a$; don't get mixed up, the whole of $1 - a$ is the constant term. Substituting everything into the formula: $(-1)^2 - 4 \times 1 \times (1 - a) < 0$, i.e. $1 - 4(1 - a) < 0$. From here it's just algebra: $1 - 4 + 4a < 0$, so $4a < 3$, and dividing both sides by 4 (the sign follows along), $a < \dfrac{3}{4}$. That's the answer: for this quadratic to be greater than zero for all $x$, the value of $a$ must be less than $\dfrac{3}{4}$; otherwise it's not a positive definite function. Once again, the discriminant must be less than zero.

Question 8: find all values of $a$ for which $ax^2 - 2x + a < 0$ for all real $x$. This time the quadratic must be less than zero: the whole function represents $y$, so if $y < 0$ everywhere, the graph is a sad face below the x-axis, never touching it. It must be negative definite. First, as always, consider the coefficient of $x^2$, which here is $a$: for a negative definite parabola $a$ must be negative, so write down $a < 0$ as your first step and call it condition (1). But I'm not finished with that, because I also need to apply my discriminant: just as for positive definite functions, a negative definite function has no roots, so $b^2 - 4ac < 0$. Here $b = -2$, and both the leading coefficient and the constant term are $a$. Substituting the values into the formula: $(-2)^2 - 4 \times a \times a < 0$, i.e. $4 - 4a^2 < 0$. Factor out the 4: $4(1 - a^2) < 0$; divide both sides by 4: $1 - a^2 < 0$. Multiply everything by $-1$, remembering that multiplying or dividing by a negative number switches the inequality sign: $a^2 - 1 > 0$. I've added those extra steps to that point for anyone who is a bit confused; do add your extra steps if you need to, because I really want you to avoid silly mistakes. Now factorise the difference of squares: $(a + 1)(a - 1) > 0$. If I draw the parabola with roots at $-1$ and $1$, it is greater than zero on the outer two sides, so there are two solutions: $a < -1$ (going to the left) or $a > 1$ (going to the right).

But that's not the end. I've got $a < -1$ or $a > 1$, and I've also got $a < 0$ from condition (1): three inequalities for $a$, and they must all make sense together, so I need to find their common part. How I do that is draw a number line for $a$ with $-1$, $0$ and $1$ marked. First, $a < 0$: any values less than zero go to the left of zero. Next, $a < -1$: these go to the left of $-1$. Looking at the common parts of these two, only $a < -1$ is common. Now apply $a > 1$: those values go to the right of 1, which is not common with the other two. So the only values for which all three inequalities hold — the shaded region common to all of them — is $a < -1$, and that's the answer.
so from that equation equation one and equation two I found that a is less than negative one which is that section there is the only applicable out sir okay so a must be negative one for this function to be a negative definite quadratic function alright does that make sense so that was a little bit more complex than the previous one and usually the negative definite okay you have to consider firstly the coefficient of x squared and then you can sit up the discriminant and then make sure make sure you always find the common parts like I did here I really find this doing number line or a number plane really helps okay so do this and like shade look you're some coloring this find the common parts okay and you see that this one is definitely not common with any of these two so we're just gonna ignore that okay we'll eliminate that alright so the common part is simply is less than negative one and that's the answer to question eight so that was positive and negative definite quadratic functions I hope you know what they look like by now okay and make sure you know the properties of each one for positive definite ones the coefficient of x squared a must be greater than zero and for the negative definite one the coefficient of x square a must be less than zero like what we applied here okay and all the time for both positive and negative definite functions b square minus 4ac the discriminant must always be less than zero okay that indicates having no roots all right that was question eight that was positive and negative definite functions
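The two conditions derived in this lesson are easy to sanity-check numerically. Below is a small Python sketch (my own, not part of the lesson; the function name is made up) that encodes the test for f(x) = ax² − 2x + a and spot-checks that a < −1 is the boundary:

```python
# Sanity check (not from the lesson): f(x) = a*x^2 - 2*x + a is negative
# definite exactly when a < 0 AND the discriminant b^2 - 4ac = 4 - 4a^2 < 0,
# which together give a < -1.

def is_negative_definite(a):
    """Apply the two conditions from the lesson: a < 0 and b^2 - 4ac < 0."""
    return a < 0 and (-2) ** 2 - 4 * a * a < 0

# a = -2 satisfies a < -1, so the quadratic should pass both tests
assert is_negative_definite(-2)
# a = -0.5 is negative but not below -1: discriminant is positive, so f has roots
assert not is_negative_definite(-0.5)
# a = 2 fails the very first condition (parabola opens upward)
assert not is_negative_definite(2)

# spot-check: with a = -2, f(x) = -2x^2 - 2x - 2 stays below 0 for sampled x
a = -2
assert all(a * x * x - 2 * x + a < 0 for x in [i / 10 for i in range(-100, 101)])
```

The sampling at the end is only a spot-check over a finite range, but it agrees with the algebraic answer a < −1.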
https://www.bigideasmath.com/external/state-resources/pdfs/NC_math2_08_03.pdf
Copyright © Big Ideas Learning, LLC. All rights reserved.

8.3 Proving Triangle Congruence by SAS
For use with Exploration 8.3

Name ________ Date ________

Essential Question  What can you conclude about two triangles when you know that two pairs of corresponding sides and the corresponding included angles are congruent?

Go to BigIdeasMath.com for an interactive tool to investigate this exploration.

1 EXPLORATION: Drawing Triangles

Work with a partner. Use dynamic geometry software.

a. Construct circles with radii of 2 units and 3 units centered at the origin. Construct a 40° angle with its vertex at the origin. Label the vertex A.
b. Locate the point where one ray of the angle intersects the smaller circle and label this point B. Locate the point where the other ray of the angle intersects the larger circle and label this point C. Then draw △ABC.
c. Find BC, m∠B, and m∠C.
d. Repeat parts (a)–(c) several times, redrawing the angle in different positions. Keep track of your results by completing the table on the next page. What can you conclude?

[Two coordinate-grid figures: a 40° angle with vertex A at the origin, and the same angle with B and C marked where the rays meet the circles.]

Communicate Your Answer

2. What can you conclude about two triangles when you know that two pairs of corresponding sides and the corresponding included angles are congruent?
3. How would you prove your conclusion in Exploration 1(d)?

      A       B    C    AB   AC   BC   m∠A   m∠B   m∠C
1.  (0, 0)              2    3         40°
2.  (0, 0)              2    3         40°
3.  (0, 0)              2    3         40°
4.  (0, 0)              2    3         40°
5.  (0, 0)              2    3         40°

8.3 Proving Triangle Congruence by SAS (continued)
1 EXPLORATION: Drawing Triangles (continued)
8.3 For use after Lesson 8.3

Name ________ Date ________

Theorems

Side-Angle-Side (SAS) Congruence Theorem
If two sides and the included angle of one triangle are congruent to two sides and the included angle of a second triangle, then the two triangles are congruent.
If AB ≅ DE, ∠A ≅ ∠D, and AC ≅ DF, then △ABC ≅ △DEF.

Notes:

Worked-Out Examples

Example #1  Write a proof.
Given  PQ bisects ∠SPT, SP ≅ TP
Prove  △SPQ ≅ △TPQ

STATEMENTS                                REASONS
1. SP ≅ TP, PQ bisects ∠SPT.              1. Given
2. PQ ≅ PQ                                2. Reflexive Property of Congruence
3. ∠SPQ ≅ ∠TPQ                            3. Definition of angle bisector
4. △SPQ ≅ △TPQ                            4. SAS Congruence Theorem

Example #2  Prove that △ABC ≅ △DEC. Then find the values of x and y.
[Diagram: segments AD and BE crossing at C, with AC = 4y − 6, DC = 2x + 6, BC = 3y + 1, EC = 4x.]

STATEMENTS                                REASONS
1. AC ≅ DC, BC ≅ EC                       1. Given (marked in diagram)
2. ∠ACB ≅ ∠DCE                            2. Vertical Angles Congruence Theorem
3. △ABC ≅ △DEC                            3. SAS Congruence Theorem

AC = CD                    BC = CE
4y − 6 = 2x + 6            3y + 1 = 4x
4y = 2x + 12
y = (1/2)x + 3

Substituting: 3((1/2)x + 3) + 1 = 4x
1.5x + 9 + 1 = 4x
1.5x + 10 = 4x
10 = 2.5x
x = 4

y = (1/2)(4) + 3 = 2 + 3 = 5

So, x = 4 and y = 5.

Extra Practice

In Exercises 1 and 2, write a proof.

1. Given  BD ⊥ AC, AD ≅ CD
   Prove  △ABD ≅ △CBD

   STATEMENTS                             REASONS

2. Given  JN ≅ MN, NK ≅ NL
   Prove  △JNK ≅ △MNL

   STATEMENTS                             REASONS

In Exercises 3 and 4, use the given information to name two triangles that are congruent. Explain your reasoning.

3. ∠EPF ≅ ∠GPH, and P is the center of the circle.
4. ABCDEF is a regular hexagon.

5. A quilt is made of triangles. You know PS ∥ QR and PS ≅ QR. Use the SAS Congruence Theorem to show that △PQR ≅ △RSP.
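The simultaneous equations in Worked-Out Example #2 can be verified with a few lines of code. This is a Python sketch, not part of the worksheet, and the helper name is mine; it follows the same substitution steps as the worked solution:

```python
# Worked-Out Example #2: solve 4y - 6 = 2x + 6 and 3y + 1 = 4x by substitution.

def solve_example_2():
    # Rewrite the first equation as x = 2y - 6, substitute into the second:
    # 3y + 1 = 4(2y - 6)... equivalently 4(2y - 6) - 3y = 1 -> 5y = 25.
    y = 25 / 5          # y = 5
    x = 2 * y - 6       # x = 4
    return x, y

x, y = solve_example_2()
assert (x, y) == (4, 5)

# both sides of each original equation agree, confirming the answer
assert 4 * y - 6 == 2 * x + 6   # 14 on both sides
assert 3 * y + 1 == 4 * x       # 16 on both sides
```

The final two assertions check the answer against the original side-length equations, the same verification one would do by hand.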
12.3 Practice B

Name ________ Date ________

In Exercises 1 and 2, decide whether enough information is given to prove that the triangles are congruent using the SAS Congruence Theorem. Explain.

1. [Diagram: triangles with vertices labeled L, M, N, P]
2. [Diagram: triangles with vertices labeled W, X, Y, Z, U, V]

In Exercises 3 and 4, identify three congruent triangles and explain how to show that they are congruent.

3. P is the center of the circle. [Diagram]
4. Three squares border equiangular and equilateral △RST.

5. Use the information given in the figure to find the values of x and y. [Diagram with measures (2y − 26)°, 38°, 5y°, 4x − 16, and 6]

6. Given  EB ≅ EC, △AED is equilateral and equiangular.
   Prove  △ACD ≅ △DBA
https://www.whitman.edu/mathematics/multivariable/multivariable_14_Partial_Differentiation.pdf
14 Partial Differentiation

14.1 Functions of Several Variables

In single-variable calculus we were concerned with functions that map the real numbers R to R, sometimes called "real functions of one variable", meaning the "input" is a single real number and the "output" is likewise a single real number. In the last chapter we considered functions taking a real number to a vector, which may also be viewed as functions f: R → R³, that is, for each input value we get a position in space. Now we turn to functions of several variables, meaning several input variables, functions f: Rⁿ → R. We will deal primarily with n = 2 and to a lesser extent n = 3; in fact many of the techniques we discuss can be applied to larger values of n as well.

A function f: R² → R maps a pair of values (x, y) to a single real number. The three-dimensional coordinate system we have already used is a convenient way to visualize such functions: above each point (x, y) in the x-y plane we graph the point (x, y, z), where of course z = f(x, y).

EXAMPLE 14.1.1  Consider f(x, y) = 3x + 4y − 5. Writing this as z = 3x + 4y − 5 and then 3x + 4y − z = 5 we recognize the equation of a plane. In the form f(x, y) = 3x + 4y − 5 the emphasis has shifted: we now think of x and y as independent variables and z as a variable dependent on them, but the geometry is unchanged.

EXAMPLE 14.1.2  We have seen that x² + y² + z² = 4 represents a sphere of radius 2. We cannot write this in the form f(x, y), since for each x and y in the disk x² + y² < 4 there are two corresponding points on the sphere. As with the equation of a circle, we can resolve this equation into two functions, f(x, y) = √(4 − x² − y²) and f(x, y) = −√(4 − x² − y²), representing the upper and lower hemispheres.
Each of these is an example of a function with a restricted domain: only certain values of x and y make sense (namely, those for which x² + y² ≤ 4) and the graphs of these functions are limited to a small region of the plane.

EXAMPLE 14.1.3  Consider f = √x + √y. This function is defined only when both x and y are non-negative. When y = 0 we get f(x, y) = √x, the familiar square root function in the x-z plane, and when x = 0 we get the same curve in the y-z plane. Generally speaking, we see that starting from f(0, 0) = 0 this function gets larger in every direction in roughly the same way that the square root function gets larger. For example, if we restrict attention to the line x = y, we get f(x, y) = 2√x and along the line y = 2x we have f(x, y) = √x + √(2x) = (1 + √2)√x.

Figure 14.1.1  f(x, y) = √x + √y (AP)

A computer program that plots such surfaces can be very useful, as it is often difficult to get a good idea of what they look like. Still, it is valuable to be able to visualize relatively simple surfaces without such aids. As in the previous example, it is often a good idea to examine the function on restricted subsets of the plane, especially lines. It can also be useful to identify those points (x, y) that share a common z-value.

EXAMPLE 14.1.4  Consider f(x, y) = x² + y². When x = 0 this becomes f = y², a parabola in the y-z plane; when y = 0 we get the "same" parabola f = x² in the x-z plane. Now consider the line y = kx. If we simply replace y by kx we get f(x, y) = (1 + k²)x², which is a parabola, but it does not really "represent" the cross-section along y = kx, because the cross-section has the line y = kx where the horizontal axis should be. In order to pretend that this line is the horizontal axis, we need to write the function in terms of the distance from the origin, which is √(x² + y²) = √(x² + k²x²). Now f(x, y) = x² + k²x² = (√(x² + k²x²))².
So the cross-section is the "same" parabola as in the x-z and y-z planes, namely, the height is always the distance from the origin squared. This means that f(x, y) = x² + y² can be formed by starting with z = x² and rotating this curve around the z axis. Finally, picking a value z = k, at what points does f(x, y) = k? This means x² + y² = k, which we recognize as the equation of a circle of radius √k. So the graph of f(x, y) has parabolic cross-sections, and the same height everywhere on concentric circles with center at the origin. This fits with what we have already discovered.

Figure 14.1.2  f(x, y) = x² + y² (AP)

As in this example, the points (x, y) such that f(x, y) = k usually form a curve, called a level curve of the function. A graph of some level curves can give a good idea of the shape of the surface; it looks much like a topographic map of the surface. In figure 14.1.2 both the surface and its associated level curves are shown. Note that, as with a topographic map, the heights corresponding to the level curves are evenly spaced, so that where curves are closer together the surface is steeper.

Functions f: Rⁿ → R behave much like functions of two variables; we will on occasion discuss functions of three variables. The principal difficulty with such functions is visualizing them, as they do not "fit" in the three dimensions we are familiar with. For three variables there are various ways to interpret functions that make them easier to understand. For example, f(x, y, z) could represent the temperature at the point (x, y, z), or the pressure, or the strength of a magnetic field. It remains useful to consider those points at which f(x, y, z) = k, where k is some constant value.
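The level-curve claim in Example 14.1.4, that f(x, y) = x² + y² is constant on circles centered at the origin, is easy to confirm numerically. A short Python sketch (mine, not from the text):

```python
# Level curves of f(x, y) = x^2 + y^2 (Example 14.1.4): every point on the
# circle of radius sqrt(k) centered at the origin has the same height k.
import math

def f(x, y):
    return x * x + y * y

k = 3.0
r = math.sqrt(k)

# sample points around the circle x = r cos(t), y = r sin(t)
for i in range(12):
    t = 2 * math.pi * i / 12
    assert abs(f(r * math.cos(t), r * math.sin(t)) - k) < 1e-12
```

Since r² cos²t + r² sin²t = r² = k identically, every sampled height agrees with k up to floating-point error.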
If f(x, y, z) is temperature, the set of points (x, y, z) such that f(x, y, z) = k is the collection of points in space with temperature k; in general this is called a level set; for three variables, a level set is typically a surface, called a level surface.

EXAMPLE 14.1.5  Suppose the temperature at (x, y, z) is T(x, y, z) = e^(−(x²+y²+z²)). This function has a maximum value of 1 at the origin, and tends to 0 in all directions. If k is positive and at most 1, the set of points for which T(x, y, z) = k is those points satisfying x² + y² + z² = −ln k, a sphere centered at the origin. The level surfaces are the concentric spheres centered at the origin.

Exercises 14.1.

1. Let f(x, y) = (x − y)². Determine the equations and shapes of the cross-sections when x = 0, y = 0, x = y, and describe the level curves. Use a three-dimensional graphing tool to graph the surface. ⇒
2. Let f(x, y) = |x| + |y|. Determine the equations and shapes of the cross-sections when x = 0, y = 0, x = y, and describe the level curves. Use a three-dimensional graphing tool to graph the surface. ⇒
3. Let f(x, y) = e^(−(x²+y²)) sin(x² + y²). Determine the equations and shapes of the cross-sections when x = 0, y = 0, x = y, and describe the level curves. Use a three-dimensional graphing tool to graph the surface. ⇒
4. Let f(x, y) = sin(x − y). Determine the equations and shapes of the cross-sections when x = 0, y = 0, x = y, and describe the level curves. Use a three-dimensional graphing tool to graph the surface. ⇒
5. Let f(x, y) = (x² − y²)². Determine the equations and shapes of the cross-sections when x = 0, y = 0, x = y, and describe the level curves. Use a three-dimensional graphing tool to graph the surface. ⇒
6. Find the domain of each of the following functions of two variables:
   a. √(9 − x²) + √(y² − 4)
   b. arcsin(x² + y² − 2)
   c. √(16 − x² − 4y²) ⇒
7. Below are two sets of level curves. One is for a cone, one is for a paraboloid. Which is which? Explain.
14.2 Limits and Continuity

To develop calculus for functions of one variable, we needed to make sense of the concept of a limit, which we needed to understand continuous functions and to define the derivative. Limits involving functions of two variables can be considerably more difficult to deal with; fortunately, most of the functions we encounter are fairly easy to understand.

The potential difficulty is largely due to the fact that there are many ways to "approach" a point in the x-y plane. If we want to say that lim_{(x,y)→(a,b)} f(x, y) = L, we need to capture the idea that as (x, y) gets close to (a, b) then f(x, y) gets close to L. For functions of one variable, f(x), there are only two ways that x can approach a: from the left or right. But there are an infinite number of ways to approach (a, b): along any one of an infinite number of lines, or an infinite number of parabolas, or an infinite number of sine curves, and so on. We might hope that it's really not so bad—suppose, for example, that along every possible line through (a, b) the value of f(x, y) gets close to L; surely this means that "f(x, y) approaches L as (x, y) approaches (a, b)". Sadly, no.

EXAMPLE 14.2.1  Consider f(x, y) = xy²/(x² + y⁴). When x = 0 or y = 0, f(x, y) is 0, so the limit of f(x, y) approaching the origin along either the x or y axis is 0. Moreover, along the line y = mx, f(x, y) = m²x³/(x² + m⁴x⁴). As x approaches 0 this expression approaches 0 as well. So along every line through the origin f(x, y) approaches 0. Now suppose we approach the origin along x = y². Then

f(x, y) = y²·y²/(y⁴ + y⁴) = y⁴/(2y⁴) = 1/2,

so the limit is 1/2. Looking at figure 14.2.1, it is apparent that there is a ridge above x = y². Approaching the origin along a straight line, we go over the ridge and then drop down toward 0, but approaching along the ridge the height is a constant 1/2. Thus, there is no limit at (0, 0).
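Example 14.2.1 can also be explored numerically. The Python sketch below (not from the text) samples f along the line y = x, where values shrink toward 0, and along the parabola x = y², where the value stays at 1/2:

```python
# Example 14.2.1: f(x, y) = x*y^2 / (x^2 + y^4) tends to 0 along straight
# lines through the origin, but is identically 1/2 on the curve x = y^2.

def f(x, y):
    return x * y * y / (x * x + y ** 4)

# along the line y = x, f(t, t) = t/(1 + t^2), which shrinks toward 0
line_vals = [abs(f(t, t)) for t in (1e-1, 1e-3, 1e-5)]
assert line_vals[0] > line_vals[1] > line_vals[2]
assert line_vals[2] < 1e-4

# along the parabola x = y^2, the value is the constant 1/2
for y in (1e-1, 1e-2, 1e-3):
    assert abs(f(y * y, y) - 0.5) < 1e-12
```

The two samplings disagree no matter how close to the origin we get, which is exactly why the limit fails to exist.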
Fortunately, we can define the concept of limit without needing to specify how a particular point is approached—indeed, in definition 2.3.2, we didn't need the concept of "approach." Roughly, that definition says that when x is close to a then f(x) is close to L; there is no mention of "how" we get close to a. We can adapt that definition to two variables quite easily:

DEFINITION 14.2.2  Limit  Suppose f(x, y) is a function. We say that lim_{(x,y)→(a,b)} f(x, y) = L if for every ε > 0 there is a δ > 0 so that whenever 0 < √((x − a)² + (y − b)²) < δ, |f(x, y) − L| < ε.

Figure 14.2.1  f(x, y) = xy²/(x² + y⁴) (AP)

This says that we can make |f(x, y) − L| < ε, no matter how small ε is, by making the distance from (x, y) to (a, b) "small enough".

EXAMPLE 14.2.3  We show that lim_{(x,y)→(0,0)} 3x²y/(x² + y²) = 0. Suppose ε > 0. Then

|3x²y/(x² + y²)| = (x²/(x² + y²))·3|y|.

Note that x²/(x² + y²) ≤ 1 and |y| = √(y²) ≤ √(x² + y²) < δ. So

(x²/(x² + y²))·3|y| < 1 · 3 · δ.

We want to force this to be less than ε by picking δ "small enough." If we choose δ = ε/3 then

|3x²y/(x² + y²)| < 1 · 3 · (ε/3) = ε.

Recall that a function f(x) is continuous at x = a if lim_{x→a} f(x) = f(a); roughly this says that there is no "hole" or "jump" at x = a. We can say exactly the same thing about a function of two variables.

DEFINITION 14.2.4  f(x, y) is continuous at (a, b) if lim_{(x,y)→(a,b)} f(x, y) = f(a, b).

EXAMPLE 14.2.5  The function f(x, y) = 3x²y/(x² + y²) is not continuous at (0, 0), because f(0, 0) is not defined. However, we know that lim_{(x,y)→(0,0)} f(x, y) = 0, so we can easily "fix" the problem, by extending the definition of f so that f(0, 0) = 0. This surface is shown in figure 14.2.2.

Figure 14.2.2  f(x, y) = 3x²y/(x² + y²) (AP)

Note that in contrast to this example we cannot fix example 14.2.1 at (0, 0) because the limit does not exist.
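The inequality at the heart of Example 14.2.3, namely |3x²y/(x² + y²)| ≤ 3√(x² + y²), can be spot-checked at random points. A Python sketch (mine, not the text's):

```python
# Example 14.2.3: |3 x^2 y / (x^2 + y^2)| <= 3 * sqrt(x^2 + y^2), so choosing
# delta = eps/3 forces the function's value below eps near the origin.
import math
import random

random.seed(0)
for _ in range(1000):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    if x == 0 and y == 0:
        continue  # the function is undefined at the origin itself
    dist = math.hypot(x, y)  # sqrt(x^2 + y^2), the distance to (0, 0)
    assert abs(3 * x * x * y / (x * x + y * y)) <= 3 * dist + 1e-12
```

This is only a finite sampling, not a proof, but it mirrors the bound used in the ε-δ argument: the factor x²/(x² + y²) never exceeds 1, and |y| never exceeds the distance to the origin.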
No matter what value we try to assign to f at (0, 0) the surface will have a "jump" there.

Fortunately, the functions we will examine will typically be continuous almost everywhere. Usually this follows easily from the fact that closely related functions of one variable are continuous. As with single variable functions, two classes of common functions are particularly useful and easy to describe. A polynomial in two variables is a sum of terms of the form ax^m y^n, where a is a real number and m and n are non-negative integers. A rational function is a quotient of polynomials.

THEOREM 14.2.6  Polynomials are continuous everywhere. Rational functions are continuous everywhere they are defined.

Exercises 14.2.

Determine whether each limit exists. If it does, find the limit and prove that it is the limit; if it does not, explain how you know.

1. lim_{(x,y)→(0,0)} x²/(x² + y²) ⇒
2. lim_{(x,y)→(0,0)} xy/(x² + y²) ⇒
3. lim_{(x,y)→(0,0)} xy/(2x² + y²) ⇒
4. lim_{(x,y)→(0,0)} (x⁴ − y⁴)/(x² + y²) ⇒
5. lim_{(x,y)→(0,0)} sin(x² + y²)/(x² + y²) ⇒
6. lim_{(x,y)→(0,0)} xy/√(2x² + y²) ⇒
7. lim_{(x,y)→(0,0)} (e^(−x²−y²) − 1)/(x² + y²) ⇒
8. lim_{(x,y)→(0,0)} (x³ + y³)/(x² + y²) ⇒
9. lim_{(x,y)→(0,0)} (x² + sin²y)/(2x² + y²) ⇒
10. lim_{(x,y)→(1,0)} ((x − 1)² ln x)/((x − 1)² + y²) ⇒
11. lim_{(x,y)→(1,−1)} 3x + 4y ⇒
12. lim_{(x,y)→(0,0)} 4x²y/(x² + y²) ⇒
13. Does the function f(x, y) = (x − y)/(1 + x + y) have any discontinuities? What about f(x, y) = (x − y)/(1 + x² + y²)? Explain.

14.3 Partial Differentiation

When we first considered what the derivative of a vector function might mean, there was really not much difficulty in understanding either how such a thing might be computed or what it might measure. In the case of functions of two variables, things are a bit harder to understand. If we think of a function of two variables in terms of its graph, a surface, there is a more-or-less obvious derivative-like question we might ask, namely, how "steep" is the surface.
But it's not clear that this has a simple answer, nor how we might proceed. We will start with what seem to be very small steps toward the goal; surprisingly, it turns out that these simple ideas hold the keys to a more general understanding.

Figure 14.3.1  f(x, y) = x² + y², cut by the plane x + y = 1 (AP)

Imagine a particular point on a surface; what might we be able to say about how steep it is? We can limit the question to make it more familiar: how steep is the surface in a particular direction? What does this even mean? Here's one way to think of it: Suppose we're interested in the point (a, b, c). Pick a straight line in the x-y plane through the point (a, b, 0), then extend the line vertically into a plane. Look at the intersection of the plane with the surface. If we pay attention to just the plane, we see the chosen straight line where the x-axis would normally be, and the intersection with the surface shows up as a curve in the plane. Figure 14.3.1 shows the parabolic surface from figure 14.1.2, exposing its cross-section above the line x + y = 1.

In principle, this is a problem we know how to solve: find the slope of a curve in a plane. Let's start by looking at some particularly easy lines: those parallel to the x or y axis. Suppose we are interested in the cross-section of f(x, y) above the line y = b. If we substitute b for y in f(x, y), we get a function in one variable, describing the height of the cross-section as a function of x. Because y = b is parallel to the x-axis, if we view it from a vantage point on the negative y-axis, we will see what appears to be simply an ordinary curve in the x-z plane.

Figure 14.3.2  f(x, y) = x² + y², cut by the plane y = 2 (AP)

Consider again the parabolic surface f(x, y) = x² + y².
The cross-section above the line y = 2 consists of all points (x, 2, x² + 4). Looking at this cross-section from somewhere on the negative y axis, we see what appears to be just the curve f(x) = x² + 4. At any point on the cross-section, (a, 2, a² + 4), the steepness of the surface in the direction of the line y = 2 is simply the slope of the curve f(x) = x² + 4 at x = a, namely 2a. Figure 14.3.2 shows the same parabolic surface as before, but now cut by the plane y = 2. The left graph shows the cut-off surface, the right shows just the cross-section, looking up from the negative y-axis toward the origin.

If, say, we're interested in the point (−1, 2, 5) on the surface, then the slope in the direction of the line y = 2 is 2x = 2(−1) = −2. This means that starting at (−1, 2, 5) and moving on the surface, above the line y = 2, in the direction of increasing x values, the surface goes down; of course moving in the opposite direction, toward decreasing x values, the surface will rise.

If we're interested in some other line y = k, there is really no change in the computation. The equation of the cross-section above y = k is x² + k² with derivative 2x. We can save ourselves the effort, small as it is, of substituting k for y: all we are in effect doing is temporarily assuming that y is some constant. With this assumption, the derivative d/dx (x² + y²) = 2x. To emphasize that we are only temporarily assuming y is constant, we use a slightly different notation: ∂/∂x (x² + y²) = 2x; the "∂" reminds us that there are more variables than x, but that only x is being treated as a variable. We read the equation as "the partial derivative of (x² + y²) with respect to x is 2x." A convenient alternate notation for the partial derivative of f(x, y) with respect to x is fx(x, y).

EXAMPLE 14.3.1  The partial derivative with respect to x of x³ + 3xy is 3x² + 3y. Note that the partial derivative includes the variable y, unlike the example x² + y².
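Example 14.3.1 can be checked against a numerical derivative: hold y fixed and take a central difference in x. A Python sketch (not from the text; the function names are mine):

```python
# Example 14.3.1: for f(x, y) = x^3 + 3xy, the claim is fx(x, y) = 3x^2 + 3y.
# A central difference in x, with y held constant, should match the formula.

def f(x, y):
    return x ** 3 + 3 * x * y

def fx_analytic(x, y):
    return 3 * x * x + 3 * y

x, y, h = 1.7, -0.4, 1e-6
# central difference: vary only x, exactly as a partial derivative prescribes
numeric = (f(x + h, y) - f(x - h, y)) / (2 * h)
assert abs(numeric - fx_analytic(x, y)) < 1e-6
```

Holding y fixed in the difference quotient is precisely the "temporarily assume y is constant" idea from the text, made mechanical.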
It is somewhat unusual for the partial derivative to depend on a single variable; this example is more typical. Of course, we can do the same sort of calculation for lines parallel to the y-axis. We temporarily hold x constant, which gives us the equation of the cross-section above a line x = k. We can then compute the derivative with respect to y; this will measure the steepness of the curve in the y direction.

EXAMPLE 14.3.2  The partial derivative with respect to y of f(x, y) = sin(xy) + 3xy is

fy(x, y) = ∂/∂y (sin(xy) + 3xy) = cos(xy)·∂/∂y (xy) + 3x = x cos(xy) + 3x.

So far, using no new techniques, we have succeeded in measuring the slope of a surface in two quite special directions. For functions of one variable, the derivative is closely linked to the notion of tangent line. For surfaces, the analogous idea is the tangent plane—a plane that just touches a surface at a point, and has the same "steepness" as the surface in all directions. Even though we haven't yet figured out how to compute the slope in all directions, we have enough information to find tangent planes. Suppose we want the plane tangent to a surface at a particular point (a, b, c). If we compute the two partial derivatives of the function for that point, we get enough information to determine two lines tangent to the surface, both through (a, b, c) and both tangent to the surface in their respective directions. These two lines determine a plane, that is, there is exactly one plane containing the two lines: the tangent plane.

Figure 14.3.3  Tangent vectors and tangent plane. (AP)

Figure 14.3.3 shows (part of) two tangent lines at a point, and the tangent plane containing them. How can we discover an equation for this tangent plane? We know a point on the plane, (a, b, c); we need a vector normal to the plane.
If we can find two vectors, one parallel to each of the tangent lines we know how to find, then the cross product of these vectors will give the desired normal vector.

Figure 14.3.4  A tangent vector.

How can we find vectors parallel to the tangent lines? Consider first the line tangent to the surface above the line y = b. A vector ⟨u, v, w⟩ parallel to this tangent line must have y component v = 0, and we may as well take the x component to be u = 1. The ratio of the z component to the x component is the slope of the tangent line, precisely what we know how to compute. The slope of the tangent line is fx(a, b), so

fx(a, b) = w/u = w/1 = w.

In other words, a vector parallel to this tangent line is ⟨1, 0, fx(a, b)⟩, as shown in figure 14.3.4. If we repeat the reasoning for the tangent line above x = a, we get the vector ⟨0, 1, fy(a, b)⟩.
Now to find the desired normal vector we compute the cross product,

⟨0, 1, fy⟩ × ⟨1, 0, fx⟩ = ⟨fx, fy, −1⟩.

From our earlier discussion of planes, we can write down the equation we seek: fx(a, b)x + fy(a, b)y − z = k, and k as usual can be computed by substituting a known point: fx(a, b)(a) + fy(a, b)(b) − c = k. There are various more-or-less nice ways to write the result:

fx(a, b)x + fy(a, b)y − z = fx(a, b)a + fy(a, b)b − c
fx(a, b)x + fy(a, b)y − fx(a, b)a − fy(a, b)b + c = z
fx(a, b)(x − a) + fy(a, b)(y − b) + c = z
fx(a, b)(x − a) + fy(a, b)(y − b) + f(a, b) = z

EXAMPLE 14.3.3  Find the plane tangent to x² + y² + z² = 4 at (1, 1, √2). This point is on the upper hemisphere, so we use f(x, y) = √(4 − x² − y²). Then fx(x, y) = −x(4 − x² − y²)^(−1/2) and fy(x, y) = −y(4 − x² − y²)^(−1/2), so fx(1, 1) = fy(1, 1) = −1/√2 and the equation of the plane is

z = −(1/√2)(x − 1) − (1/√2)(y − 1) + √2.

The hemisphere and this tangent plane are pictured in figure 14.3.3.

So it appears that to find a tangent plane, we need only find two quite simple ordinary derivatives, namely fx and fy. This is true if the tangent plane exists. It is, unfortunately, not always the case that if fx and fy exist there is a tangent plane. Consider the function xy²/(x² + y⁴) pictured in figure 14.2.1. This function has value 0 when x = 0 or y = 0, and we can "plug the hole" by agreeing that f(0, 0) = 0. Now it's clear that fx(0, 0) = fy(0, 0) = 0, because in the x and y directions the surface is simply a horizontal line. But it's also clear from the picture that this surface does not have anything that deserves to be called a "tangent plane" at the origin, certainly not the x-y plane containing these two tangent lines.

When does a surface have a tangent plane at a particular point? What we really want from a tangent plane, as from a tangent line, is that the plane be a "good" approximation of the surface near the point.
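Example 14.3.3, and this "good approximation" idea, lend themselves to a quick numerical check of the formula z = fx(a, b)(x − a) + fy(a, b)(y − b) + f(a, b). The Python sketch below (mine, not the text's) confirms the partials at (1, 1) and that the plane tracks the hemisphere nearby:

```python
# Example 14.3.3: tangent plane to the upper hemisphere
# f(x, y) = sqrt(4 - x^2 - y^2) at the point (1, 1, sqrt(2)).
import math

def f(x, y):
    return math.sqrt(4 - x * x - y * y)

def fx(x, y):
    return -x / math.sqrt(4 - x * x - y * y)

def fy(x, y):
    return -y / math.sqrt(4 - x * x - y * y)

a, b = 1.0, 1.0

def tangent_z(x, y):
    # z = fx(a,b)(x - a) + fy(a,b)(y - b) + f(a, b)
    return fx(a, b) * (x - a) + fy(a, b) * (y - b) + f(a, b)

# both partials at (1, 1) equal -1/sqrt(2), matching the text
assert abs(fx(a, b) + 1 / math.sqrt(2)) < 1e-12
assert abs(fy(a, b) + 1 / math.sqrt(2)) < 1e-12
# the plane touches the surface at the point of tangency
assert abs(tangent_z(1, 1) - math.sqrt(2)) < 1e-12
# nearby, plane and surface agree to first order
assert abs(tangent_z(1.01, 1.0) - f(1.01, 1.0)) < 1e-3
```

The last assertion is the informal version of "good approximation": the gap between plane and surface near (1, 1) is second-order small in the displacement.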
Here is how we can make this precise:

DEFINITION 14.3.4 Let ∆x = x − x0, ∆y = y − y0, and ∆z = z − z0, where z0 = f(x0, y0). The function z = f(x, y) is differentiable at (x0, y0) if

∆z = fx(x0, y0)∆x + fy(x0, y0)∆y + ε1∆x + ε2∆y,

and both ε1 and ε2 approach 0 as (x, y) approaches (x0, y0).

This definition takes a bit of absorbing. Let's rewrite the central equation a bit:

z = fx(x0, y0)(x − x0) + fy(x0, y0)(y − y0) + f(x0, y0) + ε1∆x + ε2∆y.   (14.3.1)

The first three terms on the right give the value of z on the tangent plane, that is, fx(x0, y0)(x − x0) + fy(x0, y0)(y − y0) + f(x0, y0) is the z-value of the point on the plane above (x, y). Equation 14.3.1 says that the z-value of a point on the surface is equal to the z-value of a point on the plane plus a "little bit," namely ε1∆x + ε2∆y. As (x, y) approaches (x0, y0), both ∆x and ∆y approach 0, so this little bit ε1∆x + ε2∆y also approaches 0, and the z-values on the surface and the plane get close to each other. But that by itself is not very interesting: since the surface and the plane both contain the point (x0, y0, z0), the z values will approach z0 and hence get close to each other whether the tangent plane is "tangent" to the surface or not. The extra condition in the definition says that as (x, y) approaches (x0, y0), the ε values approach 0—this means that ε1∆x + ε2∆y approaches 0 much, much faster, because ε1∆x is much smaller than either ε1 or ∆x. It is this extra condition that makes the plane a tangent plane.

We can see that the extra condition on ε1 and ε2 fits neatly with the definition of partial derivatives. Suppose we temporarily fix y = y0, so ∆y = 0. Then the equation from the definition becomes

∆z = fx(x0, y0)∆x + ε1∆x

or

∆z/∆x = fx(x0, y0) + ε1.
Now taking the limit of the two sides as ∆x approaches 0, the left side turns into the partial derivative of z with respect to x at (x0, y0), or in other words fx(x0, y0), and the right side does the same, because as (x, y) approaches (x0, y0), ε1 approaches 0. Essentially the same calculation works for fy.

Almost all of the functions we will encounter are differentiable at points we will be interested in, and often at all points. This is usually because the functions satisfy the hypotheses of this theorem.

THEOREM 14.3.5 If f(x, y) and its partial derivatives are continuous at a point (x0, y0), then f is differentiable there.

Exercises 14.3.
1. Find fx and fy where f(x, y) = cos(x²y) + y³. ⇒
2. Find fx and fy where f(x, y) = xy/(x² + y). ⇒
3. Find fx and fy where f(x, y) = e^(x²+y²). ⇒
4. Find fx and fy where f(x, y) = xy ln(xy). ⇒
5. Find fx and fy where f(x, y) = √(1 − x² − y²). ⇒
6. Find fx and fy where f(x, y) = x tan(y). ⇒
7. Find fx and fy where f(x, y) = 1/(xy). ⇒
8. Find an equation for the plane tangent to 2x² + 3y² − z² = 4 at (1, 1, −1). ⇒
9. Find an equation for the plane tangent to f(x, y) = sin(xy) at (π, 1/2, 1). ⇒
10. Find an equation for the plane tangent to f(x, y) = x² + y³ at (3, 1, 10). ⇒
11. Find an equation for the plane tangent to f(x, y) = x ln(xy) at (2, 1/2, 0). ⇒
12. Find an equation for the line normal to x² + 4y² = 2z at (2, 1, 4). ⇒
13. Explain in your own words why, when taking a partial derivative of a function of multiple variables, we can treat the variables not being differentiated as constants.
14. Consider a differentiable function, f(x, y). Give physical interpretations of the meanings of fx(a, b) and fy(a, b) as they relate to the graph of f.
15. In much the same way that we used the tangent line to approximate the value of a function from single variable calculus, we can use the tangent plane to approximate a function from multivariable calculus. Consider the tangent plane found in Exercise 11.
Use this plane to approximate f(1.98, 0.4). ⇒
16. Suppose that one of your colleagues has calculated the partial derivatives of a given function, and reported to you that fx(x, y) = 2x + 3y and that fy(x, y) = 4x + 6y. Do you believe them? Why or why not? If not, what answer might you have accepted for fy?
17. Suppose f(t) and g(t) are single variable differentiable functions. Find ∂z/∂x and ∂z/∂y for each of the following functions of two variables.
a. z = f(x)g(y)
b. z = f(xy)
c. z = f(x/y)
⇒

14.4 The Chain Rule

Consider the surface z = x²y + xy², and suppose that x = 2 + t⁴ and y = 1 − t³. We can think of the latter two equations as describing how x and y change relative to, say, time. Then

z = x²y + xy² = (2 + t⁴)²(1 − t³) + (2 + t⁴)(1 − t³)²

tells us explicitly how the z coordinate of the corresponding point on the surface depends on t. If we want to know dz/dt we can compute it more or less directly—it's actually a bit simpler to use the chain rule:

dz/dt = x²y′ + 2xx′y + 2xyy′ + x′y²
      = (2xy + y²)x′ + (x² + 2xy)y′
      = (2(2 + t⁴)(1 − t³) + (1 − t³)²)(4t³) + ((2 + t⁴)² + 2(2 + t⁴)(1 − t³))(−3t²)

If we look carefully at the middle step, dz/dt = (2xy + y²)x′ + (x² + 2xy)y′, we notice that 2xy + y² is ∂z/∂x, and x² + 2xy is ∂z/∂y. This turns out to be true in general, and gives us a new chain rule:

THEOREM 14.4.1 Suppose that z = f(x, y), f is differentiable, x = g(t), and y = h(t). Assuming that the relevant derivatives exist,

dz/dt = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt).

Proof. If f is differentiable, then

∆z = fx(x0, y0)∆x + fy(x0, y0)∆y + ε1∆x + ε2∆y,

where ε1 and ε2 approach 0 as (x, y) approaches (x0, y0). Then

∆z/∆t = fx(∆x/∆t) + fy(∆y/∆t) + ε1(∆x/∆t) + ε2(∆y/∆t).   (14.4.1)

As ∆t approaches 0, (x, y) approaches (x0, y0) and so

lim(∆t→0) ∆z/∆t = dz/dt
lim(∆t→0) ε1(∆x/∆t) = 0 · dx/dt
lim(∆t→0) ε2(∆y/∆t) = 0 · dy/dt

and so taking the limit of (14.4.1) as ∆t goes to 0 gives

dz/dt = fx(dx/dt) + fy(dy/dt),

as desired.
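Theorem 14.4.1 is easy to test numerically. The following sketch (my own check, not from the text) uses the surface and paths from the opening example and compares the chain-rule value of dz/dt against a centered difference quotient:

```python
def z(x, y):
    return x*x*y + x*y*y

def x_of(t):
    return 2 + t**4          # x = 2 + t^4

def y_of(t):
    return 1 - t**3          # y = 1 - t^3

def dz_dt(t):
    # chain rule: dz/dt = z_x dx/dt + z_y dy/dt
    x, y = x_of(t), y_of(t)
    zx = 2*x*y + y*y         # dz/dx = 2xy + y^2
    zy = x*x + 2*x*y         # dz/dy = x^2 + 2xy
    return zx * 4*t**3 + zy * (-3*t**2)

# centered difference quotient as an independent check
t, h = 1.5, 1e-6
numeric = (z(x_of(t + h), y_of(t + h)) - z(x_of(t - h), y_of(t - h))) / (2*h)
print(dz_dt(t), numeric)
```

The two printed values agree to many decimal places, as the theorem predicts.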
We can write the chain rule in a way that is somewhat closer to the single variable chain rule:

df/dt = ⟨fx, fy⟩ · ⟨x′, y′⟩,

or (roughly) the derivatives of the outside function "times" the derivatives of the inside functions. Not surprisingly, essentially the same chain rule works for functions of more than two variables, for example, given a function of three variables f(x, y, z), where each of x, y and z is a function of t,

df/dt = ⟨fx, fy, fz⟩ · ⟨x′, y′, z′⟩.

We can even extend the idea further. Suppose that f(x, y) is a function and x = g(s, t) and y = h(s, t) are functions of two variables s and t. Then f is "really" a function of s and t as well, and

∂f/∂s = fx gs + fy hs        ∂f/∂t = fx gt + fy ht.

The natural extension of this to f(x, y, z) works as well.

Recall that we used the ordinary chain rule to do implicit differentiation. We can do the same with the new chain rule.

EXAMPLE 14.4.2 x² + y² + z² = 4 defines a sphere, which is not a function of x and y, though it can be thought of as two functions, the top and bottom hemispheres. We can think of z as one of these two functions, so really z = z(x, y), and we can think of x and y as particularly simple functions of x and y, and let f(x, y, z) = x² + y² + z². Since f(x, y, z) = 4, ∂f/∂x = 0, but using the chain rule:

0 = ∂f/∂x = fx(∂x/∂x) + fy(∂y/∂x) + fz(∂z/∂x) = (2x)(1) + (2y)(0) + (2z)(∂z/∂x),

noting that since y is temporarily held constant its derivative ∂y/∂x = 0. Now we can solve for ∂z/∂x:

∂z/∂x = −2x/(2z) = −x/z.

In a similar manner we can compute ∂z/∂y.

Exercises 14.4.
1. Use the chain rule to compute dz/dt for z = sin(x² + y²), x = t² + 3, y = t³. ⇒
2. Use the chain rule to compute dz/dt for z = x²y, x = sin(t), y = t² + 1. ⇒
3. Use the chain rule to compute ∂z/∂s and ∂z/∂t for z = x²y, x = sin(st), y = t² + s². ⇒
4. Use the chain rule to compute ∂z/∂s and ∂z/∂t for z = x²y², x = st, y = t² − s². ⇒
5. Use the chain rule to compute ∂z/∂x and ∂z/∂y for 2x² + 3y² − 2z² = 9.
⇒
6. Use the chain rule to compute ∂z/∂x and ∂z/∂y for 2x² + y² + z² = 9. ⇒
7. Use the chain rule to compute ∂z/∂x and ∂z/∂y for xy² + z² = 5. ⇒
8. Use the chain rule to compute ∂z/∂x and ∂z/∂y for 2 sin(xyz) = 1. ⇒
9. Chemistry students will recognize the ideal gas law, given by PV = nRT, which relates the pressure, volume, and temperature of n moles of gas (R is the ideal gas constant). Thus, we can view pressure, volume, and temperature as variables, each one dependent on the other two.
a. If the pressure of a gas is increasing at a rate of 0.2 Pa/min and temperature is increasing at a rate of 1 K/min, how fast is the volume changing?
b. If the volume of a gas is decreasing at a rate of 0.3 m³/min and temperature is increasing at a rate of 0.5 K/min, how fast is the pressure changing?
c. If the pressure of a gas is decreasing at a rate of 0.4 Pa/min and the volume is increasing at a rate of 3 L/min, how fast is the temperature changing? ⇒
10. Verify the following identity in the case of the ideal gas law:

(∂P/∂V)(∂V/∂T)(∂T/∂P) = −1

11. The previous exercise was a special case of the following fact, which you are to verify here: If F(x, y, z) is a function of 3 variables, and the relation F(x, y, z) = 0 defines each of the variables in terms of the other two, namely x = f(y, z), y = g(x, z) and z = h(x, y), then

(∂x/∂y)(∂y/∂z)(∂z/∂x) = −1

14.5 Directional Derivatives

We still have not answered one of our first questions about the steepness of a surface: starting at a point on a surface given by f(x, y), and walking in a particular direction, how steep is the surface? We are now ready to answer the question. We already know roughly what has to be done: as shown in figure 14.3.1, we extend a line in the x-y plane to a vertical plane, and we then compute the slope of the curve that is the cross-section of the surface in that plane.
The major stumbling block is that what appears in this plane to be the horizontal axis, namely the line in the x-y plane, is not an actual axis—we know nothing about the "units" along the axis. Our goal is to make this line into a t axis; then we need formulas to write x and y in terms of this new variable t; then we can write z in terms of t since we know z in terms of x and y; and finally we can simply take the derivative.

So we need to somehow "mark off" units on the line, and we need a convenient way to refer to the line in calculations. It turns out that we can accomplish both by using the vector form of a line. Suppose that u is a unit vector ⟨u1, u2⟩ in the direction of interest. A vector equation for the line through (x0, y0) in this direction is v(t) = ⟨u1t + x0, u2t + y0⟩. The height of the surface above the point (u1t + x0, u2t + y0) is g(t) = f(u1t + x0, u2t + y0). Because u is a unit vector, the value of t is precisely the distance along the line from (x0, y0) to (u1t + x0, u2t + y0); this means that the line is effectively a t axis, with origin at the point (x0, y0), so the slope we seek is

g′(0) = ⟨fx(x0, y0), fy(x0, y0)⟩ · ⟨u1, u2⟩ = ⟨fx, fy⟩ · u = ∇f · u

Here we have used the chain rule and the derivatives (d/dt)(u1t + x0) = u1 and (d/dt)(u2t + y0) = u2.

The vector ⟨fx, fy⟩ is very useful, so it has its own symbol, ∇f, pronounced "del f"; it is also called the gradient of f.

EXAMPLE 14.5.1 Find the slope of z = x² + y² at (1, 2) in the direction of the vector ⟨3, 4⟩. We first compute the gradient at (1, 2): ∇f = ⟨2x, 2y⟩, which is ⟨2, 4⟩ at (1, 2). A unit vector in the desired direction is ⟨3/5, 4/5⟩, and the desired slope is then ⟨2, 4⟩ · ⟨3/5, 4/5⟩ = 6/5 + 16/5 = 22/5.

When doing such problems, it is easy to forget that we require a unit vector in the calculation ∇f · u. You may prefer to remember that this can always be written as ∇f · v/|v|.
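The computation in example 14.5.1, including the ∇f · v/|v| form, can be sketched in a few lines of code (a hedged illustration of mine, not from the text), with a difference quotient along the unit direction as a cross-check:

```python
import math

def f(x, y):
    return x*x + y*y

grad = (2*1, 2*2)                  # gradient <2x, 2y> evaluated at (1, 2)
v = (3, 4)
norm = math.hypot(*v)              # |v| = 5
slope = (grad[0]*v[0] + grad[1]*v[1]) / norm   # del f . v / |v|
print(slope)                       # 22/5 = 4.4

# cross-check: difference quotient along the unit direction u = v/|v|
u = (v[0]/norm, v[1]/norm)
h = 1e-6
approx = (f(1 + u[0]*h, 2 + u[1]*h) - f(1, 2)) / h
print(abs(slope - approx))
```

Dividing by |v| once at the end is exactly the ∇f · v/|v| shortcut: there is no need to normalize v first.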
In the previous example, we might then have computed ⟨2, 4⟩ · ⟨3, 4⟩/|⟨3, 4⟩|, rather than remembering to first compute u = ⟨3, 4⟩/|⟨3, 4⟩|.

EXAMPLE 14.5.2 Find a tangent vector to z = x² + y² at (1, 2) in the direction of the vector ⟨3, 4⟩ and show that it is parallel to the tangent plane at that point. Since ⟨3/5, 4/5⟩ is a unit vector in the desired direction, we can easily expand it to a tangent vector simply by adding the third coordinate computed in the previous example: ⟨3/5, 4/5, 22/5⟩. To see that this vector is parallel to the tangent plane, we can compute its dot product with a normal to the plane. We know that a normal to the tangent plane is ⟨fx(1, 2), fy(1, 2), −1⟩ = ⟨2, 4, −1⟩, and the dot product is ⟨2, 4, −1⟩ · ⟨3/5, 4/5, 22/5⟩ = 6/5 + 16/5 − 22/5 = 0, so the two vectors are perpendicular. (Note that the vector normal to the surface, namely ⟨fx, fy, −1⟩, is simply the gradient with a −1 tacked on as the third component.)

The slope of a surface given by z = f(x, y) in the direction of a (two-dimensional) unit vector u is called the directional derivative of f, written Du f. The directional derivative immediately provides us with some additional information. We know that

Du f = ∇f · u = |∇f||u| cos θ = |∇f| cos θ

if u is a unit vector; θ is the angle between ∇f and u. This tells us immediately that the largest value of Du f occurs when cos θ = 1, namely, when θ = 0, so ∇f is parallel to u. In other words, the gradient ∇f points in the direction of steepest ascent of the surface, and |∇f| is the slope in that direction. Likewise, the smallest value of Du f occurs when cos θ = −1, namely, when θ = π, so ∇f is anti-parallel to u. In other words, −∇f points in the direction of steepest descent of the surface, and −|∇f| is the slope in that direction.

EXAMPLE 14.5.3 Investigate the direction of steepest ascent and descent for z = x² + y².
The gradient is ⟨2x, 2y⟩ = 2⟨x, y⟩; this is a vector parallel to the vector ⟨x, y⟩, so the direction of steepest ascent is directly away from the origin, starting at the point (x, y). The direction of steepest descent is thus directly toward the origin from (x, y). Note that at (0, 0) the gradient vector is ⟨0, 0⟩, which has no direction, and it is clear from the plot of this surface that there is a minimum point at the origin, and tangent vectors in all directions are parallel to the x-y plane.

If ∇f is perpendicular to u, Du f = |∇f| cos(π/2) = 0, since cos(π/2) = 0. This means that in either of the two directions perpendicular to ∇f, the slope of the surface is 0; this implies that a vector in either of these directions is tangent to the level curve at that point. Starting with ∇f = ⟨fx, fy⟩, it is easy to find a vector perpendicular to it: either ⟨fy, −fx⟩ or ⟨−fy, fx⟩ will work.

If f(x, y, z) is a function of three variables, all the calculations proceed in essentially the same way. The rate at which f changes in a particular direction is ∇f · u, where now ∇f = ⟨fx, fy, fz⟩ and u = ⟨u1, u2, u3⟩ is a unit vector. Again ∇f points in the direction of maximum rate of increase, −∇f points in the direction of maximum rate of decrease, and any vector perpendicular to ∇f is tangent to the level surface f(x, y, z) = k at the point in question. Of course there are no longer just two such vectors; the vectors perpendicular to ∇f describe the tangent plane to the level surface, or in other words ∇f is a normal to the tangent plane.

EXAMPLE 14.5.4 Suppose the temperature at a point in space is given by T(x, y, z) = T0/(1 + x² + y² + z²); at the origin the temperature in Kelvin is T0 > 0, and it decreases in every direction from there. It might be, for example, that there is a source of heat at the origin, and as we get farther from the source, the temperature decreases.
The gradient is

∇T = ⟨−2T0x/(1 + x² + y² + z²)², −2T0y/(1 + x² + y² + z²)², −2T0z/(1 + x² + y² + z²)²⟩
   = (−2T0/(1 + x² + y² + z²)²)⟨x, y, z⟩.

The gradient points directly at the origin from the point (x, y, z)—by moving directly toward the heat source, we increase the temperature as quickly as possible.

EXAMPLE 14.5.5 Find the points on the surface defined by x² + 2y² + 3z² = 1 where the tangent plane is parallel to the plane defined by 3x − y + 3z = 1. Two planes are parallel if their normals are parallel or anti-parallel, so we want to find the points on the surface with normal parallel or anti-parallel to ⟨3, −1, 3⟩. Let f = x² + 2y² + 3z²; the gradient of f is normal to the level surface at every point, so we are looking for a gradient parallel or anti-parallel to ⟨3, −1, 3⟩. The gradient is ⟨2x, 4y, 6z⟩; if it is parallel or anti-parallel to ⟨3, −1, 3⟩, then ⟨2x, 4y, 6z⟩ = k⟨3, −1, 3⟩ for some k. This means we need a solution to the equations

2x = 3k    4y = −k    6z = 3k

but this is three equations in four unknowns—we need another equation. What we haven't used so far is that the points we seek are on the surface x² + 2y² + 3z² = 1; this is the fourth equation. If we solve the first three equations for x, y, and z and substitute into the fourth equation we get

1 = (3k/2)² + 2(−k/4)² + 3(3k/6)² = (9/4 + 2/16 + 3/4)k² = (25/8)k²

so k = ±2√2/5. The desired points are (3√2/5, −√2/10, √2/5) and (−3√2/5, √2/10, −√2/5). The ellipsoid and the three planes are shown in figure 14.5.1.

Figure 14.5.1 Ellipsoid with two tangent planes parallel to a given plane. (AP)

Exercises 14.5.
1. Find Du f for f = x² + xy + y² in the direction of v = ⟨2, 1⟩ at the point (1, 1). ⇒
2. Find Du f for f = sin(xy) in the direction of v = ⟨−1, 1⟩ at the point (3, 1). ⇒
3. Find Du f for f = eˣ cos(y) in the direction 30 degrees from the positive x axis at the point (1, π/4). ⇒
4.
The temperature of a thin plate in the x-y plane is T = x² + y². How fast does temperature change at the point (1, 5) moving in a direction 30 degrees from the positive x axis? ⇒
5. Suppose the density of a thin plate at (x, y) is 1/√(x² + y² + 1). Find the rate of change of the density at (2, 1) in a direction π/3 radians from the positive x axis. ⇒
6. Suppose the electric potential at (x, y) is ln √(x² + y²). Find the rate of change of the potential at (3, 4) toward the origin and also in a direction at a right angle to the direction toward the origin. ⇒
7. A plane perpendicular to the x-y plane contains the point (2, 1, 8) on the paraboloid z = x² + 4y². The cross-section of the paraboloid created by this plane has slope 0 at this point. Find an equation of the plane. ⇒
8. A plane perpendicular to the x-y plane contains the point (3, 2, 2) on the paraboloid 36z = 4x² + 9y². The cross-section of the paraboloid created by this plane has slope 0 at this point. Find an equation of the plane. ⇒
9. Suppose the temperature at (x, y, z) is given by T = xy + sin(yz). In what direction should you go from the point (1, 1, 1) to decrease the temperature as quickly as possible? What is the rate of change of temperature in this direction? ⇒
10. Suppose the temperature at (x, y, z) is given by T = xyz. In what direction can you go from the point (1, 1, 1) to maintain the same temperature? ⇒
11. Find an equation for the plane tangent to x² − 3y² + z² = 7 at (1, 1, 3). ⇒
12. Find an equation for the plane tangent to xyz = 6 at (1, 2, 3). ⇒
13. Find a vector function for the line normal to x² + 2y² + 4z² = 26 at (2, −3, −1). ⇒
14. Find a vector function for the line normal to x² + y² + 9z² = 56 at (4, 2, −2). ⇒
15. Find a vector function for the line normal to x² + 5y² − z² = 0 at (4, 2, 6). ⇒
16. Find the directions in which the directional derivative of f(x, y) = x² + sin(xy) at the point (1, 0) has the value 1. ⇒
17.
Show that the curve r(t) = ⟨ln(t), t ln(t), t⟩ is tangent to the surface xz² − yz + cos(xy) = 1 at the point (0, 0, 1).
18. A bug is crawling on the surface of a hot plate, the temperature of which at the point x units to the right of the lower left corner and y units up from the lower left corner is given by T(x, y) = 100 − x² − 3y³.
a. If the bug is at the point (2, 1), in what direction should it move to cool off the fastest? How fast will the temperature drop in this direction?
b. If the bug is at the point (1, 3), in what direction should it move in order to maintain its temperature? ⇒
19. The elevation on a portion of a hill is given by f(x, y) = 100 − 4x² − 2y. From the location above (2, 1), in which direction will water run? ⇒
20. The contour map here shows wind speed in knots during Hurricane Andrew on August 24, 1992. Use it to estimate the value of the directional derivative of the wind speed at Homestead, FL, in the direction of the eye of the hurricane. Explain the meaning of your answer to a lay person. ⇒
21. Suppose that g(x, y) = y − x². Find the gradient at the point (−1, 3). Sketch the level curve to the graph of g when g(x, y) = 2, and plot both the tangent line and the gradient vector at the point (−1, 3). (Make your sketch large.) What do you notice, geometrically? ⇒
22. The gradient ∇f is a vector valued function of two variables. Prove the following gradient rules. Assume f(x, y) and g(x, y) are differentiable functions.
a. ∇(fg) = f∇g + g∇f
b. ∇(f/g) = (g∇f − f∇g)/g²
c. ∇((f(x, y))ⁿ) = nf(x, y)ⁿ⁻¹∇f

14.6 Higher Order Derivatives

In single variable calculus we saw that the second derivative is often useful: in appropriate circumstances it measures acceleration; it can be used to identify maximum and minimum points; it tells us something about how sharply curved a graph is.
Not surprisingly, second derivatives are also useful in the multi-variable case, but again not surprisingly, things are a bit more complicated. It's easy to see where some complication is going to come from: with two variables there are four possible second derivatives. To take a "derivative," we must take a partial derivative with respect to x or y, and there are four ways to do it: x then x, x then y, y then x, y then y.

EXAMPLE 14.6.1 Compute all four second derivatives of f(x, y) = x²y². Using an obvious notation, we get:

fxx = 2y²    fxy = 4xy    fyx = 4xy    fyy = 2x².

You will have noticed that two of these are the same, the "mixed partials" computed by taking partial derivatives with respect to both variables in the two possible orders. This is not an accident—as long as the function is reasonably nice, this will always be true.

THEOREM 14.6.2 Clairaut's Theorem If the mixed partial derivatives are continuous, they are equal.

EXAMPLE 14.6.3 Compute the mixed partials of f = xy/(x² + y²).

fx = (y³ − x²y)/(x² + y²)²
fxy = (−x⁴ + 6x²y² − y⁴)/(x² + y²)³

We leave fyx as an exercise.

Exercises 14.6.
1. Find all first and second partial derivatives of f = xy/(x² + y²). ⇒
2. Find all first and second partial derivatives of x³y² + y⁵. ⇒
3. Find all first and second partial derivatives of 4x³ + xy² + 10. ⇒
4. Find all first and second partial derivatives of x sin y. ⇒
5. Find all first and second partial derivatives of sin(3x) cos(2y). ⇒
6. Find all first and second partial derivatives of e^(x+y²). ⇒
7. Find all first and second partial derivatives of ln √(x³ + y⁴). ⇒
8. Find all first and second partial derivatives of z with respect to x and y if x² + 4y² + 16z² − 64 = 0. ⇒
9. Find all first and second partial derivatives of z with respect to x and y if xy + yz + xz = 1. ⇒
10. Let α and k be constants. Prove that the function u(x, t) = e^(−α²k²t) sin(kx) is a solution to the heat equation ut = α²uxx.
11. Let a be a constant.
Prove that u = sin(x − at) + ln(x + at) is a solution to the wave equation utt = a²uxx.
12. How many third-order derivatives does a function of 2 variables have? How many of these are distinct?
13. How many nth order derivatives does a function of 2 variables have? How many of these are distinct?

14.7 Maxima and Minima

Suppose a surface given by f(x, y) has a local maximum at (x0, y0, z0); geometrically, this point on the surface looks like the top of a hill. If we look at the cross-section in the plane y = y0, we will see a local maximum on the curve at (x0, z0), and we know from single-variable calculus that ∂z/∂x = 0 at this point. Likewise, in the plane x = x0, ∂z/∂y = 0. So if there is a local maximum at (x0, y0, z0), both partial derivatives at the point must be zero, and likewise for a local minimum. Thus, to find local maximum and minimum points, we need only consider those points at which both partial derivatives are 0. As in the single-variable case, it is possible for the derivatives to be 0 at a point that is neither a maximum nor a minimum, so we need to test these points further.

You will recall that in the single variable case, we examined three methods to identify maximum and minimum points; the most useful is the second derivative test, though it does not always work. For functions of two variables there is also a second derivative test; again it is by far the most useful test, though it doesn't always work.

THEOREM 14.7.1 Suppose that the second partial derivatives of f(x, y) are continuous near (x0, y0), and fx(x0, y0) = fy(x0, y0) = 0. We denote by D the discriminant: D(x0, y0) = fxx(x0, y0)fyy(x0, y0) − fxy(x0, y0)².
If D > 0: if fxx(x0, y0) < 0 there is a local maximum at (x0, y0); if fxx(x0, y0) > 0 there is a local minimum at (x0, y0).
If D < 0: there is neither a maximum nor a minimum at (x0, y0).
If D = 0: the test fails.

EXAMPLE 14.7.2 Verify that f(x, y) = x² + y² has a minimum at (0, 0).
First, we compute all the needed derivatives:

fx = 2x    fy = 2y    fxx = 2    fyy = 2    fxy = 0.

The derivatives fx and fy are zero only at (0, 0). Applying the second derivative test there:

D(0, 0) = fxx(0, 0)fyy(0, 0) − fxy(0, 0)² = 2 · 2 − 0 = 4 > 0

and fxx(0, 0) = 2 > 0, so there is a local minimum at (0, 0), and there are no other possibilities.

EXAMPLE 14.7.3 Find all local maxima and minima for f(x, y) = x² − y². The derivatives:

fx = 2x    fy = −2y    fxx = 2    fyy = −2    fxy = 0.

Again there is a single critical point, at (0, 0), and

D(0, 0) = fxx(0, 0)fyy(0, 0) − fxy(0, 0)² = 2 · (−2) − 0 = −4 < 0,

so there is neither a maximum nor minimum there, and so there are no local maxima or minima. The surface is shown in figure 14.7.1.

Figure 14.7.1 A saddle point, neither a maximum nor a minimum. (AP)

EXAMPLE 14.7.4 Find all local maxima and minima for f(x, y) = x⁴ + y⁴. The derivatives:

fx = 4x³    fy = 4y³    fxx = 12x²    fyy = 12y²    fxy = 0.

Again there is a single critical point, at (0, 0), and

D(0, 0) = fxx(0, 0)fyy(0, 0) − fxy(0, 0)² = 0 · 0 − 0 = 0,

so we get no information. However, in this case it is easy to see that there is a minimum at (0, 0), because f(0, 0) = 0 and at all other points f(x, y) > 0.

EXAMPLE 14.7.5 Find all local maxima and minima for f(x, y) = x³ + y³. The derivatives:

fx = 3x²    fy = 3y²    fxx = 6x    fyy = 6y    fxy = 0.

Again there is a single critical point, at (0, 0), and

D(0, 0) = fxx(0, 0)fyy(0, 0) − fxy(0, 0)² = 0 · 0 − 0 = 0,

so we get no information. In this case, a little thought shows there is neither a maximum nor a minimum at (0, 0): when x and y are both positive, f(x, y) > 0, and when x and y are both negative, f(x, y) < 0, and there are points of both kinds arbitrarily close to (0, 0). Alternately, if we look at the cross-section when y = 0, we get f(x, 0) = x³, which does not have either a maximum or minimum at x = 0.
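The case analysis in theorem 14.7.1 is mechanical enough to package as a small helper. This is a sketch of mine, not from the text (the name classify is made up); it reproduces the verdicts of examples 14.7.2 through 14.7.4 from the second partials at the critical point:

```python
def classify(fxx, fyy, fxy):
    # second derivative test of theorem 14.7.1 at a critical point
    D = fxx*fyy - fxy**2
    if D > 0:
        return "minimum" if fxx > 0 else "maximum"
    if D < 0:
        return "saddle"
    return "test fails"

print(classify(2, 2, 0))    # x^2 + y^2 at (0,0): minimum
print(classify(2, -2, 0))   # x^2 - y^2 at (0,0): saddle
print(classify(0, 0, 0))    # x^4 + y^4 at (0,0): no information
```

As the examples show, "test fails" genuinely means no information: x⁴ + y⁴, x³ + y³, and −x⁴ − y⁴ all hand the helper the same triple (0, 0, 0) yet behave differently at the origin.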
EXAMPLE 14.7.6 Suppose a box with no top is to hold a certain volume V. Find the dimensions for the box that result in the minimum surface area. The area of the box is A = 2hw + 2hl + lw, and the volume is V = lwh, so we can write the area as a function of two variables,

A(l, w) = 2V/l + 2V/w + lw.

Then

Al = −2V/l² + w    and    Aw = −2V/w² + l.

If we set these equal to zero and solve, we find w = (2V)^(1/3) and l = (2V)^(1/3), and the corresponding height is h = V/(2V)^(2/3). The second derivatives are

All = 4V/l³    Aww = 4V/w³    Alw = 1,

so at the critical point, where 4V/l³ = 4V/w³ = 2, the discriminant is

D = (4V/l³)(4V/w³) − 1 = 4 − 1 = 3 > 0.

Since All is 2, there is a local minimum at the critical point. Is this a global minimum? It is, but it is difficult to see this analytically; physically and graphically it is clear that there is a minimum, in which case it must be at the single critical point. This applet shows an example of such a graph. Note that we must choose a value for V in order to graph it.

Recall that when we did single variable global maximum and minimum problems, the easiest cases were those for which the variable could be limited to a finite closed interval, for then we simply had to check all critical values and the endpoints. The previous example is difficult because there is no finite boundary to the domain of the problem—both w and l can be in (0, ∞). As in the single variable case, the problem is often simpler when there is a finite boundary.

THEOREM 14.7.7 If f(x, y) is continuous on a closed and bounded subset of R², then it has both a maximum and minimum value.

As in the case of single variable functions, this means that the maximum and minimum values must occur at a critical point or on the boundary; in the two variable case, however, the boundary is a curve, not merely two endpoints.

EXAMPLE 14.7.8 The length of the diagonal of a box is to be 1 meter; find the maximum possible volume.
If the box is placed with one corner at the origin, and sides along the axes, the length of the diagonal is √(x² + y² + z²), and the volume is

V = xyz = xy√(1 − x² − y²).

Clearly, x² + y² ≤ 1, so the domain we are interested in is the quarter of the unit disk in the first quadrant. Computing derivatives:

Vx = (y − 2x²y − y³)/√(1 − x² − y²)
Vy = (x − 2xy² − x³)/√(1 − x² − y²)

If these are both 0, then x = 0 or y = 0, or x = y = 1/√3. The boundary of the domain is composed of three curves: x = 0 for y ∈ [0, 1]; y = 0 for x ∈ [0, 1]; and x² + y² = 1, where x ≥ 0 and y ≥ 0. In all three cases, the volume xy√(1 − x² − y²) is 0, so the maximum occurs at the only critical point (1/√3, 1/√3, 1/√3), giving a volume of 1/(3√3). See figure 14.7.2.

Figure 14.7.2 The volume of a box with fixed length diagonal.

Exercises 14.7.
1. Find all local maximum and minimum points of f = x² + 4y² − 2x + 8y − 1. ⇒
2. Find all local maximum and minimum points of f = x² − y² + 6x − 10y + 2. ⇒
3. Find all local maximum and minimum points of f = xy. ⇒
4. Find all local maximum and minimum points of f = 9 + 4x − y − 2x² − 3y². ⇒
5. Find all local maximum and minimum points of f = x² + 4xy + y² − 6y + 1. ⇒
6. Find all local maximum and minimum points of f = x² − xy + 2y² − 5x + 6y − 9. ⇒
7. Find the absolute maximum and minimum points of f = x² + 3y − 3xy over the region bounded by y = x, y = 0, and x = 2. ⇒
8. A six-sided rectangular box is to hold 1/2 cubic meter; what shape should the box be to minimize surface area? ⇒
9. The post office will accept packages whose combined length and girth is at most 130 inches. (Girth is the maximum distance around the package perpendicular to the length; for a rectangular box, the length is the largest of the three dimensions.) What is the largest volume that can be sent in a rectangular box? ⇒
10.
The bottom of a rectangular box costs twice as much per unit area as the sides and top. Find the shape for a given volume that will minimize cost. ⇒
11. Using the methods of this section, find the shortest distance from the origin to the plane x + y + z = 10. ⇒
12. Using the methods of this section, find the shortest distance from the point (x0, y0, z0) to the plane ax + by + cz = d. You may assume that c ≠ 0; use of Sage or similar software is recommended. ⇒
13. A trough is to be formed by bending up two sides of a long metal rectangle so that the cross-section of the trough is an isosceles trapezoid, as in figure 6.2.6. If the width of the metal sheet is 2 meters, how should it be bent to maximize the volume of the trough? ⇒
14. Given the three points (1, 4), (5, 2), and (3, −2),

(x − 1)² + (y − 4)² + (x − 5)² + (y − 2)² + (x − 3)² + (y + 2)²

is the sum of the squares of the distances from point (x, y) to the three points. Find x and y so that this quantity is minimized. ⇒
15. Suppose that f(x, y) = x² + y² + kxy. Find and classify the critical points, and discuss how they change when k takes on different values.
16. Find the shortest distance from the point (0, b) to the parabola y = x². ⇒
17. Find the shortest distance from the point (0, 0, b) to the paraboloid z = x² + y². ⇒
18. Consider the function f(x, y) = x³ − 3x²y + y³.
a. Show that (0, 0) is the only critical point of f.
b. Show that the discriminant test is inconclusive for f.
c. Determine the cross-sections of f obtained by setting y = kx for various values of k.
d. What kind of critical point is (0, 0)?
19. Find the volume of the largest rectangular box with edges parallel to the axes that can be inscribed in the ellipsoid 2x² + 72y² + 18z² = 288. ⇒

14.8 Lagrange Multipliers

Many applied max/min problems take the form of the last two examples: we want to find an extreme value of a function, like V = xyz, subject to a constraint, like 1 = √(x² + y² + z²).
Often this can be done, as we have, by explicitly combining the equations and then finding critical points. There is another approach that is often convenient, the method of Lagrange multipliers. It is somewhat easier to understand two variable problems, so we begin with one as an example. Suppose the perimeter of a rectangle is to be 100 units. Find the rectangle with largest area. This is a fairly straightforward problem from single variable calculus. We write down the two equations: A = xy, P = 100 = 2x + 2y, solve the second of these for y (or x), substitute into the first, and end up with a one-variable maximization problem. Let's now think of it differently: the equation A = xy defines a surface, and the equation 100 = 2x + 2y defines a curve (a line, in this case) in the x-y plane. If we graph both of these in the three-dimensional coordinate system, we can phrase the problem like this: what is the highest point on the surface above the line? The solution we already understand effectively produces the equation of the cross-section of the surface above the line and then treats it as a single variable problem. Instead, imagine that we draw the level curves (the contour lines) for the surface in the x-y plane, along with the line.

Figure 14.8.1 Constraint line with contour plot of the surface xy.

Imagine that the line represents a hiking trail and the contour lines are, as on a topographic map, the lines of constant altitude. How could you estimate, based on the graph, the high (or low) points on the path? As the path crosses contour lines, you know the path must be increasing or decreasing in elevation. At some point you will see the path just touch a contour line (tangent to it), and then begin to cross contours in the opposite order—that point of tangency must be a maximum or minimum point.
If we can identify all such points, we can then check them to see which gives the maximum and which the minimum value. As usual, we also need to check boundary points; in this problem, we know that x and y are positive, so we are interested in just the portion of the line in the first quadrant, as shown. The endpoints of the path, the two points on the axes, are not points of tangency, but they are the two places that the function xy is a minimum in the first quadrant. How can we actually make use of this? At the points of tangency that we seek, the constraint curve (in this case the line) and the level curve have the same slope—their tangent lines are parallel. This also means that the constraint curve is perpendicular to the gradient vector of the function; going a bit further, if we can express the constraint curve itself as a level curve, then we seek the points at which the two level curves have parallel gradients. The curve 100 = 2x + 2y can be thought of as a level curve of the function 2x + 2y; figure 14.8.2 shows both sets of level curves on a single graph. We are interested in those points where two level curves are tangent—but there are many such points, in fact an infinite number, as we've only shown a few of the level curves. All along the line y = x are points at which two level curves are tangent. While this might seem to be a show-stopper, it is not.

Figure 14.8.2 Contour plots for 2x + 2y and xy.

The gradient of 2x + 2y is ⟨2, 2⟩, and the gradient of xy is ⟨y, x⟩. They are parallel when ⟨2, 2⟩ = λ⟨y, x⟩, that is, when 2 = λy and 2 = λx. We have two equations in three unknowns, which typically results in many solutions (as we expected). A third equation will reduce the number of solutions; the third equation is the original constraint, 100 = 2x + 2y. So we have the following system to solve:

2 = λy
2 = λx
100 = 2x + 2y.
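A system like this can also be handed to a computer algebra system, in the spirit of the "Sage or similar software" suggested in exercise 12. A minimal sketch using Python's sympy (our choice of tool, not the book's):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')

# Gradient conditions 2 = lam*y and 2 = lam*x, plus the constraint 100 = 2x + 2y
solutions = sp.solve(
    [2 - lam * y, 2 - lam * x, 100 - 2 * x - 2 * y],
    [x, y, lam], dict=True)

for s in solutions:
    print(s)   # the only solution: x = y = 25, lam = 2/25
```

The solver confirms the hand computation below: x = y = 25, with multiplier λ = 2/25.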
In the first two equations, λ can't be 0, so we may divide by it to get x = y = 2/λ. Substituting into the third equation we get

2(2/λ) + 2(2/λ) = 100
8/100 = λ,

so x = y = 25. Note that we are not really interested in the value of λ—it is a clever tool, the Lagrange multiplier, introduced to solve the problem. In many cases, as here, it is easier to find λ than to find everything else without using λ.

The same method works for functions of three variables, except of course everything is one dimension higher: the function to be optimized is a function of three variables and the constraint represents a surface—for example, the function may represent temperature, and we may be interested in the maximum temperature on some surface, like a sphere. The points we seek are those at which the constraint surface is tangent to a level surface of the function. Once again, we consider the constraint surface to be a level surface of some function, and we look for points at which the two gradients are parallel, giving us three equations in four unknowns. The constraint provides a fourth equation.

EXAMPLE 14.8.1 Recall example 14.7.8: the diagonal of a box is 1, and we seek to maximize the volume. The constraint is 1 = √(x² + y² + z²), which is the same as 1 = x² + y² + z². The function to maximize is xyz. The two gradient vectors are ⟨2x, 2y, 2z⟩ and ⟨yz, xz, xy⟩, so the equations to be solved are

yz = 2xλ
xz = 2yλ
xy = 2zλ
1 = x² + y² + z²

If λ = 0 then at least two of x, y, z must be 0, giving a volume of 0, which will not be the maximum. If we multiply the first two equations by x and y respectively, we get

xyz = 2x²λ
xyz = 2y²λ

so 2x²λ = 2y²λ or x² = y²; in the same way we can show x² = z². Hence the fourth equation becomes 1 = x² + x² + x² or x = 1/√3, and so x = y = z = 1/√3 gives the maximum volume. This is of course the same answer we obtained previously.
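As a sanity check on example 14.8.1, the claimed point can be substituted back into the four Lagrange equations. A minimal sketch using sympy (the choice of library is an assumption; any CAS works the same way). The value λ = 1/(2√3) used below is forced by the first equation once x = y = z = 1/√3:

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam')

# The four equations from example 14.8.1
eqs = [
    sp.Eq(y * z, 2 * x * lam),
    sp.Eq(x * z, 2 * y * lam),
    sp.Eq(x * y, 2 * z * lam),
    sp.Eq(x**2 + y**2 + z**2, 1),
]

# Claimed maximizer x = y = z = 1/sqrt(3); then lam = yz/(2x) = 1/(2*sqrt(3))
cand = {x: 1/sp.sqrt(3), y: 1/sp.sqrt(3), z: 1/sp.sqrt(3), lam: 1/(2*sp.sqrt(3))}

ok = all(sp.simplify(e.lhs.subs(cand) - e.rhs.subs(cand)) == 0 for e in eqs)
vol = sp.simplify((x * y * z).subs(cand))
print(ok, vol)   # True, and the volume equals 1/(3*sqrt(3))
```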
Another possibility is that we have a function of three variables, and we want to find a maximum or minimum value not on a surface but on a curve; often the curve is the intersection of two surfaces, so that we really have two constraint equations, say g(x, y, z) = c₁ and h(x, y, z) = c₂. It turns out that at points on the intersection of the surfaces where f has a maximum or minimum value, ∇f = λ∇g + µ∇h. As before, this gives us three equations, one for each component of the vectors, but now in five unknowns, x, y, z, λ, and µ. Since there are two constraint functions, we have a total of five equations in five unknowns, and so can usually find the solutions we need.

EXAMPLE 14.8.2 The plane x + y − z = 1 intersects the cylinder x² + y² = 1 in an ellipse. Find the points on the ellipse closest to and farthest from the origin. We want the extreme values of f = √(x² + y² + z²) subject to the constraints g = x² + y² = 1 and h = x + y − z = 1. To simplify the algebra, we may use instead f = x² + y² + z², since this has a maximum or minimum value at exactly the points at which √(x² + y² + z²) does. The gradients are

∇f = ⟨2x, 2y, 2z⟩
∇g = ⟨2x, 2y, 0⟩
∇h = ⟨1, 1, −1⟩,

so the equations we need to solve are

2x = 2λx + µ
2y = 2λy + µ
2z = 0 − µ
1 = x² + y²
1 = x + y − z.

Subtracting the first two we get 2y − 2x = λ(2y − 2x), so either λ = 1 or x = y. If λ = 1 then µ = 0, so z = 0 and the last two equations are 1 = x² + y² and 1 = x + y. Solving these gives x = 1, y = 0, or x = 0, y = 1, so the points of interest are (1, 0, 0) and (0, 1, 0), which are both distance 1 from the origin. If x = y, the fourth equation is 2x² = 1, giving x = y = ±1/√2, and from the fifth equation we get z = −1 ± √2. The distance from the origin to (1/√2, 1/√2, −1 + √2) is √(4 − 2√2) ≈ 1.08 and the distance from the origin to (−1/√2, −1/√2, −1 − √2) is √(4 + 2√2) ≈ 2.6.
Thus, the points (1, 0, 0) and (0, 1, 0) are closest to the origin and (−1/√2, −1/√2, −1 − √2) is farthest from the origin.

Exercises 14.8.
1. A six-sided rectangular box is to hold 1/2 cubic meter; what shape should the box be to minimize surface area? ⇒
2. The post office will accept packages whose combined length and girth are at most 130 inches (girth is the maximum distance around the package perpendicular to the length). What is the largest volume that can be sent in a rectangular box? ⇒
3. The bottom of a rectangular box costs twice as much per unit area as the sides and top. Find the shape for a given volume that will minimize cost. ⇒
4. Using Lagrange multipliers, find the shortest distance from the point (x₀, y₀, z₀) to the plane ax + by + cz = d. ⇒
5. Find all points on the surface xy − z² + 1 = 0 that are closest to the origin. ⇒
6. The material for the bottom of an aquarium costs half as much as the high strength glass for the four sides. Find the shape of the cheapest aquarium that holds a given volume V. ⇒
7. The plane x − y + z = 2 intersects the cylinder x² + y² = 4 in an ellipse. Find the points on the ellipse closest to and farthest from the origin. ⇒
8. Find three positive numbers whose sum is 48 and whose product is as large as possible. ⇒
9. Find all points on the plane x + y + z = 5 in the first octant at which f(x, y, z) = xy²z² has a maximum value. ⇒
10. Find the points on the surface x² − yz = 5 that are closest to the origin. ⇒
11. A manufacturer makes two models of an item, standard and deluxe. It costs $40 to manufacture the standard model and $60 for the deluxe. A market research firm estimates that if the standard model is priced at x dollars and the deluxe at y dollars, then the manufacturer will sell 500(y − x) of the standard items and 45,000 + 500(x − 2y) of the deluxe each year. How should the items be priced to maximize profit? ⇒
12.
A length of sheet metal is to be made into a water trough by bending up two sides as shown in figure 14.8.3. Find x and φ so that the trapezoid-shaped cross section has maximum area, when the width of the metal sheet is 2 meters (that is, 2x + y = 2). ⇒

Figure 14.8.3 Cross-section of a trough.

13. Find the maximum and minimum values of f(x, y, z) = 6x + 3y + 2z subject to the constraint g(x, y, z) = 4x² + 2y² + z² − 70 = 0. ⇒
14. Find the maximum and minimum values of f(x, y) = e^(xy) subject to the constraint g(x, y) = x³ + y³ − 16 = 0. ⇒
15. Find the maximum and minimum values of f(x, y) = xy + √(9 − x² − y²) when x² + y² ≤ 9. ⇒
16. Find three real numbers whose sum is 9 and the sum of whose squares is as small as possible. ⇒
17. Find the dimensions of the closed rectangular box with maximum volume that can be inscribed in the unit sphere. ⇒
18. Find the isosceles triangle with perimeter 12 and maximum area. ⇒
720
https://law.justia.com/codes/oklahoma/title-59/section-59-858-351/
Oklahoma Statutes §59-858-351 (2024) - Definitions. :: 2024 Oklahoma Statutes :: U.S. Codes and Statutes :: U.S. Law :: Justia

2024 Oklahoma Statutes
Title 59. Professions and Occupations
§59-858-351. Definitions.

Universal Citation: 59 OK Stat § 858-351 (2024). (This media-neutral citation is based on the American Association of Law Libraries Universal Citation Guide and is not necessarily the official citation.)

Unless the context clearly indicates otherwise, as used in Sections 858-351 through 858-363 of The Oklahoma Real Estate License Code:

"Broker" means a real estate broker, an associated broker associate, sales associate, or provisional sales associate authorized by a real estate broker to provide brokerage services;
"Brokerage services" means those services provided by a broker to a party in a transaction;
"Party" means a person who is a seller, buyer, landlord, or tenant or a person who is involved in an option or exchange;
"Transaction" means an activity or process to buy, sell, lease, rent, option or exchange real estate.
Such activities or processes may include, without limitation, soliciting, advertising, showing or viewing real property, presenting offers or counteroffers, entering into agreements and closing such agreements; and
"Firm" means a sole proprietor, corporation, association or partnership.

Added by Laws 1999, c. 194, § 1, eff. Nov. 1, 2000. Amended by Laws 2005, c. 423, § 1, emerg. eff. June 6, 2005; Laws 2012, c. 251, § 1, eff. Nov. 1, 2013; Laws 2013, c. 240, § 2, eff. Nov. 1, 2013.

Disclaimer: These codes may not be the most recent version. Oklahoma may have more current or accurate information. We make no warranties or guarantees about the accuracy, completeness, or adequacy of the information contained on this site or the information linked to on the state site. Please check official sources.
721
https://math.stackexchange.com/questions/2246264/calculate-price-of-multiple-items-knowing-its-total-sum
Calculate price of multiple items knowing its total sum

Asked Apr 22, 2017; modified 8 years, 4 months ago; viewed 5k times. Score: 0.

I think I have a simple problem here, but not having done any of this kind of math for a long time has got me in an unpleasant situation. Let's say that we have a grocery store and three customers: customers U, W and Z. All of them bought the same type of products, but in different quantities, and all we know is the total price of those products, but not for customer Z. How do we calculate the total price of customer Z's purchase?

Customer U purchase: 36 (apples) + 18 (oranges) + 27 (pomegranates) = $477
Customer W purchase: 9 (apples) + 27 (oranges) + 18 (pomegranates) = $432
Customer Z purchase: 1 (apples) + 1 (oranges) + 1 (pomegranates) = $X

linear-algebra

edited Apr 22, 2017 at 14:12; asked Apr 22, 2017 at 10:54 by MyCodeCleanCode (CC BY-SA 3.0)

Comments:
Generally speaking, you need three equations to specify three unknowns. Your system does not have a unique solution. – lulu, Apr 22, 2017 at 11:33
Under the current formulation, this problem has multiple solutions. – mlc, Apr 22, 2017 at 11:33
ok, I made it more detailed. @mlc can you share one of the solutions ideas with which you came up? – MyCodeCleanCode, Apr 22, 2017 at 11:58
This is bizarre. You have chosen a third equation whose LHS makes the system linearly dependent. The probability that this happens by chance is (theoretically) negligible. Either you are pulling our leg or you need to clarify the context of your question. (Currently, your system of equations has a solution only if X = 101/5.) – mlc, Apr 22, 2017 at 12:57

1 Answer
Unfortunately, this system does not have a unique solution: it has an infinite number of solutions. This is because we have only two definite equations but three unknowns (only the first two equations can be used, since we know nothing about the total of the third). Let's convert the system into symbols and we'll see why:

36a + 18o + 27p = 477
9a + 27o + 18p = 432.

Let's multiply the 2nd equation by 4 to give us

36a + 18o + 27p = 477
36a + 108o + 72p = 1728.

I will then rearrange the 2nd equation like so:

36a = 1728 − 108o − 72p.

So I know what 36a is, and I replace the 36a in the first equation with it:

(1728 − 108o − 72p) + 18o + 27p = 477,

which, if I combine like terms and divide out any common factors, gives

10o + 5p = 139.

So I have managed to reduce the system to a two-variable equation, but I don't have any other equations to reduce it to just one variable, which we know how to solve for a definite answer using basic algebra. I can't use the third equation because this introduces another variable, X; even then, we'd still end up with this problem of too many variables and not enough information. So the only thing that we can do is pick any number for one of the variables and obtain a solution that way (again, we don't have enough information, so the only thing we can do is just make it up ourselves). So maybe pomegranates are $1.98 each. Then I just plug this into the above equation, 10o + 5(1.98) = 139, and I get that oranges are o = $12.91 (which is pretty expensive for an orange!). Then I use this information in the equation 36a = 1728 − 108o − 72p = 1728 − 108(12.91) − 72(1.98) = 191.16, which, when we divide by 36, gives a = $5.31. So this is a solution. A made-up one, but again, we don't have enough information to really find an answer other than just picking one ourselves. (you can read about this concept here).
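This free-variable recipe can be checked in a few lines of Python, and doing so also confirms the observation from the comment thread: whatever value is picked for the free variable, the sum a + o + p always comes out to 101/5 = $20.20, so customer Z's total is pinned even though the individual prices are not. (The helper function below is our own sketch.)

```python
from fractions import Fraction as F

def one_solution(p):
    """Pick a pomegranate price p freely, then recover o and a from the
    reduced equation 10o + 5p = 139 and from 36a = 1728 - 108o - 72p."""
    o = (F(139) - 5 * p) / 10
    a = (F(1728) - 108 * o - 72 * p) / 36
    return a, o

for p in (F(198, 100), F(1), F(5)):            # any choice of p works
    a, o = one_solution(p)
    assert 36 * a + 18 * o + 27 * p == 477     # customer U's bill
    assert 9 * a + 27 * o + 18 * p == 432      # customer W's bill
    assert a + o + p == F(101, 5)              # customer Z's bill: always $20.20
```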
I understand the concept of "making up an answer" will make less mathematically mature students cringe because it just sounds like something you'd get points taken off for on homework or an exam, but introducing a "free variable" into systems with too many unknowns is pretty common practice in many real-world scenarios (the GPS on your phone or in your car being one of them).

edited Apr 22, 2017 at 12:47; answered Apr 22, 2017 at 12:38 by Decaf-Math (CC BY-SA 3.0)

Thank you for this detailed answer! – MyCodeCleanCode, Apr 22, 2017 at 13:42
722
https://www.youtube.com/watch?v=EzIYY-ML1BI&vl=en
Ellipses Vs. Hyperbolas Similarities and Differences
Mario's Math Tutoring — 458,000 subscribers; 527 likes; 35,262 views; posted 30 May 2016.

Description: Learn the similarities and the differences between hyperbolas and ellipses in this free math video tutorial by Mario's Math Tutoring.

Timestamps:
00:00 Intro
0:38 Standard Form of the Equation of an Ellipse & Hyperbola
0:54 Basic Graph of an ellipse with what the a, b, and c represent
1:40 More specific ellipse equations and their graphs
2:53 How to find the foci of an ellipse
4:05 How to know which direction the Hyperbola opens
4:27 Example Equation and Graph of a Hyperbola opening horizontally
5:03 Finding the Foci of the Hyperbola
6:02 Another Example and Graph of a Hyperbola opening vertically
(Over 390+ videos) 34 comments Transcript: Intro uh ellipses what you'll notice is that you're adding okay with hyperbolas you'll notice that you're subtracting both of the equations they equal one and they both have these denominators of a s and B2 okay so let's look at an ellipse okay and we'll just kind of look at the anatomy of an ellipse okay these ones that I've drawn here are centered about the origin okay so that means the center is right at the origin so if I draw an ellipse like this okay here's the center this distance from the center to the vertices okay I'll just write vertex here that distance is called a now it's Standard Form of the Equation of an Ellipse & Hyperbola possible that the ellipse could be longer in the y direction okay in the vertical Direction and in that case this is going to be a okay and this distance Basic Graph of an ellipse with what the a, b, and c represent to the minor vertices or the co-vertices that distance is called B so are you with me so far I'll give you some examples of the equation so say for example this one could be x^2 + y^2 = 1 and this is going to be maybe let's say 25 and let's just say 16 so what that means is that I'm going five to the right and five to the left so the number underneath X tells you you're going in the X Direction so it's easy to remember right whereas a number underneath the Y that tells you you're going to be going in the y direction plus or minus 4 this one here might look something like this x^2 + y^2 = 1 but the y^2 must let's say maybe is 100 whereas X squ maybe is only nine so this tells you More specific ellipse equations and their graphs you're going up 10 and down 10 in the y direction and you're only going three left or right in the X Direction so in this case the larger number is the A squ and the smaller number is the b^ squ so this is the larger one this is going to be the a squ this is the smaller one this is the b^ squ are you with me so far so that's the idea so the distance 
from the center to the vertex Center to the vertex that distance is a center to the co-vertex or minor vertex that distance is called B okay now the other component are the fosi and the fosi that distance is the distance from the center to the focal points the focal points are going to be on the major axis that's the longer axis that distance is C and for ellipses we use the formula c^2 = a^2 minus b^2 kind of like Pythagorean theorem but you're subtracting so for this one it's going to be c^2 = 25 - 16 which equals 9 and if we take the square root of both sides we get plus or minus 3 so you want to make sure that the fosi F okay are on the longer axis so we're going to be going right three and left How to find the foci of an ellipse. three and that's how we find the fosi for this one since it's orientated vertically because the number underneath the Y squ is larg than the number underneath the x s this is our major axis this is our minor axis our fi are going to Long lay I'm sorry lie along this axis like this and again what we would do is we would do c^2 = a^2 minus b^2 this comes out to 91 and if we take the square root we get plus or minus the square of 91 so that's going to be actually would be a little bit higher like this okay and a little bit lower so those are the fosi okay now we're going to switch gears to talk about hyperbolas now hyperbolas whichever one comes first that's the positive term that tells us which direction the hyperbola is going to open if it's x s that comes first it's going to be opening in the X Direction like the x- axis goes horizontal it's going to open horizontally if the y^ squ is first it's going to be opening in the direction that the y- axis is that's vertically up and down so let's look at an example let's take this first one over here say we wanted to graph a hyperbola that looks like this and like this so what we have is x^2 - y^2 equal 1 okay just like ellipses they equal one but this one we're going to say maybe is 4 and 1 
okay How to know which direction the Hyperbola opens so what the a^ S is for this problem is going to be four so that means that a equals plus or minus 2 so we're going to be going right to left two that distance is our a value that's the distance from the center to the vertices now if we want to this will be the b^2 here and if we want to find the distance to the fosi the fosi are going Example Equation and Graph of a Hyperbola opening horizontally to be along that major axis okay the same axis that the vertices are on we're going to use the formula c^2 = A2 + B2 just like Pagan theorem and that's going to be 4 + 1 c^2 = 5 C = plus orus thek of 5 which is about maybe right about over here and over here okay so just a little bit further out from the center now some students what they'll kind of make these little uh notes to themselves will say hm so with ellipses the fos are closer to the center they're on the inside with hyperbolas the FI are further out from the center they're going to be on the Finding the Foci of the Hyperbola outside okay and the other way that students often times remember these formulas is that in hyperbolas you're subtracting but in your Focus Formula you're adding it's the the opposite with ellipses you're adding here but in your Focus Formula you're subtracting so I'm just trying to give you some hints or ways of remembering these different formulas let's look at one more example with hyperbolas and that's uh let's see if we can fit it in right here if it's opening like this this up and down that means that it's a y^2 variety so I'll write it down here y^2 - x^2 = 1 and here we it doesn't matter which one is larger like ellipses the denominator see whichever one's larger in the ellipses determines which direction it's going to be longer in here it's the positive one the first one that determines whether it's opening up or down like this one the Y squ is positive so opening up and down uh so let's just say that this is maybe four again and 
we'll just say maybe this is, um, let's just say it's one. So we kind of reversed it, so [Another Example and Graph of a Hyperbola opening vertically] this is going to be two and two, so we're going the square root of 4, which is two, in the vertical direction, in the y direction, to get to the vertices.
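The focus computation in the transcript (c² = a² + b², applied to a hyperbola with a² = 4 and b² = 1) can be sketched in a few lines of Python; the function name is my own:

```python
import math

def hyperbola_features(a2, b2):
    """For x²/a2 - y²/b2 = 1 (or the y²-first form), return the distance a
    from the center to the vertices and c from the center to the foci."""
    a = math.sqrt(a2)
    c = math.sqrt(a2 + b2)  # for hyperbolas the focus formula ADDS, unlike ellipses
    return a, c

a, c = hyperbola_features(4, 1)
print(a)            # 2.0 -> vertices two units from the center
print(round(c, 3))  # 2.236 -> foci at plus/minus sqrt(5), just outside the vertices
```

Since c² = a² + b² > a², we always get c > a, which is exactly the mnemonic in the transcript: hyperbola foci lie outside the vertices, the opposite of ellipses.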
723
https://byjus.com/maths/intervals-as-subsets-of-r/
In mathematics, an interval can be defined as a set of real numbers that contains all real numbers lying between any two specific numbers of the set R. We already know about the subsets of the set of real numbers and how to represent them. From the subsets of real numbers, we can write the intervals for any given function to denote its domain and range. Here, we write the intervals as subsets of the set of real numbers, and these intervals can be written as open and closed intervals. Intervals as Subsets of Real Numbers Suppose a and b are two real numbers, i.e. a, b ∈ R, such that a < b; then, using this notation, we can define different types of intervals. Let's understand the meaning of interval notation and the types of intervals along with their representations. What is Interval Notation? Interval notation is a method of representing a subset of real numbers by the numbers that bound it. We can use this notation to describe inequalities. Consider an interval signified as 2 < x < 7, which means that a set of numbers lying between 2 and 7 (excluding 2 and 7) represents the value of x. Types of Interval Notation There are different types of notations of intervals that are classified based on the endpoints of intervals. They are: Open intervals Closed intervals Half-open intervals Degenerate intervals Bounded and Unbounded intervals Open Intervals The set of real numbers {x : a < x < b} is called an open interval and is denoted by (a, b). Open intervals contain all the points between a and b belonging to (a, b), but a, b themselves do not belong to this interval. This can be represented on the real number line as: The hollow circles denote that the points at these circles are not included in the set of numbers of that interval.
Closed Intervals The interval containing the endpoints is also called the closed interval and is denoted by [a, b], and it is written as [a, b] = {x : a ≤ x ≤ b}. The closed interval [a, b] can be described on a real number line as: The solid circles denote that the points at these circles are included in the set of numbers of that interval. Half-open Intervals Half-open intervals mean the intervals that are closed at one end and open at the other. These can be represented as: [a, b) = {x : a ≤ x < b} is a half-open interval from a to b, including a but excluding b. (a, b] = {x : a < x ≤ b} is a half-open interval from a to b, including b but excluding a. These intervals can be represented on the real number line as shown in the below figure: Degenerate Interval A set consisting of a single real number, or an interval of the form a to a, i.e. [a, a], is called a degenerate interval. Bounded and Unbounded Intervals An interval is said to be left-bounded if there is some real number that is smaller than all its elements, and right-bounded if there is some real number that is larger than all its elements. So, an interval is said to be bounded if it is both left- and right-bounded; otherwise, it is called an unbounded interval. Intervals that are bounded at only one end are called half-bounded. Bounded intervals are also known as finite intervals. Thus, the intervals of subsets of real numbers are tabulated below.
| Intervals | Notations |
| --- | --- |
| Empty interval | [b, a] = (b, a) = [b, a) = (b, a] = (a, a) = [a, a) = (a, a] = { } = Φ |
| Degenerate interval | [a, a] = {a} |
| Open interval | (a, b) = {x : a < x < b} |
| Closed interval | [a, b] = {x : a ≤ x ≤ b} |
| Left-closed, right-open interval | [a, b) = {x : a ≤ x < b} |
| Left-open, right-closed interval | (a, b] = {x : a < x ≤ b} |
| Left-bounded, open and right-unbounded | (a, +∞) = {x : x > a} |
| Left-bounded, closed and right-unbounded | [a, +∞) = {x : x ≥ a} |
| Left-unbounded and right-bounded, open | (-∞, b) = {x : x < b} |
| Left-unbounded and right-bounded, closed | (-∞, b] = {x : x ≤ b} |
| Unbounded interval at both ends | (-∞, +∞) = R |

An important note is that the number (b – a), the difference between b and a, is called the length of any of the intervals (a, b), [a, b], [a, b) or (a, b]. Examples of Interval Notation Generally, an interval contains infinitely many points. Also, a given set of numbers can be written in the form of intervals, and vice versa. Let's have a look at the examples given below. The set {x : x ∈ R, –4 < x ≤ 9}, written in set-builder form, can be written in the form of the interval as (–4, 9]. The interval [–2, 3) can be written in set-builder form as {x : x ∈ R, –2 ≤ x < 3}. The interval [–3, 5] can be written as {x : x ∈ R, –3 ≤ x ≤ 5} in set-builder form. The set {x : x ∈ R, –19 < x ≤ 1} can be written using the half-open interval as (–19, 1]. All these types of intervals and representations are used in different areas to solve different mathematical problems.
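Each row of the table corresponds to a simple membership test on real numbers; a minimal Python sketch (the function names are my own) makes the endpoint conventions concrete:

```python
def in_open(x, a, b):         # (a, b): both endpoints excluded
    return a < x < b

def in_closed(x, a, b):       # [a, b]: both endpoints included
    return a <= x <= b

def in_left_closed(x, a, b):  # [a, b): includes a, excludes b
    return a <= x < b

def in_left_open(x, a, b):    # (a, b]: excludes a, includes b
    return a < x <= b

# The interval 2 < x < 7 from the text excludes both endpoints:
print(in_open(2, 2, 7))       # False: 2 is not a member of (2, 7)
print(in_open(5, 2, 7))       # True
print(in_left_open(1, -19, 1))  # True: (-19, 1] includes 1
```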
724
https://math.stackexchange.com/questions/2057995/understanding-a-number-sequence
Understanding a number sequence - Mathematics Stack Exchange
Asked Dec 14, 2016; viewed 78 times.

Firstly I hope this question is formed well enough for you. I am very much new to mathematics and this Stack site, you will have to excuse any incorrect terms. I appreciate your patience. Given a number of nodes on a line, I am calculating the maximum equidistant points it may have across it. As a criterion, the first and last point must be populated. So for example a line containing 7 points can have 4 occupied spaces if they are to be evenly spaced.

```
•••••••
```

I have been studying the sequence which has emerged and it has left me confused, mostly unable to identify a name or any material so I may study further. As far as I can tell you can calculate an odd number's maximum with (n+1)/2. I am however struggling to reliably calculate the same for a given even number. The sequence I have is (the bottom number is my calculated maximum)

4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28
2,3,2,4,2,5,4 ,6 ,2 ,7 ,2 ,8 ,6 ,9 ,2 ,10,2 ,11,8 ,12,2 ,13,6 ,14,10

I am keen to learn more and understand this set of numbers, I hope someone can lend me some wisdom.

sequences-and-series — asked Dec 14, 2016 at 2:41 by Matt

Comments:

coffeemath (Dec 14, 2016 at 2:49): In your example the nodes are not equally spaced. Is the second row (after the final 28 of the first row) meant as a solution for the row above? Are repeated numbers OK in a solution, and what does "equally spaced" mean for a solution?

Matt (Dec 14, 2016 at 2:54): Sorry, that is confusing. So in the number sequence the top row relates to the bottom row in the bullet/star example.
Equally spaced refers to the physical location on the line, so for the 7/4 case it means for 7 points I can place 4 markers on the line with equal distance between them. I'm hoping that makes sense!

coffeemath (Dec 14, 2016 at 2:58): Oh I got that now. So for the odd n in the top row it's (n+1)/2 in the bottom row as the max of equally spaced points, with first and last spots filled and at least one space between any two adjacent filled spots. Is that the meaning?

Matt (Dec 14, 2016 at 2:58): Absolutely right, yes.

2 Answers

Answer by Ross Millikan (answered Dec 14, 2016 at 3:17):

You fill spots 1 and n and are looking for the most spots you can fill in between that are evenly spaced. Note that you have n-1 spaces to divide evenly, so you want to factor n-1. If n is odd, n-1 will have a factor 2, so you can fill every other spot, giving a total of (n+1)/2 spots as you have found. You want the smallest factor of n-1 so you put the spots as close together as possible. In your table, you can see that all the even numbers that are one more than a prime have only two spots, because the prime cannot be divided. If m is the smallest factor of n-1, the number of spots is (n-1)/m + 1, where the +1 comes because you have both ends, like fenceposts.

Matt (Dec 14, 2016 at 3:30): That was very easy to understand and answered my question perfectly. Thank you.
Answer by coffeemath (answered Dec 14, 2016 at 3:15):

For even n you seek the arithmetic progression with the smallest difference d > 1 such that there is k for which 1 + kd = n. Here the last equation is so spot n gets filled, and d > 1 is so there is definitely some space between the filled positions. Then the number is k + 1 for your problem. Example n = 10: then n - 1 = 9 = 3·3, so here the least d > 1 is 3 and you get 3 + 1 = 4 points. Note the 3 used in the formula k + 1 is (n-1)/d in general. The resulting thing for even n is erratic, since the least factor exceeding 1 of n-1 is also erratic. Consider e.g. when n-1 is a prime.

Matt (Dec 14, 2016 at 6:27): Thank you for taking the time to answer, both of these answers were insightful and useful.
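Ross Millikan's formula is easy to check against the questioner's table: take the smallest factor m > 1 of n-1 and count (n-1)/m + 1 fencepost positions. A short Python sketch (the function name is my own):

```python
def max_equidistant(n):
    """Most equally spaced filled spots among n spots, with spots 1 and n
    filled and at least one empty spot between filled ones (n >= 3)."""
    # smallest factor of n-1 exceeding 1 (n-1 itself when n-1 is prime)
    m = next(d for d in range(2, n) if (n - 1) % d == 0)
    return (n - 1) // m + 1

# Reproduces the start of the bottom row in the question:
print([max_equidistant(n) for n in range(4, 11)])  # [2, 3, 2, 4, 2, 5, 4]
```

When n-1 is prime (e.g. n = 6, 8, 12, 14), only the two endpoints can be filled, which is exactly the erratic pattern both answers point out; for odd n the smallest factor is 2 and the result collapses to (n+1)/2.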
725
https://www.manufacturingtomorrow.com/news/2023/01/15/rough-machining-vs-finishing-machining/19944/
Rough Machining VS Finishing Machining

01/16/23, 06:15 AM | Engineering | SANS Machining

Generally, CNC machining fundamentals include standard material reduction manufacturing operations, such as turning, milling, end face machining, drilling, grooving, boring, etc. These processes involve removing excess material layer by layer from the solid workpiece to achieve the required part dimensions and features. However, it is impossible to obtain these features in a single machining operation. Roughing is usually the first process, followed by finishing. After all stages, the required CNC products can be obtained from the blank. In this paper, we will focus on what rough machining and finish machining are, and the difference between them.

What Is Rough Machining? The rough machining process in mechanical processing is mainly used to quickly remove large pieces of material and rough process the workpiece into the desired shape, so as to make the subsequent processing more convenient and efficient.
The purpose of rough machining is to quickly remove the blank allowance. Generally, a large feed and cutting depth are selected to remove as many chips as possible in a short time. Therefore, rough machined products often have low precision and rough surfaces, but high productivity. Rough machining is often the preparation for semi-finishing and finishing; it cannot provide a good surface finish or tight tolerances. Advantages Of Rough Machining -Rough machining can realize rapid feed, and the error can be corrected by subsequent finishing to ensure quality. -Dividing the processing into stages gives full play to the advantages of rough and finish processing equipment: rough machining equipment has high power, high efficiency and strong rigidity, while finish machining equipment has high precision and small error. -Rough machining can reveal various defects of the blank, such as sand holes, air holes, or insufficient machining allowance, which is convenient for timely repair or scrapping, so as to avoid wasting processing time and cost. -After hot working, the residual stress of the workpiece is large, so rough and finish machining can be separated: aging can be arranged in between to eliminate the residual stress, and finishing can then remove the deformation that appears after cooling. What Is Finishing In Machining? Finishing in machining refers to a manufacturing process that involves changing the surface of a manufactured part or component for a specific purpose. This mainly includes eliminating aesthetic defects to improve the appearance of parts, or obtaining mechanical properties that can improve performance. Generally, finishing includes precision machining, grinding, electroplating, sandblasting, polishing, anodizing, powder coating, painting and other processes.
Therefore, the manufacturer shall use a specific finishing process, or a combination of appropriate finishing operations, according to the required part characteristics, to add or improve properties of the manufactured parts such as hardness, adhesion, corrosion resistance, etc. In most CNC manufacturing projects, finish machining is usually the last process, performed after the engineers rough machine the workpiece. The purpose of the finishing process is to remove the remaining excess material and complete the manufactured parts to the final dimensions in terms of flatness, roughness, thickness, tolerance and surface finish. Difference Between Rough Machining And Finish Machining In order to meet the basic requirements of CNC machining, it is necessary to perform many operations in the machining workshop, including turning, milling, end face machining, etc. When a conventional machining process must combine a high cutting amount with good surface quality, it involves two stages or two processes: a rough machining operation is used to produce part geometry close to the finished shape in a short time, and a finish machining operation is carried out after rough machining to obtain the final geometry and other details. What is the specific difference between rough machining and finish machining? 1. Purpose. Rough cuts are made to quickly give the basic shape according to the required features. Here, surface roughness is not an important factor; on the contrary, the ultimate goal is to remove as much unnecessary material as possible. In contrast, finish machining is performed to improve the surface finish, dimensional accuracy, and tolerance of the required features. Provided the finish is acceptable, the cutting rate is not important. 2. Process parameters and MRR: Cutting speed (Vc), feed rate (s or f) and cutting depth (t or a) are the three process parameters of every traditional machining process.
These parameters greatly affect the overall machining action and capability. Higher speeds, feeds, and cutting depths improve the material removal rate (MRR) at the expense of surface finish. MRR is proportional to speed, feed and cutting depth, so it can be expressed mathematically as the product of speed, feed and cutting depth, times a constant for unit conversion. During processing, the speed is usually kept constant, because the speed is selected according to the workpiece and tool materials, machine capacity, vibration level and other important factors. To achieve its basic goal, rough machining uses higher feed and cutting depth, so MRR is increased. On the other hand, the finishing tool path adopts low feed and cutting depth, so MRR is reduced. 3. Surface finish and dimensional accuracy: Because of the feed motion, every traditional machining process leaves scallop marks or feed marks on the finished product surface. These serrated scallop marks cause the primary surface roughness. In addition to the geometry of the cutting tool, the surface roughness also depends directly on the feed rate: higher feed rates result in poorer surface finish. Higher cutting depths also tend to reduce surface finish and machining accuracy. In rough cutting, higher feed and cutting depth are used, so a poorer surface finish is obtained; rough cutting also fails to provide high dimensional accuracy and tight tolerances. On the other hand, due to the very low feed and cutting depth, the finishing tool path improves the finish, accuracy and tolerance. 4. Tools Different levels of roughness have different requirements for the cutting tool and cutting angle. Negative rake inserts are best suited for rough machining because they absorb cutting forces and thus allow higher cutting speeds. For finishing, positive rake inserts are usually selected to obtain a better surface finish.
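The proportionality described above can be illustrated with a toy calculation. The function name and all numeric speeds, feeds, and depths below are invented for illustration only, not machining recommendations:

```python
def mrr(cutting_speed_m_min, feed_mm_rev, depth_mm):
    """Illustrative material removal rate in mm^3/min; the factor 1000
    is the unit-conversion constant mentioned in the text (m/min -> mm/min)."""
    return cutting_speed_m_min * 1000 * feed_mm_rev * depth_mm

rough = mrr(150, 0.40, 4.0)   # roughing: same speed, high feed and depth of cut
finish = mrr(150, 0.08, 0.5)  # finishing: same speed, low feed and depth of cut
print(rough > finish)  # True: roughing removes material far faster,
                       # at the cost of surface finish
```

Note that the cutting speed is held constant in both calls, mirroring the point in the text that speed is fixed by the workpiece/tool materials while feed and depth are the parameters traded between MRR and finish.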
Precautions During Machining Precautions for rough machining Rough machining provides an efficient and fast way for manufacturers to produce the workpiece datum shapes for subsequent processing. However, some considerations come into play when rough machining is performed. Check them below: 1. Processing parameters The CNC roughing tool software includes preselected options for feed rate, cutting speed and depth. However, these default machining parameters cannot anticipate the requirements of each specific roughing operation, and applying them blindly may cause processing errors. Therefore, you must select and optimize all rough machining parameters to suit each workpiece and tool to achieve machining efficiency. 2. Machine tool type and control software The rough machining process requires equipment with high power, high efficiency and rigidity. As a result, manual equipment cannot handle the tool movement required to perform rough machining. Similarly, software programmed for complex 3D milling programs may not be able to maintain constant cutting on workpieces with narrow corners. Therefore, you must select machining tools and software suitable for rough machining operations. 3. Heat and cutting fluid Roughing uses a higher feed rate and removes more material, which in turn creates greater cutting resistance and generates a lot of heat. This heat is transferred to the cutting tool and workpiece, aggravating tool wear and the thermal deformation of the workpiece. Therefore, heat management measures should be planned into the rough machining process to avoid processing complications. Machinists often use water-based cutting fluid in rough machining, which has considerable lubricating and cooling effects. If necessary, you can use an oil bath or air cooling to reduce the impact of the heat generated.
Precautions for finishing in machining Finishing in machining is as important as any other operation in the manufacturing cycle; a poorly chosen finish can make your entire manufacturing effort go to waste. The following are some important factors to consider before implementing the finishing process: 1. Dimensional accuracy It is important to note that applying a surface treatment to manufactured components may change their GD&T and other dimensional features. For example, coating metal parts with powder paint may increase their surface thickness. Therefore, it helps to always check these factors to ensure machining accuracy and precision before applying a surface treatment. 2. Part application When selecting finishing operations, careful consideration of the application of the parts and the conditions such components will be subjected to will help you make the right choice. For example, the finishing process for hidden automobile parts will pay less attention to appearance and more attention to improving the durability of the parts. 3. Cost After considering the above factors, you must also consider the overall cost of completing the project. The best finishes usually require high-quality materials, tools and complex processes.
726
https://www.youtube.com/watch?v=VygN0FxfzMg
Euler's Formula for Polyhedra | Math with Mr. J Math with Mr. J 1690000 subscribers 194 likes 20903 views Posted: 19 Apr 2023 Description Welcome to Euler's Formula for Polyhedra with Mr. J! Need help with Euler's formula? You're in the right place! Whether you're just starting out, or need a quick refresher, this is the video for you if you're looking for help with faces, edges, vertices, and Euler's Formula. Mr. J will go through examples of polyhedra and explain how Euler's Formula applies.
Transcript: Welcome to Math with Mr. J. In this video, I'm going to cover the basics of Euler's Formula for polyhedra and what it is. Now basically, a mathematician named Leonhard Euler figured out that there is a relationship between the number of faces, vertices, and edges of a polyhedron. Simply put, the sum of the number of faces and the number of vertices, so the number of faces plus the number of vertices, is going to be two more than the number of edges. We can write this out as the number of faces plus the number of vertices minus the number of edges equals two. Let's go through a couple of examples and plug in the number of faces, vertices, and edges to show that this equation, this formula, and this relationship is true. Starting with number one, where we have a rectangular prism. We have six faces, twelve edges, and eight vertices. Let's plug those into the formula to verify that they are correct. So the number of faces plus the number of vertices minus the number of edges equals two. So six faces plus eight vertices minus twelve edges equals two. Six plus eight equals fourteen. And then fourteen minus twelve gives us two. So if we take a look at the number of faces and the number of vertices, six plus eight, that gives us fourteen. That's two more than the number of edges. Let's move on to number two and try another example. We have a square pyramid for number two.
So five faces, eight edges, and five vertices. So the number of faces plus the number of vertices minus the number of edges equals two. Five faces plus five vertices minus eight edges equals two. Five plus five is ten, minus eight is two. There are five faces and five vertices. So if we combine those, we get ten, which is two more than eight edges. So there you have it. There is a basic explanation of Euler's Formula for polyhedra. I hope that helped. Thanks so much for watching. Until next time. Peace.
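The two checks Mr. J performs by hand can be written as a one-line test of F + V - E = 2 (the function name is my own):

```python
def euler_check(faces, vertices, edges):
    """True when the counts satisfy Euler's formula F + V - E = 2."""
    return faces + vertices - edges == 2

print(euler_check(6, 8, 12))  # rectangular prism from the video -> True
print(euler_check(5, 5, 8))   # square pyramid from the video -> True
```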
727
https://www.rcet.org.in/uploads/academics/rohini_89543060100.pdf
1.5 TURBINES A steam turbine is a heat engine which uses the heat energy stored in steam to perform work. The main parts of a steam turbine are as follows: A rotor, on the circumference of which a series of blades or buckets is attached. To a great extent, the performance of the turbine depends upon the design and construction of the blades. The blades should be designed so that they are able to withstand the action of steam and the centrifugal force caused by high speed. As the steam pressure drops, the length and size of the blades should be increased in order to accommodate the increase in volume. The materials used for the construction of blades depend upon the conditions under which they operate; steel or alloys are the materials generally used. (i) Bearings to support the shaft. (ii) Metallic casing which surrounds blades, nozzles, rotor, etc. (iii) Governor to control the speed. (iv) Lubricating oil system. Steam from the nozzles is directed against the blades, thus causing rotation. The steam attains high velocity during its expansion in the nozzles, and this velocity energy is converted into mechanical energy by the turbine. As a thermal prime mover, the thermal efficiency of a turbine is the useful work energy appearing as shaft power, expressed as a percentage of the heat energy available. High pressure steam is sent in through the throttle valve of the turbine; from it come torque energy at the shaft, exhaust steam, extracted steam, mechanical friction, and radiation. Depending upon the method of using steam and the arrangement and construction of blades, nozzles, and steam passages, steam turbines can be classified as follows: 1. According to the action of steam: (i) Impulse turbine (ii) Reaction turbine (iii) Impulse and reaction turbine. In an impulse turbine the steam expands in stationary nozzles and attains high velocity. 
The resulting high velocity steam impinges against the blades, which alter the direction of the steam jet, thus changing the momentum of the jet and causing an impulsive force on the blades. Figure 1.5.1 Impulse Turbine [Source: "Power Plant Engineering" by Anup Goel, Laxmikant D. Jathar, Siddu, page 27] In a reaction turbine, steam enters the fast moving blades on the rotor from stationary nozzles. Further expansion of the steam through nozzle-shaped blades changes the momentum of the steam and causes a reaction force on the blades. Commercial turbines make use of a combination of impulse and reaction forces, because steam can be used efficiently by employing impulse and reaction blading on the same shaft. The figure shows an impulse-reaction turbine. Figure 1.5.2 Reaction Turbine [Source: "Power Plant Engineering" by P.K. Nag, page 28] 2. According to the direction of steam flow: (i) Axial (ii) Radial (iii) Mixed 3. According to pressure of exhaust: (i) Condensing (ii) Non-condensing (iii) Bleeder 4. According to pressure of entering steam: (i) Low pressure (ii) High pressure (iii) Mixed pressure 5. According to step reduction: (i) Single stage (ii) Multi-stage 6. According to method of drive: (i) Direct connected (ii) Geared CONDENSERS The thermal efficiency of a closed cycle power developing system using steam as the working fluid and working on the Carnot cycle is given by the expression (T1 - T2)/T1. This expression shows that the efficiency increases with an increase in temperature T1 and a decrease in temperature T2. The temperature T2 (the temperature at which heat is rejected) can be reduced to atmospheric temperature if the exhaust of the steam takes place below atmospheric pressure. If the exhaust is at atmospheric pressure, the heat rejection is at 100˚C. A low exhaust pressure is necessary to obtain a low exhaust temperature. But the steam cannot be exhausted to the atmosphere if it is expanded in the engine or turbine to a pressure lower than atmospheric pressure. 
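The Carnot expression above, (T1 - T2)/T1 with both temperatures in kelvin, can be illustrated numerically. The temperatures in this sketch are arbitrary example values, not figures from the notes; they simply show why lowering the heat-rejection temperature T2 (the job of the condenser) raises the ideal efficiency:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Ideal (Carnot) efficiency for heat supplied at t_hot_k and
    rejected at t_cold_k, both expressed in kelvin."""
    return (t_hot_k - t_cold_k) / t_hot_k

# Example: heat supplied at 600 K, rejected at 373 K (100 degrees C,
# i.e. exhaust at atmospheric pressure):
print(carnot_efficiency(600, 373))  # approx. 0.378

# Lowering the rejection temperature to 300 K raises the ideal efficiency:
print(carnot_efficiency(600, 300))  # 0.5
```

This is the quantitative motivation for the condenser discussion that follows: reducing T2 by condensing the exhaust below atmospheric pressure directly increases the attainable efficiency.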
Under this condition, the steam is exhausted into a vessel known as a condenser, where the pressure is maintained below atmospheric by continuously condensing the steam by means of circulating cold water at atmospheric temperature. A closed vessel in which steam is condensed by abstracting the heat, and in which the pressure is maintained below atmospheric pressure, is known as a condenser. The efficiency of the steam plant is considerably increased by the use of a condenser. In large turbine plants, condensate recovery becomes very important, and this too is made possible by the use of a condenser. The steam condenser is one of the essential components of all modern steam power plants. Steam condensers are of two types: 1. Surface condenser: (a) Down flow type (b) Central flow condenser (c) Evaporation condenser 2. Jet condenser: (a) Low level jet condenser (parallel flow type) (b) High level or barometric condenser (c) Ejector condenser Surface condensers In surface condensers there is no direct contact between the steam and the cooling water, and the condensate can be re-used in the boiler. In such a condenser even impure water can be used for cooling, whereas the cooling water must be pure in jet condensers. Although the capital cost and the space needed are greater for surface condensers, this is justified by the saving in running cost and the increase in plant efficiency achieved by using this condenser. Depending upon the position of the condensate extraction pump, the flow of condensate, and the arrangement of tubes, surface condensers may be classified as follows: (a) Down flow type Figure 1.5.3 shows a sectional view of a down flow condenser. Steam enters at the top and flows downward. The water flowing through the tubes in one direction in the lower half comes out in the opposite direction in the upper half. The figure shows a longitudinal section of a two-pass down-flow condenser. 
Figure 1.5.3 Surface condenser [Source: "Power Plant Engineering" by Anup Goel, Laxmikant D. Jathar, Siddu, page 35] (b) Central flow condenser Figure 1.5.4 shows a central flow condenser. In this condenser the steam passages are all around the periphery of the shell. Air is pumped away from the center of the condenser. The condensate moves radially towards the center of the tube nest. Some of the exhaust steam moving towards the center meets the undercooled condensate and pre-heats it, thus reducing undercooling. Figure 1.5.4 Down flow condenser [Source: "Power Plant Engineering" by Anup Goel, Laxmikant D. Jathar, Siddu, page 35] (c) Evaporation condenser In this condenser (Figure 1.5.5) the steam to be condensed is passed through a series of tubes, and the cooling water falls over these tubes in the form of a spray. A stream of air flows over the tubes to increase the evaporation of the cooling water, which further increases the condensation of the steam. Figure 1.5.5 Evaporation condenser [Source: "Power Plant Engineering" by Anup Goel, Laxmikant D. Jathar, Siddu, page 35] Advantages: (i) The condensate can be used as boiler feed water. (ii) Cooling water of even poor quality can be used, because the cooling water does not come into direct contact with the steam. (iii) A high vacuum (about 73.5 cm of Hg) can be obtained in the surface condenser. This increases the thermal efficiency of the plant. Disadvantages: (i) The capital cost is more. (ii) The maintenance cost and running cost of this condenser are high. (iii) It is bulky and requires more space. Jet condensers In jet condensers the exhaust steam and the cooling water come into direct contact with each other. The temperature of the cooling water and of the condensate is the same when leaving the condenser. The elements of the jet condenser are as follows: (i) Nozzles or distributors for the condensing water. 
(ii) Steam inlet (iii) Mixing chambers: they may be (a) parallel flow type or (b) counter flow type, depending on whether the steam and water move in the same direction before condensation or in opposite directions. (iv) Hot well. In jet condensers the condensing water is called injection water. (a) Low level jet condenser (parallel flow type) In this condenser (Figure 1.5.6) water is sprayed through jets and mixes with the steam. The air is removed at the top by an air pump. Figure 1.5.6 Low level and High level Jet condenser [Source: "Power Plant Engineering" by Anup Goel, Laxmikant D. Jathar, Siddu, page 36] ROHINI COLLEGE OF ENGINEERING & TECHNOLOGY ME8792 POWER PLANT ENGINEERING (b) High level or barometric condenser Figure 1.5.6 shows a high level jet condenser. The condenser shell is placed at a height of 10.33 m (barometric height) above the hot well. As compared to the low level jet condenser, this condenser does not flood the engine if the water extraction pump fails. A separate air pump is used to remove the air. (c) Ejector condenser Figure 1.5.7 shows an ejector condenser. In this condenser cold water is discharged under a head of about 5 to 6 m through a series of convergent nozzles. The steam and air enter the condenser through a non-return valve. The steam is condensed by mixing with the water. Pressure energy is partly converted into kinetic energy at the converging cones. In the diverging cone the kinetic energy is partly converted back into pressure energy, and a pressure higher than atmospheric is achieved so as to discharge the condensate to the hot well. Figure 1.5.7 Ejector Condenser [Source: "Power Plant Engineering" by Anup Goel, Laxmikant D. Jathar, Siddu, page 36]
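The 10.33 m "barometric height" quoted for the high-level jet condenser is simply the height of a water column that balances atmospheric pressure, h = P_atm/(rho g). A quick check, assuming standard values for atmospheric pressure, water density, and g (these constants are standard data, not taken from the notes):

```python
# Height of a water column balancing atmospheric pressure: h = P / (rho * g)
P_ATM = 101325.0    # Pa, standard atmospheric pressure
RHO_WATER = 1000.0  # kg/m^3, density of water
G = 9.81            # m/s^2, acceleration due to gravity

h = P_ATM / (RHO_WATER * G)
print(f"{h:.2f} m")  # 10.33 m
```

This is why the shell must sit roughly 10.33 m above the hot well: the water leg then drains by gravity against atmospheric pressure even though the condenser is under vacuum.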
728
https://www.arthritis.org/diseases/gout
Gout Gout is an inflammatory type of arthritis that can come and go. Gout is the most common type of inflammatory arthritis. It causes sudden and intense attacks of joint pain, often in the big toe and at night. It can also strike joints in other toes or the ankle or knee. People with osteoarthritis in their fingers may experience their first gout attack in their finger joints. Men are three times more likely than women to develop gout. It tends to affect men after age 40 and women after menopause, when they lose the protective effects of estrogen. Gout symptoms can be confused with another type of arthritis called calcium pyrophosphate deposition (CPPD), formerly called pseudogout. However, the crystals that irritate the joint in CPPD are calcium phosphate crystals, not the uric acid crystals that cause gout. What Causes Gout? Gout develops in some people who have high levels of uric acid from the breakdown of purines — natural chemicals found in every cell of your body and in many foods, especially red meat, organ meats, certain seafoods, sugary sodas and beer. When uric acid builds up, either because the kidneys don't excrete it the way they should or from consuming too many purines from a high-purine diet, it can form needle-like crystals that lodge in joints, causing sudden, severe pain and swelling. Gout attacks usually peak after 12 to 24 hours, then slowly go away on their own, whether they're treated or not. You may have only one gout attack in your lifetime or one every few years. Recurrent gout attacks that aren't treated may involve more joints, last longer, and become increasingly severe over time. Some people eventually develop tophi, large masses of uric acid crystals that form in soft tissues or bones around joints and may appear as hard lumps. 
Risk Factors You're more likely to develop gout if you: Eat lots of purine-rich foods, including red meat and some kinds of fish, especially scallops, sardines and tuna, though the health benefits of eating fish likely outweigh any gout risk. Consume food and drinks sweetened with high-fructose corn syrup or drink excessive amounts of alcohol, especially beer. Are overweight, leading your body to produce more uric acid and to have a harder time eliminating it. Have a family history of gout. Have certain chronic conditions, including diabetes, obesity and heart or kidney disease. Take high blood pressure drugs, such as diuretics and beta blockers. Have an imbalance in your microbiome, the trillions of bacteria, viruses and fungi that live in your gut and regulate the immune system. The microbiome is implicated in most inflammatory diseases, including arthritis. Diagnosing Gout Your medical history, a physical exam and tests can help diagnose gout. Your doctor will also want to rule out other reasons for your joint pain and inflammation such as an infection, injury or other type of arthritis. Tests you might have include: Joint fluid analysis. This is the best way to diagnose gout. Your doctor withdraws fluid from the painful joint(s) and examines it under a microscope for uric acid crystals. Blood test to check uric acid levels. However, many people who have high blood uric acid never develop gout, and some people with gout have normal uric acid levels. Imaging tests, such as X-rays, ultrasound, magnetic resonance imaging and dual-energy computerized tomography, which helps visualize uric acid crystals in joints. Treatments The treatment plan you and your doctor choose for your gout depends on the frequency and severity of your symptoms and your personal preference. Lifestyle changes. For some people, weight loss, if needed, and a Mediterranean diet or DASH diet may help prevent gout attacks. 
For decades, doctors told gout patients to limit red meat (beef, pork, lamb and organ meats) and alcohol, but it's now known that an overall healthy eating plan is far more effective and has added benefits for the heart — a common concern in people with gout. One study of nearly 45,000 men found that those who ate a typical American diet — red meat, French fries, sweets and alcohol — had a 42% greater chance of developing gout than those eating a DASH diet. Eating the low-sodium DASH diet, with an emphasis on fruits, veggies, nuts, whole grains and other whole, unprocessed foods, reduced uric acid levels and gout risk significantly. Anti-inflammatories. When you're in the midst of an attack, you want to stop it as fast as possible. Doctors are likely to recommend a brief course of: Nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen (Motrin, Advil) and naproxen (Aleve), which are available over the counter or in stronger prescription versions. NSAIDs are generally prescribed for people under 65 who don't take blood thinners or have a history of bleeding, because NSAIDs can cause ulcers and intestinal bleeding. Colchicine, a prescription anti-inflammatory, relieves gout pain but may have unpleasant side effects like nausea, diarrhea or vomiting. Lower doses are as effective as higher doses and produce fewer side effects. Corticosteroids — also effective at bringing down inflammation quickly but with potentially serious side effects. Uric acid-lowering drugs. If you have several gout attacks a year, tophi or signs of joint damage on X-rays, your doctor may suggest taking drugs to lower uric acid and prevent further complications. According to the American College of Rheumatology's (ACR) 2020 gout guidelines, allopurinol is the first choice for all patients. Febuxostat (Uloric) may be considered for some patients who cannot take allopurinol, but it carries a higher risk of heart-related death. 
The ACR also recommends trying a treat-to-target approach for gout, in which you and your doctor decide on a goal — usually less than 6 mg/dL blood level of uric acid — and adjust your medication and other treatments until you reach it. Stigma and Mental Health Gout has for centuries been associated with excess and is the butt of innumerable jokes. That stigma, along with fear of another painful flare, can increase stress and contribute to more inflammation in your body. Like other forms of arthritis, inflammation in gout is associated with a slightly increased chance of depression, especially in people who have frequent flares. If you feel down or discouraged, don't be embarrassed to talk about gout to your friends and family. And keep in mind that regular exercise, restorative sleep and healthy food can go a long way toward improving your mood. The better your mood and outlook, the more able you'll be to manage gout.
729
https://www.youtube.com/watch?v=H19LJU3NvM0
Arithmetic Series: Deriving the Sum Formula MATHguide 11700 subscribers 59 likes Description 10869 views Posted: 8 Aug 2016 This MATHguide math education video derives the arithmetic series sum formula. See our text lesson on arithmetic series at Transcript: This is mathguide.com, and my name is Mark Kedos. In this video we're going to look at the arithmetic series formula: first an intuitive look at the formula, and then a formal proof of it.

Let's start with a series: 4 + 9 + 14 + ... + 74 + 79 + 84. What's the common difference, just to make sure it's arithmetic? We're adding 5 each time, so the common difference is 5. Now a little trick: write the same series in reverse, 84 + 79 + 74 + ... + 14 + 9 + 4, and add the two series together term by term. Each pair adds to 88: 4 + 84 = 88, 9 + 79 = 88, 14 + 74 = 88, and so on. It's a lot easier to add things when they're all the same than when they're changing. Remember what I did here: I took one series and added it to the same series, so I really have two series now.

How many 88s are there? I did a little calculation earlier, and there are 17 of them. So 17 times 88 is equal to two sums, but I only want one sum, so I divide both sides by 2. What's even more interesting is what the 88 is: if I rewrite the sum a little differently, the 88 is really nothing more than the first term and the last term combined, 4 + 84. So intuitively (this is not a proof yet) it looks like you take the number of terms, multiply by the first term plus the last term, and divide by 2.

Now let's prove it. A series is a bunch of terms added together, and in an arithmetic series there's a common difference d that you add to go from one term to the next. So we can write every term using the first term a1: a1, a1 + d, a1 + 2d, ..., a1 + (n - 2)d, a1 + (n - 1)d. Notice that the number of d's you add is always one less than the term number: the third term has 2 d's, the (n - 1)th term has (n - 2) d's, and the nth term has (n - 1) d's.

Now, just like before, add the sum to itself, with the second copy written in reverse order: a1 + (n - 1)d, a1 + (n - 2)d, a1 + (n - 3)d, ..., a1 + d, a1. Line the terms up carefully so you can see which ones combine, and add them in pairs: the first term with the last, the second with the second-to-last, and so on. Every pair gives the same thing, 2a1 + (n - 1)d. For example, the second pair is (a1 + d) + (a1 + (n - 2)d) = 2a1 + (n - 1)d, and the third pair is (a1 + 2d) + (a1 + (n - 3)d) = 2a1 + (n - 1)d.

There are n terms in the series, so there are n of these identical pairs, and twice the sum equals n[2a1 + (n - 1)d]. I don't want two sums, only one, so divide both sides by 2: the value of one sum is n[2a1 + (n - 1)d]/2.

This formula looks complicated, so let's unpack it. 2a1 + (n - 1)d is really a1 + [a1 + (n - 1)d], and if you go back to the series, a1 + (n - 1)d is exactly the last term, an. So everything in the brackets is just the first term plus the last term, and the formula can be written more simply as n(a1 + an)/2. That's what we saw intuitively: take the first term, take the last term, add them, multiply by the number of terms, and divide the whole thing by 2. There you go: it's been proven, it's been derived. Make sure you go back to mathguide.com.
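The derived formula, S = n(a1 + an)/2, can be checked against the series used at the start of the video (4 + 9 + ... + 84, with common difference 5 and n = 17). A short sketch:

```python
def arithmetic_series_sum(a1: int, an: int, n: int) -> int:
    """Sum of an arithmetic series from its first term, last term, and length:
    S = n * (a1 + an) / 2."""
    return n * (a1 + an) // 2

# The series from the video: 4, 9, 14, ..., 84 (common difference 5).
terms = list(range(4, 85, 5))
assert len(terms) == 17           # n = 17, as stated in the video

direct = sum(terms)               # brute-force addition
by_formula = arithmetic_series_sum(4, 84, 17)
print(direct, by_formula)         # 748 748
assert direct == by_formula
```

Both the brute-force sum and the formula give 748, and the `17 * 88 / 2` structure of the formula is exactly the pairing argument from the derivation.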
730
https://www.tandfonline.com/doi/full/10.1080/20002297.2017.1317579
Journal of Oral Microbiology, Volume 9, 2017 - Issue 1 (Open access). Review Article: Oral candidiasis among African human immunodeficiency virus-infected individuals: 10 years of systematic review and meta-analysis from sub-Saharan Africa. Martha F. 
Mushi (Department of Microbiology and Immunology, Weill Bugando School of Medicine, Catholic University of Health and Allied Sciences, Mwanza, Tanzania; correspondence: mushimartha@gmail.com), Oliver Bader (Institute of Medical Microbiology, University Medical Center, Göttingen, Germany), Liliane Taverne-Ghadwal (Institute of Medical Microbiology, University Medical Center, Göttingen, Germany), Christine Bii (Kenya Medical Research Institute, Center for Microbiology Research, Nairobi, Kenya), Uwe Groß (Institute of Medical Microbiology, University Medical Center, Göttingen, Germany) & Stephen E. Mshana (Department of Microbiology and Immunology, Weill Bugando School of Medicine, Catholic University of Health and Allied Sciences, Mwanza, Tanzania). Article: 1317579 | Received 26 Jan 2017, Accepted 04 Apr 2017, Published online: 21 Jun 2017. ABSTRACT Oral candidiasis (OC) is the most common opportunistic fungal infection among immunocompromised individuals. This systematic review and meta-analysis reports on the contribution of non-albicans Candida species in causing OC among human immunodeficiency virus (HIV)-infected individuals in sub-Saharan Africa between 2005 and 2015. Thirteen original research articles on oral Candida infection/colonization among HIV-infected African populations were reviewed. The prevalence of OC ranged from 7.6% to 75.3%. Pseudomembranous candidiasis was found to range from 12.1% to 66.7%. 
The prevalence of non-albicans Candida species causing OC was 33.5% [95% confidence interval (CI) 30.9–36.39%]. Of 458 non-albicans Candida species detected, C. glabrata (23.8%; 109/458) was the most common, followed by C. tropicalis (22%; 101/458) and C. krusei (10.7%; 49/458). The overall fluconazole resistance was 39.3% (95% CI 34.4–44.1%). Candida albicans was significantly more resistant than non-albicans Candida species to fluconazole (44.7% vs 21.9%; p<0.001). One-quarter of the cases of OC among HIV-infected individuals in sub-Saharan Africa were due to non-albicans Candida species. Candida albicans isolates were more resistant than the non-albicans Candida species to fluconazole and voriconazole. Strengthening the capacity for fungal diagnosis and antifungal susceptibility testing in sub-Saharan Africa is mandatory in order to track the azole resistance trend. KEYWORDS: Oral candidiasis Candida colonization HIV infection non-albicans Candida species fluconazole resistance sub-Saharan Africa Introduction Oral candidiasis (OC) is one of the most common fungal opportunistic infections in immunocompromised individuals [Citation 1]. OC occurs in up to 95% of human immunodeficiency virus (HIV)-infected individuals during the course of their illness [Citation 2,Citation 3], and is a prognostic indicator for acquired immune deficiency syndrome (AIDS) [Citation 4,Citation 5]. In sub-Saharan Africa, there is an increased prevalence of severe immunocompromised conditions, which is associated with a higher incidence of opportunistic infections [Citation 6]. Worldwide, it is estimated that 70% of the HIV-infected individuals living in sub-Saharan Africa [Citation 6] are at risk of infection with OC. OC is mainly caused by Candida albicans [Citation 7], which accounts for up to 81% of cases among HIV-infected individuals [Citation 8]. It is documented that between 17% and 75% of healthy individuals can be colonized by Candida species [Citation 9,Citation 10]. 
However, non-albicans Candida species have been implicated in colonization of the oral cavity, eventually causing infection in 20–40% of immunocompromised individuals [Citation 10–Citation 12]. The prevalence of OC among African HIV-infected individuals ranges from 18% [Citation 13,Citation 14] to >60% [Citation 15–Citation 17], and this has resulted in increased use of antifungal agents for both prophylactic and treatment purposes [Citation 18]. Furthermore, there is an increasing number of reports of Candida species that are resistant to azole antifungal agents [Citation 19,Citation 20]. This list of resistant species includes C. krusei, C. inconspicua, and C. norvegensis, which are all intrinsically resistant to fluconazole and have been isolated from patients with systemic candidiasis [Citation 20,Citation 21]. There have also been increased reports of fluconazole resistance in C. glabrata isolates, which manifests following the use of azole antifungal agents [Citation 19,Citation 21]. However, data on the spectrum of Candida species and the respective antifungal susceptibility profiles among HIV-infected individuals from sub-Saharan Africa are still limited. This systematic review and meta-analysis aimed to report the prevalence of non-albicans species in OC among the HIV-infected population of sub-Saharan Africa between 2005 and 2015. Material and methods A literature search of English-language articles on oral Candida colonization and/or infection was performed using PubMed/MEDLINE, Google Scholar, Web of Knowledge, Google Health, Embase, and POPLINE. The search terms included ‘oral thrush’, ‘oral candidiasis’, ‘oral Candida’, ‘oral Candida colonization’, and ‘candidiasis of buccal cavity’, plus African country names in different combinations. Links shown in the abstracts were followed to retrieve further abstracts. In total, 61 abstracts were obtained.
All abstracts were reviewed independently by two authors. Sixteen abstracts were excluded: nine were general reports on HIV/AIDS oral manifestations; three were restricted to pediatric populations; and four only described general opportunistic infections, Candida infections, or genetic variations of innate immunity and OC. None of the excluded abstracts contained details of oral Candida species, pattern of clinical presentation, or antifungal susceptibility. Further analysis excluded one case report and six review articles, leaving 38 articles on OC studies conducted in Africa. All 38 articles were carefully reviewed, and a further 25 were excluded because they assessed OC among HIV-infected African children or neonates (n=12), were clinical trials (n=2), involved immunocompetent individuals (n=1), comprised a retrospective cohort study (n=1), or had been conducted before 2005 (n=9) (Figure 1). The remaining 13 relevant articles were reviewed independently by two authors. A wide selection of data was extracted from each article and transferred onto a spreadsheet. The data extracted included year of publication, region (country), study population, sampling technique, patient gender, method for Candida species identification, use of highly active antiretroviral therapy (HAART), CD4 cell count, prevalence of oral fungal colonization and infection, and the antifungal susceptibility testing scheme. Data were examined manually and analyzed to obtain the proportion of oral Candida colonization and infection. A meta-analysis model was used to calculate the pooled (weighted) proportion of OC, of non-albicans Candida species, and of fluconazole resistance among C. albicans and non-albicans Candida species. A proportion test was conducted using STATA v.11 to establish statistical differences between the prevalences of oral Candida infection among the HIV-infected African population.
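The proportion tests described above were run in STATA v.11. As an illustrative sketch only (not the authors' code), the same kind of comparison can be reproduced with standard formulas; the input counts below (104/252 voriconazole-resistant C. albicans vs 4/115 resistant non-albicans isolates) are taken from the Results section of this review.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a single proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def two_prop_ztest(k1, n1, k2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Voriconazole resistance: 104/252 C. albicans vs 4/115 non-albicans (Results)
z, p = two_prop_ztest(104, 252, 4, 115)
print(f"z = {z:.2f}, p = {p:.2e}")  # p < 0.001, as reported

# CI for the C. albicans resistance proportion in the same comparison
lo, hi = wilson_ci(104, 252)
print(f"95% CI: {lo:.3f}-{hi:.3f}")
```

The Wilson interval shown is one common choice for a single proportion's CI; the review's pooled estimates additionally weight each study's proportion in a meta-analysis model, which is not reproduced here.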
A p value of <0.05 at a 95% confidence interval (CI) was used to define statistical significance. Ethical approval Ethical clearance for conducting this study was granted by the joint CUHAS/BMC research ethics and review committee, with certificate number CREC/048/2014. Results In total, 13 articles from Nigeria, South Africa, Ethiopia, Uganda, Cameroon, Tanzania, and Ghana were included in this review. The majority of the articles (n=12; 85.7%) reported on OC, four (28.6%) on both OC and Candida colonization, and two (14.3%) only on oral Candida species colonization (Table 1). Table 1. Summary of the published articles on oral candidiasis among human immunodeficiency virus (HIV)-infected African populations. In six articles that reported on oral Candida species colonization among HIV-infected individuals, the prevalence ranged from 0.25% in Nigeria [Citation 26] to 82.3% in Ethiopia [Citation 22]. With the exception of one article from Tanzania that did not report on OC prevalence [Citation 25], the prevalence of oral Candida infection was reported to range from 7.6% in South Africa to 75.3% in Ghana among HIV-infected individuals. The pooled prevalence of OC among HIV-infected Africans was 50.6% (95% CI 48.3–52.8%) (Figure 2). The lowest OC prevalence was detected in South Africa (7.6%, 95% CI 3.9–11.3%) and the highest in Ghana (75.3%, 95% CI 70–80%) (Figure 2). Figure 1. Flowchart showing the literature search and selection criteria. Figure 2. Proportional estimate (ES) with 95% confidence interval (CI) of oral candidiasis (OC) among human immunodeficiency virus (HIV)-infected patients from Africa. The midpoint of each horizontal line segment shows the proportional estimate of OC for each study, while the rhombic mark shows the pooled proportion for all studies. No clear data were given regarding OC and HIV treatment status.
Nine articles involving 2,239 individuals provided data on HIV treatment status. Among the 2,239 individuals, 1,407 (62%) did not receive HAART. Five articles [Citation 15,Citation 16,Citation 26,Citation 27,Citation 30] had detailed data on OC distribution among individuals receiving and not receiving HAART. Only one article showed that HAART was associated with a significantly lower isolation rate of Candida species [Citation 26]. Pseudomembranous candidiasis was the most prevalent form of OC reported in these studies, ranging from 12.1% in Uganda to 66.7% in South Africa. The prevalence of erythematous candidiasis (chronic atrophic candidiasis) was highest in Ethiopia, at 40.2%. Candida leukoplakia and hyperplastic candidiasis were each reported by a single article, one from Ethiopia and one from Tanzania (Figure 3). Of 1,795 Candida isolates analyzed, C. albicans was the most common species (n=1,337; 74.5%, 95% CI 72.2–76.8%), and non-albicans Candida species accounted for 458 (25.5%, 95% CI 21.5–29.5%) of isolates. The prevalence of non-albicans Candida species colonizing the oral cavity of the immunocompromised African population ranged from 6.7% to 58.9% in Nigeria. Of 458 non-albicans Candida species detected, C. glabrata was the most frequent isolate (23.8%; 109/458), followed by C. tropicalis (22%; 101/458) and C. krusei (10.7%; 49/458) (Table 2). Table 2. Candida species distributions according to different studies. The prevalence of non-albicans Candida species causing OC ranged from 13.3% (95% CI 9.6–17%) to 58.9% (95% CI 47.6–70.2%); both extremes were reported in Nigerian subjects (Figure 4). When the data for non-albicans Candida species causing OC among HIV-infected Africans were pooled, the overall prevalence was 33.5% (95% CI 30.9–36.4%) (Figure 4). Figure 3. Clinical patterns of oral candidiasis. Figure 4.
Proportional estimate (ES) of non-albicans Candida species causing oral candidiasis (OC) with 95% confidence interval (CI). The midpoint of each horizontal line segment shows the proportional estimate of non-albicans Candida species in each study, while the rhombic mark shows the pooled proportion for all studies. Seven articles reported on the occurrence of mixed Candida species. Altogether, 1,914 HIV-infected patients were studied, of whom 236 (12.3%) had mixed Candida species. In total, 201 of these individuals (85.2%) had a mixture of C. albicans and a non-albicans Candida species. There were many variations in the breakpoints used in the determination of antifungal susceptibility. Of the 13 articles analyzed, only five reported on antifungal susceptibility patterns. All five reported minimum inhibitory concentrations (MICs) determined by broth microdilution techniques. The Clinical and Laboratory Standards Institute (CLSI) breakpoints were used for interpretation of the drug susceptibility of echinocandins, itraconazole, fluconazole, and amphotericin B (Table 3). One multicenter study undertaken in South Africa and Cameroon [Citation 17] used the previously suggested breakpoints for flucytosine [Citation 31], voriconazole [Citation 32], and posaconazole [Citation 33] (Table 3). A study conducted in Ethiopia by Mulu et al. [Citation 22] used 2 µg/ml as the breakpoint for amphotericin B, as previously reported by Brito et al. [Citation 34]. In the study by Mulu et al. [Citation 22], the MIC for micafungin was defined as the lowest concentration at which at least 50% of growth of the sample was inhibited. Table 3. Breakpoints for minimum inhibitory concentration determination. The incidence of fluconazole resistance among Candida species was found to range from 5% in Tanzania to 40% in South Africa. The highest rate (13%) of Candida species resistant to echinocandins (micafungin) was detected in Cameroon (Table 4).
Table 4. Antifungal resistance patterns for Candida albicans and non-albicans Candida species from different countries. Among C. albicans, micafungin resistance ranged from 0% to 4%, while for non-albicans Candida species it ranged from 0% to 51.6% (Table 4). In total, 252 C. albicans samples were tested for susceptibility to voriconazole. The resistance rate was found to range from 1.8% to 54.7%, while for non-albicans Candida species it ranged from 1.7% to 6% (Table 4). Overall, C. albicans was significantly more resistant than non-albicans Candida species to voriconazole (104/252 vs 4/115; p<0.001). When the data for fluconazole resistance were pooled, the overall fluconazole resistance rate was 39.3% (95% CI 34.4–44.1%), and the rate of fluconazole resistance among C. albicans was significantly higher than that among non-albicans Candida species (44.7%, 95% CI 38.7–50.8% vs 21.9%, 95% CI 15.1–28.7%; p<0.001) (Figure 5). Figure 5. Proportional estimate (ES) of fluconazole resistance by Candida species. The midpoint of each horizontal line segment shows the proportional estimate of fluconazole-resistant Candida species for each study, while the rhombic mark shows the pooled proportions for all studies by Candida species with 95% confidence interval (CI). Discussion OC is the leading opportunistic infection among immunocompromised individuals. Sub-Saharan Africa has the world’s highest prevalence of HIV/AIDS, with an estimated 24.7 million cases [Citation 35]. In the current review, up to 82% of HIV-infected patients were orally colonized by Candida species. A similar prevalence has been reported previously in southern India [Citation 36,Citation 37] and in North America [Citation 38]. The overall prevalence in the current review is much higher than that in previous reports from Italy, Brazil, and China [Citation 39–Citation 41].
The variations in prevalence across the world are considered to be due to differences in diagnostic techniques, geographic and/or ethnic differences, and oral hygiene [Citation 38,Citation 39]. Oral Candida colonization among HIV-infected individuals has been shown to predict the subsequent development of OC [Citation 7,Citation 15,Citation 42], mainly owing to the impaired immune system in these patients [Citation 43]. In the current review, the highest prevalence of OC among HIV-infected populations was 75%, in Ghana. The incidence of OC was considered relatively stable, as it was comparable to that in a review undertaken between 1984 and 2000 [Citation 6]. OC has different clinical presentations with diverse histopathological features [Citation 44]. In the current review, pseudomembranous candidiasis (or thrush) was the most common clinical presentation of OC among HIV-infected populations in sub-Saharan Africa. Pseudomembranous candidiasis has also been noted as the most common clinical manifestation of acute OC among immunocompromised individuals in the UK [Citation 1,Citation 45]. Chronic erythematous candidiasis, which is commonly detected in patients wearing dentures [Citation 1], was also commonly found in AIDS patients in a study conducted in Ethiopia by Mulu et al. [Citation 22]. This clinical form of OC is characterized by localized chronic erythematous tissues on the dorsum of the tongue, palate, or buccal mucosa [Citation 1,Citation 46]. Among HIV-infected individuals, erythematous candidiasis is associated with the chronic use of corticosteroids and of topical and systemic antibiotics [Citation 47]. Its increased prevalence has also been associated with the shedding of the pseudomembranes in persistent or acute pseudomembranous candidiasis [Citation 46]. In general, among African HIV-infected individuals, non-albicans Candida species contributed about 33.5% of OC cases.
The prevalence of non-albicans Candida species was within the range observed in Brazil and New Delhi, India [Citation 40,Citation 48]. As previously documented in Greece, Spain, and New Zealand [Citation 18,Citation 49,Citation 50], the predominant non-albicans Candida species detected were C. glabrata (24%), C. tropicalis (22%), and C. krusei (11%). The high prevalence of C. glabrata and C. krusei among HIV-infected populations from sub-Saharan Africa is of public health importance because of the fluconazole resistance pattern normally associated with these species [Citation 4,Citation 5,Citation 51]. Contrary to previous reports from the USA and Finland, where non-albicans Candida species were commonly detected in co-infection with C. albicans and associated with treatment failure [Citation 52,Citation 53], in most of the studies in sub-Saharan Africa the non-albicans Candida species were sensitive to azoles and mixed presentation was not reported. The prevalence of non-albicans Candida species associated with OC has been linked to a history of fluconazole use [Citation 25,Citation 28]. However, in the current review, the majority of non-albicans Candida species were significantly more sensitive to fluconazole than C. albicans was. This could be because the non-albicans Candida species that are intrinsically resistant to fluconazole contributed only 35% of the non-albicans Candida species in this review. Therefore, the use of fluconazole may not be the only reason for non-albicans Candida species infection. HIV infection with significant depression of the immune system may contribute to the ability of otherwise non-pathogenic non-albicans Candida species to cause OC in this population. In Africa, fluconazole is considered to be the drug of choice in both the treatment and prophylaxis of fungal infections in HIV-infected individuals and people with AIDS [Citation 25,Citation 28,Citation 57].
The use of fluconazole has been associated with the development of resistance [Citation 22,Citation 25]. This could explain the observed high rate of fluconazole resistance among C. albicans. It is documented that overexpression of drug efflux pumps by C. albicans, driven by inappropriate use of azole antifungals, leads to the development of resistance to several azole antifungal agents [Citation 58,Citation 59]. This could also explain the high rate of voriconazole resistance among C. albicans. However, this mechanism spares amphotericin B [Citation 58], which is expensive and not available in most centers in developing countries. This was confirmed in this review, where the rate of amphotericin B resistance ranged from 0% to 8.5% among C. albicans. With increased inappropriate use of azole antifungal agents [Citation 60], resistant strains of C. albicans and non-albicans Candida species could be selected, underscoring the importance of monitoring antifungal resistance and limiting over-the-counter availability of antimycotic drugs. Despite the good-quality data summarized in this review, differences in diagnostic techniques and incomplete data reported by most of the studies may have compromised the findings. Most of the studies did not report the HIV disease stage, the use of antiretrovirals, or trimethoprim/sulphamethoxazole prophylaxis. All these factors are known to have an effect on the manifestation of OC. In conclusion, about one-quarter of the cases of OC among HIV-infected individuals in sub-Saharan Africa are due to non-albicans Candida species. In HIV-infected individuals, C. albicans was more resistant than non-albicans Candida species to fluconazole and voriconazole. There is a need to strengthen the capacity for fungal diagnosis and antifungal susceptibility testing in sub-Saharan Africa in order to track the resistance trend of Candida species in developing countries.
Data from these centers will be used to guide the appropriate use of azoles so that they can be preserved for future generations. Acknowledgments The authors acknowledge the contribution of the Weill Cornell Medical Library, through Mr Yanga Machumi, and the Institute of Medical Microbiology, University Medical Center Göttingen, Germany, for support in accessing the full articles. Disclosure statement No potential conflict of interest was reported by the authors. Additional information Funding This work was supported by research funds from the Catholic University of Health and Allied Sciences to MFM and from Pfizer and Gilead to OB and UG. Notes on contributors Martha F. Mushi Martha F. Mushi, BSc, MSc, is a Lecturer of Microbiology and Immunology and a consultant medical microbiologist at the Catholic University of Health and Allied Sciences and Bugando Medical Centre. She is the chairperson of MYCAFRICA, a network to promote medical mycology in Africa. She has published over 30 papers in the field of infectious diseases. She is currently a PhD student at CUHAS, focusing on the epidemiology of azole-resistant Candida and Aspergillus species in Tanzania. Oliver Bader Dr. Oliver Bader heads the mycology group at the Institute for Medical Microbiology at the University Medical Center Göttingen. His research interests include the epidemiology, mechanisms, and diagnostics of fungal infections and their drug susceptibility patterns. Liliane Taverne-Ghadwal Dr. med. Liliane Taverne-Ghadwal was trained at the Institute for Medical Microbiology, University Medical Center Göttingen. Her interests are the epidemiology and diagnostics of fungal infections, and she wrote her doctoral thesis on oral mycosis in HIV patients from Chad. In addition, since 2009 she has been working and training as an anesthesiologist at the University Medical Center Göttingen and, since 2015, at the Evangelical Klinikum Bethel in Bielefeld. Christine Bii Dr Christine C.
Bii, BSc, MSc, PhD (Medical Mycology), is a Principal Research Officer at the Center for Microbiology Research, Kenya Medical Research Institute, Nairobi, Kenya. She heads medical mycology research in the Institute, focusing on opportunistic fungal infections and emerging antifungal resistance. She has published widely in the areas of Cryptococcus, Candida, PCP, and mycotoxins, and has mentored over 50 postgraduate students. Uwe Groß Prof. Dr. med. Uwe Groß is Head of the Institute for Medical Microbiology of the University Medical Center Göttingen, Germany. He coordinates the Göttingen International Health Network, which cooperates with partners from sub-Saharan Africa, South-East Asia, and South America in the field of infectious diseases. He has published several research papers on the epidemiology, diagnosis, and pathogenesis of infections caused by bacteria, fungi, and parasites. Stephen E. Mshana Prof. Stephen E. Mshana, MD, MMed, PhD, Fell Med Ed, is Professor of Clinical Microbiology and Consultant Clinical Microbiologist at the Catholic University of Health and Allied Sciences (CUHAS)/Bugando Medical Centre (BMC), Mwanza, Tanzania. Throughout his career, Prof. Mshana has contributed to and co-authored more than 100 scientific articles in the field of clinical microbiology, focusing mainly on antimicrobial resistance. References Akpan A, Morgan R. Oral candidiasis. Postgrad Med J. 2002;78:1–10. Feigal DW, Katz MH, Greenspan D, et al. The prevalence of oral lesions in HIV-infected homosexual and bisexual men: three San Francisco epidemiological cohorts. AIDS. 1991;5:519–526. Dupont B, Graybill J, Armstrong D, et al. Fungal infections in AIDS patients. J Med Vet Mycol. 1992;30:19–28.
Thanyasrisung P, Kesakomol P, Pipattanagovit P, et al. Oral Candida carriage and immune status in Thai human immunodeficiency virus-infected individuals. J Med Microbiol. 2014;63:753–759. Meurman J, Siikala E, Richardson M, et al. Non-Candida albicans Candida yeasts of the oral cavity. In: Mendez-Vilas A, editor. Communicating current research and educational topics and trends in applied microbiology. Microbiology book series. Badajoz; 2007. p. 719–731. Hodgson T, Rachanis C. Oral fungal and bacterial infections in HIV-infected individuals: an overview in Africa. Oral Dis. 2002;8:80–87. Guida R. Candidiasis of the oropharynx and esophagus. Ear Nose Throat J. 1988;67:834–836, 838–840. Sangeorzan JA, Bradley SF, He X, et al. Epidemiology of oral candidiasis in HIV-infected patients: colonization, infection, treatment, and emergence of fluconazole resistance. Am J Med. 1994;97:339–346. Bastiaan RJ, Reade PC. The prevalence of Candida albicans in the mouths of tobacco smokers with and without oral mucous membrane keratoses. Oral Surg Oral Med Oral Pathol. 1982;53:148–151. Mushi MF, Mtemisika CI, Bader O, et al. High oral carriage of non-albicans Candida spp. among HIV-infected individuals. Int J Infect Dis. 2016;49:185–188. Kuhn DM, Mukherjee PK, Clark TA, et al. Candida parapsilosis characterization in an outbreak setting. Emerg Infect Dis. 2004;10:1074–1081.
Li L, Redding S, Dongari-Bagtzoglou A. Candida glabrata, an emerging oral opportunistic pathogen. J Dent Res. 2007;86:204–215. Mayanja B, Morgan D, Ross A, et al. The burden of mucocutaneous conditions and the association with HIV-1 infection in a rural community in Uganda. Trop Med Int Health. 1999;4:349–354. Matee M, Scheutz F, Moshy J. Occurrence of oral lesions in relation to clinical and immunological status among HIV-infected adult Tanzanians. Oral Dis. 2000;6:106–111. Nweze EI, Ogbonnaya UL. Oral Candida isolates among HIV-infected subjects in Nigeria. J Microbiol Immunol Infect. 2011;44:172–177. Kwamin F, Nartey NO, Codjoe FS, et al. Distribution of Candida species among HIV-positive patients with oropharyngeal candidiasis in Accra, Ghana. J Infect Dev Ctries. 2013;7:041–045. Dos Santos Abrantes PM, McArthur CP, Africa CWJ. Multi-drug resistant oral Candida species isolated from HIV-positive patients in South Africa and Cameroon. Diagn Microbiol Infect Dis. 2014;79:222–227. Belazi M, Velegraki A, Koussidou-Eremondi T, et al. Oral Candida isolates in patients undergoing radiotherapy for head and neck cancer: prevalence, azole susceptibility profiles and response to antifungal treatment. Oral Microbiol Immunol. 2004;19:347–351. White TC, Marr KA, Bowden RA.
Clinical, cellular, and molecular factors that contribute to antifungal drug resistance. Clin Microbiol Rev. 1998;11:382–402. Fournier P, Schwebel C, Maubon D, et al. Antifungal use influences Candida species distribution and susceptibility in the intensive care unit. J Antimicrob Chemother. 2011;66:2880–2886. Moran GP, Sullivan DJ, Coleman DC. Emergence of non-Candida albicans Candida species as pathogens. In: Candida and candidiasis. Washington (DC): ASM Press; 2002. p. 37–53. Mulu A, Kassu A, Anagaw B, et al. Frequent detection of ‘azole’ resistant Candida species among late presenting AIDS patients in northwest Ethiopia. BMC Infect Dis. 2013;13:82. Owotade FJ, Patel M, Ralephenya TR, et al. Oral Candida colonization in HIV-positive women: associated factors and changes following antiretroviral therapy. J Med Microbiol. 2013;62:126–132. Nanteza M, Tusiime JB, Kalyango J, et al. Association between oral candidiasis and low CD4+ count among HIV positive patients in Hoima Regional Referral Hospital. BMC Oral Health. 2014;14:143. Hamza OJ, Matee MI, Moshi MJ, et al. Species distribution and in vitro antifungal susceptibility of oral yeast isolates from Tanzanian HIV-infected patients with primary and recurrent oropharyngeal candidiasis. BMC Microbiol. 2008;8:1. Esebelahie NO, Enweani IB, Omoregie R.
Candida colonisation in asymptomatic HIV patients attending a tertiary hospital in Benin City, Nigeria. Libyan J Med. 2013;18(8):20322. Owotade FJ, Patel M. Virulence of oral Candida isolated from HIV-positive women with oral candidiasis and asymptomatic carriers. Oral Surg Oral Med Oral Pathol Oral Radiol. 2014;118:455–460. Enwuru C, Ogunledun A, Idika N, et al. Fluconazole resistant opportunistic oro-pharyngeal Candida and non-Candida yeast-like isolates from HIV infected patients attending ARV clinics in Lagos, Nigeria. Afr Health Sci. 2008;8:142–148. Agwu E, Ihongbe JC, McManus BA, et al. Distribution of yeast species associated with oral lesions in HIV-infected patients in Southwest Uganda. Med Mycol. 2012;50:276–280. Yitayew B, Woldeamanuel Y, Asrat D, et al. Oral Candida carriage among HIV infected and non-infected individuals in Tikur Anbesa Specialized Hospital, Addis Ababa, Ethiopia. GJMEDPH. 2015;4:2. Pfaller M, Espinel-Ingroff A, Canton E, et al. Wild-type MIC distributions and epidemiological cutoff values for amphotericin B, flucytosine, and itraconazole and Candida spp. as determined by CLSI broth microdilution. J Clin Microbiol. 2012;50:2040–2046. Pfaller M, Diekema D, Rex J, et al. Correlation of MIC with outcome for Candida species tested against voriconazole: analysis and proposal for interpretive breakpoints. J Clin Microbiol. 2006;44:819–826. Messer SA, Diekema DJ, Hollis RJ, et al.
Evaluation of disk diffusion and Etest compared to broth microdilution for antifungal susceptibility testing of posaconazole against clinical isolates of filamentous fungi. J Clin Microbiol. 2007;45:1322–1324. Brito GNB, Inocêncio AC, Querido SMR, et al. In vitro antifungal susceptibility of Candida spp. oral isolates from HIV-positive patients and control individuals. Braz Oral Res. 2011;25:28–33. Barchiesi F, Maracci M, Radi B, et al. Point prevalence, microbiology and fluconazole susceptibility patterns of yeast isolates colonizing the oral cavities of HIV-infected patients in the era of highly active antiretroviral therapy. J Antimicrob Chemother. 2002;50:999–1002. Girish Kumar C, Menon T, Rajasekaran S, et al. Carriage of Candida species in oral cavities of HIV infected patients in South India. Mycoses. 2009;52:44–48. Jeddy N, Ranganathan K, Devi U, et al. A study of antifungal drug sensitivity of Candida isolated from human immunodeficiency virus infected patients in Chennai, South India. J Oral Maxillofac Pathol. 2011;15:182. Vargas KG, Joly S. Carriage frequency, intensity of carriage, and strains of oral yeast species vary in the progression to oral candidiasis in human immunodeficiency virus-positive individuals. J Clin Microbiol. 2002;40:341–350. Campisi G, Pizzo G, Milici ME, et al. Candidal carriage in the oral cavity of human immunodeficiency virus-infected subjects. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2002;93:281–286.
Junqueira JC, Vilela SF, Rossoni RD, et al. Oral colonization by yeasts in HIV-positive patients in Brazil. Rev Inst Med Trop Sao Paulo. 2012;54:17–24. Liu X, Liu H, Guo Z, et al. Association of asymptomatic oral candidal carriage, oral candidiasis and CD4+ lymphocyte count in HIV-positive patients in China. Oral Dis. 2006;12:41–44. Lott TJ, Holloway BP, Logan DA, et al. Towards understanding the evolution of the human commensal yeast Candida albicans. Microbiology. 1999;145:1137–1143. Samaranayake L, MacFarlane T. Host factors and oral candidosis. Oral Candidosis. 1990;66–103. Feller L, Khammissa R, Chandran R, et al. Oral candidosis in relation to oral immunity. J Oral Pathol Med. 2014;43:563–569. Samaranayake L. Nutritional factors and oral candidosis. J Oral Pathol Med. 1986;15:61–65. Lehner T. Oral candidosis. Dent Pract Dent Rec. 1967;17:209–216. Scully C, Ei-Kabir M, Samaranayake LP. Candida and oral candidosis: a review. Crit Rev Oral Biol Med. 1994;5:125–157. Maheshwari M, Kaur R, Chadha S. Candida species prevalence profile in HIV seropositive patients from a major tertiary care hospital in New Delhi, India. J Pathog. 2016;2016.
(Open in a new window)Web of Science ®(Open in a new window)Google Scholar Dronda F, Alonso-Sanz M, Laguna F, et al. Mixed oropharyngeal candidiasis due to Candida albicans and non-albicans Candida strains in HIV-infected patients. Europ J Clin Microbiol Infect Dis. 1996;15:446–4452. (Open in a new window)PubMed(Open in a new window)Web of Science ®(Open in a new window)Google Scholar Cannon R, Chaffin W. Oral colonization by Candida albicans. Crit Rev Oral Biol Med. 1999;10:359–383. (Open in a new window)PubMed(Open in a new window)Google Scholar Pappas PG, Kauffman CA, Andes DR, et al. Clinical practice guideline for the management of candidiasis: 2016 update by the infectious diseases society of America. Clinical Infect Dis. 2015;civ933. (Open in a new window)Google Scholar Cartledge J, Midgley J, Gazzard B. Non-albicans oral candidosis in HIV-positive patients. J Antimicrob Chemother. 1999;43:419–422. (Open in a new window)PubMed(Open in a new window)Web of Science ®(Open in a new window)Google Scholar Meurman J, Siikala E, Richardson M, et al. Non-Candida albicans Candida yeasts of the oral cavity. Commun Curr Res Educ Top Trends Appl Microbiol. 2007;1:719–731. (Open in a new window)Google Scholar Wayne P. Clinical and Laboratory Standards Institute: reference method for broth dilution antifungal susceptibility testing of yeasts; approved standard. In: CLSI document M27-A3 and Supplement S.3.2008 (Open in a new window)Google Scholar Barrio EE, Ruesga M, Vidal MV, et al. Comparative evaluation of ATB Fungus 2 and Sensititre YeastOne panels for testing in vitro Candida antifungal susceptibility. Revista Iberoamericana De Micología. 2008;25:3–6. (Open in a new window)Web of Science ®(Open in a new window)Google Scholar Arendrup MC, Garcia-Effron G, Lass-Flörl C, et al. Echinocandin susceptibility testing of Candida species: comparison of EUCAST EDef 7.1, CLSI M27-A3, Etest, disk diffusion, and agar dilution methods with RPMI and isosensitest media. 
Antimicrob Agents Chemother. 2010;54:426–439. (Open in a new window)PubMed(Open in a new window)Web of Science ®(Open in a new window)Google Scholar Maenza JR, Keruly JC, Moore RD, et al. Risk factors for fluconazole-resistant candidiasis in human immunodeficiency virus-infected patients. J Infect Dis. 1996;173:219–225. (Open in a new window)PubMed(Open in a new window)Web of Science ®(Open in a new window)Google Scholar Albertson GD, Niimi M, Cannon RD, et al. Multiple efflux mechanisms are involved in Candida albicans fluconazole resistance. Antimicrob Agents Chemother. 1996;40:2835–2841. (Open in a new window)PubMed(Open in a new window)Web of Science ®(Open in a new window)Google Scholar Niimi M, Firth NA, Cannon RD. Antifungal drug resistance of oral fungi. Odontology. 2010;98:15–25. (Open in a new window)PubMed(Open in a new window)Web of Science ®(Open in a new window)Google Scholar Mushi MF, Masewa B, Jande M, et al. Prevalence and factor associated with over-the-counter use of antifungal agents’, in Mwanza City, Tanzania. Tanzania J of Health Res. 2017;19.1 (Open in a new window)Google Scholar Download PDF Share Back to Top Related research People also read lists articles that other readers of this article have read. Recommended articles lists articles that we recommend and is powered by our AI driven recommendation engine. Cited by lists all citing articles based on Crossref citations. Articles with the Crossref icon will open in a new tab. People also read Recommended articles Cited by To cite this article: Reference style:APA Chicago Harvard Citation copied to clipboard Copy citation to clipboard Reference styles above use APA (6th edition), Chicago (16th edition) & Harvard (10th edition) Download citation Download a citation file in RIS format that can be imported by citation management software including EndNote, ProCite, RefWorks and Reference Manager. 
Choose format:RIS BibTex RefWorks Direct Export Choose options:Citation Citation & abstract Citation & references Download citations Your download is now in progress and you may close this window Did you know that with a free Taylor & Francis Online account you can gain access to the following benefits? Choose new content alerts to be informed about new research of interest to you Easy remote access to your institution's subscriptions on any device, from any location Save your searches and schedule alerts to send you new results Export your search results into a .csv file to support your research Have an account? Login nowDon't have an account? Register for free Login or register to access this feature Have an account? Login nowDon't have an account? Register for free Register a free Taylor & Francis Online account today to boost your research and gain these benefits: Choose new content alerts to be informed about new research of interest to you Easy remote access to your institution's subscriptions on any device, from any location Save your searches and schedule alerts to send you new results Export your search results into a .csv file to support your research Register now or learn more Information for Authors R&D professionals Editors Librarians Societies Open access Overview Open journals Open Select Dove Medical Press F1000Research Opportunities Reprints and e-prints Advertising solutions Accelerated publication Corporate access solutions Help and information Help and contact Newsroom All journals Books Keep up to date Register to receive personalised research and resources by email Sign me up Taylor and Francis Group Facebook page Taylor and Francis Group X Twitter page Taylor and Francis Group Linkedin page Taylor and Francis Group Youtube page Taylor and Francis Group Weibo page Taylor and Francis Group Bluesky page Copyright © 2025Informa UK LimitedPrivacy policyCookiesTerms & conditionsAccessibility Registered in England & Wales No. 
01072954 5 Howick Place | London | SW1P 1WG
731
https://allen.in/jee/chemistry/molality
Molality

Solutions have several properties, some of which depend on volume, while others rely on mass. Molality is one such property: it measures the concentration of a solute in a solution as the number of moles of solute per kilogram of solvent. Since molality is based on the mass of the solvent, it is independent of temperature, making it a reliable concentration measurement even when temperatures change.

1.0 Molality

Molality (m) is a measure of concentration that describes the amount of solute present in a given mass of solvent. It is defined as the number of moles of solute per kilogram of solvent. Molality is frequently used in chemical calculations because it remains unaffected by changes in temperature and pressure.

2.0 Formula and Unit for Molality

The formula for calculating molality is:

m = moles of solute / kilograms of solvent

The unit for molality is mol/kg.

Example of Molality Calculation

To determine the molality of a solution, you need the number of solute moles and the mass of the solvent in kilograms. For instance, if a solution contains 0.5 moles of solute in 2 kg of solvent, the molality is 0.5 / 2 = 0.25 mol/kg.

3.0 Advantages of Molality

Molality provides distinct benefits compared to other concentration measurements like molarity and percent by mass:

Temperature stability: Unlike volume-based units, molality does not change with temperature, making it a reliable option in environments with temperature variations.

Mass-based precision: By focusing on the mass of the solvent, molality delivers an accurate assessment of solute concentration, which is particularly useful when comparing solutions of different densities.
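The definition above reduces to a one-line calculation. A minimal Python sketch (the function name is illustrative, not from the original text):

```python
def molality(moles_solute, solvent_mass_kg):
    """Molality m = moles of solute / kilograms of solvent (units: mol/kg)."""
    if solvent_mass_kg <= 0:
        raise ValueError("solvent mass must be positive")
    return moles_solute / solvent_mass_kg

# Worked example from the text: 0.5 mol of solute in 2 kg of solvent
print(molality(0.5, 2.0))  # 0.25 mol/kg
```

Note that the solvent mass, not the solution mass, goes in the denominator; confusing the two is a common source of error.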
4.0 Drawbacks of Molality

Although molality has its benefits, there are some drawbacks to using this concentration measure:

Calculation complexity: Determining molality requires accurate values for the number of solute moles and the mass of the solvent in kilograms, which is often less convenient than measuring a volume.

Less common in practice: Molality is less widely used than other concentration units, such as molarity, which can make it less familiar and more challenging to apply in some contexts.

Overall, while molality is a valuable concentration measure with distinct advantages, it is not always the preferred choice.

Example

Suppose a solution contains 0.1 moles of sodium chloride (NaCl) dissolved in 1 kg of water. The molality of this solution is:

m = 0.1 mol NaCl / 1 kg water = 0.1 mol/kg

5.0 Relationship Between Molarity and Molality

The connection between molarity and molality follows from their definitions, as both are fundamental concentration units in chemistry.

Molarity (M) is defined as:

M = moles of solute / volume of solution (in litres)

Molality (m) is defined as:

m = moles of solute / mass of solvent (in kilograms)

To relate the two, consider 1 litre of solution with molarity M, density d (in g/mL, numerically equal to kg/L), and solute molar mass Mw (in g/mol). The solution weighs 1000·d grams, of which M·Mw grams are solute, so the solvent mass is (1000·d − M·Mw) grams, i.e. (1000·d − M·Mw)/1000 kilograms. Substituting into the definition of molality gives the exact relation:

m = 1000·M / (1000·d − M·Mw)

When the solution is dilute, the solute mass M·Mw is negligible compared with the solution mass 1000·d, and the relation reduces to the commonly quoted approximation:

m ≈ M / d (with d in kg/L)

Molarity and molality are both vital in chemical calculations. While molarity is more commonly used, molality is advantageous when the solution's volume is sensitive to temperature or pressure changes.
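The conversion from molarity to molality can be sketched in a few lines of Python. The exact formula used here is m = 1000·M / (1000·d − M·Mw), obtained by computing the solvent mass in 1 L of solution; the NaCl molar mass and density values in the example are illustrative assumptions, not data from the text:

```python
def molarity_to_molality(M, density_g_per_ml, molar_mass_g_per_mol):
    """Exact conversion: m = 1000*M / (1000*d - M*Mw).

    In 1 L of solution there are 1000*d grams of solution,
    of which M*Mw grams are solute; the rest is solvent.
    """
    solvent_grams = 1000.0 * density_g_per_ml - M * molar_mass_g_per_mol
    if solvent_grams <= 0:
        raise ValueError("inputs leave no solvent mass; check the values")
    return 1000.0 * M / solvent_grams

# Illustrative numbers: 1.0 M aqueous NaCl (Mw ~ 58.44 g/mol), density ~ 1.04 g/mL
m_exact = molarity_to_molality(1.0, 1.04, 58.44)
m_shortcut = 1.0 / 1.04  # dilute-solution shortcut m ~ M/d
print(round(m_exact, 3), round(m_shortcut, 3))  # 1.019 0.962
```

The small gap between the two printed values shows how much the dilute-solution shortcut m ≈ M/d underestimates molality once the solute's own mass is no longer negligible.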
The relationship between the two can be summarised by the approximation m ≈ M/d (with d the solution density in kg/L), which holds for dilute solutions and helps convert between the two concentration units when needed.

6.0 Comparison of Molality and Molarity

| Feature | Molality | Molarity |
| --- | --- | --- |
| Definition | Number of moles of solute per kilogram of solvent | Number of moles of solute per litre of solution |
| Formula | m = moles of solute / mass of solvent (kg) | M = moles of solute / volume of solution (L) |
| Temperature dependence | Independent of temperature and pressure | Varies with temperature and pressure |
| Units | mol/kg | mol/L |
| Applications | Used when the volume of the solution changes significantly | Used when the volume of the solution is relatively constant |

Frequently Asked Questions

Molality measures the number of moles of solute per kilogram of solvent, while molarity refers to the number of moles of solute per litre of solution. Unlike molarity, which fluctuates with temperature, molality is unaffected by temperature changes, making it a stable measurement.

The unit of molality is mol/kg (moles per kilogram of solvent), while the unit of molarity is mol/L (moles per litre of solution).

Temperature changes do not affect molality because it relies on the mass of the solvent. However, molarity can change with temperature as the solution's volume may expand or contract.

Molarity is more convenient for laboratory work since measuring the volume of liquids directly is easier. Molality requires accurately measuring the mass of the solvent, which can be more time-consuming.
732
https://fiveable.me/key-terms/inorganic-chemistry-i/cu2
Cu2+ - (Inorganic Chemistry I) - Vocab, Definition, Explanations | Fiveable

Definition

Cu2+ is the copper ion with a +2 charge, formed when a copper atom loses two electrons. This ion plays a significant role in various chemical processes and reactions, particularly in coordination chemistry and the study of hard and soft acids and bases. The properties and behavior of Cu2+ are crucial for understanding its interactions with ligands and its stability in different chemical environments.

5 Must Know Facts For Your Next Test

Cu2+ is classified as a borderline (intermediate) acid in HSAB theory, so it bonds well with both moderately hard and moderately soft bases; the genuinely soft copper ion is Cu+, which prefers soft bases such as phosphines or sulfides.

In aqueous solutions, Cu2+ often forms hydrated complexes, typically represented as [Cu(H2O)6]2+, influencing its solubility and reactivity.

The redox behavior of Cu2+ is essential in various biological systems, particularly in electron transfer processes and enzymatic functions.

Cu2+ can undergo both oxidation and reduction reactions, making it a versatile species in inorganic chemistry.

When complexed with different ligands, Cu2+ can show significant changes in color, demonstrating its role in coordination chemistry.

Review Questions

How does the HSAB classification of Cu2+ influence its interactions with various ligands?

As a borderline acid, Cu2+ forms stable complexes with both moderately hard donors (such as the nitrogen of amines) and softer donors (such as sulfur); by contrast, the soft acid Cu+ bonds most strongly with soft bases like phosphines or sulfides. Understanding this classification helps predict the stability and reactivity of Cu2+ complexes in different environments.

Discuss the role of Cu2+ in redox reactions and its significance in biological systems.

Cu2+ plays a critical role in redox reactions by acting as an electron acceptor or donor. In biological systems, it is essential for various enzymatic processes, such as those in cytochrome c oxidase, where it helps facilitate electron transfer. The ability of copper to switch between oxidation states allows it to participate in vital biochemical pathways, highlighting its importance beyond inorganic chemistry alone.

Evaluate the impact of ligand choice on the stability and color of Cu2+ coordination complexes.

The choice of ligand significantly affects both the stability and color of Cu2+ coordination complexes through changes in crystal field splitting. For instance, when Cu2+ is coordinated with ammonia (NH3) versus water (H2O), the electronic environment alters, resulting in different absorption spectra and hence distinct colors in solutions or solid complexes. Evaluating these interactions shows how ligand properties influence coordination chemistry involving Cu2+, showcasing its versatility in various applications.

Related terms

Ligand: A molecule or ion that binds to a central metal atom to form a coordination complex, often influencing the reactivity and properties of the metal.

Acid-Base Reaction: A chemical reaction that involves the transfer of protons (H+) between reactants, which can include hard and soft acids and bases.

Coordination Complex: A structure consisting of a central metal atom or ion bonded to surrounding molecules or ions (ligands), which can influence the stability and reactivity of the complex.
"Cu2+" also found in: Intro to Chemistry

© 2025 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
733
https://www.idcmjournal.org/typical-evolution-of-a-cutaneous-anthrax-lesion
IDCM — Infectious Diseases and Clinical Microbiology
The Official Journal of the Turkish Society of Clinical Microbiology and Infectious Diseases (KLİMİK)

Case Report

Typical Evolution of a Cutaneous Anthrax Lesion

Cansu Çimen
Infectious Diseases and Clinical Microbiology Clinic, Ardahan State Hospital, Ardahan, Turkey

VOLUME 2, ISSUE 1, APRIL 2020
Correspondence: Cansu Çimen, e-mail: cansucmn@yahoo.com
Received April 20, 2020; accepted April 28, 2020; published April 30, 2020.
Suggested citation: Çimen C. Typical Evolution of a Cutaneous Anthrax Lesion. Infect Dis Clin Microbiol 2020; 1: 27-29. DOI 10.36519/idcm.2020.0007
License: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Abstract

An illustrated case of cutaneous anthrax acquired in eastern Turkey is described in this report. A 56-year-old female patient presented to the infectious diseases outpatient clinic with a painless, dark-coloured swelling over her right middle finger accompanied by oedema extending to the right hand. The typical disease course of a cutaneous anthrax lesion evolved over a few days. This case report is presented to help clinicians recognize the different stages of the disease in clinical practice.
Keywords: anthrax, cutaneous anthrax, eschar, oedema

Introduction

Anthrax is a zoonotic infection mainly affecting herbivores and caused by Bacillus anthracis. Humans can contract the disease after direct or indirect exposure to animals or animal products (1). Human-to-human transmission has never been reported (2). Worldwide, most cases occur among persons who come in contact with animals in agricultural regions of South and Central America, sub-Saharan Africa, central and southwestern Asia, and southern and eastern Europe (3,4). Anthrax is still an endemic disease in Turkey (5). Depending on the inoculation site, cutaneous, gastrointestinal, or respiratory anthrax may develop. Cutaneous anthrax accounts for more than 95% of all human cases worldwide (1,2,4,5) and is mostly seen on the hand, arm, head, or neck, depending on the exposed area (6). The incubation period has been noted to range from 1 to 19 days but is usually between 2 and 9 days (5,7,8). Initiation of appropriate therapy prevents systemic disease but does not alter the evolution of the cutaneous lesion (9). Thus, it is important to accurately diagnose cutaneous anthrax based on the history of the patient and the characteristics of the skin lesions (1). In this report, we present an illustrated case of cutaneous anthrax acquired in eastern Turkey.

Figure 1. Anthrax lesion at the day of presentation: (a) top view, (b) lateral view.
Figure 2. Anthrax lesion on the third day of treatment: (a) top view, (b) lateral view.

Case Presentation

A 56-year-old female patient presented in July 2018 to the infectious diseases outpatient clinic of Ardahan State Hospital with a painless, dark-coloured swelling over her right middle finger accompanied by oedema extending to the right hand (Figure 1a and 1b). As a farmer, she had noticed the sudden death of several of her sheep and had then cut up their meat. Three days later, the first lesion appeared.
The patient described the initial lesion as a small papule, which expanded over two days and ended with a swollen hand. Despite its violaceous appearance, the lesion was remarkably painless. On physical examination, she was afebrile and hemodynamically stable, and there was no neurovascular or orthopaedic injury. She had a tender epitrochlear lymph node of about 3 cm in diameter. The laboratory examination was unremarkable. After puncture of the lesion, a Gram's stain and culture tests were performed on the aspiration material. Gram-positive, endospore-forming rods were observed. The patient was started on amoxicillin-clavulanic acid 3 g/day. Three days after the start of the treatment, the swelling had decreased (Figure 2a and 2b).

Figure 3. A typical anthrax eschar.

After ten days of treatment, a typical anthrax eschar with a sharp-edged, black ulcer in the middle of the wound appeared, and the antibiotic was stopped (Figure 3).

Discussion and Conclusion

This case demonstrates the typical disease course of a cutaneous anthrax lesion. It is helpful to clinicians, as the diagnosis mainly depends on clinical features. A cutaneous anthrax lesion usually begins with itching at the site of entry. Subsequently, a small, painless papule emerges. This papule quickly enlarges and develops a central vesicle surrounded by oedema. In a couple of days, the vesicle fluid becomes darker. The wound turns into a necrotic ulcer, followed by a depressed, painless eschar. Swollen and painful regional lymph nodes and lymphangitis often accompany this lesion (5,10,11). The differential diagnosis of cutaneous anthrax includes a wide range of infectious diseases: erysipelas, cat-scratch disease, cutaneous plague, ulceroglandular tularemia, clostridial infection, orf, vaccinia and cowpox, leishmaniasis, ecthyma gangrenosum, blastomycosis, sporotrichosis, and herpetic whitlow. However, these infections lack the characteristic oedema of anthrax (1,7).
The fatality rate of cutaneous anthrax among humans is <1% with adequate treatment, but it can rise to as much as 20% in the case of late diagnosis and treatment (4). Oedema-associated tracheal compression, severe oedema, and shock may develop as complications (4,7). Anthrax is an endemic disease, particularly in eastern and southeastern Turkey, where animal husbandry and farming are common. The cutaneous form of the disease is diagnosed based on a history of contact with animals or animal products in an endemic region and the presence of a violaceous but painless skin lesion on an oedematous background. Considering anthrax in the differential diagnosis is crucial for prompt and appropriate treatment. This case report may help clinicians to recognize the different stages of the disease in clinical practice.

Peer-review: Externally peer-reviewed.
Statement: This case report was presented in the Clinical Grand Round of ECCMID 2019, Amsterdam, April 2019.
Financial Disclosure: The authors declared that this study has received no financial support.

References

Anthrax in Humans and Animals. 4th ed. Turnbull P, editor. Geneva: World Health Organisation; 2008.
Dixon TC, Meselson M, Guillemin J, Hanna PC. Anthrax. N Engl J Med 1999; 341: 815-826.
Hendricks KA, Wright ME, Shadomy SV, Bradley JS, Morrow MG, Pavia AT, et al. Centers for Disease Control and Prevention expert panel meetings on prevention and treatment of anthrax in adults. Emerg Infect Dis 2014; 20: e130687.
Shadomy SV, Smith TL. Zoonosis update. Anthrax. J Am Vet Med Assoc 2008; 233: 63-72.
Doganay M, Metan G, Alp E. A review of cutaneous anthrax and its outcome. J Infect Public Health 2010; 3: 98-105.
Kamal SM, Rashid AK, Bakar MA, Ahad MA. Anthrax: An update. Asian Pac J Trop Biomed 2011; 1: 496-501.
Kaya A, Tasyaran MA, Erol S, Ozkurt Z, Ozkan B. Anthrax in adults and children: A review of 132 cases in Turkey. Eur J Clin Microbiol Infect Dis 2002; 21: 258-261.
Abdenour D, Larouze B, Dalichaouche M, Aouati M. Familial occurrence of anthrax in eastern Algeria. J Infect Dis 1987; 155: 1083-1084.
Parlak E, Parlak M. Human cutaneous anthrax, the East Anatolian region of Turkey 2008-2014. Vector-Borne Zoonotic Dis 2016; 16: 42-47.
Wenner KA, Kenner JR. Anthrax. Dermatol Clin 2004; 22: 247-256.
Baykam N, Ergonul O, Ulu A, Eren S, Celikbas A, Eroglu M, et al. Characteristics of cutaneous anthrax in Turkey. J Infect Dev Ctries 2009; 3: 599-603.

Copyright © 2025 Infectious Diseases and Clinical Microbiology. EISSN 2667-646X.
734
https://english.stackexchange.com/questions/424420/bereaved-vs-bereft
word choice - "Bereaved" vs. "bereft" - English Language & Usage Stack Exchange
"Bereaved" vs. "bereft"

Asked Dec 31, 2017. Modified Jan 4, 2018. Viewed 2k times.

Question (Taka):

I saw the sentence below, and I think it would sound better after changing "bereaved" to "bereft":

Having lost his father in early childhood, he was bereaved of his love and affection.

The Oxford Advanced Learner's Dictionary definitions of bereaved and bereft are:

bereaved (adjective): having lost a relative or close friend who has recently died
bereft (adjective): bereft of something; completely lacking something; having lost something

Tags: word-choice, past-participles, ed-vs-t

Comments:

user 66974 (Dec 31, 2017): "Bereaved is also the correct form to use when referring to the loss of cherished characteristics of the deceased."
Edwin Ashworth (Dec 31, 2017): With a supporting reference, that would be the correct answer.
Phil Sweet (Dec 31, 2017): Interesting question. To me, "bereaved" suggests he didn't receive a father's love and affection, and "bereft" suggests he has none to give others. The choice changes the antecedent of the pronoun "his".
Taka (Jan 5, 2018): Thank you all for your replies; I understand now.

2 Answers

Answer (Anand Bapat, score 2, Jan 4, 2018):

The distinction may become clearer if the sentence is rephrased as: "Having been bereaved in early childhood, he remained bereft of fatherly love and affection."

Answer (SunShine Wong, score 1, Jan 1, 2018):

Using "bereft" should be better, as the focus of the sentence is the loss of his father's love and affection, not his father himself.
https://study.com/skill/learn/naming-the-quadrant-or-axis-of-a-point-given-the-signs-of-its-coordinates-explanation.html
Naming the Quadrant or Axis of a Point Given the Signs of its Coordinates | Algebra | Study.com
Instructors: Dana Hansen, Amy McKenney.

How to Determine the Quadrant or Axis of a Point Given the Signs of its Coordinates

Step 1: Determine the sign of the x-coordinate and the sign of the y-coordinate. If either coordinate is zero, the point falls on one of the coordinate axes.

Step 2: Use the signs of the x-coordinate and the y-coordinate to determine the quadrant the point lies in, based on the descriptions below. If one of the coordinates is 0, skip to Step 3.

If x = positive and y = positive, the point lies in Quadrant 1 of the coordinate plane.
If x = negative and y = positive, the point lies in Quadrant 2 of the coordinate plane.
If x = negative and y = negative, the point lies in Quadrant 3 of the coordinate plane.
If x = positive and y = negative, the point lies in Quadrant 4 of the coordinate plane.
Step 3: If one of the coordinates has a value of 0, use the descriptions below to determine whether the point falls on the x-axis or the y-axis.

If the point has an x-value of 0, the point falls along the y-axis.
If the point has a y-value of 0, the point falls along the x-axis.

Vocabulary

Coordinate Plane: A two-dimensional plane that allows us to graph both points and functions. The y-axis is the vertical axis and the x-axis is the horizontal axis. The center of the coordinate plane is the origin, where x = 0 and y = 0. Anything to the left of the origin is negative on the x-axis, and anything to the right is positive; anything above the origin is positive on the y-axis, and anything below is negative.

Quadrant of a Coordinate Plane: The coordinate plane is divided into four quadrants, numbered counterclockwise starting from the upper right.

Coordinate Point: A notation of the form (x, y) that gives the location of a point on the coordinate plane. The first value is the x-value (the horizontal location); the second value is the y-value (the vertical location).

Let's try these steps in three examples: two with nonzero values for both coordinates, and one with a point that falls along an axis.

Nonzero Coordinates Example 1

Give the quadrant or axis where the point (-2, 3) lies.

Step 1: x = negative and y = positive.
Step 2: Since x is negative and y is positive, the point lies in Quadrant 2 of the coordinate plane.
Step 3: Skipped, since neither coordinate is 0.

Thus, the point (-2, 3) lies in Quadrant 2.

Nonzero Coordinates Example 2

Give the quadrant or axis where the point (-6, -10) lies.

Step 1: x = negative and y = negative.
Step 2: Since both x and y are negative, the point lies in Quadrant 3 of the coordinate plane.
Step 3: Skipped, since neither coordinate is 0.

Therefore, the point (-6, -10) lies in Quadrant 3.

Zero Coordinate Example

Give the quadrant or axis where the point (0, 5) lies.

Step 1: x = 0 and y = positive.
Step 2: Skipped, since one of the coordinates is 0.
Step 3: Since the point has an x-value of 0, it falls along the y-axis.

The point (0, 5) lies on the y-axis.
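The three-step procedure above can be sketched as a small function (the name `locate_point` is a hypothetical helper, not from the lesson):

```python
def locate_point(x, y):
    """Name the quadrant or axis of (x, y) from the signs of its coordinates."""
    # Step 3 cases: a zero coordinate puts the point on an axis.
    # (The lesson doesn't cover (0, 0); it is labeled "origin" here.)
    if x == 0 and y == 0:
        return "origin"
    if x == 0:
        return "y-axis"
    if y == 0:
        return "x-axis"
    # Step 2: the signs of two nonzero coordinates pick the quadrant.
    if x > 0 and y > 0:
        return "Quadrant 1"
    if x < 0 and y > 0:
        return "Quadrant 2"
    if x < 0 and y < 0:
        return "Quadrant 3"
    return "Quadrant 4"  # x > 0 and y < 0

print(locate_point(-2, 3))    # Quadrant 2
print(locate_point(-6, -10))  # Quadrant 3
print(locate_point(0, 5))     # y-axis
```

The three calls reproduce the lesson's three worked examples.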
https://www.physicsforums.com/threads/quick-calculation-check-please.798659/post-5023552
Quick calculation check please | Page 2 | Physics Forums
Forum: Special and General Relativity. Thread starter: name123. Start date: Feb 18, 2015. Page 2 of 3.

Feb 26, 2015, post #51, PeterDonis (Mentor, Insights Author):

name123 said: What do you mean the length contracted?

This:

PeterDonis said: Length contraction has made the ##10^{10}## atoms in the sphere, which are moving in the ##x## direction, fit into the same space, in the ##x## direction, as ##5 \times 10^9## atoms between the poles that are at rest.

The space between the poles fits ##5 \times 10^9## atoms at rest, but twice as many atoms (##10^{10}##) that are moving at 87% of the speed of light. That is an example of relativistic length contraction. When people in SR talk about "length contraction", that is what they are talking about.
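The factor of two in post #51 comes from the Lorentz factor at 87% of the speed of light; a quick numerical check (a sketch added here, not part of the thread):

```python
import math

# Work in units where c = 1. "87% of c" in the thread is v = sqrt(3)/2 ≈ 0.866,
# the speed at which the Lorentz factor is exactly 2.
v = math.sqrt(3) / 2
gamma = 1 / math.sqrt(1 - v**2)

# A row of 1e10 moving atoms is contracted by 1/gamma, so it fits in the
# space occupied by 1e10 / gamma = 5e9 at-rest atoms.
print(round(gamma, 6))      # 2.0
print(round(1e10 / gamma))  # 5000000000
```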
Feb 26, 2015, post #52, name123:

PeterDonis said: The space between the poles fits ##5 \times 10^9## atoms at rest, but twice as many atoms (##10^{10}##) that are moving at 87% of the speed of light. That is an example of relativistic length contraction. When people in SR talk about "length contraction", that is what they are talking about.

So you are measuring distance by atoms at rest, and the distance is the space between the poles? But if the universe were imagined to be one in which the sphere had shrunk, you'd realize that doing it that way wouldn't always give the right answer. An experimenter on the sphere could come to the conclusion that the poles had parted (that the distance between them had grown), but why should that be any more likely than that the sphere had shrunk? Also, what were you thinking such a claim would imply: that more atoms could be fitted in it in its rest frame? In a computer model, for example, you could imagine a difference between whether the sphere shrank or the poles parted. Would it be considered as likely that when a spaceship applied its thrusters the rest of the universe changed size as that the spaceship did? There would be some change in distance somewhere, presumably.

Last edited: Feb 26, 2015

Feb 26, 2015, post #53, PeterDonis (Mentor, Insights Author):

name123 said: So you are measuring distance by atoms at rest.

In this example, yes. There are other possible observables you could use.

name123 said: if the universe was imagined to be one in which the sphere had shrunk

And what observable would tell you that "the sphere had shrunk"?
You keep on talking as if there is some way to tell that "shrinking" has occurred even if no physical observable has changed. One more time: that makes no sense. If "shrinking" has occurred in the rest frame of the object, some observable must change. So unless you can tell me what observable changed in the sphere's rest frame, imagining that it has "shrunk" is imagining something that makes no sense. I have repeatedly said this and you have repeatedly ignored it.

name123 said: An experimenter on the sphere could come to the conclusion that the poles had parted

No, he wouldn't. He would observe the distance between the poles to be length contracted in the ##x## direction. But he would also observe them to be offset in the ##t## direction, so the sphere can slip between them. (Relativity of simultaneity is involved here, and you have to include it to understand what is going on; maline has been trying to describe this to you in his posts.)

name123 said: why should it be that any more than the sphere had shrunk?

Because the experimenter, at rest on the sphere, measures all the same observables for the sphere. The only observable changes he measures involve the poles, not the sphere.

Feb 26, 2015, post #54, name123:

PeterDonis said: And what observable would tell you that "the sphere had shrunk"? [...] So unless you can tell me what observable changed in the sphere's rest frame, imagining that it has "shrunk" is imagining something that makes no sense. I have repeatedly said this and you have repeatedly ignored it.

I'm not saying that shrinking has occurred even if no physical observable has changed: the sphere managed to get through the poles, which it couldn't before.
Nor am I saying that there is a way to tell whether shrinking had occurred, and I'm not saying that anything in an observer's rest frame will be observed to shrink relative to anything else in the same reference frame. What I am saying is that if you were to make a computer model, using some arbitrary coordinates for the spatial locations of the objects, you could imagine doing a 3D rendition of the situation (using some 3D modelling tool) and either making the poles wider apart, or making the sphere's diameter smaller with motion, or some combination of the two. You could also have some cartoon characters watching the simulation and reasoning that, if they couldn't tell which way the author had done it, the truth is that it wasn't either way; but I think that reasoning is flawed.

Feb 26, 2015, post #55, PeterDonis (Mentor, Insights Author):

name123 said: What I am saying is that if you were to make a computer model, using some arbitrary coordinates for the spatial locations of the objects, you could imagine doing a 3D rendition of the situation (using some 3D modelling tool) and either making the poles wider apart, or making the sphere's diameter smaller with motion, or some combination of the two.

So what? The numbers in the computer don't mean anything by themselves; they only have meaning when they are linked to observable quantities. "Making the poles wider apart" vs. "making the sphere diameter smaller with motion" doesn't change any observable quantities; all it changes is the mathematical relationship between observable quantities and the numbers in the computer. That mathematical relationship has no physical meaning; it's just an artifact of the computer model.

name123 said: You could also have some cartoon characters watching the simulation and reasoning that, if they couldn't tell which way the author had done it, the truth is that it wasn't either way; but I think that reasoning is flawed.

Of course it is.
The cartoon characters are ignoring the actual observable quantities. If they looked at those, it would be obvious "which way the author had done it", because the two different ways he could have done it correspond to two different mathematical relationships between the numbers in the computer and the observable quantities.

Feb 26, 2015, post #56, name123:

PeterDonis said: So what? The numbers in the computer don't mean anything by themselves; they only have meaning when they are linked to observable quantities. "Making the poles wider apart" vs. "making the sphere diameter smaller with motion" doesn't change any observable quantities; all it changes is the mathematical relationship between observable quantities and the numbers in the computer. That mathematical relationship has no physical meaning; it's just an artifact of the computer model.

Why couldn't it have a physical meaning? Why couldn't it be that when the spaceship applies its thrusters its size changes, and not the size of the universe (cause and effect, for starters, I would have thought), for some physical reason?

PeterDonis said: Of course it is. The cartoon characters are ignoring the actual observable quantities.

The cartoon characters could use a whole library of cartoons on the same simulation, with switching camera angles or whatever, in slow motion or however, showing experimenters doing whatever tests from the various reference frames they like. What are you saying the cartoon characters would be ignoring that you wouldn't be?

Feb 26, 2015, post #57, PeterDonis (Mentor, Insights Author):

name123 said: Why couldn't it have a physical meaning

Um, because it's a computer model, not the real thing?
name123 said: why couldn't it be that when the spaceship applies its thrusters its size changes and not the size of the universe (cause and effect for starters I would have thought) for some physical reason. Here you're not talking about a computer model, you're talking about a real spaceship. The reason its size doesn't change when its thrusters are turned on is that its size doesn't change: a person in the spaceship making measurements of its size will get the same results as they did before the thrusters were turned on. That's what "its size doesn't change" means. It is simply meaningless to talk about its "size changing" in its own rest frame when all the measurement results in its own rest frame are the same. name123 said: The cartoon characters could use a whole library of cartoons on the same simulation, with switching camera angles or whatever, in slow motion or however, showing experimenters doing whatever tests from the various reference frames they like. And what results do the experimenters get? name123 said: What are you saying the cartoon characters would be ignoring that you wouldn't be? I was saying that, in order for them to not be able to tell whether the moving sphere was shrinking, or the stationary poles were getting farther apart, they would have to be ignoring the measurement results obtained by the experimenters. If they take those results into account, they will see that the experimenters at rest relative to the poles have unchanged measurement results for the distance between the poles, but different measurement results for the diameter of the sphere (compared to when the sphere was at rest relative to them and the poles). And that makes it clear that the sphere shrank; that is what "the sphere shrank but the poles stayed the same" means. 
If the numbers in the computer representing the sphere stayed the same while the numbers representing the poles changed, that just means the relationship between those numbers and the measurement results had to change. That's fine, because the numbers in the computer have no physical meaning by themselves; the only things with physical meaning are the measurement results. Feb 26, 2015 58 name123 PeterDonis said: Um, because it's a computer model, not the real thing? You had seemed to be saying that shrinking or expanding had no physical meaning and only had a meaning in the computer model. So I was asking you why shrinking couldn't have a physical meaning: why couldn't it be that when the spaceship applies its thrusters its size changes and not the size of the universe (cause and effect for starters, I would have thought) for some physical reason? PeterDonis said: Here you're not talking about a computer model, you're talking about a real spaceship. The reason its size doesn't change when its thrusters are turned on is that its size doesn't change: a person in the spaceship making measurements of its size will get the same results as they did before the thrusters were turned on. That's what "its size doesn't change" means. It is simply meaningless to talk about its "size changing" in its own rest frame when all the measurement results in its own rest frame are the same. So what happens to its size from the perspective of an observer not in its rest frame? Regarding the cartoon characters you asked: PeterDonis said: And what results do the experimenters get? I was saying that, in order for them to not be able to tell whether the moving sphere was shrinking, or the stationary poles were getting farther apart, they would have to be ignoring the measurement results obtained by the experimenters.
If they take those results into account, they will see that the experimenters at rest relative to the poles have unchanged measurement results for the distance between the poles, but different measurement results for the diameter of the sphere (compared to when the sphere was at rest relative to them and the poles). And that makes it clear that the sphere shrank; that is what "the sphere shrank but the poles stayed the same" means. If the numbers in the computer representing the sphere stayed the same while the numbers representing the poles changed, that just means the relationship between those numbers and the measurement results had to change. That's fine, because the numbers in the computer have no physical meaning by themselves; the only things with physical meaning are the measurement results. Why would they be ignoring them? They can have the experimenter results from any rest frame they like. And imagine they look at all the simulation perspectives that you think they'd need to look at. And they get the results you would expect from the equations. Also, in reality it wouldn't be correct to say that only the measurement results were important. For example, consider the "now" that you experience. At any point in the "now" that you experience, it seems reasonable to assume that others are also experiencing the "now", and that there was only one experience they were having associated with your now. So you can conclude that if you were an A team observer passing a B team observer, and you disagreed about what the now is for the A team member further down, you can be sure that you're not both right while keeping the same meaning of now, because that member won't be having an experience of two different times simultaneously. Just because no clock can be shown to be in synch with the now, it doesn't mean that there isn't one. Presumably it would have to be one of them. We can understand "now" from the experience.
Could a robot have such an experience, could you measure it, could it ever be meaningless? If the answer to the last one is no, as what you know is that you're not experiencing nothing, then any reasoning that leads you to contradict reality, i.e. to claim it would be meaningless, would be flawed (I think). Last edited: Feb 26, 2015 Feb 26, 2015 59 PeterDonis Mentor name123 said: You had seemed to be saying that shrinking or expanding had no physical meaning No, I said that the numbers in the computer model, by themselves, have no physical meaning. I have repeatedly explained what the physical meaning of "shrinking" is: it means the result you get when you measure the object's length, using the observable that defines "length", changes. name123 said: So what happens to its size from the perspective of an observer not in its rest frame? We've been over this. See post #49. The spaceship acts just like the sphere in that post; if the spaceship is, say, atoms long, then those atoms, when the spaceship is moving, will fit in the same space as some smaller number of atoms at rest ( atoms in the example in post #49). Feb 26, 2015 60 PeterDonis Mentor name123 said: Why would they be ignoring them Why are you asking me? You're the one that made up the computer scenario. I merely pointed out a logical consequence of your original description of the scenario. Your original description said the cartoon characters couldn't tell whether the spaceship had shrunk or the poles had gotten farther apart. I pointed out that, if the characters looked at the experimenters' measurement results, they would be able to tell; so your statement that they couldn't tell implies that they must not be looking at the experimenters' measurement results. name123 said: For example, consider the "now" that you experience. I really think you need to get clear on what "length" means before you start speculating about "now".
name123 said: At any point in the "now" that you experience, it seems reasonable to assume that others are also experiencing the "now" I'm not sure what this (and the rest of your post that expands on it) means, but if you are trying to say that "now" must have some absolute meaning, that's not correct. In relativity, "now" has no absolute meaning; it is frame-dependent. name123 said: We can understand "now" from the experience. Relativity is not a theory of consciousness. The word "observer" is used, but an "observer" in the sense of relativity does not have to be conscious. Anything that can make permanent records of experimental results is an "observer" in the sense of relativity. We don't need to bring in consciousness (which is good, since consciousness is off topic for this forum anyway). Feb 26, 2015 61 name123 PeterDonis said: Why are you asking me? You're the one that made up the computer scenario. I merely pointed out a logical consequence of your original description of the scenario. Your original description said the cartoon characters couldn't tell whether the spaceship had shrunk or the poles had gotten farther apart. I pointed out that, if the characters looked at the experimenters' measurement results, they would be able to tell; so your statement that they couldn't tell implies that they must not be looking at the experimenters' measurement results. And I've said you can imagine that they have a video library of what happened. In slow motion, from different rest frame perspectives, different experimenter results.
Now from your other post you seem to have acknowledged that, in the computer model, the question of how it was done (whether the sphere shrank or the poles grew further apart) can have a meaning. And when I asked why shrinking and expanding couldn't have a physical meaning (so that the computer simulation is analogous to an imagined physical one), such that when the spaceship applies its thrusters its size changes and not the size of the universe (cause and effect for starters, I would have thought) for some physical reason, you seem to be saying that you are ok with shrinking and expanding having a physical meaning, and that you were just saying that the computer model itself wouldn't have that literal physical meaning. You can imagine you are the cartoon character: you can look at any of the videos you like, and get any experimenter results you like, and you can conclude, if you like, that because you can't tell whether the author had it that the sphere shrank or the poles got further apart, it is safe to conclude that it wasn't done either way, or that it is a meaningless question (even though if the cartoon avatar of the author came on, it could understand the question and give the right answer). As I said, I think there is a flaw in that reasoning. But what would you deduce from the videos as the cartoon character (the experiment results are as expected given the equations)? PeterDonis said: I'm not sure what this (and the rest of your post that expands on it) means, but if you are trying to say that "now" must have some absolute meaning, that's not correct. In relativity, "now" has no absolute meaning; it is frame-dependent. Now does have an absolute meaning for you personally though, doesn't it? And it would have for the next team A observer down.
So can you see how it could be true that, simultaneously with you experiencing, a team A member was experiencing simultaneously with the team B member opposite it, who was experiencing simultaneously with another team B member, who was experiencing simultaneously with you in the future? Last edited: Feb 26, 2015 Feb 26, 2015 62 PeterDonis Mentor name123 said: And I've said you can imagine that they have a video library of what happened. In slow motion, from different rest frame perspectives, different experimenter results. In which case the cartoon characters can tell, from the experimental results, whether the sphere shrank or the poles got farther apart. Which you said they couldn't do, when you originally described the computer model setup. Make up your mind what you are trying to describe. name123 said: you seem to be saying that you are ok with shrinking and expanding having a physical meaning Sure; it means somebody's experimental results changed. name123 said: you were just saying that the computer model itself wouldn't have that literal physical meaning Not by itself, no, because the computer model is just a computer model. For it to have physical meaning, you have to know the relationship between the numbers in the computer model and experimental results. If the numbers in the computer model change but the experimental results stay the same, then nothing physical has changed. name123 said: You can imagine you are the cartoon character: you can look at any of the videos you like, and get any experimenter results you like How? Isn't this computer model supposed to be a model of what actually happens in the actual experiments in the real world? If whatever is labeled "experimental results" in the computer model doesn't match up with any experimental results in the real world, what's the point?
And if whatever is labeled "experimental results" in the computer does have to match up with experimental results in the real world, then the cartoon characters cannot just "get any experimenter results" that they like. They can only get the actual experimental results that happen in the actual world. Otherwise the computer model is broken and needs to be fixed. You seem to think you can make a computer model, label some numbers in it "experimental results", have those numbers be different from actual experimental results in the real world, and still conclude something about the real world from the computer model. That does not seem like a fruitful strategy to me. name123 said: what would you deduce from the videos as the cartoon character (the experiment results are as expected given the equations)? If the cartoon characters can only get the experimental results that are obtained in the real world, then they can tell whether the sphere shrank or the poles got further apart. They just look at the appropriate experimental results. I've been over this repeatedly; I'm not going to go through the details again. Feb 26, 2015 63 PeterDonis Mentor name123 said: Now does have an absolute meaning for you personally though, doesn't it? If by "now" you mean "what is happening to me at a given reading on my clock", then yes, of course. But I don't see what that has to do with the rest of what you said. Feb 26, 2015 64 name123 PeterDonis said: In which case the cartoon characters can tell, from the experimental results, whether the sphere shrank or the poles got farther apart. Which you said they couldn't do, when you originally described the computer model setup. Make up your mind what you are trying to describe. Well, you know the scenario, and they can have videos from the poles' rest frame, the sphere's rest frame, whatever. So imagine you are the cartoon character.
You can imagine the experiment, point out what the result would be in whichever frame you might choose to look at a video from. So what would the results tell you about whether the author shrank the sphere, or made the poles wider, or did something else? In other words, explain how the videos will tell them which way it happened. Feb 26, 2015 65 PeterDonis Mentor name123 said: what would the results tell you about whether the author shrank the sphere, or made the poles wider or did something else? Look at whichever results changed when the sphere is moving relative to the poles, compared to when the sphere is at rest relative to the poles. I've already explained this. And I've already pointed you again at that explanation. I'm not going to keep repeating myself. Feb 26, 2015 66 name123 PeterDonis said: Look at whichever results changed when the sphere is moving relative to the poles, compared to when the sphere is at rest relative to the poles. I've already explained this. And I've already pointed you again at that explanation. I'm not going to keep repeating myself. You may think you've explained it, but I can't remember reading an explanation. As far as I have read, you'd be saying that from the perspective of the poles the sphere contracted and the poles didn't change size. From the perspective of the sphere it looks as though the distance between the poles expands. So there would be some experimental results that I'd assume imply something has changed the distance it spans. But I also understand you to be stating that if you go around asking in any given frame of reference nothing has changed distance, and you seemed to conclude that it was therefore meaningless to ask whether that something was now spanning a larger or smaller distance.
But you seemed to accept that it could have a meaning in the computer model, and that, analogous to that, it might be possible to imagine a physical reality in which there were physical reasons why it was the spaceship getting smaller, and not the universe getting bigger. I haven't heard you state how any of this would enable the cartoon characters to tell if the computer program running had shrunk the sphere or expanded the distance between the poles, or whether you would conclude neither, or both, or that the question was meaningless. So could you please just explain clearly how you could tell if you were the cartoon character with a video library of whatever videos of the event you think you'd need? So what videos you'd need and what conclusion you thought you could draw from them would be useful, because I don't see how you can do it. (The author could have made any rest frame absolute rest, and you couldn't tell which. The equations allow any frame of reference to be considered absolute rest). Last edited: Feb 26, 2015 Feb 26, 2015 67 PeterDonis Mentor name123 said: As far as I have read, you'd be saying that from the perspective of the poles the sphere contracted and the poles didn't change size. More precisely: from the perspective of the poles, the sphere, which is atoms long (note that that number is an invariant, it's the same in all frames), can fit between the poles, which are separated by only atoms, when the sphere is moving relative to the poles at 87% of the speed of light. name123 said: From the perspective of the sphere it looks as though the distance between the poles expands. No, it doesn't. I talked about this in a previous post, and maline was trying to explain it to you too. From the perspective of the sphere, the distance between the poles in the ##x## direction contracts. Length contraction is symmetric: the sphere looks shorter in the frame of the poles, and the pole separation looks shorter in the frame of the sphere.
However, from the perspective of the sphere, the poles also move in the ##y## direction (just as, from the perspective of the poles, the sphere moves in the ##y## direction in order to slip between the poles), and they start moving in that direction at different times. (This is where relativity of simultaneity comes into it.) One pole moves before the other, so the sphere can slip in between the poles even though the distance between the poles in the ##x## direction is shortened. In terms of the "length in atoms" that I have been using as the definition of length, in the sphere's frame, there are atoms between the poles in the ##x## direction, but they are moving at 87% of the speed of light in the ##x## direction, so they fit in the ##x## direction alongside only atoms of the sphere's diameter--i.e., 1/4 of that diameter. The motion of the poles in the ##y## direction is what lets the sphere slip between them. Feb 26, 2015 68 PeterDonis Mentor name123 said: I also understand you to be stating that if you go around asking in any given frame of reference nothing has changed distance. That's not quite what I said. What I said is that the length of each object in its own rest frame doesn't change. That's because the number of atoms that make up its length doesn't change. The sphere's length is atoms, and the pole separation is atoms, and those numbers never change, and are the same in all frames. And since the number of atoms is the same, the rest length is the same--any measurement of length that is done at rest relative to the object will always get the same answer. The length contraction only comes in when there is relative motion between the object and whatever is measuring its length. name123 said: could you please just explain clearly how you could tell if you were the cartoon character In each video, count the number of atoms separating the two poles, and the number of atoms in the sphere along its diameter. They are always the same.
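The arithmetic behind this description can be sketched in a few lines. This is an illustrative script, not part of the original thread; the pole positions and rest lengths are made-up placeholder values, and only the 87%-of-c speed comes from the posts above:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v):
    """Lorentz factor for speed v (v < c)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.87 * C
g = gamma(v)
print(g)  # about 2: at 87% of c, moving x-lengths contract to roughly half

# Length contraction is symmetric: each frame measures the OTHER object's
# x-extent divided by gamma, while rest lengths (atom counts measured at
# rest) never change. Placeholder rest lengths, in arbitrary units:
sphere_diameter_rest = 1.0
pole_separation_rest = 1.0
print(sphere_diameter_rest / g)  # sphere's x-diameter in the poles' frame
print(pole_separation_rest / g)  # pole separation in the sphere's frame

# Relativity of simultaneity: suppose both poles start their slow y-motion
# at pole-frame time t = 0, at different x positions x1 and x2. Lorentz
# transforming with t' = gamma * (t - v*x/c^2) gives different sphere-frame
# times, so one pole starts its y-motion before the other.
x1, x2 = 0.0, 10.0  # placeholder pole x-positions in metres
t1 = g * (0.0 - v * x1 / C**2)
t2 = g * (0.0 - v * x2 / C**2)
print(t1, t2)  # unequal: the starts are not simultaneous in the sphere's frame
```

Both frame-dependent effects in the post (symmetric contraction and non-simultaneous starts) fall out of the same transformation, while the rest lengths fed in never change.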
That indicates that none of the rest lengths (lengths measured by a device at rest relative to the object) change. So any change in length of objects that are moving (the sphere appearing shorter in the frame of the poles, or the pole separation appearing shorter in the frame of the sphere) can only be due to the relative motion; it can't be due to any "intrinsic" change in the objects, since that would have shown up as a change in the number of atoms making up the length of the object. Feb 26, 2015 69 name123 PeterDonis said: In each video, count the number of atoms separating the two poles, and the number of atoms in the sphere along its diameter. They are always the same. That indicates that none of the rest lengths (lengths measured by a device at rest relative to the object) change. So any change in length of objects that are moving (the sphere appearing shorter in the frame of the poles, or the pole separation appearing shorter in the frame of the sphere) can only be due to the relative motion; it can't be due to any "intrinsic" change in the objects, since that would have shown up as a change in the number of atoms making up the length of the object. Why couldn't it be a greater density, rather than a greater amount? From the poles' frame of reference the atom density is greater in the sphere, and so they can claim that the sphere has undergone length contraction as you defined it. So there is a claim there that the sphere has shrunk, and from the sphere's perspective less atoms fit between the poles, and presumably this is the same for the change of distance in y between the two poles if there had been one to start with (it would have got smaller). So they both note that a change in span has occurred.
So they're pretty sure something has changed span but neither of them can notice anything changing in their rest frame, so you think it might be ok to conclude that the change occurred in neither (you mention that they won't detect any change in the rest frame), or both, or one maybe, I'm not clear, but you seem to be stating that you thought it was logical to assume that neither could have been the rest frame in the program, even though you knew the equations allowed any frame of reference to be considered the rest frame? Last edited: Feb 26, 2015 Feb 26, 2015 70 PeterDonis Mentor name123 said: Why couldn't it be a greater density, rather than a greater amount? In other words, why couldn't the distance between the atoms that are lined up change? We can verify that that doesn't change too, by measurements.
The claim that "the sphere is shrunk" is from the perspective of the poles: the moving sphere, with atoms lined up, can fit between the two poles at rest, with only atoms between them. The claim that "from the sphere's perspective less atoms fit between the poles" is, as it says right there, from the sphere's perspective, not the poles' perspective. In the ##x## direction, the two poles take up only atoms along the sphere, even though there are atoms between the poles. name123 said: presumably this is the same for the change of distance in y between the two poles No. The length contraction is only in the ##x## direction. The relative motion in the ##y## direction between the sphere and the poles is too slow to have any significant length contraction effect. name123 said: they both note that a change in span has occurred. They each measure the other to be length contracted in the ##x## direction. But these are two different measurements. name123 said: they're pretty sure something has changed span in neither of them can notice anything changing in their rest frame. This sentence is a bit garbled, but I assume you mean "they're pretty sure something has changed even though neither of them can notice anything changing in their rest frame". As you state it, this is false; each of them, in their own rest frame, measures the other to be length contracted, whereas if they were both at rest relative to each other they would not. So their relative motion does cause an observable change. The point is simply that there is no observable change in measurements that each one makes in their own rest frame of themselves--no measurement of the sphere in the sphere's rest frame changes, and no measurement of the poles in the poles' rest frame changes. Feb 26, 2015 71 name123 PeterDonis said: In other words, why couldn't the distance between the atoms that are lined up change? We can verify that that doesn't change too, by measurements.
In addition to counting the atoms, we also use strain gauges to measure the inter-atomic forces, and verify that they don't change either--all of the strain gauges mounted on the sphere read the same when it is moving at 87% of the speed of light relative to the poles, as when it is at rest relative to the poles. (Or think up any other measure of density you like; the measurement, if made by apparatus at rest relative to the sphere, will be the same regardless of the sphere's speed relative to the poles.) Well, just rewinding to the conveyor belt scenario, where roughly 29,979,245.8 A-Team 1m rulers are laid out between each of the B-Team observers and then the conveyor belt is started. When it gets up to speed, they make their own 1m rulers from the same material and find that the distance between the B-Team observers is 30,130,275.702 B-Team 1m rulers. Are you saying that the atomic density in those B-Team rulers is the same but they are just shorter (have fewer atoms)? PeterDonis said: No. The length contraction is only in the ##x## direction. The relative motion in the ##y## direction between the sphere and the poles is too slow to have any significant length contraction effect. It might have been traveling faster in the y than the x. PeterDonis said: This sentence is a bit garbled, but I assume you mean "they're pretty sure something has changed even though neither of them can notice anything changing in their rest frame". As you state it, this is false; each of them, in their own rest frame, measures the other to be length contracted, whereas if they were both at rest relative to each other they would not. So their relative motion does cause an observable change. The point is simply that there is no observable change in measurements that each one makes in their own rest frame of themselves--no measurement of the sphere in the sphere's rest frame changes, and no measurement of the poles in the poles' rest frame changes.
But that doesn't mean there is no observable change anywhere. Yes it is the observable change with relative motion that we are talking about. Is it ok for the cartoon characters to conclude that something has changed span? Feb 26, 2015 72 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: roughly 29,979,245.8 A-Team 1m rulers are laid out between each of the B-Team observers Ok. name123 said: it gets up to speed and they make their own 1m rulers from the same material and they find that the distance between the B-Team observers is 30,130,275.702 B-Team 1m rulers No, they wouldn't. They would find that the distance between the B-Team observers is 29,979,245.8 B-Team rulers. But now that the conveyor is moving, the distance between B-Team observers, as measured in Team A's frame, is less than 29,979,245.8 Team A rulers. name123 said: It might have been traveling faster in the y than the x. If so, then the analysis of the scenario gets a lot more complicated, because you have length contraction in both the x and the y directions. I was not trying to analyze that case; I was assuming that the relative speed in the y direction is too small to cause any significant relativistic effects, so that we only have length contraction in the x direction. Mar 1, 2015 73 name123 510 5 PeterDonis said: No, they wouldn't. They would find that the distance between the B-Team observers is 29,979,245.8 B-Team rulers. But now that the conveyor is moving, the distance between B-Team observers, as measured in Team A's frame, is less than 29,979,245.8 Team A rulers. Assume the A-Team and the B-Team are both at rest, the conveyor belt hasn't started. 
The A-Team make their A-Team 1m rulers, and the B-Team make their first batch of 1m rulers, and they both measure the span between the A-Team members standing opposite B-Team members and find the distance between the A-Team members and the distance between the B-Team members to be equivalent to the span of roughly 29,979,245.8 1m rulers, each of the rulers that the A-Team made being roughly the same size as each of the rulers the B-Team made. The conveyor belt then starts up to 0.1c. Presumably there will still be the same number of rulers between the B-Team members (both types can be laid out between them). Imagine the B-Team make a new batch of 1m rulers (while going at 0.1c on the conveyor belt). How many of this new batch will the B-Team members find it will take to span the distance between the A-Team and B-Team members? Regarding the poles and the sphere, you had said: "So their relative motion does cause an observable change." And I asked: "Is it ok for the cartoon characters to conclude that something has changed span?" But you didn't reply. Last edited: Mar 1, 2015 Mar 1, 2015 74 PeterDonis Mentor name123 said: the conveyor belt hasn't started.
But not the same amount of A-Team rulers; those rulers are moving relative to the B-Team, so the number of A-Team rulers between two B-Team members will be smaller. The gamma factor for a relative speed of 0.100000000 c (I'm using 9 significant figures since that's your stated accuracy) is 1.00503781; dividing 29,979,245.8 by this gives 29,828,973.1, so that is how many A-Team rulers now fit between two B-Team members. name123 said: Imagine the B-Team make a new batch of 1m rulers (while going at 0.1c on the conveyor belt). How many of this new batch will the B-Team members find it will take to span the distance between the A-Team and B-Team members? There will be 29,979,245.8 of these new B-Team rulers between two B-Team members (the same as the original B-Team rulers)--this is assuming that these new rulers are at rest relative to the B-Team. There will be a smaller number of these new B-Team rulers between two A-Team members, because these rulers are moving relative to the A-Team. This will be the same number we calculated above: 29,828,973.1 new B-Team rulers (and the same number of original B-Team rulers) will fit between two A-Team members. name123 said: Regarding the poles and the sphere, you had said:"So their relative motion does cause an observable change." And I asked: "Is it ok for the cartoon characters to conclude that something has changed span?" But you didn't reply. Because that scenario brings in other elements that are irrelevant, like the whole computer simulation thing. The A-Team and B-Team scenario with the rulers is simpler and clearer, so I'm sticking to that one. You should be able to figure out the answer to the cartoon character question from what I'm saying about the rulers. Mar 1, 2015 75 name123 510 5 name123 said: Assume the A-Team and the B-Team are both at rest, the conveyor belt hasn't started. 
The A-Team make their A-Team 1m rulers, and the B-Team make their first batch of 1m rulers, and they both measure the span between the A-Team members standing opposite B-Team members and find the distance between the A-Team members and the distance between the B-Team members to be the equivalent to the span of roughly 29,979,245.8 1m rulers, each of the rulers that the A-Team made being roughly the same size as each of the rulers the B-Team made. The conveyor belt then starts up to 0.1c. Presumably there will still be the same amount of rulers between the B-Team members (both types can be laid out between them). Imagine the B-Team make a new batch of 1m rulers (while going at 0.1c on the conveyor belt). How many of this new batch will the B-Team members find it will take to span the distance between the A-Team and B-Team members? PeterDonis said: The same amount of B-Team rulers, yes, because those rulers are at rest relative to the B-Team. But not the same amount of A-Team rulers; those rulers are moving relative to the B-Team, so the number of A-Team rulers between two B-Team members will be smaller. The gamma factor for a relative speed of 0.100000000 c (I'm using 9 significant figures since that's your stated accuracy) is 1.00503781; dividing 29,979,245.8 by this gives 29,828,973.1, so that is how many A-Team rulers now fit between two B-Team members. There will be 29,979,245.8 of these new B-Team rulers between two B-Team members (the same as the original B-Team rulers)--this is assuming that these new rulers are at rest relative to the B-Team. There will be a smaller number of these new B-Team rulers between two A-Team members, because these rulers are moving relative to the A-Team. This will be the same number we calculated above: 29,828,973.1 new B-Team rulers (and the same number of original B-Team rulers) will fit between two A-Team members.
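[Editor's aside: the gamma factor and contracted ruler count quoted above can be checked with a short Python sketch. The numbers are the ones used in this thread; nothing in the code comes from the original posts beyond those values.]

```python
import math

span_rulers = 29979245.8   # span between team members, in 1 m rulers (thread's number)
beta = 0.1                 # relative speed as a fraction of c

# Lorentz factor: gamma = 1 / sqrt(1 - v^2/c^2)
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# Number of moving rulers that fit in the same rest span: divide by gamma
contracted = span_rulers / gamma

print(f"gamma = {gamma:.8f}")                   # 1.00503782 to 8 decimal places
print(f"contracted count = {contracted:.1f}")   # about 29,828,973
```

Dividing by the full-precision gamma gives about 29,828,972.9; the 29,828,973.1 quoted in the post appears to come from dividing by the rounded gamma 1.00503781, so the two agree to the precision being discussed.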
So they can observe that the A-Team observers have 29,979,245.8 of the old B-Team rulers between them, and that you wouldn't get as many of the new ones. So an old B-Team ruler according to the B-Team is 1/29,979,245.8th of the span between the A-Team members and according to their observations a new B-Team ruler is 1/29,828,973.1th of the span between A-Team members. When they slow back down again the B-Team find that the new B-Team rulers were only 1/29,979,245.8th of the span? Would there be any difference in the clock times? name123 said: Regarding the poles and the sphere, you had said: "So their relative motion does cause an observable change." And I asked: "Is it ok for the cartoon characters to conclude that something has changed span?" But you didn't reply. PeterDonis said: Because that scenario brings in other elements that are irrelevant, like the whole computer simulation thing. The A-Team and B-Team scenario with the rulers is simpler and clearer, so I'm sticking to that one. You should be able to figure out the answer to the cartoon character question from what I'm saying about the rulers. They are two different points. The cartoon character scenario is an analogy where the cartoon characters are analogous to physicists, or philosophers, and the computer simulation is analogous to imagined physical causes. So since it's a different question, and it seems a simple enough question, and was less effort to answer I would have thought, could you just say why the computer simulation could not be thought to be analogous to imagined physical causes (you've already accepted the question has meaning with both) or answer the question "Is it ok for the cartoon characters to conclude that something has changed span?"
Last edited: Mar 1, 2015 Mar 1, 2015 76 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: So they can observe that the A-Team observers have 29,979,245.8 of the old B-Team rulers between them Not if the old B-Team rulers are moving with the B-Team. I assumed that they were, but now it seems like you're saying they aren't. If the old B-Team rulers are left behind when the B-Team starts moving, so they stay at rest relative to the A-Team, then they are just the same as the A-Team rulers. Is this what you mean? I'll assume it is for the rest of this post. name123 said: So an old B-Team ruler according to the B-Team is 1/29,979,245.8th of the span between the A-Team members and according to their observations a new B-Team ruler is 1/29,828,973.1th of the span between A-Team members. While the B-Team is moving relative to the A-Team, yes. But a new B-Team ruler (which is at rest relative to the B-Team) is 1/29,979,245.8th of the span between B-Team members. name123 said: When they slow back down again the B-Team find that the new B-Team rulers were only 1/29,979,245.8th of the span? Meaning, when the conveyor stops again and the B-Team is now at rest again relative to the A-Team, the new B-Team rulers are 1/29,979,245.8th of the span between A-Team members? Yes, that's correct. name123 said: Would there be any difference in the clock times? Yes, the B-Team's clocks would show less elapsed time than the A-Team's clocks. name123 said: They are two different points. No, they're not--at least, not if you intend the computer simulation to be correct. If the computer simulation doesn't have to model what actually happens in the real world, then I have nothing to say about it, because you can make the numbers anything you like. If the computer simulation does model what actually happens in the real world, then, as I said, you can figure out what the cartoon characters can conclude by analogy with the A-Team/B-Team case that we're discussing.
name123 said: it seems a simple enough question, and was less effort to answer I would have thought It didn't seem at all simple to me, because if the computer simulation is a correct model of what happens in the real world, why not just talk about what happens in the real world? And if the computer simulation does not have to correctly model what happens in the real world, what's the point of it? Mar 1, 2015 77 name123 510 5 PeterDonis said: Not if the old B-Team rulers are moving with the B-Team. I assumed that they were, but now it seems like you're saying they aren't. I thought I'd made it clear: there are two sets of old B-Team rulers, one set between the A-Team members and one set between the B-Team members. name123 said: Would there be any difference in the clock times? PeterDonis said: Yes, the B-Team's clocks would show less elapsed time than the A-Team's clocks. When the conveyor belt stops and they are both in the A-Team's frame of reference? name123 said: They are two different points. PeterDonis said: No, they're not--at least, not if you intend the computer simulation to be correct. If the computer simulation doesn't have to model what actually happens in the real world, then I have nothing to say about it, because you can make the numbers anything you like. If the computer simulation does model what actually happens in the real world, then, as I said, you can figure out what the cartoon characters can conclude by analogy with the A-Team/B-Team case that we're discussing. Ok, we can have the cartoon characters watching videos on the A-Team/B-Team case, but we can change it slightly and imagine that each observer is represented by a thin pole. And that the B-Team have a y velocity also. So we can imagine a pair of adjacent B-Team members thinking that they are going to pass two specific adjacent A-Team members on the y axis.
So if you imagine the B-Team member closer to x'=0 to be facing in the positive y direction (I know it is also being thought of as a thin pole, but you can just switch as convenient) can it think that it passed the A-Team pair on the y-axis with both of them to its right, and the other member of the B-Team pair think that it passed the A-Team pair on the y-axis with both of them on its left (imagine it too is facing in positive y)? name123 said: The cartoon character scenario is an analogy where the cartoon characters are analogous to physicists, or philosophers, and the computer simulation is analogous to imagined physical causes. So since it's a different question, and it seems a simple enough question, PeterDonis said: It didn't seem at all simple to me, because if the computer simulation is a correct model of what happens in the real world, why not just talk about what happens in the real world? And if the computer simulation does not have to correctly model what happens in the real world, what's the point of it? The point is that it is clear with the computer model that it could have been modeled using any frame as absolute rest. So it is analogous, and yet highlights that jumping to a certain conclusion involves a logical fallacy. name123 said: and was less effort to answer I would have thought, could you just say why the computer simulation could not be thought to be analogous to imagined physical causes (you've already accepted the question has meaning with both) or answer the question "Is it ok for the cartoon characters to conclude that something has changed span?" So could you now say why the computer simulation couldn't be thought of as analogous to imagined physical causes, or answer the question? Last edited: Mar 1, 2015 Mar 1, 2015 78 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: I thought I'd made it clear, there are two sets of old B-Team rulers, one set between the A-Team members and one set between the B-Team members.
It's clear now, but it wasn't in previous posts. name123 said: When the conveyor belt stops and they are both in the A-Team's frame of reference? Yes. The B-Team's clocks will be running at the same rate as the A-Team's clocks once the conveyor belt stops, but they won't show the same total elapsed time, because they were running slower than the A-Team's clocks during the period when the conveyor belt was running. name123 said: we can have the cartoon characters watching videos on the A-Team/B-Team case, but we can change it slightly Can you draw a diagram? I'm having real trouble following your verbal descriptions. You keep piling on scenario after scenario and I can't keep up. It looks like you're trying to reproduce the sphere and poles scenario with the conveyor belts, but I don't understand how you're doing it. Or, if you can't draw a diagram, at least pick some reference frame (the A-Team's would be fine), and give coordinates (time and space) of all relevant events in this frame. You should be doing this anyway in order to analyze the scenario; just waving your hands with verbal descriptions without doing the math is not likely to give useful results. name123 said: it is clear with the computer model that it could have been modeled using any frame as absolute rest And doing that means you're not simulating the real world, since in the real world there is no such thing as absolute rest. I get that you believe you are able to draw some kind of useful conclusion about the real world by thinking about this computer model, but I don't see how. If the model is not constrained by the actual laws of physics, then how can its conclusions be relevant? name123 said: could you now say why the computer simulation couldn't be thought of as analogous to imagined physical causes I have said so, repeatedly. I just said it again, above. Mar 2, 2015 79 name123 510 5 PeterDonis said: Can you draw a diagram? I'm having real trouble following your verbal descriptions. 
You keep piling on scenario after scenario and I can't keep up. It looks like you're trying to reproduce the sphere and poles scenario with the conveyor belts, but I don't understand how you're doing it. Or, if you can't draw a diagram, at least pick some reference frame (the A-Team's would be fine), and give coordinates (time and space) of all relevant events in this frame. You should be doing this anyway in order to analyze the scenario; just waving your hands with verbal descriptions without doing the math is not likely to give useful results. Is there a way I can attach a diagram? Imagine the A-Team/B-Team scenario except that the A-Team are on a suspended walkway (the z axis is used for the height) which is parallel to the conveyor belt, but having greater Y coordinates. You can imagine poles coming down underneath where each of the A-Team members are standing. The major difference is that the floor the conveyor belt is on moves in the +ve y direction so that the whole conveyor belt passes under the suspended walkway. The poles from the suspended walkway being long enough to touch a standing B-Team observer, but not long enough to hinder the conveyor belt passing underneath. Can you get the idea from that description? So from what you've said, once up at 0.1c, the B-Team will think there is less space between the A-Team members than the B-Team members and the A-Team members will think there is less space between the B-Team members than the A-Team members. So now there is some y momentum so that the two teams will pass on the y axis. And the question is whether a pair of B-Team members will be able to pass under the suspended walkway with a pair of A-Team members' poles passing in between the B-Team members. Presumably the A-Team members will think the gap between the B-Team members is smaller, and that the two B-Team members will be able to pass in between 2 of the suspended poles. I find it difficult to see how they could both be right, and so wondered what would happen.
Does that help at all? name123 said: The point is that it is clear with the computer model that it could have been modeled using any frame as absolute rest. So it is analogous, and yet highlights that jumping to a certain conclusion involves a logical fallacy. PeterDonis said: And doing that means you're not simulating the real world, since in the real world there is no such thing as absolute rest. I get that you believe you are able to draw some kind of useful conclusion about the real world by thinking about this computer model, but I don't see how. If the model is not constrained by the actual laws of physics, then how can its conclusions be relevant? So could you now say why the computer simulation couldn't be thought of as analogous to imagined physical causes, or answer the question? PeterDonis said: I have said so, repeatedly. I just said it again, above. I've already said that the computer simulation follows the same laws of physics. But you are surely aware that with those laws it could have been modeled using any rest frame as absolute rest. It'd just work everything out from that frame's perspective. Are you denying that this is possible? So everything in the video library would look as you'd expect it to, but the videos themselves wouldn't allow you to tell if it was modeled using a particular frame as the absolute rest, or no frame as absolute rest. So the point is that if it is a logical fallacy for the computer characters to conclude that there isn't absolute rest (made clear by us being able to understand that from the equations it can be modeled that way), then why wouldn't it be a logical fallacy for you to conclude that there isn't absolute rest? What information do you have about the underlying physical reality which they don't have about the computer model? Last edited: Mar 2, 2015 Mar 2, 2015 80 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: Is there a way I can attach a diagram?
Yes, there is an "Upload a file" button at the bottom right of the editing window. name123 said: Can you get the idea from that description? Yes, thanks. name123 said: at 0.1c the B-Team will think there is less space between the A-Team members than the B-Team members and the A-Team members will think there is less space between the B-Team members than the A-Team members. Yes. However, that doesn't make the situation completely symmetrical, because the poles are at rest relative to the A-Team members, not the B-Team members. See below. name123 said: the question is whether a pair of B-Team members will be able to pass under the suspended walkway with a pair of A-Team members' poles passing in between the B-Team members. The answer to this is "no". Here's why: First, if we look at things in the A-Team's rest frame, it is obvious that the B-Team members will pass in between the poles. But, as I noted above, what makes it obvious is that the poles are at rest relative to the A-Team members, so the only things that are moving are the B-Team members. (Note that we are also assuming that the relative velocity in the ##y## direction is slow enough that we can ignore length contraction in the ##y## direction. As I noted a number of posts ago, adding length contraction in the ##y## direction would make this scenario considerably more complicated.) Now look at things in the B-Team's rest frame. In this frame, the poles are closer together, in the ##x## direction, than the B-Team members are. However, and this is the key point, in this frame, the poles are offset in the y direction, i.e., they are at different ##y## coordinates at any given instant of time in the B-Team frame (in the A-Team frame, they are at the same ##y## coordinate). This is because of relativity of simultaneity: if the poles are at the same ##y## coordinate at the same time in the A-Team frame, they cannot be at the same ##y## coordinate at the same time in the B-Team frame, because "at the same time" means different things in the two frames.
So in the B-Team frame, what happens is that the two B-Team members slip between the poles because the combination of the poles' separation in the ##x## direction and their separation in the ##y## direction creates an opening that is large enough for them to pass through. (Note that the poles are moving "diagonally" in the B-Team frame.) name123 said: I've already said that the computer simulation follows the same laws of physics. Yes, and I've already said that that is not consistent with this: name123 said: you are surely aware that with those laws it could have been modeled using any rest frame as absolute rest. No, I am not aware of that. See below. name123 said: It'd just work everything out from that frame's perspective. Are you denying that this is possible? You can run the model in any frame you like, yes. That is not the same as defining that frame as "absolute rest". The existence of an "absolute rest" frame would have physical consequences--there would be experimental results that would be different from the ones we actually find in the real world. (For example, the Michelson-Morley experiment would have different results.) Otherwise "absolute rest" is just a meaningless label. name123 said: why wouldn't it be a logical fallacy for you to conclude that there isn't absolute rest? Because we've done experiments that test whether there is absolute rest, and those experiments said there isn't. See above. Mar 2, 2015 81 name123 510 5 PeterDonis said: The answer to this is "no". Here's why: First, if we look at things in the A-Team's rest frame, it is obvious that the B-Team members will pass in between the poles. But, as I noted above, what makes it obvious is that the poles are at rest relative to the A-Team members, so the only things that are moving are the B-Team members. (Note that we are also assuming that the relative velocity in the ##y## direction is slow enough that we can ignore length contraction in the ##y## direction.
As I noted a number of posts ago, adding length contraction in the ##y## direction would make this scenario considerably more complicated.) Now look at things in the B-Team's rest frame. In this frame, the poles are closer together, in the ##x## direction, than the B-Team members are. However, and this is the key point, in this frame, the poles are offset in the y direction, i.e., they are at different ##y## coordinates at any given instant of time in the B-Team frame (in the A-Team frame, they are at the same ##y## coordinate). This is because of relativity of simultaneity: if the poles are at the same ##y## coordinate at the same time in the A-Team frame, they cannot be at the same ##y## coordinate at the same time in the B-Team frame, because "at the same time" means different things in the two frames. So in the B-Team frame, what happens is that the two B-Team members slip between the poles because the combination of the poles' separation in the ##x## direction and their separation in the ##y## direction creates an opening that is large enough for them to pass through. (Note that the poles are moving "diagonally" in the B-Team frame.) Consider the B-Team pair. One is closer to x' = 0. You seem to be saying that it passed the A-Team pair member closest to x = 0 with that A-Team member to its left (imagine the B-Team member in question was looking in the +ve y direction), is that correct? PeterDonis said: You can run the model in any frame you like, yes. That is not the same as defining that frame as "absolute rest". The existence of an "absolute rest" frame would have physical consequences--there would be experimental results that would be different from the ones we actually find in the real world. (For example, the Michelson-Morley experiment would have different results.) Otherwise "absolute rest" is just a meaningless label.
I'm not sure what an experiment about whether anything could be considered to physically support light waves has to do with it. Perhaps you'd like to mention what properties you were thinking absolute rest would need to have. I was simply thinking that it would have to be that one was absolute rest either by virtue of the frame of reference the computer simulation used, in which velocity = 0. Or that there was perhaps some imagined physical reason why something was not at rest. I thought that space wasn't exactly empty, so doesn't motion in relation to that background have experimental consequences? Or that God maintains the model, and like the computer simulation any frame could have been chosen as absolute rest. All three make sense to me, so I don't see how you can say they have no meaning, as opposed to perhaps you not understanding the meaning. Perhaps when you say what properties you thought absolute rest implied I'll understand. Mar 2, 2015 82 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: Consider the B-Team pair. One is closer to x' = 0. You seem to be saying that when it passed the A-Team pair member closest to x = 0 with the A-Team member being to its left (imagine the B-Team member in question was looking in the +ve y direction), is that correct? "To its left" is ambiguous, and also doesn't fully capture the relative motion. Also, more importantly, "when it passed" is ambiguous, and in fact is not even well-defined in the B-Team frame. That is to say, in the A-Team frame, there is a single instant of time (which we can call ##t_0##) at which all four objects--the two poles, and the two B-Team members passing between them--have the same ##y## coordinate (which we can designate as ##y_0##). But in the B-Team frame, there is no single instant of time at which this is true. That is, there is no instant of time, in the B-Team frame, at which all four objects have the same ##y## coordinate.
That is because of relativity of simultaneity (as I have already explained), and is part of the key to understanding this scenario. To answer the question as best I can, given the limitations I've just mentioned, if we designate the A-Team pair (with the poles) as A1 (at ##x = 0##) and A2 (at ##x = L##, where ##L## is the separation between the poles in the A-Team frame), and if we designate the B-Team pair as B1 (the one that passes closest to A1) and B2 (the one that passes closest to A2), then when B1 passes A1 (meaning, when both of them have the same ##y## or ##y'## coordinate, depending on which frame we are using), B1's ##x## (or ##x'##) coordinate is larger than A1's; and when B2 passes A2, B2's ##x## (or ##x'##) coordinate is smaller than A2's. I have suggested a couple of times now that, instead of waving your hands with ordinary language descriptions, you either draw a spacetime diagram or write down explicitly the math involved--the coordinates of all important events, and how they transform between the two frames. Doing that will make the answers to this and a lot of other questions obvious. name123 said: Perhaps you'd like to mention what properties you were thinking absolute rest would need to have. It depends on whether you think "absolute rest" has physical consequences. If it doesn't, then it's not a physical property or a physical thing, and talking about it is off topic in this forum. That's why I haven't bothered addressing that possibility. If "absolute rest" does have physical consequences (which is how I've been using the term), then, as I've said several times, now, there will be experiments that will give different results depending on whether there is "absolute rest" or not. (One famous one is the Michelson-Morley experiment.) As far as your computer simulation is concerned, once again, if "absolute rest" has physical consequences, and if the simulation is correct, then the simulation will simulate different experimental results depending on whether "absolute rest" exists or not.
So it's easy to tell whether the simulation is using "absolute rest" by just looking at what experimental results appear in it. name123 said: I thought that space wasn't exactly empty, so doesn't motion in relation to that background have experimental consequences? What do you mean by "background"? There are certainly other objects in the universe, and those objects have particular states of motion, so one can measure whether one is at rest or moving relative to those objects. But there is no "background" independent of the objects; there is no way to measure "motion" relative to some "background" that is different from motion relative to any of the objects. name123 said: All three make sense to me, so I don't see how you can say they have no meaning I didn't say they have "no meaning" period. I said they have no physical meaning--that is, if we have two different simulations, and they both make the same predictions for all experimental results, then there is no physical meaning to saying that one uses the "absolute rest" frame but the other uses some other frame that is not at "absolute rest". There might well be non-physical meaning to that statement--after all, you can just look at the numbers in the two computers and see that they're different. But the label "absolute rest" for one set of numbers is not a physical label; it doesn't correspond to any physical difference, because all the experimental results are the same in both simulations. 
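[Editor's aside: the relativity-of-simultaneity point in the pole discussion above can be made concrete with a short Python sketch. The pole separation L and the 0.1c belt speed are the thread's numbers; the y drift speed u is an arbitrary illustrative value, not one from the posts.]

```python
import math

c = 299792458.0   # speed of light, m/s
v = 0.1 * c       # belt speed along x (thread's value)
L = 29979245.8    # pole separation along x in the A-Team frame, m
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Two events simultaneous in the A-Team frame: both poles at t = 0,
# pole 1 at x = 0 and pole 2 at x = L.
# Lorentz transform of the time coordinate: t' = gamma * (t - v*x/c**2)
t1p = gamma * (0.0 - v * 0.0 / c**2)   # = 0
t2p = gamma * (0.0 - v * L / c**2)     # earlier than t1p in the B-Team frame

dt = t1p - t2p   # how out-of-sync "the same A-frame instant" is in the B frame
print(f"simultaneity offset dt' = {dt:.6f} s")   # about 0.01 s with these numbers

# If the poles drift in y at some slow speed u in the B-Team frame
# (illustrative value only), the time offset becomes a y' stagger:
u = 1.0e-7               # m/s, arbitrary slow y speed
y_stagger = u * dt
print(f"y' stagger between poles = {y_stagger:.3e} m")
```

This is the mechanism behind the "poles are offset in the y direction" statement: events at the same ##y## at the same A-frame time are not simultaneous in the B-Team frame, so moving poles end up at different ##y'## at any one B-frame instant.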
Mar 2, 2015 83 name123 510 5 PeterDonis said: To answer the question as best I can, given the limitations I've just mentioned, if we designate the A-Team pair (with the poles) as A1 (at ##x = 0##) and A2 (at ##x = L##, where ##L## is the separation between the poles in the A-Team frame), and if we designate the B-Team pair as B1 (the one that passes closest to A1) and B2 (the one that passes closest to A2), then when B1 passes A1 (meaning, when both of them have the same ##y## or ##y'## coordinate, depending on which frame we are using), B1's ##x## (or ##x'##) coordinate is larger than A1's; and when B2 passes A2, B2's ##x## (or ##x'##) coordinate is smaller than A2's. Two questions here. Firstly, how do the B-Team explain that they passed through the A-Team poles as you described when they are saying the gap between those two poles is narrower than the gap between the B-Team pair? Secondly can both teams conclude that something has changed span? PeterDonis said: It depends on whether you think "absolute rest" has physical consequences. If it doesn't, then it's not a physical property or a physical thing, and talking about it is off topic in this forum. That's why I haven't bothered addressing that possibility. The whole point was that it wouldn't have any physical consequences, that is why they could no more tell in the simulation which way it was done, than you could tell how it was done in an imagined physical universe. There could be an imagined physical reason why it should be considered that the spaceship changed size when it applied its thrusters and not the universe. If it was imagined that way round, then it would clearly be a physical property that was being imagined. 
Understanding that several physical realities with different physical properties could be measured to be the same is like the cartoon characters understanding that, although they might not be able to detect which way it was modeled (because different models could be measured the same), it doesn't follow that how it was modeled wasn't a property of the model. PeterDonis said: What do you mean by "background"? There are certainly other objects in the universe, and those objects have particular states of motion, so one can measure whether one is at rest or moving relative to those objects. But there is no "background" independent of the objects; there is no way to measure "motion" relative to some "background" that is different from motion relative to any of the objects. Virtual particles appearing in a vacuum for example (as the background). Presumably there is some measured effect that supports the theory that this happens. Last edited: Mar 2, 2015 Mar 2, 2015 84 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: Firstly, how do the B-Team explain that they passed through the A-Team poles as you described when they are saying the gap between those two poles is narrower than the gap between the B-Team pair? Um, because they are not saying the gap between the poles is narrower? They are saying it is shorter in the x direction; but the gap in the B-Team frame is not just in the x direction. It is also in the y direction. The full gap, combining both the x and y components, is large enough for the two B-Team members to fit through. name123 said: can both teams conclude that something has changed span? If "changed span" means "shows relativistic length contraction", then yes. Otherwise I don't know what you mean by "changed span". You keep talking as if this has some other meaning, but I don't know what it is. The only "change in length" that occurs in this scenario is relativistic length contraction.
name123 said: The whole point was that it wouldn't have any physical consequences Then any questions about it are not physics questions and are off topic in this forum. name123 said: Virtual particles appearing in a vacuum The quantum vacuum is Lorentz invariant; it does not define an "absolute rest" frame. It looks the same in every frame. Mar 3, 2015 85 name123 510 5 PeterDonis said: Um, because they are not saying the gap between the poles is narrower? They are saying it is shorter in the x direction; but the gap in the B-Team frame is not just in the x direction. It is also in the y direction. The full gap, combining both the x and y components, is large enough for the two B-Team members to fit through. Oh, so they are saying the gap between the A-Team members spans more rulers than the gap between the B-Team members? PeterDonis said: If "changed span" means "shows relativistic length contraction", then yes. Otherwise I don't know what you mean by "changed span". You keep talking as if this has some other meaning, but I don't know what it is. The only "change in length" that occurs in this scenario is relativistic length contraction. I find it hard to believe that you couldn't understand any other meaning, such as certain physical lengths changing, because I had explained it, and showed why it makes sense as a physical property and thus it would be a logical fallacy to conclude there was no rest frame. Remember you effectively said that Ok, you could understand it as a physical property but because it didn't make any difference to the physics it was off topic on this forum. Which is fine, but if you assume there is no absolute rest frame it is still a logical fallacy even if you don't wish it discussed. Another sense of the absolute rest frame would be that its clocks are in synch with the "now" (so presentism). I'm not sure of the motivation for promoting the fallacy.
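[Editor's aside: the "diagonal gap" explanation above can be checked numerically. In the A-Team frame both B-Team members cross the line of the poles at the same instant; transforming those two crossing events to the B-Team frame shows they are not simultaneous there, which is why no single B-frame instant requires both members to fit through a contracted x'-gap at once. L and the 0.1c speed are the thread's numbers; the 1 m clearance eps is an illustrative value.]

```python
import math

c = 299792458.0
v = 0.1 * c
L = 29979245.8    # pole separation in the A-Team frame, m (thread's number)
eps = 1.0         # each B member passes 1 m inside its pole (illustrative)
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def to_b_frame(t, x):
    """Lorentz transform (t, x) from the A-Team frame to the B-Team frame."""
    tp = gamma * (t - v * x / c**2)
    xp = gamma * (x - v * t)
    return tp, xp

# In the A-Team frame both B members cross the line of the poles at the
# same instant t = 0: B1 just inside pole A1 (x = 0), B2 just inside A2 (x = L).
t1p, x1p = to_b_frame(0.0, 0.0 + eps)
t2p, x2p = to_b_frame(0.0, L - eps)

# In the B-Team frame the two crossings are NOT simultaneous: B2 crosses first.
print(t2p < t1p)                                 # True
print(f"crossing time gap: {t1p - t2p:.6f} s")   # about 0.01 s here
```

The positions x1p, x2p are computed for completeness; the point of the sketch is the time coordinates, which show the relativity of simultaneity doing the work in the B-Team description.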
Last edited: Mar 3, 2015 Mar 3, 2015 86 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: Oh, so they are saying the gap between the A-Team members spans more rulers than the gap between the B-Team members? No. They are saying that the gap between the B-Team members is purely in the ##x'## direction; but the gap between the A-Team members is diagonal in the B-Team frame (it has components in both the ##x'## and the ##y'## directions). So the component of the B-Team gap that is in the same direction as the A-Team gap is shorter than the A-Team gap; but that doesn't mean the full B-Team gap is shorter than the A-Team gap. Note also that length contraction of the A-Team rulers in the B-Team frame only applies in the ##x'## direction, so you can't just plug numbers into the length contraction formula and expect it to give correct answers for the comparison between the rulers. This is why I have said (repeatedly) that you need to actually assign coordinates to all the events of interest and actually do the math of transforming them from one frame to the other. name123 said: you effectively said that Ok, you could understand it as a physical property but because it didn't make any difference to the physics it was off topic on this forum. No, that is not what I said. I said that since it is not a physical property (because it doesn't make any difference to any physical measurements), it is off topic on this forum. Which means that discussion of it is closed. Last edited: Mar 3, 2015 Mar 3, 2015 87 name123 510 5 PeterDonis said: No. They are saying that the gap between the B-Team members is purely in the ##x'## direction; but the gap between the A-Team members is diagonal in the B-Team frame (it has components in both the ##x'## and the ##y'## directions). So the component of the B-Team gap that is in the same direction as the A-Team gap is shorter than the A-Team gap; but that doesn't mean the full B-Team gap is shorter than the A-Team gap.
But remember we can imagine the rulers laid out between the A-Team pair, and rulers laid out between the B-Team pair before they start the conveyor belt. So there is a set number of rulers between them. But the B-Team think that the rulers in the A-Team's frame of reference have shrunk, and that fewer rulers of the span they make now would fit. Presumably if we imagine the A-Team poles and the B-Team members to be replaced by very thin filaments, the thinner they are imagined to be the better, then the y momentum could be relatively quite slow, say 1/10,000,000th m/s. Do you think it shows that the 29,979,245.8 old B-Team rulers between the A-Team pair span a greater distance than the 29,979,245.8 old B-Team rulers between the B-Team pair, or what type of angle between them were you thinking that y-momentum would make? name123 said: I find it hard to believe that you couldn't understand any other meaning, such as certain physical lengths changing, because I had explained it, and showed why it makes sense as a physical property and thus it would be a logical fallacy to conclude there was no rest frame. Remember you effectively said that Ok, you could understand it as a physical property but because it didn't make any difference to the physics it was off topic on this forum. PeterDonis said: No, that is not what I said. I said that since it is not a physical property (because it doesn't make any difference to any physical measurements), it is off topic on this forum. Which means that discussion of it is closed. Which is fine, but if you assume there is no absolute rest frame it is still a logical fallacy even if you don't wish it discussed. Another sense of the absolute rest frame would be that its clocks are in synch with the "now" (so presentism). I'm not sure of the motivation for promoting the fallacy.
Regarding my understanding that you were effectively saying that you understood it as a physical property but because it didn't make any difference to the physics it was off topic in this forum, I got the impression when I said: name123 said: The whole point was that it wouldn't have any physical consequences, that is why they could no more tell in the simulation which way it was done, than you could tell how it was done in an imagined physical universe. There could be an imagined physical reason why it should be considered that the spaceship changed size when it applied its thrusters and not the universe. If it was imagined that way round, then it would clearly be a physical property that was being imagined. Understanding that several physical realities with different physical properties could be measured to be the same, is like understanding that the cartoon characters could understand that although they might not be able to detect which way it was modeled, because different models could be measured the same, it doesn't follow that how it was modeled wasn't a property of the model. And you replied to the part "The whole point was that it wouldn't have any physical consequences" PeterDonis said: Then any questions about it are not physics questions and are off topic in this forum. I had assumed you were saying that if it hadn't any physical consequences that it wasn't a topic for the forum, and were accepting the point that it could have no physical consequences but still be a physical property, by virtue of you not disputing that the assumption that there is no rest frame is a logical fallacy. 
Last edited: Mar 3, 2015 Mar 3, 2015 88 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: I had assumed you were saying that if it hadn't any physical consequences that it wasn't a topic for the forum, and were accepting the point that it could have no physical consequences but still be a physical property, by virtue of you not disputing that the assumption that there is no rest frame is a logical fallacy. You assumed incorrectly. The fact that I did not bother to dispute something you said does not imply that I agree with it. I may just have not bothered because it's off topic for this forum. The topic is closed. If you bring it up again I will issue a warning. Last edited: Mar 3, 2015 Mar 3, 2015 89 name123 510 5 PeterDonis said: You assumed incorrectly. The fact that I did not bother to dispute something you said does not imply that I agree with it. The topic is closed. If you bring it up again I will issue a warning. So regarding the other point we can imagine the rulers laid out between the A-Team pair, and rulers laid out between the B-Team pair before they start the conveyor belt. So there is a set number of rulers between them. But the B-Team think that the rulers in the A-Team's frame of reference have shrunk, and that fewer rulers of the span they make now would fit. Presumably if we imagine the A-Team poles and the B-Team members to be replaced by very thin filaments, the thinner they are imagined to be the better, then the y momentum could be relatively quite slow, say 1/10,000,000th m/s. Do you think it shows that it can't be as the B-Team thought, which is that 29,979,245.8 new B-Team rulers would fit between the B-Team pair but only 29,828,973.1 between the A-Team pair, which they wouldn't have fitted through; or were you thinking a 1/10,000,000th m/s momentum would create a large enough angle that it'd fit through?
Mar 3, 2015 90 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: we can imagine the rulers laid out between the A-Team pair, and rulers laid out between the B-Team pair before they start the conveyor belt. Yes. But also, before the belt is started, both sets of rulers are pointing solely in the ##x## direction. After the belt is started, that is not the case. name123 said: the B-Team think that the rulers in the A-Team's frame of reference have shrunk, and that fewer rulers of the span they make now would fit Yes. name123 said: if we imagine the A-Team poles and the B-Team members to be replaced by very thin filaments, the thinner they are imagined to be the better, then the y momentum could be relatively quite slow say 1/10,000,000th m/s. Why don't you work out the actual numbers, as I have repeatedly suggested, instead of guessing? name123 said: Do you think it shows that the 29,979,245.8 old B-Team rulers between the A-Team pair span a greater distance than the 29,979,245.8 old B-Team rulers between the B-Team pair Obviously the same set of rulers, pointed in the same direction, at rest relative to each other, will span the same distance. However, once the conveyor belt starts, the B-Team pair is not pointed in the same direction as the old B-Team rulers. I have repeatedly pointed this out. Why don't you work out the actual numbers? name123 said: what type of angle between them were you thinking that y-momentum would make? Why don't you work out the actual numbers, as I have repeatedly suggested? I have repeatedly given you the solution of the problem. All you have to do is work out the numbers. Mar 3, 2015 91 name123 510 5 PeterDonis said: Why don't you work out the actual numbers, as I have repeatedly suggested? I have repeatedly given you the solution of the problem. All you have to do is work out the numbers. Well, I've laid out the numbers in the scenario, but I've not done the calculation with x and y before.
And if others are reading it, maybe it'd be the same for them. Could you do it please? Mar 3, 2015 92 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: Could you do it please? No, but I'll give the Lorentz transformation equations that apply. If the relative velocity between the frames is ##v## in the ##x## direction and ##w## in the ##y## direction, then the transformation from the unprimed (A-Team) frame to the primed (B-Team) frame is $$ t' = \gamma \left( t - v x \right) $$ $$ x' = \gamma \left( x - v t \right) $$ $$ y' = y - w t $$ where ##\gamma = 1 / \sqrt{1 - v^2}## (we assume that ##w## is too small to affect length contraction or time dilation), and we leave out the ##z## coordinate since there is no relative motion in that direction. (Note that this transformation is approximate; it is not valid if ##w## is not small. That case is considerably more complicated mathematically, and you don't need that extra complication to solve this problem.) So if you define coordinates for all the relevant events in the A-Team frame, you can use the above transformation to get the coordinates of those events in the B-Team frame. Answers to all the questions you want to ask can then be read off of those coordinate values. Mar 3, 2015 93 name123 510 5 PeterDonis said: No, but I'll give the Lorentz transformation equations that apply. If the relative velocity between the frames is ##v## in the ##x## direction and ##w## in the ##y## direction, then the transformation from the unprimed (A-Team) frame to the primed (B-Team) frame is $$ t' = \gamma \left( t - v x \right) $$ $$ x' = \gamma \left( x - v t \right) $$ $$ y' = y - w t $$ where ##\gamma = 1 / \sqrt{1 - v^2}## (we assume that ##w## is too small to affect length contraction or time dilation), and we leave out the ##z## coordinate since there is no relative motion in that direction. (Note that this transformation is approximate; it is not valid if ##w## is not small. That case is considerably more complicated mathematically, and you don't need that extra complication to solve this problem.)
So if you define coordinates for all the relevant events in the A-Team frame, you can use the above transformation to get the coordinates of those events in the B-Team frame. Answers to all the questions you want to ask can then be read off of those coordinate values. I thought the gamma equation was ##\gamma = 1 / \sqrt{1 - (v^2/c^2)}## was that a mistake? Mar 3, 2015 94 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: I thought the gamma equation was ##\gamma = 1 / \sqrt{1 - (v^2/c^2)}## was that a mistake? I am using units in which ##c = 1##. So ##v = 0.1## in these units (given your specification of the problem). The easiest units of time and distance, given your numbers, are probably seconds and light-seconds, so 29,979,245.8 meters is a distance of 0.1 (i.e., 0.1 light-seconds). Mar 3, 2015 95 name123 510 5 Ok, so I had a try at doing the calculations. So with v = 29,979,245.8 and w = 0.00000001, gamma is roughly 1.0050378. And at t = 0 the x coordinates are as follows:
A1: x = 0
A2: x = 29,979,245.8
B1: x = 50,000
B2: x = 29,878,973.1
According to the B-Team the time and coordinates are roughly as follows:
A1: t' = 0, x' = 0
A2: t' = -0.01005038, x' = 30,130,275.70
B1: t' = -0.00001676, x' = 50,251.89
B2: t' = -0.01001676, x' = 30,029,497.85
It seemed to me that from the B-Team's perspective at t' = -0.006:
A1: x' = 179,875.47
A2: x' = 30,008,848.42
B1: x' = 50,251.89
B2: x' = 30,029,497.85
So A-Team is within the B-Team bounds, and it seems to me that there is no time according to the B-Team when: B1.x' > A1.x' AND B2.x' < A2.x' is true from their perspective. I presume I've done it wrong. Taking into account the Y distance didn't seem to make any significant difference (as it was so small). Mar 3, 2015 96 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: I had a try at doing the calculations. On re-checking the approximation I gave in my previous post, I realized that it's not good enough; there are additional terms that have to be taken into account.
(Briefly, I was assuming that all of the corrections were second order in ##w##, but I was wrong; there are correction terms that are first order in ##w## and therefore have to be included.) A better approximation is: $$ t' = \gamma \left( t - v x - w y \right) $$ $$ x' = \gamma \left( x - v t \right) + \left( \gamma - 1 \right) \frac{w}{v} y $$ $$ y' = y - \gamma w t + \left( \gamma - 1 \right) \frac{w}{v} x $$ Note that even this might not be good enough, depending on how small you pick ##w## and how many significant figures you look at. The fully correct transformation is: $$ t' = \gamma \left( t - v x - w y \right) $$ $$ x' = - \gamma v t + \left[ 1 + \left( \gamma - 1 \right) \frac{v^2}{v^2 + w^2} \right] x + \left( \gamma - 1 \right) \frac{vw}{v^2 + w^2} y $$ $$ y' = - \gamma w t + \left( \gamma - 1 \right) \frac{vw}{v^2 + w^2} x + \left[ 1 + \left( \gamma - 1 \right) \frac{w^2}{v^2 + w^2} \right] y $$ where now we are using ##\gamma = 1 / \sqrt{1 - v^2 - w^2}## so we take both speeds into account (and remember units are always such that ##c = 1##). Sorry for the mixup on my part. name123 said: Taking into account the Y distance didn't seem to make any significant difference (as it was so small). If you leave out the motion in the y direction, you leave out the critical feature of the problem, because if you only look at the "fit" in the x direction, the A-Team will not fit. You have to look at the y coordinates. What you should be checking for is the path in the x'-y' plane of the two A-Team poles, and where each pole passes relative to the two B-Team members (who are at rest in the B-Team frame). You can't figure that out just from looking at x coordinates. I also suggest picking a value for w that doesn't require you to do calculations accurate to 30 or more significant figures in order to see actual variation in the y coordinates. Try, for example, a value of w that is 1/100 or 1/1000 the value of v, so you only need 7 or 8 figures at most. Last edited: Mar 3, 2015 Mar 3, 2015 97 name123 510 5 PeterDonis said: On re-checking the approximation I gave in my previous post, I realized that it's not good enough; there are additional terms that have to be taken into account. (Briefly, I was assuming that all of the corrections were second order in ##w##, but I was wrong; there are correction terms that are first order in ##w## and therefore have to be included.)
A better approximation is: $$ t' = \gamma \left( t - v x - w y \right) $$ $$ x' = \gamma \left( x - v t \right) + \left( \gamma - 1 \right) \frac{w}{v} y $$ $$ y' = y - \gamma w t + \left( \gamma - 1 \right) \frac{w}{v} x $$ Note that even this might not be good enough, depending on how small you pick ##w## and how many significant figures you look at. The fully correct transformation is: $$ t' = \gamma \left( t - v x - w y \right) $$ $$ x' = - \gamma v t + \left[ 1 + \left( \gamma - 1 \right) \frac{v^2}{v^2 + w^2} \right] x + \left( \gamma - 1 \right) \frac{vw}{v^2 + w^2} y $$ $$ y' = - \gamma w t + \left( \gamma - 1 \right) \frac{vw}{v^2 + w^2} x + \left[ 1 + \left( \gamma - 1 \right) \frac{w^2}{v^2 + w^2} \right] y $$ where now we are using ##\gamma = 1 / \sqrt{1 - v^2 - w^2}## so we take both speeds into account (and remember units are always such that ##c = 1##). Sorry for the mixup on my part. If you leave out the motion in the y direction, you leave out the critical feature of the problem, because if you only look at the "fit" in the x direction, the A-Team will not fit. You have to look at the y coordinates. What you should be checking for is the path in the x'-y' plane of the two A-Team poles, and where each pole passes relative to the two B-Team members (who are at rest in the B-Team frame). You can't figure that out just from looking at x coordinates. I also suggest picking a value for w that doesn't require you to do calculations accurate to 30 or more significant figures in order to see actual variation in the y coordinates. Try, for example, a value of w that is 1/100 or 1/1000 the value of v, so you only need 7 or 8 figures at most. Have you the equations in the normal form where the unit of c isn't 1? (a link maybe, I'm having trouble finding it) Last edited by a moderator: Mar 3, 2015 Mar 3, 2015 98 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: Have you the equations in the normal form where the unit of c isn't 1? 
It's easy to modify them. The general rule is to put ##ct##, ##ct'## in place of ##t##, ##t'## and ##v/c##, ##w/c## in place of ##v##, ##w##. If you work it out, though, you will see that the only equation that actually changes is the ##t'## equation; if you put ##v/c^2##, ##w/c^2## in place of ##v##, ##w## in that equation, that will do it. Mar 4, 2015 99 name123 510 5 PeterDonis said: It's easy to modify them. The general rule is to put ##ct##, ##ct'## in place of ##t##, ##t'## and ##v/c##, ##w/c## in place of ##v##, ##w##. If you work it out, though, you will see that the only equation that actually changes is the ##t'## equation; if you put ##v/c^2##, ##w/c^2## in place of ##v##, ##w## in that equation, that will do it. Not clear if that is ##v/c##, ##w/c## in place of ##v##, ##w## or ##v/c^2##, ##w/c^2## in place of ##v##, ##w##. Have you a link to the equations in normal form? Also regarding the calculations I'd done they'd be correct for x right? So could the gap between the A-Team members if there was a y-direction also be worked out using gap = SQRT(x-length ^2 + y-length ^2) so that I could work out the minimum y-length that would be required to allow the B-Team members through, and just use the normal equation to see if that y-length is big enough? Mar 4, 2015 100 PeterDonis Mentor Insights Author 48,988 25,080 name123 said: Not clear if that is ##v/c##, ##w/c## in place of ##v##, ##w## or ##v/c^2##, ##w/c^2## in place of ##v##, ##w##. Modify just the ##t'## equation using the latter. name123 said: Have you a link to the equations in normal form? Have you tried googling "Lorentz transformation"? name123 said: regarding the calculations I'd done they'd be correct for x right? Not if you used the formulas I gave before. Note that the new formulas I gave add terms in all the equations (##t'##, ##x'##, and ##y'##).
name123 said: could the gap between the A-Team members if there was a y-direction also be worked out using gap = SQRT(x-length ^2 + y-length ^2) so that I could work out the minimum y-length that would be required to allow the B-Team members through, and just use the normal equation to see if that y-length is big enough? No. You are still missing a key aspect of the situation: in the B-Team frame, the line connecting the two A-Team members is at an angle from the line connecting the two B-Team members. You need to look at the x-y plane; there is no shortcut. Here is an outline of the steps I recommend: (1) We have four observers, two A-Team (call them A1 and A2) and two B-Team (call them B1 and B2). The key condition of the problem is that there is an instant of time in the A-Team frame (call it ##t = 0##) at which these four observers are lined up along the ##x## axis (i.e., they all have ##y = 0##) in the following order (going from smaller to larger ##x## coordinates): A1, B1, B2, A2. This is what it means to say that the two B-Team members pass between the two A-Team members. (2) The above condition gives us coordinates for four events in the A-Team frame. (3) Use the Lorentz transformation to obtain the coordinates for these four events in the B-Team frame. (4) In the B-Team frame, B1 and B2 are at rest, so the coordinates obtained for them are valid at any time ##t'##. So to find out whether they still pass between A1 and A2 in the B-Team frame, compute the worldlines of A1 and A2 in that frame, using the coordinates obtained above and the fact that A1 and A2 both move with speed ##\sqrt{v^2 + w^2}## in this frame, and check to see where they are in relation to the fixed coordinates of B1 and B2. Note that you can actually do all of this using general formulas; you don't need to plug in numbers. However, picking specific numbers and then graphing the results in the ##x'##-##y'## plane may help to visualize what is going on.
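Following the repeated suggestion above to actually work out the numbers, here is a minimal Python sketch of the exact two-component boost quoted in post #97. The function name, the choice of w = v/100 (per the advice in post #96), and the sample event are illustrative assumptions, not from the thread:

```python
import math

def boost(t, x, y, v, w):
    """Exact Lorentz boost with velocity components v (x direction) and
    w (y direction), in units where c = 1 (time in seconds, distance in
    light-seconds). Implements the 'fully correct transformation' above."""
    g = 1.0 / math.sqrt(1.0 - v**2 - w**2)  # gamma uses the full speed
    s2 = v**2 + w**2
    tp = g * (t - v * x - w * y)
    xp = -g * v * t + (1 + (g - 1) * v**2 / s2) * x + (g - 1) * v * w / s2 * y
    yp = -g * w * t + (g - 1) * v * w / s2 * x + (1 + (g - 1) * w**2 / s2) * y
    return tp, xp, yp

# Thread's numbers: v = 0.1 (29,979,245.8 m/s); w chosen as v/100.
v, w = 0.1, 0.001
# Sample event: A2's position at t = 0, a gap of 0.1 light-seconds along x.
tp, xp, yp = boost(0.0, 0.1, 0.0, v, w)
# A genuine boost preserves the spacetime interval t^2 - x^2 - y^2.
assert abs((tp**2 - xp**2 - yp**2) - (-0.1**2)) < 1e-12
```

The same function can be applied to all four events from step (2) to read off their ordering in the B-Team frame, which is the check steps (3) and (4) describe.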
https://kingofthecurve.org/blog/mcat-starling-forces-guide
Understanding Starling Forces: Mastering Capillary Exchange for the MCAT
MCAT Physiology
Written By Usman Ali
Capillary exchange is one of those sneaky high-yield topics the MCAT loves to wrap in physiology passages and clinical experiments. Whether you’re reading about edema or interpreting fluid shifts across membranes, Starling forces are key. Today’s blog will help you visualize, memorize, and apply the four forces behind capillary exchange—with King of the Curve tools to make it stick.
🧪 What Are Starling Forces?
Starling forces describe the movement of fluid across capillary walls due to pressure gradients. These gradients determine whether fluid moves out of the capillary (filtration) or back into it (reabsorption). There are four main forces:

| Force | Description |
| --- | --- |
| Capillary Hydrostatic Pressure (Pc) | Pushes fluid out of the capillary (blood pressure) |
| Interstitial Hydrostatic Pressure (Pi) | Pushes fluid into the capillary from tissue |
| Capillary Oncotic Pressure (πc) | Pulls fluid into the capillary (plasma proteins) |
| Interstitial Oncotic Pressure (πi) | Pulls fluid out toward tissue proteins |

🧠 MCAT tip: Net flow = filtration – reabsorption, and the Starling equation integrates all four!
✍️ The Starling Equation
Net Filtration Pressure (NFP) = (Pc − Pi) − (πc − πi)
Where:
Pc = Capillary hydrostatic pressure
Pi = Interstitial hydrostatic pressure
πc = Capillary oncotic pressure
πi = Interstitial oncotic pressure
🧠 Positive NFP = net fluid leaves the capillary (filtration)
🧠 Negative NFP = net fluid enters the capillary (reabsorption)
💡 Clinical Tie-Ins the MCAT Loves
✅ 1. Liver Failure → Low Oncotic Pressure
Less albumin = ↓ πc → less reabsorption → edema
✅ 2. Heart Failure → High Pc
Increased venous pressure = ↑ capillary hydrostatic pressure → fluid pushed out = edema
✅ 3.
Burns or Inflammation → ↑ πi
Proteins leak into interstitial space → pulls fluid out of capillaries = swelling
KOTC QOTDs regularly include real-world MCAT-style cases like this to train your reasoning.
📚 High-Yield Starling Forces Summary Table

| Situation | Change in Force | Result |
| --- | --- | --- |
| Liver failure | ↓ πc (Capillary Oncotic Pressure) | Edema |
| Heart failure | ↑ Pc (Capillary Hydrostatic Pressure) | Edema |
| Severe dehydration | ↑ πc (Capillary Oncotic Pressure) | Reabsorption |
| Burn injury | ↑ πi (Interstitial Oncotic Pressure) | Fluid loss into tissue |
| Lymphatic blockage | ↑ Pi (Interstitial Hydrostatic Pressure) | Impaired drainage |

🎯 Final Tips to Master Starling Forces
Know the equation, but more importantly, understand the balance
Think like the test: What force is changing, and what’s the net fluid movement?
Use visual aids and cause-effect flashcards (available in the KOTC app)
✅ Call-to-Action (CTA)
Understanding Starling forces means mastering MCAT physiology, clinical logic, and visual memory all in one. Don’t memorize—internalize it with King of the Curve’s visuals, timed quizzes, and QOTDs.
👉 Start your free trial now
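The sign logic of the Starling equation above can be captured in a few lines. A minimal Python sketch (the function name and the sample pressures in mmHg are illustrative, not from the post):

```python
def net_filtration_pressure(p_c, p_i, pi_c, pi_i):
    """Starling equation: NFP = (Pc - Pi) - (pi_c - pi_i), all in mmHg.
    Positive NFP -> net filtration (fluid leaves the capillary);
    negative NFP -> net reabsorption (fluid enters the capillary)."""
    return (p_c - p_i) - (pi_c - pi_i)

# Illustrative arteriolar-end values: Pc = 35, Pi = 0, pi_c = 25, pi_i = 3
assert net_filtration_pressure(35, 0, 25, 3) == 13  # positive: filtration
# Liver failure lowers pi_c (less albumin), so NFP rises -> more filtration -> edema
assert net_filtration_pressure(35, 0, 15, 3) > 13
```

This mirrors the clinical tie-ins: raise Pc or πi, or lower πc, and the NFP shifts toward filtration and edema.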
https://www.wikihow.com/Multiply-or-Divide-Two-Percentages
How to Multiply or Divide Two Percentages
Last Updated: March 22, 2024
This article was co-authored by David Jia. David Jia is an Academic Tutor and the Founder of LA Math Tutoring, a private tutoring company based in Los Angeles, California. With over 10 years of teaching experience, David works with students of all ages and grades in various subjects, as well as college admissions counseling and test preparation for the SAT, ACT, ISEE, and more. After attaining a perfect 800 math score and a 690 English score on the SAT, David was awarded the Dickinson Scholarship from the University of Miami, where he graduated with a Bachelor’s degree in Business Administration. Additionally, David has worked as an instructor for online videos for textbook companies such as Larson Texts, Big Ideas Learning, and Big Ideas Math. This article has been viewed 221,043 times.
Do you have two percentages that you wish to multiply together, or divide? Multiplying and dividing percents is different from adding or subtracting them. You cannot remove the percent sign from each number and multiply or divide the numbers by each other; you will need to convert the percentages to decimals or fractions. Therefore, this process takes more work than adding or subtracting percentages, but it can still be completed in minutes.
Steps
Multiplication: Converting the Percentages to Decimal
Multiplication: Converting the Percentages to Fractions
Dividing: Converting the Percentages to Decimal
Dividing: Converting the Percent to a Fraction
Thanks for reading our article! If you’d like to learn more about math, check out our in-depth interview with David Jia.
About This Article
Multiplying percents is different from adding or subtracting them. You’ll need to convert the percentages to decimals first by moving the decimal point two spaces to the left, or dividing by 100.
For example, 30 percent would become 0.3. You can then multiply the numbers. When you have the product, count the total number of digits behind the decimal points and move the decimal point that many spaces. For example, if you multiply 0.3 by 0.7, you’d get 0.21. If you want to convert your answer back into a percent, multiply it by 100, so you’d get 21 percent. To learn how to convert percentages to fractions to multiply them, keep reading!
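The decimal-conversion procedure in the summary translates directly to code. A small Python sketch (the function names are mine, not from the article):

```python
def multiply_percents(a_pct, b_pct):
    """Convert each percent to a decimal (divide by 100), multiply,
    then convert the product back to a percent (multiply by 100)."""
    return (a_pct / 100) * (b_pct / 100) * 100

def divide_percents(a_pct, b_pct):
    """Divide one percent by another; the two 1/100 factors cancel,
    so the result is a plain ratio rather than a percent."""
    return (a_pct / 100) / (b_pct / 100)

# The article's example: 30% x 70% -> 0.3 * 0.7 = 0.21 -> 21%
assert abs(multiply_percents(30, 70) - 21.0) < 1e-9
```

Note that division is different: 50% divided by 25% gives 2, a pure ratio, which is why the conversion back to a percent only applies to multiplication.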
https://courses.lumenlearning.com/suny-hccc-wm-concepts-statistics/chapter/hypothesis-test-for-difference-in-two-population-proportions-6-of-6/
Hypothesis Test for Difference in Two Population Proportions (6 of 6) | Statistics for the Social Sciences
Chapter 9: Inference for Two Proportions
Learning Objectives Identify type I and type II errors and select an appropriate significance level based on an analysis of the consequences of each type of error. Review of Type I and Type II Errors Inference is based on probability, so there is always some chance of making a wrong decision. Recall that two types of wrong decisions can be made in hypothesis testing. When we reject a null hypothesis that is true, we commit a type I error. When we fail to reject a null hypothesis that is false, we commit a type II error. The following table summarizes the logic behind type I and type II errors. It is possible to have some influence over the likelihoods of committing these errors, but decreasing the chance of a type I error increases the chance of a type II error. We have to decide which error is more serious for a given situation. Sometimes a type I error is more serious, and other times a type II error is more serious. Learn By Doing Teens and Antidepressants Recall the description of a clinical trial in which researchers study the effect of a new antidepressant on teens. Researchers design a randomized, controlled, double-blind experiment to study the effect of the antidepressant Fluoxetine combined with psychiatric therapy. The control group receives a placebo and psychiatric therapy. The response variable is improvement, which means symptoms of depression improve. The hypotheses are as follows, with p 1 = proportion of teens who improve in the treatment group (Fluoxetine and psychiatric therapy) and p 2 = proportion of teens who improve in the control group (placebo and psychiatric therapy).
H0: p1 − p2 = 0
Ha: p1 − p2 > 0

Decreasing the Chance of Type I or Type II Error

How can we decrease the chance of a type I or type II error? Because decreasing the chance of a type I error increases the chance of a type II error, we have to weigh the consequences of these errors before deciding how to proceed. Recall that the probability of committing a type I error is α. Why? When we choose a level of significance (α), we are choosing a benchmark for rejecting the null hypothesis. If the null hypothesis is true, then the probability that we will reject a true null hypothesis is α. So the smaller α is, the smaller the probability of a type I error. It is more complicated to calculate the probability of a type II error. The best way to reduce the probability of a type II error is to increase the sample size. But once the sample size is set, larger values of α will decrease the probability of a type II error (while increasing the probability of a type I error).

Following are general guidelines for choosing a level of significance:

- If the consequences of a type I error are more serious, choose a small level of significance (α).
- If the consequences of a type II error are more serious, choose a larger level of significance (α). But remember that the level of significance is the probability of committing a type I error.
- In general, we pick the largest level of significance that we can tolerate as the chance of a type I error.

Note: It is not always the case that one type of error is worse than the other.

Learn By Doing: Hormone Replacement Therapy

Recall the experiment that investigated the side effects of hormone replacement therapy (HRT) for women with menopausal symptoms. The experiment randomly assigned over 16,000 U.S. women to receive a hormone treatment or a placebo. The experiment was double blind. After 5 years, a larger proportion of the hormone group had breast cancer and heart disease. This observed difference was statistically significant.
Researchers were so alarmed by the results that the experiment was ended early to prevent further harm to the health of the women participating in the hormone group. The type I error in this situation is that we conclude that HRT increases the risk of breast cancer and heart disease, but it does not. The type II error is that we conclude that HRT does not increase the risk of breast cancer and heart disease, but it does. Identify the type of error associated with each consequence.

Let's Summarize

Hypothesis tests for two proportions can answer research questions about two populations or two treatments that involve categorical data. The null hypothesis for the two-proportions test is always a statement of "no difference."

H0: p1 − p2 = 0

The alternative hypothesis is one of the following:

Ha: p1 − p2 < 0, or Ha: p1 − p2 > 0, or Ha: p1 − p2 ≠ 0

The test statistic for the two-proportions test is similar to the test statistic for one-sample proportion tests:

Z = (statistic − parameter) / (standard error)

Z = ((difference in sample proportions) − (difference in population proportions)) / (standard error)

Z = ((p̂1 − p̂2) − (p1 − p2)) / sqrt( p̂(1 − p̂)/n1 + p̂(1 − p̂)/n2 )

where p̂ is the pooled proportion. This statistic is approximately normal in its distribution if each sample has at least ten successes and ten failures. Note that the standard error is estimated with the pooled proportion. The normal distribution may be used to provide P-values for a two-proportions test if each sample has at least 10 successes and failures.
When the P-value in a two-proportions test is less than the level of significance (α), we should reject the null hypothesis in favor of the alternative. In this case, we say that the differences are statistically significant. Two types of errors can be made when conducting a hypothesis test. A type I error occurs when we reject a true null hypothesis. A type II error occurs when we fail to reject a false null hypothesis. The level of significance, α, is the probability of a type I error. Increasing the sample size lowers the probability of a type II error. After considering the consequences of the type I and II errors, we should choose the largest value for α that we can tolerate, because increasing α decreases the probability of a type II error.

After conducting a hypothesis test, it is important to consider whether the conclusions are reasonable. We discussed two common pitfalls in drawing conclusions from statistical studies. (1) The conclusion is not appropriate to the study design: the study makes an inference based on nonrandom data, makes inappropriate cause-and-effect conclusions, or overgeneralizes its conclusions. (2) The conclusion confuses statistical significance with practical importance.

Concepts in Statistics. Provided by: Open Learning Initiative. License: CC BY: Attribution
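The pooled two-proportion test statistic summarized above can be sketched in Python. This is a minimal illustration, not part of the original lesson; the function name and the success counts and sample sizes are made up for demonstration.

```python
# Sketch of the pooled two-proportion z-test:
#   Z = ((p̂1 - p̂2) - (p1 - p2)) / sqrt(p̂(1-p̂)/n1 + p̂(1-p̂)/n2)
# under H0: p1 - p2 = 0, where p̂ is the pooled proportion.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 - p2 = 0, using the pooled proportion."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled proportion p̂
    se = sqrt(p_pool * (1 - p_pool) / n1 + p_pool * (1 - p_pool) / n2)
    return (p1_hat - p2_hat) / se

# Hypothetical counts: 60/100 improve in treatment, 45/100 in control.
z = two_proportion_z(60, 100, 45, 100)
# One-sided P-value for Ha: p1 - p2 > 0, from the standard normal.
p_value = 1 - NormalDist().cdf(z)
```

With these illustrative counts, z is about 2.12 and the one-sided P-value is below α = 0.05, so here we would reject H0. Note the rule of thumb still applies: each sample needs at least 10 successes and 10 failures for the normal approximation to be reasonable.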
740
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Equilibria/Chemical_Equilibria/Effect_of_Pressure_on_Gas-Phase_Equilibria
Effect of Pressure on Gas-Phase Equilibria
Last updated: Jan 30, 2023 (Page ID: 1375)

Le Chatelier's Principle states that a system at equilibrium will adjust to relieve stress when there are changes in the concentration of a reactant or product, the partial pressures of components, the volume of the system, or the temperature of the reaction. There are three ways to change the pressure of a constant-temperature reaction system involving gaseous components:

1. Add or remove a gaseous reactant or product: Adding or removing a gaseous reactant or product changes the concentrations. If the concentration of a reactant or product is increased, the system will shift away from the side on which the concentration was increased (i.e., if the concentration of reactants is increased, the system will shift toward the products; if more products are added, the system will shift to form more reactants). Conversely, if the concentration of a reactant or product is decreased, the system will shift toward the side on which the concentration was decreased (i.e., if reactants are removed, the system will shift to form more reactants; if the concentration of products is decreased, the equilibrium will shift toward the products).

2. Add an inert gas (one that is not involved in the reaction) to the constant-volume reaction mixture: This will increase the total pressure of the system, but will have no effect on the equilibrium condition. That is, there will be no effect on the concentrations or the partial pressures of reactants or products.

3. Change the volume of the system: When the volume is changed, the concentrations and the partial pressures of both reactants and products are changed. If the volume is decreased, the reaction will shift towards the side of the reaction that has fewer gaseous particles.
If the volume is increased, the reaction will shift towards the side of the reaction that has more gaseous particles.

When a system at equilibrium undergoes a change in pressure, the equilibrium of the system will shift to offset the change and establish a new equilibrium. The system can shift in one of two ways: toward the reactants (i.e., in favor of the reverse reaction) or toward the products (i.e., in favor of the forward reaction). The effects of changes in pressure can be described as follows (this only applies to reactions involving gases):

- When there is an increase in pressure, the equilibrium will shift towards the side of the reaction with fewer moles of gas.
- When there is a decrease in pressure, the equilibrium will shift towards the side of the reaction with more moles of gas.

Pressure is inversely related to volume, so the effects of changes in pressure are opposite to the effects of changes in volume. Additionally, this does not apply to a change in the pressure of the system due to the addition of an inert gas.

Problems

1. Consider the decomposition of NOCl: 2 NOCl(g) ⇌ 2 NO(g) + Cl2(g). In which direction will the reaction shift if the overall pressure is decreased? Does this favor the forward reaction or the reverse reaction?

2.
Consider the decomposition of HBr: 2 HBr(g) ⇌ H2(g) + Br2(g). In which direction will the reaction shift when the overall pressure is increased? Which direction will it shift when the overall pressure is decreased?

3. Consider the reaction: C(s) + 2 H2(g) ⇌ CH4(g). What will happen to the equilibrium if the overall pressure is increased? (In which direction will the reaction shift? Does it favor reactants or products? Does this favor the formation of CH4? Is the rate of the forward reaction greater than the rate of the reverse reaction?)

4. Consider the decomposition of MgCO3: MgCO3(s) ⇌ MgO(s) + CO2(g). Will the formation of MgCO3 or the decomposition of MgCO3 occur faster if the overall pressure is increased?

5. Consider the reaction in a closed container: 2 SO2(g) + O2(g) ⇌ 2 SO3(g). You want the reaction to favor the formation of SO3. You have two options: decrease the overall volume of the container, or increase the overall volume. Which should you choose?

Solutions

There are 2 moles of gas particles on the reactant side and 3 moles of gas particles on the product side. (Note: the 2 moles on the reactant side come from 2 moles of NOCl; the 3 moles on the product side come from 2 moles of NO + 1 mole of Cl2.) A decrease in pressure favors the side with more particles. ∴ The reaction will shift towards the products, and will favor the forward reaction.

There are 2 moles of gas particles on the reactant side and 2 moles of gas particles on the product side. Increasing pressure favors the side with fewer particles, and decreasing pressure favors the side with more particles. However, because there is an equal number of particles on both sides, a change in pressure will have no effect on the system. ∴ No effect: the reaction will not shift in either direction regardless of pressure changes.

There are 2 moles of gas particles on the reactant side and 1 mole of gas particles on the product side. Increasing pressure favors the side with fewer particles.
∴ The reaction will shift towards the products. This means that the reaction favors the forward reaction, which favors the formation of CH4, and that the rate of the forward reaction is greater than the rate of the reverse reaction.

There are 0 moles of gas particles on the reactant side and 1 mole of gas particles on the product side. Increasing the pressure favors the side with fewer particles, so the reaction favors the reactants. The product side corresponds to the decomposition of MgCO3, while the reactant side corresponds to the formation of MgCO3. ∴ The formation of MgCO3 is favored, meaning that the formation of MgCO3 will occur faster than its decomposition.

There are 3 moles of gas particles on the reactant side and 2 moles of gas particles on the product side. Because you want the reaction to favor the formation of SO3, you want the reaction to favor the forward reaction (shift to the right), which is the side with fewer moles of gas particles. For a system to shift towards the side of a reaction with fewer moles of gas, you need to increase the overall pressure. Recall that pressure and volume are inversely related, so in order to increase the overall pressure, you need to decrease the overall volume. ∴ You should decrease the overall volume.

Contributors and Attributions: Christina Chen (UC Davis)
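The gas-mole counting rule used in these solutions is mechanical enough to sketch in Python. This is an illustrative sketch, not part of the original page; the function names and the tuple encoding of reactions are assumptions made for the example.

```python
# Counting rule from the text: when pressure increases, the equilibrium
# shifts toward the side with fewer moles of GAS (solids don't count).
# Each side is a list of (coefficient, formula, phase) tuples.

def gas_moles(side):
    """Total moles of gaseous species on one side of the reaction."""
    return sum(coef for coef, formula, phase in side if phase == "g")

def shift_on_pressure_increase(reactants, products):
    """Predict the shift direction when overall pressure is increased."""
    r, p = gas_moles(reactants), gas_moles(products)
    if r == p:
        return "no shift"
    return "toward products" if p < r else "toward reactants"

# Problem 5: 2 SO2(g) + O2(g) <=> 2 SO3(g)
lhs = [(2, "SO2", "g"), (1, "O2", "g")]   # 3 moles of gas
rhs = [(2, "SO3", "g")]                   # 2 moles of gas
```

Here `shift_on_pressure_increase(lhs, rhs)` returns "toward products", matching the solution to problem 5; the HBr reaction returns "no shift" because both sides have 2 moles of gas.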
741
https://thirdspacelearning.com/us/math-resources/topic-guides/number-and-quantity/even-numbers/
Even numbers

In order to access this topic, students need to be confident with: number sense, whole numbers, and natural numbers.

Here you will learn about even numbers, including examples of even numbers, even numbers on a number line and a number chart, and properties of even numbers. Students will first learn about even numbers as part of operations and algebraic thinking in 2nd grade. They expand upon their knowledge of even numbers in 3rd grade when they identify arithmetic patterns and properties of numbers.

What are even numbers?

Even numbers are whole numbers that are multiples of 2. This means that every even number is divisible by 2 with no remainder. Here are the even numbers from 0 to 100:

0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100

The last digit of an even number (the digit in the ones place) is always 0, 2, 4, 6, or 8. The smallest even number is zero. Even numbers are the opposite of odd numbers, which are not divisible by 2 without remainders. An odd number's last digit is 1, 3, 5, 7, or 9.
For example:

| Examples of even numbers | Examples of odd numbers |
| --- | --- |
| 0, 2, 4, 6, 8, 10, 100, 122, 156, 178, 194, 1,000, 1,258, 1,000,000, and so on… | 1, 3, 5, 7, 9, 11, 101, 137, 143, 189, 191, 1,225, 1,649, 1,000,007, and so on… |

To identify an even number, you can look at the last digit, use a number line, or use a number chart.

- Last digit: The last digit of a number, or the digit in the ones place, will tell you if it is even or odd. The last digit of an even number will always be 0, 2, 4, 6, or 8. In the number 1,597,631,586, the last digit is 6, so it is an even number.
- Number line: Start with a number whose last digit is 0, 2, 4, 6, or 8 and jump every other number, or count by twos.
- Number chart: On a number chart, each column that begins with 2, 4, 6, 8, or 10 includes even numbers. This chart shows a list of even numbers between 1 and 100.

Properties of even numbers and odd numbers

Property of addition: If you add an even number to an even number, the sum will always be an even number. For example, 8 + 4 = 12. If you add an even number to an odd number, the sum will always be an odd number. For example, 8 + 3 = 11. If you add an odd number to an odd number, the sum will always be an even number. For example, 5 + 3 = 8.

Property of subtraction: If you subtract an even number from an even number, the difference will always be an even number. For example, 16 − 10 = 6. If you subtract an even number from an odd number, or an odd number from an even number, the difference will always be an odd number. For example, 16 − 9 = 7 and 17 − 10 = 7. If you subtract an odd number from an odd number, the difference will always be an even number. For example, 9 − 5 = 4.

Property of multiplication: If you multiply an even number by an even number, the product will always be an even number. For example, 4 × 2 = 8. If you multiply an even number by an odd number, the product will always be an even number.
For example, 4 × 5 = 20. If you multiply an odd number by an odd number, the product will always be an odd number. For example, 3 × 5 = 15.

Groups of objects

To determine if a group of objects has an even number of objects or an odd number of objects, group the objects into pairs or equal groups of 2. If each object can be grouped into a pair, there is an even number of objects. If there is one object left over after pairing, there is an odd number of objects. For example, if all of the objects in a group can be grouped into pairs, there is an even number of objects; if there is one left over after pairing, there is an odd number of objects in that group.

Common Core State Standards

How does this relate to 2nd grade and 3rd grade math?

- Grade 2, Operations and Algebraic Thinking (2.OA.3): Determine whether a group of objects (up to 20) has an odd or even number of members, for example, by pairing objects or counting them by 2s; write an equation to express an even number as a sum of two equal addends.
- Grade 3, Operations and Algebraic Thinking (3.OA.9): Identify arithmetic patterns (including patterns in the addition table or multiplication table), and explain them using properties of operations. For example, observe that 4 times a number is always even, and explain why 4 times a number can be decomposed into two equal addends.

How to identify even numbers

In order to identify even numbers:

1. Look at the last digit, or the digit in the ones place. If the digit is 0, 2, 4, 6, or 8, the number is even.
2. Use this strategy to answer the question.

In order to determine if the answer to an equation will be an even number:

1. Recall the properties of addition, subtraction, or multiplication for even numbers.
2. Apply the correct property.

In order to determine if there is an even number of objects in a group:

1. Group the objects into pairs.
2. If all objects can be grouped into pairs, there is an even number of objects. If there is one left over, there is an odd number of objects.

Even numbers examples

Example 1: identifying even numbers

True or false: 754 is an even number. Look at the last digit, or the digit in the ones place; if the digit is 0, 2, 4, 6, or 8, the number is even. The last digit, or the digit in the ones place, is 4. Since the last digit in 754 is 4, it is an even number, so the statement is true.

Example 2: identifying even numbers

List all even numbers between 45 and 55. All numbers between 45 and 55 are: 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55. Find the numbers that end in 0, 2, 4, 6, or 8. The even numbers between 45 and 55 are 46, 48, 50, 52, and 54.

Example 3: identifying even numbers

Look at the list of numbers. Which numbers are even? 95, 236, 148, 300, 721, 688, 177, 533, 339, 410. Look at the last digit, or the digit in the ones place; if the digit is 0, 2, 4, 6, or 8, the number is even.
The even numbers are 236, 148, 300, 688, and 410.

Example 4: apply properties of even numbers

Lincoln is subtracting an even number from an even number. He thinks the difference will be an even number, but his friend Beth says it will be odd. Who is correct? Since Lincoln is subtracting, he will use the property of subtraction, which says an even number subtracted from an even number will always result in an even number. Therefore, Lincoln is correct.

Example 5: apply properties of even numbers

Maria solves the following equation: 725,984 + 539,046 = 1,265,031. Her friend glances at the equation and tells her it's incorrect. How does her friend know this is incorrect? The two numbers being added together end in a 4 and a 6, respectively. That means they are both even numbers. The property of addition of even numbers says an even number plus an even number will always result in an even number. The last digit of Maria's answer is a 1, which means it is an odd number, so Maria's answer can't be correct.

Example 6: determine if a group of objects is even

Look at the objects in the square. Is there an even number of objects? Group the objects into pairs. All of the objects have been grouped into pairs, but there is one left over. Therefore, there is NOT an even number of objects.

Teaching tips for even numbers

- Hang a number chart, or hundreds chart, in your classroom with even numbers highlighted.
- Provide worksheets that require students to find even numbers between any given numbers.
For example, instead of always starting at 0, give them a starting number of 76 and ask them to find the even numbers through 100.

Easy mistakes to make

- Thinking that zero is not an even number: Zero and all numbers ending in zero are even numbers. This is because if you divide zero by 2, the quotient is zero, which is an integer. Therefore, it fits the definition of an even number. There are also many multiples of 2 that end in zero, such as 10, 20, 30, and so on.
- Thinking that fractions and decimals can be even numbers: Only integers (whole numbers and their corresponding negative numbers) can be even numbers. Students may think that a number such as 1.52 is an even number because its last digit is a 2. However, this is not the case, as fractions and decimals cannot be even numbers or odd numbers.

Related types of numbers lessons: Types of numbers, Odd numbers, Absolute value, Natural numbers, Rational numbers, Irrational numbers, Number sets, Integers, Prime numbers, Composite numbers, Whole numbers

Practice even numbers questions

1. Which of the following is an even number: 123, 25, 11, or 18? The last digit of an even number is 0, 2, 4, 6, or 8. Therefore, 18 is an even number.

2. What is the smallest even number: 2, 1, 0, or 0.2? Zero is the smallest even number because it is the smallest number that is divisible by 2 where the quotient is an integer. (Zero divided by two is zero.)

3. What are the even numbers between 99 and 110? The last digit of an even number is always 0, 2, 4, 6, or 8. Of the answer choices, 100, 102, 104, 106, 108, and 110 is the only set of numbers that all end in one of those digits.

4. When you add an even number and an even number, the answer will…
always be an even number.
sometimes be an even number.
never be an even number.
always be an odd number.

Answer: always be an even number. The property of addition of even numbers says that when you add an even number to an even number, the sum will always be an even number.

5. True or false: When you subtract an even number from an even number, you will get an odd number. False, because the property of subtraction of even numbers says even number − even number = even number: when you subtract an even number from an even number, the difference will always be an even number.

6. Look at the group of triangles in the circle. Is there an even number of triangles? Yes: if you group the triangles into pairs, you will see there are none left over. This means there is an even number of triangles.

Even numbers FAQs

What is an even number? An even number is a whole number that is a multiple of 2. This means that every even number is divisible by 2 with no remainder.

What is the difference between an even number and an odd number? An even number is divisible by 2 with no remainder and an odd number is not. The last digit of an even number is 0, 2, 4, 6, or 8, while the last digit of an odd number is 1, 3, 5, 7, or 9.

Can fractions be even numbers? Even numbers and odd numbers are integers. Therefore, fractions and decimals are neither even nor odd.
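The rules in this lesson can be sketched in Python. This is an illustrative sketch, not part of the original lesson; the helper names `is_even` and `evens_between` are made up for the example.

```python
# A whole number is even when it is divisible by 2 with no remainder,
# which is the same as its last digit being 0, 2, 4, 6, or 8.

def is_even(n):
    return n % 2 == 0

def evens_between(lo, hi):
    """All even numbers from lo to hi, inclusive (as in Example 2)."""
    return [n for n in range(lo, hi + 1) if is_even(n)]

# The parity properties can be spot-checked exhaustively on small ranges:
# even + even is always even, and even × anything is always even.
assert all(is_even(a + b) for a in range(0, 50, 2) for b in range(0, 50, 2))
assert all(is_even(a * b) for a in range(0, 50, 2) for b in range(50))
```

For instance, `evens_between(45, 55)` reproduces Example 2's answer of 46, 48, 50, 52, and 54.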
The next lessons are: Rounding numbers, Factors and multiples, Fractions, and Fraction operations.
742
https://ecmsmath6.weebly.com/uploads/6/8/1/9/68198273/unit_7_msg_adv_katz.pdf
Unit 7: Rational Explorations

Topics: Numbers & their Opposites; Number Lines; Real World Examples; Absolute Value; Order Rational Numbers; Graph on Coordinate Plane; Distance on Coordinate Plane; Reflect on Coordinate Plane; Draw Polygons on Coordinate Plane

Advanced Math 6
Name:
Math Teacher:

Unit 7 Calendar

Week of 2/25 (2/25, 2/26, 2/27, 2/28, 3/1): Intro to Integers; Graphing on Number Lines; Absolute Value. IXL Skills: MM.1, MM.2
Week of 3/4 (3/4, 3/5, 3/6, 3/7, 3/8): Comparing & Ordering; Graphing on a Coordinate Plane; QUIZ #1; Distance & Area of Polygons on Coordinate Plane; Missing Points & Reflections. IXL Skills: MM.3, MM.4, MM.5, MM.6
Week of 3/11 (3/11, 3/12, 3/13, 3/14, 3/15): QUIZ #2; Computer Lab; Unit 7 Post Test Review; End of Unit Test. IXL Skills: XX.1, XX.2, XX.4, XX.5

Unit 7: Rational Explorations: Numbers & their Opposites. Standards, Checklist and Concept Map

Georgia Standards of Excellence (GSE):

MGSE6.NS.5: Understand that positive and negative numbers are used together to describe quantities having opposite directions or values (e.g., temperature above/below zero, elevation above/below sea level, debits/credits); use positive and negative numbers to represent quantities in real-world contexts, explaining the meaning of 0 in each situation.

MGSE6.NS.6: Understand a rational number as a point on the number line. Extend number line diagrams and coordinate axes familiar from previous grades to represent points on the line and in the plane with negative number coordinates.

MGSE6.NS.6a: Recognize opposite signs of numbers as indicating locations on opposite sides of 0 on the number line; recognize that the opposite of the opposite of a number is the number itself, e.g., -(-3) = 3, and that 0 is its own opposite.

MGSE6.NS.6b: Understand signs of numbers in ordered pairs as indicating locations in quadrants of the coordinate plane; recognize that when two ordered pairs differ only by signs, the locations of the points are related by reflections across one or both axes.
MGSE6.NS.6c: Find and position integers and other rational numbers on a horizontal or vertical number line diagram; find and position pairs of integers and other rational numbers on a coordinate plane.

MGSE6.NS.7: Understand ordering and absolute value of rational numbers.

MGSE6.NS.7a: Interpret statements of inequality as statements about the relative position of two numbers on a number line diagram.

MGSE6.NS.7b: Write, interpret, and explain statements of order for rational numbers in real-world contexts.

MGSE6.NS.7c: Understand the absolute value of a rational number as its distance from 0 on the number line; interpret absolute value as magnitude for a positive or negative quantity in a real-world situation.

MGSE6.NS.7d: Distinguish comparisons of absolute value from statements about order.

MGSE6.NS.8: Solve real-world and mathematical problems by graphing points in all four quadrants of the coordinate plane. Include use of coordinates and absolute value to find distances between points with the same first coordinate or the same second coordinate.

MGSE6.G.3: Draw polygons in the coordinate plane given coordinates for the vertices; use coordinates to find the length of a side joining points with the same first coordinate or the same second coordinate. Apply these techniques in the context of solving real-world and mathematical problems.

Unit 7 Concept Map: Make a concept map of the standards listed above. Underline the verbs and circle the nouns they modify. Then, place those verbs on the connector lines of your concept map, and the nouns in the bubbles of the concept map.

What Will I Need to Learn?
_ How to describe real-world situations using positive and negative numbers _ To represent numbers as locations on number lines _ To understand opposites (inverses) on a number line _ To graph ordered pairs (including negatives) on a coordinate plane _ To understand that opposites in ordered pairs indicate a reflection on a coordinate plane _ Interpret inequalities, comparing two numbers on a number line _ Order rational numbers _ Understand absolute value (distance from zero) _ Compare and order absolute value _ Determine the distance between points on a coordinate plane _ Draw polygons in the coordinate plane, given the coordinates for the vertices Unit 7 IXL Tracking Log Required Skills Skill Your Score Week of 2/25 MM.1 (Understanding Integers) MM.2 (Integers on Number Lines) Week of 3/4 MM.3 (Absolute Value and Opposites) MM.4 (Graph Integers on Horizontal & Vertical Number Lines) MM.5 (Comparing Integers) MM.6 (Ordering Integers) Week of 3/11 XX.1 (Objects on Coordinate Planes) XX.2 (Graph Points on a Coordinate Plane) XX.4 (Coordinate Planes as Maps) XX.5 (Distance Between Two Points) Optional Skills Pg.3a pg. 3b Unit 7 Vocabulary Vocabulary Term Definition absolute value The distance between a number and zero on a number line. coordinate plane A plane, also called a coordinate grid or coordinate system, in which a horizontal number line and a vertical number line intersect at their zero points. (0,0) inequality A statement that compares two quantities using the symbols >, <, ≥, ≤, or ≠. integer Any number from the set {… -4, -3, -2, -1, 0, 1, 2, 3, 4 …} where … means continues without end. negative integer A number that is less than zero. opposites Two integers are opposites if they are represented on the number line by points that are the same distance from zero, but on opposite sides of zero. The sum of two opposites is zero. ordered pair A pair of numbers used to locate a point in the coordinate plane.
An ordered pair is written in the form (x-coordinate, y-coordinate). origin The point (0, 0) in a coordinate plane where the x-axis and the y-axis intersect. positive integer A number that is greater than zero. It can be written with or without a + sign. quadrants The four regions in a coordinate plane separated by the x-axis and y-axis. reflection A transformation in which a figure or ordered pair is flipped over a line of symmetry. sign A symbol that indicates whether a number is positive or negative. x-coordinate The first number in an ordered pair. (It tells you how far left or right to go from the origin.) y-coordinate The second number in an ordered pair. (It tells you how far up or down to go from the origin.) Unit 7 Vocabulary – You Try Vocabulary Term Definition absolute value The distance between a number and zero on a number line. coordinate plane A plane, also called a coordinate grid or coordinate system, in which a horizontal number line and a vertical number line intersect at their zero points. (0,0) inequality A statement that compares two quantities using the symbols >, <, ≥, ≤, or ≠. integer Any number from the set {… -4, -3, -2, -1, 0, 1, 2, 3, 4 …} where … means continues without end. negative integer A number that is less than zero. opposites Two integers are opposites if they are represented on the number line by points that are the same distance from zero, but on opposite sides of zero. The sum of two opposites is zero. ordered pair A pair of numbers used to locate a point in the coordinate plane. An ordered pair is written in the form (x-coordinate, y-coordinate). origin The point (0, 0) in a coordinate plane where the x-axis and the y-axis intersect. positive integer A number that is greater than zero. It can be written with or without a + sign. quadrants The four regions in a coordinate plane separated by the x-axis and y-axis. reflection A transformation in which a figure or ordered pair is flipped over a line of symmetry.
sign A symbol that indicates whether a number is positive or negative. x-coordinate The first number in an ordered pair. (It tells you how far left or right to go from the origin.) y-coordinate The second number in an ordered pair. (It tells you how far up or down to go from the origin.) Pg.4a pg. 4b Integers & Graphing on a Number Line Positive whole numbers, their opposites and the number zero are called __. To represent data that are less than 0, you can use __ integers. A negative integer is written with a ___ sign. Data that are greater than zero are represented by __ integers. __ and sets of integers can be graphed on a horizontal or vertical __ line. To graph a point on a number line, draw a __ on the number line at its location. A set of integers is written using braces, such as {2, -9, 0}. Example: Write an integer for each situation. a) a 10-yard loss - Because it represents a loss, the integer is -10. In football, the integer 0 represents no gain or loss of yards. b) 4 inches above normal - Because it represents above normal, the integer is 4. In this situation, the integer 0 represents the normal amount of rain. c) 16 feet under the ground - Because it is under the ground, the integer is –16. d) a gain of 5 hours - Because it is a gain, the integer is 5. You Try: Write an integer for each situation. 1) a profit of $60 2) a decrease of 10° 3) a loss of 3 yards 4) a gain of 12 ounces 5) a gain of $2 6) 20° below zero Example: Graph the set of integers {–5, –2, 3} on a number line. You Try: 1) Graph the set {–6, 5, –4, 3, 0, 7} on a number line. 2) Graph the set {–5, 1, –3, -1, 3, 5} on a number line. Pg.5a pg. 5b Opposites Positive numbers, such as 2, are graphed to the _ of zero on a number line. Negative numbers, such as -2, are graphed to the _ of zero on a number line. Opposites are numbers that are the same __ from zero in opposite directions. Since 0 is not negative or positive, it is its own opposite.
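For teachers (or curious students) who like to double-check answer keys with a computer, the two ideas above, opposites and distance from zero, can be written as a few lines of Python. This sketch is only an answer-checking aid and is not part of the student packet:

```python
# Opposites and absolute value, as the worksheet describes them:
# the opposite of n sits the same distance from zero on the other side,
# and |n| is the distance between n and zero on the number line.

def opposite(n):
    """Return the opposite of n. Note that opposite(0) is 0 itself."""
    return -n

def absolute_value(n):
    """Return the distance between n and zero (always non-negative)."""
    return n if n >= 0 else -n

print(opposite(-12))       # 12
print(absolute_value(-5))  # 5
print(opposite(0))         # 0 is its own opposite
```

Running the checks by hand matches the worked examples: the opposite of -12 is 12, and |-5| = 5.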
Example: Find the opposite of the given number. 1) The opposite of -12 is: 12 2) The opposite of 8 is: -8 You Try: Find the opposite of the given number. 1) The opposite of -5 is: 2) The opposite of 0 is: 3) The opposite of 100 is: 4) The opposite of -34 is: 5) The opposite of -13 is: 6) The opposite of 7 is: 7) The opposite of -1000 is: 8) The opposite of 50 is: 9) The opposite of -48 is: 10) The opposite of 1 is: Absolute Value WORDS The absolute value of a number is the _ between the number and zero on a number line. MODEL SYMBOLS |5| = 5 The absolute value of 5 is 5. |-5| = 5 The absolute value of -5 is 5. __ ___ is always __! Absolute value is a distance and distance is always positive. Example: |125| = 125 |-5| + |25| = 5 + 25 = 30 |-8-5| = |-13| = 13 -|-16| = -16 You Try: Find the absolute value for each of the problems below. 1) |25| 2) |-150| 3) -|379| 4) |-2486| 5) |1273| 6) -|-68| 7) |-5| + |16| 8) |-30-12| 9) |-7| + |13| + |49| 10) Graph |-6| on the number line below and show its distance from zero. Pg.6a pg. 6b Above and Below Sea Level In the space to the right, draw the following and then answer the questions below to discover the shipwreck’s treasure. A wavy line for sea level, a bird at +10 meters, a diver at +20 meters, an airplane taking off at +70 meters, a fish at -20 meters, a whale at -50 meters, a shipwreck at -90 meters, an underwater diver at -30 meters, a boat at sea level, and a submarine at -70 meters. Also draw a cliff with a height of +80 meters. What is the treasure in the shipwreck? To find the treasure, draw the items on the next page and then answer the questions below and write the letters in the spaces that represent the correct answers. 1) How many meters from the top of the cliff to the shipwreck? _ (O) 2) How many meters from the whale to the submarine? _ (S) 3) How many meters from the airplane to the boat? _ (A) 4) How many meters from the fish to the whale?
_ (E) 5) Which is farther from the submarine, the fish or the whale? By how many meters? _ (I) 6) Which is closer to the shipwreck, the fish or the underwater diver? By how many meters? _ (L) 7) The whale swims to sea level and then swims to the shipwreck. How far does he swim in all? _ (R) 8) The submarine rises to sea level and then dives to the bottom of the sea. How far does the submarine travel in all? _ (P) 9) The boat springs a leak and sinks to the bottom of the sea. How many meters did it sink? _ (M) 10) The underwater diver wants to reach the submarine, how much farther does he need to swim? _ (N) 11) The diver makes 3 trips from the boat (before it sinks) to the shipwreck. How many meters will he travel? _ (D) _ _ _ _ _ _ 540 50 70 90 170 40 540 20 70 40 540 _ _ _ 160 30 70 140 60 20 Meters +80 +70 +60 +50 +40 +30 +20 +10 sea level 0 -10 -20 -30 -40 -50 -60 -70 -80 -90 Pg.7a pg. 7b Comparing Integers & Absolute Values To _ integers, you can compare signs as well as the magnitude, or size, of the numbers. Greater numbers are graphed farther to the __. If two numbers have different signs, the _ number is always greater than the negative number. If two numbers have the same sign, use a _ line to determine which number is greater. Don’t forget, alligators always eat the bigger number. You Try: 1) |8| _ |-6| 2) |-6| |6| 3) -122 300 4) |-4| _ 4 5) |-12| 9 6) |-21| 0 7) 1 _ |-1| 8) -2 -4 9) |4| -4 10) 20 _ 0 Ordering Integers & Absolute Values You can use a number line to order a set of integers. __ can be ordered from least to greatest or from greatest to least. Example: Before you put absolute values in order, find their value. Example: Put the following numbers in order from LEAST to GREATEST: |6|, |-12|, |-2|, |1| |6| = 6 |-12| = 12 |-2| = 2 |1| = 1 From least to greatest: |1|, |-2|, |6|, |-12| 3 -2 -20 13 Pg.8a pg. 8b You Try 1) 0, 3, −21, 9, −89, 8, −65, −56 2) 70, −9, 67, −78, 0, 45, −36, −19 3) 0, -1, |-2|, |3| 4) -24, |-20|, 21, -26 5)
|1|, -1, |-2|, -2 6) 12, 8, −9, −12, 10, 16 Extra Practice For #’s 1-4, write an integer for each situation: 1) 45 feet below sea level 2) a gain of 8 yards 3) $528 deposit into your account 4) 10 units to the left on a number line 5) Graph the set {–4, 3, 0, -3, 7, -5} on the number line. 6) The opposite of -57 is: 7) The opposite of -43 is: 8) The opposite of 1000 is: 9) The opposite of 325 is: Find the absolute value for each of the problems below. 10) |4| 11) |-41| 12) -|11| 13) |-125| 14) |526| 15) -|-3| Use the symbols <, >, = to compare the following numbers. 16) |66| |33| 17) |-24| |82| 18) 88 _ -99 19) |-37| 37 Put the numbers in order from least to greatest. 20) -89, 42, -26, 8 21) -91, -46, 52, 12, 0 Pg.9a pg. 9b The Coordinate Plane • The Coordinate Plane is a grid consisting of two perpendicular number lines, the (horizontal) x-axis and (vertical) y-axis • The axes intersect at point (0,0), also known as the “origin” • The four open areas are called “quadrants” • Points can be plotted on the plane using a pair of x- and y- coordinates called “ordered pairs”. Plotting Points ALL ordered pairs are written as (x,y). The 1st number tells how far to go ACROSS on the X-axis The 2nd number tells how far to go UP OR DOWN the Y-axis. Remember you have to walk IN a building before you can go UP or DOWN the elevator! Points and Ordered Pairs Use the coordinate grid above to find the coordinates for each point and tell what quadrant they are in. Example: A: (5 , 6) Quadrant I You Try: B: ( , ) Quadrant C: ( , ) Quadrant _ D: ( , ) Quadrant E: ( , ) Quadrant _ F: ( , ) Quadrant Pg.10a pg. 10b Use the coordinate plane below to graph the following points. Example: J (-5, 4) You Try: C (0,0) H (4,3) O (-2,-1) R (-4,0) A (-2,3) K (3,-1) M (-4,5) T (0,4) S (4,-3) Reflections on the Coordinate Plane A __ is a “mirror image” of an object that has been “flipped” over an axis. 
You can use what you know about number lines and opposites to compare locations on the coordinate plane. Consider the number line and coordinate plane below. Example: point J (shown on the graph). Pg.11a pg. 11b You Try: Find the ordered pair that is a reflection over the x-axis and then the y-axis of each of the points below. Original Point Reflected over x-axis Reflected over y-axis (-2,5) ( , ) ( , ) (-3,-1) ( , ) ( , ) (1,-4) ( , ) ( , ) Graphing Polygons You can graph polygons on a coordinate plane by graphing their vertices and connecting them. Example: A rectangle has vertices A(1,1), B(1,3), C(5,3), and D(5,1). Graph the polygon on the coordinate plane. You Try: A rectangle has the following vertices: D(–1, –1), E(–1, 3), F(2, 3), and G(2, –1). Graph the polygon on the coordinate plane. (1,-4) (-2,5) (-3,-1) Pg.12a pg. 12b Distance on a Coordinate Plane When two ordered pairs have the same x-coordinate or y-coordinate, they are on the same line. The distance between these two points can be found by counting the spaces between the points. You can also use absolute value to determine the distance between points! • Notice Point A = (-3,3) and Point B = (2,3). They have the same y-coordinate, __. • That means you’re finding the distance between the x-coordinates, and . • -3 is 3 units from the y-axis, or |-3| = _ • 2 is 2 units from the y-axis, or |2| = • |-3| + |2| = _ units Examples: 1) On the coordinate plane below, (2,9) and (2,3) have the same x-coordinate. The distance between them is 6 units. You can figure this out by counting the spaces between them, or by subtracting: |9 − 3| = 6. 2) Area of a triangle = ½ (b • h). In the figure below, the base is the distance from A to C, which is _. The height is the distance from B to C, which is _. What is the area of the triangle? _ Points A and C have the same first coordinate. The distance between them is 7 units. Points A and B have the same second coordinate. The distance between them is 5 units. Point A is 5 units from Point B. Likewise, B is 5 units from A.
We wouldn’t say that they are -5 units away, even though you may move to the left on the number line, because distance is ALWAYS positive. For example, if you traveled 5 blocks to school and forgot your lunch and had to go back for it, you would have traveled another 5 blocks for 10 round trip. In other words, absolute value is always used to calculate distance! Point A = (-3,3) Point B = (2,3) Point C = (-3,4) There are 2 WAYS to find the distance between two points… (1) Count the spaces between the points! --- OR --- (2) If one point is positive and one negative, use absolute value and add. Pg.13a pg. 13b You Try: Use the graph below to answer the questions in Part 1: PART 1 1) Write the ordered pair next to each point on the graph. 2) Determine the length of each side of the rectangle. If you have room, you may also label them on the graph. AB = _ BC = _ CD = _ DA = _ 3) What is the perimeter of rectangle ABCD? __ 4) What is the area of rectangle ABCD? __ 5) Determine the length of the triangle’s base and height: PQ = _ QR = _ 6) What is the area of ΔPQR? __ PART 2 Bugs Bunny’s home is located at point B (-5 , 4). Yosemite Sam’s home is located at point Y (6 , 4). Sylvester’s home is located at point S (6 , -2). Daffy Duck’s home is located at point D (-5 , -2). 7) Plot each character’s home on the graph above. Label them B, Y, S and D. Connect their homes in the same order they are listed (then connect B & D). 8) What polygon was formed? 9) Find the distance from each house (length of sides): BY = __ YS = _ SD = _ DB = _ 10) If they march in a parade that begins at Bugs’ house, goes around the rectangle and ends at Bugs’ house, how many units did they travel? Pg.14a pg. 14b Area and Perimeter of Polygons When two ordered pairs have the same x-coordinate or y-coordinate, they are on the same line. The __ between these two points can be found by counting the spaces between the points. Example: A rectangle has vertices A(1,1), B(1,3), C(5,3), and D(5,1). 
Find the length of the sides of the rectangle. AB = 2, BC = 4, CD = 2, DA = 4. Use the lengths of the sides to find the area and perimeter of the rectangle. Example: Perimeter is the distance around the rectangle. Add all of your sides. P = 2 + 4 + 2 + 4 = 12 units. Find the area by multiplying the base times the height. A = 4 • 2 = 8 units². You Try: A rectangle has the following vertices: D(–1, –1), E(–1, 3), F(2, 3), and G(2, –1). 1) Find the length of each side of the rectangle. DE = _ EF = _ FG = _ GD = _ 2) Find the perimeter of the rectangle above. 3) Find the area of the rectangle above. Pg.15a pg. 15b Find the Missing Points If the points on the coordinate plane below are three of the vertices of a rectangle, what are the coordinates of the fourth vertex? Remember that opposite sides of a rectangle are congruent (equal)! Example: 1) What is the missing point? 2) What is the perimeter of the rectangle? 3) What is the area of the rectangle? You Try: Graph the given coordinates below to find the missing ordered pair to finish the rectangle. (-3, 4), (-3, -2), (2, -2) 1) What is the missing point? 2) What is the perimeter of the rectangle? 3) What is the area of the rectangle? (-4,2) (-4,-3) (2,2) Pg.16a pg. 16b Reflecting a Polygon Using what we know about reflections, we can reflect a polygon across an axis as well. Simply reflect each _ and then redraw the figure. Example: Graph the following points to form a rectangle and then reflect it across the Y axis. A(1, 3) B(4, 3) C(1, -2) D(4, -2) A′ (-1, 3) B′ (-4, 3) C′ (-1, -2) D′ (-4, -2) A′ is said A “prime” and it represents the new, reflected, point. That way it is easy to match up the original point with its reflection. Remember: Perimeter is the sum of all the sides. Find the distance of each side and add them together. Area is the base times the height. Find those distances and then find the product.
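The count-the-spaces and absolute-value methods for side lengths, perimeter, and area can also be checked with a short Python sketch. This is an answer-checking aid for the teacher, not part of the packet; it uses the example rectangle A(1,1), B(1,3), C(5,3), D(5,1):

```python
def distance(p, q):
    """Distance between two points that share an x- or y-coordinate,
    found with absolute value, as in the worksheet."""
    (x1, y1), (x2, y2) = p, q
    if x1 == x2:
        return abs(y1 - y2)
    if y1 == y2:
        return abs(x1 - x2)
    raise ValueError("points must share a coordinate")

# Example rectangle A(1,1), B(1,3), C(5,3), D(5,1)
A, B, C, D = (1, 1), (1, 3), (5, 3), (5, 1)
sides = [distance(A, B), distance(B, C), distance(C, D), distance(D, A)]
print(sides)                     # [2, 4, 2, 4]
print("perimeter:", sum(sides))  # perimeter: 12
print("area:", distance(A, B) * distance(B, C))  # area: 8
```

The output matches the worked example: sides 2, 4, 2, 4; perimeter 12 units; area 8 units².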
You Try: Graph the following points to form a rectangle and then reflect it across the Y axis. A(2,5) B(5,5) C(2,-5) D(5, -5) A′ ( , ) B′ ( , ) C′ ( , ) D′ ( , ) Graph the following points to form a rectangle and then reflect it across the X axis. A(-4, 3) B(-4,1) C(3,3) D(3, 1) A′ ( , ) B′ ( , ) C′ ( , ) D′ ( , ) A B C D A′ B′ C′ D′ 1) What is the perimeter of the new rectangle? 2) What is the area of the new rectangle? 1) What is the perimeter of the new rectangle? 2) What is the area of the new rectangle? Pg.17a pg. 17b Unit 7 Study Guide Knowledge and Understanding 1) What does the absolute value of a number tell you about the number? 2) Describe how to use a number line to order integers. Proficiency of Skills 3) Evaluate |-15| = __ 4) Evaluate |2| = __ 5) Order from least to greatest: -10, 0, |-12|, -12, |-9| _ , _ , _ , _ , _ 6) Plot and label the following points on the coordinate plane A (-3, 2) B (0,-3) C (-2, -10) D (8,-5) 7) Finish labeling the number line below. Plot a point on 4 and its opposite. Application 8) Kellen has reached the peak of Mathclassrocks Mountain at 1,000 feet above sea level. He hikes down 400 feet to check out an old cannon. How many more feet must he hike to reach sea level ? (Hint: Drawing a picture may help to visualize the problem!!) 9) The table below shows today’s temperature for 5 cities in Alaska. a) Write an inequality statement comparing the temperature of King Salmon and Bethel: ___ b) Order the cities from warmest to coldest: City McKinley Park Bethel Fairbanks King Salmon Temperature (⁰Celsius) -22 -11 -20 -13 0 1 -1 Pg.18a pg. 18b 10) Graph point A (4, -8) on the coordinate plane. a) Reflect the point across the x-axis. b) What is the distance between point A and the reflected point? _ units Justify your answer: 11) Andrew owes $6.50 in late fees to the library. Represent this value on the number line below. Mark the point A (Hint: If he OWES, is that a positive or negative number?) 
a) Hayleigh owes $0.50 in late fees to the library. Plot a point for this value on the number line. Mark the point H. b) How much more does Andrew owe than Hayleigh? __ Use the map below for questions 12 – 14. 12) Name the ordered pair that represents the location of the gas station. 13) How many blocks apart are the hospital and the cemetery? ___ blocks 14) Name the building that is located in quadrant 3. _ 15) Graph (7,-3) and (7, 5) on the coordinate plane to the right. a) Reflect both points across the y-axis to form the vertices of a rectangle. b) Name the two reflected ordered pairs: _ & c) What is the perimeter of the rectangle? _ d) What is the area of the rectangle? ____ 0 1 -1 Pg.19a pg. 19b 16) If you reflected the ordered pair (-2, 5) across the x-axis, what would be the coordinates of the reflection? a) (-2, -5) b) (2, 5) c) (2, -5) d) (-2, 5) 17) Which statement below is NOT true? a) -3 < -1 b) -2 ≥ -5 c) -4 ≤ -14 d) -3 < 4 18) It is 89 degrees above zero in Miami. It is 20 degrees below zero in Anchorage. Use the number line below to determine how many degrees warmer it is in Miami than in Anchorage. a) 69⁰F b) 79⁰F c) 109⁰F d) 129⁰F 19) A Bolivian monkey is jumping around on a number line. He starts at -3 and jumps 8 units to the right. Where is he now on the number line? Performance Task 20) A newly developed neighborhood has dedicated a portion of their land to be used as a children’s playground. The neighborhood would like to build a fence around a rectangular area of 100 square yards for a dog run. The coordinate planes below each represent the dedicated land. Each square on the grid represents one square yard. Each yard of fencing costs $12. Develop two plans for the neighborhood to choose from. Label the coordinates of the vertices and determine the price of the fencing for each plan (based on the perimeter). Then write a letter to the neighborhood explaining which design you recommend and why. 
Plan 1 Plan 2

(Answer choices for #19: a) -5 b) -11 c) 11 d) 5. Number-line labels for #18: 89⁰F, 0⁰F, -20⁰F, and ticks 0, 1, -1.)
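For the performance task above, the fencing bill depends on the perimeter even though every plan encloses the same 100 square yards. A short Python sketch makes the comparison concrete; the 10-by-10 and 20-by-5 dimensions are made-up examples for illustration, not a required answer:

```python
# Compare fencing costs for candidate dog-run plans: each rectangle must
# enclose 100 square yards, and fencing costs $12 per yard of perimeter.
COST_PER_YARD = 12

def plan_cost(width, height):
    """Fence cost for a width-by-height rectangular dog run."""
    perimeter = 2 * (width + height)
    return perimeter * COST_PER_YARD

# Two hypothetical plans, both with area 100 square yards:
print(plan_cost(10, 10))  # 480  (10 x 10 square, perimeter 40 yards)
print(plan_cost(20, 5))   # 600  (20 x 5 rectangle, perimeter 50 yards)
```

The square plan is cheaper because, for a fixed area, the square has the smallest perimeter of any rectangle, which is a useful point for the recommendation letter.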
https://johnkerl.org/doc/eix.pdf
Derivation of sum and difference identities for sine and cosine

John Kerl

January 2, 2012

The authors of your trigonometry textbook give a geometric derivation of the sum and difference identities for sine and cosine. I find this argument unwieldy — I don’t expect you to remember it; in fact, I don’t remember it. There’s a standard algebraic derivation which is far simpler. The only catch is that you need to use complex arithmetic, which we don’t cover in Math 111. Nonetheless, I will present the derivation so that you will have seen how simple the truth can be, and so that you may come to understand it after you’ve had a few more math courses. And in fact, all you need are the following facts:

• Complex numbers are of the form a + bi, where a and b are real numbers and i is defined to be a square root of −1. That is, i^2 = −1. (Of course, (−i)^2 = −1 as well, so −i is the other square root of −1.)

• The number a is called the real part of a + bi; the number b is called the imaginary part of a + bi. All the real numbers you’re used to working with are already complex numbers — they simply have zero imaginary part.

• To add or subtract complex numbers, add the corresponding real and imaginary parts. For example, 2 + 3i plus 4 + 5i is 6 + 8i.

• To multiply two complex numbers a + bi and c + di, just FOIL out the product (a + bi)(c + di) and use the fact that i^2 = −1. Then collect like terms.

• The familiar exponential function f(x) = e^x takes real-valued input. However, it can be extended to take complex-valued input. All the usual rules for exponents apply, so e^(a+bi) = e^a e^(bi). We compute e^a as always — this is the same exponential function as always. The question is, what does it mean to raise e to an imaginary power? I assert to you that we write

    e^(bi) = cos(b) + i sin(b)

where the cosine and sine functions are as usual. This famous formula is called Euler’s formula (Euler is pronounced Oiler). You can read all about this formula on Wikipedia — also see their nice article on the complex numbers.

Given these facts, we can simply write down what e^(i(α+β)) is: the sum and difference formulas for sine and cosine fall out as a consequence. Using the usual rules for exponents, we can write this as

    e^(i(α+β)) = e^(iα) e^(iβ).

Now all we need to do is write out the two sides using Euler’s formula. The left-hand side is

    e^(i(α+β)) = cos(α + β) + i sin(α + β).

Using the definition, FOILing, and collecting like terms, the right-hand side is

    e^(iα) e^(iβ) = (cos α + i sin α)(cos β + i sin β)
                  = (cos α cos β − sin α sin β) + i (sin α cos β + cos α sin β).

Equating real and imaginary parts of the left-hand side and the right-hand side gives us, two for the price of one, the familiar sum identities for sine and cosine:

    sin(α + β) = sin α cos β + cos α sin β
    cos(α + β) = cos α cos β − sin α sin β.

Repeat this for e^(i(α−β)) to get the difference identities. You can do that — just remember that cosine and sine are even and odd functions, respectively, so cos(−β) = cos(β) and sin(−β) = −sin(β). In summary, we have:

    sin(α ± β) = sin α cos β ± cos α sin β
    cos(α ± β) = cos α cos β ∓ sin α sin β.
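The algebra above is easy to spot-check numerically. The following short Python script (an illustrative check, not part of the handout) uses the standard-library cmath module to confirm Euler's formula and the resulting sum identities for a pair of arbitrarily chosen angles:

```python
import cmath
import math

# Arbitrary test angles, in radians.
alpha, beta = 0.7, 1.3

# Euler's formula: e^(i b) = cos(b) + i sin(b).
lhs = cmath.exp(1j * beta)
rhs = complex(math.cos(beta), math.sin(beta))
assert abs(lhs - rhs) < 1e-12

# e^(i(alpha+beta)) = e^(i alpha) * e^(i beta); equating real and imaginary
# parts gives the sum identities for cosine and sine respectively.
z = cmath.exp(1j * alpha) * cmath.exp(1j * beta)
assert abs(z.real - (math.cos(alpha) * math.cos(beta)
                     - math.sin(alpha) * math.sin(beta))) < 1e-12
assert abs(z.imag - (math.sin(alpha) * math.cos(beta)
                     + math.cos(alpha) * math.sin(beta))) < 1e-12
assert abs(z.real - math.cos(alpha + beta)) < 1e-12
assert abs(z.imag - math.sin(alpha + beta)) < 1e-12
print("sum identities verified")
```

A numeric check of course proves nothing for all angles; the derivation does that. But it is a quick way to convince yourself that no sign was dropped along the way.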
http://fgw.gzlps.gov.cn/bmxxgk/zfxxgk/fdzdgknr/bmwj/202504/t20250414_87515674.html
Guiding Opinions of the National Development and Reform Commission and the National Energy Administration on Accelerating the Development of Virtual Power Plants

Fa Gai Neng Yuan [2025] No. 357 (发改能源〔2025〕357号)

To the development and reform commissions and energy bureaus of all provinces, autonomous regions, municipalities directly under the Central Government, and the Xinjiang Production and Construction Corps; the Beijing Municipal Commission of Urban Management; the Tianjin Municipal Bureau of Industry and Information Technology, the Liaoning Provincial Department of Industry and Information Technology, the Shanghai Municipal Commission of Economy and Informatization, the Chongqing Municipal Commission of Economy and Informatization, and the Gansu Provincial Department of Industry and Information Technology; the dispatched offices of the National Energy Administration; State Grid Corporation of China and China Southern Power Grid Co., Ltd.; and relevant central enterprises:

As construction of the new-type power system and of the electricity market accelerates, the conditions for developing virtual power plants are increasingly mature, their role increasingly significant, and demand for them increasingly strong. The following opinions are hereby put forward to accelerate the development of virtual power plants.

I. General requirements

Guided by Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era, fully implement the spirit of the 20th CPC National Congress and of the Second and Third Plenary Sessions of the 20th Central Committee, thoroughly implement the new energy security strategy of "Four Revolutions, One Cooperation," accelerate the growth of virtual power plants in scale and quality, and give full play to their regulating role. Build a unified understanding by clarifying the definition and functional positioning of virtual power plants. Remain open and inclusive by improving the policy and market systems that support their development. Ensure safety and reliability by bringing virtual power plants into the power safety management system and clarifying safety management requirements. Encourage diverse participation, supporting private enterprises and other forms of social capital in investing in, building, and operating virtual power plants according to their own strengths.

By 2027, the mechanisms for building, operating, and managing virtual power plants shall be mature and standardized, the mechanisms for their participation in the electricity market shall be sound, and nationwide virtual power plant regulation capacity shall reach at least 20 million kilowatts (20 GW). By 2030, application scenarios shall be further expanded, diverse business models shall develop through innovation, and nationwide regulation capacity shall reach at least 50 million kilowatts (50 GW).

II. Standardize the definition and positioning of virtual power plants

(1) Definition. A virtual power plant is a mode of organizing power-system operation that, building on the power system architecture and using modern information and communication technology and integrated system control, aggregates distributed generation, adjustable loads, energy storage, and other dispersed resources, and participates as a new type of market entity in power system optimization and electricity market transactions.

(2) Functional positioning. Virtual power plants play an important role in strengthening power supply security, promoting the consumption of new energy, and improving the electricity market system. In system operation, they can provide peak shaving, frequency regulation, reserves, and other regulation services. In demand-side management, they can organize load resources to carry out demand response. In market trading, they can aggregate dispersed resources to participate in transactions.

III. Actively promote development of virtual power plants according to local conditions

(3) Accelerate the cultivation of virtual power plant entities. Provincial competent departments shall formulate virtual power plant development plans suited to local conditions, making arrangements for development scale, business types, operating models, and technical requirements; cultivate entities with different characteristics to meet provincial and municipal power regulation needs; improve the development system; and accelerate large-scale development around scenarios such as aggregating dispersed power resources, enhancing flexible regulation capability, narrowing supply gaps, and promoting new energy consumption. Energy enterprises, enterprises upstream and downstream in the energy industry chain, and other enterprises of all kinds are encouraged to invest in virtual power plants; private enterprises are strongly supported in participating in investment, development, and operation, jointly driving innovation in technology and business models.

(4) Continuously enrich business models. Provincial competent departments and relevant units shall enable virtual power plants, on the basis of their core functions, to participate fairly in the various electricity markets or in demand response and obtain corresponding returns. Virtual power plants are encouraged to innovate by providing integrated energy services such as energy-saving services, energy data analysis, energy solution design, and carbon-trading-related services, broadening their revenue channels.

IV. Continuously raise the level of construction and operation management

(5) Establish sound construction and operation management mechanisms. Provincial competent departments shall organize the formulation of local administrative measures for virtual power plant construction and operation, unify provincial norms, and clarify the processes for project construction, access management, system commissioning, capability testing, and commencement of operation, improving project implementation and operating efficiency. They shall dynamically monitor and evaluate operating performance and development trends and continuously improve the management system.

(6) Improve access and dispatch mechanisms. Virtual power plants shall connect to the appropriate systems according to the technical requirements of the businesses in which they participate, the progress of electricity market construction, and operational management needs. Those participating in demand response connect to the new-type power load management system (the "load system"); those participating in the electricity spot market or ancillary services market connect to the power dispatch automation system (the "dispatch system"), or may participate in some trading products via the load system. Power dispatch agencies and power load management centers shall optimize their working mechanisms in line with development needs, provide system-access services, assess virtual power plants' regulation capability, ensure that entry requirements for the market or for demand response are met, and carry out dispatch or resource organization efficiently and in an orderly way.

(7) Raise the level of resource aggregation. Virtual power plant operators shall establish technical support systems with information-exchange functions for monitoring, forecasting, and the decomposition and execution of instructions; respond to instructions from the dispatch system or load system under the relevant rules; and optimally control the aggregated resources. Resources aggregated by a virtual power plant participating in the spot market should in principle be located at the same market clearing node; where grid conditions and market rules allow, resources may also be aggregated across nodes. A single resource may not be aggregated by two or more virtual power plants at the same time.

V. Improve the mechanisms for virtual power plants' participation in electricity markets

(8) Clarify conditions for market entry. After satisfying the Basic Rules for Electricity Market Registration and the entry requirements of the relevant market, a virtual power plant may participate as an independent entity in the medium- and long-term market, the spot market, and the ancillary services market. While being aggregated by a virtual power plant, individual dispersed resources may not also participate in market transactions separately. Provincial competent departments and the dispatched offices of the National Energy Administration shall, within their respective duties, clarify and publish detailed rules for virtual power plants' participation in the various electricity markets. In the early stage of participation, entry requirements may be appropriately relaxed in light of actual conditions and then gradually refined based on operating experience.

(9) Improve mechanisms for participation in the energy market. Accelerate the participation of virtual power plants, as resource-aggregating new-type market entities, in medium- and long-term and spot market trading as integrated wholes, and clarify the principles for calculating energy volumes and charges. Virtual power plants conducting electricity purchase and sale in the medium- and long-term and spot markets shall hold electricity retailer qualifications. Improve price formation in the medium- and long-term market and appropriately widen the spot market price limits. In regions where conditions permit, actively explore virtual power plants' participation in inter-provincial electricity trading.

(10) Improve mechanisms for participation in the ancillary services market. Accelerate the opening of the ancillary services market to virtual power plants, and improve trading products and technical requirements to suit their characteristics. Improve trading and pricing mechanisms, setting bid price caps for each type of ancillary service fairly, without different caps for different types of entities. Establish assessment mechanisms suited to the development stage of virtual power plants to ensure the reliability of their regulation capability.

(11) Optimize demand response mechanisms. Improve the mechanism for virtual power plants' participation in market-based demand response, expand the scale of aggregated demand-side resources, and raise their response level. Under the principle of "whoever provides the service profits, and whoever benefits bears the cost," set demand response compensation standards reasonably; demand response price caps are to be determined by provincial price departments, with cost-sharing mechanisms improved in parallel.

VI. Raise the level of safe operation

(12) Raise the safety level of virtual power plants' participation in system operation. Virtual power plants within the scope of grid-related safety management shall accept unified dispatch by power dispatch agencies and comply with grid-related safety rules and regulations. Their interaction with the dispatch system, load system, and other systems shall satisfy those systems' cybersecurity protection requirements, with strengthened hardware and software configuration and security monitoring to ensure safety and reliability. Include virtual power plants in power-safety emergency simulation drills, formulate grid contingency plans and handling procedures, clarify the responsibilities and division of labor between virtual power plants and the departments of grid enterprises, and continuously improve emergency response and rapid recovery. Depending on the system they connect to, virtual power plants shall periodically submit lists of aggregated resources and change requests to the power load management center or the power dispatch agency. In grid emergencies, aggregated resources shall execute regulation instructions as required.

(13) Raise virtual power plants' own safety level. Operators shall strengthen their own safety management, implement technical supervision requirements, and specify in the relevant agreements the safety responsibilities of the virtual power plant and of each dispersed resource. Accelerate the building of cybersecurity protection systems, strictly comply with the Provisions on the Security Protection of Power Monitoring Systems and other policies, regulations, and standards, and implement cybersecurity protection requirements. Strengthen data security management, use cryptographic products that meet the requirements, and ensure that data are encrypted at the source and protected against tampering. Strengthen anomaly monitoring to detect and promptly eliminate hidden data security risks.

VII. Promote technological innovation and the standards system for virtual power plants

(14) Strengthen R&D and application of key technologies. Pursue key technologies in resource aggregation, regulation capability, intelligent control, trading decision support, safety and stability, and evaluation and testing; advance the development and application of intelligent metering and communication technologies to enable broad sensing, precise response, and high-speed interconnection of dispersed resources; and continuously improve virtual power plants' regulation performance and operational control.

(15) Establish a sound standards system covering all links. Accelerate the initiation, drafting, and publication of technical standards for aggregated response, grid connection and control, intelligent metering, data exchange, and security protection. For areas urgently needed by the industry but not yet covered by standards, provide initial regulation through technical guidelines and other policy documents. Revise clauses of published standards that no longer fit actual development, enhancing their applicability.

VIII. Strengthen organization and implementation

(16) Implement the responsibilities of all parties. Provincial competent departments shall take the lead in establishing and improving working mechanisms for virtual power plant development, coordinate the resolution of problems arising in the process, and support relevant parties in establishing exchange platforms for virtual power plant development; work with relevant departments to prepare policy documents such as development plans and construction and operation management norms; and, together with the dispatched offices of the National Energy Administration and according to the division of labor, prepare the supporting documents for virtual power plants' participation in electricity market trading. Provincial price departments shall take the lead in improving the related price policies. Grid enterprises and electricity market operating institutions shall continuously improve their services for virtual power plants' participation in system operation and the electricity market, and virtual power plant operators shall efficiently organize dispersed resources to interact with the power system. Grid enterprises and virtual power plant operators bear responsibility for secure power supply according to their respective duties. The dispatched offices of the National Energy Administration shall strengthen supervision, identify problems promptly, and help resolve them within their duties.

(17) Improve supporting policies. Actively implement the "two new" policies (large-scale equipment renewal and consumer goods trade-in programs) and provide funding support for qualified virtual power plant projects. Encourage financial institutions to support virtual power plants with low-interest loans, credit guarantees, green bonds, and similar instruments.

(18) Strengthen evaluation and promotion. The National Development and Reform Commission and the National Energy Administration will, in light of virtual power plant construction and operation, summarize advanced projects and experience in due course, promote them, and refine assessment and related policies. Provincial competent departments shall strengthen publicity, training, and policy interpretation regarding the supporting policies, standards and norms, and market-based operating mechanisms for virtual power plants, cultivate advanced virtual power plant operators, and foster a favorable environment for development.

National Development and Reform Commission
National Energy Administration
March 25, 2025

Hosted by: Liupanshui Municipal Development and Reform Commission
https://pubmed.ncbi.nlm.nih.gov/22424586/
Zolpidem for insomnia - PubMed

Review
Expert Opin Pharmacother. 2012 Apr;13(6):879-93. doi: 10.1517/14656566.2012.667074. Epub 2012 Mar 19.

Zolpidem for insomnia
David J Greenblatt 1, Thomas Roth

Affiliation: 1 Tufts University School of Medicine, Department of Molecular Physiology and Pharmacology, 136 Harrison Avenue, Boston, MA 02111, USA. dj.greenblatt@tufts.edu

PMID: 22424586  DOI: 10.1517/14656566.2012.667074

Abstract

Introduction: The imidazopyridine derivative zolpidem, which acts as a benzodiazepine (BZ) receptor agonist, is the most widely prescribed hypnotic drug in the US.

Areas covered: This review addresses the neuroreceptor properties of zolpidem; clinical pharmacokinetics, pharmacodynamics and drug interactions; efficacy as a hypnotic; adverse effects; tolerance, dependence and withdrawal; relation to motor vehicle accidents and complex sleep behaviors; and new dosage forms.

Expert opinion: Approved doses of zolpidem (10 mg for adults, 5 mg for the elderly) are consistently effective in reducing sleep latency and consequently increasing sleep duration in patients with insomnia. However, favorable effects on sleep maintenance are observed less consistently. Residual daytime effects are unlikely with recommended doses, and provided that at least 8 h elapse prior to arising. Hypnotic efficacy is maintained with repeated nightly use, and the risk of rebound insomnia is low. Dependence and abuse of zolpidem are no more likely to occur than with typical benzodiazepines. Newly available novel dosage forms of zolpidem have increased therapeutic options for patients with insomnia variants such as sleep maintenance insomnia and middle-of-the-night awakening.
doi: 10.1007/s00198-016-3605-8. Epub 2016 Apr 22.Osteoporos Int. 2016.PMID: 27105645 See all "Cited by" articles Publication types Review Actions Search in PubMed Search in MeSH Add to Search MeSH terms Animals Actions Search in PubMed Search in MeSH Add to Search Humans Actions Search in PubMed Search in MeSH Add to Search Hypnotics and Sedatives / adverse effects Actions Search in PubMed Search in MeSH Add to Search Hypnotics and Sedatives / pharmacokinetics Actions Search in PubMed Search in MeSH Add to Search Hypnotics and Sedatives / pharmacology Actions Search in PubMed Search in MeSH Add to Search Hypnotics and Sedatives / therapeutic use Actions Search in PubMed Search in MeSH Add to Search Pyridines / adverse effects Actions Search in PubMed Search in MeSH Add to Search Pyridines / pharmacokinetics Actions Search in PubMed Search in MeSH Add to Search Pyridines / pharmacology Actions Search in PubMed Search in MeSH Add to Search Pyridines / therapeutic use Actions Search in PubMed Search in MeSH Add to Search Sleep Initiation and Maintenance Disorders / drug therapy Actions Search in PubMed Search in MeSH Add to Search Zolpidem Actions Search in PubMed Search in MeSH Add to Search Substances Hypnotics and Sedatives Actions Search in PubMed Search in MeSH Add to Search Pyridines Actions Search in PubMed Search in MeSH Add to Search Zolpidem Actions Search in PubMed Search in MeSH Add to Search Related information MedGen PubChem Compound PubChem Compound (MeSH Keyword) PubChem Substance [x] Cite Copy Download .nbib.nbib Format: Send To Clipboard Email Save My Bibliography Collections Citation Manager [x] NCBI Literature Resources MeSHPMCBookshelfDisclaimer The PubMed wordmark and PubMed logo are registered trademarks of the U.S. Department of Health and Human Services (HHS). Unauthorized use of these marks is strictly prohibited. 
Follow NCBI Connect with NLM National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894 Web Policies FOIA HHS Vulnerability Disclosure Help Accessibility Careers NLM NIH HHS USA.gov
746
https://farside.ph.utexas.edu/teaching/qmech/qmech.pdf
Quantum Mechanics

Richard Fitzpatrick
Professor of Physics
The University of Texas at Austin

Contents

1 Introduction
  1.1 Intended audience
  1.2 Major Sources
  1.3 Aim of Course
  1.4 Outline of Course
2 Probability Theory
  2.1 Introduction
  2.2 What is Probability?
  2.3 Combining Probabilities
  2.4 Mean, Variance, and Standard Deviation
  2.5 Continuous Probability Distributions
3 Wave-Particle Duality
  3.1 Introduction
  3.2 Wavefunctions
  3.3 Plane Waves
  3.4 Representation of Waves via Complex Functions
  3.5 Classical Light Waves
  3.6 Photoelectric Effect
  3.7 Quantum Theory of Light
  3.8 Classical Interference of Light Waves
  3.9 Quantum Interference of Light
  3.10 Classical Particles
  3.11 Quantum Particles
  3.12 Wave Packets
  3.13 Evolution of Wave Packets
  3.14 Heisenberg's Uncertainty Principle
  3.15 Schrödinger's Equation
  3.16 Collapse of the Wave Function
4 Fundamentals of Quantum Mechanics
  4.1 Introduction
  4.2 Schrödinger's Equation
  4.3 Normalization of the Wavefunction
  4.4 Expectation Values and Variances
  4.5 Ehrenfest's Theorem
  4.6 Operators
  4.7 Momentum Representation
  4.8 Heisenberg's Uncertainty Principle
  4.9 Eigenstates and Eigenvalues
  4.10 Measurement
  4.11 Continuous Eigenvalues
  4.12 Stationary States
5 One-Dimensional Potentials
  5.1 Introduction
  5.2 Infinite Potential Well
  5.3 Square Potential Barrier
  5.4 WKB Approximation
  5.5 Cold Emission
  5.6 Alpha Decay
  5.7 Square Potential Well
  5.8 Simple Harmonic Oscillator
6 Multi-Particle Systems
  6.1 Introduction
  6.2 Fundamental Concepts
  6.3 Non-Interacting Particles
  6.4 Two-Particle Systems
  6.5 Identical Particles
7 Three-Dimensional Quantum Mechanics
  7.1 Introduction
  7.2 Fundamental Concepts
  7.3 Particle in a Box
  7.4 Degenerate Electron Gases
  7.5 White-Dwarf Stars
8 Orbital Angular Momentum
  8.1 Introduction
  8.2 Angular Momentum Operators
  8.3 Representation of Angular Momentum
  8.4 Eigenstates of Angular Momentum
  8.5 Eigenvalues of Lz
  8.6 Eigenvalues of L2
  8.7 Spherical Harmonics
9 Central Potentials
  9.1 Introduction
  9.2 Derivation of Radial Equation
  9.3 Infinite Spherical Potential Well
  9.4 Hydrogen Atom
  9.5 Rydberg Formula
10 Spin Angular Momentum
  10.1 Introduction
  10.2 Spin Operators
  10.3 Spin Space
  10.4 Eigenstates of Sz and S2
  10.5 Pauli Representation
  10.6 Spin Precession
11 Addition of Angular Momentum
  11.1 Introduction
  11.2 General Principles
  11.3 Angular Momentum in the Hydrogen Atom
  11.4 Two Spin One-Half Particles
12 Time-Independent Perturbation Theory
  12.1 Introduction
  12.2 Improved Notation
  12.3 Two-State System
  12.4 Non-Degenerate Perturbation Theory
  12.5 Quadratic Stark Effect
  12.6 Degenerate Perturbation Theory
  12.7 Linear Stark Effect
  12.8 Fine Structure of Hydrogen
  12.9 Zeeman Effect
  12.10 Hyperfine Structure
13 Time-Dependent Perturbation Theory
  13.1 Introduction
  13.2 Preliminary Analysis
  13.3 Two-State System
  13.4 Spin Magnetic Resonance
  13.5 Perturbation Expansion
  13.6 Harmonic Perturbations
  13.7 Electromagnetic Radiation
  13.8 Electric Dipole Approximation
  13.9 Spontaneous Emission
  13.10 Radiation from a Harmonic Oscillator
  13.11 Selection Rules
  13.12 2P → 1S Transitions in Hydrogen
  13.13 Intensity Rules
  13.14 Forbidden Transitions
14 Variational Methods
  14.1 Introduction
  14.2 Variational Principle
  14.3 Helium Atom
  14.4 Hydrogen Molecule Ion
15 Scattering Theory
  15.1 Introduction
  15.2 Fundamentals
  15.3 Born Approximation
  15.4 Partial Waves
  15.5 Determination of Phase-Shifts
  15.6 Hard Sphere Scattering
  15.7 Low Energy Scattering
  15.8 Resonances

1 Introduction

1.1 Intended audience

These lecture notes outline a single semester course on non-relativistic quantum mechanics which is primarily intended for upper-division undergraduate physics majors. The course assumes some previous knowledge of physics and mathematics. In particular, prospective students should be reasonably familiar with Newtonian dynamics, elementary classical electromagnetism and special relativity, the physics and mathematics of waves (including the representation of waves via complex functions), basic probability theory, ordinary and partial differential equations, linear algebra, vector algebra, and Fourier series and transforms.

1.2 Major Sources

The textbooks which I have consulted most frequently whilst developing course material are:

The Principles of Quantum Mechanics, P.A.M. Dirac, 4th Edition (revised), (Oxford University Press, Oxford UK, 1958).
Quantum Mechanics, E. Merzbacher, 2nd Edition, (John Wiley & Sons, New York NY, 1970).
Introduction to the Quantum Theory, D. Park, 2nd Edition, (McGraw-Hill, New York NY, 1974).
Modern Quantum Mechanics, J.J. Sakurai, (Benjamin/Cummings, Menlo Park CA, 1985).
Quantum Theory, D. Bohm, (Dover, New York NY, 1989).
Problems in Quantum Mechanics, G.L. Squires, (Cambridge University Press, Cambridge UK, 1995).
Quantum Physics, S. Gasiorowicz, 2nd Edition, (John Wiley & Sons, New York NY, 1996).
Nonclassical Physics, R. Harris, (Addison-Wesley, Menlo Park CA, 1998).
Introduction to Quantum Mechanics, D.J. Griffiths, 2nd Edition, (Pearson Prentice Hall, Upper Saddle River NJ, 2005).

1.3 Aim of Course

The aim of this course is to develop non-relativistic quantum mechanics as a complete theory of microscopic dynamics, capable of making detailed predictions, with a minimum of abstract mathematics.

1.4 Outline of Course

The first part of the course is devoted to an in-depth exploration of the basic principles of quantum mechanics. After a brief review of probability theory, in Chapter 2, we shall start, in Chapter 3, by examining how many of the central ideas of quantum mechanics are a direct consequence of wave-particle duality, i.e., the concept that waves sometimes act as particles, and particles as waves. We shall then proceed to investigate the rules of quantum mechanics in a more systematic fashion in Chapter 4. Quantum mechanics is used to examine the motion of a single particle in one dimension, many particles in one dimension, and a single particle in three dimensions, in Chapters 5, 6, and 7, respectively. Chapter 8 is devoted to the investigation of orbital angular momentum, and Chapter 9 to the closely related subject of particle motion in a central potential. Finally, in Chapters 10 and 11, we shall examine spin angular momentum, and the addition of orbital and spin angular momentum, respectively.

The second part of this course describes selected practical applications of quantum mechanics. In Chapter 12, time-independent perturbation theory is used to investigate the Stark effect, the Zeeman effect, fine structure, and hyperfine structure, in the hydrogen atom. Time-dependent perturbation theory is employed to study radiative transitions in the hydrogen atom in Chapter 13.
Chapter 14 illustrates the use of variational methods in quantum mechanics. Finally, Chapter 15 contains an introduction to quantum scattering theory.

2 Probability Theory

2.1 Introduction

This section is devoted to a brief, and fairly low-level, introduction to a branch of mathematics known as probability theory.

2.2 What is Probability?

What is the scientific definition of probability? Well, let us consider an observation made on a general system, S. This can result in any one of a number of different possible outcomes. Suppose that we wish to find the probability of some general outcome, X. In order to ascribe a probability, we have to consider the system as a member of a large set, Σ, of similar systems. Mathematicians have a fancy name for a large group of similar systems. They call such a group an ensemble, which is just the French for "group." So, let us consider an ensemble, Σ, of similar systems, S. The probability of the outcome X is defined as the ratio of the number of systems in the ensemble which exhibit this outcome to the total number of systems, in the limit that the latter number tends to infinity. We can write this symbolically as

P(X) = lim_{Ω(Σ)→∞} Ω(X)/Ω(Σ),   (2.1)

where Ω(Σ) is the total number of systems in the ensemble, and Ω(X) the number of systems exhibiting the outcome X. We can see that the probability P(X) must be a number between 0 and 1. The probability is zero if no systems exhibit the outcome X, even when the number of systems goes to infinity. This is just another way of saying that there is no chance of the outcome X. The probability is unity if all systems exhibit the outcome X in the limit as the number of systems goes to infinity. This is another way of saying that the outcome X is bound to occur.

2.3 Combining Probabilities

Consider two distinct possible outcomes, X and Y, of an observation made on the system S, with probabilities of occurrence P(X) and P(Y), respectively.
Let us determine the probability of obtaining the outcome X or Y, which we shall denote P(X | Y). From the basic definition of probability,

P(X | Y) = lim_{Ω(Σ)→∞} Ω(X | Y)/Ω(Σ),   (2.2)

where Ω(X | Y) is the number of systems in the ensemble which exhibit either the outcome X or the outcome Y. Now,

Ω(X | Y) = Ω(X) + Ω(Y)   (2.3)

if the outcomes X and Y are mutually exclusive (which must be the case if they are two distinct outcomes). Thus,

P(X | Y) = P(X) + P(Y).   (2.4)

So, the probability of the outcome X or the outcome Y is just the sum of the individual probabilities of X and Y. For instance, with a six-sided die the probability of throwing any particular number (one to six) is 1/6, because all of the possible outcomes are considered to be equally likely. It follows, from what has just been said, that the probability of throwing either a one or a two is simply 1/6 + 1/6, which equals 1/3.

Let us denote all of the M, say, possible outcomes of an observation made on the system S by X_i, where i runs from 1 to M. Let us determine the probability of obtaining any of these outcomes. This quantity is unity, from the basic definition of probability, because each of the systems in the ensemble must exhibit one of the possible outcomes. But, this quantity is also equal to the sum of the probabilities of all the individual outcomes, by (2.4), so we conclude that this sum is equal to unity: i.e.,

Σ_{i=1}^{M} P(X_i) = 1.   (2.5)

The above expression is called the normalization condition, and must be satisfied by any complete set of probabilities. This condition is equivalent to the self-evident statement that an observation of a system must definitely result in one of its possible outcomes.

There is another way in which we can combine probabilities. Suppose that we make an observation on a system picked at random from the ensemble, and then pick a second system completely independently and make another observation.
We are assuming here that the first observation does not influence the second observation in any way. The fancy mathematical way of saying this is that the two observations are statistically independent. Let us determine the probability of obtaining the outcome X in the first system and the outcome Y in the second system, which we shall denote P(X ⊗ Y). In order to determine this probability, we have to form an ensemble of all of the possible pairs of systems which we could choose from the ensemble Σ. Let us denote this ensemble Σ ⊗ Σ. The number of pairs of systems in this new ensemble is just the square of the number of systems in the original ensemble, so

Ω(Σ ⊗ Σ) = Ω(Σ) Ω(Σ).   (2.6)

Furthermore, the number of pairs of systems in the ensemble Σ ⊗ Σ which exhibit the outcome X in the first system and Y in the second system is simply the product of the number of systems which exhibit the outcome X and the number of systems which exhibit the outcome Y in the original ensemble, so that

Ω(X ⊗ Y) = Ω(X) Ω(Y).   (2.7)

It follows from the basic definition of probability that

P(X ⊗ Y) = lim_{Ω(Σ)→∞} Ω(X ⊗ Y)/Ω(Σ ⊗ Σ) = P(X) P(Y).   (2.8)

Thus, the probability of obtaining the outcomes X and Y in two statistically independent observations is the product of the individual probabilities of X and Y. For instance, the probability of throwing a one and then a two on a six-sided die is 1/6 × 1/6, which equals 1/36.

2.4 Mean, Variance, and Standard Deviation

What is meant by the mean or average of a quantity? Well, suppose that we wished to calculate the average age of undergraduates at the University of Texas at Austin. We could go to the central administration building and find out how many eighteen year-olds, nineteen year-olds, etc. were currently enrolled. We would then write something like

Average Age ≃ (N18 × 18 + N19 × 19 + N20 × 20 + · · ·)/(N18 + N19 + N20 + · · ·),   (2.9)

where N18 is the number of enrolled eighteen year-olds, etc.
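Formula (2.9) is an ordinary weighted average, and is easy to check numerically. A minimal Python sketch follows; the enrollment counts below are invented purely for illustration, not real data:

```python
# Weighted average of Eq. (2.9): sum(N_age * age) / sum(N_age).
# The enrollment counts are made-up illustrative numbers.
counts = {18: 1200, 19: 1100, 20: 950, 21: 800, 22: 450}

n_students = sum(counts.values())
average_age = sum(n * age for age, n in counts.items()) / n_students

# The same average written as a probability-weighted sum, with
# P_age = N_age / N_students; the two forms agree identically.
average_age_via_p = sum((n / n_students) * age for age, n in counts.items())

print(f"average age = {average_age:.2f}")
```

Dividing each count by the total number of students turns (2.9) into a probability-weighted sum, which is the form the notes generalize next.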
Suppose that we were to pick a student at random and then ask "What is the probability of this student being eighteen?" From what we have already discussed, this probability is defined

P18 ≃ N18/Nstudents,   (2.10)

where Nstudents is the total number of enrolled students. (Actually, this definition is only accurate in the limit that Nstudents is very large.) We can now see that the average age takes the form

Average Age ≃ P18 × 18 + P19 × 19 + P20 × 20 + · · · .   (2.11)

Well, there is nothing special about the age distribution of students at UT Austin. So, for a general variable u, which can take on any one of M possible values u_1, u_2, · · ·, u_M, with corresponding probabilities P(u_1), P(u_2), · · ·, P(u_M), the mean or average value of u, which is denoted ⟨u⟩, is defined as

⟨u⟩ ≡ Σ_{i=1}^{M} P(u_i) u_i.   (2.12)

Suppose that f(u) is some function of u. Then, for each of the M possible values of u there is a corresponding value of f(u) which occurs with the same probability. Thus, f(u_1) corresponds to u_1 and occurs with the probability P(u_1), and so on. It follows from our previous definition that the mean value of f(u) is given by

⟨f(u)⟩ ≡ Σ_{i=1}^{M} P(u_i) f(u_i).   (2.13)

Suppose that f(u) and g(u) are two general functions of u. It follows that

⟨f(u) + g(u)⟩ = Σ_{i=1}^{M} P(u_i) [f(u_i) + g(u_i)] = Σ_{i=1}^{M} P(u_i) f(u_i) + Σ_{i=1}^{M} P(u_i) g(u_i),   (2.14)

so

⟨f(u) + g(u)⟩ = ⟨f(u)⟩ + ⟨g(u)⟩.   (2.15)

Finally, if c is a general constant then

⟨c f(u)⟩ = c ⟨f(u)⟩.   (2.16)

We now know how to define the mean value of the general variable u. But, how can we characterize the scatter around the mean value? We could investigate the deviation of u from its mean value ⟨u⟩, which is denoted

∆u ≡ u − ⟨u⟩.   (2.17)

In fact, this is not a particularly interesting quantity, since its average is zero:

⟨∆u⟩ = ⟨(u − ⟨u⟩)⟩ = ⟨u⟩ − ⟨u⟩ = 0.   (2.18)

This is another way of saying that the average deviation from the mean vanishes. A more interesting quantity is the square of the deviation.
The average value of this quantity,

⟨(∆u)²⟩ = Σ_{i=1}^{M} P(u_i) (u_i − ⟨u⟩)²,   (2.19)

is usually called the variance. The variance is a positive number, unless there is no scatter at all in the distribution, so that all possible values of u correspond to the mean value ⟨u⟩, in which case it is zero. The following general relation is often useful:

⟨(u − ⟨u⟩)²⟩ = ⟨u² − 2 u ⟨u⟩ + ⟨u⟩²⟩ = ⟨u²⟩ − 2 ⟨u⟩⟨u⟩ + ⟨u⟩²,   (2.20)

giving

⟨(∆u)²⟩ = ⟨u²⟩ − ⟨u⟩².   (2.21)

The variance of u is proportional to the square of the scatter of u around its mean value. A more useful measure of the scatter is given by the square root of the variance,

σ_u = [⟨(∆u)²⟩]^{1/2},   (2.22)

which is usually called the standard deviation of u. The standard deviation is essentially the width of the range over which u is distributed around its mean value ⟨u⟩.

2.5 Continuous Probability Distributions

Suppose, now, that the variable u can take on a continuous range of possible values. In general, we expect the probability that u takes on a value in the range u to u + du to be directly proportional to du, in the limit that du → 0. In other words,

P(u ∈ u : u + du) = P(u) du,   (2.23)

where P(u) is known as the probability density. The earlier results (2.5), (2.12), and (2.19) generalize in a straightforward manner to give

1 = ∫_{−∞}^{∞} P(u) du,   (2.24)

⟨u⟩ = ∫_{−∞}^{∞} P(u) u du,   (2.25)

⟨(∆u)²⟩ = ∫_{−∞}^{∞} P(u) (u − ⟨u⟩)² du = ⟨u²⟩ − ⟨u⟩²,   (2.26)

respectively.

Exercises

1. In the "game" of Russian roulette, the player inserts a single cartridge into the drum of a revolver, leaving the other five chambers of the drum empty. The player then spins the drum, aims at his/her head, and pulls the trigger.
(a) What is the probability of the player still being alive after playing the game N times?
(b) What is the probability of the player surviving N − 1 turns in this game, and then being shot the Nth time he/she pulls the trigger?
(c) What is the mean number of times the player gets to pull the trigger?

2.
Suppose that the probability density for the speed s of a car on a road is given by

P(s) = A s exp(−s/s0),

where 0 ≤ s ≤ ∞. Here, A and s0 are positive constants. More explicitly, P(s) ds gives the probability that a car has a speed between s and s + ds.
(a) Determine A in terms of s0.
(b) What is the mean value of the speed?
(c) What is the "most probable" speed: i.e., the speed for which the probability density has a maximum?
(d) What is the probability that a car has a speed more than three times as large as the mean value?

3. A radioactive atom has a uniform decay probability per unit time w: i.e., the probability of decay in a time interval dt is w dt. Let P(t) be the probability of the atom not having decayed at time t, given that it was created at time t = 0. Demonstrate that P(t) = e^{−w t}. What is the mean lifetime of the atom?

3 Wave-Particle Duality

3.1 Introduction

In classical mechanics, waves and particles are two completely distinct types of physical entity. Waves are continuous and spatially extended, whereas particles are discrete and have little or no spatial extent. However, in quantum mechanics, waves sometimes act as particles, and particles sometimes act as waves. This strange behaviour is known as wave-particle duality. In this chapter, we shall examine how wave-particle duality shapes the general features of quantum mechanics.

3.2 Wavefunctions

A wave is defined as a disturbance in some physical system which is periodic in both space and time. In one dimension, a wave is generally represented in terms of a wavefunction: e.g.,

ψ(x, t) = A cos(k x − ω t + ϕ),   (3.1)

where x represents position, t represents time, and A, k, ω > 0. For instance, if we are considering a sound wave then ψ(x, t) might correspond to the pressure perturbation associated with the wave at position x and time t.
On the other hand, if we are considering a light wave then ψ(x, t) might represent the wave's transverse electric field. As is well-known, the cosine function, cos(θ), is periodic in its argument, θ, with period 2π: i.e., cos(θ + 2π) = cos θ for all θ. The function also oscillates between the minimum and maximum values −1 and +1, respectively, as θ varies. It follows that the wavefunction (3.1) is periodic in x with period λ = 2π/k: i.e., ψ(x + λ, t) = ψ(x, t) for all x and t. Moreover, the wavefunction is periodic in t with period T = 2π/ω: i.e., ψ(x, t + T) = ψ(x, t) for all x and t. Finally, the wavefunction oscillates between the minimum and maximum values −A and +A, respectively, as x and t vary. The spatial period of the wave, λ, is known as its wavelength, and the temporal period, T, is called its period. Furthermore, the quantity A is termed the wave amplitude, the quantity k the wavenumber, and the quantity ω the wave angular frequency. Note that the units of ω are radians per second. The conventional wave frequency, in cycles per second (otherwise known as hertz), is ν = 1/T = ω/2π. Finally, the quantity ϕ, appearing in expression (3.1), is termed the phase angle, and determines the exact positions of the wave maxima and minima at a given time. In fact, the maxima are located at k x − ω t + ϕ = j 2π, where j is an integer. This follows because the maxima of cos(θ) occur at θ = j 2π. Note that a given maximum satisfies x = (j − ϕ/2π) λ + v t, where v = ω/k. It follows that the maximum, and, by implication, the whole wave, propagates in the positive x-direction at the velocity ω/k.

[Figure 3.1: The solution of n · r = d is a plane.]

Analogous reasoning reveals that

ψ(x, t) = A cos(−k x − ω t + ϕ) = A cos(k x + ω t − ϕ),   (3.2)

is the wavefunction of a wave of amplitude A, wavenumber k, angular frequency ω, and phase angle ϕ, which propagates in the negative x-direction at the velocity ω/k.
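The stated properties of wavefunction (3.1), spatial period λ = 2π/k, temporal period T = 2π/ω, and maxima travelling at the velocity v = ω/k, can be confirmed with a short numerical sketch (the values of A, k, ω, and ϕ below are arbitrary choices, not taken from the notes):

```python
import math

A, k, omega, phi = 2.0, 3.0, 5.0, 0.7   # arbitrary illustrative parameters

def psi(x, t):
    """Wavefunction (3.1): a plane wave propagating in the +x direction."""
    return A * math.cos(k * x - omega * t + phi)

lam = 2 * math.pi / k     # wavelength, lambda = 2 pi / k
T = 2 * math.pi / omega   # period, T = 2 pi / omega
v = omega / k             # phase velocity

x0, t0 = 0.4, 1.3
# Periodicity in x with period lambda, and in t with period T
assert math.isclose(psi(x0 + lam, t0), psi(x0, t0), abs_tol=1e-9)
assert math.isclose(psi(x0, t0 + T), psi(x0, t0), abs_tol=1e-9)

# The j = 0 maximum sits where k x - omega t + phi = 0, i.e. x = v t - phi/k,
# so it moves in the +x direction at the phase velocity v = omega/k.
for t in (0.0, 0.5, 1.0):
    assert math.isclose(psi(v * t - phi / k, t), A, abs_tol=1e-9)

print("periodicity and phase-velocity checks pass")
```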
3.3 Plane Waves

As we have just seen, a wave of amplitude A, wavenumber k, angular frequency ω, and phase angle ϕ, propagating in the positive x-direction, is represented by the following wavefunction: ψ(x, t) = A cos(k x − ω t + ϕ). (3.3) Now, the type of wave represented above is conventionally termed a one-dimensional plane wave. It is one-dimensional because its associated wavefunction only depends on the single Cartesian coordinate x. Furthermore, it is a plane wave because the wave maxima, which are located at k x − ω t + ϕ = j 2π, (3.4) where j is an integer, consist of a series of parallel planes, normal to the x-axis, which are equally spaced a distance λ = 2π/k apart, and propagate along the positive x-axis at the velocity v = ω/k. These conclusions follow because Eq. (3.4) can be rewritten in the form x = d, (3.5) where d = (j − ϕ/2π) λ + v t. Moreover, as is well known, (3.5) is the equation of a plane, normal to the x-axis, whose distance of closest approach to the origin is d. The previous equation can also be written in the coordinate-free form n · r = d, (3.6) where n = (1, 0, 0) is a unit vector directed along the positive x-axis, and r = (x, y, z) represents the vector displacement of a general point from the origin. Since there is nothing special about the x-direction, it follows that if n is reinterpreted as a unit vector pointing in an arbitrary direction then (3.6) can be reinterpreted as the general equation of a plane. As before, the plane is normal to n, and its distance of closest approach to the origin is d. See Fig. 3.1. This observation allows us to write the three-dimensional equivalent to the wavefunction (3.3) as ψ(x, y, z, t) = A cos(k · r − ω t + ϕ), (3.7) where the constant vector k = (kx, ky, kz) = k n is called the wavevector. The wave represented above is conventionally termed a three-dimensional plane wave.
It is three-dimensional because its wavefunction, ψ(x, y, z, t), depends on all three Cartesian coordinates. Moreover, it is a plane wave because the wave maxima are located at k · r − ω t + ϕ = j 2π, (3.8) or n · r = (j − ϕ/2π) λ + v t, (3.9) where λ = 2π/k, and v = ω/k. Note that the wavenumber, k, is the magnitude of the wavevector, k: i.e., k ≡ |k|. It follows, by comparison with Eq. (3.6), that the wave maxima consist of a series of parallel planes, normal to the wavevector, which are equally spaced a distance λ apart, and which propagate in the k-direction at the velocity v. See Fig. 3.2. Hence, the direction of the wavevector specifies the wave propagation direction, whereas its magnitude determines the wavenumber, k, and, thus, the wavelength, λ = 2π/k.

Figure 3.2: Wave maxima associated with a three-dimensional plane wave.

3.4 Representation of Waves via Complex Functions

In mathematics, the symbol i is conventionally used to represent the square-root of minus one: i.e., one of the solutions of i² = −1. Now, a real number, x (say), can take any value in a continuum of different values lying between −∞ and +∞. On the other hand, an imaginary number takes the general form i y, where y is a real number. It follows that the square of a real number is a positive real number, whereas the square of an imaginary number is a negative real number. In addition, a general complex number is written z = x + i y, (3.10) where x and y are real numbers. In fact, x is termed the real part of z, and y the imaginary part of z. This is written mathematically as x = Re(z) and y = Im(z). Finally, the complex conjugate of z is defined z* = x − i y. Now, just as we can visualize a real number as a point on an infinite straight-line, we can visualize a complex number as a point in an infinite plane. The coordinates of the point in question are the real and imaginary parts of the number: i.e., z ≡ (x, y). This idea is illustrated in Fig. 3.3.
The distance, r = √(x² + y²), of the representative point from the origin is termed the modulus of the corresponding complex number, z. This is written mathematically as |z| = √(x² + y²). Incidentally, it follows that z z* = x² + y² = |z|². The angle, θ = tan⁻¹(y/x), that the straight-line joining the representative point to the origin subtends with the real axis is termed the argument of the corresponding complex number, z. This is written mathematically as arg(z) = tan⁻¹(y/x). It follows from standard trigonometry that x = r cos θ, and y = r sin θ. Hence, z = r cos θ + i r sin θ.

Figure 3.3: Representation of a complex number as a point in a plane.

Complex numbers are often used to represent wavefunctions. All such representations depend ultimately on a fundamental mathematical identity, known as de Moivre's theorem, which takes the form e^{i φ} ≡ cos φ + i sin φ, (3.11) where φ is a real number. Incidentally, given that z = r cos θ + i r sin θ = r (cos θ + i sin θ), where z is a general complex number, r = |z| its modulus, and θ = arg(z) its argument, it follows from de Moivre's theorem that any complex number, z, can be written z = r e^{i θ}, (3.12) where r = |z| and θ = arg(z) are real numbers. Now, a one-dimensional wavefunction takes the general form ψ(x, t) = A cos(k x − ω t + ϕ), (3.13) where A is the wave amplitude, k the wavenumber, ω the angular frequency, and ϕ the phase angle. Consider the complex wavefunction ψ(x, t) = ψ0 e^{i (k x − ω t)}, (3.14) where ψ0 is a complex constant. We can write ψ0 = A e^{i ϕ}, (3.15) where A is the modulus, and ϕ the argument, of ψ0. Hence, we deduce that Re[ψ0 e^{i (k x − ω t)}] = Re[A e^{i ϕ} e^{i (k x − ω t)}] = Re[A e^{i (k x − ω t + ϕ)}] = A Re[e^{i (k x − ω t + ϕ)}]. (3.16) Thus, it follows from de Moivre's theorem, and Eq. (3.13), that Re[ψ0 e^{i (k x − ω t)}] = A cos(k x − ω t + ϕ) = ψ(x, t).
(3.17) In other words, a general one-dimensional real wavefunction, (3.13), can be represented as the real part of a complex wavefunction of the form (3.14). For ease of notation, the "take the real part" aspect of the above expression is usually omitted, and our general one-dimensional wavefunction is simply written ψ(x, t) = ψ0 e^{i (k x − ω t)}. (3.18) The main advantage of the complex representation, (3.18), over the more straightforward real representation, (3.13), is that the former enables us to combine the amplitude, A, and the phase angle, ϕ, of the wavefunction into a single complex amplitude, ψ0. Finally, the three-dimensional generalization of the above expression is ψ(r, t) = ψ0 e^{i (k · r − ω t)}, (3.19) where k is the wavevector.

3.5 Classical Light Waves

Consider a classical, monochromatic, linearly polarized, plane light wave, propagating through a vacuum in the x-direction. It is convenient to characterize a light wave (which is, of course, a type of electromagnetic wave) by specifying its associated electric field. Suppose that the wave is polarized such that this electric field oscillates in the y-direction. (According to standard electromagnetic theory, the magnetic field oscillates in the z-direction, in phase with the electric field, with an amplitude which is that of the electric field divided by the velocity of light in vacuum.) Now, the electric field can be conveniently represented in terms of a complex wavefunction: ψ(x, t) = ψ̄ e^{i (k x − ω t)}. (3.20) Here, i = √−1, k and ω are real parameters, and ψ̄ is a complex wave amplitude. By convention, the physical electric field is the real part of the above expression. Suppose that ψ̄ = |ψ̄| e^{i ϕ}, (3.21) where ϕ is real. It follows that the physical electric field takes the form Ey(x, t) = Re[ψ(x, t)] = |ψ̄| cos(k x − ω t + ϕ), (3.22) where |ψ̄| is the amplitude of the electric oscillation, k the wavenumber, ω the angular frequency, and ϕ the phase angle.
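The complex-number identities of Sect. 3.4, including the polar form z = r e^{iθ} that underlies the complex wavefunction representation, can be verified directly with Python's built-in complex type; the particular value of z below is an arbitrary example.

```python
import cmath

# An arbitrary illustrative complex number, z = x + i y with x = 3, y = 4.
z = 3.0 + 4.0j

r = abs(z)              # modulus |z| = sqrt(x**2 + y**2)
theta = cmath.phase(z)  # argument arg(z)

# Polar form z = r e^{i theta}, Eqs. (3.11)-(3.12)
z_polar = r * cmath.exp(1j * theta)

# z z* = |z|^2
zz_star = z * z.conjugate()
```

Evaluating these expressions confirms that z z* is real and equal to |z|², and that r e^{iθ} reproduces z exactly.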
In addition, λ = 2π/k is the wavelength, and ν = ω/2π the frequency (in hertz). According to standard electromagnetic theory, the frequency and wavelength of light waves are related according to the well-known expression c = ν λ, (3.23) or, equivalently, ω = k c, (3.24) where c = 3 × 10⁸ m/s. Equations (3.22) and (3.24) yield Ey(x, t) = |ψ̄| cos(k [x − (ω/k) t] + ϕ) = |ψ̄| cos(k [x − c t] + ϕ). (3.25) Note that Ey depends on x and t only via the combination x − c t. It follows that the wave maxima and minima satisfy x − c t = constant. (3.26) Thus, the wave maxima and minima propagate in the x-direction at the fixed velocity dx/dt = c. (3.27) An expression, such as (3.24), which determines the wave angular frequency as a function of the wavenumber, is generally termed a dispersion relation. As we have already seen, and as is apparent from Eq. (3.25), the maxima and minima of a plane wave propagate at the characteristic velocity vp = ω/k, (3.28) which is known as the phase velocity. Hence, the dispersion relation (3.24) is effectively saying that the phase velocity of a plane light wave propagating through a vacuum always takes the fixed value c, irrespective of its wavelength or frequency. Now, from standard electromagnetic theory, the energy density (i.e., the energy per unit volume) of a light wave is U = ε0 Ey², (3.29) where ε0 = 8.85 × 10⁻¹² F/m is the permittivity of free space. Hence, it follows from Eqs. (3.20) and (3.22) that U ∝ |ψ|². (3.30) Furthermore, a light wave possesses linear momentum, as well as energy. This momentum is directed along the wave's direction of propagation, and is of density G = U/c. (3.31)

3.6 Photoelectric Effect

The so-called photoelectric effect, by which a polished metal surface emits electrons when illuminated by visible and ultra-violet light, was discovered by Heinrich Hertz in 1887. The following facts regarding this effect can be established via careful observation.
First, a given surface only emits electrons when the frequency of the light with which it is illuminated exceeds a certain threshold value, which is a property of the metal. Second, the current of photoelectrons, when it exists, is proportional to the intensity of the light falling on the surface. Third, the energy of the photoelectrons is independent of the light intensity, but varies linearly with the light frequency. These facts are inexplicable within the framework of classical physics. In 1905, Albert Einstein proposed a radical new theory of light in order to account for the photoelectric effect. According to this theory, light of fixed frequency ν consists of a collection of indivisible discrete packages, called quanta,¹ whose energy is E = h ν. (3.32) Here, h = 6.6261 × 10⁻³⁴ J s is a new constant of nature, known as Planck's constant. Incidentally, h is called Planck's constant, rather than Einstein's constant, because Max Planck first introduced the concept of the quantization of light, in 1900, whilst trying to account for the electromagnetic spectrum of a black body (i.e., a perfect emitter and absorber of electromagnetic radiation).

¹Plural of quantum: Latin neuter of quantus: how much?

Figure 3.4: Variation of the kinetic energy K of photoelectrons with the wave frequency ν.

Suppose that the electrons at the surface of a metal lie in a potential well of depth W. In other words, the electrons have to acquire an energy W in order to be emitted from the surface. Here, W is generally called the work function of the surface, and is a property of the metal. Suppose that an electron absorbs a single quantum of light. Its energy therefore increases by h ν. If h ν is greater than W then the electron is emitted from the surface with residual kinetic energy K = h ν − W. (3.33) Otherwise, the electron remains trapped in the potential well, and is not emitted.
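Equation (3.33) is simple enough to sketch in code. The work function value below is an assumed illustrative number (of the right order for an alkali metal), not one taken from the text.

```python
h = 6.6261e-34   # Planck's constant (J s)
e = 1.602e-19    # magnitude of the electron charge (C), used to convert eV to J

W_eV = 2.3       # assumed illustrative work function, in eV

def kinetic_energy_eV(nu):
    """Photoelectron kinetic energy, Eq. (3.33), in eV.

    Returns None below threshold, where no electron is emitted."""
    K = h * nu / e - W_eV
    return K if K > 0 else None

nu_threshold = W_eV * e / h   # threshold frequency: h nu = W
```

Plotting kinetic_energy_eV against ν reproduces Fig. 3.4: a straight line of slope h whose intercept with the ν axis is W/h, with no emission below the threshold frequency.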
Here, we are assuming that the probability of an electron simultaneously absorbing two or more light quanta is negligibly small compared to the probability of it absorbing a single light quantum (as is, indeed, the case for low intensity illumination). Incidentally, we can calculate Planck's constant, and the work function of the metal, by simply plotting the kinetic energy of the emitted photoelectrons as a function of the wave frequency, as shown in Fig. 3.4. This plot is a straight-line whose slope is h, and whose intercept with the ν axis is W/h. Finally, the number of emitted electrons increases with the intensity of the light because the more intense the light the larger the flux of light quanta onto the surface. Thus, Einstein's quantum theory is capable of accounting for all three of the previously mentioned observational facts regarding the photoelectric effect.

3.7 Quantum Theory of Light

According to Einstein's quantum theory of light, a monochromatic light wave of angular frequency ω, propagating through a vacuum, can be thought of as a stream of particles, called photons, of energy E = ℏ ω, (3.34) where ℏ = h/2π = 1.0546 × 10⁻³⁴ J s. Since classical light waves propagate at the fixed velocity c, it stands to reason that photons must also move at this velocity. Now, according to Einstein's special theory of relativity, only massless particles can move at the speed of light in vacuum. Hence, photons must be massless. Special relativity also gives the following relationship between the energy E and the momentum p of a massless particle: p = E/c. (3.35) Note that the above relation is consistent with Eq. (3.31), since if light is made up of a stream of photons, for which E/p = c, then the momentum density of light must be the energy density divided by c. It follows from the previous two equations that photons carry momentum p = ℏ k (3.36) along their direction of motion, since ω/c = k for a light wave [see Eq. (3.24)].
3.8 Classical Interference of Light Waves

Let us now consider the classical interference of light waves. Figure 3.5 shows a standard double-slit interference experiment in which monochromatic plane light waves are normally incident on two narrow parallel slits which are a distance d apart. The light from the two slits is projected onto a screen a distance D behind them, where D ≫ d.

Figure 3.5: Classical double-slit interference of light.

Consider some point on the screen which is located a distance y from the centre-line, as shown in the figure. Light from the first slit travels a distance x1 to get to this point, whereas light from the second slit travels a slightly different distance x2. It is easily demonstrated that ∆x = x2 − x1 ≃ (d/D) y, (3.37) provided d ≪ D. It follows from Eq. (3.20), and the well-known fact that light waves are superposable, that the wavefunction at the point in question can be written ψ(y, t) ∝ ψ1(t) e^{i k x1} + ψ2(t) e^{i k x2}, (3.38) where ψ1 and ψ2 are the wavefunctions at the first and second slits, respectively. However, ψ1 = ψ2, (3.39) since the two slits are assumed to be illuminated by in-phase light waves of equal amplitude. (Note that we are ignoring the difference in amplitude of the waves from the two slits at the screen, due to the slight difference between x1 and x2, compared to the difference in their phases. This is reasonable provided D ≫ λ.) Now, the intensity (i.e., the energy flux) of the light at some point on the projection screen is approximately equal to the energy density of the light at this point times the velocity of light (provided that y ≪ D). Hence, it follows from Eq. (3.30) that the light intensity on the screen a distance y from the center-line is I(y) ∝ |ψ(y, t)|². (3.40) Using Eqs. (3.37)–(3.40), we obtain I(y) ∝ cos²(k ∆x/2) ≃ cos²(k d y / 2 D).
(3.41) Figure 3.6 shows the characteristic interference pattern corresponding to the above expression. This pattern consists of equally spaced light and dark bands of characteristic width ∆y = D λ / d. (3.42)

Figure 3.6: Classical double-slit interference pattern.

3.9 Quantum Interference of Light

Let us now consider double-slit light interference from a quantum mechanical point of view. According to quantum theory, light waves consist of a stream of massless photons moving at the speed of light. Hence, we expect the two slits in Fig. 3.5 to be spraying photons in all directions at the same rate. Suppose, however, that we reduce the intensity of the light source illuminating the slits until the source is so weak that only a single photon is present between the slits and the projection screen at any given time. Let us also replace the projection screen by a photographic film which records the position where it is struck by each photon. So, if we wait a sufficiently long time that a great many photons have passed through the slits and struck the photographic film, and then develop the film, do we see an interference pattern which looks like that shown in Fig. 3.6? The answer to this question, as determined by experiment, is that we see exactly the same interference pattern. Now, according to the above discussion, the interference pattern is built up one photon at a time: i.e., the pattern is not due to the interaction of different photons. Moreover, the point at which a given photon strikes the film is not influenced by the points at which previous photons struck the film, given that there is only one photon in the apparatus at any given time.
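The classical pattern (3.41) and its band spacing (3.42) can be checked numerically; the slit spacing, screen distance, and wavelength below are assumed illustrative values for a typical bench-top experiment.

```python
import math

# Assumed illustrative experimental parameters.
lam = 500e-9   # wavelength: green light (m)
d = 0.1e-3     # slit spacing (m)
D = 1.0        # slit-to-screen distance (m)

k = 2 * math.pi / lam

def intensity(y):
    """Normalised intensity pattern on the screen, Eq. (3.41)."""
    return math.cos(k * d * y / (2 * D)) ** 2

dy = D * lam / d   # predicted band spacing, Eq. (3.42): 5 mm here
```

Successive maxima of intensity(y) indeed sit a distance ∆y = Dλ/d apart, with a dark band half-way between them.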
Hence, the only way in which the classical interference pattern can be reconstructed, after a great many photons have passed through the apparatus, is if each photon has a greater probability of striking the film at points where the classical interference pattern is bright, and a lesser probability of striking the film at points where the interference pattern is dark. Suppose, then, that we allow N photons to pass through our apparatus, and then count the number of photons which strike the recording film between y and y + ∆y, where ∆y is a relatively small division. Let us call this number n(y). Now, the number of photons which strike a region of the film in a given time interval is equivalent to the intensity of the light illuminating that region of the film multiplied by the area of the region, since each photon carries a fixed amount of energy. Hence, in order to reconcile the classical and quantum viewpoints, we need Py(y) ≡ lim_{N→∞} [n(y)/N] ∝ I(y) ∆y, (3.43) where I(y) is given in Eq. (3.41). Here, Py(y) is the probability that a given photon strikes the film between y and y + ∆y. This probability is simply a number between 0 and 1. A probability of 0 means that there is no chance of a photon striking the film between y and y + ∆y, whereas a probability of 1 means that every photon is certain to strike the film in this interval. Note that Py ∝ ∆y. In other words, the probability of a photon striking a region of the film of width ∆y is directly proportional to this width. Actually, this is only true as long as ∆y is relatively small. It is convenient to define a quantity known as the probability density, P(y), which is such that the probability of a photon striking a region of the film of infinitesimal width dy is Py(y) = P(y) dy. Now, Eq. (3.43) yields Py(y) ∝ I(y) dy, which gives P(y) ∝ I(y). However, according to Eq. (3.40), I(y) ∝ |ψ(y, t)|². Thus, we obtain P(y) ∝ |ψ(y, t)|².
(3.44) In other words, the probability density of a photon striking a given point on the film is proportional to the modulus squared of the wavefunction at that point. Another way of saying this is that the probability of a measurement of the photon's distance from the centerline, at the location of the film, yielding a result between y and y + dy is proportional to |ψ(y, t)|² dy. Note that, in the quantum mechanical picture, we can only predict the probability that a given photon strikes a given point on the film. If photons behaved classically then we could, in principle, solve their equations of motion and predict exactly where each photon was going to strike the film, given its initial position and velocity. This loss of determinacy in quantum mechanics is a direct consequence of wave-particle duality. In other words, we can only reconcile the wave-like and particle-like properties of light in a statistical sense. It is impossible to reconcile them on the individual particle level. In principle, each photon which passes through our apparatus is equally likely to pass through one of the two slits. So, can we determine which slit a given photon passed through? Well, suppose that our original interference experiment involves sending N ≫ 1 photons through our apparatus. We know that we get an interference pattern in this experiment. Suppose that we perform a modified interference experiment in which we close off one slit, send N/2 photons through the apparatus, and then open the slit and close off the other slit, and send N/2 photons through the apparatus. In this second experiment, which is virtually identical to the first on the individual photon level, we know exactly which slit each photon passed through. However, the wave theory of light (which we expect to agree with the quantum theory in the limit N ≫ 1) tells us that our modified interference experiment will not result in the formation of an interference pattern.
After all, according to wave theory, it is impossible to obtain a two-slit interference pattern from a single slit. Hence, we conclude that any attempt to measure which slit each photon in our two-slit interference experiment passes through results in the destruction of the interference pattern. It follows that, in the quantum mechanical version of the two-slit interference experiment, we must think of each photon as essentially passing through both slits simultaneously.

3.10 Classical Particles

In this course, we are going to concentrate, almost exclusively, on the behaviour of non-relativistic particles of non-zero mass (e.g., electrons). In the absence of external forces, such particles, of mass m, energy E, and momentum p, move classically in a straight-line with velocity v = p/m, (3.45) and satisfy E = p²/2 m. (3.46)

3.11 Quantum Particles

Just as light waves sometimes exhibit particle-like properties, it turns out that massive particles sometimes exhibit wave-like properties. For instance, it is possible to obtain a double-slit interference pattern from a stream of mono-energetic electrons passing through two closely spaced narrow slits. Now, the effective wavelength of the electrons can be determined by measuring the width of the light and dark bands in the interference pattern [see Eq. (3.42)]. It is found that λ = h/p. (3.47) The same relation is found for other types of particles. The above wavelength is called the de Broglie wavelength, after Louis de Broglie who first suggested that particles should have wave-like properties in 1923. Note that the de Broglie wavelength is generally pretty small. For instance, that of an electron is λe = 1.2 × 10⁻⁹ [E(eV)]⁻¹/² m, (3.48) where the electron energy is conveniently measured in units of electron-volts (eV). (An electron accelerated from rest through a potential difference of 1000 V acquires an energy of 1000 eV, and so on.)
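Formula (3.48) follows from combining λ = h/p with the non-relativistic relation p = √(2 m E); a quick numerical check, using standard values of the constants:

```python
import math

h = 6.6261e-34    # Planck's constant (J s)
e = 1.602e-19     # J per eV
m_e = 9.109e-31   # electron mass (kg)
m_p = 1.673e-27   # proton mass (kg)

def de_broglie(E_eV, m):
    """de Broglie wavelength, Eq. (3.47): lambda = h/p, with p = sqrt(2 m E)
    from the non-relativistic energy-momentum relation (3.46)."""
    p = math.sqrt(2.0 * m * E_eV * e)
    return h / p
```

For a 1 eV electron this gives roughly 1.2 × 10⁻⁹ m, reproducing the coefficient in Eq. (3.48), and the λ ∝ E⁻¹/² scaling is manifest.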
The de Broglie wavelength of a proton is λp = 2.9 × 10⁻¹¹ [E(eV)]⁻¹/² m. (3.49) Given the smallness of the de Broglie wavelengths of common particles, it is actually quite difficult to do particle interference experiments. In general, in order to perform an effective interference experiment, the spacing of the slits must not be too much greater than the wavelength of the wave. Hence, particle interference experiments require either very low energy particles (since λ ∝ E⁻¹/²), or very closely spaced slits. Usually the "slits" consist of crystals, which act a bit like diffraction gratings with a characteristic spacing of order the inter-atomic spacing (which is generally about 10⁻⁹ m). Equation (3.47) can be rearranged to give p = ℏ k, (3.50) which is exactly the same as the relation between momentum and wavenumber that we obtained earlier for photons [see Eq. (3.36)]. For the case of a particle moving in three dimensions, the above relation generalizes to give p = ℏ k, (3.51) where p is the particle's vector momentum, and k its wavevector. It follows that the momentum of a quantum particle, and, hence, its velocity, is always parallel to its wavevector. Since the relation (3.36) between momentum and wavenumber applies to both photons and massive particles, it seems plausible that the closely related relation (3.34) between energy and wave angular frequency should also apply to both photons and particles. If this is the case, and we can write E = ℏ ω (3.52) for particle waves, then Eqs. (3.46) and (3.50) yield the following dispersion relation for such waves: ω = ℏ k²/2 m. (3.53) Now, we saw earlier that a plane wave propagates at the so-called phase velocity, vp = ω/k. (3.54) However, according to the above dispersion relation, a particle plane wave propagates at vp = p/2 m. (3.55) Note, from Eq. (3.45), that this is only half of the classical particle velocity. Does this imply that the dispersion relation (3.53) is incorrect?
Let us investigate further.

3.12 Wave Packets

The above discussion suggests that the wavefunction of a massive particle of momentum p and energy E, moving in the positive x-direction, can be written ψ(x, t) = ψ̄ e^{i (k x − ω t)}, (3.56) where k = p/ℏ > 0 and ω = E/ℏ > 0. Here, ω and k are linked via the dispersion relation (3.53). Expression (3.56) represents a plane wave whose maxima and minima propagate in the positive x-direction with the phase velocity vp = ω/k. As we have seen, this phase velocity is only half of the classical velocity of a massive particle. From before, the most reasonable physical interpretation of the wavefunction is that |ψ(x, t)|² is proportional to the probability density of finding the particle at position x at time t. However, the modulus squared of the wavefunction (3.56) is |ψ̄|², which depends on neither x nor t. In other words, this wavefunction represents a particle which is equally likely to be found anywhere on the x-axis at all times. Hence, the fact that the maxima and minima of the wavefunction propagate at a phase velocity which does not correspond to the classical particle velocity does not have any real physical consequences. So, how can we write the wavefunction of a particle which is localized in x: i.e., a particle which is more likely to be found at some positions on the x-axis than at others? It turns out that we can achieve this goal by forming a linear combination of plane waves of different wavenumbers: i.e., ψ(x, t) = ∫_{−∞}^{∞} ψ̄(k) e^{i (k x − ω t)} dk. (3.57) Here, ψ̄(k) represents the complex amplitude of plane waves of wavenumber k in this combination. In writing the above expression, we are relying on the assumption that particle waves are superposable: i.e., it is possible to add two valid wave solutions to form a third valid wave solution. The ultimate justification for this assumption is that particle waves satisfy a differential wave equation which is linear in ψ.
As we shall see, in Sect. 3.15, this is indeed the case. Incidentally, a plane wave which varies as exp[i (k x − ω t)] and has a negative k (but positive ω) propagates in the negative x-direction at the phase velocity ω/|k|. Hence, the superposition (3.57) includes both forward and backward propagating waves. Now, there is a useful mathematical theorem, known as Fourier's theorem, which states that if f(x) = (1/√2π) ∫_{−∞}^{∞} f̄(k) e^{i k x} dk, (3.58) then f̄(k) = (1/√2π) ∫_{−∞}^{∞} f(x) e^{−i k x} dx. (3.59) Here, f̄(k) is known as the Fourier transform of the function f(x). We can use Fourier's theorem to find the k-space function ψ̄(k) which generates any given x-space wavefunction ψ(x) at a given time. For instance, suppose that at t = 0 the wavefunction of our particle takes the form ψ(x, 0) ∝ exp[i k0 x − (x − x0)²/(4 (∆x)²)]. (3.60) Thus, the initial probability density of the particle is written |ψ(x, 0)|² ∝ exp[−(x − x0)²/(2 (∆x)²)]. (3.61) This particular probability distribution is called a Gaussian distribution, and is plotted in Fig. 3.7. It can be seen that a measurement of the particle's position is most likely to yield the value x0, and very unlikely to yield a value which differs from x0 by more than 3 ∆x. Thus, (3.60) is the wavefunction of a particle which is initially localized around x = x0 in some region whose width is of order ∆x. This type of wavefunction is known as a wave packet.

Figure 3.7: A Gaussian probability distribution in x-space.

Now, according to Eq. (3.57), ψ(x, 0) = ∫_{−∞}^{∞} ψ̄(k) e^{i k x} dk. (3.62) Hence, we can employ Fourier's theorem to invert this expression to give ψ̄(k) ∝ ∫_{−∞}^{∞} ψ(x, 0) e^{−i k x} dx. (3.63) Making use of Eq. (3.60), we obtain ψ̄(k) ∝ e^{−i (k − k0) x0} ∫_{−∞}^{∞} exp[−i (k − k0) (x − x0) − (x − x0)²/(4 (∆x)²)] dx. (3.64) Changing the variable of integration to y = (x − x0)/(2 ∆x), this reduces to ψ̄(k) ∝ e^{−i k x0} ∫_{−∞}^{∞} exp[−i β y − y²] dy, (3.65) where β = 2 (k − k0) ∆x.
The above equation can be rearranged to give ψ̄(k) ∝ e^{−i k x0 − β²/4} ∫_{−∞}^{∞} e^{−(y − y0)²} dy, (3.66) where y0 = −i β/2. The integral now just reduces to a number, as can easily be seen by making the change of variable z = y − y0. Hence, we obtain ψ̄(k) ∝ exp[−i k x0 − (k − k0)²/(4 (∆k)²)], (3.67) where ∆k = 1/(2 ∆x). (3.68) Now, if |ψ(x)|² is proportional to the probability density of a measurement of the particle's position yielding the value x then it stands to reason that |ψ̄(k)|² is proportional to the probability density of a measurement of the particle's wavenumber yielding the value k. (Recall that p = ℏ k, so a measurement of the particle's wavenumber, k, is equivalent to a measurement of the particle's momentum, p.) According to Eq. (3.67), |ψ̄(k)|² ∝ exp[−(k − k0)²/(2 (∆k)²)]. (3.69) Note that this probability distribution is a Gaussian in k-space. See Eq. (3.61) and Fig. 3.7. Hence, a measurement of k is most likely to yield the value k0, and very unlikely to yield a value which differs from k0 by more than 3 ∆k. Incidentally, a Gaussian is the only mathematical function in x-space which has the same form as its Fourier transform in k-space. We have just seen that a Gaussian probability distribution of characteristic width ∆x in x-space [see Eq. (3.61)] transforms to a Gaussian probability distribution of characteristic width ∆k in k-space [see Eq. (3.69)], where ∆x ∆k = 1/2. (3.70) This illustrates an important property of wave packets. Namely, if we wish to construct a packet which is very localized in x-space (i.e., if ∆x is small) then we need to combine plane waves with a very wide range of different k-values (i.e., ∆k will be large). Conversely, if we only combine plane waves whose wavenumbers differ by a small amount (i.e., if ∆k is small) then the resulting wave packet will be very extended in x-space (i.e., ∆x will be large).

3.13 Evolution of Wave Packets

We have seen, in Eq.
(3.60), how to write the wavefunction of a particle which is initially localized in x-space. But, how does this wavefunction evolve in time? Well, according to Eq. (3.57), we have ψ(x, t) = ∫_{−∞}^{∞} ψ̄(k) e^{i φ(k)} dk, (3.71) where φ(k) = k x − ω(k) t. (3.72) The function ψ̄(k) is obtained by Fourier transforming the wavefunction at t = 0. See Eqs. (3.63) and (3.67). Now, according to Eq. (3.69), |ψ̄(k)| is strongly peaked around k = k0. Thus, it is a reasonable approximation to Taylor expand φ(k) about k0. Keeping terms up to second-order in k − k0, we obtain ψ(x, t) ∝ ∫_{−∞}^{∞} ψ̄(k) exp[i (φ0 + φ0′ (k − k0) + (1/2) φ0″ (k − k0)²)] dk, (3.73) where φ0 = φ(k0) = k0 x − ω0 t, (3.74) φ0′ = dφ(k0)/dk = x − vg t, (3.75) φ0″ = d²φ(k0)/dk² = −α t, (3.76) with ω0 = ω(k0), (3.77) vg = dω(k0)/dk, (3.78) α = d²ω(k0)/dk². (3.79) Substituting from Eq. (3.67), rearranging, and then changing the variable of integration to y = (k − k0)/(2 ∆k), we get ψ(x, t) ∝ e^{i (k0 x − ω0 t)} ∫_{−∞}^{∞} e^{i β1 y − (1 + i β2) y²} dy, (3.80) where β1 = 2 ∆k (x − x0 − vg t), (3.81) β2 = 2 α (∆k)² t. (3.82) Incidentally, ∆k = 1/(2 ∆x), where ∆x is the initial width of the wave packet. The above expression can be rearranged to give ψ(x, t) ∝ e^{i (k0 x − ω0 t) − (1 + i β2) β²/4} ∫_{−∞}^{∞} e^{−(1 + i β2) (y − y0)²} dy, (3.83) where y0 = i β/2 and β = β1/(1 + i β2). Again changing the variable of integration to z = (1 + i β2)^{1/2} (y − y0), we get ψ(x, t) ∝ (1 + i β2)^{−1/2} e^{i (k0 x − ω0 t) − (1 + i β2) β²/4} ∫_{−∞}^{∞} e^{−z²} dz. (3.84) The integral now just reduces to a number. Hence, we obtain ψ(x, t) ∝ exp[i (k0 x − ω0 t) − (x − x0 − vg t)² {1 − i 2 α (∆k)² t}/(4 σ²)] / [1 + i 2 α (∆k)² t]^{1/2}, (3.85) where σ²(t) = (∆x)² + α² t²/(4 (∆x)²). (3.86) Note that the above wavefunction is identical to our original wavefunction (3.60) at t = 0. This justifies the approximation which we made earlier by Taylor expanding the phase factor φ(k) about k = k0. According to Eq.
(3.85), the probability density of our particle as a function of time is written

|ψ(x, t)|² ∝ σ⁻¹(t) exp[ −(x − x₀ − v_g t)² / (2 σ²(t)) ]. (3.87)

Hence, the probability distribution is a Gaussian, of characteristic width σ, which peaks at x = x₀ + v_g t. Now, the most likely position of our particle coincides with the peak of the distribution function. Thus, the particle's most likely position is given by

x = x₀ + v_g t. (3.88)

It can be seen that the particle effectively moves at the uniform velocity

v_g = dω/dk, (3.89)

which is known as the group velocity. In other words, a plane wave travels at the phase velocity, v_p = ω/k, whereas a wave packet travels at the group velocity, v_g = dω/dk. Now, it follows from the dispersion relation (3.53) for particle waves that

v_g = p/m. (3.90)

However, it can be seen from Eq. (3.45) that this is identical to the classical particle velocity. Hence, the dispersion relation (3.53) turns out to be consistent with classical physics, after all, as soon as we realize that individual particles must be identified with wave packets rather than plane waves. In fact, a plane wave is usually interpreted as a continuous stream of particles propagating in the same direction as the wave.

According to Eq. (3.86), the width of our wave packet grows as time progresses. Indeed, it follows from Eqs. (3.53) and (3.79) that the characteristic time for a wave packet of original width ∆x to double in spatial extent is

t₂ ∼ m (∆x)²/ħ. (3.91)

For instance, if an electron is originally localized in a region of atomic scale (i.e., ∆x ∼ 10⁻¹⁰ m) then the doubling time is only about 10⁻¹⁶ s. Evidently, particle wave packets (for freely moving particles) spread very rapidly.

Note, from the previous analysis, that the rate of spreading of a wave packet is ultimately governed by the second derivative of ω(k) with respect to k. See Eqs. (3.79) and (3.86).
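The spreading law (3.86) and the group-velocity result (3.90) are easy to verify numerically. The following sketch is my own illustration, not from the text; it assumes NumPy is available and uses units in which ħ = m = 1, so that ω = k²/2 and α = d²ω/dk² = 1. It evolves a free Gaussian packet exactly in k-space and compares the measured mean and variance of |ψ|² with x₀ + v_g t and σ²(t):

```python
import numpy as np

# Grid (sizes are arbitrary choices, large enough that the packet never
# reaches the periodic boundary).
N, L = 2048, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
h = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=h)

x0, k0, width = -20.0, 2.0, 2.0   # initial centre, mean wavenumber, width dx
psi = np.exp(-(x - x0)**2 / (4 * width**2) + 1j * k0 * x)   # Eq. (3.60)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * h)

t = 10.0
# Exact free evolution: multiply each plane-wave component by exp(-i w(k) t),
# with the dispersion relation w = k^2 / 2 [Eq. (3.53), hbar = m = 1].
psi_t = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k**2 / 2 * t))

prob = np.abs(psi_t)**2
mean = np.sum(x * prob) * h
var = np.sum((x - mean)**2 * prob) * h

v_g = k0                                   # group velocity, Eq. (3.90)
sigma_sq = width**2 + t**2 / (4 * width**2)  # Eq. (3.86) with alpha = 1
print(mean, var)
```

The peak should sit at x₀ + v_g t = 0 and the variance should match σ²(t) = 4 + 100/16 = 10.25 for these parameters.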
This is why a functional relationship between ω and k is generally known as a dispersion relation: i.e., because it governs how wave packets disperse as time progresses. However, for the special case where ω is a linear function of k, the second derivative of ω with respect to k is zero, and, hence, there is no dispersion of wave packets: i.e., wave packets propagate without changing shape. Now, the dispersion relation (3.24) for light waves is linear in k. It follows that light pulses propagate through a vacuum with-out spreading. Another property of linear dispersion relations is that the phase velocity, vp = ω/k, and the group velocity, vg = dω/dk, are identical. Thus, both plane light waves and light pulses propagate through a vacuum at the characteristic speed c = 3 × 108 m/s. Of course, the dispersion relation (3.53) for particle waves is not linear in k. Hence, par-ticle plane waves and particle wave packets propagate at different velocities, and particle wave packets also gradually disperse as time progresses. 3.14 Heisenberg’s Uncertainty Principle According to the analysis contained in the previous two sections, a particle wave packet which is initially localized in x-space with characteristic width ∆x is also localized in k-space with characteristic width ∆k = 1/(2 ∆x). However, as time progresses, the width of the wave packet in x-space increases, whilst that of the wave packet in k-space stays the same. [After all, our previous analysis obtained ψ(x, t) from Eq. (3.71), but assumed that ¯ ψ(k) was given by Eq. (3.67) at all times.] Hence, in general, we can say that ∆x ∆k > ∼ 1 2. (3.92) Furthermore, we can think of ∆x and ∆k as characterizing our uncertainty regarding the values of the particle’s position and wavenumber, respectively. Now, a measurement of a particle’s wavenumber, k, is equivalent to a measurement of its momentum, p, since p = ¯ h k. Hence, an uncertainty in k of order ∆k translates to an uncertainty in p of order ∆p = ¯ h ∆k. 
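The reciprocal-width property ∆x ∆k = 1/2 of Eq. (3.70) can be checked directly with a discrete Fourier transform. A minimal sketch (my own, assuming NumPy; the grid size and packet width are arbitrary choices):

```python
import numpy as np

# A Gaussian of rms width dx in x-space should transform to a Gaussian of
# rms width dk = 1/(2 dx) in k-space, so the product of measured widths is 1/2.
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
h = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=h)

dx = 1.5
psi = np.exp(-x**2 / (4 * dx**2))   # Eq. (3.60) with x0 = 0, k0 = 0
psi_k = np.fft.fft(psi)

def width(grid, density, step):
    """Root-mean-square width of a (possibly unnormalised) density."""
    norm = np.sum(density) * step
    mean = np.sum(grid * density) * step / norm
    return np.sqrt(np.sum((grid - mean)**2 * density) * step / norm)

dx_meas = width(x, np.abs(psi)**2, h)
dk_meas = width(k, np.abs(psi_k)**2, 2 * np.pi / L)
print(dx_meas * dk_meas)   # ≈ 0.5
```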
It follows from the above inequality that ∆x ∆p > ∼ ¯ h 2 . (3.93) Wave-Particle Duality 33 incoming photon lens electron f α θ D y x scattered photon Figure 3.8: Heisenberg’s microscope. This is the famous Heisenberg uncertainty principle, first proposed by Werner Heisenberg in 1927. According to this principle, it is impossible to simultaneously measure the position and momentum of a particle (exactly). Indeed, a good knowledge of the particle’s position implies a poor knowledge of its momentum, and vice versa. Note that the uncertainty principle is a direct consequence of representing particles as waves. It can be seen from Eqs. (3.53), (3.79), and (3.86) that at large t a particle wavefunc-tion of original width ∆x (at t = 0) spreads out such that its spatial extent becomes σ ∼ ¯ h t m ∆x. (3.94) It is easily demonstrated that this spreading is a consequence of the uncertainty principle. Since the initial uncertainty in the particle’s position is ∆x, it follows that the uncertainty in its momentum is of order ¯ h/∆x. This translates to an uncertainty in velocity of ∆v = ¯ h/(m ∆x). Thus, if we imagine that parts of the wavefunction propagate at v0 + ∆v/2, and others at v0 −∆v/2, where v0 is the mean propagation velocity, then the wavefunction will spread as time progresses. Indeed, at large t we expect the width of the wavefunction to be σ ∼∆v t ∼ ¯ h t m ∆x, (3.95) which is identical to Eq. (3.94). Evidently, the spreading of a particle wavefunction must be interpreted as an increase in our uncertainty regarding the particle’s position, rather than an increase in the spatial extent of the particle itself. Figure 3.8 illustrates a famous thought experiment known as Heisenberg’s microscope. Suppose that we try to image an electron using a simple optical system in which the ob-jective lens is of diameter D and focal-length f. (In practice, this would only be possible 34 QUANTUM MECHANICS using extremely short wavelength light.) 
It is a well-known result in optics that such a system has a minimum angular resolving power of λ/D, where λ is the wavelength of the light illuminating the electron. If the electron is placed at the focus of the lens, which is where the minimum resolving power is achieved, then this translates to an uncertainty in the electron's transverse position of

∆x ≃ f λ/D. (3.96)

However,

tan α = D/(2 f), (3.97)

where α is the half-angle subtended by the lens at the electron. Assuming that α is small, we can write

α ≃ D/(2 f), (3.98)

so

∆x ≃ λ/(2 α). (3.99)

It follows that we can reduce the uncertainty in the electron's position by minimizing the ratio λ/α: i.e., by using short wavelength radiation, and a wide-angle lens.

Let us now examine Heisenberg's microscope from a quantum mechanical point of view. According to quantum mechanics, the electron is imaged when it scatters an incoming photon towards the objective lens. Let the wavevector of the incoming photon have the (x, y) components (k, 0). See Fig. 3.8. If the scattered photon subtends an angle θ with the center-line of the optical system, as shown in the figure, then its wavevector is written (k sin θ, k cos θ). Here, we are ignoring any wavelength shift of the photon on scattering—i.e., the magnitude of the k-vector is assumed to be the same before and after scattering. Thus, the change in the x-component of the photon's wavevector is ∆k_x = k (sin θ − 1). This translates to a change in the photon's x-component of momentum of ∆p_x = ħ k (sin θ − 1). By momentum conservation, the electron's x-momentum will change by an equal and opposite amount. However, θ can range all the way from −α to +α, and the scattered photon will still be collected by the imaging system. It follows that the uncertainty in the electron's momentum is

∆p ≃ 2 ħ k sin α ≃ 4π ħ α/λ. (3.100)

Note that in order to reduce the uncertainty in the momentum we need to maximize the ratio λ/α.
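Multiplying estimate (3.99) by estimate (3.100) gives ∆x ∆p = 2π ħ = h for any choice of λ and α, since the wavelength and lens angle cancel. A quick stdlib-only check (the sample wavelengths and half-angles below are my own arbitrary choices):

```python
import math

# Position blur dx ~ lambda/(2 alpha) [Eq. (3.99)] times momentum kick
# dp ~ 4 pi hbar alpha / lambda [Eq. (3.100)] equals 2 pi hbar = h,
# independently of lambda and alpha.
hbar = 1.054571817e-34   # J s (CODATA value)
h = 2 * math.pi * hbar

for lam, alpha in [(1e-12, 0.1), (5e-11, 0.5), (1e-10, 0.05)]:
    dx = lam / (2 * alpha)
    dp = 4 * math.pi * hbar * alpha / lam
    print(dx * dp / h)   # 1.0 for every choice
```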
This is exactly the opposite of what we need to do to reduce the uncertainty in the position. Multiplying the previous two equations, we obtain ∆x ∆p ∼h, (3.101) which is essentially the uncertainty principle. According to Heisenberg’s microscope, the uncertainty principle follows from two facts. First, it is impossible to measure any property of a microscopic dynamical system without Wave-Particle Duality 35 disturbing the system somewhat. Second, particle and light energy and momentum are quantized. Hence, there is a limit to how small we can make the aforementioned dis-turbance. Thus, there is an irreducible uncertainty in certain measurements which is a consequence of the act of measurement itself. 3.15 Schr¨ odinger’s Equation We have seen that the wavefunction of a free particle of mass m satisfies ψ(x, t) = Z ∞ −∞ ¯ ψ(k) e i (k x−ω t) dk, (3.102) where ¯ ψ(k) is determined by ψ(x, 0), and ω(k) = ¯ h k2 2 m . (3.103) Now, it follows from Eq. (3.102) that ∂ψ ∂x = Z ∞ −∞ (i k) ¯ ψ(k) e i (k x−ω t) dk, (3.104) and ∂2ψ ∂x2 = Z ∞ −∞ (−k2) ¯ ψ(k) e i (k x−ω t) dk, (3.105) whereas ∂ψ ∂t = Z ∞ −∞ (−i ω) ¯ ψ(k) e i (k x−ω t) dk. (3.106) Thus, i ∂ψ ∂t + ¯ h 2 m ∂2ψ ∂x2 = Z ∞ −∞ ω −¯ h k2 2 m ! ¯ ψ(k) e i (k x−ω t) dk = 0, (3.107) where use has been made of the dispersion relation (3.103). Multiplying through by ¯ h, we obtain i ¯ h ∂ψ ∂t = −¯ h2 2 m ∂2ψ ∂x2 . (3.108) This expression is known as Schr¨ odinger’s equation, since it was first introduced by Erwin Schr¨ odinger in 1925. Schr¨ odinger’s equation is a linear, second-order, partial differential equation which governs the time evolution of a particle wavefunction, and is generally easier to solve than the integral equation (3.102). Of course, Eq. (3.108) is only applicable to freely moving particles. Fortunately, it is fairly easy to guess the generalization of this equation for particles moving in some po-tential V(x). It is plausible, from Eq. 
(3.104), that we can identify k with the differential operator −i ∂/∂x. Hence, the differential operator on the right-hand side of Eq. (3.108) is 36 QUANTUM MECHANICS equivalent to ¯ h2 k2/(2 m). But, p = ¯ h k. Thus, the operator is also equivalent to p2/(2 m), which is just the energy of a freely moving particle. However, in the presence of a potential V(x), the particle’s energy is written p2/(2 m) + V. Thus, it seems reasonable to make the substitution −¯ h2 2 m ∂2 ∂x2 →−¯ h2 2 m ∂2 ∂x2 + V(x). (3.109) This leads to the general form of Schr¨ odinger’s equation: i ¯ h ∂ψ ∂t = −¯ h2 2 m ∂2ψ ∂x2 + V(x) ψ. (3.110) 3.16 Collapse of the Wave Function Consider an extended wavefunction ψ(x, t). According to our usual interpretation, |ψ(x, t)| 2 is proportional to the probability density of a measurement of the particle’s position yield-ing the value x at time t. If the wavefunction is extended then there is a wide range of likely values that this measurement could give. Suppose that we make such a measure-ment, and obtain the value x0. We now know that the particle is located at x = x0. If we make another measurement immediately after the first one then what value do we expect to obtain? Well, common sense tells us that we must obtain the same value, x0, since the particle cannot have shifted position appreciably in an infinitesimal time interval. Thus, immediately after the first measurement, a measurement of the particle’s position is cer-tain to give the value x0, and has no chance of giving any other value. This implies that the wavefunction must have collapsed to some sort of “spike” function located at x = x0. This is illustrated in Fig. 3.9. Of course, as soon as the wavefunction has collapsed, it starts to expand again, as discussed in Sect. 3.13. Thus, the second measurement must be made reasonably quickly after the first, in order to guarantee that the same result will be obtained. The above discussion illustrates an important point in quantum mechanics. 
Namely, that the wavefunction of a particle changes discontinuously (in time) whenever a mea-surement is made. We conclude that there are two types of time evolution of the wave-function in quantum mechanics. First, there is a smooth evolution which is governed by Schr¨ odinger’s equation. This evolution takes place between measurements. Second, there is a discontinuous evolution which takes place each time a measurement is made. Exercises 1. A He-Ne laser emits radiation of wavelength λ = 633 nm. How many photons are emitted per second by a laser with a power of 1 mW? What force does such laser exert on a body which completely absorbs its radiation? 2. The ionization energy of a hydrogen atom in its ground state is Eion = 13.60 eV (1 eV is the energy acquired by an electron accelerated through a potential difference of 1 V). Calculate Wave-Particle Duality 37 |ψ|2 → AFTER BEFORE x → x → x0 |ψ|2 → Figure 3.9: Collapse of the wavefunction upon measurement of x. the frequency, wavelength, and wavenumber of the electromagnetic radiation which will just ionize the atom. 3. The maximum energy of photoelectrons from aluminium is 2.3 eV for radiation of wavelength 2000 ˚ A, and 0.90 eV for radiation of wavelength 2580 ˚ A. Use this data to calculate Planck’s constant, and the work function of aluminium. 4. Show that the de Broglie wavelength of an electron accelerated from rest across a potential difference V is given by λ = 1.29 × 10−9 V−1/2 m, where V is measured in volts. 5. If the atoms in a regular crystal are separated by 3×10−10 m demonstrate that an accelerating voltage of about 1.5 kV would be required to produce an electron diffraction pattern from the crystal. 6. The relationship between wavelength and frequency for electromagnetic waves in a waveg-uide is λ = c q ν2 −ν 2 0 , where c is the velocity of light in vacuum. What are the group and phase velocities of such waves as functions of ν0 and λ? 7. 
Nuclei, typically of size 10−14 m, frequently emit electrons with energies of 1–10 MeV. Use the uncertainty principle to show that electrons of energy 1 MeV could not be contained in the nucleus before the decay. 38 QUANTUM MECHANICS 8. A particle of mass m has a wavefunction ψ(x, t) = A exp[−a (m x2/¯ h + i t)], where A and a are positive real constants. For what potential function V(x) does ψ satisfy the Schr¨ odinger equation? Fundamentals of Quantum Mechanics 39 4 Fundamentals of Quantum Mechanics 4.1 Introduction The previous chapter serves as a useful introduction to many of the basic concepts of quantum mechanics. In this chapter, we shall examine these concepts in a more systematic fashion. For the sake of simplicity, we shall concentrate on one-dimensional systems. 4.2 Schr¨ odinger’s Equation Consider a dynamical system consisting of a single non-relativistic particle of mass m mov-ing along the x-axis in some real potential V(x). In quantum mechanics, the instantaneous state of the system is represented by a complex wavefunction ψ(x, t). This wavefunction evolves in time according to Schr¨ odinger’s equation: i ¯ h ∂ψ ∂t = −¯ h2 2 m ∂2ψ ∂x2 + V(x) ψ. (4.1) The wavefunction is interpreted as follows: |ψ(x, t)| 2 is the probability density of a mea-surement of the particle’s displacement yielding the value x. Thus, the probability of a measurement of the displacement giving a result between a and b (where a < b) is Px ∈a:b(t) = Z b a |ψ(x, t)| 2 dx. (4.2) Note that this quantity is real and positive definite. 4.3 Normalization of the Wavefunction Now, a probability is a real number between 0 and 1. An outcome of a measurement which has a probability 0 is an impossible outcome, whereas an outcome which has a probability 1 is a certain outcome. According to Eq. (4.2), the probability of a measurement of x yielding a result between −∞and +∞is Px ∈−∞:∞(t) = Z ∞ −∞ |ψ(x, t)| 2 dx. 
(4.3) However, a measurement of x must yield a value between −∞and +∞, since the particle has to be located somewhere. It follows that Px ∈−∞:∞= 1, or Z ∞ −∞ |ψ(x, t)| 2 dx = 1, (4.4) 40 QUANTUM MECHANICS which is generally known as the normalization condition for the wavefunction. For example, suppose that we wish to normalize the wavefunction of a Gaussian wave packet, centered on x = x0, and of characteristic width σ (see Sect. 3.12): i.e., ψ(x) = ψ0 e−(x−x0) 2/(4 σ2). (4.5) In order to determine the normalization constant ψ0, we simply substitute Eq. (4.5) into Eq. (4.4), to obtain |ψ0| 2 Z ∞ −∞ e−(x−x0) 2/(2 σ2) dx = 1. (4.6) Changing the variable of integration to y = (x −x0)/( √ 2 σ), we get |ψ0| 2√ 2 σ Z ∞ −∞ e−y2 dy = 1. (4.7) However, Z ∞ −∞ e−y2 dy = √π, (4.8) which implies that |ψ0| 2 = 1 (2π σ2)1/2. (4.9) Hence, a general normalized Gaussian wavefunction takes the form ψ(x) = e i ϕ (2π σ2)1/4 e−(x−x0) 2/(4 σ2), (4.10) where ϕ is an arbitrary real phase-angle. Now, it is important to demonstrate that if a wavefunction is initially normalized then it stays normalized as it evolves in time according to Schr¨ odinger’s equation. If this is not the case then the probability interpretation of the wavefunction is untenable, since it does not make sense for the probability that a measurement of x yields any possible outcome (which is, manifestly, unity) to change in time. Hence, we require that d dt Z ∞ −∞ |ψ(x, t)| 2 dx = 0, (4.11) for wavefunctions satisfying Schr¨ odinger’s equation. The above equation gives d dt Z ∞ −∞ ψ∗ψ dx = Z ∞ −∞ ∂ψ∗ ∂t ψ + ψ∗∂ψ ∂t ! dx = 0. (4.12) Now, multiplying Schr¨ odinger’s equation by ψ∗/(i ¯ h), we obtain ψ∗∂ψ ∂t = i ¯ h 2 m ψ∗∂2ψ ∂x2 −i ¯ h V |ψ| 2. (4.13) Fundamentals of Quantum Mechanics 41 The complex conjugate of this expression yields ψ ∂ψ∗ ∂t = −i ¯ h 2 m ψ ∂2ψ∗ ∂x2 + i ¯ h V |ψ| 2 (4.14) [since (A B)∗= A∗B∗, A∗∗= A, and i∗= −i]. 
Summing the previous two equations, we get ∂ψ∗ ∂t ψ + ψ∗∂ψ ∂t = i ¯ h 2 m ψ∗∂2ψ ∂x2 −ψ ∂2ψ∗ ∂x2 ! = i ¯ h 2 m ∂ ∂x ψ∗∂ψ ∂x −ψ ∂ψ∗ ∂x ! . (4.15) Equations (4.12) and (4.15) can be combined to produce d dt Z ∞ −∞ |ψ| 2 dx = i ¯ h 2 m " ψ∗∂ψ ∂x −ψ ∂ψ∗ ∂x #∞ −∞ = 0. (4.16) The above equation is satisfied provided |ψ| →0 as |x| →∞. (4.17) However, this is a necessary condition for the integral on the left-hand side of Eq. (4.4) to converge. Hence, we conclude that all wavefunctions which are square-integrable [i.e., are such that the integral in Eq. (4.4) converges] have the property that if the normalization condition (4.4) is satisfied at one instant in time then it is satisfied at all subsequent times. It is also possible to demonstrate, via very similar analysis to the above, that dPx ∈a:b dt + j(b, t) −j(a, t) = 0, (4.18) where Px ∈a:b is defined in Eq. (4.2), and j(x, t) = i ¯ h 2 m ψ ∂ψ∗ ∂x −ψ∗∂ψ ∂x ! (4.19) is known as the probability current. Note that j is real. Equation (4.18) is a probability conservation equation. According to this equation, the probability of a measurement of x lying in the interval a to b evolves in time due to the difference between the flux of probability into the interval [i.e., j(a, t)], and that out of the interval [i.e., j(b, t)]. Here, we are interpreting j(x, t) as the flux of probability in the +x-direction at position x and time t. Note, finally, that not all wavefunctions can be normalized according to the scheme set out in Eq. (4.4). For instance, a plane wave wavefunction ψ(x, t) = ψ0 e i (k x−ω t) (4.20) is not square-integrable, and, thus, cannot be normalized. For such wavefunctions, the best we can say is that Px ∈a:b(t) ∝ Z b a |ψ(x, t)| 2 dx. (4.21) In the following, all wavefunctions are assumed to be square-integrable and normalized, unless otherwise stated. 
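The normalization constant (4.9) and the norm-conservation argument of this section can be illustrated numerically. A sketch of my own, assuming NumPy, in units ħ = m = 1 (free evolution is a pure phase factor in k-space, so, by Parseval's theorem, it cannot change the norm):

```python
import numpy as np

# Check that the prefactor (2 pi sigma^2)^(-1/4) of Eq. (4.10) normalises
# the Gaussian wave packet, and that the norm is preserved under free
# Schrodinger evolution (Sect. 4.3).
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
h = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=h)

x0, sigma = 5.0, 2.0
psi = (2 * np.pi * sigma**2)**(-0.25) * np.exp(-(x - x0)**2 / (4 * sigma**2))

norm0 = np.sum(np.abs(psi)**2) * h

# Evolve to t = 3 by multiplying each k-component by exp(-i k^2 t / 2),
# a unitary operation that leaves the norm untouched.
psi_t = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k**2 * 3.0 / 2))
norm_t = np.sum(np.abs(psi_t)**2) * h
print(norm0, norm_t)   # both ≈ 1.0
```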
42 QUANTUM MECHANICS 4.4 Expectation Values and Variances We have seen that |ψ(x, t)| 2 is the probability density of a measurement of a particle’s displacement yielding the value x at time t. Suppose that we made a large number of independent measurements of the displacement on an equally large number of identical quantum systems. In general, measurements made on different systems will yield different results. However, from the definition of probability, the mean of all these results is simply ⟨x⟩= Z ∞ −∞ x |ψ| 2 dx. (4.22) Here, ⟨x⟩is called the expectation value of x. Similarly the expectation value of any function of x is ⟨f(x)⟩= Z ∞ −∞ f(x) |ψ| 2 dx. (4.23) In general, the results of the various different measurements of x will be scattered around the expectation value ⟨x⟩. The degree of scatter is parameterized by the quantity σ2 x = Z ∞ −∞ (x −⟨x⟩) 2 |ψ| 2 dx ≡⟨x2⟩−⟨x⟩2, (4.24) which is known as the variance of x. The square-root of this quantity, σx, is called the standard deviation of x. We generally expect the results of measurements of x to lie within a few standard deviations of the expectation value. For instance, consider the normalized Gaussian wave packet [see Eq. (4.10)] ψ(x) = e i ϕ (2π σ2)1/4 e−(x−x0) 2/(4 σ2). (4.25) The expectation value of x associated with this wavefunction is ⟨x⟩= 1 √ 2π σ2 Z ∞ −∞ x e−(x−x0) 2/(2 σ2) dx. (4.26) Let y = (x −x0)/( √ 2 σ). It follows that ⟨x⟩= x0 √π Z ∞ −∞ e−y2 dy + √ 2 σ √π Z ∞ −∞ y e−y2 dy. (4.27) However, the second integral on the right-hand side is zero, by symmetry. Hence, making use of Eq. (4.8), we obtain ⟨x⟩= x0. (4.28) Evidently, the expectation value of x for a Gaussian wave packet is equal to the most likely value of x (i.e., the value of x which maximizes |ψ| 2). Fundamentals of Quantum Mechanics 43 The variance of x associated with the Gaussian wave packet (4.25) is σ2 x = 1 √ 2π σ2 Z ∞ −∞ (x −x0) 2 e−(x−x0) 2/(2 σ2) dx. (4.29) Let y = (x −x0)/( √ 2 σ). 
It follows that σ2 x = 2 σ2 √π Z ∞ −∞ y2 e−y2 dy. (4.30) However, Z ∞ −∞ y2 e−y2 dy = √π 2 , (4.31) giving σ 2 x = σ2. (4.32) This result is consistent with our earlier interpretation of σ as a measure of the spatial extent of the wave packet (see Sect. 3.12). It follows that we can rewrite the Gaussian wave packet (4.25) in the convenient form ψ(x) = e i ϕ (2π σ 2 x)1/4 e−(x−⟨x⟩) 2/(4 σ 2 x ). (4.33) 4.5 Ehrenfest’s Theorem A simple way to calculate the expectation value of momentum is to evaluate the time derivative of ⟨x⟩, and then multiply by the mass m: i.e., ⟨p⟩= m d⟨x⟩ dt = m d dt Z ∞ −∞ x |ψ| 2 dx = m Z ∞ −∞ x ∂|ψ| 2 ∂t dx. (4.34) However, it is easily demonstrated that ∂|ψ| 2 ∂t + ∂j ∂x = 0 (4.35) [this is just the differential form of Eq. (4.18)], where j is the probability current defined in Eq. (4.19). Thus, ⟨p⟩= −m Z ∞ −∞ x ∂j ∂x dx = m Z ∞ −∞ j dx, (4.36) where we have integrated by parts. It follows from Eq. (4.19) that ⟨p⟩= −i ¯ h 2 Z ∞ −∞ ψ∗∂ψ ∂x −∂ψ∗ ∂x ψ ! dx = −i ¯ h Z ∞ −∞ ψ∗∂ψ ∂x dx, (4.37) 44 QUANTUM MECHANICS where we have again integrated by parts. Hence, the expectation value of the momentum can be written ⟨p⟩= m d⟨x⟩ dt = −i ¯ h Z ∞ −∞ ψ∗∂ψ ∂x dx. (4.38) It follows from the above that d⟨p⟩ dt = −i ¯ h Z ∞ −∞ ∂ψ∗ ∂t ∂ψ ∂x + ψ∗∂2ψ ∂t∂x ! dx = Z ∞ −∞ " i ¯ h ∂ψ ∂t !∗∂ψ ∂x + ∂ψ∗ ∂x i ¯ h ∂ψ ∂t !# dx, (4.39) where we have integrated by parts. Substituting from Schr¨ odinger’s equation (4.1), and simplifying, we obtain d⟨p⟩ dt = Z ∞ −∞ " −¯ h2 2 m ∂ ∂x ∂ψ∗ ∂x ∂ψ ∂x ! + V(x) ∂|ψ| 2 ∂x # dx = Z ∞ −∞ V(x) ∂|ψ| 2 ∂x dx. (4.40) Integration by parts yields d⟨p⟩ dt = − Z ∞ −∞ dV dx |ψ| 2 dx = − dV dx + . (4.41) Hence, according to Eqs. (4.34) and (4.41), m d⟨x⟩ dt = ⟨p⟩, (4.42) d⟨p⟩ dt = − dV dx + . (4.43) Evidently, the expectation values of displacement and momentum obey time evolution equations which are analogous to those of classical mechanics. This result is known as Ehrenfest’s theorem. Suppose that the potential V(x) is slowly varying. 
In this case, we can expand dV/dx as a Taylor series about ⟨x⟩. Keeping terms up to second order, we obtain

dV(x)/dx ≃ dV(⟨x⟩)/d⟨x⟩ + d²V(⟨x⟩)/d⟨x⟩² (x − ⟨x⟩) + (1/2) d³V(⟨x⟩)/d⟨x⟩³ (x − ⟨x⟩)². (4.44)

Substitution of the above expansion into Eq. (4.43) yields

d⟨p⟩/dt = −dV(⟨x⟩)/d⟨x⟩ − (σ²ₓ/2) d³V(⟨x⟩)/d⟨x⟩³, (4.45)

since ⟨1⟩ = 1, and ⟨x − ⟨x⟩⟩ = 0, and ⟨(x − ⟨x⟩)²⟩ = σ²ₓ. The final term on the right-hand side of the above equation can be neglected when the spatial extent of the particle wavefunction, σₓ, is much smaller than the variation length-scale of the potential. In this case, Eqs. (4.42) and (4.43) reduce to

m d⟨x⟩/dt = ⟨p⟩, (4.46)
d⟨p⟩/dt = −dV(⟨x⟩)/d⟨x⟩. (4.47)

These equations are exactly equivalent to the equations of classical mechanics, with ⟨x⟩ playing the role of the particle displacement. Of course, if the spatial extent of the wavefunction is negligible then a measurement of x is almost certain to yield a result which lies very close to ⟨x⟩. Hence, we conclude that quantum mechanics corresponds to classical mechanics in the limit that the spatial extent of the wavefunction (which is typically of order the de Broglie wavelength) is negligible. This is an important result, since we know that classical mechanics gives the correct answer in this limit.

4.6 Operators

An operator, O (say), is a mathematical entity which transforms one function into another: i.e.,

O(f(x)) → g(x). (4.48)

For instance, x is an operator, since x f(x) is a different function to f(x), and is fully specified once f(x) is given. Furthermore, d/dx is also an operator, since df(x)/dx is a different function to f(x), and is fully specified once f(x) is given. Now,

x df/dx ≠ d(x f)/dx. (4.49)

This can also be written

x d/dx ≠ (d/dx) x, (4.50)

where the operators are assumed to act on everything to their right, and a final f(x) is understood [where f(x) is a general function].
The above expression illustrates an important point: i.e., in general, operators do not commute. Of course, some operators do commute: e.g., x x2 = x2 x. (4.51) Finally, an operator, O, is termed linear if O(c f(x)) = c O(f(x)), (4.52) where f is a general function, and c a general complex number. All of the operators employed in quantum mechanics are linear. 46 QUANTUM MECHANICS Now, from Eqs. (4.22) and (4.38), ⟨x⟩ = Z ∞ −∞ ψ∗x ψ dx, (4.53) ⟨p⟩ = Z ∞ −∞ ψ∗ −i ¯ h ∂ ∂x ! ψ dx. (4.54) These expressions suggest a number of things. First, classical dynamical variables, such as x and p, are represented in quantum mechanics by linear operators which act on the wave-function. Second, displacement is represented by the algebraic operator x, and momentum by the differential operator −i ¯ h ∂/∂x: i.e., p ≡−i ¯ h ∂ ∂x. (4.55) Finally, the expectation value of some dynamical variable represented by the operator O(x) is simply ⟨O⟩= Z ∞ −∞ ψ∗(x, t) O(x) ψ(x, t) dx. (4.56) Clearly, if an operator is to represent a dynamical variable which has physical signifi-cance then its expectation value must be real. In other words, if the operator O represents a physical variable then we require that ⟨O⟩= ⟨O⟩∗, or Z ∞ −∞ ψ∗(O ψ) dx = Z ∞ −∞ (O ψ)∗ψ dx, (4.57) where O∗is the complex conjugate of O. An operator which satisfies the above constraint is called an Hermitian operator. It is easily demonstrated that x and p are both Hermitian. The Hermitian conjugate, O†, of a general operator, O, is defined as follows: Z ∞ −∞ ψ∗(O ψ) dx = Z ∞ −∞ (O† ψ)∗ψ dx. (4.58) The Hermitian conjugate of an Hermitian operator is the same as the operator itself: i.e., p† = p. For a non-Hermitian operator, O (say), it is easily demonstrated that (O†)† = O, and that the operator O + O† is Hermitian. Finally, if A and B are two operators, then (A B)† = B† A†. Suppose that we wish to find the operator which corresponds to the classical dynamical variable x p. 
In classical mechanics, there is no difference between x p and p x. However, in quantum mechanics, we have already seen that x p ≠ p x. So, should we choose x p or p x? Actually, neither of these combinations is Hermitian. However, (1/2) [x p + (x p)†] is Hermitian. Moreover, (1/2) [x p + (x p)†] = (1/2) (x p + p† x†) = (1/2) (x p + p x), which neatly resolves our problem of which order to put x and p.

It is a reasonable guess that the operator corresponding to energy (which is called the Hamiltonian, and conventionally denoted H) takes the form

H ≡ p²/(2 m) + V(x). (4.59)

Note that H is Hermitian. Now, it follows from Eq. (4.55) that

H ≡ −(ħ²/2 m) ∂²/∂x² + V(x). (4.60)

However, according to Schrödinger's equation, (4.1), we have

−(ħ²/2 m) ∂²/∂x² + V(x) = i ħ ∂/∂t, (4.61)

so

H ≡ i ħ ∂/∂t. (4.62)

Thus, the time-dependent Schrödinger equation can be written

i ħ ∂ψ/∂t = H ψ. (4.63)

Finally, if O(x, p, E) is a classical dynamical variable which is a function of displacement, momentum, and energy, then a reasonable guess for the corresponding operator in quantum mechanics is (1/2) [O(x, p, H) + O†(x, p, H)], where p = −i ħ ∂/∂x, and H = i ħ ∂/∂t.

4.7 Momentum Representation

Fourier's theorem (see Sect. 3.12), applied to one-dimensional wavefunctions, yields

ψ(x, t) = (1/√(2π)) ∫_{−∞}^{∞} ψ̄(k, t) e^{+i k x} dk, (4.64)
ψ̄(k, t) = (1/√(2π)) ∫_{−∞}^{∞} ψ(x, t) e^{−i k x} dx, (4.65)

where k represents wavenumber. However, p = ħ k. Hence, we can also write

ψ(x, t) = (1/√(2π ħ)) ∫_{−∞}^{∞} φ(p, t) e^{+i p x/ħ} dp, (4.66)
φ(p, t) = (1/√(2π ħ)) ∫_{−∞}^{∞} ψ(x, t) e^{−i p x/ħ} dx, (4.67)

where φ(p, t) = ψ̄(k, t)/√ħ is the momentum-space equivalent to the real-space wavefunction ψ(x, t).

At this stage, it is convenient to introduce a useful function called the Dirac delta-function.
This function, denoted δ(x), was first devised by Paul Dirac, and has the following rather unusual properties: δ(x) is zero for x ̸= 0, and is infinite at x = 0. However, the singularity at x = 0 is such that Z ∞ −∞ δ(x) dx = 1. (4.68) The delta-function is an example of what is known as a generalized function: i.e., its value is not well-defined at all x, but its integral is well-defined. Consider the integral Z ∞ −∞ f(x) δ(x) dx. (4.69) Since δ(x) is only non-zero infinitesimally close to x = 0, we can safely replace f(x) by f(0) in the above integral (assuming f(x) is well behaved at x = 0), to give Z ∞ −∞ f(x) δ(x) dx = f(0) Z ∞ −∞ δ(x) dx = f(0), (4.70) where use has been made of Eq. (4.68). A simple generalization of this result yields Z ∞ −∞ f(x) δ(x −x0) dx = f(x0), (4.71) which can also be thought of as an alternative definition of a delta-function. Suppose that ψ(x) = δ(x −x0). It follows from Eqs. (4.67) and (4.71) that φ(p) = e−i p x0/¯ h √ 2π ¯ h . (4.72) Hence, Eq. (4.66) yields the important result δ(x −x0) = 1 2π ¯ h Z ∞ −∞ e+i p (x−x0)/¯ h dp. (4.73) Similarly, δ(p −p0) = 1 2π ¯ h Z ∞ −∞ e+i (p−p0) x/¯ h dx. (4.74) It turns out that we can just as well formulate quantum mechanics using momentum-space wavefunctions, φ(p, t), as real-space wavefunctions, ψ(x, t). The former scheme is known as the momentum representation of quantum mechanics. In the momentum rep-resentation, wavefunctions are the Fourier transforms of the equivalent real-space wave-functions, and dynamical variables are represented by different operators. Furthermore, by analogy with Eq. (4.56), the expectation value of some operator O(p) takes the form ⟨O⟩= Z ∞ −∞ φ∗(p, t) O(p) φ(p, t) dp. (4.75) Fundamentals of Quantum Mechanics 49 Consider momentum. We can write ⟨p⟩ = Z ∞ −∞ ψ∗(x, t) −i ¯ h ∂ ∂x ! ψ(x, t) dx = 1 2π ¯ h Z ∞ −∞ Z ∞ −∞ Z ∞ −∞ φ∗(p′, t) φ(p, t) p e+i(p−p′) x/¯ h dx dp dp′, (4.76) where use has been made of Eq. (4.66). However, it follows from Eq. 
(4.74) that ⟨p⟩= Z ∞ −∞ Z ∞ −∞ φ∗(p′, t) φ(p, t) p δ(p −p′) dp dp′. (4.77) Hence, using Eq. (4.71), we obtain ⟨p⟩= Z ∞ −∞ φ∗(p, t) p φ(p, t) dp = Z ∞ −∞ p |φ| 2 dp. (4.78) Evidently, momentum is represented by the operator p in the momentum representation. The above expression also strongly suggests [by comparison with Eq. (4.22)] that |φ(p, t)| 2 can be interpreted as the probability density of a measurement of momentum yielding the value p at time t. It follows that φ(p, t) must satisfy an analogous normalization condition to Eq. (4.4): i.e., Z ∞ −∞ |φ(p, t)| 2 dp = 1. (4.79) Consider displacement. We can write ⟨x⟩ = Z ∞ −∞ ψ∗(x, t) x ψ(x, t) dx (4.80) = 1 2π ¯ h Z ∞ −∞ Z ∞ −∞ Z ∞ −∞ φ∗(p′, t) φ(p, t) −i ¯ h ∂ ∂p ! e+i (p−p′) x/¯ h dx dp dp′. Integration by parts yields ⟨x⟩= 1 2π ¯ h Z ∞ −∞ Z ∞ −∞ Z ∞ −∞ φ∗(p′, t) e+i (p−p′) x/¯ h i ¯ h ∂ ∂p ! φ(p, t) dx dp dp′. (4.81) Hence, making use of Eqs. (4.74) and (4.71), we obtain ⟨x⟩= 1 2π ¯ h Z ∞ −∞ φ∗(p) i ¯ h ∂ ∂p ! φ(p) dp. (4.82) Evidently, displacement is represented by the operator x ≡i ¯ h ∂ ∂p (4.83) 50 QUANTUM MECHANICS in the momentum representation. Finally, let us consider the normalization of the momentum-space wavefunction φ(p, t). We have Z ∞ −∞ ψ∗(x, t) ψ(x, t) dx = 1 2π ¯ h Z ∞ −∞ Z ∞ −∞ Z ∞ −∞ φ∗(p′, t) φ(p, t) e+i(p−p′) x/¯ h dx dp dp′. (4.84) Thus, it follows from Eqs. (4.71) and (4.74) that Z ∞ −∞ |ψ(x, t)| 2 dx = Z ∞ −∞ |φ(p, t)| 2 dp. (4.85) Hence, if ψ(x, t) is properly normalized [see Eq. (4.4)] then φ(p, t), as defined in Eq. (4.67), is also properly normalized [see Eq. (4.79)]. The existence of the momentum representation illustrates an important point: i.e., that there are many different, but entirely equivalent, ways of mathematically formulating quantum mechanics. For instance, it is also possible to represent wavefunctions as row and column vectors, and dynamical variables as matrices which act upon these vectors. 
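The equivalence of the two representations can be demonstrated with a discrete Fourier transform: ⟨p⟩ computed in real space with the operator −i ħ ∂/∂x [Eq. (4.54)] matches ∫ p |φ|² dp [Eq. (4.78)], and the norms agree as in Eq. (4.85). A sketch of my own, assuming NumPy, with ħ = 1 (grid parameters and packet parameters are arbitrary choices):

```python
import numpy as np

N, L = 2048, 100.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
h = L / N
p = 2 * np.pi * np.fft.fftfreq(N, d=h)   # p = hbar k with hbar = 1

x0, p0, sigma = -3.0, 1.7, 1.5
psi = (2 * np.pi * sigma**2)**(-0.25) \
    * np.exp(-(x - x0)**2 / (4 * sigma**2) + 1j * p0 * x)

# <p> in the real-space representation: -i * integral psi* dpsi/dx dx,
# with the derivative taken spectrally.
dpsi = np.fft.ifft(1j * p * np.fft.fft(psi))
p_real = np.real(-1j * np.sum(np.conj(psi) * dpsi) * h)

# <p> in the momentum representation: phi is the (continuum-normalised)
# Fourier transform of psi, Eq. (4.67).
phi = np.fft.fft(psi) * h / np.sqrt(2 * np.pi)
dp = 2 * np.pi / L
p_mom = np.sum(p * np.abs(phi)**2) * dp
norm_mom = np.sum(np.abs(phi)**2) * dp
print(p_real, p_mom, norm_mom)   # ≈ 1.7, 1.7, 1.0
```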
4.8 Heisenberg’s Uncertainty Principle Consider a real-space Hermitian operator O(x). A straightforward generalization of Eq. (4.57) yields Z ∞ −∞ ψ∗ 1 (O ψ2) dx = Z ∞ −∞ (O ψ1)∗ψ2 dx, (4.86) where ψ1(x) and ψ2(x) are general functions. Let f = (A −⟨A⟩) ψ, where A(x) is an Hermitian operator, and ψ(x) a general wave-function. We have Z ∞ −∞ |f| 2 dx = Z ∞ −∞ f∗f dx = Z ∞ −∞ [(A −⟨A⟩) ψ] ∗[(A −⟨A⟩) ψ] dx. (4.87) Making use of Eq. (4.86), we obtain Z ∞ −∞ |f| 2 dx = Z ∞ −∞ ψ∗(A −⟨A⟩) 2 ψ dx = σ 2 A, (4.88) where σ 2 A is the variance of A [see Eq. (4.24)]. Similarly, if g = (B −⟨B⟩) ψ, where B is a second Hermitian operator, then Z ∞ −∞ |g| 2 dx = σ 2 B, (4.89) Now, there is a standard result in mathematics, known as the Schwartz inequality, which states that Z b a f∗(x) g(x) dx 2 ≤ Z b a |f(x)| 2 dx Z b a |g(x)| 2 dx, (4.90) Fundamentals of Quantum Mechanics 51 where f and g are two general functions. Furthermore, if z is a complex number then |z| 2 = [Re(z)] 2 + [Im(z)] 2 ≥[Im(z)] 2 = " 1 2 i (z −z∗) # 2 . (4.91) Hence, if z = R∞ −∞f∗g dx then Eqs. (4.88)–(4.91) yield σ 2 A σ 2 B ≥ " 1 2 i (z −z∗) # 2 . (4.92) However, z = Z ∞ −∞ [(A −⟨A⟩) ψ] ∗[(B −⟨B⟩) ψ] dx = Z ∞ −∞ ψ∗(A −⟨A⟩) (B −⟨B⟩) ψ dx, (4.93) where use has been made of Eq. (4.86). The above equation reduces to z = Z ∞ −∞ ψ∗A B ψ dx −⟨A⟩⟨B⟩. (4.94) Furthermore, it is easily demonstrated that z∗= Z ∞ −∞ ψ∗B A ψ dx −⟨A⟩⟨B⟩. (4.95) Hence, Eq. (4.92) gives σ 2 A σ 2 B ≥ 1 2 i⟨[A, B]⟩ !2 , (4.96) where [A, B] ≡A B −B A. (4.97) Equation (4.96) is the general form of Heisenberg’s uncertainty principle in quantum mechanics. It states that if two dynamical variables are represented by the two Hermitian operators A and B, and these operators do not commute (i.e., A B ̸= B A), then it is im-possible to simultaneously (exactly) measure the two variables. 
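The general uncertainty relation (4.96) holds for any pair of Hermitian operators and any state. As a quick sanity check (an illustration only, using finite-dimensional matrices rather than the differential operators of the text, with a fixed random seed), the sketch below draws two random Hermitian matrices and a random normalized state, then confirms σ²_A σ²_B ≥ (⟨[A, B]⟩/2i)².

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2          # (M + M^dagger)/2 is Hermitian

A = random_hermitian(n)
B = random_hermitian(n)

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)               # normalized state

def expect(op, state):
    # expectation value <op>; real for a Hermitian op
    return (state.conj() @ op @ state).real

varA = expect(A @ A, psi) - expect(A, psi) ** 2
varB = expect(B @ B, psi) - expect(B, psi) ** 2

# <[A,B]> is purely imaginary for Hermitian A, B, so <[A,B]>/2i is real
comm = A @ B - B @ A
bound = (psi.conj() @ comm @ psi / 2j).real ** 2

print(varA * varB, bound, varA * varB >= bound)
```

The product of the variances always exceeds (or equals) the commutator bound, regardless of the seed or dimension chosen.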
Instead, the product of the variances in the measurements is always greater than some critical value, which depends on the extent to which the two operators do not commute. For instance, displacement and momentum are represented (in real-space) by the op-erators x and p ≡−i ¯ h ∂/∂x, respectively. Now, it is easily demonstrated that [x, p] = i ¯ h. (4.98) Thus, σx σp ≥¯ h 2 , (4.99) 52 QUANTUM MECHANICS which can be recognized as the standard displacement-momentum uncertainty principle (see Sect. 3.14). It turns out that the minimum uncertainty (i.e., σx σp = ¯ h/2) is only achieved by Gaussian wave packets (see Sect. 3.12): i.e., ψ(x) = e+i p0 x/¯ h (2π σ 2 x)1/4 e−(x−x0) 2/4 σ 2 x , (4.100) φ(p) = e−i p x0/¯ h (2π σ 2 p)1/4 e−(p−p0) 2/4 σ 2 p, (4.101) where φ(p) is the momentum-space equivalent of ψ(x). Energy and time are represented by the operators H ≡i ¯ h ∂/∂t and t, respectively. These operators do not commute, indicating that energy and time cannot be measured simultaneously. In fact, [H, t] = i ¯ h, (4.102) so σE σt ≥¯ h 2 . (4.103) This can be written, somewhat less exactly, as ∆E ∆t > ∼¯ h, (4.104) where ∆E and ∆t are the uncertainties in energy and time, respectively. The above expres-sion is generally known as the energy-time uncertainty principle. For instance, suppose that a particle passes some fixed point on the x-axis. Since the particle is, in reality, an extended wave packet, it takes a certain amount of time ∆t for the particle to pass. Thus, there is an uncertainty, ∆t, in the arrival time of the particle. Moreover, since E = ¯ h ω, the only wavefunctions which have unique energies are those with unique frequencies: i.e., plane waves. Since a wave packet of finite extent is made up of a combination of plane waves of different wavenumbers, and, hence, different frequen-cies, there will be an uncertainty ∆E in the particle’s energy which is proportional to the range of frequencies of the plane waves making up the wave packet. 
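This reciprocity between temporal compactness and energy spread can be made quantitative. As a numerical illustration (not part of the original text; ħ = 1 and the pulse parameters are arbitrary), the sketch below takes the Gaussian pulse ψ(t) of Eq. (4.107), computes χ(E) directly from the Fourier integral (4.106), and checks that the resulting energy distribution is normalized, centred on E₀, and satisfies σ_E σ_t = ħ/2.

```python
import numpy as np

hbar = 1.0
t0, E0, st = 0.0, 3.0, 0.4        # pulse centre, mean energy, temporal width

t = np.linspace(-6, 6, 4001); dt = t[1] - t[0]
E = np.linspace(-7, 13, 2001); dE = E[1] - E[0]

# Gaussian pulse, Eq. (4.107)
psi = (np.exp(-1j * E0 * t / hbar) / (2 * np.pi * st**2) ** 0.25
       * np.exp(-(t - t0) ** 2 / (4 * st**2)))

# chi(E) from the Fourier integral, Eq. (4.106), evaluated numerically
chi = np.array([np.sum(psi * np.exp(1j * Ei * t / hbar)) * dt
                for Ei in E]) / np.sqrt(2 * np.pi * hbar)

rho = np.abs(chi) ** 2
norm = np.sum(rho) * dE                       # Parseval: should be 1
mean_E = np.sum(E * rho) * dE / norm
sig_E = np.sqrt(np.sum((E - mean_E) ** 2 * rho) * dE / norm)

print(norm, mean_E, sig_E * st)               # product ~ hbar/2 = 0.5
```

The computed product σ_E σ_t comes out at ħ/2, the minimum-uncertainty value that only Gaussian packets achieve.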
The more compact the wave packet (and, hence, the smaller ∆t), the larger the range of frequencies of the constituent plane waves (and, hence, the larger ∆E), and vice versa. To be more exact, if ψ(t) is the wavefunction measured at the fixed point as a function of time, then we can write
\[ \psi(t) = \frac{1}{\sqrt{2\pi\,\hbar}} \int_{-\infty}^{\infty} \chi(E)\,{\rm e}^{-{\rm i}\,E\,t/\hbar}\,dE. \tag{4.105} \]
In other words, we can express ψ(t) as a linear combination of plane waves of definite energy E. Here, χ(E) is the complex amplitude of plane waves of energy E in this combination. By Fourier's theorem, we also have
\[ \chi(E) = \frac{1}{\sqrt{2\pi\,\hbar}} \int_{-\infty}^{\infty} \psi(t)\,{\rm e}^{+{\rm i}\,E\,t/\hbar}\,dt. \tag{4.106} \]
For instance, if ψ(t) is a Gaussian then it is easily shown that χ(E) is also a Gaussian: i.e.,
\[ \psi(t) = \frac{{\rm e}^{-{\rm i}\,E_0\,t/\hbar}}{(2\pi\,\sigma_t^{\,2})^{1/4}}\,{\rm e}^{-(t-t_0)^{2}/4\,\sigma_t^{\,2}}, \tag{4.107} \]
\[ \chi(E) = \frac{{\rm e}^{+{\rm i}\,E\,t_0/\hbar}}{(2\pi\,\sigma_E^{\,2})^{1/4}}\,{\rm e}^{-(E-E_0)^{2}/4\,\sigma_E^{\,2}}, \tag{4.108} \]
where σ_E σ_t = ħ/2. As before, Gaussian wave packets satisfy the minimum uncertainty principle σ_E σ_t = ħ/2. Conversely, non-Gaussian wave packets are characterized by σ_E σ_t > ħ/2.

4.9 Eigenstates and Eigenvalues

Consider a general real-space operator A(x). When this operator acts on a general wavefunction ψ(x) the result is usually a wavefunction with a completely different shape. However, there are certain special wavefunctions which are such that when A acts on them the result is just a multiple of the original wavefunction. These special wavefunctions are called eigenstates, and the multiples are called eigenvalues. Thus, if
\[ A\,\psi_a(x) = a\,\psi_a(x), \tag{4.109} \]
where a is a complex number, then ψ_a is called an eigenstate of A corresponding to the eigenvalue a.

Suppose that A is an Hermitian operator corresponding to some physical dynamical variable. Consider a particle whose wavefunction is ψ_a. The expectation value of A in this state is simply [see Eq. (4.56)]
\[ \langle A \rangle = \int_{-\infty}^{\infty} \psi_a^{\ast}\,A\,\psi_a\,dx = a \int_{-\infty}^{\infty} \psi_a^{\ast}\,\psi_a\,dx = a, \tag{4.110} \]
where use has been made of Eq. (4.109) and the normalization condition (4.4).
Moreover, ⟨A2⟩= Z ∞ −∞ ψ∗ a A2 ψa dx = a Z ∞ −∞ ψ∗ a A ψa dx = a2 Z ∞ −∞ ψ∗ a ψa dx = a2, (4.111) so the variance of A is [cf., Eq. (4.24)] σ 2 A = ⟨A2⟩−⟨A⟩2 = a2 −a2 = 0. (4.112) The fact that the variance is zero implies that every measurement of A is bound to yield the same result: namely, a. Thus, the eigenstate ψa is a state which is associated with a unique value of the dynamical variable corresponding to A. This unique value is simply the associated eigenvalue. 54 QUANTUM MECHANICS It is easily demonstrated that the eigenvalues of an Hermitian operator are all real. Recall [from Eq. (4.86)] that an Hermitian operator satisfies Z ∞ −∞ ψ∗ 1 (A ψ2) dx = Z ∞ −∞ (A ψ1)∗ψ2 dx. (4.113) Hence, if ψ1 = ψ2 = ψa then Z ∞ −∞ ψ∗ a (A ψa) dx = Z ∞ −∞ (A ψa)∗ψa dx, (4.114) which reduces to [see Eq. (4.109)] a = a∗, (4.115) assuming that ψa is properly normalized. Two wavefunctions, ψ1(x) and ψ2(x), are said to be orthogonal if Z ∞ −∞ ψ∗ 1 ψ2 dx = 0. (4.116) Consider two eigenstates of A, ψa and ψa′, which correspond to the two different eigen-values a and a′, respectively. Thus, A ψa = a ψa, (4.117) A ψa′ = a′ ψa′. (4.118) Multiplying the complex conjugate of the first equation by ψa′, and the second equation by ψ∗ a, and then integrating over all x, we obtain Z ∞ −∞ (A ψa)∗ψa′ dx = a Z ∞ −∞ ψ∗ a ψa′ dx, (4.119) Z ∞ −∞ ψ∗ a (A ψa′) dx = a′ Z ∞ −∞ ψ∗ a ψa′ dx. (4.120) However, from Eq. (4.113), the left-hand sides of the above two equations are equal. Hence, we can write (a −a′) Z ∞ −∞ ψ∗ a ψa′ dx = 0. (4.121) By assumption, a ̸= a′, yielding Z ∞ −∞ ψ∗ a ψa′ dx = 0. (4.122) In other words, eigenstates of an Hermitian operator corresponding to different eigenvalues are automatically orthogonal. Fundamentals of Quantum Mechanics 55 Consider two eigenstates of A, ψa and ψ′ a, which correspond to the same eigenvalue, a. Such eigenstates are termed degenerate. The above proof of the orthogonality of different eigenstates fails for degenerate eigenstates. 
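Both properties proved above (real eigenvalues, orthogonal eigenstates for distinct eigenvalues) can be seen concretely in a discretized example. The sketch below (an illustration, not from the text; it uses units 2m = ħ = 1 and an arbitrary smooth potential) builds a finite-difference Hamiltonian on a grid, which is a real-symmetric, hence Hermitian, matrix, and verifies that its eigenvectors form an orthonormal set.

```python
import numpy as np

# Discretize H = -d^2/dx^2 + V(x) on a uniform grid (units 2m = hbar = 1).
n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
V = 50.0 * (x - 0.5) ** 2                 # arbitrary smooth potential

# Standard three-point stencil: a real-symmetric (Hermitian) matrix.
H = (np.diag(np.full(n, 2.0 / h**2) + V)
     - np.diag(np.full(n - 1, 1.0 / h**2), 1)
     - np.diag(np.full(n - 1, 1.0 / h**2), -1))

evals, evecs = np.linalg.eigh(H)          # eigh: real eigenvalues, sorted

# Eigenvectors belonging to different eigenvalues are orthogonal, and
# eigh returns them normalized, so the overlap matrix is the identity.
overlap = evecs.T @ evecs
print(np.max(np.abs(overlap - np.eye(n))))
```

The maximum deviation of the overlap matrix from the identity is at round-off level, the discrete analogue of the orthonormality condition (4.127).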
Note, however, that any linear combination of ψa and ψ′ a is also an eigenstate of A corresponding to the eigenvalue a. Thus, even if ψa and ψ′ a are not orthogonal, we can always choose two linear combinations of these eigenstates which are orthogonal. For instance, if ψa and ψ′ a are properly normalized, and Z ∞ −∞ ψ∗ a ψ′ a dx = c, (4.123) then it is easily demonstrated that ψ′′ a = |c| q 1 −|c|2  ψa −c−1 ψ′ a  (4.124) is a properly normalized eigenstate of A, corresponding to the eigenvalue a, which is orthogonal to ψa. It is straightforward to generalize the above argument to three or more degenerate eigenstates. Hence, we conclude that the eigenstates of an Hermitian operator are, or can be chosen to be, mutually orthogonal. It is also possible to demonstrate that the eigenstates of an Hermitian operator form a complete set: i.e., that any general wavefunction can be written as a linear combination of these eigenstates. However, the proof is quite difficult, and we shall not attempt it here. In summary, given an Hermitian operator A, any general wavefunction, ψ(x), can be written ψ = X i ci ψi, (4.125) where the ci are complex weights, and the ψi are the properly normalized (and mutually orthogonal) eigenstates of A: i.e., A ψi = ai ψi, (4.126) where ai is the eigenvalue corresponding to the eigenstate ψi, and Z ∞ −∞ ψ∗ i ψj dx = δij. (4.127) Here, δij is called the Kronecker delta-function, and takes the value unity when its two indices are equal, and zero otherwise. It follows from Eqs. (4.125) and (4.127) that ci = Z ∞ −∞ ψ∗ i ψ dx. (4.128) Thus, the expansion coefficients in Eq. (4.125) are easily determined, given the wavefunc-tion ψ and the eigenstates ψi. Moreover, if ψ is a properly normalized wavefunction then Eqs. (4.125) and (4.127) yield X i |ci|2 = 1. (4.129) 56 QUANTUM MECHANICS 4.10 Measurement Suppose that A is an Hermitian operator corresponding to some dynamical variable. By analogy with the discussion in Sect. 
3.16, we expect that if a measurement of A yields the result a then the act of measurement will cause the wavefunction to collapse to a state in which a measurement of A is bound to give the result a. What sort of wavefunction, ψ, is such that a measurement of A is bound to yield a certain result, a? Well, expressing ψ as a linear combination of the eigenstates of A, we have
\[ \psi = \sum_i c_i\,\psi_i, \tag{4.130} \]
where ψ_i is an eigenstate of A corresponding to the eigenvalue a_i. If a measurement of A is bound to yield the result a then
\[ \langle A \rangle = a, \tag{4.131} \]
and
\[ \sigma_A^{\,2} = \langle A^2 \rangle - \langle A \rangle^2 = 0. \tag{4.132} \]
Now it is easily seen that
\[ \langle A \rangle = \sum_i |c_i|^2\,a_i, \tag{4.133} \]
\[ \langle A^2 \rangle = \sum_i |c_i|^2\,a_i^{\,2}. \tag{4.134} \]
Thus, Eq. (4.132) gives
\[ \sum_i a_i^{\,2}\,|c_i|^2 - \left( \sum_i a_i\,|c_i|^2 \right)^{2} = 0. \tag{4.135} \]
Furthermore, the normalization condition yields
\[ \sum_i |c_i|^2 = 1. \tag{4.136} \]
For instance, suppose that there are only two eigenstates. The above two equations then reduce to |c₁|² = x, and |c₂|² = 1 − x, where 0 ≤ x ≤ 1, and
\[ (a_1 - a_2)^{2}\,x\,(1-x) = 0. \tag{4.137} \]
The only solutions are x = 0 and x = 1. This result can easily be generalized to the case where there are more than two eigenstates. It follows that a state associated with a definite value of A is one in which one of the |c_i|² is unity, and all of the others are zero. In other words, the only states associated with definite values of A are the eigenstates of A. It immediately follows that the result of a measurement of A must be one of the eigenvalues of A. Moreover, if a general wavefunction is expanded as a linear combination of the eigenstates of A, as in Eq. (4.130), then it is clear from Eq. (4.133), and the general definition of a mean, that the probability of a measurement of A yielding the eigenvalue a_i is simply |c_i|², where c_i is the coefficient in front of the ith eigenstate in the expansion. Note, from Eq.
(4.136), that these probabilities are properly normalized: i.e., the probability of a measurement of A resulting in any possible answer is unity. Finally, if a measurement of A results in the eigenvalue ai then immediately after the measurement the system will be left in the eigenstate corresponding to ai. Consider two physical dynamical variables represented by the two Hermitian operators A and B. Under what circumstances is it possible to simultaneously measure these two variables (exactly)? Well, the possible results of measurements of A and B are the eigen-values of A and B, respectively. Thus, to simultaneously measure A and B (exactly) there must exist states which are simultaneous eigenstates of A and B. In fact, in order for A and B to be simultaneously measurable under all circumstances, we need all of the eigenstates of A to also be eigenstates of B, and vice versa, so that all states associated with unique values of A are also associated with unique values of B, and vice versa. Now, we have already seen, in Sect. 4.8, that if A and B do not commute (i.e., if A B ̸= B A) then they cannot be simultaneously measured. This suggests that the condition for simultaneous measurement is that A and B should commute. Suppose that this is the case, and that the ψi and ai are the normalized eigenstates and eigenvalues of A, respectively. It follows that (A B −B A) ψi = (A B −B ai) ψi = (A −ai) B ψi = 0, (4.138) or A (B ψi) = ai (B ψi). (4.139) Thus, B ψi is an eigenstate of A corresponding to the eigenvalue ai (though not necessarily a normalized one). In other words, B ψi ∝ψi, or B ψi = bi ψi, (4.140) where bi is a constant of proportionality. Hence, ψi is an eigenstate of B, and, thus, a simultaneous eigenstate of A and B. We conclude that if A and B commute then they possess simultaneous eigenstates, and are thus simultaneously measurable (exactly). 
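The conclusion that commuting Hermitian operators possess simultaneous eigenstates can be demonstrated directly with matrices. In the sketch below (an illustration, not from the text; the eigenvalue lists are arbitrary), two commuting Hermitian matrices are constructed by diagonalizing them in the same random unitary basis; every eigenvector of A is then confirmed to be an eigenvector of B as well, as in Eq. (4.140).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Diagonal in the same (random unitary) basis => A and B commute.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
a = np.array([1.0, 2.0, 3.0, 4.0])      # eigenvalues of A (non-degenerate)
b = np.array([-1.0, 0.5, 2.0, 7.0])     # eigenvalues of B

A = U @ np.diag(a) @ U.conj().T
B = U @ np.diag(b) @ U.conj().T

comm = A @ B - B @ A
print(np.max(np.abs(comm)))             # ~ 0: the operators commute

# Each eigenvector of A is simultaneously an eigenvector of B:
_, vecs = np.linalg.eigh(A)
for i in range(n):
    v = vecs[:, i]
    bi = (v.conj() @ B @ v).real        # candidate eigenvalue of B
    print(np.linalg.norm(B @ v - bi * v))   # ~ 0 for every i
```

Since the eigenvalues of A are non-degenerate, its eigenvectors are unique up to phase, and each is annihilated by (B − b_i), exactly as the argument leading to Eq. (4.140) predicts.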
4.11 Continuous Eigenvalues In the previous two sections, it was tacitly assumed that we were dealing with operators possessing discrete eigenvalues and square-integrable eigenstates. Unfortunately, some operators—most notably, x and p—possess eigenvalues which lie in a continuous range and non-square-integrable eigenstates (in fact, these two properties go hand in hand). Let us, therefore, investigate the eigenstates and eigenvalues of the displacement and momentum operators. 58 QUANTUM MECHANICS Let ψx(x, x′) be the eigenstate of x corresponding to the eigenvalue x′. It follows that x ψx(x, x′) = x′ ψx(x, x′) (4.141) for all x. Consider the Dirac delta-function δ(x −x′). We can write x δ(x −x′) = x′ δ(x −x′), (4.142) since δ(x −x′) is only non-zero infinitesimally close to x = x′. Evidently, ψx(x, x′) is proportional to δ(x −x′). Let us make the constant of proportionality unity, so that ψx(x, x′) = δ(x −x′). (4.143) Now, it is easily demonstrated that Z ∞ −∞ δ(x −x′) δ(x −x′′) dx = δ(x′ −x′′). (4.144) Hence, ψx(x, x′) satisfies the orthonormality condition Z ∞ −∞ ψ∗ x(x, x′) ψx(x, x′′) dx = δ(x′ −x′′). (4.145) This condition is analogous to the orthonormality condition (4.127) satisfied by square-integrable eigenstates. Now, by definition, δ(x −x′) satisfies Z ∞ −∞ f(x) δ(x −x′) dx = f(x′), (4.146) where f(x) is a general function. We can thus write ψ(x) = Z ∞ −∞ c(x′) ψx(x, x′) dx′, (4.147) where c(x′) = ψ(x′), or c(x′) = Z ∞ −∞ ψ∗ x(x, x′) ψ(x) dx. (4.148) In other words, we can expand a general wavefunction ψ(x) as a linear combination of the eigenstates, ψx(x, x′), of the displacement operator. Equations (4.147) and (4.148) are analogous to Eqs. (4.125) and (4.128), respectively, for square-integrable eigenstates. Fi-nally, by analogy with the results in Sect. 4.9, the probability density of a measurement of x yielding the value x′ is |c(x′)| 2, which is equivalent to the standard result |ψ(x′)| 2. 
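The sampling property that underlies the whole of this section can be checked numerically by approximating δ(x − x′) with a narrow normalized Gaussian (a so-called nascent delta-function). The sketch below (an illustration only; the test function and centre x₀ are arbitrary) verifies that ∫ f(x) δ(x − x₀) dx → f(x₀) as the Gaussian width shrinks, as in Eqs. (4.71) and (4.146).

```python
import numpy as np

# Nascent delta-function: a unit-area Gaussian of width eps.
def nascent_delta(x, x0, eps):
    return np.exp(-(x - x0) ** 2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
x0 = 1.3
f = np.cos(x) + 0.5 * x**2            # any smooth test function

for eps in (0.1, 0.01):
    val = np.sum(f * nascent_delta(x, x0, eps)) * dx
    print(eps, val)                   # approaches f(x0) as eps shrinks
```

For eps = 0.01 the integral already agrees with f(x₀) = cos(1.3) + 0.5 × 1.3² to better than one part in a thousand.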
Moreover, these probabilities are properly normalized provided ψ(x) is properly normalized [cf., Eq. (4.129)]: i.e.,
\[ \int_{-\infty}^{\infty} |c(x')|^2\,dx' = \int_{-\infty}^{\infty} |\psi(x')|^2\,dx' = 1. \tag{4.149} \]
Finally, if a measurement of x yields the value x′ then the system is left in the corresponding displacement eigenstate, ψ_x(x, x′), immediately after the measurement: i.e., the wavefunction collapses to a "spike-function", δ(x − x′), as discussed in Sect. 3.16.

Now, an eigenstate of the momentum operator p ≡ −iħ ∂/∂x corresponding to the eigenvalue p′ satisfies
\[ -{\rm i}\,\hbar\,\frac{\partial \psi_p(x,p')}{\partial x} = p'\,\psi_p(x,p'). \tag{4.150} \]
It is evident that
\[ \psi_p(x,p') \propto {\rm e}^{+{\rm i}\,p'\,x/\hbar}. \tag{4.151} \]
Now, we require ψ_p(x, p′) to satisfy an analogous orthonormality condition to Eq. (4.145): i.e.,
\[ \int_{-\infty}^{\infty} \psi_p^{\ast}(x,p')\,\psi_p(x,p'')\,dx = \delta(p'-p''). \tag{4.152} \]
Thus, it follows from Eq. (4.74) that the constant of proportionality in Eq. (4.151) should be (2πħ)^{−1/2}: i.e.,
\[ \psi_p(x,p') = \frac{{\rm e}^{+{\rm i}\,p'\,x/\hbar}}{(2\pi\,\hbar)^{1/2}}. \tag{4.153} \]
Furthermore, according to Eqs. (4.66) and (4.67),
\[ \psi(x) = \int_{-\infty}^{\infty} c(p')\,\psi_p(x,p')\,dp', \tag{4.154} \]
where c(p′) = φ(p′) [see Eq. (4.67)], or
\[ c(p') = \int_{-\infty}^{\infty} \psi_p^{\ast}(x,p')\,\psi(x)\,dx. \tag{4.155} \]
In other words, we can expand a general wavefunction ψ(x) as a linear combination of the eigenstates, ψ_p(x, p′), of the momentum operator. Equations (4.154) and (4.155) are again analogous to Eqs. (4.125) and (4.128), respectively, for square-integrable eigenstates. Likewise, the probability density of a measurement of p yielding the result p′ is |c(p′)|², which is equivalent to the standard result |φ(p′)|². The probabilities are also properly normalized provided ψ(x) is properly normalized [cf., Eq. (4.85)]: i.e.,
\[ \int_{-\infty}^{\infty} |c(p')|^2\,dp' = \int_{-\infty}^{\infty} |\phi(p')|^2\,dp' = \int_{-\infty}^{\infty} |\psi(x')|^2\,dx' = 1. \tag{4.156} \]
Finally, if a measurement of p yields the value p′ then the system is left in the corresponding momentum eigenstate, ψ_p(x, p′), immediately after the measurement.
60 QUANTUM MECHANICS 4.12 Stationary States An eigenstate of the energy operator H ≡i ¯ h ∂/∂t corresponding to the eigenvalue Ei satisfies i ¯ h ∂ψE(x, t, Ei) ∂t = Ei ψE(x, t, Ei). (4.157) It is evident that this equation can be solved by writing ψE(x, t, Ei) = ψi(x) e−i Ei t/¯ h, (4.158) where ψi(x) is a properly normalized stationary (i.e., non-time-varying) wavefunction. The wavefunction ψE(x, t, Ei) corresponds to a so-called stationary state, since the probability density |ψE| 2 is non-time-varying. Note that a stationary state is associated with a unique value for the energy. Substitution of the above expression into Schr¨ odinger’s equation (4.1) yields the equation satisfied by the stationary wavefunction: ¯ h2 2 m d2ψi dx2 = [V(x) −Ei] ψi. (4.159) This is known as the time-independent Schr¨ odinger equation. More generally, this equation takes the form H ψi = Ei ψi, (4.160) where H is assumed not to be an explicit function of t. Of course, the ψi satisfy the usual orthonormality condition: Z ∞ −∞ ψ∗ i ψj dx = δij. (4.161) Moreover, we can express a general wavefunction as a linear combination of energy eigen-states: ψ(x, t) = X i ci ψi(x) e−i Ei t/¯ h, (4.162) where ci = Z ∞ −∞ ψ∗ i (x) ψ(x, 0) dx. (4.163) Here, |ci| 2 is the probability that a measurement of the energy will yield the eigenvalue Ei. Furthermore, immediately after such a measurement, the system is left in the corre-sponding energy eigenstate. The generalization of the above results to the case where H has continuous eigenvalues is straightforward. If a dynamical variable is represented by some Hermitian operator A which commutes with H (so that it has simultaneous eigenstates with H), and contains no specific time dependence, then it is evident from Eqs. (4.161) and (4.162) that the expectation value and variance of A are time independent. In this sense, the dynamical variable in question is a constant of the motion. Fundamentals of Quantum Mechanics 61 Exercises 1. 
Monochromatic light with a wavelength of 6000 ˚ A passes through a fast shutter that opens for 10−9 sec. What is the subsequent spread in wavelengths of the no longer monochromatic light? 2. Calculate ⟨x⟩, ⟨x2⟩, and σx, as well as ⟨p⟩, ⟨p2⟩, and σp, for the normalized wavefunction ψ(x) = s 2 a3 π 1 x2 + a2 . Use these to find σx σp. Note that R∞ −∞dx/(x2 + a2) = π/a. 3. Classically, if a particle is not observed then the probability of finding it in a one-dimensional box of length L, which extends from x = 0 to x = L, is a constant 1/L per unit length. Show that the classical expectation value of x is L/2, the expectation value of x2 is L2/3, and the standard deviation of x is L/ √ 12. 4. Demonstrate that if a particle in a one-dimensional stationary state is bound then the expec-tation value of its momentum must be zero. 5. Suppose that V(x) is complex. Obtain an expression for ∂P(x, t)/∂t and d/dt R P(x, t) dx from Schr¨ odinger’s equation. What does this tell us about a complex V(x)? 6. ψ1(x) and ψ2(x) are normalized eigenfunctions corresponding to the same eigenvalue. If Z∞ −∞ ψ∗ 1 ψ2 dx = c, where c is real, find normalized linear combinations of ψ1 and ψ2 which are orthogonal to (a) ψ1, (b) ψ1 + ψ2. 7. Demonstrate that p = −i ¯ h ∂/∂x is an Hermitian operator. Find the Hermitian conjugate of a = x + i p. 8. An operator A, corresponding to a physical quantity α, has two normalized eigenfunctions ψ1(x) and ψ2(x), with eigenvalues a1 and a2. An operator B, corresponding to another physical quantity β, has normalized eigenfunctions φ1(x) and φ2(x), with eigenvalues b1 and b2. The eigenfunctions are related via ψ1 = (2 φ1 + 3 φ2) .√ 13, ψ2 = (3 φ1 −2 φ2) .√ 13. α is measured and the value a1 is obtained. If β is then measured and then α again, show that the probability of obtaining a1 a second time is 97/169. 9. 
Demonstrate that an operator which commutes with the Hamiltonian, and contains no ex-plicit time dependence, has an expectation value which is constant in time. 62 QUANTUM MECHANICS 10. For a certain system, the operator corresponding to the physical quantity A does not commute with the Hamiltonian. It has eigenvalues a1 and a2, corresponding to properly normalized eigenfunctions φ1 = (u1 + u2) .√ 2, φ2 = (u1 −u2) .√ 2, where u1 and u2 are properly normalized eigenfunctions of the Hamiltonian with eigenvalues E1 and E2. If the system is in the state ψ = φ1 at time t = 0, show that the expectation value of A at time t is ⟨A⟩= a1 + a2 2  + a1 −a2 2  cos [E1 −E2] t ¯ h  . One-Dimensional Potentials 63 5 One-Dimensional Potentials 5.1 Introduction In this chapter, we shall investigate the interaction of a non-relativistic particle of mass m and energy E with various one-dimensional potentials, V(x). Since we are searching for stationary solutions with unique energies, we can write the wavefunction in the form (see Sect. 4.12) ψ(x, t) = ψ(x) e−i E t/¯ h, (5.1) where ψ(x) satisfies the time-independent Schr¨ odinger equation: d2ψ dx2 = 2 m ¯ h2 [V(x) −E] ψ. (5.2) In general, the solution, ψ(x), to the above equation must be finite, otherwise the probabil-ity density |ψ| 2 would become infinite (which is unphysical). Likewise, the solution must be continuous, otherwise the probability current (4.19) would become infinite (which is also unphysical). 5.2 Infinite Potential Well Consider a particle of mass m and energy E moving in the following simple potential: V(x) = 0 for 0 ≤x ≤a ∞ otherwise . (5.3) It follows from Eq. (5.2) that if d2ψ/dx2 (and, hence, ψ) is to remain finite then ψ must go to zero in regions where the potential is infinite. Hence, ψ = 0 in the regions x ≤0 and x ≥a. Evidently, the problem is equivalent to that of a particle trapped in a one-dimensional box of length a. The boundary conditions on ψ in the region 0 < x < a are ψ(0) = ψ(a) = 0. 
(5.4) Furthermore, it follows from Eq. (5.2) that ψ satisfies d2ψ dx2 = −k2 ψ (5.5) in this region, where k2 = 2 m E ¯ h2 . (5.6) Here, we are assuming that E > 0. It is easily demonstrated that there are no solutions with E < 0 which are capable of satisfying the boundary conditions (5.4). 64 QUANTUM MECHANICS The solution to Eq. (5.5), subject to the boundary conditions (5.4), is ψn(x) = An sin(kn x), (5.7) where the An are arbitrary (real) constants, and kn = n π a , (5.8) for n = 1, 2, 3, · · ·. Now, it can be seen from Eqs. (5.6) and (5.8) that the energy E is only allowed to take certain discrete values: i.e., En = n2 π2 ¯ h2 2 m a2 . (5.9) In other words, the eigenvalues of the energy operator are discrete. This is a general feature of bounded solutions: i.e., solutions in which |ψ| →0 as |x| →∞. According to the discussion in Sect. 4.12, we expect the stationary eigenfunctions ψn(x) to satisfy the orthonormality constraint Z a 0 ψn(x) ψm(x) dx = δnm. (5.10) It is easily demonstrated that this is the case, provided An = q 2/a. Hence, ψn(x) = s 2 a sin  n π x a  (5.11) for n = 1, 2, 3, · · ·. Finally, again from Sect. 4.12, the general time-dependent solution can be written as a linear superposition of stationary solutions: ψ(x, t) = X n=0,∞ cn ψn(x) e−i En t/¯ h, (5.12) where cn = Z a 0 ψn(x) ψ(x, 0) dx. (5.13) 5.3 Square Potential Barrier Consider a particle of mass m and energy E > 0 interacting with the simple square poten-tial barrier V(x) = V0 for 0 ≤x ≤a 0 otherwise , (5.14) where V0 > 0. In the regions to the left and to the right of the barrier, ψ(x) satisfies d2ψ dx2 = −k2 ψ, (5.15) One-Dimensional Potentials 65 where k is given by Eq. (5.6). Let us adopt the following solution of the above equation to the left of the barrier (i.e., x < 0): ψ(x) = e i k x + R e−i k x. 
(5.16) This solution consists of a plane wave of unit amplitude traveling to the right [since the time-dependent wavefunction is multiplied by exp(−i ω t), where ω = E/¯ h > 0], and a plane wave of complex amplitude R traveling to the left. We interpret the first plane wave as an incoming particle (or, rather, a stream of incoming particles), and the second as a particle (or stream of particles) reflected by the potential barrier. Hence, |R| 2 is the probability of reflection. This can be seen by calculating the probability current (4.19) in the region x < 0, which takes the form jl = v (1 −|R| 2), (5.17) where v = p/m = ¯ h k/m is the classical particle velocity. Let us adopt the following solution to Eq. (5.15) to the right of the barrier (i.e. x > a): ψ(x) = T e i k x. (5.18) This solution consists of a plane wave of complex amplitude T traveling to the right. We interpret this as a particle (or stream of particles) transmitted through the barrier. Hence, |T| 2 is the probability of transmission. The probability current in the region x > a takes the form jr = v |T| 2. (5.19) Now, according to Eq. (4.35), in a stationary state (i.e., ∂|ψ| 2/∂t = 0), the probability current is a spatial constant (i.e., ∂j/∂x = 0). Hence, we must have jl = jr, or |R| 2 + |T| 2 = 1. (5.20) In other words, the probabilities of reflection and transmission sum to unity, as must be the case, since reflection and transmission are the only possible outcomes for a particle incident on the barrier. Inside the barrier (i.e., 0 ≤x ≤a), ψ(x) satisfies d2ψ dx2 = −q2 ψ, (5.21) where q2 = 2 m (E −V0) ¯ h2 . (5.22) Let us, first of all, consider the case where E > V0. In this case, the general solution to Eq. (5.21) inside the barrier takes the form ψ(x) = A e i q x + B e−i q x, (5.23) 66 QUANTUM MECHANICS where q = q 2 m (E −V0)/¯ h2. Now, the boundary conditions at the edges of the barrier (i.e., at x = 0 and x = a) are that ψ and dψ/dx are both continuous. 
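The continuity conditions just stated can be imposed numerically. In the sketch below (an illustration, not from the text; units ħ = 2m = 1, so k² = E and q² = E − V₀, with arbitrary E, V₀, a), the four matching equations at x = 0 and x = a are solved as a linear system for the amplitudes (R, A, B, T), and probability conservation, Eq. (5.20), as well as agreement with the closed-form transmission probability, Eq. (5.29), follow automatically.

```python
import numpy as np

# Particle of energy E > V0 incident on a square barrier of width a
# (units hbar = 2m = 1, so k = sqrt(E) and q = sqrt(E - V0)).
E, V0, a = 2.0, 1.0, 3.0
k = np.sqrt(E)
q = np.sqrt(E - V0)

# Unknowns u = (R, A, B, T); rows are the continuity of psi and dpsi/dx
# at x = 0 and at x = a, rearranged into M u = rhs.
M = np.array([
    [-1, 1, 1, 0],                                        # 1 + R = A + B
    [ k, q, -q, 0],                                       # k(1 - R) = q(A - B)
    [ 0, np.exp(1j*q*a),   np.exp(-1j*q*a),   -np.exp(1j*k*a)],
    [ 0, q*np.exp(1j*q*a), -q*np.exp(-1j*q*a), -k*np.exp(1j*k*a)],
], dtype=complex)
rhs = np.array([1, k, 0, 0], dtype=complex)
R, A, B, T = np.linalg.solve(M, rhs)

print(abs(R)**2 + abs(T)**2)          # = 1, Eq. (5.20)

# Closed-form transmission probability, Eq. (5.29):
T2 = 4*k**2*q**2 / (4*k**2*q**2 + (k**2 - q**2)**2 * np.sin(q*a)**2)
print(abs(T)**2, T2)                  # the two values agree
```

The numerically solved |T|² matches Eq. (5.29) to machine precision, since that expression is derived from exactly these four matching equations.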
These boundary conditions ensure that the probability current (4.19) remains finite and continuous across the edges of the boundary, as must be the case if it is to be a spatial constant. Continuity of ψ and dψ/dx at the left edge of the barrier (i.e., x = 0) yields 1 + R = A + B, (5.24) k (1 −R) = q (A −B). (5.25) Likewise, continuity of ψ and dψ/dx at the right edge of the barrier (i.e., x = a) gives A e i q a + B e−i q a = T e i k a, (5.26) q  A e i q a −B e−i q a = k T e i k a. (5.27) After considerable algebra, the above four equations yield |R| 2 = (k2 −q2) 2 sin2(q a) 4 k2 q2 + (k2 −q2) 2 sin2(q a), (5.28) and |T| 2 = 4 k2 q2 4 k2 q2 + (k2 −q2) 2 sin2(q a). (5.29) Note that the above two expression satisfy the constraint (5.20). It is instructive to compare the quantum mechanical probabilities of reflection and transmission—(5.28) and (5.29), respectively—with those derived from classical physics. Now, according to classical physics, if a particle of energy E is incident on a potential barrier of height V0 < E then the particle slows down as it passes through the barrier, but is otherwise unaffected. In other words, the classical probability of reflection is zero, and the classical probability of transmission is unity. The reflection and transmission probabilities obtained from Eqs. (5.28) and (5.29), respectively, are plotted in Figs. 5.1 and 5.2. It can be seen, from Fig. 5.1, that the classical result, |R| 2 = 0 and |T| 2 = 1, is obtained in the limit where the height of the barrier is relatively small (i.e., V0 ≪E). However, when V0 is of order E, there is a substantial probability that the incident particle will be reflected by the barrier. According to classical physics, reflection is impossible when V0 < E. It can also be seen, from Fig. 5.2, that at certain barrier widths the probability of reflection goes to zero. It turns out that this is true irrespective of the energy of the incident particle. It is evident, from Eq. 
(5.28), that these special barrier widths correspond to q a = n π, (5.30) where n = 1, 2, 3, · · ·. In other words, the special barriers widths are integer multiples of half the de Broglie wavelength of the particle inside the barrier. There is no reflection at One-Dimensional Potentials 67 Figure 5.1: Transmission (solid-curve) and reflection (dashed-curve) probabilities for a square potential barrier of width a = 1.25 λ, where λ is the free-space de Broglie wavelength, as a function of the ratio of the height of the barrier, V0, to the energy, E, of the incident particle. Figure 5.2: Transmission (solid-curve) and reflection (dashed-curve) probabilities for a par-ticle of energy E incident on a square potential barrier of height V0 = 0.75 E, as a function of the ratio of the width of the barrier, a, to the free-space de Broglie wavelength, λ. 68 QUANTUM MECHANICS the special barrier widths because, at these widths, the backward traveling wave reflected from the left edge of the barrier interferes destructively with the similar wave reflected from the right edge of the barrier to give zero net reflected wave. Let us, now, consider the case E < V0. In this case, the general solution to Eq. (5.21) inside the barrier takes the form ψ(x) = A e q x + B e−q x, (5.31) where q = q 2 m (V0 −E)/¯ h2. Continuity of ψ and dψ/dx at the left edge of the barrier (i.e., x = 0) yields 1 + R = A + B, (5.32) i k (1 −R) = q (A −B). (5.33) Likewise, continuity of ψ and dψ/dx at the right edge of the barrier (i.e., x = a) gives A e q a + B e−q a = T e i k a, (5.34) q (A e q a −B e−q a) = i k T e i k a. (5.35) After considerable algebra, the above four equations yield |R| 2 = (k2 + q2) 2 sinh2(q a) 4 k2 q2 + (k2 + q2) 2 sinh2(q a) , (5.36) and |T| 2 = 4 k2 q2 4 k2 q2 + (k2 + q2) 2 sinh2(q a) . (5.37) These expressions can also be obtained from Eqs. (5.28) and (5.29) by making the substi-tution q →−i q. Note that Eqs. (5.36) and (5.37) satisfy the constraint (5.20). 
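The tunneling formulas can be explored numerically. The sketch below (an illustration only; units ħ = 2m = 1, with arbitrary E < V₀) evaluates Eqs. (5.36) and (5.37), checks the conservation constraint (5.20) for several barrier widths, and confirms the characteristic exponential suppression: for qa ≫ 1, sinh(qa) ≈ e^{qa}/2, so |T|² falls off like e^{−2qa}.

```python
import numpy as np

# Reflection and transmission probabilities for E < V0,
# Eqs. (5.36) and (5.37), in units hbar = 2m = 1.
def T2(E, V0, a):
    k, q = np.sqrt(E), np.sqrt(V0 - E)
    return 4*k**2*q**2 / (4*k**2*q**2 + (k**2 + q**2)**2 * np.sinh(q*a)**2)

def R2(E, V0, a):
    k, q = np.sqrt(E), np.sqrt(V0 - E)
    return ((k**2 + q**2)**2 * np.sinh(q*a)**2
            / (4*k**2*q**2 + (k**2 + q**2)**2 * np.sinh(q*a)**2))

E, V0 = 0.75, 1.0
for a in (1.0, 2.0, 4.0, 8.0):
    # transmission falls rapidly with width; sum is always 1 [Eq. (5.20)]
    print(a, T2(E, V0, a), R2(E, V0, a) + T2(E, V0, a))

# Wide-barrier scaling: |T|^2 ~ e^{-2 q a}, so doubling a from 4 to 8
# should suppress transmission by roughly e^{-2 q * 4}.
q = np.sqrt(V0 - E)
ratio = T2(E, V0, 8.0) / T2(E, V0, 4.0)
print(ratio, np.exp(-2 * q * 4.0))
```

The ratio of transmission probabilities tracks the predicted exponential factor, which is why tunneling through macroscopically wide barriers is utterly negligible.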
It is again instructive to compare the quantum mechanical probabilities of reflection and transmission—(5.36) and (5.37), respectively—with those derived from classical physics. Now, according to classical physics, if a particle of energy E is incident on a potential bar-rier of height V0 > E then the particle is reflected. In other words, the classical probability of reflection is unity, and the classical probability of transmission is zero. The reflection and transmission probabilities obtained from Eqs. (5.36) and (5.37), respectively, are plotted in Figs. 5.3 and 5.4. It can be seen, from Fig. 5.3, that the classical result, |R| 2 = 1 and |T| 2 = 0, is obtained for relatively thin barriers (i.e., q a ∼1) in the limit where the height of the barrier is relatively large (i.e., V0 ≫E). However, when V0 is of order E, there is a substantial probability that the incident particle will be transmitted by the barrier. According to classical physics, transmission is impossible when V0 > E. It can also be seen, from Fig. 5.4, that the transmission probability decays exponen-tially as the width of the barrier increases. Nevertheless, even for very wide barriers (i.e., q a ≫1), there is a small but finite probability that a particle incident on the barrier will be transmitted. This phenomenon, which is inexplicable within the context of classical physics, is called tunneling. One-Dimensional Potentials 69 Figure 5.3: Transmission (solid-curve) and reflection (dashed-curve) probabilities for a square potential barrier of width a = 0.5 λ, where λ is the free-space de Broglie wavelength, as a function of the ratio of the energy, E, of the incoming particle to the height, V0, of the barrier. Figure 5.4: Transmission (solid-curve) and reflection (dashed-curve) probabilities for a parti-cle of energy E incident on a square potential barrier of height V0 = (4/3) E, as a function of the ratio of the width of the barrier, a, to the free-space de Broglie wavelength, λ. 
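Returning briefly to the E > V₀ case, the resonant transparency noted in Eq. (5.30) is easy to verify directly. The sketch below (an illustration only; units ħ = 2m = 1, with arbitrary E and V₀) evaluates the reflection probability of Eq. (5.28) at the special widths a = nπ/q and confirms that it vanishes there, while remaining finite off resonance.

```python
import numpy as np

# Reflection probability for E > V0, Eq. (5.28), in units hbar = 2m = 1.
E, V0 = 2.0, 1.0
k = np.sqrt(E)
q = np.sqrt(E - V0)

def R2(a):
    return ((k**2 - q**2)**2 * np.sin(q*a)**2
            / (4*k**2*q**2 + (k**2 - q**2)**2 * np.sin(q*a)**2))

for n in (1, 2, 3):
    a = n * np.pi / q            # special barrier widths, Eq. (5.30)
    print(n, R2(a))              # -> 0: perfect transmission

print(R2(0.5 * np.pi / q))       # off resonance: finite reflection
```

At the resonant widths the barrier is transparent, whereas halfway between resonances the same barrier reflects an appreciable fraction of the incident particles.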
70 QUANTUM MECHANICS 5.4 WKB Approximation Consider a particle of mass m and energy E > 0 moving through some slowly varying potential V(x). The particle’s wavefunction satisfies d2ψ(x) dx2 = −k2(x) ψ(x), (5.38) where k2(x) = 2 m [E −V(x)] ¯ h2 . (5.39) Let us try a solution to Eq. (5.38) of the form ψ(x) = ψ0 exp Z x 0 i k(x′) dx′ ! , (5.40) where ψ0 is a complex constant. Note that this solution represents a particle propagating in the positive x-direction [since the full wavefunction is multiplied by exp(−i ω t), where ω = E/¯ h > 0] with the continuously varying wavenumber k(x). It follows that dψ(x) dx = i k(x) ψ(x), (5.41) and d2ψ(x) dx2 = i k′(x) ψ(x) −k2(x) ψ(x), (5.42) where k′ ≡dk/dx. A comparison of Eqs. (5.38) and (5.42) reveals that Eq. (5.40) repre-sents an approximate solution to Eq. (5.38) provided that the first term on its right-hand side is negligible compared to the second. This yields the validity criterion |k′| ≪k2, or k |k′| ≫k−1. (5.43) In other words, the variation length-scale of k(x), which is approximately the same as the variation length-scale of V(x), must be much greater than the particle’s de Broglie wavelength (which is of order k−1). Let us suppose that this is the case. Incidentally, the approximation involved in dropping the first term on the right-hand side of Eq. (5.42) is generally known as the WKB approximation. 1 Similarly, Eq. (5.40) is termed a WKB solution. According to the WKB solution (5.40), the probability density remains constant: i.e., |ψ(x)| 2 = |ψ0| 2, (5.44) as long as the particle moves through a region in which E > V(x), and k(x) is consequently real (i.e., an allowed region according to classical physics). Suppose, however, that the 1After G. Wentzel, H.A. Kramers, and L. Brillouin. One-Dimensional Potentials 71 particle encounters a potential barrier (i.e., a region from which the particle is excluded according to classical physics). 
By definition, E < V(x) inside such a barrier, and k(x) is consequently imaginary. Let the barrier extend from x = x1 to x2, where 0 < x1 < x2. The WKB solution inside the barrier is written ψ(x) = ψ1 exp − Z x x1 |k(x′)| dx′ ! , (5.45) where ψ1 = ψ0 exp Z x1 0 i k(x′) dx′ ! . (5.46) Here, we have neglected the unphysical exponentially growing solution. According to the WKB solution (5.45), the probability density decays exponentially in-side the barrier: i.e., |ψ(x)| 2 = |ψ1| 2 exp −2 Z x x1 |k(x′)| dx′ ! , (5.47) where |ψ1| 2 is the probability density at the left-hand side of the barrier (i.e., x = x1). It follows that the probability density at the right-hand side of the barrier (i.e., x = x2) is |ψ2| 2 = |ψ1| 2 exp −2 Z x2 x1 |k(x′)| dx′ ! . (5.48) Note that |ψ2| 2 < |ψ1| 2. Of course, in the region to the right of the barrier (i.e., x > x2), the probability density takes the constant value |ψ2| 2. We can interpret the ratio of the probability densities to the right and to the left of the potential barrier as the probability, |T| 2, that a particle incident from the left will tunnel through the barrier and emerge on the other side: i.e., |T| 2 = |ψ2| 2 |ψ1| 2 = exp −2 Z x2 x1 |k(x′)| dx′ ! (5.49) (see Sect. 5.3). It is easily demonstrated that the probability of a particle incident from the right tunneling through the barrier is the same. Note that the criterion (5.43) for the validity of the WKB approximation implies that the above transmission probability is very small. Hence, the WKB approximation only applies to situations in which there is very little chance of a particle tunneling through the potential barrier in question. Unfortunately, the validity criterion (5.43) breaks down completely at the edges of the barrier (i.e., at x = x1 and x2), since k(x) = 0 at these points. However, it can be demonstrated that the contribution of those regions, around x = x1 and x2, in which the WKB approximation breaks down to the integral in Eq. 
(5.49) is fairly negligible. Hence, the above expression for the tunneling probability is a reasonable approximation provided that the incident particle's de Broglie wavelength is much smaller than the spatial extent of the potential barrier.

Figure 5.5: The potential barrier for an electron in a metal surface subject to an external electric field.

5.5 Cold Emission

Suppose that an unheated metal surface is subject to a large uniform external electric field of strength E, which is directed such that it accelerates electrons away from the surface. We have already seen (in Sect. 3.6) that electrons just below the surface of a metal can be regarded as being in a potential well of depth W, where W is called the work function of the surface. Adopting a simple one-dimensional treatment of the problem, let the metal lie at x < 0, and the surface at x = 0. Now, the applied electric field is shielded from the interior of the metal. Hence, the energy, E, say, of an electron just below the surface is unaffected by the field. In the absence of the electric field, the potential barrier just above the surface is simply V(x) − E = W. The electric field modifies this to V(x) − E = W − e E x. The potential barrier is sketched in Fig. 5.5. It can be seen, from Fig. 5.5, that an electron just below the surface of the metal is confined by a triangular potential barrier which extends from x = x1 to x2, where x1 = 0 and x2 = W/e E. Making use of the WKB approximation (see the previous section), the probability of such an electron tunneling through the barrier, and consequently being emitted from the surface, is

|T|² = exp( −(2 √(2 m)/ħ) ∫ from x1 to x2 of √(V(x) − E) dx ),   (5.50)

or

|T|² = exp( −(2 √(2 m)/ħ) ∫ from 0 to W/e E of √(W − e E x) dx ).   (5.51)

This reduces to

|T|² = exp( −2 √2 [m^{1/2} W^{3/2}/(ħ e E)] ∫ from 0 to 1 of √(1 − y) dy ),   (5.52)

or

|T|² = exp( −(4 √2/3) m^{1/2} W^{3/2}/(ħ e E) ).
(5.53) The above result is known as the Fowler-Nordheim formula. Note that the probability of emission increases exponentially as the electric field-strength above the surface of the metal increases. The cold emission of electrons from a metal surface is the basis of an important device known as a scanning tunneling microscope, or an STM. An STM consists of a very sharp conducting probe which is scanned over the surface of a metal (or any other solid conducting medium). A large voltage difference is applied between the probe and the surface. Now, the surface electric field-strength immediately below the probe tip is proportional to the applied potential difference, and inversely proportional to the spacing between the tip and the surface. Electrons tunneling between the surface and the probe tip give rise to a weak electric current. The magnitude of this current is proportional to the tunneling probability (5.53). It follows that the current is an extremely sensitive function of the surface electric field-strength, and, hence, of the spacing between the tip and the surface (assuming that the potential difference is held constant). An STM can thus be used to construct a very accurate contour map of the surface under investigation. In fact, STMs are capable of achieving sufficient resolution to image individual atoms.

5.6 Alpha Decay

Many types of heavy atomic nucleus spontaneously decay to produce daughter nuclei via the emission of α-particles (i.e., helium nuclei) of some characteristic energy. This process is known as α-decay. Let us investigate the α-decay of a particular type of atomic nucleus of radius R, charge-number Z, and mass-number A. Such a nucleus thus decays to produce a daughter nucleus of charge-number Z1 = Z − 2 and mass-number A1 = A − 4, and an α-particle of charge-number Z2 = 2 and mass-number A2 = 4. Let the characteristic energy of the α-particle be E.
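The Fowler-Nordheim result can be cross-checked by evaluating the WKB integral (5.51) directly. The sketch below (function names, the 4.5 eV work function, and the field strengths are my illustrative choices, not values from the text) confirms that the closed form (5.53) agrees with a brute-force quadrature, and shows the extreme field sensitivity that makes the STM work:

```python
import math

# SI constants (CODATA values)
hbar = 1.054571817e-34   # J s
me   = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C

def fowler_nordheim(W, F):
    """Closed-form emission probability, Eq. (5.53); F is the field strength in V/m."""
    return math.exp(-(4.0 * math.sqrt(2.0) / 3.0) * math.sqrt(me) * W**1.5
                    / (hbar * e * F))

def wkb_direct(W, F, n=100000):
    """Trapezoidal evaluation of the WKB integral in Eq. (5.51)."""
    x2 = W / (e * F)                     # width of the triangular barrier
    h = x2 / n
    s = sum((0.5 if i in (0, n) else 1.0)
            * math.sqrt(max(W - e * F * i * h, 0.0)) for i in range(n + 1))
    return math.exp(-2.0 * math.sqrt(2.0 * me) / hbar * s * h)

W = 4.5 * e    # an illustrative metallic work function of 4.5 eV
print(fowler_nordheim(W, 3e9), wkb_direct(W, 3e9))
print(fowler_nordheim(W, 6e9))   # doubling F changes |T|^2 by many orders
```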
Incidentally, nuclear radii are found to satisfy the empirical formula

R = 1.5 × 10⁻¹⁵ A^{1/3} m = 2.0 × 10⁻¹⁵ Z1^{1/3} m   (5.54)

for Z ≫ 1. In 1928, George Gamow proposed a very successful theory of α-decay, according to which the α-particle moves freely inside the nucleus, and is emitted after tunneling through the potential barrier between itself and the daughter nucleus. In other words, the α-particle, whose energy is E, is trapped in a potential well of radius R by the potential barrier

V(r) = Z1 Z2 e²/(4π ǫ0 r)   (5.55)

for r > R. Making use of the WKB approximation (and neglecting the fact that r is a radial, rather than a Cartesian, coordinate), the probability of the α-particle tunneling through the barrier is

|T|² = exp( −(2 √(2 m)/ħ) ∫ from r1 to r2 of √(V(r) − E) dr ),   (5.56)

where r1 = R and r2 = Z1 Z2 e²/(4π ǫ0 E). Here, m = 4 mp is the α-particle mass. The above expression reduces to

|T|² = exp( −2 √2 β ∫ from 1 to Ec/E of [1/y − E/Ec]^{1/2} dy ),   (5.57)

where

β = [Z1 Z2 e² m R/(4π ǫ0 ħ²)]^{1/2} = 0.74 Z1^{2/3}   (5.58)

is a dimensionless constant, and

Ec = Z1 Z2 e²/(4π ǫ0 R) = 1.44 Z1^{2/3} MeV   (5.59)

is the characteristic energy the α-particle would need in order to escape from the nucleus without tunneling. Of course, E ≪ Ec. It is easily demonstrated that

∫ from 1 to 1/ǫ of [1/y − ǫ]^{1/2} dy ≃ π/(2 √ǫ) − 2   (5.60)

when ǫ ≪ 1. Hence,

|T|² ≃ exp( −2 √2 β [ (π/2) √(Ec/E) − 2 ] ).   (5.61)

Now, the α-particle moves inside the nucleus with the characteristic velocity v = √(2 E/m). It follows that the particle bounces backward and forward within the nucleus at the frequency ν ≃ v/R, giving

ν ≃ 2 × 10²⁸ yr⁻¹   (5.62)

for a 1 MeV α-particle trapped inside a typical heavy nucleus of radius 10⁻¹⁴ m. Thus, the α-particle effectively attempts to tunnel through the potential barrier ν times a year. If each of these attempts has a probability |T|² of succeeding, then the probability of decay per unit time is ν |T|².
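The decay probability per unit time ν |T|² can be turned into a rough order-of-magnitude half-life estimate. The sketch below is mine, not the text's: it combines Eqs. (5.54), (5.58), (5.59), and (5.61) with the attempt frequency ν = v/R, and the U-238 and Po-214 inputs are illustrative data, so only the orders of magnitude should be trusted:

```python
import math

mp_c2 = 938.272   # proton rest energy [MeV]
c     = 2.998e8   # speed of light [m/s]

def alpha_half_life_yr(Z1, E):
    """Rough Gamow-model half-life ln2/(nu |T|^2); Z1 = Z - 2, E in MeV."""
    beta = 0.74 * Z1 ** (2.0 / 3.0)                      # Eq. (5.58)
    Ec = 1.44 * Z1 ** (2.0 / 3.0)                        # Eq. (5.59), MeV
    T2 = math.exp(-2.0 * math.sqrt(2.0) * beta
                  * (0.5 * math.pi * math.sqrt(Ec / E) - 2.0))  # Eq. (5.61)
    R = 2.0e-15 * Z1 ** (1.0 / 3.0)                      # Eq. (5.54), m
    v = c * math.sqrt(2.0 * E / (4.0 * mp_c2))           # non-relativistic alpha speed
    nu = (v / R) * 3.156e7                               # attempts per year
    return math.log(2.0) / (nu * T2)

# U-238 (Z1 = 90, E ~ 4.27 MeV): measured half-life ~4.5e9 yr.
print(alpha_half_life_yr(90, 4.27))
# Po-214 (Z1 = 82, E ~ 7.69 MeV): measured half-life ~1.6e-4 s.
print(alpha_half_life_yr(82, 7.69))
```

The enormous spread between the two answers, driven entirely by the exponent in (5.61), is the point: modest changes in E move the half-life by tens of orders of magnitude.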
Hence, if there are N(t) ≫ 1 undecayed nuclei at time t then there are N(t) + dN at time t + dt, where

dN = −N ν |T|² dt.   (5.63)

This expression can be integrated to give

N(t) = N(0) exp(−ν |T|² t).   (5.64)

Now, the half-life, τ, is defined as the time which must elapse in order for half of the nuclei originally present to decay. It follows from the above formula that

τ = ln 2/(ν |T|²).   (5.65)

Note that the half-life is independent of N(0). Finally, making use of the above results, we obtain

log10[τ(yr)] = −C1 − C2 Z1^{2/3} + C3 Z1/√(E(MeV)),   (5.66)

where

C1 = 28.5,   (5.67)
C2 = 1.83,   (5.68)
C3 = 1.73.   (5.69)

The half-life, τ, the daughter charge-number, Z1 = Z − 2, and the α-particle energy, E, for atomic nuclei which undergo α-decay are indeed found to satisfy a relationship of the form (5.66). The best fit to the data (see Fig. 5.6) is obtained using

C1 = 28.9,   (5.70)
C2 = 1.60,   (5.71)
C3 = 1.61.   (5.72)

Note that these values are remarkably similar to those calculated above.

5.7 Square Potential Well

Consider a particle of mass m and energy E interacting with the simple square potential well

V(x) = −V0 for −a/2 ≤ x ≤ a/2, and 0 otherwise,   (5.73)

where V0 > 0. Now, if E > 0 then the particle is unbounded. Thus, when the particle encounters the well it is either reflected or transmitted. As is easily demonstrated, the reflection and

Figure 5.6: The experimentally determined half-life, τex, of various atomic nuclei which decay via α emission versus the best-fit theoretical half-life log10(τth) = −28.9 − 1.60 Z1^{2/3} + 1.61 Z1/√E. Both half-lives are measured in years. Here, Z1 = Z − 2, where Z is the charge number of the nucleus, and E the characteristic energy of the emitted α-particle in MeV. In order of increasing half-life, the points correspond to the following nuclei: Rn 215, Po 214, Po 216, Po 197, Fm 250, Ac 225, U 230, U 232, U 234, Gd 150, U 236, U 238, Pt 190, Gd 152, Nd 144.
Data obtained from IAEA Nuclear Data Centre.

transmission probabilities are given by Eqs. (5.28) and (5.29), respectively, where

k² = 2 m E/ħ²,   (5.74)

q² = 2 m (E + V0)/ħ².   (5.75)

Suppose, however, that E < 0. In this case, the particle is bounded (i.e., |ψ|² → 0 as |x| → ∞). Is it possible to find bounded solutions of Schrödinger's equation in the finite square potential well (5.73)? Now, it is easily seen that independent solutions of Schrödinger's equation (5.2) in the symmetric [i.e., V(−x) = V(x)] potential (5.73) must be either totally symmetric [i.e., ψ(−x) = ψ(x)], or totally anti-symmetric [i.e., ψ(−x) = −ψ(x)]. Moreover, the solutions must satisfy the boundary condition

ψ → 0 as |x| → ∞.   (5.76)

Let us, first of all, search for a totally symmetric solution. In the region to the left of the well (i.e., x < −a/2), the solution of Schrödinger's equation which satisfies the boundary condition ψ → 0 as x → −∞ is

ψ(x) = A e^{k x},   (5.77)

where

k² = 2 m |E|/ħ².   (5.78)

By symmetry, the solution in the region to the right of the well (i.e., x > a/2) is

ψ(x) = A e^{−k x}.   (5.79)

The solution inside the well (i.e., |x| ≤ a/2) which satisfies the symmetry constraint ψ(−x) = ψ(x) is

ψ(x) = B cos(q x),   (5.80)

where

q² = 2 m (V0 + E)/ħ².   (5.81)

Here, we have assumed that E > −V0. The constraint that ψ(x) and its first derivative be continuous at the edges of the well (i.e., at x = ±a/2) yields

k = q tan(q a/2).   (5.82)

Let y = q a/2. It follows that

E = E0 y² − V0,   (5.83)

where

E0 = 2 ħ²/(m a²).   (5.84)

Figure 5.7: The curves tan y (solid) and √(λ − y²)/y (dashed), calculated for λ = 1.5 π². The latter curve takes the value 0 when y > √λ.

Moreover, Eq. (5.82) becomes

√(λ − y²)/y = tan y,   (5.85)

with

λ = V0/E0.   (5.86)

Here, y must lie in the range 0 < y < √λ: i.e., E must lie in the range −V0 < E < 0. Now, the solutions to Eq. (5.85) correspond to the intersection of the curve √(λ − y²)/y with the curve tan y.
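These intersections are easily located numerically. Here is a sketch (helper names are mine) that brackets each branch of tan y between a zero and the following pole and bisects on the difference of the two curves, for the same λ = 1.5 π² used in Fig. 5.7:

```python
import math

lam = 1.5 * math.pi ** 2            # lambda = V0/E0, as in Fig. 5.7

def f(y):
    """Difference of the two curves whose intersections solve Eq. (5.85)."""
    return math.sqrt(lam - y * y) / y - math.tan(y)

def bisect(a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Scan each branch of tan(y) between a zero and the next pole, up to sqrt(lam).
roots, n, ymax = [], 0, math.sqrt(lam)
while True:
    lo = n * math.pi + 1e-9
    hi = min((n + 0.5) * math.pi - 1e-9, ymax - 1e-9)
    if lo >= hi:
        break
    if f(lo) * f(hi) < 0.0:
        roots.append(bisect(lo, hi))
    n += 1
print(roots)    # two roots, i.e. two totally symmetric bound states
```

Each root y gives a bound-state energy E = E0 y² − V0 < 0 via Eq. (5.83).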
Figure 5.7 shows these two curves plotted for a particular value of λ. In this case, the curves intersect twice, indicating the existence of two totally symmetric bound states in the well. Moreover, it is evident, from the figure, that as λ increases (i.e., as the well becomes deeper) there are more and more bound states. However, it is also evident that there is always at least one totally symmetric bound state, no matter how small λ becomes (i.e., no matter how shallow the well becomes). In the limit λ ≫1 (i.e., the limit in which the well becomes very deep), the solutions to Eq. (5.85) asymptote to the roots of tan y = ∞. This gives y = (2 j −1) π/2, where j is a positive integer, or q = (2 j −1) π a . (5.87) These solutions are equivalent to the odd-n infinite square well solutions specified by Eq. (5.8). One-Dimensional Potentials 79 Figure 5.8: The curves tan y (solid) and −y/ q λ −y2 (dashed), calculated for λ = 1.5 π2. For the case of a totally anti-symmetric bound state, similar analysis to the above yields − y q λ −y2 = tan y. (5.88) The solutions of this equation correspond to the intersection of the curve tan y with the curve −y/ q λ −y2. Figure 5.8 shows these two curves plotted for the same value of λ as that used in Fig. 5.7. In this case, the curves intersect once, indicating the existence of a single totally anti-symmetric bound state in the well. It is, again, evident, from the figure, that as λ increases (i.e., as the well becomes deeper) there are more and more bound states. However, it is also evident that when λ becomes sufficiently small [i.e., λ < (π/2)2] then there is no totally anti-symmetric bound state. In other words, a very shallow potential well always possesses a totally symmetric bound state, but does not generally possess a totally anti-symmetric bound state. In the limit λ ≫1 (i.e., the limit in which the well becomes very deep), the solutions to Eq. (5.88) asymptote to the roots of tan y = 0. 
This gives y = j π, where j is a positive integer, or

q = 2 j π/a.   (5.89)

These solutions are equivalent to the even-n infinite square well solutions specified by Eq. (5.8).

5.8 Simple Harmonic Oscillator

The classical Hamiltonian of a simple harmonic oscillator is

H = p²/(2 m) + (1/2) K x²,   (5.90)

where K > 0 is the so-called force constant of the oscillator. Assuming that the quantum mechanical Hamiltonian has the same form as the classical Hamiltonian, the time-independent Schrödinger equation for a particle of mass m and energy E moving in a simple harmonic potential becomes

d²ψ/dx² = (2 m/ħ²) [(1/2) K x² − E] ψ.   (5.91)

Let ω = √(K/m), where ω is the oscillator's classical angular frequency of oscillation. Furthermore, let

y = √(m ω/ħ) x,   (5.92)

and

ǫ = 2 E/(ħ ω).   (5.93)

Equation (5.91) reduces to

d²ψ/dy² − (y² − ǫ) ψ = 0.   (5.94)

We need to find solutions to the above equation which are bounded at infinity: i.e., solutions which satisfy the boundary condition ψ → 0 as |y| → ∞. Consider the behavior of the solution to Eq. (5.94) in the limit |y| ≫ 1. As is easily seen, in this limit the equation simplifies somewhat to give

d²ψ/dy² − y² ψ ≃ 0.   (5.95)

The approximate solutions to the above equation are

ψ(y) ≃ A(y) e^{±y²/2},   (5.96)

where A(y) is a relatively slowly varying function of y. Clearly, if ψ(y) is to remain bounded as |y| → ∞ then we must choose the exponentially decaying solution. This suggests that we should write

ψ(y) = h(y) e^{−y²/2},   (5.97)

where we would expect h(y) to be an algebraic, rather than an exponential, function of y. Substituting Eq. (5.97) into Eq. (5.94), we obtain

d²h/dy² − 2 y dh/dy + (ǫ − 1) h = 0.   (5.98)

Let us attempt a power-series solution of the form

h(y) = Σ from i=0 to ∞ of ci y^i.   (5.99)

Inserting this test solution into Eq. (5.98), and equating the coefficients of y^i, we obtain the recursion relation

c_{i+2} = [(2 i − ǫ + 1)/((i + 1) (i + 2))] ci.   (5.100)

Consider the behavior of h(y) in the limit |y| → ∞.
The above recursion relation simplifies to ci+2 ≃2 i ci. (5.101) Hence, at large |y|, when the higher powers of y dominate, we have h(y) ∼C X j y2 j j! ∼C e y2. (5.102) It follows that ψ(y) = h(y) exp(−y2/2) varies as exp( y2/2) as |y| →∞. This behavior is unacceptable, since it does not satisfy the boundary condition ψ →0 as |y| →∞. The only way in which we can prevent ψ from blowing up as |y| →∞is to demand that the power series (5.99) terminate at some finite value of i. This implies, from the recursion relation (5.100), that ǫ = 2 n + 1, (5.103) where n is a non-negative integer. Note that the number of terms in the power series (5.99) is n + 1. Finally, using Eq. (5.93), we obtain E = (n + 1/2) ¯ h ω, (5.104) for n = 0, 1, 2, · · ·. Hence, we conclude that a particle moving in a harmonic potential has quantized en-ergy levels which are equally spaced. The spacing between successive energy levels is ¯ h ω, where ω is the classical oscillation frequency. Furthermore, the lowest energy state (n = 0) possesses the finite energy (1/2) ¯ hω. This is sometimes called zero-point energy. It is eas-ily demonstrated that the (normalized) wavefunction of the lowest energy state takes the form ψ0(x) = e−x2/2 d2 π1/4 √ d, (5.105) where d = q ¯ h/m ω. Let ψn(x) be an energy eigenstate of the harmonic oscillator corresponding to the eigen-value En = (n + 1/2) ¯ h ω. (5.106) 82 QUANTUM MECHANICS Assuming that the ψn are properly normalized (and real), we have Z ∞ −∞ ψn ψm dx = δnm. (5.107) Now, Eq. (5.94) can be written −d2 dy2 + y2 ! ψn = (2n + 1) ψn, (5.108) where x = d y, and d = q ¯ h/m ω. It is helpful to define the operators a± = 1 √ 2 ∓d dy + y ! . (5.109) As is easily demonstrated, these operators satisfy the commutation relation [a+, a−] = −1. (5.110) Using these operators, Eq. (5.108) can also be written in the forms a+ a−ψn = n ψn, (5.111) or a−a+ ψn = (n + 1) ψn. (5.112) The above two equations imply that a+ ψn = √ n + 1 ψn+1, (5.113) a−ψn = √n ψn−1. 
(5.114) We conclude that a+ and a−are raising and lowering operators, respectively, for the har-monic oscillator: i.e., operating on the wavefunction with a+ causes the quantum number n to increase by unity, and vice versa. The Hamiltonian for the harmonic oscillator can be written in the form H = ¯ h ω a+ a−+ 1 2 ! , (5.115) from which the result H ψn = (n + 1/2) ¯ h ω ψn = En ψn (5.116) is readily deduced. Finally, Eqs. (5.107), (5.113), and (5.114) yield the useful expression Z ∞ −∞ ψm x ψn dx = d √ 2 Z ∞ −∞ ψm (a+ + a−) ψn dx (5.117) = s ¯ h 2 m ω √m δm,n+1 + √n δm,n−1  . One-Dimensional Potentials 83 Exercises 1. Show that the wavefunction of a particle of mass m in an infinite one-dimensional square-well of width a returns to its original form after a quantum revival time T = 4 m a2/π ¯ h. 2. A particle of mass m moves freely in one dimension between impenetrable walls located at x = 0 and a. Its initial wavefunction is ψ(x, 0) = q 2/a sin(3π x/a). What is the subsequent time evolution of the wavefunction? Suppose that the initial wave-function is ψ(x, 0) = q 1/a sin(π x/a) [1 + 2 cos(π x/a)]. What now is the subsequent time evolution? Calculate the probability of finding the particle between 0 and a/2 as a function of time in each case. 3. A particle of mass m is in the ground-state of an infinite one-dimensional square-well of width a. Suddenly the well expands to twice its original size, as the right wall moves from a to 2a, leaving the wavefunction momentarily undisturbed. The energy of the particle is now measured. What is the most probable result? What is the probability of obtaining this result? What is the next most probable result, and what is its probability of occurrence? What is the expectation value of the energy? 4. A stream of particles of mass m and energy E > 0 encounter a potential step of height W(< E): i.e., V(x) = 0 for x < 0 and V(x) = W for x > 0 with the particles incident from −∞. 
Show that the fraction reflected is

R = [(k − q)/(k + q)]²,

where k² = (2m/ħ²) E and q² = (2m/ħ²) (E − W).

5. A stream of particles of mass m and energy E > 0 encounter the delta-function potential V(x) = −α δ(x), where α > 0. Show that the fraction reflected is R = β²/(1 + β²), where β = m α/(ħ² k), and k² = (2m/ħ²) E. Does such a potential have a bound state? If so, what is its energy?

6. Two potential wells of width a are separated by a distance L ≫ a. A particle of mass m and energy E is in one of the wells. Estimate the time required for the particle to tunnel to the other well.

7. Consider the half-infinite potential well

V(x) = ∞ for x ≤ 0, −V0 for 0 < x < L, and 0 for x ≥ L,

where V0 > 0. Demonstrate that the bound-states of a particle of mass m and energy −V0 < E < 0 satisfy

tan( √(2 m (V0 + E)) L/ħ ) = −√( (V0 + E)/(−E) ).

8. Find the properly normalized first two excited energy eigenstates of the harmonic oscillator, as well as the expectation value of the potential energy in the nth energy eigenstate. Hint: Consider the raising and lowering operators a± defined in Eq. (5.109).

6 Multi-Particle Systems

6.1 Introduction

In this chapter, we shall extend the single-particle, one-dimensional formulation of non-relativistic quantum mechanics introduced in the previous sections in order to investigate one-dimensional systems containing multiple particles.

6.2 Fundamental Concepts

We have already seen that the instantaneous state of a system consisting of a single non-relativistic particle, whose position coordinate is x, is fully specified by a complex wavefunction ψ(x, t). This wavefunction is interpreted as follows. The probability of finding the particle between x and x + dx at time t is given by |ψ(x, t)|² dx. This interpretation only makes sense if the wavefunction is normalized such that

∫ from −∞ to ∞ of |ψ(x, t)|² dx = 1   (6.1)

at all times.
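As a quick numerical illustration of the normalization condition (6.1) — the script and its names are mine, not the text's — one can integrate |ψ|² for an infinite-square-well eigenstate ψ(x) = √(2/a) sin(n π x/a) on 0 < x < a:

```python
import math

def norm_integral(n, a=1.0, steps=20000):
    """Trapezoidal estimate of the integral of |psi|^2 over the well."""
    h = a / steps
    total = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        psi = math.sqrt(2.0 / a) * math.sin(n * math.pi * i * h / a)
        total += w * psi * psi
    return total * h

print(norm_integral(3))    # -> 1.0
```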
The physical significance of this normalization requirement is that the prob-ability of the particle being found anywhere on the x-axis must always be unity (which corresponds to certainty). Consider a system containing N non-relativistic particles, labeled i = 1, N, moving in one dimension. Let xi and mi be the position coordinate and mass, respectively, of the ith particle. By analogy with the single-particle case, the instantaneous state of a multi-particle system is specified by a complex wavefunction ψ(x1, x2, . . ., xN, t). The probability of finding the first particle between x1 and x1 + dx1, the second particle between x2 and x2 + dx2, etc., at time t is given by |ψ(x1, x2, . . ., xN, t)|2 dx1 dx2 . . . dxN. It follows that the wavefunction must satisfy the normalization condition Z |ψ(x1, x2, . . ., xN, t)|2 dx1 dx2 . . . dxN = 1 (6.2) at all times, where the integration is taken over all x1 x2 . . .xN space. In a single-particle system, position is represented by the algebraic operator x, whereas momentum is represented by the differential operator −i ¯ h ∂/∂x (see Sect. 4.6). By anal-ogy, in a multi-particle system, the position of the ith particle is represented by the alge-braic operator xi, whereas the corresponding momentum is represented by the differential operator pi = −i ¯ h ∂ ∂xi . (6.3) 86 QUANTUM MECHANICS Since the xi are independent variables (i.e., ∂xi/∂xj = δij), we conclude that the various position and momentum operators satisfy the following commutation relations: [xi, xj] = 0, (6.4) [pi, pj] = 0, (6.5) [xi, pj] = i ¯ h δij. (6.6) Now, we know, from Sect. 4.10, that two dynamical variables can only be (exactly) mea-sured simultaneously if the operators which represent them in quantum mechanics com-mute with one another. 
Thus, it is clear, from the above commutation relations, that the only restriction on measurement in a one-dimensional multi-particle system is that it is impossible to simultaneously measure the position and momentum of the same particle. Note, in particular, that a knowledge of the position or momentum of a given particle does not in any way preclude a similar knowledge for a different particle. The commutation relations (6.4)–(6.6) illustrate an important point in quantum mechanics: namely, that operators corresponding to different degrees of freedom of a dynamical system tend to com-mute with one another. In this case, the different degrees of freedom correspond to the different motions of the various particles making up the system. Finally, if H(x1, x2, . . ., xN, t) is the Hamiltonian of the system then the multi-particle wavefunction ψ(x1, x2, . . ., xN, t) satisfies the usual time-dependent Schr¨ odinger equation [see Eq. (4.63)] i ¯ h ∂ψ ∂t = H ψ. (6.7) Likewise, a multi-particle state of definite energy E (i.e., an eigenstate of the Hamiltonian with eigenvalue E) is written (see Sect. 4.12) ψ(x1, x2, . . ., xN, t) = ψE(x1, x2, . . ., xN) e−i E t/¯ h, (6.8) where the stationary wavefunction ψE satisfies the time-independent Schr¨ odinger equation [see Eq. (4.160)] H ψE = E ψE. (6.9) Here, H is assumed not to be an explicit function of t. 6.3 Non-Interacting Particles In general, we expect the Hamiltonian of a multi-particle system to take the form H(x1, x2, . . ., xN, t) = X i=1,N p 2 i 2 mi + V(x1, x2, . . ., xN, t). (6.10) Here, the first term on the right-hand side represents the total kinetic energy of the system, whereas the potential V specifies the nature of the interaction between the various particles making up the system, as well as the interaction of the particles with any external forces. Multi-Particle Systems 87 Suppose that the particles do not interact with one another. 
This implies that each particle moves in a common potential: i.e., V(x1, x2, . . ., xN, t) = X i=1,N V(xi, t). (6.11) Hence, we can write H(x1, x2, . . ., xN, t) = X i=1,N Hi(xi, t), (6.12) where Hi = p 2 i 2 mi + V(xi, t). (6.13) In other words, for the case of non-interacting particles, the multi-particle Hamiltonian of the system can be written as the sum of N independent single-particle Hamiltonians. Here, Hi represents the energy of the ith particle, and is completely unaffected by the energies of the other particles. Furthermore, given that the various particles which make up the system are non-interacting, we expect their instantaneous positions to be completely un-correlated with one another. This immediately implies that the multi-particle wavefunction ψ(x1, x2, . . .xN, t) can be written as the product of N independent single-particle wave-functions: i.e., ψ(x1, x2, . . ., xN, t) = ψ1(x1, t) ψ2(x2, t) . . .ψN(xN, t). (6.14) Here, |ψi(xi, t)|2 dxi is the probability of finding the ith particle between xi and xi + dxi at time t. This probability is completely unaffected by the positions of the other particles. It is evident that ψi(xi, t) must satisfy the normalization constraint Z ∞ −∞ |ψi(xi, t)|2 dxi = 1. (6.15) If this is the case then the normalization constraint (6.2) for the multi-particle wavefunc-tion is automatically satisfied. Equation (6.14) illustrates an important point in quantum mechanics: namely, that we can generally write the total wavefunction of a many degree of freedom system as a product of different wavefunctions corresponding to each degree of freedom. According to Eqs. (6.12) and (6.14), the time-dependent Schr¨ odinger equation (6.7) for a system of N non-interacting particles factorizes into N independent equations of the form i ¯ h ∂ψi ∂t = Hi ψi. 
(6.16) Assuming that V(x, t) ≡V(x), the time-independent Schr¨ odinger equation (6.9) also fac-torizes to give Hi ψEi = Ei ψEi, (6.17) 88 QUANTUM MECHANICS where ψi(xi, t) = ψEi(xi) exp(−i Ei t/¯ h), and Ei is the energy of the ith particle. Hence, a multi-particle state of definite energy E has a wavefunction of the form ψ(x1, x2, . . ., xn, t) = ψE(x1, x2, . . ., xN) e−i E t/¯ h, (6.18) where ψE(x1, x2, . . ., xN) = ψE1(x1) ψE2(x2) . . .ψEN(xN), (6.19) and E = X i=1,N Ei. (6.20) Clearly, for the case of non-interacting particles, the energy of the whole system is simply the sum of the energies of the component particles. 6.4 Two-Particle Systems Consider a system consisting of two particles, mass m1 and m2, interacting via the potential V(x1 −x2) which only depends on the relative positions of the particles. According to Eqs. (6.3) and (6.10), the Hamiltonian of the system is written H(x1, x2) = −¯ h2 2 m1 ∂2 ∂x 2 1 −¯ h2 2 m2 ∂2 ∂x 2 2 + V(x1 −x2). (6.21) Let x′ = x1 −x2 (6.22) be the particles’ relative position, and X = m1 x1 + m2 x2 m1 + m2 (6.23) the position of the center of mass. It is easily demonstrated that ∂ ∂x1 = m1 m1 + m2 ∂ ∂X + ∂ ∂x′, (6.24) ∂ ∂x2 = m2 m1 + m2 ∂ ∂X −∂ ∂x′. (6.25) Hence, when expressed in terms of the new variables, x′ and X, the Hamiltonian becomes H(x′, X) = −¯ h2 2 M ∂2 ∂X2 −¯ h2 2 µ ∂2 ∂x′ 2 + V(x′), (6.26) where M = m1 + m2 (6.27) Multi-Particle Systems 89 is the total mass of the system, and µ = m1 m2 m1 + m2 (6.28) the so-called reduced mass. Note that the total momentum of the system can be written P = −i ¯ h ∂ ∂x1 + ∂ ∂x2 ! = −i ¯ h ∂ ∂X. (6.29) The fact that the Hamiltonian (6.26) is separable when expressed in terms of the new coordinates [i.e., H(x′, X) = Hx′(x′) + HX(X)] suggests, by analogy with the analysis in the previous section, that the wavefunction can be factorized: i.e., ψ(x1, x2, t) = ψx′(x′, t) ψX(X, t). 
(6.30) Hence, the time-dependent Schr¨ odinger equation (6.7) also factorizes to give i ¯ h ∂ψx′ ∂t = −¯ h2 2 µ ∂2ψx′ ∂x′ 2 + V(x′) ψx′, (6.31) and i ¯ h ∂ψX ∂t = −¯ h2 2 M ∂2ψX ∂X2 . (6.32) The above equation can be solved to give ψX(X, t) = ψ0 e i (P ′ X/¯ h−E′ t/¯ h), (6.33) where ψ0, P ′, and E′ = P ′ 2/2 M are constants. It is clear, from Eqs. (6.29), (6.30), and (6.33), that the total momentum of the system takes the constant value P ′: i.e., momentum is conserved. Suppose that we work in the centre of mass frame of the system, which is characterized by P ′ = 0. It follows that ψX = ψ0. In this case, we can write the wavefunction of the system in the form ψ(x1, x2, t) = ψx′(x′, t) ψ0 ≡ψ(x1 −x2, t), where i ¯ h ∂ψ ∂t = −¯ h2 2 µ ∂2ψ ∂x 2 + V(x) ψ. (6.34) In other words, in the center of mass frame, two particles of mass m1 and m2, moving in the potential V(x1 −x2), are equivalent to a single particle of mass µ, moving in the potential V(x), where x = x1 −x2. This is a familiar result from classical dynamics. 6.5 Identical Particles Consider a system consisting of two identical particles of mass m. As before, the instanta-neous state of the system is specified by the complex wavefunction ψ(x1, x2, t). However, 90 QUANTUM MECHANICS the only thing which this wavefunction tells us is that the probability of finding the first particle between x1 and x1 + dx1, and the second between x2 and x2 + dx2, at time t is |ψ(x1, x2, t)|2 dx1 dx2. However, since the particles are identical, this must be the same as the probability of finding the first particle between x2 and x2+dx2, and the second between x1 and x1 + dx1, at time t (since, in both cases, the physical outcome of the measurement is exactly the same). Hence, we conclude that |ψ(x1, x2, t)|2 = |ψ(x2, x1, t)|2, (6.35) or ψ(x1, x2, t) = e i ϕ ψ(x2, x1, t), (6.36) where ϕ is a real constant. 
However, if we swap the labels on particles 1 and 2 (which are, after all, arbitrary for identical particles), and repeat the argument, we also conclude that ψ(x2, x1, t) = e i ϕ ψ(x1, x2, t). (6.37) Hence, e 2 i ϕ = 1. (6.38) The only solutions to the above equation are ϕ = 0 and ϕ = π. Thus, we infer that for a system consisting of two identical particles, the wavefunction must be either symmetric or anti-symmetric under interchange of particle labels: i.e., either ψ(x2, x1, t) = ψ(x1, x2, t), (6.39) or ψ(x2, x1, t) = −ψ(x1, x2, t). (6.40) The above argument can easily be extended to systems containing more than two identical particles. It turns out that whether the wavefunction of a system containing many identical par-ticles is symmetric or anti-symmetric under interchange of the labels on any two parti-cles is determined by the nature of the particles themselves. Particles with wavefunctions which are symmetric under label interchange are said to obey Bose-Einstein statistics, and are called bosons—for instance, photons are bosons. Particles with wavefunctions which are anti-symmetric under label interchange are said to obey Fermi-Dirac statistics, and are called fermions—for instance, electrons, protons, and neutrons are fermions. Consider a system containing two identical and non-interacting bosons. Let ψ(x, E) be a properly normalized, single-particle, stationary wavefunction corresponding to a state of definite energy E. The stationary wavefunction of the whole system is written ψE boson(x1, x2) = 1 √ 2 [ψ(x1, Ea) ψ(x2, Eb) + ψ(x2, Ea) ψ(x1, Eb)] , (6.41) when the energies of the two particles are Ea and Eb. This expression automatically sat-isfies the symmetry requirement on the wavefunction. Incidentally, since the particles are Multi-Particle Systems 91 identical, we cannot be sure which particle has energy Ea, and which has energy Eb—only that one particle has energy Ea, and the other Eb. 
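The symmetry requirement is easy to check concretely. The following Python sketch (not from the text; the particle-in-a-box states on [0, 1] are an illustrative choice) builds a symmetric combination of the form (6.41), together with its anti-symmetric counterpart, and verifies the exchange properties (6.39) and (6.40):

```python
import math

# Symmetrized two-particle wavefunctions built from two single-particle
# states. The box states on [0, 1] are an illustrative choice, not fixed
# by the text.

def psi(n, x):
    """Normalized particle-in-a-box state sqrt(2) sin(n pi x) on [0, 1]."""
    return math.sqrt(2.0) * math.sin(n * math.pi * x)

def psi_boson(x1, x2):
    """Symmetric combination, cf. Eq. (6.41)."""
    return (psi(1, x1) * psi(2, x2) + psi(2, x1) * psi(1, x2)) / math.sqrt(2.0)

def psi_fermion(x1, x2):
    """Anti-symmetric combination."""
    return (psi(1, x1) * psi(2, x2) - psi(2, x1) * psi(1, x2)) / math.sqrt(2.0)

a, b = 0.3, 0.7
print(psi_boson(a, b) - psi_boson(b, a))      # 0.0: symmetric under exchange
print(psi_fermion(a, b) + psi_fermion(b, a))  # 0.0: anti-symmetric under exchange
print(psi_fermion(a, a))                      # 0.0: vanishes at coincidence
```

Note that the anti-symmetric combination vanishes identically when x1 = x2, a first hint of the exclusion effects discussed below.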
For a system consisting of two identical and non-interacting fermions, the stationary wavefunction of the whole system takes the form ψE fermion(x1, x2) = 1 √ 2 [ψ(x1, Ea) ψ(x2, Eb) −ψ(x2, Ea) ψ(x1, Eb)] , (6.42) Again, this expression automatically satisfies the symmetry requirement on the wavefunc-tion. Note that if Ea = Eb then the total wavefunction becomes zero everywhere. Now, in quantum mechanics, a null wavefunction corresponds to the absence of a state. We thus conclude that it is impossible for the two fermions in our system to occupy the same single-particle stationary state. Finally, if the two particles are somehow distinguishable then the stationary wavefunc-tion of the system is simply ψE dist(x1, x2) = ψ(x1, Ea) ψ(x2, Eb). (6.43) Let us evaluate the variance of the distance, x1 −x2, between the two particles, using the above three wavefunctions. It is easily demonstrated that if the particles are distin-guishable then ⟨(x1 −x2)2⟩dist = ⟨x2⟩a + ⟨x2⟩b −2 ⟨x⟩a ⟨x⟩b, (6.44) where ⟨xn⟩a,b = Z ∞ −∞ ψ∗(x, Ea,b) xn ψ(x, Ea,b) dx. (6.45) For the case of two identical bosons, we find ⟨(x1 −x2)2⟩boson = ⟨(x1 −x2)2⟩dist −2 |⟨x⟩ab|2, (6.46) where ⟨x⟩ab = Z ∞ −∞ ψ∗(x, Ea) x ψ(x, Eb) dx. (6.47) Here, we have assumed that Ea ̸= Eb, so that Z ∞ −∞ ψ∗(x, Ea) ψ(x, Eb) dx = 0. (6.48) Finally, for the case of two identical fermions, we obtain ⟨(x1 −x2)2⟩fermion = ⟨(x1 −x2)2⟩dist + 2 |⟨x⟩ab|2, (6.49) Equation (6.46) shows that the symmetry requirement on the total wavefunction of two identical bosons forces the particles to be, on average, closer together than two similar distinguishable particles. Conversely, Eq. (6.49) shows that the symmetry requirement on the total wavefunction of two identical fermions forces the particles to be, on average, 92 QUANTUM MECHANICS further apart than two similar distinguishable particles. 
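These variance formulae can be checked numerically. The sketch below (an illustration, not part of the text) uses the two lowest harmonic-oscillator states with ¯h = m = ω = 1 and a simple midpoint-rule integrator to evaluate Eqs. (6.44), (6.46), and (6.49):

```python
import math

# Numerical check of Eqs. (6.44), (6.46), (6.49) using the two lowest
# harmonic-oscillator states (an illustrative choice; units hbar = m = omega = 1).

def psi0(x):
    return math.pi ** -0.25 * math.exp(-x * x / 2.0)

def psi1(x):
    return math.pi ** -0.25 * math.sqrt(2.0) * x * math.exp(-x * x / 2.0)

def integrate(f, a=-10.0, b=10.0, n=4000):
    """Midpoint rule; crude but ample for these rapidly decaying integrands."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

x2_a = integrate(lambda x: psi0(x) * x * x * psi0(x))  # <x^2>_a = 1/2
x2_b = integrate(lambda x: psi1(x) * x * x * psi1(x))  # <x^2>_b = 3/2
x_a  = integrate(lambda x: psi0(x) * x * psi0(x))      # <x>_a  = 0
x_b  = integrate(lambda x: psi1(x) * x * psi1(x))      # <x>_b  = 0
x_ab = integrate(lambda x: psi0(x) * x * psi1(x))      # <x>_ab = 1/sqrt(2)

dist    = x2_a + x2_b - 2.0 * x_a * x_b  # Eq. (6.44), approximately 2
boson   = dist - 2.0 * x_ab ** 2         # Eq. (6.46), approximately 1
fermion = dist + 2.0 * x_ab ** 2         # Eq. (6.49), approximately 3

print(dist, boson, fermion)
```

With these states the three variances come out as 2, 1, and 3, respectively: the bosons are, on average, closer together, and the fermions further apart, than the distinguishable pair.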
However, the strength of this effect depends on the square of the magnitude of ⟨x⟩ab, which measures the overlap between the wavefunctions ψ(x, Ea) and ψ(x, Eb). It is evident, then, that if these two wavefunctions do not overlap to any great extent then identical bosons or fermions will act very much like distinguishable particles. For a system containing N identical and non-interacting fermions, the anti-symmetric stationary wavefunction of the system is written ψE(x1, x2, . . ., xN) = (1/√N!) det[ ψ(x1, E1) ψ(x2, E1) . . . ψ(xN, E1) ; ψ(x1, E2) ψ(x2, E2) . . . ψ(xN, E2) ; . . . ; ψ(x1, EN) ψ(x2, EN) . . . ψ(xN, EN) ], (6.50) where the semicolons separate successive rows of the N × N determinant. This expression is known as the Slater determinant, and automatically satisfies the symmetry requirements on the wavefunction. Here, the energies of the particles are E1, E2, . . . , EN. Note, again, that if any two particles in the system have the same energy (i.e., if Ei = Ej for some i ̸= j) then the total wavefunction is null. We conclude that it is impossible for any two identical fermions in a multi-particle system to occupy the same single-particle stationary state. This important result is known as the Pauli exclusion principle. Exercises (N.B. Neglect spin in the following questions.) 1. Consider a system consisting of two non-interacting particles, and three one-particle states, ψa(x), ψb(x), and ψc(x). How many different two-particle states can be constructed if the particles are (a) distinguishable, (b) indistinguishable bosons, or (c) indistinguishable fermions? 2. Consider two non-interacting particles, each of mass m, in a one-dimensional harmonic oscillator potential of classical oscillation frequency ω. If one particle is in the ground-state, and the other in the first excited state, calculate ⟨(x1 −x2)2⟩ assuming that the particles are (a) distinguishable, (b) indistinguishable bosons, or (c) indistinguishable fermions. 3. Two non-interacting particles, with the same mass m, are in a one-dimensional box of length a.
What are the four lowest energies of the system? What are the degeneracies of these energies if the two particles are (a) distinguishable, (b) indistinguishable bosons, or (c) indistinguishable fermions? 4. Two particles in a one-dimensional box of length a occupy the n = 4 and n′ = 3 states. Write the properly normalized wavefunctions if the particles are (a) distinguishable, (b) indistinguishable bosons, or (c) indistinguishable fermions. 7 Three-Dimensional Quantum Mechanics 7.1 Introduction In this chapter, we shall extend our previous one-dimensional formulation of non-relativistic quantum mechanics to produce a fully three-dimensional theory. 7.2 Fundamental Concepts We have seen that in one dimension the instantaneous state of a single non-relativistic particle is fully specified by a complex wavefunction, ψ(x, t). The probability of finding the particle at time t between x and x + dx is P(x, t) dx, where P(x, t) = |ψ(x, t)|2. (7.1) Moreover, the wavefunction is normalized such that Z ∞ −∞ |ψ(x, t)|2 dx = 1 (7.2) at all times. In three dimensions, the instantaneous state of a single particle is also fully specified by a complex wavefunction, ψ(x, y, z, t). By analogy with the one-dimensional case, the probability of finding the particle at time t between x and x + dx, between y and y + dy, and between z and z + dz, is P(x, y, z, t) dx dy dz, where P(x, y, z, t) = |ψ(x, y, z, t)|2. (7.3) As usual, this interpretation of the wavefunction only makes sense if the wavefunction is normalized such that Z ∞ −∞ Z ∞ −∞ Z ∞ −∞ |ψ(x, y, z, t)|2 dx dy dz = 1. (7.4) This normalization constraint ensures that the probability of finding the particle anywhere in space is always unity. In one dimension, we can write the probability conservation equation (see Sect. 4.5) ∂|ψ|2 ∂t + ∂j ∂x = 0, (7.5) where j = i ¯ h 2 m ψ ∂ψ∗ ∂x −ψ∗∂ψ ∂x ! (7.6) is the flux of probability along the x-axis. Integrating Eq.
(7.5) over all space, and making use of the fact that ψ →0 as |x| →∞if ψ is to be square-integrable, we obtain d dt Z ∞ −∞ |ψ(x, t)|2 dx = 0. (7.7) In other words, if the wavefunction is initially normalized then it stays normalized as time progresses. This is a necessary criterion for the viability of our basic interpretation of |ψ|2 as a probability density. In three dimensions, by analogy with the one dimensional case, the probability conser-vation equation becomes ∂|ψ|2 ∂t + ∂jx ∂x + ∂jy ∂y + ∂jz ∂z = 0. (7.8) Here, jx = i ¯ h 2 m ψ ∂ψ∗ ∂x −ψ∗∂ψ ∂x ! (7.9) is the flux of probability along the x-axis, and jy = i ¯ h 2 m ψ ∂ψ∗ ∂y −ψ∗∂ψ ∂y ! (7.10) the flux of probability along the y-axis, etc. Integrating Eq. (7.8) over all space, and making use of the fact that ψ →0 as |r| →∞if ψ is to be square-integrable, we obtain d dt Z ∞ −∞ Z ∞ −∞ Z ∞ −∞ |ψ(x, y, z, t)|2 dx dy dz = 0. (7.11) Thus, the normalization of the wavefunction is again preserved as time progresses, as must be the case if |ψ|2 is to be interpreted as a probability density. In one dimension, position is represented by the algebraic operator x, whereas momen-tum is represented by the differential operator −i ¯ h ∂/∂x (see Sect. 4.6). By analogy, in three dimensions, the Cartesian coordinates x, y, and z are represented by the algebraic operators x, y, and z, respectively, whereas the three Cartesian components of momentum, px, py, and pz, have the following representations: px ≡ −i ¯ h ∂ ∂x, (7.12) py ≡ −i ¯ h ∂ ∂y, (7.13) pz ≡ −i ¯ h ∂ ∂z. (7.14) Let x1 = x, x2 = y, x3 = z, and p1 = px, etc. Since the xi are independent variables (i.e., ∂xi/∂xj = δij), we conclude that the various position and momentum operators satisfy the Three-Dimensional Quantum Mechanics 95 following commutation relations: [xi, xj] = 0, (7.15) [pi, pj] = 0, (7.16) [xi, pj] = i ¯ h δij. (7.17) Now, we know, from Sect. 
4.10, that two dynamical variables can only be (exactly) measured simultaneously if the operators which represent them in quantum mechanics commute with one another. Thus, it is clear, from the above commutation relations, that the only restriction on measurement in a system consisting of a single particle moving in three dimensions is that it is impossible to simultaneously measure a given position coordinate and the corresponding component of momentum. Note, however, that it is perfectly possible to simultaneously measure two different position coordinates, or two different components of the momentum. The commutation relations (7.15)–(7.17) again illustrate the point that quantum mechanical operators corresponding to different degrees of freedom of a dynamical system (in this case, motion in different directions) tend to commute with one another (see Sect. 6.2). In one dimension, the time evolution of the wavefunction is given by [see Eq. (4.63)] i ¯ h ∂ψ ∂t = H ψ, (7.18) where H is the Hamiltonian. The same equation governs the time evolution of the wavefunction in three dimensions. Now, in one dimension, the Hamiltonian of a non-relativistic particle of mass m takes the form H = p 2 x 2 m + V(x, t), (7.19) where V(x, t) is the potential energy. In three dimensions, this expression generalizes to H = p 2 x + p 2 y + p 2 z 2 m + V(x, y, z, t). (7.20) Hence, making use of Eqs. (7.12)–(7.14) and (7.18), the three-dimensional version of the time-dependent Schrödinger equation becomes [see Eq. (4.1)] i ¯ h ∂ψ ∂t = −¯ h2 2 m ∇2ψ + V ψ. (7.21) Here, the differential operator ∇2 ≡∂2 ∂x2 + ∂2 ∂y2 + ∂2 ∂z2 (7.22) is known as the Laplacian. Incidentally, the probability conservation equation (7.8) is easily derivable from Eq. (7.21). An eigenstate of the Hamiltonian corresponding to the eigenvalue E satisfies H ψ = E ψ. (7.23) It follows from Eq. (7.18) that (see Sect.
4.12) ψ(x, y, z, t) = ψ(x, y, z) e−i E t/¯ h, (7.24) where the stationary wavefunction ψ(x, y, z) satisfies the three-dimensional version of the time-independent Schrödinger equation [see Eq. (4.159)]: ∇2ψ = 2 m ¯ h2 (V −E) ψ, (7.25) where V is assumed not to depend explicitly on t. 7.3 Particle in a Box Consider a particle of mass m trapped inside a cubic box of dimension a (see Sect. 5.2). The particle’s stationary wavefunction, ψ(x, y, z), satisfies ∂2 ∂x2 + ∂2 ∂y2 + ∂2 ∂z2 ! ψ = −2 m ¯ h2 E ψ, (7.26) where E is the particle energy. The wavefunction satisfies the boundary condition that it must be zero at the edges of the box. Let us search for a separable solution to the above equation of the form ψ(x, y, z) = X(x) Y(y) Z(z). (7.27) The factors of the wavefunction satisfy the boundary conditions X(0) = X(a) = 0, Y(0) = Y(a) = 0, and Z(0) = Z(a) = 0. Substituting (7.27) into Eq. (7.26), and rearranging, we obtain X′′ X + Y ′′ Y + Z′′ Z = −2 m ¯ h2 E, (7.28) where ′ denotes a derivative with respect to argument. It is evident that the only way in which the above equation can be satisfied at all points within the box is if X′′ X = −k 2 x , (7.29) Y ′′ Y = −k 2 y, (7.30) Z′′ Z = −k 2 z , (7.31) where k 2 x , k 2 y, and k 2 z are spatial constants. Note that the right-hand sides of the above equations must contain negative, rather than positive, spatial constants, because it would not otherwise be possible to satisfy the boundary conditions. The solutions to the above equations which are properly normalized, and satisfy the boundary conditions, are [see Eq. (5.11)] X(x) = s 2 a sin(kx x), (7.32) Y(y) = s 2 a sin(ky y), (7.33) Z(z) = s 2 a sin(kz z), (7.34) where kx = lx π a , (7.35) ky = ly π a , (7.36) kz = lz π a . (7.37) Here, lx, ly, and lz are positive integers. Thus, from Eqs. (7.28)–(7.31), the energy is written [see Eq. (5.9)] E = l2 π2 ¯ h2 2 m a2 , (7.38) where l2 = l 2 x + l 2 y + l 2 z .
(7.39) 7.4 Degenerate Electron Gases Consider N electrons trapped in a cubic box of dimension a. Let us treat the electrons as essentially non-interacting particles. According to Sect. 6.3, the total energy of a system consisting of many non-interacting particles is simply the sum of the single-particle ener-gies of the individual particles. Furthermore, electrons are subject to the Pauli exclusion principle (see Sect. 6.5), since they are indistinguishable fermions. The exclusion princi-ple states that no two electrons in our system can occupy the same single-particle energy level. Now, from the previous section, the single-particle energy levels for a particle in a box are characterized by the three quantum numbers lx, ly, and lz. Thus, we conclude that no two electrons in our system can have the same set of values of lx, ly, and lz. It turns out that this is not quite true, because electrons possess an intrinsic angular momentum called spin (see Cha. 10). The spin states of an electron are governed by an additional quantum number, which can take one of two different values. Hence, when spin is taken into account, we conclude that a maximum of two electrons (with different spin quantum 98 QUANTUM MECHANICS numbers) can occupy a single-particle energy level corresponding to a particular set of val-ues of lx, ly, and lz. Note, from Eqs. (7.38) and (7.39), that the associated particle energy is proportional to l2 = l 2 x + l 2 y + l 2 z . Suppose that our electrons are cold: i.e., they have comparatively little thermal energy. In this case, we would expect them to fill the lowest single-particle energy levels available to them. We can imagine the single-particle energy levels as existing in a sort of three-dimensional quantum number space whose Cartesian coordinates are lx, ly, and lz. Thus, the energy levels are uniformly distributed in this space on a cubic lattice. Moreover, the distance between nearest neighbour energy levels is unity. 
This implies that the number of energy levels per unit volume is also unity. Finally, the energy of a given energy level is proportional to its distance, l2 = l 2 x + l 2 y + l 2 z , from the origin. Since we expect cold electrons to occupy the lowest energy levels available to them, but only two electrons can occupy a given energy level, it follows that if the number of electrons, N, is very large then the filled energy levels will be approximately distributed in a sphere centered on the origin of quantum number space. The number of energy levels contained in a sphere of radius l is approximately equal to the volume of the sphere— since the number of energy levels per unit volume is unity. It turns out that this is not quite correct, because we have forgotten that the quantum numbers lx, ly, and lz can only take positive values. Hence, the filled energy levels actually only occupy one octant of a sphere. The radius lF of the octant of filled energy levels in quantum number space can be calculated by equating the number of energy levels it contains to the number of electrons, N. Thus, we can write N = 2 × 1 8 × 4 π 3 l 3 F . (7.40) Here, the factor 2 is to take into account the two spin states of an electron, and the factor 1/8 is to take account of the fact that lx, ly, and lz can only take positive values. Thus, lF = 3 N π !1/3 . (7.41) According to Eq. (7.38), the energy of the most energetic electrons—which is known as the Fermi energy—is given by EF = l 2 F π2 ¯ h2 2 me a2 = π2 ¯ h2 2 m a2 3 N π !2/3 , (7.42) where me is the electron mass. This can also be written as EF = π2 ¯ h2 2 me 3 n π !2/3 , (7.43) where n = N/a3 is the number of electrons per unit volume (in real space). Note that the Fermi energy only depends on the number density of the confined electrons. 
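Equation (7.43) is easily evaluated for a real metal. The Python sketch below is illustrative: the constants are standard SI values, and the copper number density n = 8.5 × 10^28 m−3 is the value quoted in the chapter exercises:

```python
import math

# Fermi energy of a free-electron gas, Eq. (7.43):
#   E_F = (pi^2 hbar^2 / 2 m_e) (3 n / pi)^(2/3).
# Illustrative numbers: standard SI constants, copper number density
# from the chapter exercises.

hbar = 1.0546e-34   # J s
m_e  = 9.109e-31    # kg
eV   = 1.602e-19    # J

def fermi_energy(n):
    return (math.pi ** 2 * hbar ** 2 / (2.0 * m_e)) * (3.0 * n / math.pi) ** (2.0 / 3.0)

n_cu = 8.5e28                 # free electrons per m^3 in copper (assumed value)
E_F = fermi_energy(n_cu)
print(E_F / eV)               # ~7 eV
```

The result, about 7 eV, is enormous compared to kB T ≈ 0.025 eV at room temperature, so the conduction electrons in a metal are cold in the sense used below.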
The mean energy of the electrons is given by ¯ E = EF ∫0^lF l2 4π l2 dl / [(4/3) π lF^5] = (3/5) EF, (7.44) since E ∝l2, and the energy levels are uniformly distributed in quantum number space inside an octant of radius lF. Now, according to classical physics, the mean thermal energy of the electrons is (3/2) kB T, where T is the electron temperature, and kB the Boltzmann constant. Thus, if kB T ≪EF then our original assumption that the electrons are cold is valid. Note that, in this case, the electron energy is much larger than that predicted by classical physics—electrons in this state are termed degenerate. On the other hand, if kB T ≫EF then the electrons are hot, and are essentially governed by classical physics—electrons in this state are termed non-degenerate. The total energy of a degenerate electron gas is Etotal = N ¯ E = (3/5) N EF. (7.45) Hence, the gas pressure takes the form P = −∂Etotal/∂V = (2/5) n EF, (7.46) since EF ∝a−2 = V−2/3 [see Eq. (7.42)]. Now, the pressure predicted by classical physics is P = n kB T. Thus, a degenerate electron gas has a much higher pressure than that which would be predicted by classical physics. This is an entirely quantum mechanical effect, and is due to the fact that identical fermions cannot get significantly closer together than a de Broglie wavelength without violating the Pauli exclusion principle. Note that, according to Eq. (7.43), the mean spacing between degenerate electrons is d ∼ n−1/3 ∼ h/√(me E) ∼ h/p ∼ λ, (7.47) where λ is the de Broglie wavelength. Thus, an electron gas is non-degenerate when the mean spacing between the electrons is much greater than the de Broglie wavelength, and becomes degenerate as the mean spacing approaches the de Broglie wavelength. It turns out that the conduction (i.e., free) electrons inside metals are highly degenerate (since the number of electrons per unit volume is very large, and EF ∝n2/3).
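A quick numerical comparison of the degeneracy pressure (7.46) with the classical pressure n kB T makes the same point. This sketch is illustrative; the copper-like number density 8.5 × 10^28 m−3 and room temperature T = 300 K are assumed values:

```python
import math

# Degeneracy pressure P = (2/5) n E_F, Eq. (7.46), versus the classical
# pressure n k_B T, for an assumed copper-like electron gas at 300 K.
# Constants are standard SI values.

hbar = 1.0546e-34   # J s
m_e  = 9.109e-31    # kg
k_B  = 1.381e-23    # J/K

n = 8.5e28          # electrons per m^3 (assumed, copper-like)
T = 300.0           # K (assumed room temperature)

E_F = (math.pi ** 2 * hbar ** 2 / (2.0 * m_e)) * (3.0 * n / math.pi) ** (2.0 / 3.0)
P_deg     = 0.4 * n * E_F    # Eq. (7.46)
P_classic = n * k_B * T

print(P_deg, P_classic, P_deg / P_classic)  # degeneracy pressure wins by ~100x
```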
Indeed, most metals are hard to compress as a direct consequence of the high degeneracy pres-sure of their conduction electrons. To be more exact, resistance to compression is usually measured in terms of a quantity known as the bulk modulus, which is defined B = −V ∂P ∂V (7.48) Now, for a fixed number of electrons, P ∝V−5/3 [see Eqs. (7.42) and (7.46)]. Hence, B = 5 3 P = π3 ¯ h2 9 m 3 n π !5/3 . (7.49) 100 QUANTUM MECHANICS For example, the number density of free electrons in magnesium is n ∼8.6×1028 m−3. This leads to the following estimate for the bulk modulus: B ∼6.4×1010 N m−2. The actual bulk modulus is B = 4.5 × 1010 N m−2. 7.5 White-Dwarf Stars A main-sequence hydrogen-burning star, such as the Sun, is maintained in equilibrium via the balance of the gravitational attraction tending to make it collapse, and the thermal pressure tending to make it expand. Of course, the thermal energy of the star is generated by nuclear reactions occurring deep inside its core. Eventually, however, the star will run out of burnable fuel, and, therefore, start to collapse, as it radiates away its remaining thermal energy. What is the ultimate fate of such a star? A burnt-out star is basically a gas of electrons and ions. As the star collapses, its density increases, and so the mean separation between its constituent particles decreases. Even-tually, the mean separation becomes of order the de Broglie wavelength of the electrons, and the electron gas becomes degenerate. Note, that the de Broglie wavelength of the ions is much smaller than that of the electrons, so the ion gas remains non-degenerate. Now, even at zero temperature, a degenerate electron gas exerts a substantial pressure, because the Pauli exclusion principle prevents the mean electron separation from becoming signif-icantly smaller than the typical de Broglie wavelength (see previous section). 
Thus, it is possible for a burnt-out star to maintain itself against complete collapse under gravity via the degeneracy pressure of its constituent electrons. Such stars are termed white-dwarfs. Let us investigate the physics of white-dwarfs in more detail. The total energy of a white-dwarf star can be written E = K + U, (7.50) where K is the kinetic energy of the degenerate electrons (the kinetic energy of the ion is negligible), and U is the gravitational potential energy. Let us assume, for the sake of simplicity, that the density of the star is uniform. In this case, the gravitational potential energy takes the form U = −3 5 G M2 R , (7.51) where G is the gravitational constant, M is the stellar mass, and R is the stellar radius. From the previous subsection, the kinetic energy of a degenerate electron gas is simply K = N ¯ E = 3 5 N EF = 3 5 N π2 ¯ h2 2 me 3 N π V !2/3 , (7.52) where N is the number of electrons, V the volume of the star, and me the electron mass. The interior of a white-dwarf star is composed of atoms like C12 and O16 which contain equal numbers of protons, neutrons, and electrons. Thus, M = 2 N mp, (7.53) Three-Dimensional Quantum Mechanics 101 where mp is the proton mass. Equations (7.50)–(7.53) can be combined to give E = A R2 −B R, (7.54) where A = 3 20 9π 8 !2/3 ¯ h2 me M mp !5/3 , (7.55) B = 3 5 G M2. (7.56) The equilibrium radius of the star, R∗, is that which minimizes the total energy E. In fact, it is easily demonstrated that R∗= 2 A B , (7.57) which yields R∗= (9π)2/3 8 ¯ h2 G me m 5/3 p M1/3. (7.58) The above formula can also be written R∗ R⊙ = 0.010 M⊙ M !1/3 , (7.59) where R⊙= 7 × 105 km is the solar radius, and M⊙= 2 × 1030 kg the solar mass. It follows that the radius of a typical solar mass white-dwarf is about 7000 km: i.e., about the same as the radius of the Earth. The first white-dwarf to be discovered (in 1862) was the companion of Sirius. 
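Equation (7.58) can be evaluated directly. The following sketch is illustrative; the SI constants and the solar mass M⊙ = 2 × 10^30 kg match the values used in the text:

```python
import math

# Equilibrium radius of a white-dwarf star, Eq. (7.58):
#   R* = [(9 pi)^(2/3) / 8] hbar^2 / (G m_e m_p^(5/3)) M^(-1/3).
# Constants are standard SI values; M_sun = 2e30 kg as in the text.

hbar  = 1.0546e-34   # J s
G     = 6.674e-11    # m^3 kg^-1 s^-2
m_e   = 9.109e-31    # kg
m_p   = 1.673e-27    # kg
M_sun = 2.0e30       # kg

def dwarf_radius(M):
    prefac = (9.0 * math.pi) ** (2.0 / 3.0) / 8.0
    return prefac * hbar ** 2 / (G * m_e * m_p ** (5.0 / 3.0) * M ** (1.0 / 3.0))

R = dwarf_radius(M_sun)
print(R / 1e3)  # ~7000 km, about the radius of the Earth
```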
Nowadays, thousands of white-dwarfs have been observed, all with properties similar to those described above. Note from Eqs. (7.52), (7.53), and (7.59) that ¯ E ∝M4/3. In other words, the mean energy of the electrons inside a white dwarf increases as the stellar mass increases. Hence, for a sufficiently massive white dwarf, the electrons can become relativistic. It turns out that the degeneracy pressure for relativistic electrons only scales as R−1, rather than R−2, and thus is unable to balance the gravitational pressure [which also scales as R−1—see Eq. (7.54)]. It follows that electron degeneracy pressure is only able to halt the collapse of a burnt-out star provided that the stellar mass does not exceed some critical value, known as the Chandrasekhar limit, which turns out to be about 1.4 times the mass of the Sun. Stars whose mass exceeds the Chandrasekhar limit inevitably collapse to produce extremely compact objects, such as neutron stars (which are held up by the degeneracy pressure of their constituent neutrons), or black holes. Exercises 1. Consider a particle of mass m moving in a three-dimensional isotropic harmonic oscillator potential of force constant k. Solve the problem via the separation of variables, and obtain an expression for the allowed values of the total energy of the system (in a stationary state). 2. Repeat the calculation of the Fermi energy of a gas of fermions by assuming that the fermions are massless, so that the energy-momentum relation is E = p c. 3. Calculate the density of states of an electron gas in a cubic box of volume L3, bearing in mind that there are two electrons per energy state. In other words, calculate the number of electron states in the interval E to E + dE. This number can be written dN = ρ(E) dE, where ρ is the density of states. 4. Repeat the above calculation for a two-dimensional electron gas in a square box of area L2. 5.
Given that the number density of free electrons in copper is 8.5 × 1028 m−3, calculate the Fermi energy in electron volts, and the velocity of an electron whose kinetic energy is equal to the Fermi energy. 6. Obtain an expression for the Fermi energy (in eV) of an electron in a white dwarf star as a function of the stellar mass (in solar masses). At what mass does the Fermi energy equal the rest mass energy? Orbital Angular Momentum 103 8 Orbital Angular Momentum 8.1 Introduction As is well-known, angular momentum plays a vitally important role in the classical descrip-tion of three-dimensional motion. Let us now investigate the role of angular momentum in the quantum mechanical description of such motion. 8.2 Angular Momentum Operators In classical mechanics, the vector angular momentum, L, of a particle of position vector r and linear momentum p is defined as L = r × p. (8.1) It follows that Lx = y pz −z py, (8.2) Ly = z px −x pz, (8.3) Lz = x py −y px. (8.4) Let us, first of all, consider whether it is possible to use the above expressions as the defini-tions of the operators corresponding to the components of angular momentum in quantum mechanics, assuming that the xi and pi (where x1 ≡x, p1 ≡px, x2 ≡y, etc.) correspond to the appropriate quantum mechanical position and momentum operators. The first point to note is that expressions (8.2)–(8.4) are unambiguous with respect to the order of the terms in multiplicative factors, since the various position and momentum operators ap-pearing in them all commute with one another [see Eqs. (7.17)]. Moreover, given that the xi and the pi are Hermitian operators, it is easily seen that the Li are also Hermitian. This is important, since only Hermitian operators can represent physical variables in quantum mechanics (see Sect. 4.6). We, thus, conclude that Eqs. (8.2)–(8.4) are plausible defini-tions for the quantum mechanical operators which represent the components of angular momentum. 
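As a quick sanity check of these definitions, a short numerical sketch (¯h = 1; the test function x + i y and the step size are illustrative choices) applies Lz = x py − y px, with pi represented by finite differences, to a simple state:

```python
# Numerical sketch (hbar = 1): apply L_z = x p_y - y p_x, with
# p_i = -i d/dx_i represented by central finite differences, to the
# test function psi = x + i y (an illustrative choice). It turns out
# that psi is an L_z eigenstate.

h = 1e-5  # finite-difference step (arbitrary small number)

def psi(x, y):
    return x + 1j * y

def Lz_psi(x, y):
    dpsi_dy = (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    dpsi_dx = (psi(x + h, y) - psi(x - h, y)) / (2 * h)
    return -1j * (x * dpsi_dy - y * dpsi_dx)

x0, y0 = 0.4, -1.1
print(Lz_psi(x0, y0))  # equals psi(x0, y0): eigenvalue m hbar with m = 1
```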
Let us now derive the commutation relations for the Li. For instance, [Lx, Ly] = [y pz −z py, z px −x pz] = y px [pz, z] + x py [z, pz] = i ¯ h (x py −y px) = i ¯ h Lz, (8.5) where use has been made of the definitions of the Li [see Eqs. (8.2)–(8.4)], and com-mutation relations (7.15)–(7.17) for the xi and pi. There are two similar commutation 104 QUANTUM MECHANICS relations: one for Ly and Lz, and one for Lz and Lx. Collecting all of these commutation relations together, we obtain [Lx, Ly] = i ¯ h Lz, (8.6) [Ly, Lz] = i ¯ h Lx, (8.7) [Lz, Lx] = i ¯ h Ly. (8.8) By analogy with classical mechanics, the operator L2, which represents the magnitude squared of the angular momentum vector, is defined L2 = L 2 x + L 2 y + L 2 z . (8.9) Now, it is easily demonstrated that if A and B are two general operators then [A2, B] = A [A, B] + [A, B] A. (8.10) Hence, [L2, Lx] = [L 2 y, Lx] + [L 2 z , Lx] = Ly [Ly, Lx] + [Ly, Lx] Ly + Lz [Lz, Lx] + [Lz, Lx] Lz = i ¯ h (−Ly Lz −Lz Ly + Lz Ly + Ly Lz) = 0, (8.11) where use has been made of Eqs. (8.6)–(8.8). In other words, L2 commutes with Lx. Likewise, it is easily demonstrated that L2 also commutes with Ly, and with Lz. Thus, [L2, Lx] = [L2, Ly] = [L2, Lz] = 0. (8.12) Recall, from Sect. 4.10, that in order for two physical quantities to be (exactly) mea-sured simultaneously, the operators which represent them in quantum mechanics must commute with one another. Hence, the commutation relations (8.6)–(8.8) and (8.12) imply that we can only simultaneously measure the magnitude squared of the angular mo-mentum vector, L2, together with, at most, one of its Cartesian components. By convention, we shall always choose to measure the z-component, Lz. Finally, it is helpful to define the operators L± = Lx ± i Ly. (8.13) Note that L+ and L−are not Hermitian operators, but are the Hermitian conjugates of one another (see Sect. 
4.6): i.e., (L±)† = L∓, (8.14) Moreover, it is easily seen that L+ L− = (Lx + i Ly) (Lx −i Ly) = L 2 x + L 2 y −i [Lx, Ly] = L 2 x + L 2 y + ¯ h Lz = L2 −L 2 z + ¯ h Lz. (8.15) Orbital Angular Momentum 105 Likewise, L−L+ = L2 −L 2 z −¯ h Lz, (8.16) giving [L+, L−] = 2 ¯ h Lz. (8.17) We also have [L+, Lz] = [Lx, Lz] + i [Ly, Lz] = −i ¯ h Ly −¯ h Lx = −¯ h L+, (8.18) and, similarly, [L−, Lz] = ¯ h L−. (8.19) 8.3 Representation of Angular Momentum Now, we saw earlier, in Sect. 7.2, that the operators, pi, which represent the Cartesian components of linear momentum in quantum mechanics, can be represented as the spa-tial differential operators −i ¯ h ∂/∂xi. Let us now investigate whether angular momentum operators can similarly be represented as spatial differential operators. It is most convenient to perform our investigation using conventional spherical polar coordinates: i.e., r, θ, and φ. These are defined with respect to our usual Cartesian coordi-nates as follows: x = r sin θ cos φ, (8.20) y = r sin θ sin φ, (8.21) z = r cos θ. (8.22) It follows, after some tedious analysis, that ∂ ∂x = sin θ cos φ ∂ ∂r + cos θ cos φ r ∂ ∂θ −sin φ r sin θ ∂ ∂φ, (8.23) ∂ ∂y = sin θ sin φ ∂ ∂r + cos θ sin φ r ∂ ∂θ + cos φ r sin θ ∂ ∂φ, (8.24) ∂ ∂z = cos θ ∂ ∂r −sin θ r ∂ ∂θ. (8.25) Making use of the definitions (8.2)–(8.4), (8.9), and (8.13), the fundamental represen-tation (7.12)–(7.14) of the pi operators as spatial differential operators, the Eqs. (8.20)– (8.25), and a great deal of tedious algebra, we finally obtain Lx = −i ¯ h −sin φ ∂ ∂θ −cos φ cot θ ∂ ∂φ ! , (8.26) Ly = −i ¯ h cos φ ∂ ∂θ −sin φ cot θ ∂ ∂φ ! , (8.27) Lz = −i ¯ h ∂ ∂φ, (8.28) 106 QUANTUM MECHANICS as well as L2 = −¯ h2 " 1 sin θ ∂ ∂θ sin θ ∂ ∂θ ! + 1 sin2 θ ∂2 ∂φ2 # , (8.29) and L± = ¯ h e±i φ ± ∂ ∂θ + i cot θ ∂ ∂φ ! . 
(8.30) We, thus, conclude that all of our angular momentum operators can be represented as dif-ferential operators involving the angular spherical coordinates, θ and φ, but not involving the radial coordinate, r. 8.4 Eigenstates of Angular Momentum Let us find the simultaneous eigenstates of the angular momentum operators Lz and L2. Since both of these operators can be represented as purely angular differential operators, it stands to reason that their eigenstates only depend on the angular coordinates θ and φ. Thus, we can write Lz Yl,m(θ, φ) = m ¯ h Yl,m(θ, φ), (8.31) L2 Yl,m(θ, φ) = l (l + 1) ¯ h 2 Yl,m(θ, φ). (8.32) Here, the Yl,m(θ, φ) are the eigenstates in question, whereas the dimensionless quantities m and l parameterize the eigenvalues of Lz and L2, which are m ¯ h and l (l + 1) ¯ h2, re-spectively. Of course, we expect the Yl,m to be both mutually orthogonal and properly normalized (see Sect. 4.9), so that I Y ∗ l′,m′(θ, φ) Yl,m(θ, φ) dΩ= δll′ δmm′, (8.33) where dΩ= sin θ dθ dφ is an element of solid angle, and the integral is over all solid angle. Now, Lz (L+ Yl,m) = (L+ Lz + [Lz, L+]) Yl,m = (L+ Lz + ¯ h L+) Yl,m = (m + 1) ¯ h (L+ Yl,m), (8.34) where use has been made of Eq. (8.18). We, thus, conclude that when the operator L+ operates on an eigenstate of Lz corresponding to the eigenvalue m ¯ h it converts it to an eigenstate corresponding to the eigenvalue (m + 1) ¯ h. Hence, L+ is known as the raising operator (for Lz). It is also easily demonstrated that Lz (L−Yl,m) = (m −1) ¯ h (L−Yl,m). (8.35) Orbital Angular Momentum 107 In other words, when L−operates on an eigenstate of Lz corresponding to the eigenvalue m ¯ h it converts it to an eigenstate corresponding to the eigenvalue (m −1) ¯ h. Hence, L−is known as the lowering operator (for Lz). Writing L+ Yl,m = c+ l,m Yl,m+1, (8.36) L−Yl,m = c− l,m Yl,m−1, (8.37) we obtain L−L+ Yl,m = c+ l,m c− l,m+1 Yl,m = [l (l + 1) −m (m + 1)] ¯ h2 Yl,m, (8.38) where use has been made of Eq. (8.16). 
Likewise, L+ L− Yl,m = c+l,m−1 c−l,m Yl,m = [l (l + 1) − m (m − 1)] ħ² Yl,m, (8.39) where use has been made of Eq. (8.15). It follows that c+l,m c−l,m+1 = [l (l + 1) − m (m + 1)] ħ², (8.40) c+l,m−1 c−l,m = [l (l + 1) − m (m − 1)] ħ². (8.41) These equations are satisfied when c±l,m = [l (l + 1) − m (m ± 1)]^{1/2} ħ. (8.42) Hence, we can write L+ Yl,m = [l (l + 1) − m (m + 1)]^{1/2} ħ Yl,m+1, (8.43) L− Yl,m = [l (l + 1) − m (m − 1)]^{1/2} ħ Yl,m−1. (8.44) 8.5 Eigenvalues of Lz It seems reasonable to attempt to write the eigenstate Yl,m(θ, φ) in the separable form Yl,m(θ, φ) = Θl,m(θ) Φm(φ). (8.45) We can satisfy the orthonormality constraint (8.33) provided that ∫ from 0 to π of Θ*l′,m′(θ) Θl,m(θ) sin θ dθ = δll′, (8.46) ∫ from 0 to 2π of Φ*m′(φ) Φm(φ) dφ = δmm′. (8.47) Note, from Eq. (8.28), that the differential operator which represents Lz only depends on the azimuthal angle φ, and is independent of the polar angle θ. It therefore follows from Eqs. (8.28), (8.31), and (8.45) that −i ħ dΦm/dφ = m ħ Φm. (8.48) The solution to this equation is Φm(φ) ∼ e^{i m φ}. (8.49) Here, the symbol ∼ just means that we are neglecting multiplicative constants. Now, our basic interpretation of a wavefunction as a quantity whose modulus squared represents the probability density of finding a particle at a particular point in space suggests that a physical wavefunction must be single-valued in space. Otherwise, the probability density at a given point would not, in general, have a unique value, which does not make physical sense. Hence, we demand that the wavefunction (8.49) be single-valued: i.e., Φm(φ + 2π) = Φm(φ) for all φ. This immediately implies that the quantity m is quantized. In fact, m can only take integer values. Thus, we conclude that the eigenvalues of Lz are also quantized, and take the values m ħ, where m is an integer.
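The single-valuedness argument can be illustrated numerically. The sketch below (my own, not from the text) uses the azimuthal eigenfunction e^{i m φ} with the conventional 1/√(2π) normalization derived in the next step: integer m gives orthonormal, single-valued eigenfunctions, while a half-integer m fails to return to its starting value after a 2π rotation:

```python
import numpy as np
from scipy.integrate import quad

def Phi(m, phi):
    # azimuthal eigenfunction e^{i m phi}, normalized by 1/sqrt(2 pi)
    return np.exp(1j * m * phi) / np.sqrt(2.0 * np.pi)

def overlap(m1, m2):
    # inner product over one full period, Eq. (8.47)
    f = lambda p: np.conj(Phi(m1, p)) * Phi(m2, p)
    re, _ = quad(lambda p: f(p).real, 0.0, 2.0 * np.pi)
    im, _ = quad(lambda p: f(p).imag, 0.0, 2.0 * np.pi)
    return re + 1j * im

assert abs(overlap(2, 2) - 1.0) < 1e-9     # properly normalized
assert abs(overlap(2, 3)) < 1e-9           # orthogonal for m != m'
# a non-integer m is not single-valued: Phi(phi + 2 pi) != Phi(phi)
assert not np.isclose(Phi(0.5, 0.0), Phi(0.5, 2.0 * np.pi))
print("integer m: single-valued and orthonormal")
```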
[A more rigorous argument is that Φm(φ) must be continuous in order to ensure that Lz is an Her-mitian operator, since the proof of hermiticity involves an integration by parts in φ that has canceling contributions from φ = 0 and φ = 2π.] Finally, we can easily normalize the eigenstate (8.49) by making use of the orthonor-mality constraint (8.47). We obtain Φm(φ) = e i m φ √ 2π . (8.50) This is the properly normalized eigenstate of Lz corresponding to the eigenvalue m ¯ h. 8.6 Eigenvalues of L2 Consider the angular wavefunction ψ(θ, φ) = L+ Yl,m(θ, φ). We know that I ψ∗(θ, φ) ψ(θ, φ) dΩ≥0, (8.51) since ψ∗ψ ≡|ψ|2 is a positive-definite real quantity. Hence, making use of Eqs. (4.58) and (8.14), we find that I (L+ Yl,m)∗(L+ Yl,m) dΩ = I Y ∗ l,m (L+)† (L+ Yl,m) dΩ = I Y ∗ l,m L−L+ Yl,m dΩ≥0. (8.52) It follows from Eqs. (8.16), and (8.31)–(8.33) that I Y ∗ l,m (L2 −L 2 z −¯ h Lz) Yl,m dΩ = I Y ∗ l,m ¯ h2 [l (l + 1) −m (m + 1)] Yl,m dΩ = ¯ h2 [l (l + 1) −m (m + 1)] I Y ∗ l,m Yl,m dΩ = ¯ h2 [l (l + 1) −m (m + 1)] ≥0. (8.53) Orbital Angular Momentum 109 We, thus, obtain the constraint l (l + 1) ≥m (m + 1). (8.54) Likewise, the inequality I (L−Yl,m)∗(L−Yl,m) dΩ= I Y ∗ l,m L+ L−Yl,m dΩ≥0 (8.55) leads to a second constraint: l (l + 1) ≥m (m −1). (8.56) Without loss of generality, we can assume that l ≥0. This is reasonable, from a physical standpoint, since l (l+1) ¯ h2 is supposed to represent the magnitude squared of something, and should, therefore, only take non-negative values. If l is non-negative then the con-straints (8.54) and (8.56) are equivalent to the following constraint: −l ≤m ≤l. (8.57) We, thus, conclude that the quantum number m can only take a restricted range of integer values. Well, if m can only take a restricted range of integer values then there must exist a lowest possible value it can take. Let us call this special value m−, and let Yl,m−be the corresponding eigenstate. Suppose we act on this eigenstate with the lowering operator L−. 
According to Eq. (8.35), this will have the effect of converting the eigenstate into that of a state with a lower value of m. However, no such state exists. A non-existent state is represented in quantum mechanics by the null wavefunction, ψ = 0. Thus, we must have L−Yl,m−= 0. (8.58) Now, from Eq. (8.15), L2 = L+ L−+ L 2 z −¯ h Lz (8.59) Hence, L2 Yl,m−= (L+ L−+ L 2 z −¯ h Lz) Yl,m−, (8.60) or l (l + 1) Yl,m−= m−(m−−1) Yl,m−, (8.61) where use has been made of (8.31), (8.32), and (8.58). It follows that l (l + 1) = m−(m−−1). (8.62) Assuming that m−is negative, the solution to the above equation is m−= −l. (8.63) 110 QUANTUM MECHANICS We can similarly show that the largest possible value of m is m+ = +l. (8.64) The above two results imply that l is an integer, since m−and m+ are both constrained to be integers. We can now formulate the rules which determine the allowed values of the quan-tum numbers l and m. The quantum number l takes the non-negative integer values 0, 1, 2, 3, · · ·. Once l is given, the quantum number m can take any integer value in the range −l, −l + 1, · · · 0, · · · , l −1, l. (8.65) Thus, if l = 0 then m can only take the value 0, if l = 1 then m can take the values −1, 0, +1, if l = 2 then m can take the values −2, −1, 0, +1, +2, and so on. 8.7 Spherical Harmonics The simultaneous eigenstates, Yl,m(θ, φ), of L2 and Lz are known as the spherical harmonics. Let us investigate their functional form. Now, we know that L+ Yl,l(θ, φ) = 0, (8.66) since there is no state for which m has a larger value than +l. Writing Yl,l(θ, φ) = Θl,l(θ) e i l φ (8.67) [see Eqs. (8.45) and (8.49)], and making use of Eq. (8.30), we obtain ¯ h e i φ ∂ ∂θ + i cot θ ∂ ∂φ ! Θl,l(θ) e i l φ = 0. (8.68) This equation yields dΘl,l dθ −l cot θ Θl,l = 0. (8.69) which can easily be solved to give Θl,l ∼(sin θ)l. (8.70) Hence, we conclude that Yl,l(θ, φ) ∼(sin θ)l e i l φ. (8.71) Likewise, it is easy to demonstrate that Yl,−l(θ, φ) ∼(sin θ)l e−i l φ. 
(8.72) Orbital Angular Momentum 111 Once we know Yl,l, we can obtain Yl,l−1 by operating on Yl,l with the lowering operator L−. Thus, Yl,l−1 ∼L−Yl,l ∼e−i φ −∂ ∂θ + i cot θ ∂ ∂φ ! (sin θ)l e i l φ, (8.73) where use has been made of Eq. (8.30). The above equation yields Yl,l−1 ∼e i (l−1) φ d dθ + l cot θ ! (sin θ)l. (8.74) Now, d dθ + l cot θ ! f(θ) ≡ 1 (sin θ)l d dθ h (sin θ)l f(θ) i , (8.75) where f(θ) is a general function. Hence, we can write Yl,l−1(θ, φ) ∼e i (l−1) φ (sin θ)l−1 1 sin θ d dθ ! (sin θ)2 l. (8.76) Likewise, we can show that Yl,−l+1(θ, φ) ∼L+ Yl,−l ∼e−i (l−1) φ (sin θ)l−1 1 sin θ d dθ ! (sin θ)2 l. (8.77) We can now obtain Yl,l−2 by operating on Yl,l−1 with the lowering operator. We get Yl,l−2 ∼L−Yl,l−1 ∼e−i φ −∂ ∂θ + i cot θ ∂ ∂φ ! e i (l−1) φ (sin θ)l−1 1 sin θ d dθ ! (sin θ)2 l, (8.78) which reduces to Yl,l−2 ∼e−i (l−2) φ " d dθ + (l −1) cot θ # 1 (sin θ)l−1 1 sin θ d dθ ! (sin θ)2 l. (8.79) Finally, making use of Eq. (8.75), we obtain Yl,l−2(θ, φ) ∼e i (l−2) φ (sin θ)l−2 1 sin θ d dθ !2 (sin θ)2 l. (8.80) Likewise, we can show that Yl,−l+2(θ, φ) ∼L+ Yl,−l+1 ∼e−i (l−2) φ (sin θ)l−2 1 sin θ d dθ !2 (sin θ)2 l. (8.81) A comparison of Eqs. (8.71), (8.76), and (8.80) reveals the general functional form of the spherical harmonics: Yl,m(θ, φ) ∼ e i m φ (sin θ)m 1 sin θ d dθ !l−m (sin θ)2 l. (8.82) 112 QUANTUM MECHANICS Figure 8.1: The |Yl,m(θ, φ)| 2 plotted as a functions of θ. The solid, short-dashed, and long-dashed curves correspond to l, m = 0, 0, and 1, 0, and 1, ±1, respectively. Here, m is assumed to be non-negative. Making the substitution u = cos θ, we can also write Yl,m(u, φ) ∼e i m φ (1 −u2)−m/2 d du !l−m (1 −u2)l. (8.83) Finally, it is clear from Eqs. (8.72), (8.77), and (8.81) that Yl,−m ∼Y ∗ l,m. (8.84) We now need to normalize our spherical harmonic functions so as to ensure that I |Yl,m(θ, φ)|2 dΩ= 1. 
(8.85) After a great deal of tedious analysis, the normalized spherical harmonic functions are found to take the form Yl,m(θ, φ) = (−1)m "2 l + 1 4π (l −m)! (l + m)! 1/2 Pl,m(cos θ) e i m φ (8.86) for m ≥0, where the Pl,m are known as associated Legendre polynomials, and are written Pl,m(u) = (−1)l+m (l + m)! (l −m)! (1 −u2)−m/2 2l l! d du !l−m (1 −u2)l (8.87) for m ≥0. Alternatively, Pl,m(u) = (−1)l (1 −u2)m/2 2l l! d du !l+m (1 −u2)l, (8.88) Orbital Angular Momentum 113 Figure 8.2: The |Yl,m(θ, φ)| 2 plotted as a functions of θ. The solid, short-dashed, and long-dashed curves correspond to l, m = 2, 0, and 2, ±1, and 2, ±2, respectively. for m ≥0. The spherical harmonics characterized by m < 0 can be calculated from those characterized by m > 0 via the identity Yl,−m = (−1)m Y ∗ l,m. (8.89) The spherical harmonics are orthonormal: i.e., I Y ∗ l′,m′ Yl,m dΩ= δll′ δmm′, (8.90) and also form a complete set. In other words, any function of θ and φ can be represented as a superposition of spherical harmonics. Finally, and most importantly, the spherical harmonics are the simultaneous eigenstates of Lz and L2 corresponding to the eigenvalues m ¯ h and l (l + 1) ¯ h2, respectively. All of the l = 0, l = 1, and l = 2 spherical harmonics are listed below: Y0,0 = 1 √ 4π, (8.91) Y1,0 = s 3 4π cos θ, (8.92) Y1,±1 = ∓ s 3 8π sin θ e±i φ, (8.93) Y2,0 = s 5 16π (3 cos2 θ −1), (8.94) 114 QUANTUM MECHANICS Y2,±1 = ∓ s 15 8π sin θ cos θ e±i φ, (8.95) Y2,±2 = s 15 32π sin2 θ e±2 i φ. (8.96) The θ variation of these functions is illustrated in Figs. 8.1 and 8.2. Exercises 1. A system is in the state ψ = Yl,m(θ, φ). Calculate ⟨Lx⟩and ⟨L 2 x ⟩. 2. Find the eigenvalues and eigenfunctions (in terms of the angles θ and φ) of Lx. 3. Consider a beam of particles with l = 1. A measurement of Lx yields the result ¯ h. What values will be obtained by a subsequent measurement of Lz, and with what probabilities? 
Repeat the calculation for the cases in which the measurement of Lx yields the results 0 and −¯ h. 4. The Hamiltonian for an axially symmetric rotator is given by H = L 2 x + L 2 y 2 I1 + L 2 z 2 I2 . What are the eigenvalues of H? Central Potentials 115 9 Central Potentials 9.1 Introduction In this chapter, we shall investigate the interaction of a non-relativistic particle of mass m and energy E with various so-called central potentials, V(r), where r = q x2 + y2 + z2 is the radial distance from the origin. It is, of course, most convenient to work in spherical polar coordinates—r, θ, φ—during such an investigation (see Sect. 8.3). Thus, we shall be searching for stationary wavefunctions, ψ(r, θ, φ), which satisfy the time-independent Schr¨ odinger equation (see Sect. 4.12) H ψ = E ψ, (9.1) where the Hamiltonian takes the standard non-relativistic form H = p2 2 m + V(r). (9.2) 9.2 Derivation of Radial Equation Now, we have seen that the Cartesian components of the momentum, p, can be represented as (see Sect. 7.2) pi = −i ¯ h ∂ ∂xi (9.3) for i = 1, 2, 3, where x1 ≡x, x2 ≡y, x3 ≡z, and r ≡(x1, x2, x3). Likewise, it is easily demonstrated, from the above expressions, and the basic definitions of the spherical polar coordinates [see Eqs. (8.20)–(8.25)], that the radial component of the momentum can be represented as pr ≡p · r r = −i ¯ h ∂ ∂r. (9.4) Recall that the angular momentum vector, L, is defined [see Eq. (8.1)] L = r × p. (9.5) This expression can also be written in the following form: Li = ǫijk xj pk. (9.6) Here, the ǫijk (where i, j, k all run from 1 to 3) are elements of the so-called totally anti-symmetric tensor. The values of the various elements of this tensor are determined via a simple rule: ǫijk =      0 if i, j, k not all different 1 if i, j, k are cyclic permutation of 1, 2, 3 −1 if i, j, k are anti-cyclic permutation of 1, 2, 3 . (9.7) 116 QUANTUM MECHANICS Thus, ǫ123 = ǫ231 = 1, ǫ321 = ǫ132 = −1, and ǫ112 = ǫ131 = 0, etc. 
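The rule (9.7) for the totally antisymmetric tensor, together with the standard contraction identity εijk εilm = δjl δkm − δjm δkl used in the derivation that follows, is easy to confirm numerically. A sketch (my own check, not part of the text):

```python
import numpy as np

# totally antisymmetric tensor, Eq. (9.7)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # cyclic permutations of 1, 2, 3
    eps[i, k, j] = -1.0   # anti-cyclic permutations

assert eps[0, 1, 2] == eps[1, 2, 0] == 1.0
assert eps[2, 1, 0] == eps[0, 2, 1] == -1.0
assert eps[0, 0, 1] == 0.0            # repeated indices give zero

# contraction identity: eps_ijk eps_ilm = delta_jl delta_km - delta_jm delta_kl
d = np.eye(3)
lhs = np.einsum('ijk,ilm->jklm', eps, eps)
rhs = np.einsum('jl,km->jklm', d, d) - np.einsum('jm,kl->jklm', d, d)
assert np.allclose(lhs, rhs)
print("epsilon-delta identity verified")
```

Here `einsum` implements the Einstein summation convention directly, summing the repeated index i.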
Equation (9.6) also makes use of the Einstein summation convention, according to which repeated indices are summed (from 1 to 3). For instance, ai bi ≡a1 b1 + a2 b2 + a3 b3. Making use of this convention, as well as Eq. (9.7), it is easily seen that Eqs. (9.5) and (9.6) are indeed equivalent. Let us calculate the value of L2 using Eq. (9.6). According to our new notation, L2 is the same as Li Li. Thus, we obtain L2 = ǫijk xj pk ǫilm xl pm = ǫijk ǫilm xj pk xl pm. (9.8) Note that we are able to shift the position of ǫilm because its elements are just numbers, and, therefore, commute with all of the xi and the pi. Now, it is easily demonstrated that ǫijk ǫilm ≡δjl δkm −δjm δkl. (9.9) Here δij is the usual Kronecker delta, whose elements are determined according to the rule δij =  1 if i and j the same 0 if i and j different . (9.10) It follows from Eqs. (9.8) and (9.9) that L2 = xi pj xi pj −xi pj xj pi. (9.11) Here, we have made use of the fairly self-evident result that δij ai bj ≡ai bi. We have also been careful to preserve the order of the various terms on the right-hand side of the above expression, since the xi and the pi do not necessarily commute with one another. We now need to rearrange the order of the terms on the right-hand side of Eq. (9.11). We can achieve this by making use of the fundamental commutation relation for the xi and the pi [see Eq. (7.17)]: [xi, pj] = i ¯ h δij. (9.12) Thus, L2 = xi (xi pj −[xi, pj]) pj −xi pj (pi xj + [xj, pi]) = xi xi pj pj −i ¯ h δij xi pj −xi pj pi xj −i ¯ h δij xi pj = xi xi pj pj −xi pi pj xj −2 i ¯ h xi pi. (9.13) Here, we have made use of the fact that pj pi = pi pj, since the pi commute with one another [see Eq. (7.16)]. Next, L2 = xi xi pj pj −xi pi (xj pj −[xj, pj]) −2 i ¯ h xi pi. (9.14) Now, according to (9.12), [xj, pj] ≡[x1, p1] + [x2, p2] + [x3, p3] = 3 i ¯ h. (9.15) Central Potentials 117 Hence, we obtain L2 = xi xi pj pj −xi pi xj pj + i ¯ h xi pi. 
(9.16) When expressed in more conventional vector notation, the above expression becomes L2 = r2 p2 −(r · p)2 + i ¯ h r · p. (9.17) Note that if we had attempted to derive the above expression directly from Eq. (9.5), using standard vector identities, then we would have missed the final term on the right-hand side. This term originates from the lack of commutation between the xi and pi operators in quantum mechanics. Of course, standard vector analysis assumes that all terms commute with one another. Equation (9.17) can be rearranged to give p2 = r−2 h (r · p)2 −i ¯ h r · p + L2i . (9.18) Now, r · p = r pr = −i ¯ h r ∂ ∂r, (9.19) where use has been made of Eq. (9.4). Hence, we obtain p2 = −¯ h2 "1 r ∂ ∂r r ∂ ∂r ! + 1 r ∂ ∂r − L2 ¯ h2 r2 # . (9.20) Finally, the above equation can be combined with Eq. (9.2) to give the following expression for the Hamiltonian: H = −¯ h2 2 m ∂2 ∂r2 + 2 r ∂ ∂r − L2 ¯ h2 r2 ! + V(r). (9.21) Let us now consider whether the above Hamiltonian commutes with the angular mo-mentum operators Lz and L2. Recall, from Sect. 8.3, that Lz and L2 are represented as differential operators which depend solely on the angular spherical polar coordinates, θ and φ, and do not contain the radial polar coordinate, r. Thus, any function of r, or any differential operator involving r (but not θ and φ), will automatically commute with L2 and Lz. Moreover, L2 commutes both with itself, and with Lz (see Sect. 8.2). It is, therefore, clear that the above Hamiltonian commutes with both Lz and L2. Now, according to Sect. 4.10, if two operators commute with one another then they possess simultaneous eigenstates. We thus conclude that for a particle moving in a central potential the eigenstates of the Hamiltonian are simultaneous eigenstates of Lz and L2. Now, we have already found the simultaneous eigenstates of Lz and L2—they are the spheri-cal harmonics, Yl,m(θ, φ), discussed in Sect. 8.7. 
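The operator identity (9.17), including the extra i ħ r·p term that naive vector algebra would miss, can be verified symbolically by applying both sides to an arbitrary test function. A sympy sketch (the particular test function is an arbitrary choice of mine):

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
X = [x, y, z]
psi = x * y * sp.exp(-(x**2 + 2*y**2 + 3*z**2))   # arbitrary smooth test function

def p(i, f):
    # momentum operator p_i = -i hbar d/dx_i
    return -sp.I * hbar * sp.diff(f, X[i])

def L(i, f):
    # L_i = eps_ijk x_j p_k, Eq. (9.6)
    j, k = (i + 1) % 3, (i + 2) % 3
    return X[j] * p(k, f) - X[k] * p(j, f)

def rdotp(f):
    # the operator r . p
    return sum(X[i] * p(i, f) for i in range(3))

lhs = sum(L(i, L(i, psi)) for i in range(3))                        # L^2 psi
rhs = ((x**2 + y**2 + z**2) * sum(p(i, p(i, psi)) for i in range(3))
       - rdotp(rdotp(psi)) + sp.I * hbar * rdotp(psi))              # Eq. (9.17)
assert sp.simplify(lhs - rhs) == 0
print("L^2 = r^2 p^2 - (r.p)^2 + i hbar r.p verified")
```

Dropping the final i ħ r·p term from `rhs` makes the assertion fail, which is the quantum-mechanical ordering effect noted in the text.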
It follows that the spherical harmonics are also eigenstates of the Hamiltonian. This observation leads us to try the following separable form for the stationary wavefunction: ψ(r, θ, φ) = R(r) Yl,m(θ, φ). (9.22) 118 QUANTUM MECHANICS It immediately follows, from (8.31) and (8.32), and the fact that Lz and L2 both obviously commute with R(r), that Lz ψ = m ¯ h ψ, (9.23) L2 ψ = l (l + 1) ¯ h2 ψ. (9.24) Recall that the quantum numbers m and l are restricted to take certain integer values, as explained in Sect. 8.6. Finally, making use of Eqs. (9.1), (9.21), and (9.24), we obtain the following differen-tial equation which determines the radial variation of the stationary wavefunction: −¯ h2 2 m d2 dr2 + 2 r d dr −l (l + 1) r2 ! Rn,l + V Rn,l = E Rn,l. (9.25) Here, we have labeled the function R(r) by two quantum numbers, n and l. The second quantum number, l, is, of course, related to the eigenvalue of L2. [Note that the azimuthal quantum number, m, does not appear in the above equation, and, therefore, does not influ-ence either the function R(r) or the energy, E.] As we shall see, the first quantum number, n, is determined by the constraint that the radial wavefunction be square-integrable. 9.3 Infinite Spherical Potential Well Consider a particle of mass m and energy E > 0 moving in the following simple central potential: V(r) = 0 for 0 ≤r ≤a ∞ otherwise . (9.26) Clearly, the wavefunction ψ is only non-zero in the region 0 ≤r ≤a. Within this re-gion, it is subject to the physical boundary conditions that it be well behaved (i.e., square-integrable) at r = 0, and that it be zero at r = a (see Sect. 5.2). Writing the wavefunction in the standard form ψ(r, θ, φ) = Rn,l(r) Yl,m(θ, φ), (9.27) we deduce (see previous section) that the radial function Rn,l(r) satisfies d2Rn,l dr2 + 2 r dRn,l dr + k2 −l (l + 1) r2 ! Rn,l = 0 (9.28) in the region 0 ≤r ≤a, where k2 = 2 m E ¯ h2 . 
(9.29) Defining the scaled radial variable z = k r, the above differential equation can be trans-formed into the standard form d2Rn,l dz2 + 2 z dRn,l dz + " 1 −l (l + 1) z2 # Rn,l = 0. (9.30) Central Potentials 119 Figure 9.1: The first few spherical Bessel functions. The solid, short-dashed, long-dashed, and dot-dashed curves show j0(z), j1(z), y0(z), and y1(z), respectively. The two independent solutions to this well-known second-order differential equation are called spherical Bessel functions,1 and can be written jl(z) = zl −1 z d dz !l sin z z ! , (9.31) yl(z) = −zl −1 z d dz !l cos z z  . (9.32) Thus, the first few spherical Bessel functions take the form j0(z) = sin z z , (9.33) j1(z) = sin z z2 −cos z z , (9.34) y0(z) = −cos z z , (9.35) y1(z) = −cos z z2 −sin z z . (9.36) These functions are also plotted in Fig. 9.1. It can be seen that the spherical Bessel func-tions are oscillatory in nature, passing through zero many times. However, the yl(z) func-tions are badly behaved (i.e., they are not square-integrable) at z = 0, whereas the jl(z) 1M. Abramowitz, and I.A. Stegun, Handbook of Mathematical Functions (Dover, New York NY, 1965), Sect. 10.1. 120 QUANTUM MECHANICS n = 1 n = 2 n = 3 n = 4 l = 0 3.142 6.283 9.425 12.566 l = 1 4.493 7.725 10.904 14.066 l = 2 5.763 9.095 12.323 15.515 l = 3 6.988 10.417 13.698 16.924 l = 4 8.183 11.705 15.040 18.301 Table 9.1: The first few zeros of the spherical Bessel function jl(z). functions are well behaved everywhere. It follows from our boundary condition at r = 0 that the yl(z) are unphysical, and that the radial wavefunction Rn,l(r) is thus proportional to jl(k r) only. In order to satisfy the boundary condition at r = a [i.e., Rn,l(a) = 0], the value of k must be chosen such that z = k a corresponds to one of the zeros of jl(z). Let us denote the nth zero of jl(z) as zn,l. It follows that k a = zn,l, (9.37) for n = 1, 2, 3, . . .. Hence, from (9.29), the allowed energy levels are En,l = z 2 n,l ¯ h2 2 m a2. 
(9.38) The first few values of zn,l are listed in Table 9.1. It can be seen that zn,l is an increasing function of both n and l. We are now in a position to interpret the three quantum numbers—n, l, and m—which determine the form of the wavefunction specified in Eq. (9.27). As is clear from Sect. 8, the azimuthal quantum number m determines the number of nodes in the wavefunction as the azimuthal angle φ varies between 0 and 2π. Thus, m = 0 corresponds to no nodes, m = 1 to a single node, m = 2 to two nodes, etc. Likewise, the polar quantum number l determines the number of nodes in the wavefunction as the polar angle θ varies between 0 and π. Again, l = 0 corresponds to no nodes, l = 1 to a single node, etc. Finally, the radial quantum number n determines the number of nodes in the wavefunction as the radial variable r varies between 0 and a (not counting any nodes at r = 0 or r = a). Thus, n = 1 corresponds to no nodes, n = 2 to a single node, n = 3 to two nodes, etc. Note that, for the case of an infinite potential well, the only restrictions on the values that the various quantum numbers can take are that n must be a positive integer, l must be a non-negative integer, and m must be an integer lying between −l and l. Note, further, that the allowed energy levels (9.38) only depend on the values of the quantum numbers n and l. Finally, it is easily demonstrated that the spherical Bessel functions are mutually orthogonal: i.e., Z a 0 jl(zn,l r/a) jl(zn′,l r/a) r2 dr = 0 (9.39) when n ̸= n′. Given that the Yl,m(θ, φ) are mutually orthogonal (see Sect. 8), this ensures that wavefunctions (9.27) corresponding to distinct sets of values of the quantum numbers n, l, and m are mutually orthogonal. 
9.4 Hydrogen Atom A hydrogen atom consists of an electron, of charge −e and mass me, and a proton, of charge +e and mass mp, moving in the Coulomb potential V(r) = −e²/(4π ε0 |r|), (9.40) where r is the position vector of the electron with respect to the proton. Now, according to the analysis in Sect. 6.4, this two-body problem can be converted into an equivalent one-body problem. In the latter problem, a particle of mass µ = me mp/(me + mp) (9.41) moves in the central potential V(r) = −e²/(4π ε0 r). (9.42) Note, however, that since me/mp ≃ 1/1836 the difference between me and µ is very small. Hence, in the following, we shall neglect this difference entirely. Writing the wavefunction in the usual form, ψ(r, θ, φ) = Rn,l(r) Yl,m(θ, φ), (9.43) it follows from Sect. 9.2 that the radial function Rn,l(r) satisfies −(ħ²/2 me) (d²/dr² + (2/r) d/dr − l (l + 1)/r²) Rn,l − (e²/(4π ε0 r) + E) Rn,l = 0. (9.44) Let r = a z, with a = [ħ²/(2 me (−E))]^{1/2} = (E0/E)^{1/2} a0, (9.45) where E0 and a0 are defined in Eqs. (9.57) and (9.58), respectively. Here, it is assumed that E < 0, since we are only interested in bound-states of the hydrogen atom. The above differential equation transforms to (d²/dz² + (2/z) d/dz − l (l + 1)/z² + ζ/z − 1) Rn,l = 0, (9.46) where ζ = 2 me a e²/(4π ε0 ħ²) = 2 (E0/E)^{1/2}. (9.47) Suppose that Rn,l(r) = Z(r/a) exp(−r/a)/(r/a). It follows that (d²/dz² − 2 d/dz − l (l + 1)/z² + ζ/z) Z = 0. (9.48) We now need to solve the above differential equation in the domain z = 0 to z = ∞, subject to the constraint that Rn,l(r) be square-integrable. Let us look for a power-law solution of the form Z(z) = Σk ck z^k. (9.49) Substituting this solution into Eq. (9.48), we obtain Σk ck [k (k − 1) z^{k−2} − 2 k z^{k−1} − l (l + 1) z^{k−2} + ζ z^{k−1}] = 0. (9.50) Equating the coefficients of z^{k−2} gives the recursion relation ck [k (k − 1) − l (l + 1)] = ck−1 [2 (k − 1) − ζ].
(9.51) Now, the power series (9.49) must terminate at small k, at some positive value of k, otherwise Z(z) behaves unphysically as z →0 [i.e., it yields an Rn,l(r) that is not square-integrable as r →0]. From the above recursion relation, this is only possible if [kmin (kmin− 1) −l (l + 1)] = 0, where the first term in the series is ckmin zkmin. There are two possi-bilities: kmin = −l or kmin = l + 1. However, the former possibility predicts unphysical behaviour of Z(z) at z = 0. Thus, we conclude that kmin = l + 1. Note that, since Rn,l(r) ≃Z(r/a)/(r/a) ≃(r/a)l at small r, there is a finite probability of finding the elec-tron at the nucleus for an l = 0 state, whereas there is zero probability of finding the electron at the nucleus for an l > 0 state [i.e., |ψ|2 = 0 at r = 0, except when l = 0]. For large values of z, the ratio of successive coefficients in the power series (9.49) is ck ck−1 = 2 k, (9.52) according to Eq. (9.51). This is the same as the ratio of successive coefficients in the power series X k (2 z)k k! , (9.53) which converges to exp(2 z). We conclude that Z(z) →exp(2 z) as z →∞. It thus fol-lows that Rn,l(r) ∼Z(r/a) exp(−r/a)/(r/a) →exp(r/a)/(r/a) as r →∞. This does not correspond to physically acceptable behaviour of the wavefunction, since R |ψ|2 dV must be finite. The only way in which we can avoid this unphysical behaviour is if the power series (9.49) terminates at some maximum value of k. According to the recursion relation (9.51), this is only possible if ζ 2 = n, (9.54) where n is an integer, and the last term in the series is cn zn. Since the first term in the series is cl+1 zl+1, it follows that n must be greater than l, otherwise there are no terms in the series at all. Finally, it is clear from Eqs. (9.45), (9.47), and (9.54) that E = E0 n2 (9.55) Central Potentials 123 and a = n a0, (9.56) where E0 = − me e4 2 (4π ǫ0)2 ¯ h2 = − e2 8π ǫ0 a0 = −13.6 eV, (9.57) and a0 = 4π ǫ0 ¯ h2 me e2 = 5.3 × 10−11 m. 
(9.58) Here, E0 is the energy of so-called ground-state (or lowest energy state) of the hydrogen atom, and the length a0 is known as the Bohr radius. Note that |E0| ∼α2 me c2, where α = e2/(4π ǫ0 ¯ h c) ≃1/137 is the dimensionless fine-structure constant. The fact that |E0| ≪me c2 is the ultimate justification for our non-relativistic treatment of the hydrogen atom. We conclude that the wavefunction of a hydrogen atom takes the form ψn,l,m(r, θ, φ) = Rn,l(r) Yl,m(θ, φ). (9.59) Here, the Yl,m(θ, φ) are the spherical harmonics (see Sect 8.7), and Rn,l(z = r/a) is the solution of 1 z2 d dz z2 d dz −l (l + 1) z2 + 2 n z −1 ! Rn,l = 0 (9.60) which varies as zl at small z. Furthermore, the quantum numbers n, l, and m can only take values which satisfy the inequality |m| ≤l < n, (9.61) where n is a positive integer, l a non-negative integer, and m an integer. Now, we expect the stationary states of the hydrogen atom to be orthonormal: i.e., Z ψ∗ n′,l′,m′ ψn,l,m dV = δnn′ δll′ δmm′, (9.62) where dV is a volume element, and the integral is over all space. Of course, dV = r2 dr dΩ, where dΩis an element of solid angle. Moreover, we already know that the spherical harmonics are orthonormal [see Eq. (8.90)]: i.e., I Y ∗ l′,m′ Yl,m dΩ= δll′ δmm′. (9.63) It, thus, follows that the radial wavefunction satisfies the orthonormality constraint Z ∞ 0 R∗ n′,l Rn,l r2 dr = δnn′. (9.64) 124 QUANTUM MECHANICS Figure 9.2: The a0 r2 |Rn,l(r)| 2 plotted as a functions of r/a0. The solid, short-dashed, and long-dashed curves correspond to n, l = 1, 0, and 2, 0, and 2, 1, respectively. The first few radial wavefunctions for the hydrogen atom are listed below: R1,0(r) = 2 a 3/2 0 exp  −r a0  , (9.65) R2,0(r) = 2 (2 a0)3/2  1 − r 2 a0  exp  −r 2 a0  , (9.66) R2,1(r) = 1 √ 3 (2 a0)3/2 r a0 exp  −r 2 a0  , (9.67) R3,0(r) = 2 (3 a0)3/2 1 −2 r 3 a0 + 2 r2 27 a 2 0 ! 
exp  −r 3 a0  , (9.68) R3,1(r) = 4 √ 2 9 (3 a0)3/2 r a0  1 − r 6 a0  exp  −r 3 a0  , (9.69) R3,2(r) = 2 √ 2 27 √ 5 (3 a0)3/2  r a0 2 exp  −r 3 a0  . (9.70) These functions are illustrated in Figs. 9.2 and 9.3. Given the (properly normalized) hydrogen wavefunction (9.59), plus our interpretation of |ψ|2 as a probability density, we can calculate ⟨rk⟩= Z ∞ 0 r2+k |Rn,l(r)| 2 dr, (9.71) Central Potentials 125 Figure 9.3: The a0 r2 |Rn,l(r)| 2 plotted as a functions of r/a0. The solid, short-dashed, and long-dashed curves correspond to n, l = 3, 0, and 3, 1, and 3, 2, respectively. where the angle-brackets denote an expectation value. For instance, it can be demon-strated (after much tedious algebra) that ⟨r2⟩ = a 2 0 n2 2 [5 n2 + 1 −3 l (l + 1)], (9.72) ⟨r⟩ = a0 2 [3 n2 −l (l + 1)], (9.73) 1 r + = 1 n2 a0 , (9.74) 1 r2 + = 1 (l + 1/2) n3 a 2 0 , (9.75) 1 r3 + = 1 l (l + 1/2) (l + 1) n3 a 3 0 . (9.76) According to Eq. (9.55), the energy levels of the bound-states of a hydrogen atom only depend on the radial quantum number n. It turns out that this is a special property of a 1/r potential. For a general central potential, V(r), the quantized energy levels of a bound-state depend on both n and l (see Sect. 9.3). The fact that the energy levels of a hydrogen atom only depend on n, and not on l and m, implies that the energy spectrum of a hydrogen atom is highly degenerate: i.e., there are many different states which possess the same energy. According to the inequality (9.61) (and the fact that n, l, and m are integers), for a given value of l, there are 2 l+1 different allowed values of m (i.e., −l, −l + 1, · · ·, l −1, l). Likewise, for a given value of n, there 126 QUANTUM MECHANICS are n different allowed values of l (i.e., 0, 1, · · ·, n−1). Now, all states possessing the same value of n have the same energy (i.e., they are degenerate). Hence, the total number of degenerate states corresponding to a given value of n is 1 + 3 + 5 + · · · + 2 (n −1) + 1 = n2. 
(9.77) Thus, the ground-state (n = 1) is not degenerate, the first excited state (n = 2) is four-fold degenerate, the second excited state (n = 3) is nine-fold degenerate, etc. [Actually, when we take into account the two spin states of an electron (see Sect. 10), the degeneracy of the nth energy level becomes 2 n2.] 9.5 Rydberg Formula An electron in a given stationary state of a hydrogen atom, characterized by the quantum numbers n, l, and m, should, in principle, remain in that state indefinitely. In practice, if the state is slightly perturbed—e.g., by interacting with a photon—then the electron can make a transition to another stationary state with different quantum numbers. Suppose that an electron in a hydrogen atom makes a transition from an initial state whose radial quantum number is ni to a final state whose radial quantum number is nf. According to Eq. (9.55), the energy of the electron will change by ∆E = E0 1 n 2 f −1 n 2 i ! . (9.78) If ∆E is negative then we would expect the electron to emit a photon of frequency ν = −∆E/h [see Eq. (3.32)]. Likewise, if ∆E is positive then the electron must absorb a photon of energy ν = ∆E/h. Given that λ−1 = ν/c, the possible wavelengths of the photons emitted by a hydrogen atom as its electron makes transitions between different energy levels are 1 λ = R 1 n 2 f −1 n 2 i ! , (9.79) where R = −E0 h c = me e4 (4π)3 ǫ 2 0 ¯ h3 c = 1.097 × 107 m−1. (9.80) Here, it is assumed that nf < ni. Note that the emission spectrum of hydrogen is quan-tized: i.e., a hydrogen atom can only emit photons with certain fixed set of wavelengths. Likewise, a hydrogen atom can only absorb photons which have the same fixed set of wavelengths. This set of wavelengths constitutes the characteristic emission/absorption spectrum of the hydrogen atom, and can be observed as “spectral lines” using a spectro-scope. Equation (9.79) is known as the Rydberg formula. Likewise, R is called the Rydberg con-stant. 
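As a quick numerical check of the Rydberg formula (9.79), the following sketch computes the first few Balmer-series wavelengths (nf = 2), which should land in the visible band:

```python
R = 1.097e7   # Rydberg constant, in m^-1, Eq. (9.80)

def wavelength(n_i, n_f):
    # Rydberg formula (9.79): 1/lambda = R (1/n_f^2 - 1/n_i^2), with n_f < n_i
    return 1.0 / (R * (1.0 / n_f**2 - 1.0 / n_i**2))

for n_i in (3, 4, 5):
    print(f"n = {n_i} -> 2: {wavelength(n_i, 2) * 1e9:.1f} nm")
# yields roughly 656.3, 486.2, and 434.1 nm: the visible
# H-alpha, H-beta, and H-gamma Balmer lines
```

Repeating the loop with n_f = 1 gives wavelengths near 122 nm and below, confirming that the Lyman series lies in the ultraviolet.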
The Rydberg formula was actually discovered empirically in the nineteenth century by spectroscopists, and was first explained theoretically by Bohr in 1913 using a primitive Central Potentials 127 version of quantum mechanics. Transitions to the ground-state (nf = 1) give rise to spec-tral lines in the ultraviolet band—this set of lines is called the Lyman series. Transitions to the first excited state (nf = 2) give rise to spectral lines in the visible band—this set of lines is called the Balmer series. Transitions to the second excited state (nf = 3) give rise to spectral lines in the infrared band—this set of lines is called the Paschen series, and so on. Exercises 1. A particle of mass m is placed in a finite spherical well: V(r) = −V0 for r ≤a 0 for r > a , with V0 > 0 and a > 0. Find the ground-state by solving the radial equation with l = 0. Show that there is no ground-state if V0 a2 < π2 ¯ h2/8 m. 2. Consider a particle of mass m in the three-dimensional harmonic oscillator potential V(r) = (1/2) m ω2 r2. Solve the problem by separation of variables in spherical polar coordinates, and, hence, determine the energy eigenvalues of the system. 3. The normalized wavefunction for the ground-state of a hydrogen-like atom (neutral hydro-gen, He+, Li++, etc.) with nuclear charge Z e has the form ψ = A exp(−β r), where A and β are constants, and r is the distance between the nucleus and the electron. Show the following: (a) A2 = β3/π. (b) β = Z/a0, where a0 = (¯ h2/me) (4π ǫ0/e2). (c) The energy is E = −Z2 E0 where E0 = (me/2 ¯ h2) (e2/4π ǫ0)2. (d) The expectation values of the potential and kinetic energies are 2 E and −E, respectively. (e) The expectation value of r is (3/2) (a0/Z). (f) The most probable value of r is a0/Z. 4. An atom of tritium is in its ground-state. 
Suddenly the nucleus decays into a helium nucleus, via the emission of a fast electron which leaves the atom without perturbing the extranuclear electron. Find the probability that the resulting He+ ion will be left in an n = 1, l = 0 state. Find the probability that it will be left in an n = 2, l = 0 state. What is the probability that the ion will be left in an l > 0 state? 5. Calculate the wavelengths of the photons emitted from the n = 2, l = 1 to n = 1, l = 0 transition in hydrogen, deuterium, and positronium. 6. To conserve linear momentum, an atom emitting a photon must recoil, which means that not all of the energy made available in the downward jump goes to the photon. Find a hydrogen atom's recoil energy when it emits a photon in an n = 2 to n = 1 transition. What fraction of the transition energy is the recoil energy?

10 Spin Angular Momentum

10.1 Introduction

Broadly speaking, a classical extended object (e.g., the Earth) can possess two types of angular momentum. The first type is due to the rotation of the object's center of mass about some fixed external point (e.g., the Sun)—this is generally known as orbital angular momentum. The second type is due to the object's internal motion—this is generally known as spin angular momentum (since, for a rigid object, the internal motion consists of spinning about an axis passing through the center of mass). By analogy, quantum particles can possess both orbital angular momentum due to their motion through space (see Cha. 8), and spin angular momentum due to their internal motion. Actually, the analogy with classical extended objects is not entirely accurate, since electrons, for instance, are structureless point particles. In fact, in quantum mechanics, it is best to think of spin angular momentum as a kind of intrinsic angular momentum possessed by particles.
It turns out that each type of elementary particle has a characteristic spin angular momentum, just as each type has a characteristic charge and mass. 10.2 Spin Operators Since spin is a type of angular momentum, it is reasonable to suppose that it possesses sim-ilar properties to orbital angular momentum. Thus, by analogy with Sect. 8.2, we would expect to be able to define three operators—Sx, Sy, and Sz—which represent the three Cartesian components of spin angular momentum. Moreover, it is plausible that these operators possess analogous commutation relations to the three corresponding orbital an-gular momentum operators, Lx, Ly, and Lz [see Eqs. (8.6)–(8.8)]. In other words, [Sx, Sy] = i ¯ h Sz, (10.1) [Sy, Sz] = i ¯ h Sx, (10.2) [Sz, Sx] = i ¯ h Sy. (10.3) We can represent the magnitude squared of the spin angular momentum vector by the operator S2 = S 2 x + S 2 y + S 2 z . (10.4) By analogy with the analysis in Sect. 8.2, it is easily demonstrated that [S2, Sx] = [S2, Sy] = [S2, Sz] = 0. (10.5) We thus conclude (see Sect. 4.10) that we can simultaneously measure the magnitude squared of the spin angular momentum vector, together with, at most, one Cartesian com-ponent. By convention, we shall always choose to measure the z-component, Sz. 130 QUANTUM MECHANICS By analogy with Eq. (8.13), we can define raising and lowering operators for spin angular momentum: S± = Sx ± i Sy. (10.6) If Sx, Sy, and Sz are Hermitian operators, as must be the case if they are to represent physical quantities, then S± are the Hermitian conjugates of one another: i.e., (S±)† = S∓. (10.7) Finally, by analogy with Sect. 8.2, it is easily demonstrated that S+ S− = S2 −S 2 z + ¯ h Sz, (10.8) S−S+ = S2 −S 2 z −¯ h Sz, (10.9) [S+, Sz] = −¯ h S+, (10.10) [S−, Sz] = +¯ h S−. (10.11) 10.3 Spin Space We now have to discuss the wavefunctions upon which the previously introduced spin op-erators act. Unlike regular wavefunctions, spin wavefunctions do not exist in real space. 
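The spin operator algebra just laid out can be verified concretely in the simplest nontrivial case. The sketch below (an illustration, not part of the text; it assumes NumPy, anticipates the spin-1/2 matrix representation of Sect. 10.5, and works in units where ħ = 1) builds the operators so that the commutators (10.1), (10.5), (10.8), and (10.10) can be checked by direct matrix multiplication:

```python
import numpy as np

# Spin-1/2 matrices in units where hbar = 1 (Pauli representation, Sect. 10.5).
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz     # Eq. (10.4)
Sp, Sm = Sx + 1j * Sy, Sx - 1j * Sy  # raising/lowering operators, Eq. (10.6)

def comm(A, B):
    """Commutator [A, B]."""
    return A @ B - B @ A
```

For example, `comm(Sx, Sy)` reproduces i Sz, and `Sp @ Sm` reproduces S² − Sz² + Sz, as Eqs. (10.1) and (10.8) require.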
Likewise, the spin angular momentum operators cannot be represented as differential op-erators in real space. Instead, we need to think of spin wavefunctions as existing in an abstract (complex) vector space. The different members of this space correspond to the different internal configurations of the particle under investigation. Note that only the directions of our vectors have any physical significance (just as only the shape of a regular wavefunction has any physical significance). Thus, if the vector χ corresponds to a partic-ular internal state then c χ corresponds to the same state, where c is a complex number. Now, we expect the internal states of our particle to be superposable, since the superpos-ability of states is one of the fundamental assumptions of quantum mechanics. It follows that the vectors making up our vector space must also be superposable. Thus, if χ1 and χ2 are two vectors corresponding to two different internal states then c1 χ1 + c2 χ2 is another vector corresponding to the state obtained by superposing c1 times state 1 with c2 times state 2 (where c1 and c2 are complex numbers). Finally, the dimensionality of our vector space is simply the number of linearly independent vectors required to span it (i.e., the number of linearly independent internal states of the particle under investigation). We now need to define the length of our vectors. We can do this by introducing a second, or dual, vector space whose elements are in one to one correspondence with the elements of our first space. Let the element of the second space which corresponds to the element χ of the first space be called χ†. Moreover, the element of the second space which corresponds to c χ is c∗χ†. We shall assume that it is possible to combine χ and χ† in a multiplicative fashion to generate a real positive-definite number which we interpret as the length, or norm, of χ. Let us denote this number χ† χ. Thus, we have χ† χ ≥0 (10.12) Spin Angular Momentum 131 for all χ. 
We shall also assume that it is possible to combine unlike states in an analogous multiplicative fashion to produce complex numbers. The product of two unlike states χ and χ′ is denoted χ† χ′. Two states χ and χ′ are said to be mutually orthogonal, or independent, if χ† χ′ = 0. Now, when a general spin operator, A, operates on a general spin-state, χ, it converts it into a different spin-state which we shall denote A χ. The dual of this state is (A χ)† ≡ χ† A†, where A† is the Hermitian conjugate of A (this is the definition of an Hermitian conjugate in spin space). An eigenstate of A corresponding to the eigenvalue a satisfies

A χa = a χa. (10.13)

As before, if A corresponds to a physical variable then a measurement of A will result in one of its eigenvalues (see Sect. 4.10). In order to ensure that these eigenvalues are all real, A must be Hermitian: i.e., A† = A (see Sect. 4.9). We expect the χa to be mutually orthogonal. We can also normalize them such that they all have unit length. In other words,

χa† χa′ = δaa′. (10.14)

Finally, a general spin state can be written as a superposition of the normalized eigenstates of A: i.e.,

χ = Σa ca χa. (10.15)

A measurement of χ will then yield the result a with probability |ca|².

10.4 Eigenstates of Sz and S²

Since the operators Sz and S² commute, they must possess simultaneous eigenstates (see Sect. 4.10). Let these eigenstates take the form [see Eqs. (8.31) and (8.32)]:

Sz χs,ms = ms ħ χs,ms, (10.16)

S² χs,ms = s (s + 1) ħ² χs,ms. (10.17)

Now, it is easily demonstrated, from the commutation relations (10.10) and (10.11), that

Sz (S+ χs,ms) = (ms + 1) ħ (S+ χs,ms), (10.18)

and

Sz (S− χs,ms) = (ms − 1) ħ (S− χs,ms). (10.19)

Thus, S+ and S− are indeed the raising and lowering operators, respectively, for spin angular momentum (see Sect. 8.4). The eigenstates of Sz and S² are assumed to be orthonormal: i.e.,

χs,ms† χs′,ms′ = δss′ δms ms′. (10.20)

Consider the wavefunction χ = S+ χs,ms.
Since we know, from Eq. (10.12), that χ† χ ≥ 0, it follows that (S+ χs,ms)† (S+ χs,ms) = χ† s,ms S† + S+ χs,ms = χ† s,ms S−S+ χs,ms ≥0, (10.21) where use has been made of Eq. (10.7). Equations (10.9), (10.16), (10.17), and (10.20) yield s (s + 1) ≥ms (ms + 1). (10.22) Likewise, if χ = S−χs,ms then we obtain s (s + 1) ≥ms (ms −1). (10.23) Assuming that s ≥0, the above two inequalities imply that −s ≤ms ≤s. (10.24) Hence, at fixed s, there is both a maximum and a minimum possible value that ms can take. Let ms min be the minimum possible value of ms. It follows that (see Sect. 8.6) S−χs,ms min = 0. (10.25) Now, from Eq. (10.8), S2 = S+ S−+ S 2 z −¯ h Sz. (10.26) Hence, S2 χs,ms min = (S+ S−+ S 2 z −¯ h Sz) χs,ms min, (10.27) giving s (s + 1) = ms min (ms min −1). (10.28) Assuming that ms min < 0, this equation yields ms min = −s. (10.29) Likewise, it is easily demonstrated that ms max = +s. (10.30) Moreover, S−χs,−s = S+ χs,s = 0. (10.31) Now, the raising operator S+, acting upon χs,−s, converts it into some multiple of χs,−s+1. Employing the raising operator a second time, we obtain a multiple of χs,−s+2. However, this process cannot continue indefinitely, since there is a maximum possible value of ms. Indeed, after acting upon χs,−s a sufficient number of times with the raising operator S+, we must obtain a multiple of χs,s, so that employing the raising operator one more time Spin Angular Momentum 133 leads to the null state [see Eq. (10.31)]. If this is not the case then we will inevitably obtain eigenstates of Sz corresponding to ms > s, which we have already demonstrated is impossible. It follows, from the above argument, that ms max −ms min = 2 s = k, (10.32) where k is a positive integer. Hence, the quantum number s can either take positive integer or positive half-integer values. Up to now, our analysis has been very similar to that which we used earlier to investigate orbital angular momentum (see Sect. 8). 
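The finite ladder just derived can be made concrete for s = 1. The following sketch (illustrative, not from the text; NumPy assumed, ħ = 1, basis ordered ms = +1, 0, −1) builds the spin-1 matrices and walks the raising operator from the bottom state χ_{1,−1} up to the top state χ_{1,+1}:

```python
import numpy as np

# Spin-1 matrices, basis ordered ms = +1, 0, -1, in units where hbar = 1.
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sp = np.sqrt(2) * np.array([[0, 1, 0],
                            [0, 0, 1],
                            [0, 0, 0]], dtype=complex)  # raising operator S+
Sm = Sp.conj().T                                        # lowering operator S-
S2 = Sp @ Sm + Sz @ Sz - Sz                             # Eq. (10.26)

bottom = np.array([0, 0, 1], dtype=complex)  # chi_{1,-1}
top    = np.array([1, 0, 0], dtype=complex)  # chi_{1,+1}
```

Applying S+ twice to the bottom state reaches the top state, and one further application annihilates it: exactly the finite ladder, with 2s steps, that forces 2s to be a non-negative integer.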
Recall, that for or-bital angular momentum the quantum number m, which is analogous to ms, is restricted to take integer values (see Cha. 8.5). This implies that the quantum number l, which is analogous to s, is also restricted to take integer values. However, the origin of these re-strictions is the representation of the orbital angular momentum operators as differential operators in real space (see Sect. 8.3). There is no equivalent representation of the corre-sponding spin angular momentum operators. Hence, we conclude that there is no reason why the quantum number s cannot take half-integer, as well as integer, values. In 1940, Wolfgang Pauli proved the so-called spin-statistics theorem using relativistic quantum mechanics. According to this theorem, all fermions possess half-integer spin (i.e., a half-integer value of s), whereas all bosons possess integer spin (i.e., an integer value of s). In fact, all presently known fermions, including electrons and protons, possess spin one-half. In other words, electrons and protons are characterized by s = 1/2 and ms = ±1/2. 10.5 Pauli Representation Let us denote the two independent spin eigenstates of an electron as χ± ≡χ1/2,±1/2. (10.33) It thus follows, from Eqs. (10.16) and (10.17), that Sz χ± = ±1 2 ¯ h χ±, (10.34) S2 χ± = 3 4 ¯ h2 χ±. (10.35) Note that χ+ corresponds to an electron whose spin angular momentum vector has a pos-itive component along the z-axis. Loosely speaking, we could say that the spin vector points in the +z-direction (or its spin is “up”). Likewise, χ−corresponds to an electron whose spin points in the −z-direction (or whose spin is “down”). These two eigenstates satisfy the orthonormality requirements χ† + χ+ = χ† −χ−= 1, (10.36) 134 QUANTUM MECHANICS and χ† + χ−= 0. (10.37) A general spin state can be represented as a linear combination of χ+ and χ−: i.e., χ = c+ χ+ + c−χ−. (10.38) It is thus evident that electron spin space is two-dimensional. 
Up to now, we have discussed spin space in rather abstract terms. In the following, we shall describe a particular representation of electron spin space due to Pauli. This so-called Pauli representation allows us to visualize spin space, and also facilitates calculations involving spin. Let us attempt to represent a general spin state as a complex column vector in some two-dimensional space: i.e., χ ≡ c+ c− ! . (10.39) The corresponding dual vector is represented as a row vector: i.e., χ† ≡(c∗ +, c∗ −). (10.40) Furthermore, the product χ† χ is obtained according to the ordinary rules of matrix multi-plication: i.e., χ† χ = (c∗ +, c∗ −) c+ c− ! = c∗ + c+ + c∗ −c−= |c+|2 + |c−|2 ≥0. (10.41) Likewise, the product χ† χ′ of two different spin states is also obtained from the rules of matrix multiplication: i.e., χ† χ′ = (c∗ +, c∗ −) c′ + c′ − ! = c∗ + c′ + + c∗ −c′ −. (10.42) Note that this particular representation of spin space is in complete accordance with the discussion in Sect. 10.3. For obvious reasons, a vector used to represent a spin state is generally known as spinor. A general spin operator A is represented as a 2 × 2 matrix which operates on a spinor: i.e., A χ ≡ A11, A12 A21, A22 ! c+ c− ! . (10.43) As is easily demonstrated, the Hermitian conjugate of A is represented by the transposed complex conjugate of the matrix used to represent A: i.e., A† ≡ A∗ 11, A∗ 21 A∗ 12, A∗ 22 ! . (10.44) Spin Angular Momentum 135 Let us represent the spin eigenstates χ+ and χ−as χ+ ≡ 1 0 ! , (10.45) and χ−≡ 0 1 ! , (10.46) respectively. Note that these forms automatically satisfy the orthonormality constraints (10.36) and (10.37). It is convenient to write the spin operators Si (where i = 1, 2, 3 corresponds to x, y, z) as Si = ¯ h 2 σi. (10.47) Here, the σi are dimensionless 2 × 2 matrices. According to Eqs. (10.1)–(10.3), the σi satisfy the commutation relations [σx, σy] = 2 i σz, (10.48) [σy, σz] = 2 i σx, (10.49) [σz, σx] = 2 i σy. (10.50) Furthermore, Eq. 
(10.34) yields σz χ± = ±χ±. (10.51) It is easily demonstrated, from the above expressions, that the σi are represented by the following matrices: σx ≡ 0, 1 1, 0 ! , (10.52) σy ≡ 0, −i i, 0 ! , (10.53) σz ≡ 1, 0 0, −1 ! . (10.54) Incidentally, these matrices are generally known as the Pauli matrices. Finally, a general spinor takes the form χ = c+ χ+ + c−χ−= c+ c− ! . (10.55) If the spinor is properly normalized then χ† χ = |c+|2 + |c−|2 = 1. (10.56) In this case, we can interpret |c+|2 as the probability that an observation of Sz will yield the result +¯ h/2, and |c−|2 as the probability that an observation of Sz will yield the result −¯ h/2. 136 QUANTUM MECHANICS 10.6 Spin Precession According to classical physics, a small current loop possesses a magnetic moment of mag-nitude µ = I A, where I is the current circulating around the loop, and A the area of the loop. The direction of the magnetic moment is conventionally taken to be normal to the plane of the loop, in the sense given by a standard right-hand circulation rule. Consider a small current loop consisting of an electron in uniform circular motion. It is easily demon-strated that the electron’s orbital angular momentum L is related to the magnetic moment µ of the loop via µ = − e 2 me L, (10.57) where e is the magnitude of the electron charge, and me the electron mass. The above expression suggests that there may be a similar relationship between mag-netic moment and spin angular momentum. We can write µ = −g e 2 me S, (10.58) where g is called the gyromagnetic ratio. Classically, we would expect g = 1. In fact, g = 2  1 + α 2π + · · ·  = 2.0023192, (10.59) where α = e2/(2 ǫ0 h c) ≃1/137 is the so-called fine-structure constant. The fact that the gyromagnetic ratio is (almost) twice that expected from classical physics is only explicable using relativistic quantum mechanics. Furthermore, the small corrections to the relativistic result g = 2 come from quantum field theory. 
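The gyromagnetic ratio formula quoted above is easy to check numerically. The sketch below (not from the text; the SI values of the fundamental constants are assumed, not quoted in the notes) evaluates the fine-structure constant and the first-order correction to g:

```python
import math

# Assumed SI values for the fundamental constants (not quoted in the text).
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
h    = 6.62607015e-34    # Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s

alpha = e**2 / (2 * eps0 * h * c)    # fine-structure constant, ~1/137
g = 2 * (1 + alpha / (2 * math.pi))  # first two terms of the g-factor series
```

The first-order result, about 2.002323, already agrees with the quoted 2.0023192 to within a few parts per million; the small residual comes from the higher-order terms indicated by the ellipsis.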
The energy of a classical magnetic moment µ in a uniform magnetic field B is H = −µ · B. (10.60) Assuming that the above expression also holds good in quantum mechanics, the Hamilto-nian of an electron in a z-directed magnetic field of magnitude B takes the form H = ΩSz, (10.61) where Ω= g e B 2 me . (10.62) Here, for the sake of simplicity, we are neglecting the electron’s translational degrees of freedom. Schr¨ odinger’s equation can be written [see Eq. (4.63)] i ¯ h ∂χ ∂t = H χ, (10.63) Spin Angular Momentum 137 where the spin state of the electron is characterized by the spinor χ. Adopting the Pauli representation, we obtain χ = c+(t) c−(t) ! , (10.64) where |c+|2 + |c−|2 = 1. Here, |c+|2 is the probability of observing the spin-up state, and |c−|2 the probability of observing the spin-down state. It follows from Eqs. (10.47), (10.54), (10.61), (10.63), and (10.64) that i ¯ h ˙ c+ ˙ c− ! = Ω¯ h 2 1, 0 0, −1 ! c+ c− ! , (10.65) where ˙ ≡d/dt. Hence, ˙ c± = ∓i Ω 2 c±. (10.66) Let c+(0) = cos(α/2), (10.67) c−(0) = sin(α/2). (10.68) The significance of the angle α will become apparent presently. Solving Eq. (10.66), sub-ject to the initial conditions (10.67) and (10.68), we obtain c+(t) = cos(α/2) exp(−i Ωt/2), (10.69) c−(t) = sin(α/2) exp(+i Ωt/2). (10.70) We can most easily visualize the effect of the time dependence in the above expressions for c± by calculating the expectation values of the three Cartesian components of the electron’s spin angular momentum. By analogy with Eq. (4.56), the expectation value of a general spin operator A is simply ⟨A⟩= χ† A χ. (10.71) Hence, the expectation value of Sz is ⟨Sz⟩= ¯ h 2 (c∗ +, c∗ −) 1, 0 0, −1 ! c+ c− ! , (10.72) which reduces to ⟨Sz⟩= ¯ h 2 cos α (10.73) with the help of Eqs. (10.69) and (10.70). Likewise, the expectation value of Sx is ⟨Sx⟩= ¯ h 2 (c∗ +, c∗ −) 0, 1 1, 0 ! c+ c− ! , (10.74) 138 QUANTUM MECHANICS which reduces to ⟨Sx⟩= ¯ h 2 sin α cos(Ωt). 
(10.75) Finally, the expectation value of Sy is ⟨Sy⟩= ¯ h 2 sin α sin(Ωt). (10.76) According to Eqs. (10.73), (10.75), and (10.76), the expectation value of the spin angular momentum vector subtends a constant angle α with the z-axis, and precesses about this axis at the frequency Ω≃e B me . (10.77) This behaviour is actually equivalent to that predicted by classical physics. Note, however, that a measurement of Sx, Sy, or Sz will always yield either +¯ h/2 or −¯ h/2. It is the relative probabilities of obtaining these two results which varies as the expectation value of a given component of the spin varies. Exercises 1. Find the Pauli representations of Sx, Sy, and Sz for a spin-1 particle. 2. Find the Pauli representations of the normalized eigenstates of Sx and Sy for a spin-1/2 particle. 3. Suppose that a spin-1/2 particle has a spin vector which lies in the x-z plane, making an angle θ with the z-axis. Demonstrate that a measurement of Sz yields ¯ h/2 with probability cos2(θ/2), and −¯ h/2 with probability sin2(θ/2). 4. An electron is in the spin-state χ = A 1 −2 i 2 ! in the Pauli representation. Determine the constant A by normalizing χ. If a measurement of Sz is made, what values will be obtained, and with what probabilities? What is the expecta-tion value of Sz? Repeat the above calculations for Sx and Sy. 5. Consider a spin-1/2 system represented by the normalized spinor χ = cos α sin α exp( i β) ! in the Pauli representation, where α and β are real. What is the probability that a measure-ment of Sy yields −¯ h/2? 6. An electron is at rest in an oscillating magnetic field B = B0 cos(ω t) ez, where B0 and ω are real positive constants. Spin Angular Momentum 139 (a) Find the Hamiltonian of the system. (b) If the electron starts in the spin-up state with respect to the x-axis, determine the spinor χ(t) which represents the state of the system in the Pauli representation at all subse-quent times. 
(c) Find the probability that a measurement of Sx yields the result −¯ h/2 as a function of time. (d) What is the minimum value of B0 required to force a complete flip in Sx? 140 QUANTUM MECHANICS Addition of Angular Momentum 141 11 Addition of Angular Momentum 11.1 Introduction Consider an electron in a hydrogen atom. As we have already seen, the electron’s motion through space is parameterized by the three quantum numbers n, l, and m (see Sect. 9.4). To these we must now add the two quantum numbers s and ms which parameterize the electron’s internal motion (see the previous chapter). Now, the quantum numbers l and m specify the electron’s orbital angular momentum vector, L, (as much as it can be specified) whereas the quantum numbers s and ms specify its spin angular momentum vector, S. But, if the electron possesses both orbital and spin angular momentum then what is its total angular momentum? 11.2 General Principles The three basic orbital angular momentum operators, Lx, Ly, and Lz, obey the commutation relations (8.6)–(8.8), which can be written in the convenient vector form: L × L = i ¯ h L. (11.1) Likewise, the three basic spin angular momentum operators, Sx, Sy, and Sz, obey the commutation relations (10.1)–(10.3), which can also be written in vector form: i.e., S × S = i ¯ h S. (11.2) Now, since the orbital angular momentum operators are associated with the electron’s motion through space, whilst the spin angular momentum operators are associated with its internal motion, and these two types of motion are completely unrelated (i.e., they correspond to different degrees of freedom—see Sect. 6.2), it is reasonable to suppose that the two sets of operators commute with one another: i.e., [Li, Sj] = 0, (11.3) where i, j = 1, 2, 3 corresponds to x, y, z. Let us now consider the electron’s total angular momentum vector J = L + S. (11.4) We have J × J = (L + S) × (L + S) = L × L + S × S + L × S + S × L = L × L + S × S = i ¯ h L + i ¯ h S = i ¯ h J. 
(11.5) 142 QUANTUM MECHANICS In other words, J × J = i ¯ h J. (11.6) It is thus evident that the three basic total angular momentum operators, Jx, Jy, and Jz, obey analogous commutation relations to the corresponding orbital and spin angular momen-tum operators. It therefore follows that the total angular momentum has similar properties to the orbital and spin angular momenta. For instance, it is only possible to simultaneously measure the magnitude squared of the total angular momentum vector, J2 = J 2 x + J 2 y + J 2 z , (11.7) together with a single Cartesian component. By convention, we shall always choose to measure Jz. A simultaneous eigenstate of Jz and J2 satisfies Jz ψj,mj = mj ¯ h ψj,mj, (11.8) J2 ψj,mj = j (j + 1) ¯ h 2 ψj,mj, (11.9) where the quantum number j can take positive integer, or half-integer, values, and the quantum number mj is restricted to the following range of values: −j, −j + 1, · · ·, j −1, j. (11.10) Now J2 = (L + S) · (L + S) = L2 + S2 + 2 L · S, (11.11) which can also be written as J2 = L2 + S2 + 2 Lz Sz + L+ S−+ L−S+. (11.12) We know that the operator L2 commutes with itself, with all of the Cartesian components of L (and, hence, with the raising and lowering operators L±), and with all of the spin angular momentum operators (see Sect. 8.2). It is therefore clear that [J2, L2] = 0. (11.13) A similar argument allows us to also conclude that [J2, S2] = 0. (11.14) Now, the operator Lz commutes with itself, with L2, with all of the spin angular momentum operators, but not with the raising and lowering operators L± (see Sect. 8.2). It follows that [J2, Lz] ̸= 0. (11.15) Likewise, we can also show that [J2, Sz] ̸= 0. (11.16) Addition of Angular Momentum 143 Finally, we have Jz = Lz + Sz, (11.17) where [Jz, Lz] = [Jz, Sz] = 0. Recalling that only commuting operators correspond to physical quantities which can be simultaneously measured (see Sect. 
4.10), it follows, from the above discussion, that there are two alternative sets of physical variables associated with angular momentum which we can measure simultaneously. The first set correspond to the operators L2, S2, Lz, Sz, and Jz. The second set correspond to the operators L2, S2, J2, and Jz. In other words, we can always measure the magnitude squared of the orbital and spin angular momentum vectors, together with the z-component of the total angular momentum vector. In addition, we can either choose to measure the z-components of the orbital and spin angular momentum vectors, or the magnitude squared of the total angular momentum vector. Let ψ(1) l,s;m,ms represent a simultaneous eigenstate of L2, S2, Lz, and Sz corresponding to the following eigenvalues: L2 ψ(1) l,s;m,ms = l (l + 1) ¯ h2 ψ(1) l,s;m,ms, (11.18) S2 ψ(1) l,s;m,ms = s (s + 1) ¯ h2 ψ(1) l,s;m,ms, (11.19) Lz ψ(1) l,s;m,ms = m ¯ h ψ(1) l,s;m,ms, (11.20) Sz ψ(1) l,s;m,ms = ms ¯ h ψ(1) l,s;m,ms. (11.21) It is easily seen that Jz ψ(1) l,s;m,ms = (Lz + Sz) ψ(1) l,s;m,ms = (m + ms) ¯ h ψ(1) l,s;m,ms = mj ¯ h ψ(1) l,s;m,ms. (11.22) Hence, mj = m + ms. (11.23) In other words, the quantum numbers controlling the z-components of the various angular momentum vectors can simply be added algebraically. Finally, let ψ(2) l,s;j,mj represent a simultaneous eigenstate of L2, S2, J2, and Jz correspond-ing to the following eigenvalues: L2 ψ(2) l,s;j,mj = l (l + 1) ¯ h2 ψ(2) l,s;j,mj, (11.24) S2 ψ(2) l,s;j,mj = s (s + 1) ¯ h2 ψ(2) l,s;j,mj, (11.25) J2 ψ(2) l,s;j,mj = j (j + 1) ¯ h2 ψ(2) l,s;j,mj, (11.26) Jz ψ(2) l,s;j,mj = mj ¯ h ψ(2) l,s;j,mj. (11.27) 144 QUANTUM MECHANICS 11.3 Angular Momentum in the Hydrogen Atom In a hydrogen atom, the wavefunction of an electron in a simultaneous eigenstate of L2 and Lz has an angular dependence specified by the spherical harmonic Yl,m(θ, φ) (see Sect. 8.7). 
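The two alternative commuting sets identified above can be exhibited explicitly for l = 1 and s = 1/2. The sketch below (illustrative, not from the text; NumPy assumed, ħ = 1) builds J = L + S on the six-dimensional product space, so one can confirm that J² commutes with Jz but not with Lz, and that its eigenvalues correspond to j = 3/2 (four states) and j = 1/2 (two states):

```python
import numpy as np

# l = 1 orbital matrices (basis m = +1, 0, -1) and spin-1/2 matrices, hbar = 1.
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Lp = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Lx, Ly = (Lp + Lp.conj().T) / 2, (Lp - Lp.conj().T) / 2j
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# J = L + S acts on the product space |m> (x) |ms>.
I2, I3 = np.eye(2), np.eye(3)
Jx = np.kron(Lx, I2) + np.kron(I3, Sx)
Jy = np.kron(Ly, I2) + np.kron(I3, Sy)
Jz = np.kron(Lz, I2) + np.kron(I3, Sz)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
LzI = np.kron(Lz, I2)  # Lz promoted to the product space
```

The eigenvalues of J² come out as j (j + 1) = 3/4 (twice) and 15/4 (four times), matching the decomposition 1 ⊗ 1/2 = 3/2 ⊕ 1/2.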
If the electron is also in an eigenstate of S2 and Sz then the quantum numbers s and ms take the values 1/2 and ±1/2, respectively, and the internal state of the electron is specified by the spinors χ± (see Sect. 10.5). Hence, the simultaneous eigenstates of L2, S2, Lz, and Sz can be written in the separable form ψ(1) l,1/2;m,±1/2 = Yl,m χ±. (11.28) Here, it is understood that orbital angular momentum operators act on the spherical har-monic functions, Yl,m, whereas spin angular momentum operators act on the spinors, χ±. Since the eigenstates ψ(1) l,1/2;m,±1/2 are (presumably) orthonormal, and form a complete set, we can express the eigenstates ψ(2) l,1/2;j,mj as linear combinations of them. For instance, ψ(2) l,1/2;j,m+1/2 = α ψ(1) l,1/2;m,1/2 + β ψ(1) l,1/2;m+1,−1/2, (11.29) where α and β are, as yet, unknown coefficients. Note that the number of ψ(1) states which can appear on the right-hand side of the above expression is limited to two by the constraint that mj = m + ms [see Eq. (11.23)], and the fact that ms can only take the values ±1/2. Assuming that the ψ(2) eigenstates are properly normalized, we have α2 + β2 = 1. (11.30) Now, it follows from Eq. (11.26) that J2 ψ(2) l,1/2;j,m+1/2 = j (j + 1) ¯ h2 ψ(2) l,1/2;j,m+1/2, (11.31) where [see Eq. (11.12)] J2 = L2 + S2 + 2 Lz Sz + L+ S−+ L−S+. (11.32) Moreover, according to Eqs. (11.28) and (11.29), we can write ψ(2) l,1/2;j,m+1/2 = α Yl,m χ+ + β Yl,m+1 χ−. (11.33) Recall, from Eqs. (8.43) and (8.44), that L+ Yl,m = [l (l + 1) −m (m + 1)]1/2 ¯ h Yl,m+1, (11.34) L−Yl,m = [l (l + 1) −m (m −1)]1/2 ¯ h Yl,m−1. (11.35) By analogy, when the spin raising and lowering operators, S±, act on a general spinor, χs,ms, we obtain S+ χs,ms = [s (s + 1) −ms (ms + 1)]1/2 ¯ h χs,ms+1, (11.36) S−χs,ms = [s (s + 1) −ms (ms −1)]1/2 ¯ h χs,ms−1. (11.37) Addition of Angular Momentum 145 For the special case of spin one-half spinors (i.e., s = 1/2, ms = ±1/2), the above expres-sions reduce to S+ χ+ = S−χ−= 0, (11.38) and S± χ∓= ¯ h χ±. 
(11.39) It follows from Eqs. (11.32) and (11.34)–(11.39) that J2 Yl,m χ+ = [l (l + 1) + 3/4 + m] ¯ h2 Yl,m χ+ +[l (l + 1) −m (m + 1)]1/2 ¯ h2 Yl,m+1 χ−, (11.40) and J2 Yl,m+1 χ− = [l (l + 1) + 3/4 −m −1] ¯ h2 Yl,m+1 χ− +[l (l + 1) −m (m + 1)]1/2 ¯ h2 Yl,m χ+. (11.41) Hence, Eqs. (11.31) and (11.33) yield (x −m) α −[l (l + 1) −m (m + 1)]1/2 β = 0, (11.42) −[l (l + 1) −m (m + 1)]1/2 α + (x + m + 1) β = 0, (11.43) where x = j (j + 1) −l (l + 1) −3/4. (11.44) Equations (11.42) and (11.43) can be solved to give x (x + 1) = l (l + 1), (11.45) and α β = [(l −m) (l + m + 1)]1/2 x −m . (11.46) It follows that x = l or x = −l −1, which corresponds to j = l + 1/2 or j = l −1/2, respectively. Once x is specified, Eqs. (11.30) and (11.46) can be solved for α and β. We obtain ψ(2) l+1/2,m+1/2 = l + m + 1 2 l + 1 !1/2 ψ(1) m,1/2 + l −m 2 l + 1 !1/2 ψ(1) m+1,−1/2, (11.47) and ψ(2) l−1/2,m+1/2 = l −m 2 l + 1 !1/2 ψ(1) m,1/2 − l + m + 1 2 l + 1 !1/2 ψ(1) m+1,−1/2. (11.48) 146 QUANTUM MECHANICS m, 1/2 m + 1, −1/2 m, ms l + 1/2, m + 1/2 √ (l+m+1)/(2 l+1) √ (l−m)/(2 l+1) l −1/2, m + 1/2 √ (l−m)/(2 l+1) −√ (l+m+1)/(2 l+1) j, mj Table 11.1: Clebsch-Gordon coefficients for adding spin one-half to spin l. Here, we have neglected the common subscripts l, 1/2 for the sake of clarity: i.e., ψ(2) l+1/2,m+1/2 ≡ ψ(2) l,1/2;l+1/2,m+1/2, etc. The above equations can easily be inverted to give the ψ(1) eigenstates in terms of the ψ(2) eigenstates: ψ(1) m,1/2 = l + m + 1 2 l + 1 !1/2 ψ(2) l+1/2,m+1/2 + l −m 2 l + 1 !1/2 ψ(2) l−1/2,m+1/2, (11.49) ψ(1) m+1,−1/2 = l −m 2 l + 1 !1/2 ψ(2) l+1/2,m+1/2 − l + m + 1 2 l + 1 !1/2 ψ(2) l−1/2,m+1/2. (11.50) The information contained in Eqs. (11.47)–(11.50) is neatly summarized in Table 11.1. For instance, Eq. (11.47) is obtained by reading the first row of this table, whereas Eq. (11.50) is obtained by reading the second column. The coefficients in this type of table are gener-ally known as Clebsch-Gordon coefficients. 
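The entries of Table 11.1 can be spot-checked by applying J² directly. The sketch below (illustrative, not from the text; NumPy assumed, ħ = 1) verifies Eqs. (11.47) and (11.48) for l = 1, m = 0: the stated combinations are indeed eigenstates of J² with j = 3/2 and j = 1/2, respectively:

```python
import numpy as np

# l = 1 (x) spin-1/2 product space; basis ordered (m, ms) =
# (1,+), (1,-), (0,+), (0,-), (-1,+), (-1,-).  Units where hbar = 1.
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Lp = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Lx, Ly = (Lp + Lp.conj().T) / 2, (Lp - Lp.conj().T) / 2j
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2, I3 = np.eye(2), np.eye(3)
J2 = sum(M @ M for M in (np.kron(Lx, I2) + np.kron(I3, Sx),
                         np.kron(Ly, I2) + np.kron(I3, Sy),
                         np.kron(Lz, I2) + np.kron(I3, Sz)))

# Eq. (11.47), l = 1, m = 0:
# psi(2)_{3/2,1/2} = sqrt(2/3) psi(1)_{0,1/2} + sqrt(1/3) psi(1)_{1,-1/2}.
psi32 = np.zeros(6, dtype=complex)
psi32[2], psi32[1] = np.sqrt(2 / 3), np.sqrt(1 / 3)

# Eq. (11.48), l = 1, m = 0:
# psi(2)_{1/2,1/2} = sqrt(1/3) psi(1)_{0,1/2} - sqrt(2/3) psi(1)_{1,-1/2}.
psi12 = np.zeros(6, dtype=complex)
psi12[2], psi12[1] = np.sqrt(1 / 3), -np.sqrt(2 / 3)
```

The two combinations are also mutually orthogonal, in accordance with the remark that the rows of the Clebsch-Gordon table have unit norm and are orthogonal to one another.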
As an example, let us consider the l = 1 states of a hydrogen atom. The eigenstates of L2, S2, Lz, and Sz, are denoted ψ(1) m,ms. Since m can take the values −1, 0, 1, whereas ms can take the values ±1/2, there are clearly six such states: i.e., ψ(1) 1,±1/2, ψ(1) 0,±1/2, and ψ(1) −1,±1/2. The eigenstates of L2, S2, J2, and Jz, are denoted ψ(2) j,mj. Since l = 1 and s = 1/2 can be combined together to form either j = 3/2 or j = 1/2 (see earlier), there are also six such states: i.e., ψ(2) 3/2,±3/2, ψ(2) 3/2,±1/2, and ψ(2) 1/2,±1/2. According to Table 11.1, the various different eigenstates are interrelated as follows: ψ(2) 3/2,±3/2 = ψ(1) ±1,±1/2, (11.51) ψ(2) 3/2,1/2 = s 2 3 ψ(1) 0,1/2 + s 1 3 ψ(1) 1,−1/2, (11.52) ψ(2) 1/2,1/2 = s 1 3 ψ(1) 0,1/2 − s 2 3 ψ(1) 1,−1/2, (11.53) ψ(2) 1/2,−1/2 = s 2 3 ψ(1) −1,1/2 − s 1 3 ψ(1) 0,−1/2, (11.54) ψ(2) 3/2,−1/2 = s 1 3 ψ(1) −1,1/2 + s 2 3 ψ(1) 0,−1/2, (11.55) and ψ(1) ±1,±1/2 = ψ(2) 3/2,±3/2, (11.56) Addition of Angular Momentum 147 −1, −1/2 −1, 1/2 0, −1/2 0, 1/2 1, −1/2 1, 1/2 m, ms 3/2, −3/2 1 3/2, −1/2 √ 1/3 √ 2/3 1/2, −1/2 √ 2/3 −√ 1/3 3/2, 1/2 √ 2/3 √ 1/3 1/2, 1/2 √ 1/3 −√ 2/3 3/2, 3/2 1 j, mj Table 11.2: Clebsch-Gordon coefficients for adding spin one-half to spin one. Only non-zero coefficients are shown. ψ(1) 1,−1/2 = s 1 3 ψ(2) 3/2,1/2 − s 2 3 ψ(2) 1/2,1/2, (11.57) ψ(1) 0,1/2 = s 2 3 ψ(2) 3/2,1/2 + s 1 3 ψ(2) 1/2,1/2, (11.58) ψ(1) 0,−1/2 = s 2 3 ψ(2) 3/2,−1/2 − s 1 3 ψ(2) 1/2,−1/2, (11.59) ψ(1) −1,1/2 = s 1 3 ψ(2) 3/2,−1/2 + s 2 3 ψ(2) 1/2,−1/2, (11.60) Thus, if we know that an electron in a hydrogen atom is in an l = 1 state characterized by m = 0 and ms = 1/2 [i.e., the state represented by ψ(1) 0,1/2] then, according to Eq. (11.58), a measurement of the total angular momentum will yield j = 3/2, mj = 1/2 with probability 2/3, and j = 1/2, mj = 1/2 with probability 1/3. Suppose that we make such a measure-ment, and obtain the result j = 3/2, mj = 1/2. 
As a result of the measurement, the electron is thrown into the corresponding eigenstate, ψ(2) 3/2,1/2. It thus follows from Eq. (11.52) that a subsequent measurement of Lz and Sz will yield m = 0, ms = 1/2 with probability 2/3, and m = 1, ms = −1/2 with probability 1/3. The information contained in Eqs. (11.51)–(11.59) is neatly summed up in Table 11.2. Note that each row and column of this table has unit norm, and also that the different rows and different columns are mutually orthogonal. Of course, this is because the ψ(1) and ψ(2) eigenstates are orthonormal. 11.4 Two Spin One-Half Particles Consider a system consisting of two spin one-half particles. Suppose that the system does not possess any orbital angular momentum. Let S1 and S2 be the spin angular momentum 148 QUANTUM MECHANICS operators of the first and second particles, respectively, and let S = S1 + S2 (11.61) be the total spin angular momentum operator. By analogy with the previous analysis, we conclude that it is possible to simultaneously measure either S 2 1 , S 2 2, S2, and Sz, or S 2 1 , S 2 2, S1z, S2z, and Sz. Let the quantum numbers associated with measurements of S 2 1 , S1z, S 2 2, S2z, S2, and Sz be s1, ms1, s2, ms2, s, and ms, respectively. In other words, if the spinor χ(1) s1,s2;ms1,ms2 is a simultaneous eigenstate of S 2 1 , S 2 2 , S1z, and S2z, then S 2 1 χ(1) s1,s2;ms1,ms2 = s1 (s1 + 1) ¯ h2 χ(1) s1,s2;ms1,ms2, (11.62) S 2 2 χ(1) s1,s2;ms1,ms2 = s2 (s2 + 1) ¯ h2 χ(1) s1,s2;ms1,ms2, (11.63) S1z χ(1) s1,s2;ms1,ms2 = ms1 ¯ h χ(1) s1,s2;ms1,ms2, (11.64) S2z χ(1) s1,s2;ms1,ms2 = ms2 ¯ h χ(1) s1,s2;ms1,ms2, (11.65) Sz χ(1) s1,s2;ms1,ms2 = ms ¯ h χ(1) s1,s2;ms1,ms2. 
(11.66) Likewise, if the spinor χ(2) s1,s2;s,ms is a simultaneous eigenstate of S 2 1, S 2 2, S2, and Sz, then S 2 1 χ(2) s1,s2;s,ms = s1 (s1 + 1) ¯h2 χ(2) s1,s2;s,ms, (11.67) S 2 2 χ(2) s1,s2;s,ms = s2 (s2 + 1) ¯h2 χ(2) s1,s2;s,ms, (11.68) S2 χ(2) s1,s2;s,ms = s (s + 1) ¯h2 χ(2) s1,s2;s,ms, (11.69) Sz χ(2) s1,s2;s,ms = ms ¯h χ(2) s1,s2;s,ms. (11.70) Of course, since both particles have spin one-half, s1 = s2 = 1/2, and ms1, ms2 = ±1/2. Furthermore, by analogy with the previous analysis, ms = ms1 + ms2. (11.71) Now, we saw, in the previous section, that when spin l is added to spin one-half the possible values of the total angular momentum quantum number are j = l ± 1/2. By analogy, when spin one-half is added to spin one-half the possible values of the total spin quantum number are s = 1/2 ± 1/2. In other words, when two spin one-half particles are combined, we either obtain a state with overall spin s = 1, or a state with overall spin s = 0. To be more exact, there are three possible s = 1 states (corresponding to ms = −1, 0, 1), and one possible s = 0 state (corresponding to ms = 0). The three s = 1 states are generally known as the triplet states, whereas the s = 0 state is known as the singlet state. The Clebsch-Gordon coefficients for adding spin one-half to spin one-half can easily be inferred from Table 11.1 (with l = 1/2), and are listed in Table 11.3. It follows from this table that the three triplet states are: χ(2) 1,−1 = χ(1) −1/2,−1/2, (11.72) χ(2) 1,0 = (1/√2) (χ(1) −1/2,1/2 + χ(1) 1/2,−1/2), (11.73) χ(2) 1,1 = χ(1) 1/2,1/2, (11.74) where χ(2) s,ms is shorthand for χ(2) s1,s2;s,ms, etc. Likewise, the singlet state is written: χ(2) 0,0 = (1/√2) (χ(1) −1/2,1/2 −χ(1) 1/2,−1/2).

Table 11.3: Clebsch-Gordon coefficients for adding spin one-half to spin one-half. Only non-zero coefficients are shown.

  ms1, ms2:     (−1/2,−1/2)   (−1/2,1/2)   (1/2,−1/2)   (1/2,1/2)
  s, ms = 1,−1       1
  s, ms = 1, 0                   1/√2         1/√2
  s, ms = 0, 0                   1/√2        −1/√2
  s, ms = 1, 1                                              1
(11.75) Exercises 1. An electron in a hydrogen atom occupies the combined spin and position state R2,1 q 1/3 Y1,0 χ+ + q 2/3 Y1,1 χ−  . (a) What values would a measurement of L2 yield, and with what probabilities? (b) Same for Lz. (c) Same for S2. (d) Same for Sz. (e) Same for J2. (f) Same for Jz. (g) What is the probability density for finding the electron at r, θ, φ? (h) What is the probability density for finding the electron in the spin up state (with respect to the z-axis) at radius r? 2. In a low energy neutron-proton system (with zero orbital angular momentum) the potential energy is given by V(r) = V1(r) + V2(r)  3 (σ1 · r) (σ2 · r) r2 −σ1 · σ2  + V3(r) σ1 · σ2, where σ1 denotes the vector of the Pauli matrices of the neutron, and σ2 denotes the vector of the Pauli matrices of the proton. Calculate the potential energy for the neutron-proton system: 150 QUANTUM MECHANICS (a) In the spin singlet state. (b) In the spin triplet state. 3. Consider two electrons in a spin singlet state. (a) If a measurement of the spin of one of the electrons shows that it is in the state with Sz = ¯ h/2, what is the probability that a measurement of the z-component of the spin of the other electron yields Sz = ¯ h/2? (b) If a measurement of the spin of one of the electrons shows that it is in the state with Sy = ¯ h/2, what is the probability that a measurement of the x-component of the spin of the other electron yields Sx = −¯ h/2? Finally, if electron 1 is in a spin state described by cos α1 χ+ + sin α1 e i β1 χ−, and electron 2 is in a spin state described by cos α2 χ+ + sin α2 e i β2 χ−, what is the probability that the two-electron spin state is a triplet state? Time-Independent Perturbation Theory 151 12 Time-Independent Perturbation Theory 12.1 Introduction Consider the following very commonly occurring problem. The Hamiltonian of a quantum mechanical system is written H = H0 + H1. 
(12.1) Here, H0 is a simple Hamiltonian whose eigenvalues and eigenstates are known exactly. H1 introduces some interesting additional physics into the problem, but is sufficiently com-plicated that when we add it to H0 we can no longer find the exact energy eigenvalues and eigenstates. However, H1 can, in some sense (which we shall specify more precisely later on), be regarded as being small compared to H0. Can we find approximate eigenvalues and eigenstates of the modified Hamiltonian, H0 +H1, by performing some sort of perturbation expansion about the eigenvalues and eigenstates of the original Hamiltonian, H0? Let us investigate. Incidentally, in this chapter, we shall only discuss so-called time-independent perturba-tion theory, in which the modification to the Hamiltonian, H1, has no explicit dependence on time. It is also assumed that the unperturbed Hamiltonian, H0, is time-independent. 12.2 Improved Notation Before commencing our investigation, it is helpful to introduce some improved notation. Let the ψi be a complete set of eigenstates of the Hamiltonian, H, corresponding to the eigenvalues Ei: i.e., H ψi = Ei ψi. (12.2) Now, we expect the ψi to be orthonormal (see Sect. 4.9). In one dimension, this implies that Z ∞ −∞ ψ∗ i ψj dx = δij. (12.3) In three dimensions (see Cha. 7), the above expression generalizes to Z ∞ −∞ Z ∞ −∞ Z ∞ −∞ ψ∗ i ψj dx dy dz = δij. (12.4) Finally, if the ψi are spinors (see Cha. 10) then we have ψ† i ψj = δij. (12.5) The generalization to the case where ψ is a product of a regular wavefunction and a spinor is fairly obvious. We can represent all of the above possibilities by writing ⟨ψi|ψj⟩≡⟨i|j⟩= δij. (12.6) 152 QUANTUM MECHANICS Here, the term in angle brackets represents the integrals in Eqs. (12.3) and (12.4) in one-and three-dimensional regular space, respectively, and the spinor product (12.5) in spin-space. 
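The payoff of the bracket notation just introduced is that the same algebra covers one-dimensional wavefunctions, three-dimensional wavefunctions, and spinors alike. As a quick numerical illustration (a sketch, not part of the text), one can pick a random orthonormal basis in a finite-dimensional "spin-space", expand a state in it, and confirm that the double sum ⟨A⟩ = Σ c∗i cj Aij of Eq. (12.14) agrees with the direct evaluation ψ†a A ψa:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Random orthonormal basis (columns of a unitary Q) and a random Hermitian A.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (M + M.conj().T) / 2

# Normalized state psi_a and its expansion coefficients c_i = <i|a>.
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
c = Q.conj().T @ psi                # c_i = psi_i^dagger psi_a, cf. Eq. (12.10)

# Matrix elements A_ij = <i|A|j> and the double sum of Eq. (12.14).
Aij = Q.conj().T @ A @ Q
expect_sum = np.einsum('i,ij,j->', c.conj(), Aij, c)
expect_direct = psi.conj() @ A @ psi

print(expect_sum.real, expect_direct.real)   # the two values agree
```

Since A is Hermitian, the expectation value also comes out real, in line with Eq. (12.20).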
The advantage of our new notation is its great generality: i.e., it can deal with one-dimensional wavefunctions, three-dimensional wavefunctions, spinors, etc. Expanding a general wavefunction, ψa, in terms of the energy eigenstates, ψi, we obtain ψa = X i ci ψi. (12.7) In one dimension, the expansion coefficients take the form (see Sect. 4.9) ci = Z ∞ −∞ ψ∗ i ψa dx, (12.8) whereas in three dimensions we get ci = Z ∞ −∞ Z ∞ −∞ Z ∞ −∞ ψ∗ i ψa dx dy dz. (12.9) Finally, if ψ is a spinor then we have ci = ψ† i ψa. (12.10) We can represent all of the above possibilities by writing ci = ⟨ψi|ψa⟩≡⟨i|a⟩. (12.11) The expansion (12.7) thus becomes ψa = X i ⟨ψi|ψa⟩ψi ≡ X i ⟨i|a⟩ψi. (12.12) Incidentally, it follows that ⟨i|a⟩∗= ⟨a|i⟩. (12.13) Finally, if A is a general operator, and the wavefunction ψa is expanded in the manner shown in Eq. (12.7), then the expectation value of A is written (see Sect. 4.9) ⟨A⟩= X i,j c∗ i cj Aij. (12.14) Here, the Aij are unsurprisingly known as the matrix elements of A. In one dimension, the matrix elements take the form Aij = Z ∞ −∞ ψ∗ i A ψj dx, (12.15) whereas in three dimensions we get Aij = Z ∞ −∞ Z ∞ −∞ Z ∞ −∞ ψ∗ i A ψj dx dy dz. (12.16) Time-Independent Perturbation Theory 153 Finally, if ψ is a spinor then we have Aij = ψ† i A ψj. (12.17) We can represent all of the above possibilities by writing Aij = ⟨ψi|A|ψj⟩≡⟨i|A|j⟩. (12.18) The expansion (12.14) thus becomes ⟨A⟩≡⟨a|A|a⟩= X i,j ⟨a|i⟩⟨i|A|j⟩⟨j|a⟩. (12.19) Incidentally, it follows that [see Eq. (4.58)] ⟨i|A|j⟩∗= ⟨j|A†|i⟩. (12.20) Finally, it is clear from Eq. (12.19) that X i |i⟩⟨i| ≡1, (12.21) where the ψi are a complete set of eigenstates, and 1 is the identity operator. 12.3 Two-State System Consider the simplest possible non-trivial quantum mechanical system. In such a system, there are only two independent eigenstates of the unperturbed Hamiltonian: i.e., H0 ψ1 = E1 ψ1, (12.22) H0 ψ2 = E2 ψ2. (12.23) It is assumed that these states, and their associated eigenvalues, are known. 
We also expect the states to be orthonormal, and to form a complete set. Let us now try to solve the modified energy eigenvalue problem (H0 + H1) ψE = E ψE. (12.24) We can, in fact, solve this problem exactly. Since the eigenstates of H0 form a complete set, we can write [see Eq. (12.12)] ψE = ⟨1|E⟩ψ1 + ⟨2|E⟩ψ2. (12.25) It follows from (12.24) that ⟨i|H0 + H1|E⟩= E ⟨i|E⟩, (12.26) 154 QUANTUM MECHANICS where i = 1 or 2. Equations (12.22), (12.23), (12.25), (12.26), and the orthonormality condition ⟨i|j⟩= δij, (12.27) yield two coupled equations which can be written in matrix form: E1 −E + e11 e12 e∗ 12 E2 −E + e22 ! ⟨1|E⟩ ⟨2|E⟩ ! = 0 0 ! , (12.28) where e11 = ⟨1|H1|1⟩, (12.29) e22 = ⟨2|H1|2⟩, (12.30) e12 = ⟨1|H1|2⟩= ⟨2|H1|1⟩∗. (12.31) Here, use has been made of the fact that H1 is an Hermitian operator. Consider the special (but not uncommon) case of a perturbing Hamiltonian whose diagonal matrix elements are zero, so that e11 = e22 = 0. (12.32) The solution of Eq. (12.28) (obtained by setting the determinant of the matrix to zero) is E = (E1 + E2) ± q (E1 −E2)2 + 4 |e12|2 2 . (12.33) Let us expand in the supposedly small parameter ǫ = |e12| |E1 −E2|. (12.34) We obtain E ≃1 2 (E1 + E2) ± 1 2 (E1 −E2)(1 + 2 ǫ2 + · · ·). (12.35) The above expression yields the modification of the energy eigenvalues due to the perturb-ing Hamiltonian: E′ 1 = E1 + |e12|2 E1 −E2 + · · · , (12.36) E′ 2 = E2 − |e12|2 E1 −E2 + · · · . (12.37) Note that H1 causes the upper eigenvalue to rise, and the lower to fall. It is easily demon-strated that the modified eigenstates take the form ψ′ 1 = ψ1 + e∗ 12 E1 −E2 ψ2 + · · · , (12.38) ψ′ 2 = ψ2 − e12 E1 −E2 ψ1 + · · · . (12.39) Time-Independent Perturbation Theory 155 Thus, the modified energy eigenstates consist of one of the unperturbed eigenstates, plus a slight admixture of the other. Now our expansion procedure is only valid when ǫ ≪1. This suggests that the condition for the validity of the perturbation method as a whole is |e12| ≪|E1 −E2|. 
(12.40) In other words, when we say that H1 needs to be small compared to H0, what we are really saying is that the above inequality must be satisfied. 12.4 Non-Degenerate Perturbation Theory Let us now generalize our perturbation analysis to deal with systems possessing more than two energy eigenstates. Consider a system in which the energy eigenstates of the unperturbed Hamiltonian, H0, are denoted H0 ψn = En ψn, (12.41) where n runs from 1 to N. The eigenstates are assumed to be orthonormal, so that ⟨m|n⟩= δnm, (12.42) and to form a complete set. Let us now try to solve the energy eigenvalue problem for the perturbed Hamiltonian: (H0 + H1) ψE = E ψE. (12.43) It follows that ⟨m|H0 + H1|E⟩= E ⟨m|E⟩, (12.44) where m can take any value from 1 to N. Now, we can express ψE as a linear superposition of the unperturbed energy eigenstates: ψE = Σk ⟨k|E⟩ψk, (12.45) where k runs from 1 to N. We can combine the above equations to give (Em −E + emm) ⟨m|E⟩+ Σk̸=m emk ⟨k|E⟩= 0, (12.46) where emk = ⟨m|H1|k⟩. (12.47) Let us now develop our perturbation expansion. We assume that emk/(Em −Ek) ∼O(ǫ) (12.48) for all m ̸= k, where ǫ ≪1 is our expansion parameter. We also assume that emm/Em ∼O(ǫ) (12.49) for all m. Let us search for a modified version of the nth unperturbed energy eigenstate for which E = En + O(ǫ), (12.50) and ⟨n|E⟩= 1, (12.51) ⟨m|E⟩= O(ǫ) (12.52) for m ̸= n. Suppose that we write out Eq. (12.46) for m ̸= n, neglecting terms which are O(ǫ2) according to our expansion scheme. We find that (Em −En) ⟨m|E⟩+ emn ≃0, (12.53) giving ⟨m|E⟩≃−emn/(Em −En). (12.54) Substituting the above expression into Eq. (12.46), evaluated for m = n, and neglecting O(ǫ3) terms, we obtain (En −E + enn) −Σk̸=n |enk|2/(Ek −En) ≃0. (12.55) Thus, the modified nth energy eigenstate possesses an eigenvalue E′n = En + enn + Σk̸=n |enk|2/(En −Ek) + O(ǫ3), (12.56) and a wavefunction ψ′n = ψn + Σk̸=n [ekn/(En −Ek)] ψk + O(ǫ2).
(12.57) Incidentally, it is easily demonstrated that the modified eigenstates remain orthonormal to O(ǫ2). 12.5 Quadratic Stark Effect Suppose that a hydrogen atom is subject to a uniform external electric field, of magnitude |E|, directed along the z-axis. The Hamiltonian of the system can be split into two parts. Namely, the unperturbed Hamiltonian, H0 = p2 2 me − e2 4πǫ0 r, (12.58) Time-Independent Perturbation Theory 157 and the perturbing Hamiltonian H1 = e |E| z. (12.59) Note that the electron spin is irrelevant to this problem (since the spin operators all commute with H1), so we can ignore the spin degrees of freedom of the system. Hence, the energy eigenstates of the unperturbed Hamiltonian are characterized by three quantum numbers—the radial quantum number n, and the two angular quantum numbers l and m (see Cha. 9). Let us denote these states as the ψnlm, and let their corresponding energy eigenvalues be the Enlm. According to the analysis in the previous section, the change in energy of the eigenstate characterized by the quantum numbers n, l, m in the presence of a small electric field is given by ∆Enlm = e |E| ⟨n, l, m|z|n, l, m⟩ +e2 |E|2 X n′,l′,m′̸=n,l,m |⟨n, l, m|z|n′, l′, m′⟩|2 Enlm −En′l′m′ . (12.60) This energy-shift is known as the Stark effect. The sum on the right-hand side of the above equation seems very complicated. How-ever, it turns out that most of the terms in this sum are zero. This follows because the matrix elements ⟨n, l, m|z|n′, l′, m′⟩are zero for virtually all choices of the two sets of quantum number, n, l, m and n′, l′, m′. Let us try to find a set of rules which determine when these matrix elements are non-zero. These rules are usually referred to as the selec-tion rules for the problem in hand. Now, since [see Eq. (8.4)] Lz = x py −y px, (12.61) it follows that [see Eqs. (7.15)–(7.17)] [Lz, z] = 0. 
(12.62) Thus, ⟨n, l, m|[Lz, z]|n′, l′, m′⟩ = ⟨n, l, m|Lz z −z Lz|n′, l′, m′⟩ = ¯ h (m −m′) ⟨n, l, m|z|n′, l′, m′⟩= 0, (12.63) since ψnlm is, by definition, an eigenstate of Lz corresponding to the eigenvalue m ¯ h. Hence, it is clear, from the above equation, that one of the selection rules is that the matrix element ⟨n, l, m|z|n′, l′, m′⟩is zero unless m′ = m. (12.64) Let us now determine the selection rule for l. We have [L2, z] = [L 2 x, z] + [L 2 y, z] 158 QUANTUM MECHANICS = Lx [Lx, z] + [Lx, z] Lx + Ly [Ly, z] + [Ly, z] Ly = i ¯ h (−Lx y −y Lx + Ly x + x Ly) = 2 i ¯ h (Ly x −Lx y + i ¯ h z) = 2 i ¯ h (Ly x −y Lx) = 2 i ¯ h (x Ly −Lx y), (12.65) where use has been made of Eqs. (7.15)–(7.17), (8.2)–(8.4), and (8.10). Thus, [L2, [L2, z]] = 2 i ¯ h  L2, Ly x −Lx y + i ¯ h z  = 2 i ¯ h  Ly [L2, x] −Lx [L2, y] + i ¯ h [L2, z]  = −4 ¯ h2 Ly (y Lz −Ly z) + 4 ¯ h2 Lx (Lx z −x Lz) −2 ¯ h2 (L2 z −z L2), (12.66) which reduces to [L2, [L2, z]] = −¯ h2  4 (Lx x + Ly y + Lz z) Lz −4 (L 2 x + L 2 y + L 2 z ) z +2 (L2 z −z L2) = −¯ h2  4 (Lx x + Ly y + Lz z) Lz −2 (L2 z + z L2) . (12.67) However, it is clear from Eqs. (8.2)–(8.4) that Lx x + Ly y + Lz z = 0. (12.68) Hence, we obtain [L2, [L2, z]] = 2 ¯ h2 (L2 z + z L2). (12.69) Finally, the above expression expands to give L4 z −2 L2 z L2 + z L4 −2 ¯ h2 (L2 z + z L2) = 0. (12.70) Equation (12.70) implies that ⟨n, l, m|L4 z −2 L2 z L2 + z L4 −2 ¯ h2 (L2 z + z L2)|n′, l′, m⟩= 0. (12.71) Since, by definition, ψnlm is an eigenstate of L2 corresponding to the eigenvalue l (l+1) ¯ h2, this expression yields  l2 (l + 1)2 −2 l (l + 1) l′ (l′ + 1) + l′2 (l′ + 1)2 −2 l (l + 1) −2 l′ (l′ + 1)} ⟨n, l, m|z|n′, l′, m⟩ = 0, (12.72) which reduces to (l + l′ + 2) (l + l′) (l −l′ + 1) (l −l′ −1) ⟨n, l, m|z|n′, l′, m⟩= 0. (12.73) Time-Independent Perturbation Theory 159 According to the above formula, the matrix element ⟨n, l, m|z|n′, l′, m⟩vanishes unless l = l′ = 0 or l′ = l ± 1. 
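The selection rules m′ = m and l′ = l ± 1 derived above can also be verified numerically. Writing the polar part of ⟨l′, m|cos θ|l, m⟩ in terms of normalized associated Legendre functions (the m′ ≠ m elements already vanish through the azimuthal integral), a Gauss–Legendre quadrature in x = cos θ shows that the matrix element is non-zero only for l′ = l ± 1. A sketch using scipy:

```python
import numpy as np
from math import factorial, sqrt
from scipy.special import lpmv

def matel(lp, l, m, npts=60):
    """<l',m| cos(theta) |l,m> via Gauss-Legendre quadrature in x = cos(theta)."""
    x, w = np.polynomial.legendre.leggauss(npts)
    def norm(ll):
        # Normalization of the theta part of Y_lm.
        return sqrt((2 * ll + 1) / 2 * factorial(ll - m) / factorial(ll + m))
    integrand = norm(lp) * lpmv(m, lp, x) * x * norm(l) * lpmv(m, l, x)
    return float(np.sum(w * integrand))

# For l = 1, m = 0, only l' = 0 and l' = 2 survive.
vals = {lp: matel(lp, 1, 0) for lp in range(5)}
print(vals)
```

The non-zero entries agree with the standard closed form ⟨l+1, m|cos θ|l, m⟩ = [((l+1)2 − m2)/((2l+1)(2l+3))]1/2, i.e. 1/√3 for l′ = 0 and √(4/15) for l′ = 2.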
[Of course, the factor l + l′ + 2, in the above equation, can never be zero, since l and l′ can never be negative.] Recall, however, from Cha. 9, that an l = 0 wavefunction is spherically symmetric. It, therefore, follows, from symmetry, that the matrix element ⟨n, l, m|z|n′, l′, m⟩is zero when l = l′ = 0. In conclusion, the selection rule for l is that the matrix element ⟨n, l, m|z|n′, l′, m⟩is zero unless l′ = l ± 1. (12.74) Application of the selection rules (12.64) and (12.74) to Eq. (12.60) yields ∆Enlm = e2 |E|2 X n′,l′=l±1 |⟨n, l, m|z|n′, l′, m⟩|2 Enlm −En′l′m . (12.75) Note that, according to the selection rules, all of the terms in Eq. (12.60) which vary linearly with the electric field-strength vanish. Only those terms which vary quadratically with the field-strength survive. Hence, this type of energy-shift of an atomic state in the presence of a small electric field is known as the quadratic Stark effect. Now, the electric polarizability of an atom is defined in terms of the energy-shift of the atomic state as follows: ∆E = −1 2 α |E|2. (12.76) Hence, we can write αnlm = 2 e2 X n′,l′=l±1 |⟨n, l, m|z|n′, l′, m⟩|2 En′l′m −Enlm . (12.77) Unfortunately, there is one fairly obvious problem with Eq. (12.75). Namely, it predicts an infinite energy-shift if there exists some non-zero matrix element ⟨n, l, m|z|n′, l′, m⟩ which couples two degenerate unperturbed energy eigenstates: i.e., if ⟨n, l, m|z|n′, l′, m⟩̸= 0 and Enlm = En′l′m. Clearly, our perturbation method breaks down completely in this situation. Hence, we conclude that Eqs. (12.75) and (12.77) are only applicable to cases where the coupled eigenstates are non-degenerate. For this reason, the type of pertur-bation theory employed here is known as non-degenerate perturbation theory. Now, the unperturbed eigenstates of a hydrogen atom have energies which only depend on the ra-dial quantum number n (see Cha. 9). 
It follows that we can only apply the above results to the n = 1 eigenstate (since for n > 1 there will be coupling to degenerate eigenstates with the same value of n but different values of l). Thus, according to non-degenerate perturbation theory, the polarizability of the ground-state (i.e., n = 1) of a hydrogen atom is given by α = 2 e2 Σn>1 |⟨1, 0, 0|z|n, 1, 0⟩|2/(En00 −E100). (12.78) Here, we have made use of the fact that En10 = En00. The sum in the above expression can be evaluated approximately by noting that (see Sect. 9.4) En00 = −e2/(8π ǫ0 a0 n2), (12.79) where a0 = 4πǫ0 ¯h2/(me e2) (12.80) is the Bohr radius. Hence, we can write En00 −E100 ≥E200 −E100 = (3/4) e2/(8π ǫ0 a0), (12.81) which implies that α < (16/3) 4πǫ0 a0 Σn>1 |⟨1, 0, 0|z|n, 1, 0⟩|2. (12.82) However, [see Eq. (12.21)] Σn>1 |⟨1, 0, 0|z|n, 1, 0⟩|2 = Σn>1 ⟨1, 0, 0|z|n, 1, 0⟩⟨n, 1, 0|z|1, 0, 0⟩ = Σn′,l′,m′ ⟨1, 0, 0|z|n′, l′, m′⟩⟨n′, l′, m′|z|1, 0, 0⟩ = ⟨1, 0, 0|z2|1, 0, 0⟩= (1/3) ⟨1, 0, 0|r2|1, 0, 0⟩, (12.83) where we have made use of the selection rules, the fact that the ψn′,l′,m′ form a complete set, and the fact that the ground-state of hydrogen is spherically symmetric. Finally, it follows from Eq. (9.72) that ⟨1, 0, 0|r2|1, 0, 0⟩= 3 a0². (12.84) Hence, we conclude that α < (16/3) 4πǫ0 a0³ ≃5.3 4πǫ0 a0³. (12.85) The exact result (which can be obtained by solving Schrödinger’s equation in parabolic coordinates) is α = (9/2) 4πǫ0 a0³ = 4.5 4πǫ0 a0³. (12.86) 12.6 Degenerate Perturbation Theory Let us, rather naively, investigate the Stark effect in an excited (i.e., n > 1) state of the hydrogen atom using standard non-degenerate perturbation theory. We can write H0 ψnlm = En ψnlm, (12.87) since the energy eigenstates of the unperturbed Hamiltonian only depend on the quantum number n.
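Returning briefly to the polarizability estimate above: its two key numbers, ⟨1, 0, 0|r2|1, 0, 0⟩ = 3 a02 [Eq. (12.84)] and the resulting bound 16/3 ≈ 5.3 versus the exact 9/2 (both in units of 4πǫ0 a03), are easy to confirm by a symbolic integration of the ground-state density |ψ100|2 ∝ e−2r/a0. A sympy sketch:

```python
import sympy as sp

r, a0 = sp.symbols('r a0', positive=True)

# Normalized hydrogen ground state: psi_100 = exp(-r/a0) / sqrt(pi a0^3).
psi2 = sp.exp(-2 * r / a0) / (sp.pi * a0**3)

# Radial expectation values: <f(r)> = integral of |psi|^2 f(r) 4 pi r^2 dr.
norm = sp.integrate(psi2 * 4 * sp.pi * r**2, (r, 0, sp.oo))
r2 = sp.integrate(psi2 * r**2 * 4 * sp.pi * r**2, (r, 0, sp.oo))

print(norm)   # 1
print(r2)     # 3*a0**2, as in Eq. (12.84)

# Polarizability bound versus the exact value, in units of 4 pi eps0 a0^3:
bound = sp.Rational(16, 3)
exact = sp.Rational(9, 2)
print(bound, exact)
```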
Making use of the selection rules (12.64) and (12.74), non-degenerate perturbation theory yields the following expressions for the perturbed energy levels and eigenstates [see Eqs. (12.56) and (12.57)]: E′ nl = En + enlnl + X n′,l′=l±1 |en′l′nl|2 En −En′ , (12.88) and ψ′ nlm = ψnlm + X n′,l′=l±1 en′l′nl En −En′ ψn′l′m, (12.89) where en′l′nl = ⟨n′, l′, m|H1|n, l, m⟩. (12.90) Unfortunately, if n > 1 then the summations in the above expressions are not well-defined, because there exist non-zero matrix elements, enl′nl, which couple degenerate eigenstates: i.e., there exist non-zero matrix elements which couple states with the same value of n, but different values of l. These particular matrix elements give rise to singular factors 1/(En −En) in the summations. This does not occur if n = 1 because, in this case, the selection rule l′ = l ± 1, and the fact that l = 0 (since 0 ≤l < n), only allow l′ to take the single value 1. Of course, there is no n = 1 state with l′ = 1. Hence, there is only one coupled state corresponding to the eigenvalue E1. Unfortunately, if n > 1 then there are multiple coupled states corresponding to the eigenvalue En. Note that our problem would disappear if the matrix elements of the perturbed Hamil-tonian corresponding to the same value of n, but different values of l, were all zero: i.e., if ⟨n, l′, m|H1|n, l, m⟩= λnl δll′. (12.91) In this case, all of the singular terms in Eqs. (12.88) and (12.89) would reduce to zero. Unfortunately, the above equation is not satisfied. Fortunately, we can always redefine the unperturbed eigenstates corresponding to the eigenvalue En in such a manner that Eq. (12.91) is satisfied. Suppose that there are Nn coupled eigenstates belonging to the eigenvalue En. Let us define Nn new states which are linear combinations of our Nn original degenerate eigenstates: ψ(1) nlm = X k=1,Nn ⟨n, k, m|n, l(1), m⟩ψnkm. 
(12.92) Note that these new states are also degenerate energy eigenstates of the unperturbed Hamiltonian, H0, corresponding to the eigenvalue En. The ψ(1) nlm are chosen in such a manner that they are also eigenstates of the perturbing Hamiltonian, H1: i.e., they are simultaneous eigenstates of H0 and H1. Thus, H1 ψ(1) nlm = λnl ψ(1) nlm. (12.93) 162 QUANTUM MECHANICS The ψ(1) nlm are also chosen so as to be orthonormal: i.e., ⟨n, l′(1), m|n, l(1), m⟩= δll′. (12.94) It follows that ⟨n, l′(1), m|H1|n, l(1), m⟩= λnl δll′. (12.95) Thus, if we use the new eigenstates, instead of the old ones, then we can employ Eqs. (12.88) and (12.89) directly, since all of the singular terms vanish. The only remaining difficulty is to determine the new eigenstates in terms of the original ones. Now [see Eq. (12.21)] X l=1,Nn |n, l, m⟩⟨n, l, m| ≡1, (12.96) where 1 denotes the identity operator in the sub-space of all coupled unperturbed eigen-states corresponding to the eigenvalue En. Using this completeness relation, the eigenvalue equation (12.93) can be transformed into a straightforward matrix equation: X l′′=1,Nn ⟨n, l′, m|H1|n, l′′, m⟩⟨n, l′′, m|n, l(1), m⟩= λnl ⟨n, l′, m|n, l(1), m⟩. (12.97) This can be written more transparently as U x = λ x, (12.98) where the elements of the Nn × Nn Hermitian matrix U are Ujk = ⟨n, j, m|H1|n, k, m⟩. (12.99) Provided that the determinant of U is non-zero, Eq. (12.98) can always be solved to give Nn eigenvalues λnl (for l = 1 to Nn), with Nn corresponding eigenvectors xnl. The normalized eigenvectors specify the weights of the new eigenstates in terms of the original eigenstates: i.e., (xnl)k = ⟨n, k, m|n, l(1), m⟩, (12.100) for k = 1 to Nn. In our new scheme, Eqs. (12.88) and (12.89) yield E′ nl = En + λnl + X n′̸=n,l′=l±1 |en′l′nl|2 En −En′ , (12.101) and ψ(1)′ nlm = ψ(1) nlm + X n′̸=n,l′=l±1 en′l′nl En −En′ ψn′l′m. 
(12.102) There are no singular terms in these expressions, since the summations are over n′ ̸= n: i.e., they specifically exclude the problematic, degenerate, unperturbed energy eigenstates corresponding to the eigenvalue En. Note that the first-order energy shifts are equivalent to the eigenvalues of the matrix equation (12.98). Time-Independent Perturbation Theory 163 12.7 Linear Stark Effect Returning to the Stark effect, let us examine the effect of an external electric field on the energy levels of the n = 2 states of a hydrogen atom. There are four such states: an l = 0 state, usually referred to as 2S, and three l = 1 states (with m = −1, 0, 1), usually referred to as 2P. All of these states possess the same unperturbed energy, E200 = −e2/(32π ǫ0 a0). As before, the perturbing Hamiltonian is H1 = e |E| z. (12.103) According to the previously determined selection rules (i.e., m′ = m, and l′ = l ± 1), this Hamiltonian couples ψ200 and ψ210. Hence, non-degenerate perturbation theory breaks down when applied to these two states. On the other hand, non-degenerate perturbation theory works fine for the ψ211 and ψ21−1 states, since these are not coupled to any other n = 2 states by the perturbing Hamiltonian. In order to apply perturbation theory to the ψ200 and ψ210 states, we have to solve the matrix eigenvalue equation U x = λ x, (12.104) where U is the matrix of the matrix elements of H1 between these states. Thus, U = e |E| 0 ⟨2, 0, 0|z|2, 1, 0⟩ ⟨2, 1, 0|z|2, 0, 0⟩ 0 ! , (12.105) where the rows and columns correspond to ψ200 and ψ210, respectively. Here, we have again made use of the selection rules, which tell us that the matrix element of z between two hydrogen atom states is zero unless the states possess l quantum numbers which differ by unity. It is easily demonstrated, from the exact forms of the 2S and 2P wavefunctions, that ⟨2, 0, 0|z|2, 1, 0⟩= ⟨2, 1, 0|z|2, 0, 0⟩= 3 a0. 
(12.106) It can be seen, by inspection, that the eigenvalues of U are λ1 = 3 e a0 |E| and λ2 = −3 e a0 |E|. The corresponding normalized eigenvectors are x1 =  1/ √ 2 1/ √ 2  , (12.107) x2 =  1/ √ 2 −1/ √ 2  . (12.108) It follows that the simultaneous eigenstates of H0 and H1 take the form ψ1 = ψ200 + ψ210 √ 2 , (12.109) ψ2 = ψ200 −ψ210 √ 2 . (12.110) 164 QUANTUM MECHANICS In the absence of an external electric field, both of these states possess the same energy, E200. The first-order energy shifts induced by an external electric field are given by ∆E1 = +3 e a0 |E|, (12.111) ∆E2 = −3 e a0 |E|. (12.112) Thus, in the presence of an electric field, the energies of states 1 and 2 are shifted upwards and downwards, respectively, by an amount 3 e a0 |E|. These states are orthogonal linear combinations of the original ψ200 and ψ210 states. Note that the energy shifts are linear in the electric field-strength, so this effect—which is known as the linear Stark effect—is much larger than the quadratic effect described in Sect. 12.5. Note, also, that the energies of the ψ211 and ψ21−1 states are not affected by the electric field to first-order. Of course, to second-order the energies of these states are shifted by an amount which depends on the square of the electric field-strength (see Sect. 12.5). 12.8 Fine Structure of Hydrogen According to special relativity, the kinetic energy (i.e., the difference between the total energy and the rest mass energy) of a particle of rest mass m and momentum p is T = q p2 c2 + m2 c4 −m c2. (12.113) In the non-relativistic limit p ≪m c, we can expand the square-root in the above expres-sion to give T = p2 2 m " 1 −1 4  p m c 2 + O  p m c 4# . (12.114) Hence, T ≃p2 2 m − p4 8 m3 c2. (12.115) Of course, we recognize the first term on the right-hand side of this equation as the stan-dard non-relativistic expression for the kinetic energy. The second term is the lowest-order relativistic correction to this energy. 
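Equation (12.115) is easy to sanity-check numerically: for p ≪ mc the truncated series p2/2m − p4/(8 m3 c2) should agree with the exact kinetic energy to a relative accuracy of order (p/mc)4. A quick sketch (in units m = c = 1):

```python
import math

def T_exact(p, m=1.0, c=1.0):
    """Relativistic kinetic energy, Eq. (12.113)."""
    return math.sqrt(p**2 * c**2 + m**2 * c**4) - m * c**2

def T_series(p, m=1.0, c=1.0):
    """Two-term non-relativistic expansion, Eq. (12.115)."""
    return p**2 / (2 * m) - p**4 / (8 * m**3 * c**2)

p = 0.1   # p/(mc) = 0.1: mildly relativistic
rel_err = abs(T_series(p) - T_exact(p)) / T_exact(p)
print(rel_err)   # ~ (p/mc)^4 / 8, i.e. about 1e-5
```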
Let us consider the effect of this type of correction on the energy levels of a hydrogen atom. So, the unperturbed Hamiltonian is given by Eq. (12.58), and the perturbing Hamiltonian takes the form H1 = −p4/(8 me³ c2). (12.116) Now, according to standard first-order perturbation theory (see Sect. 12.4), the lowest-order relativistic correction to the energy of a hydrogen atom state characterized by the standard quantum numbers n, l, and m is given by ∆Enlm = ⟨n, l, m|H1|n, l, m⟩= −[1/(8 me³ c2)] ⟨n, l, m|p4|n, l, m⟩ = −[1/(8 me³ c2)] ⟨n, l, m|p2 p2|n, l, m⟩. (12.117) However, Schrödinger’s equation for an unperturbed hydrogen atom can be written p2 ψn,l,m = 2 me (En −V) ψn,l,m, (12.118) where V = −e2/(4πǫ0 r). Since p2 is an Hermitian operator, it follows that ∆Enlm = −[1/(2 me c2)] ⟨n, l, m|(En −V)2|n, l, m⟩ = −[1/(2 me c2)] [En² −2 En ⟨n, l, m|V|n, l, m⟩+ ⟨n, l, m|V2|n, l, m⟩] = −[1/(2 me c2)] [En² + 2 En (e2/4πǫ0) ⟨1/r⟩+ (e2/4πǫ0)2 ⟨1/r2⟩]. (12.119) It follows from Eqs. (9.74) and (9.75) that ∆Enlm = −[1/(2 me c2)] [En² + 2 En (e2/4πǫ0)/(n2 a0) + (e2/4πǫ0)2/((l + 1/2) n3 a0²)]. (12.120) Finally, making use of Eqs. (9.55), (9.57), and (9.58), the above expression reduces to ∆Enlm = (En α2/n2) [n/(l + 1/2) −3/4], (12.121) where α = e2/(4πǫ0 ¯h c) ≃1/137 (12.122) is the dimensionless fine structure constant. Note that the above derivation implicitly assumes that p4 is an Hermitian operator. It turns out that this is not the case for l = 0 states. However, somewhat fortuitously, our calculation still gives the correct answer when l = 0. Note, also, that we are able to use non-degenerate perturbation theory in the above calculation, using the ψnlm eigenstates, because the perturbing Hamiltonian commutes with both L2 and Lz. It follows that there is no coupling between states with different l and m quantum numbers. Hence, all coupled states have different n quantum numbers, and therefore have different energies.
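The step from Eq. (12.119) to Eq. (12.120) uses the hydrogenic expectation values ⟨1/r⟩ = 1/(n2 a0) and ⟨1/r2⟩ = 1/[(l + 1/2) n3 a02] [Eqs. (9.74) and (9.75)]. These can be confirmed symbolically using sympy's built-in hydrogen radial functions (a sketch, working in units a0 = 1):

```python
import sympy as sp
from sympy.physics.hydrogen import R_nl

r = sp.Symbol('r', positive=True)

def expval(f, n, l):
    """<n,l| f(r) |n,l> using the normalized radial function R_nl (a0 = 1)."""
    R = R_nl(n, l, r)
    return sp.integrate(R**2 * f * r**2, (r, 0, sp.oo))

n, l = 2, 1
inv_r = expval(1 / r, n, l)      # expect 1/n^2     = 1/4
inv_r2 = expval(1 / r**2, n, l)  # expect 1/((l+1/2) n^3) = 1/12

print(inv_r, inv_r2)
```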
Now, an electron in a hydrogen atom experiences an electric field E = e r 4πǫ0 r3 (12.123) due to the charge on the nucleus. However, according to electromagnetic theory, a non-relativistic particle moving in a electric field E with velocity v also experiences an effective magnetic field B = −v × E c2 . (12.124) 166 QUANTUM MECHANICS Recall, that an electron possesses a magnetic moment [see Eqs. (10.58) and (10.59)] µ = −e me S (12.125) due to its spin angular momentum, S. We, therefore, expect an additional contribution to the Hamiltonian of a hydrogen atom of the form [see Eq. (10.60)] H1 = −µ · B = − e2 4πǫ0 me c2 r3 v × r · S = e2 4πǫ0 m 2 e c2 r3 L · S, (12.126) where L = me r × v is the electron’s orbital angular momentum. This effect is known as spin-orbit coupling. It turns out that the above expression is too large, by a factor 2, due to an obscure relativistic effect known as Thomas precession. Hence, the true spin-orbit correction to the Hamiltonian is H1 = e2 8π ǫ0 m 2 e c2 r3 L · S. (12.127) Let us now apply perturbation theory to the hydrogen atom, using the above expression as the perturbing Hamiltonian. Now J = L + S (12.128) is the total angular momentum of the system. Hence, J2 = L2 + S2 + 2 L · S, (12.129) giving L · S = 1 2 (J2 −L2 −S2). (12.130) Recall, from Sect. 11.2, that whilst J2 commutes with both L2 and S2, it does not commute with either Lz or Sz. It follows that the perturbing Hamiltonian (12.127) also commutes with both L2 and S2, but does not commute with either Lz or Sz. Hence, the simulta-neous eigenstates of the unperturbed Hamiltonian (12.58) and the perturbing Hamilto-nian (12.127) are the same as the simultaneous eigenstates of L2, S2, and J2 discussed in Sect. 11.3. It is important to know this since, according to Sect. 12.6, we can only safely apply perturbation theory to the simultaneous eigenstates of the unperturbed and perturbing Hamiltonians. Adopting the notation introduced in Sect. 
11.3, let ψ(2) l,s;j,mj be a simultaneous eigenstate of L2, S2, J2, and Jz corresponding to the eigenvalues L2 ψ(2) l,s;j,mj = l (l + 1) ¯ h2 ψ(2) l,s;j,mj, (12.131) Time-Independent Perturbation Theory 167 S2 ψ(2) l,s;j,mj = s (s + 1) ¯ h2 ψ(2) l,s;j,mj, (12.132) J2 ψ(2) l,s;j,mj = j (j + 1) ¯ h2 ψ(2) l,s;j,mj, (12.133) Jz ψ(2) l,s;j,mj = mj ¯ h ψ(2) l,s;j,mj. (12.134) According to standard first-order perturbation theory, the energy-shift induced in such a state by spin-orbit coupling is given by ∆El,1/2;j,mj = ⟨l, 1/2; j, mj|H1|l, 1/2; j, mj⟩ = e2 16π ǫ0 m 2 e c2 1, 1/2; j, mj J2 −L2 −S2 r3 l, 1/2; j, mj + = e2 ¯ h2 16π ǫ0 m 2 e c2 [j (j + 1) −l (l + 1) −3/4] 1 r3 + . (12.135) Here, we have made use of the fact that s = 1/2 for an electron. It follows from Eq. (9.76) that ∆El,1/2;j,mj = e2 ¯ h2 16π ǫ0 m 2 e c2 a 3 0 "j (j + 1) −l (l + 1) −3/4 l (l + 1/2) (l + 1) n3 # , (12.136) where n is the radial quantum number. Finally, making use of Eqs. (9.55), (9.57), and (9.58), the above expression reduces to ∆El,1/2;j,mj = En α2 n2 "n {3/4 + l (l + 1) −j (j + 1)} 2 l (l + 1/2) (l + 1) # , (12.137) where α is the fine structure constant. A comparison of this expression with Eq. (12.121) reveals that the energy-shift due to spin-orbit coupling is of the same order of magnitude as that due to the lowest-order relativistic correction to the Hamiltonian. We can add these two corrections together (making use of the fact that j = l±1/2 for a hydrogen atom—see Sect. 11.3) to obtain a net energy-shift of ∆El,1/2;j,mj = En α2 n2 n j + 1/2 −3 4 ! . (12.138) This modification of the energy levels of a hydrogen atom due to a combination of relativity and spin-orbit coupling is known as fine structure. Now, it is conventional to refer to the energy eigenstates of a hydrogen atom which are also simultaneous eigenstates of J2 as nLj states, where n is the radial quantum number, L = (S, P, D, F, · · ·) as l = (0, 1, 2, 3, · · ·), and j is the total angular momentum quantum number. 
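That the relativistic shift (12.121) and the spin-orbit shift (12.137) combine into the compact form (12.138) for both j = l + 1/2 and j = l − 1/2 is a purely algebraic fact, and worth verifying. A sympy sketch, with all shifts written in units of En α2/n2:

```python
import sympy as sp

n, l = sp.symbols('n l', positive=True)

def rel_shift(l):
    # Eq. (12.121) in units of En alpha^2 / n^2.
    return n / (l + sp.Rational(1, 2)) - sp.Rational(3, 4)

def so_shift(l, j):
    # Eq. (12.137) in the same units.
    num = sp.Rational(3, 4) + l * (l + 1) - j * (j + 1)
    return n * num / (2 * l * (l + sp.Rational(1, 2)) * (l + 1))

def combined(j):
    # Eq. (12.138).
    return n / (j + sp.Rational(1, 2)) - sp.Rational(3, 4)

js = (l + sp.Rational(1, 2), l - sp.Rational(1, 2))
diffs = [sp.simplify(rel_shift(l) + so_shift(l, j) - combined(j)) for j in js]
print(diffs)   # both differences simplify to 0
```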
Let us examine the effect of the fine structure energy-shift (12.138) on these eigenstates for n = 1, 2, and 3.

For n = 1, in the absence of fine structure, there are two degenerate 1S1/2 states. According to Eq. (12.138), the fine structure induced energy-shifts of these two states are the same. Hence, fine structure does not break the degeneracy of the two 1S1/2 states of hydrogen.

For n = 2, in the absence of fine structure, there are two 2S1/2 states, two 2P1/2 states, and four 2P3/2 states, all of which are degenerate. According to Eq. (12.138), the fine structure induced energy-shifts of the 2S1/2 and 2P1/2 states are the same as one another, but are different from the induced energy-shift of the 2P3/2 states. Hence, fine structure does not break the degeneracy of the 2S1/2 and 2P1/2 states of hydrogen, but does break the degeneracy of these states relative to the 2P3/2 states.

For n = 3, in the absence of fine structure, there are two 3S1/2 states, two 3P1/2 states, four 3P3/2 states, four 3D3/2 states, and six 3D5/2 states, all of which are degenerate. According to Eq. (12.138), fine structure breaks these states into three groups: the 3S1/2 and 3P1/2 states, the 3P3/2 and 3D3/2 states, and the 3D5/2 states. The effect of the fine structure energy-shift on the n = 1, 2, and 3 energy states of a hydrogen atom is illustrated in Fig. 12.1.

[Figure 12.1: Effect of the fine structure energy-shift on the n = 1, 2, and 3 states of a hydrogen atom. Not to scale.]

Note, finally, that although expression (12.137) does not have a well-defined value for l = 0, when added to expression (12.121) it, somewhat fortuitously, gives rise to an expression (12.138) which is both well-defined and correct when l = 0.
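As a quick numerical check of Eq. (12.138), the sketch below (plain Python; the constants are standard values, not taken from the text) evaluates the fine-structure shifts of the n = 2 levels and confirms the pattern described above: the 2S1/2 and 2P1/2 states remain degenerate, while the 2P3/2 states lie higher by about 4.5 × 10⁻⁵ eV.

```python
# Numerical check of the fine-structure shift, Eq. (12.138):
#   dE = (E_n * alpha^2 / n^2) * (n/(j + 1/2) - 3/4)
# Constants are standard values, not taken from the text.
ALPHA = 7.2973525693e-3    # fine structure constant
E1 = -13.605693            # hydrogen ground-state energy (eV)

def fine_structure_shift(n, j):
    """Fine-structure energy-shift (eV) of a hydrogen state with quantum numbers n, j."""
    E_n = E1 / n ** 2
    return (E_n * ALPHA ** 2 / n ** 2) * (n / (j + 0.5) - 0.75)

# n = 2: the 2S1/2 and 2P1/2 states share j = 1/2; the 2P3/2 states have j = 3/2.
dE_j_half = fine_structure_shift(2, 0.5)
dE_j_three_half = fine_structure_shift(2, 1.5)
split = dE_j_three_half - dE_j_half
print(f"2S1/2, 2P1/2 shift: {dE_j_half:.3e} eV")
print(f"2P3/2 shift:        {dE_j_three_half:.3e} eV")
print(f"splitting:          {split:.3e} eV")
```

Because (12.138) depends only on n and j, states of the same n and j are shifted identically, which is exactly why the 2S1/2 and 2P1/2 levels stay degenerate in this calculation.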
12.9 Zeeman Effect

Consider a hydrogen atom placed in a uniform z-directed external magnetic field of strength B. The modification to the Hamiltonian of the system is

$$H_1 = -\boldsymbol{\mu}\cdot\mathbf{B}, \tag{12.139}$$

where

$$\boldsymbol{\mu} = -\frac{e}{2\,m_e}\,(\mathbf{L} + 2\,\mathbf{S}) \tag{12.140}$$

is the total electron magnetic moment, including both orbital and spin contributions [see Eqs. (10.57)–(10.59)]. Thus,

$$H_1 = \frac{e\,B}{2\,m_e}\,(L_z + 2\,S_z). \tag{12.141}$$

Suppose that the applied magnetic field is much weaker than the atom's internal magnetic field (12.124). Since the magnitude of the internal field is about 25 tesla, this is a fairly reasonable assumption. In this situation, we can treat $H_1$ as a small perturbation acting on the simultaneous eigenstates of the unperturbed Hamiltonian and the fine structure Hamiltonian. Of course, these states are the simultaneous eigenstates of $L^2$, $S^2$, $J^2$, and $J_z$ (see the previous section). Hence, from standard perturbation theory, the first-order energy-shift induced by a weak external magnetic field is

$$\Delta E_{l,1/2;j,m_j} = \langle l,1/2;j,m_j|H_1|l,1/2;j,m_j\rangle = \frac{e\,B}{2\,m_e}\left(m_j\,\hbar + \langle l,1/2;j,m_j|S_z|l,1/2;j,m_j\rangle\right), \tag{12.142}$$

since $J_z = L_z + S_z$. Now, according to Eqs. (11.47) and (11.48),

$$\psi^{(2)}_{j,m_j} = \left(\frac{j+m_j}{2\,l+1}\right)^{1/2}\psi^{(1)}_{m_j-1/2,1/2} + \left(\frac{j-m_j}{2\,l+1}\right)^{1/2}\psi^{(1)}_{m_j+1/2,-1/2} \tag{12.143}$$

when $j = l+1/2$, and

$$\psi^{(2)}_{j,m_j} = \left(\frac{j+1-m_j}{2\,l+1}\right)^{1/2}\psi^{(1)}_{m_j-1/2,1/2} - \left(\frac{j+1+m_j}{2\,l+1}\right)^{1/2}\psi^{(1)}_{m_j+1/2,-1/2} \tag{12.144}$$

when $j = l-1/2$. Here, the $\psi^{(1)}_{m,m_s}$ are the simultaneous eigenstates of $L^2$, $S^2$, $L_z$, and $S_z$, whereas the $\psi^{(2)}_{j,m_j}$ are the simultaneous eigenstates of $L^2$, $S^2$, $J^2$, and $J_z$. In particular,

$$S_z\,\psi^{(1)}_{m,\pm 1/2} = \pm\frac{\hbar}{2}\,\psi^{(1)}_{m,\pm 1/2}. \tag{12.145}$$

It follows from Eqs. (12.143)–(12.145), and the orthonormality of the $\psi^{(1)}$, that

$$\langle l,1/2;j,m_j|S_z|l,1/2;j,m_j\rangle = \pm\frac{m_j\,\hbar}{2\,l+1} \tag{12.146}$$

when $j = l \pm 1/2$.
Thus, the energy-shift induced when a hydrogen atom is placed in an external magnetic field—which is known as the Zeeman effect—becomes

$$\Delta E_{l,1/2;j,m_j} = \mu_B\,B\,m_j\left[1 \pm \frac{1}{2\,l+1}\right], \tag{12.147}$$

where the ± signs correspond to $j = l \pm 1/2$. Here,

$$\mu_B = \frac{e\,\hbar}{2\,m_e} = 5.788\times 10^{-5}\ \mathrm{eV/T} \tag{12.148}$$

is known as the Bohr magneton. Of course, the quantum number $m_j$ takes values differing by unity in the range $-j$ to $j$. It thus follows from Eq. (12.147) that the Zeeman effect splits degenerate states characterized by $j = l+1/2$ into $2\,j+1$ equally spaced states of interstate spacing

$$\Delta E_{j=l+1/2} = \mu_B\,B\,\frac{2\,l+2}{2\,l+1}. \tag{12.149}$$

Likewise, the Zeeman effect splits degenerate states characterized by $j = l-1/2$ into $2\,j+1$ equally spaced states of interstate spacing

$$\Delta E_{j=l-1/2} = \mu_B\,B\,\frac{2\,l}{2\,l+1}. \tag{12.150}$$

In conclusion, in the presence of a weak external magnetic field, the two degenerate 1S1/2 states of the hydrogen atom are split by $2\,\mu_B\,B$. Likewise, the four degenerate 2S1/2 and 2P1/2 states are split by $(2/3)\,\mu_B\,B$, whereas the four degenerate 2P3/2 states are split by $(4/3)\,\mu_B\,B$. This is illustrated in Fig. 12.2. Note, finally, that since the $\psi^{(2)}_{j,m_j}$ are not simultaneous eigenstates of the unperturbed and perturbing Hamiltonians, Eqs. (12.149) and (12.150) can only be regarded as the expectation values of the magnetic-field induced energy-shifts. However, as long as the external magnetic field is much weaker than the internal magnetic field, these expectation values are almost identical to the actual measured values of the energy-shifts.

12.10 Hyperfine Structure

The proton in a hydrogen atom is a spin one-half charged particle, and therefore possesses a magnetic moment. By analogy with Eq. (10.58), we can write

$$\boldsymbol{\mu}_p = \frac{g_p\,e}{2\,m_p}\,\mathbf{S}_p, \tag{12.151}$$

where $\boldsymbol{\mu}_p$ is the proton magnetic moment, $\mathbf{S}_p$ is the proton spin, and the proton gyromagnetic ratio $g_p$ is found experimentally to take the value 5.59.
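Eq. (12.151) implies that the proton magnetic moment is tiny on the atomic scale. A quick estimate (plain Python; the g-factor and mass ratio are standard values, not taken from the text):

```python
# Rough size of the proton magnetic moment from Eq. (12.151):
#   mu_p ~ g_p e hbar / (4 m_p),  versus the Bohr magneton  mu_B = e hbar / (2 m_e).
# The g-factor and mass ratio below are standard values, not from the text.
G_P = 5.59                   # proton gyromagnetic ratio
ME_OVER_MP = 1.0 / 1836.15   # electron-to-proton mass ratio

mu_p_over_mu_B = (G_P / 2.0) * ME_OVER_MP
print(f"mu_p / mu_B = {mu_p_over_mu_B:.3e}")
```

The ratio comes out at roughly 1.5 × 10⁻³, confirming the suppression by a factor of order $m_e/m_p$ noted in the text.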
[Figure 12.2: The Zeeman effect for the n = 1 and 2 states of a hydrogen atom. Here, $\epsilon = \mu_B\,B$. Not to scale.]

Note that the magnetic moment of a proton is much smaller (by a factor of order $m_e/m_p$) than that of an electron. According to classical electromagnetism, the proton's magnetic moment generates a magnetic field of the form

$$\mathbf{B} = \frac{\mu_0}{4\pi\,r^3}\left[3\,(\boldsymbol{\mu}_p\cdot\mathbf{e}_r)\,\mathbf{e}_r - \boldsymbol{\mu}_p\right] + \frac{2\,\mu_0}{3}\,\boldsymbol{\mu}_p\,\delta^3(\mathbf{r}), \tag{12.152}$$

where $\mathbf{e}_r = \mathbf{r}/r$. We can understand the origin of the delta-function term in the above expression by thinking of the proton as a tiny current loop centred on the origin. All magnetic field-lines generated by the loop must pass through the loop. Hence, if the size of the loop goes to zero then the field will be infinite at the origin, and this contribution is what is reflected by the delta-function term. Now, the Hamiltonian of the electron in the magnetic field generated by the proton is simply

$$H_1 = -\boldsymbol{\mu}_e\cdot\mathbf{B}, \tag{12.153}$$

where

$$\boldsymbol{\mu}_e = -\frac{e}{m_e}\,\mathbf{S}_e. \tag{12.154}$$

Here, $\boldsymbol{\mu}_e$ is the electron magnetic moment [see Eqs. (10.58) and (10.59)], and $\mathbf{S}_e$ the electron spin. Thus, the perturbing Hamiltonian is written

$$H_1 = \frac{\mu_0\,g_p\,e^2}{8\pi\,m_p\,m_e}\,\frac{3\,(\mathbf{S}_p\cdot\mathbf{e}_r)\,(\mathbf{S}_e\cdot\mathbf{e}_r) - \mathbf{S}_p\cdot\mathbf{S}_e}{r^3} + \frac{\mu_0\,g_p\,e^2}{3\,m_p\,m_e}\,\mathbf{S}_p\cdot\mathbf{S}_e\,\delta^3(\mathbf{r}). \tag{12.155}$$

Note that, since we have neglected coupling between the proton spin and the magnetic field generated by the electron's orbital motion, the above expression is only valid for $l = 0$ states. According to standard first-order perturbation theory, the energy-shift induced by spin-spin coupling between the proton and the electron is the expectation value of the perturbing Hamiltonian. Hence,

$$\Delta E = \frac{\mu_0\,g_p\,e^2}{8\pi\,m_p\,m_e}\left\langle\frac{3\,(\mathbf{S}_p\cdot\mathbf{e}_r)\,(\mathbf{S}_e\cdot\mathbf{e}_r) - \mathbf{S}_p\cdot\mathbf{S}_e}{r^3}\right\rangle + \frac{\mu_0\,g_p\,e^2}{3\,m_p\,m_e}\,\langle\mathbf{S}_p\cdot\mathbf{S}_e\rangle\,|\psi(0)|^2. \tag{12.156}$$

For the ground-state of hydrogen, which is spherically symmetric, the first term in the above expression vanishes by symmetry.
Moreover, it is easily demonstrated that |ψ000(0)|2 = 1/(π a 3 0). Thus, we obtain ∆E = µ0 gp e2 3π mp me a 3 0 ⟨Sp · Se⟩. (12.157) Let S = Se + Sp (12.158) be the total spin. We can show that Sp · Se = 1 2 (S2 −S 2 e −S 2 p). (12.159) Thus, the simultaneous eigenstates of the perturbing Hamiltonian and the main Hamilto-nian are the simultaneous eigenstates of S 2 e , S 2 p, and S2. However, both the proton and the electron are spin one-half particles. According to Sect. 11.4, when two spin one-half particles are combined (in the absence of orbital angular momentum) the net state has either spin 1 or spin 0. In fact, there are three spin 1 states, known as triplet states, and a single spin 0 state, known as the singlet state. For all states, the eigenvalues of S 2 e and S 2 p are (3/4) ¯ h2. The eigenvalue of S2 is 0 for the singlet state, and 2 ¯ h2 for the triplet states. Hence, ⟨Sp · Se⟩= −3 4 ¯ h2 (12.160) for the singlet state, and ⟨Sp · Se⟩= 1 4 ¯ h2 (12.161) for the triplet states. It follows, from the above analysis, that spin-spin coupling breaks the degeneracy of the two 1S1/2 states in hydrogen, lifting the energy of the triplet configuration, and lowering that of the singlet. This splitting is known as hyperfine structure. The net energy difference between the singlet and the triplet states is ∆E = 8 3 gp me mp α2 E0 = 5.88 × 10−6 eV, (12.162) Time-Independent Perturbation Theory 173 where E0 = 13.6 eV is the (magnitude of the) ground-state energy. Note that the hyperfine energy-shift is much smaller, by a factor me/mp, than a typical fine structure energy-shift. If we convert the above energy into a wavelength then we obtain λ = 21.1 cm. (12.163) This is the wavelength of the radiation emitted by a hydrogen atom which is collisionally excited from the singlet to the triplet state, and then decays back to the lower energy singlet state. 
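The numbers quoted in Eqs. (12.162) and (12.163) are easy to reproduce. A sketch (plain Python; all constants are standard values, not taken from the text):

```python
# Hyperfine singlet-triplet splitting of hydrogen, Eq. (12.162):
#   dE = (8/3) g_p (m_e/m_p) alpha^2 E_0,   lambda = h c / dE.
# All constants are standard values, not taken from the text.
G_P = 5.59                # proton gyromagnetic ratio
ME_OVER_MP = 1.0 / 1836.15
ALPHA = 7.2973525693e-3   # fine structure constant
E0 = 13.605693            # magnitude of the ground-state energy (eV)
HC = 1.23984193e-6        # h * c (eV m)

dE = (8.0 / 3.0) * G_P * ME_OVER_MP * ALPHA ** 2 * E0
wavelength = HC / dE
print(f"dE     = {dE:.3e} eV")
print(f"lambda = {100 * wavelength:.1f} cm")
```

The splitting evaluates to about 5.88 × 10⁻⁶ eV, corresponding to a photon wavelength of roughly 21 cm, in agreement with the text.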
The 21 cm line is famous in radio astronomy because it was used to map out the spiral structure of our galaxy in the 1950s.

13 Time-Dependent Perturbation Theory

13.1 Introduction

Consider a system whose Hamiltonian can be written

$$H(t) = H_0 + H_1(t). \tag{13.1}$$

Here, $H_0$ is again a simple time-independent Hamiltonian whose eigenvalues and eigenstates are known exactly. However, $H_1$ now represents a small time-dependent external perturbation. Let the eigenstates of $H_0$ take the form

$$H_0\,\psi_m = E_m\,\psi_m. \tag{13.2}$$

We know (see Sect. 4.12) that if the system is in one of these eigenstates then, in the absence of an external perturbation, it remains in this state for ever. However, the presence of a small time-dependent perturbation can, in principle, give rise to a finite probability that a system initially in some eigenstate $\psi_n$ of the unperturbed Hamiltonian is found in some other eigenstate at a subsequent time (since $\psi_n$ is no longer an exact eigenstate of the total Hamiltonian). In other words, a time-dependent perturbation can cause the system to make transitions between its unperturbed energy eigenstates. Let us investigate this effect.

13.2 Preliminary Analysis

Suppose that at $t = 0$ the state of the system is represented by

$$\psi(0) = \sum_m c_m\,\psi_m, \tag{13.3}$$

where the $c_m$ are complex numbers. Thus, the initial state is some linear superposition of the unperturbed energy eigenstates. In the absence of the time-dependent perturbation, the time evolution of the system is simply (see Sect. 4.12)

$$\psi(t) = \sum_m c_m\,\exp(-\mathrm{i}\,E_m\,t/\hbar)\,\psi_m. \tag{13.4}$$

Now, the probability of finding the system in state $n$ at time $t$ is

$$P_n(t) = |\langle\psi_n|\psi\rangle|^2 = |c_n\,\exp(-\mathrm{i}\,E_n\,t/\hbar)|^2 = |c_n|^2 = P_n(0), \tag{13.5}$$

since the unperturbed eigenstates are assumed to be orthonormal: i.e.,

$$\langle n|m\rangle = \delta_{nm}.$$
(13.6) 176 QUANTUM MECHANICS Clearly, with H1 = 0, the probability of finding the system in state ψn at time t is exactly the same as the probability of finding the system in this state at the initial time, t = 0. However, with H1 ̸= 0, we expect Pn—and, hence, cn—to vary with time. Thus, we can write ψ(t) = X m cm(t) exp (−i Em t/¯ h) ψm, (13.7) where Pn(t) = |cn(t)|2. Here, we have carefully separated the fast phase oscillation of the eigenstates, which depends on the unperturbed Hamiltonian, from the slow variation of the amplitudes cn(t), which depends entirely on the perturbation (i.e., cn is constant in time if H1 = 0). Note that in Eq. (13.7) the eigenstates ψm are time-independent (they are actually the eigenstates of H0 evaluated at the initial time, t = 0). The time-dependent Schr¨ odinger equation [see Eq. (4.63)] yields i ¯ h ∂ψ(t) ∂t = H(t) ψ(t) = [H0 + H1(t)] ψ(t). (13.8) Now, it follows from Eq. (13.7) that (H0 + H1) ψ = X m cm exp (−i Em t/¯ h) (Em + H1) ψm. (13.9) We also have i ¯ h ∂ψ ∂t = X m i ¯ h dcm dt + cm Em ! exp (−i Em t/¯ h) ψm, (13.10) since the ψm are time-independent. According to Eq. (13.8), we can equate the right-hand sides of the previous two equations to obtain X m i ¯ h dcm dt exp (−i Em t/¯ h) ψm = X m cm exp (−i Em t/¯ h) H1 ψm. (13.11) Projecting out the component of the above equation which is proportional to ψn, using Eq. (13.6), we obtain i ¯ h dcn(t) dt = X m Hnm(t) exp ( i ωnm t) cm(t), (13.12) where Hnm(t) = ⟨n|H1(t)|m⟩, (13.13) and ωnm = En −Em ¯ h . (13.14) Suppose that there are N linearly independent eigenstates of the unperturbed Hamilto-nian. According to Eqs. (13.12), the time-dependence of the set of N coefficients cn, which specify the probabilities of finding the system in these eigenstates at time t, is determined Time-Dependent Perturbation Theory 177 by N coupled first-order differential equations. Note that Eqs. (13.12) are exact—we have made no approximations at this stage. 
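Because Eqs. (13.12) are exact, they can also be integrated numerically. The sketch below (plain Python; the coupling strength, detuning, and integration time are made-up test values) integrates the two-state version of these equations with a fourth-order Runge-Kutta scheme, checks that total probability is conserved, and compares the result against Rabi's formula (13.26):

```python
import cmath
import math

# Direct numerical integration of the exact amplitude equations (13.12),
# specialized to two states with the sinusoidal coupling of Eqs. (13.18)-(13.20):
#   i dc1/dt = g exp(+i (w - w21) t) c2
#   i dc2/dt = g exp(-i (w - w21) t) c1
# (hbar absorbed into the frequencies; g, detuning, t_end are made-up test values).
def integrate(g, detuning, t_end, steps=20000):
    """Fourth-order Runge-Kutta integration of the coupled amplitudes."""
    def deriv(t, c1, c2):
        phase = cmath.exp(1j * detuning * t)
        return (-1j * g * phase * c2, -1j * g * c1 / phase)

    dt = t_end / steps
    t, c1, c2 = 0.0, 1.0 + 0j, 0.0 + 0j
    for _ in range(steps):
        k1 = deriv(t, c1, c2)
        k2 = deriv(t + dt / 2, c1 + dt / 2 * k1[0], c2 + dt / 2 * k1[1])
        k3 = deriv(t + dt / 2, c1 + dt / 2 * k2[0], c2 + dt / 2 * k2[1])
        k4 = deriv(t + dt, c1 + dt * k3[0], c2 + dt * k3[1])
        c1 += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        c2 += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return c1, c2

g, detuning, t_end = 1.0, 0.5, 1.5
c1, c2 = integrate(g, detuning, t_end)
p1, p2 = abs(c1) ** 2, abs(c2) ** 2
# Rabi's formula (13.26), derived analytically in the next section, for comparison:
Omega = math.sqrt(g ** 2 + detuning ** 2 / 4)
p2_rabi = (g ** 2 / Omega ** 2) * math.sin(Omega * t_end) ** 2
print(p1 + p2, p2, p2_rabi)
```

The numerical probabilities sum to unity to machine-level accuracy, and the transition probability matches the analytic formula, which is a useful sanity check on both.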
Unfortunately, we cannot generally find exact solutions to these equations. Instead, we have to obtain approximate solutions via suitable expansions in small quantities. However, for the particularly simple case of a two-state system (i.e., N = 2), it is actually possible to solve Eqs. (13.12) without approximation. This solution is of great practical importance.

13.3 Two-State System

Consider a system in which the time-independent Hamiltonian possesses two eigenstates, denoted

$$H_0\,\psi_1 = E_1\,\psi_1, \tag{13.15}$$
$$H_0\,\psi_2 = E_2\,\psi_2. \tag{13.16}$$

Suppose, for the sake of simplicity, that the diagonal elements of the interaction Hamiltonian, $H_1$, are zero: i.e.,

$$\langle 1|H_1|1\rangle = \langle 2|H_1|2\rangle = 0. \tag{13.17}$$

The off-diagonal elements are assumed to oscillate sinusoidally at some frequency $\omega$: i.e.,

$$\langle 1|H_1|2\rangle = \langle 2|H_1|1\rangle^* = \gamma\,\hbar\,\exp(\mathrm{i}\,\omega\,t), \tag{13.18}$$

where $\gamma$ and $\omega$ are real. Note that it is only the off-diagonal matrix elements which give rise to the effect in which we are interested—namely, transitions between states 1 and 2. For a two-state system, Eq. (13.12) reduces to

$$\mathrm{i}\,\frac{dc_1}{dt} = \gamma\,\exp[+\mathrm{i}\,(\omega-\omega_{21})\,t]\,c_2, \tag{13.19}$$
$$\mathrm{i}\,\frac{dc_2}{dt} = \gamma\,\exp[-\mathrm{i}\,(\omega-\omega_{21})\,t]\,c_1, \tag{13.20}$$

where $\omega_{21} = (E_2-E_1)/\hbar$. The above two equations can be combined to give a second-order differential equation for the time-variation of the amplitude $c_2$: i.e.,

$$\frac{d^2c_2}{dt^2} + \mathrm{i}\,(\omega-\omega_{21})\,\frac{dc_2}{dt} + \gamma^2\,c_2 = 0. \tag{13.21}$$

Once we have solved for $c_2$, we can use Eq. (13.20) to obtain the amplitude $c_1$. Let us search for a solution in which the system is certain to be in state 1 (and, thus, has no chance of being in state 2) at time $t = 0$. Thus, our initial conditions are $c_1(0) = 1$ and $c_2(0) = 0$. It is easily demonstrated that the appropriate solutions to (13.21) and (13.20) are

$$c_2(t) = \left(\frac{-\mathrm{i}\,\gamma}{\Omega}\right)\exp\!\left[\frac{-\mathrm{i}\,(\omega-\omega_{21})\,t}{2}\right]\sin(\Omega\,t), \tag{13.22}$$

$$c_1(t) = \exp\!\left[\frac{\mathrm{i}\,(\omega-\omega_{21})\,t}{2}\right]\cos(\Omega\,t) - \left[\frac{\mathrm{i}\,(\omega-\omega_{21})}{2\,\Omega}\right]\exp\!\left[\frac{\mathrm{i}\,(\omega-\omega_{21})\,t}{2}\right]\sin(\Omega\,t), \tag{13.23}$$

where

$$\Omega = \sqrt{\gamma^2 + (\omega-\omega_{21})^2/4}.$$
(13.24)

Now, the probability of finding the system in state 1 at time $t$ is simply $P_1(t) = |c_1(t)|^2$. Likewise, the probability of finding the system in state 2 at time $t$ is $P_2(t) = |c_2(t)|^2$. It follows that

$$P_1(t) = 1 - P_2(t), \tag{13.25}$$

$$P_2(t) = \left[\frac{\gamma^2}{\gamma^2 + (\omega-\omega_{21})^2/4}\right]\sin^2(\Omega\,t). \tag{13.26}$$

This result is known as Rabi's formula. Equation (13.26) exhibits all the features of a classic resonance. At resonance, when the oscillation frequency of the perturbation, $\omega$, matches the frequency $\omega_{21}$, we find that

$$P_1(t) = \cos^2(\gamma\,t), \tag{13.27}$$
$$P_2(t) = \sin^2(\gamma\,t). \tag{13.28}$$

According to the above result, the system starts off in state 1 at $t = 0$. After a time interval $\pi/(2\,\gamma)$ it is certain to be in state 2. After a further time interval $\pi/(2\,\gamma)$ it is certain to be in state 1 again, and so on. Thus, the system periodically flip-flops between states 1 and 2 under the influence of the time-dependent perturbation. This implies that the system alternately absorbs and emits energy from the source of the perturbation.

The absorption-emission cycle also takes place away from the resonance, when $\omega \neq \omega_{21}$. However, the amplitude of the oscillation in the coefficient $c_2$ is reduced. This means that the maximum value of $P_2(t)$ is no longer unity, nor is the minimum of $P_1(t)$ zero. In fact, if we plot the maximum value of $P_2(t)$ as a function of the applied frequency, $\omega$, we obtain a resonance curve whose maximum (unity) lies at the resonance, and whose full-width half-maximum (in frequency) is $4\,\gamma$. Thus, if the applied frequency differs from the resonant frequency by substantially more than $2\,\gamma$ then the probability of the system jumping from state 1 to state 2 is always very small. In other words, the time-dependent perturbation is only effective at causing transitions between states 1 and 2 if its frequency of oscillation lies in the approximate range $\omega_{21} \pm 2\,\gamma$. Clearly, the weaker the perturbation (i.e., the smaller $\gamma$ becomes), the narrower the resonance.
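The resonance-curve properties quoted above follow directly from Eq. (13.26), whose maximum over time is $\gamma^2/[\gamma^2 + (\omega-\omega_{21})^2/4]$. A minimal numerical check (plain Python):

```python
# Height of the Rabi resonance curve: from Eq. (13.26), the maximum over t of
# P2(t) is gamma^2 / (gamma^2 + (w - w21)^2 / 4).
def peak_p2(gamma, detuning):
    """Maximum transition probability at a given detuning w - w21."""
    return gamma ** 2 / (gamma ** 2 + detuning ** 2 / 4.0)

gamma = 1.0
print(peak_p2(gamma, 0.0))           # unity at exact resonance
print(peak_p2(gamma, 2.0 * gamma))   # half-maximum at detuning 2*gamma, so FWHM = 4*gamma
print(peak_p2(gamma, 10.0 * gamma))  # far wings: transitions strongly suppressed
```

The half-maximum points sit at detunings of ±2γ, confirming the full-width half-maximum of 4γ stated in the text.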
13.4 Spin Magnetic Resonance

Consider a system consisting of a spin one-half particle with no orbital angular momentum (e.g., a bound electron) placed in a uniform z-directed magnetic field, and then subject to a small time-dependent magnetic field rotating in the x-y plane at the angular frequency $\omega$. Thus,

$$\mathbf{B} = B_0\,\mathbf{e}_z + B_1\left[\cos(\omega\,t)\,\mathbf{e}_x + \sin(\omega\,t)\,\mathbf{e}_y\right], \tag{13.29}$$

where $B_0$ and $B_1$ are constants, with $B_1 \ll B_0$. The rotating magnetic field usually represents the magnetic component of an electromagnetic wave propagating along the z-axis. In this system, the electric component of the wave has no effect. The Hamiltonian is written

$$H = -\boldsymbol{\mu}\cdot\mathbf{B} = H_0 + H_1, \tag{13.30}$$

where

$$H_0 = -\frac{g\,e\,B_0}{2\,m}\,S_z, \tag{13.31}$$

and

$$H_1 = -\frac{g\,e\,B_1}{2\,m}\left[\cos(\omega\,t)\,S_x + \sin(\omega\,t)\,S_y\right]. \tag{13.32}$$

Here, $g$ and $m$ are the gyromagnetic ratio [see Eq. (12.151)] and mass of the particle in question, respectively. The eigenstates of the unperturbed Hamiltonian are the "spin up" and "spin down" states, denoted $\chi_+$ and $\chi_-$, respectively. Of course, these states are the eigenstates of $S_z$ corresponding to the eigenvalues $+\hbar/2$ and $-\hbar/2$, respectively (see Sect. 10). Thus, we have

$$H_0\,\chi_\pm = \mp\frac{g\,e\,\hbar\,B_0}{4\,m}\,\chi_\pm. \tag{13.33}$$

The time-dependent Hamiltonian can be written

$$H_1 = -\frac{g\,e\,B_1}{4\,m}\left[\exp(\mathrm{i}\,\omega\,t)\,S_- + \exp(-\mathrm{i}\,\omega\,t)\,S_+\right], \tag{13.34}$$

where $S_+$ and $S_-$ are the conventional raising and lowering operators for spin angular momentum (see Sect. 10). It follows that

$$\langle +|H_1|+\rangle = \langle -|H_1|-\rangle = 0, \tag{13.35}$$

and

$$\langle -|H_1|+\rangle = \langle +|H_1|-\rangle^* = -\frac{g\,e\,B_1}{4\,m}\,\exp(\mathrm{i}\,\omega\,t). \tag{13.36}$$

It can be seen that this system is exactly the same as the two-state system discussed in the previous subsection, provided that we make the following identifications:

$$\psi_1 \rightarrow \chi_+, \tag{13.37}$$
$$\psi_2 \rightarrow \chi_-, \tag{13.38}$$
$$\omega_{21} \rightarrow \frac{g\,e\,B_0}{2\,m}, \tag{13.39}$$
$$\gamma \rightarrow -\frac{g\,e\,B_1}{4\,m}. \tag{13.40}$$

The resonant frequency, $\omega_{21}$, is simply the spin precession frequency in a uniform magnetic field of strength $B_0$ (see Sect. 10.6).
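For a bound electron (g ≈ 2), the resonant frequency of Eq. (13.39) works out to about 28 GHz per tesla. A sketch (plain Python; the constants are standard values, not taken from the text):

```python
import math

# Spin resonance frequency w21 = g e B0 / (2 m) for an electron (g ~ 2) in a
# 1 tesla field. Constants are standard values, not taken from the text.
E_CHARGE = 1.602176634e-19    # elementary charge (C)
M_E = 9.1093837015e-31        # electron mass (kg)
G_E = 2.0                     # electron g-factor (Dirac value)
B0 = 1.0                      # field strength (T)

omega21 = G_E * E_CHARGE * B0 / (2.0 * M_E)
f21 = omega21 / (2.0 * math.pi)
print(f"f21 = {f21 / 1e9:.1f} GHz")
```

This microwave-range frequency is why electron spin resonance experiments are typically performed in the GHz band for laboratory-scale fields.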
In the absence of the perturbation, the expectation values of $S_x$ and $S_y$ oscillate because of the spin precession, but the expectation value of $S_z$ remains invariant. If we now apply a magnetic perturbation rotating at the resonant frequency then, according to the analysis of the previous subsection, the system undergoes a succession of spin flips, $\chi_+ \leftrightarrow \chi_-$, in addition to the spin precession. We also know that if the oscillation frequency of the applied field is very different from the resonant frequency then there is virtually zero probability of the field triggering a spin flip. The width of the resonance (in frequency) is determined by the strength of the oscillating magnetic perturbation. Experimentalists are able to measure the gyromagnetic ratios of spin one-half particles to a high degree of accuracy by placing the particles in a uniform magnetic field of known strength, and then subjecting them to an oscillating magnetic field whose frequency is gradually scanned. By determining the resonant frequency (i.e., the frequency at which the particles absorb energy from the oscillating field), it is possible to determine the gyromagnetic ratio (assuming that the mass is known).

13.5 Perturbation Expansion

Let us recall the analysis of Sect. 13.2. The $\psi_n$ are the stationary orthonormal eigenstates of the time-independent unperturbed Hamiltonian, $H_0$. Thus, $H_0\,\psi_n = E_n\,\psi_n$, where the $E_n$ are the unperturbed energy levels, and $\langle n|m\rangle = \delta_{nm}$. Now, in the presence of a small time-dependent perturbation to the Hamiltonian, $H_1(t)$, the wavefunction of the system takes the form

$$\psi(t) = \sum_n c_n(t)\,\exp(-\mathrm{i}\,\omega_n\,t)\,\psi_n, \tag{13.41}$$

where $\omega_n = E_n/\hbar$. The amplitudes $c_n(t)$ satisfy

$$\mathrm{i}\,\hbar\,\frac{dc_n}{dt} = \sum_m H_{nm}\,\exp(\mathrm{i}\,\omega_{nm}\,t)\,c_m, \tag{13.42}$$

where $H_{nm}(t) = \langle n|H_1(t)|m\rangle$ and $\omega_{nm} = (E_n-E_m)/\hbar$. Finally, the probability of finding the system in the $n$th eigenstate at time $t$ is simply

$$P_n(t) = |c_n(t)|^2 \tag{13.43}$$

(assuming that, initially, $\sum_n |c_n|^2 = 1$).
Suppose that at t = 0 the system is in some initial energy eigenstate labeled i. Equa-tion (13.42) is, thus, subject to the initial condition cn(0) = δni. (13.44) Let us attempt a perturbative solution of Eq. (13.42) using the ratio of H1 to H0 (or Hnm to ¯ h ωnm, to be more exact) as our expansion parameter. Now, according to (13.42), the cn Time-Dependent Perturbation Theory 181 are constant in time in the absence of the perturbation. Hence, the zeroth-order solution is simply c(0) n (t) = δni. (13.45) The first-order solution is obtained, via iteration, by substituting the zeroth-order solution into the right-hand side of Eq. (13.42). Thus, we obtain i ¯ h dc(1) n dt = X m Hnm exp( i ωnm t) c(0) m = Hni exp( i ωni t), (13.46) subject to the boundary condition c(1) n (0) = 0. The solution to the above equation is c(1) n = −i ¯ h Z t 0 Hni(t′) exp( i ωni t′) dt′. (13.47) It follows that, up to first-order in our perturbation expansion, cn(t) = δni −i ¯ h Z t 0 Hni(t′) exp( i ωni t′) dt′. (13.48) Hence, the probability of finding the system in some final energy eigenstate labeled f at time t, given that it is definitely in a different initial energy eigenstate labeled i at time t = 0, is Pi→f(t) = |cf(t)|2 = −i ¯ h Z t 0 Hfi(t′) exp( i ωfi t′) dt′ 2 . (13.49) Note, finally, that our perturbative solution is clearly only valid provided Pi→f(t) ≪1. (13.50) 13.6 Harmonic Perturbations Consider a (Hermitian) perturbation which oscillates sinusoidally in time. This is usually termed a harmonic perturbation. Such a perturbation takes the form H1(t) = V exp( i ω t) + V† exp(−i ω t), (13.51) where V is, in general, a function of position, momentum, and spin operators. It follows from Eqs. (13.48) and (13.51) that, to first-order, cf(t) = −i ¯ h Z t 0 h Vfi exp( i ω t′) + V† fi exp(−i ω t′) i exp( i ωfi t′) dt′, (13.52) where Vfi = ⟨f|V|i⟩, (13.53) V† fi = ⟨f|V†|i⟩= ⟨i|V|f⟩∗. 
(13.54)

[Figure 13.1: The functions sinc(x) (dashed curve) and sinc²(x) (solid curve). The vertical dotted lines denote the region |x| ≤ π.]

Integration with respect to $t'$ yields

$$c_f(t) = \frac{-\mathrm{i}\,t}{\hbar}\left(V_{fi}\,\exp[\mathrm{i}\,(\omega+\omega_{fi})\,t/2]\,\mathrm{sinc}[(\omega+\omega_{fi})\,t/2] + V^\dagger_{fi}\,\exp[-\mathrm{i}\,(\omega-\omega_{fi})\,t/2]\,\mathrm{sinc}[(\omega-\omega_{fi})\,t/2]\right), \tag{13.55}$$

where

$$\mathrm{sinc}\,x \equiv \frac{\sin x}{x}. \tag{13.56}$$

Now, the function $\mathrm{sinc}(x)$ takes its largest values when $|x| \lesssim \pi$, and is fairly negligible when $|x| \gg \pi$ (see Fig. 13.1). Thus, the first and second terms on the right-hand side of Eq. (13.55) are only non-negligible when

$$|\omega+\omega_{fi}| \lesssim \frac{2\pi}{t}, \tag{13.57}$$

and

$$|\omega-\omega_{fi}| \lesssim \frac{2\pi}{t}, \tag{13.58}$$

respectively. Clearly, as $t$ increases, the ranges in $\omega$ over which these two terms are non-negligible gradually shrink in size. Eventually, when $t \gg 2\pi/|\omega_{fi}|$, these two ranges become strongly non-overlapping. Hence, in this limit, $P_{i\to f} = |c_f|^2$ yields

$$P_{i\to f}(t) = \frac{t^2}{\hbar^2}\left(|V_{fi}|^2\,\mathrm{sinc}^2[(\omega+\omega_{fi})\,t/2] + |V^\dagger_{fi}|^2\,\mathrm{sinc}^2[(\omega-\omega_{fi})\,t/2]\right). \tag{13.59}$$

Now, the function $\mathrm{sinc}^2(x)$ is very strongly peaked at $x = 0$, and is completely negligible for $|x| \gtrsim \pi$ (see Fig. 13.1). It follows that the above expression exhibits a resonant response to the applied perturbation at the frequencies $\omega = \pm\omega_{fi}$. Moreover, the widths of these resonances decrease linearly as time increases. At each of the resonances (i.e., at $\omega = \pm\omega_{fi}$), the transition probability $P_{i\to f}(t)$ varies as $t^2$ [since $\mathrm{sinc}(0) = 1$]. This behaviour is entirely consistent with our earlier result (13.28), for the two-state system, in the limit $\gamma\,t \ll 1$ (recall that our perturbative solution is only valid as long as $P_{i\to f} \ll 1$).

The resonance at $\omega = -\omega_{fi}$ corresponds to

$$E_f - E_i = -\hbar\,\omega. \tag{13.60}$$

This implies that the system loses energy $\hbar\,\omega$ to the perturbing field, whilst making a transition to a final state whose energy is less than the initial state by $\hbar\,\omega$. This process is known as stimulated emission. The resonance at $\omega = \omega_{fi}$ corresponds to

$$E_f - E_i = \hbar\,\omega.$$
(13.61) This implies that the system gains energy ¯ h ω from the perturbing field, whilst making a transition to a final state whose energy is greater than that of the initial state by ¯ h ω. This process is known as absorption. Stimulated emission and absorption are mutually exclusive processes, since the first requires ωfi < 0, whereas the second requires ωfi > 0. Hence, we can write the transition probabilities for both processes separately. Thus, from (13.59), the transition probability for stimulated emission is Pstm i→f(t) = t2 ¯ h2 |V† if| 2 sinc2 [(ω −ωif) t/2] , (13.62) where we have made use of the facts that ωif = −ωfi > 0, and |Vfi|2 = |V† if|2. Likewise, the transition probability for absorption is Pabs i→f(t) = t2 ¯ h2 |V† fi| 2 sinc2 [(ω −ωfi) t/2]. (13.63) 13.7 Electromagnetic Radiation Let us use the above results to investigate the interaction of an atomic electron with clas-sical (i.e., non-quantized) electromagnetic radiation. The unperturbed Hamiltonian of the system is H0 = p2 2 me + V0(r). (13.64) Now, the standard classical prescription for obtaining the Hamiltonian of a particle of charge q in the presence of an electromagnetic field is p → p + q A, (13.65) H → H −q φ, (13.66) 184 QUANTUM MECHANICS where A(r) is the vector potential, and φ(r) the scalar potential. Note that E = −∇φ −∂A ∂t , (13.67) B = ∇× A. (13.68) This prescription also works in quantum mechanics. Thus, the Hamiltonian of an atomic electron placed in an electromagnetic field is H = (p −e A)2 2 me + e φ + V0(r), (13.69) where A and φ are functions of the position operators. The above equation can be written H =  p2 −e A·p −e p·A + e2A2 2 me + e φ + V0(r). (13.70) Now, p·A = A·p, (13.71) provided that we adopt the gauge ∇·A = 0. Hence, H = p2 2 me −e A·p me + e2A2 2 me + e φ + V0(r). (13.72) Suppose that the perturbation corresponds to a linearly polarized, monochromatic, plane-wave. 
In this case, φ = 0, (13.73) A = A0 ǫ cos(k·r −ωt) , (13.74) where k is the wavevector (note that ω = k c), and ǫ a unit vector which specifies the direction of polarization (i.e., the direction of E). Note that ǫ·k = 0. The Hamiltonian becomes H = H0 + H1(t), (13.75) with H0 = p2 2 me + V0(r), (13.76) and H1 ≃−e A·p me , (13.77) where the A2 term, which is second order in A0, has been neglected. The perturbing Hamiltonian can be written H1 = −e A0 ǫ·p 2 me [exp( i k·r −i ωt) + exp(−i k·r + i ωt)] . (13.78) Time-Dependent Perturbation Theory 185 This has the same form as Eq. (13.51), provided that V† = −e A0 ǫ·p 2 me exp( i k·r ). (13.79) It follows from Eqs. (13.53), (13.63), and (13.79) that the transition probability for radiation induced absorption is Pabs i→f(t) = t2 ¯ h2 e2 |A0|2 4 m 2 e |⟨f|ǫ·p exp( i k·r)|i⟩|2 sinc2[(ω −ωfi) t/2]. (13.80) Now, the mean energy density of an electromagnetic wave is u = 1 2 ǫ0 |E0|2 2 + |B0|2 2 µ0 ! = 1 2 ǫ0 |E0|2, (13.81) where E0 = A0 ω and B0 = E0/c are the peak electric and magnetic field-strengths, respec-tively. It thus follows that Pabs i→f(t) = t2 e2 2 ǫ0 ¯ h2 m 2 e ω2 |⟨f|ǫ·p exp( i k·r)|i⟩|2 u sinc2[(ω −ωfi) t/2]. (13.82) Thus, not surprisingly, the transition probability for radiation induced absorption (or stim-ulated emission) is directly proportional to the energy density of the incident radiation. Suppose that the incident radiation is not monochromatic, but instead extends over a range of frequencies. We can write u = Z ∞ −∞ ρ(ω) dω, (13.83) where ρ(ω) dω is the energy density of radiation whose frequencies lie between ω and ω + dω. Equation (13.82) generalizes to Pabs i→f(t) = Z ∞ −∞ t2 e2 2 ǫ0 ¯ h2 m 2 e ω2 |⟨f|ǫ·p exp( i k·r)|i⟩|2 ρ(ω) sinc2[(ω −ωfi) t/2] dω. (13.84) Note, however, that the above expression is only valid provided the radiation in question is incoherent: i.e., there are no phase correlations between waves of different frequen-cies. 
This follows because it is permissible to add the intensities of incoherent radiation, whereas we must always add the amplitudes of coherent radiation. Given that the function sinc2[(ω −ωfi) t/2] is very strongly peaked (see Fig. 13.1) about ω = ωfi (assuming that t ≫2π/ωfi), and Z ∞ −∞ sinc2(x) dx = π, (13.85) the above equation reduces to Pabs i→f(t) = π e2 ρ(ωfi) ǫ0 ¯ h2 m 2 e ω 2 fi |⟨f|ǫ·p exp( i k·r)|i⟩|2 t. (13.86) 186 QUANTUM MECHANICS Note that in integrating over the frequencies of the incoherent radiation we have trans-formed a transition probability which is basically proportional to t2 [see Eq. (13.82)] to one which is proportional to t. As has already been explained, the above expression is only valid when Pabs i→f ≪1. However, the result that wabs i→f ≡dPabs i→f dt = π e2 ρ(ωfi) ǫ0 ¯ h2 m 2 e ω 2 fi |⟨f|ǫ·p exp( i k·r)|i⟩|2 (13.87) is constant in time is universally valid. Here, wabs i→f is the transition probability per unit time interval, otherwise known as the transition rate. Given that the transition rate is constant, we can write (see Cha. 2) Pabs i→f(t + dt) −Pabs i→f(t) = h 1 −Pabs i→f(t) i wabs i→f dt : (13.88) i.e., the probability that the system makes a transition from state i to state f between times t and t + dt is equivalent to the probability that the system does not make a transition between times 0 and t and then makes a transition in a time interval dt—the probabilities of these two events are 1 −Pabs i→f(t) and wabs i→f dt, respectively. It follows that dPabs i→f dt + wabs i→f Pabs i→f = wabs i→f, (13.89) with the initial condition Pabs i→f(0) = 0. The above equation can be solved to give Pabs i→f(t) = 1 −exp  −wabs i→f t  . (13.90) This result is consistent with Eq. (13.86) provided wabs i→f t ≪1: i.e., provided that Pabs i→f ≪1. 
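The integral $\int_{-\infty}^{\infty}\mathrm{sinc}^2(x)\,dx = \pi$ used in this derivation is easy to confirm numerically. A sketch (plain Python; the grid half-width and spacing are arbitrary choices):

```python
import math

def sinc(x):
    """sin(x)/x with the removable singularity filled in."""
    return 1.0 if x == 0.0 else math.sin(x) / x

# Trapezoid-rule check that the integral of sinc^2 over the real line is pi.
# Half-width and spacing are arbitrary choices; the neglected tails contribute
# about 1/a in total, since sinc^2 falls off as 1/x^2.
a, n = 500.0, 500_000
h = 2.0 * a / n
total = 0.5 * (sinc(-a) ** 2 + sinc(a) ** 2)
total += sum(sinc(-a + i * h) ** 2 for i in range(1, n))
integral = h * total
print(integral, math.pi)
```

Because the peak of sinc² has unit height and effective width of order 1, this integral is what converts the strongly peaked sinc² factor into the delta-function-like weight that yields a transition probability linear in t.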
Using similar arguments to the above, the transition probability for stimulated emission can be shown to take the form Pstm i→f(t) = 1 −exp  −wstm i→f t  , (13.91) where the corresponding transition rate is written wstm i→f = π e2 ρ(ωif) ǫ0 ¯ h2 m 2 e ω 2 if |⟨i|ǫ·p exp( i k·r)|f⟩|2 . (13.92) 13.8 Electric Dipole Approximation In general, the wavelength of the type of electromagnetic radiation which induces, or is emitted during, transitions between different atomic energy levels is much larger than the typical size of an atom. Thus, exp( i k·r) = 1 + i k·r + · · · , (13.93) Time-Dependent Perturbation Theory 187 can be approximated by its first term, unity. This approach is known as the electric dipole approximation. It follows that ⟨f|ǫ·p exp( i k·r)|i⟩≃ǫ·⟨f|p|i⟩. (13.94) Now, it is readily demonstrated that [r, H0] = i ¯ h p me , (13.95) so ⟨f|p|i⟩= −i me ¯ h ⟨f|[r, H0]|i⟩= i me ωfi ⟨f|r|i⟩. (13.96) Thus, our previous expressions for the transition rates for radiation induced absorption and stimulated emission reduce to wabs i→f = π ǫ0 ¯ h2 |ǫ·dif| 2 ρ(ωfi), (13.97) wstm i→f = π ǫ0 ¯ h2 |ǫ·dif| 2 ρ(ωif), (13.98) respectively. Here, dif = ⟨f|e r|i⟩ (13.99) is the effective electric dipole moment of the atom when making a transition from state i to state f. Equations (13.97) and (13.98) give the transition rates for absorption and stimulated emission, respectively, induced by a linearly polarized plane-wave. Actually, we are more interested in the transition rates induced by unpolarized isotropic radiation. To obtain these we must average Eqs. (13.97) and (13.98) over all possible polarizations and propagation directions of the wave. To facilitate this process, we can define a set of Cartesian coordi-nates such that the wavevector k, which specifies the direction of wave propagation, points along the z-axis, and the vector dif, which specifies the direction of the atomic dipole mo-ment, lies in the x-z plane. 
It follows that the vector $\boldsymbol{\epsilon}$, which specifies the direction of wave polarization, must lie in the $x$-$y$ plane, since it has to be orthogonal to ${\bf k}$. Thus, we can write

$$\mathbf{k} = (0,\,0,\,k), \tag{13.100}$$
$$\mathbf{d}_{if} = (d_{if}\sin\theta,\,0,\,d_{if}\cos\theta), \tag{13.101}$$
$$\boldsymbol{\epsilon} = (\cos\phi,\,\sin\phi,\,0), \tag{13.102}$$

which implies that

$$|\boldsymbol{\epsilon}\cdot\mathbf{d}_{if}|^2 = d_{if}^{\,2}\,\sin^2\theta\,\cos^2\phi. \tag{13.103}$$

We must now average the above quantity over all possible values of $\theta$ and $\phi$. Thus,

$$\bigl\langle |\boldsymbol{\epsilon}\cdot\mathbf{d}_{if}|^2 \bigr\rangle_{\rm av} = d_{if}^{\,2}\,\frac{\int\!\!\int \sin^2\theta\,\cos^2\phi\,d\Omega}{4\pi}, \tag{13.104}$$

where $d\Omega = \sin\theta\,d\theta\,d\phi$, and the integral is taken over all solid angle. It is easily demonstrated that

$$\bigl\langle |\boldsymbol{\epsilon}\cdot\mathbf{d}_{if}|^2 \bigr\rangle_{\rm av} = \frac{d_{if}^{\,2}}{3}. \tag{13.105}$$

Here, $d_{if}^{\,2}$ stands for

$$d_{if}^{\,2} = |\langle f|e\,x|i\rangle|^2 + |\langle f|e\,y|i\rangle|^2 + |\langle f|e\,z|i\rangle|^2. \tag{13.106}$$

Hence, the transition rates for absorption and stimulated emission induced by unpolarized isotropic radiation are

$$w^{\rm abs}_{i\to f} = \frac{\pi}{3\,\epsilon_0\,\hbar^2}\,d_{if}^{\,2}\,\rho(\omega_{fi}), \tag{13.107}$$
$$w^{\rm stm}_{i\to f} = \frac{\pi}{3\,\epsilon_0\,\hbar^2}\,d_{if}^{\,2}\,\rho(\omega_{if}), \tag{13.108}$$

respectively.

13.9 Spontaneous Emission

So far, we have calculated the rates of radiation-induced transitions between two atomic states. This process is known as absorption when the energy of the final state exceeds that of the initial state, and stimulated emission when the energy of the final state is less than that of the initial state. Now, in the absence of any external radiation, we would not expect an atom in a given state to spontaneously jump into a state with a higher energy. On the other hand, it should be possible for such an atom to spontaneously jump into a state with a lower energy via the emission of a photon whose energy is equal to the difference between the energies of the initial and final states. This process is known as spontaneous emission. It is possible to derive the rate of spontaneous emission between two atomic states from a knowledge of the corresponding absorption and stimulated emission rates using a famous thermodynamic argument due to Einstein.
Consider a very large ensemble of similar atoms placed inside a closed cavity whose walls (which are assumed to be perfect emitters and absorbers of radiation) are held at the constant temperature $T$. Let the system have attained thermal equilibrium. According to statistical thermodynamics, the cavity is filled with so-called "black-body" electromagnetic radiation whose energy spectrum is

$$\rho(\omega) = \frac{\hbar}{\pi^2\,c^3}\,\frac{\omega^3}{\exp(\hbar\,\omega/k_B\,T) - 1}, \tag{13.109}$$

where $k_B$ is the Boltzmann constant. This well-known result was first obtained by Max Planck in 1900.

Consider two atomic states, labeled $i$ and $f$, with $E_i > E_f$. One of the tenets of statistical thermodynamics is that in thermal equilibrium we have so-called detailed balance. This means that, irrespective of any other atomic states, the rate at which atoms in the ensemble leave state $i$ due to transitions to state $f$ is exactly balanced by the rate at which atoms enter state $i$ due to transitions from state $f$. The former rate (i.e., the number of transitions per unit time in the ensemble) is written

$$W_{i\to f} = N_i\,(w^{\rm spn}_{i\to f} + w^{\rm stm}_{i\to f}), \tag{13.110}$$

where $w^{\rm spn}_{i\to f}$ is the rate of spontaneous emission (for a single atom) between states $i$ and $f$, and $N_i$ is the number of atoms in the ensemble in state $i$. Likewise, the latter rate takes the form

$$W_{f\to i} = N_f\,w^{\rm abs}_{f\to i}, \tag{13.111}$$

where $N_f$ is the number of atoms in the ensemble in state $f$. The above expressions describe how atoms in the ensemble make transitions from state $i$ to state $f$ due to a combination of spontaneous and stimulated emission, and make the opposite transition as a consequence of absorption. In thermal equilibrium, we have $W_{i\to f} = W_{f\to i}$, which gives

$$w^{\rm spn}_{i\to f} = \frac{N_f}{N_i}\,w^{\rm abs}_{f\to i} - w^{\rm stm}_{i\to f}. \tag{13.112}$$

According to Eqs. (13.107) and (13.108), we can also write

$$w^{\rm spn}_{i\to f} = \left(\frac{N_f}{N_i} - 1\right)\frac{\pi}{3\,\epsilon_0\,\hbar^2}\,d_{if}^{\,2}\,\rho(\omega_{if}). \tag{13.113}$$

Now, another famous result in statistical thermodynamics is that in thermal equilibrium the number of atoms in an ensemble occupying a state of energy $E$ is proportional to $\exp(-E/k_B\,T)$. This implies that

$$\frac{N_f}{N_i} = \frac{\exp(-E_f/k_B\,T)}{\exp(-E_i/k_B\,T)} = \exp(\hbar\,\omega_{if}/k_B\,T). \tag{13.114}$$

Thus, it follows from Eqs. (13.109), (13.113), and (13.114) that the rate of spontaneous emission between states $i$ and $f$ takes the form

$$w^{\rm spn}_{i\to f} = \frac{\omega_{if}^{\,3}\,d_{if}^{\,2}}{3\pi\,\epsilon_0\,\hbar\,c^3}. \tag{13.115}$$

Note that, although the above result has been derived for an atom in a radiation-filled cavity, it remains correct even in the absence of radiation. Finally, the corresponding absorption and stimulated emission rates for an atom in a radiation-filled cavity are

$$w^{\rm abs}_{i\to f} = \frac{\omega_{fi}^{\,3}\,d_{if}^{\,2}}{3\pi\,\epsilon_0\,\hbar\,c^3}\,\frac{1}{\exp(\hbar\,\omega_{fi}/k_B\,T) - 1}, \tag{13.116}$$
$$w^{\rm stm}_{i\to f} = \frac{\omega_{if}^{\,3}\,d_{if}^{\,2}}{3\pi\,\epsilon_0\,\hbar\,c^3}\,\frac{1}{\exp(\hbar\,\omega_{if}/k_B\,T) - 1}, \tag{13.117}$$

respectively.

Let us estimate the typical value of the spontaneous emission rate for a hydrogen atom. We expect the dipole moment $d_{if}$ to be of order $e\,a_0$, where $a_0$ is the Bohr radius [see Eq. (9.58)]. We also expect $\omega_{if}$ to be of order $|E_0|/\hbar$, where $E_0$ is the energy of the ground state [see Eq. (9.57)]. It thus follows from Eq. (13.115) that

$$w^{\rm spn}_{i\to f} \sim \alpha^3\,\omega_{if}, \tag{13.118}$$

where $\alpha = e^2/(4\pi\,\epsilon_0\,\hbar\,c) \simeq 1/137$ is the fine-structure constant. This is an important result, since our perturbation expansion is based on the assumption that the transition rate between different energy eigenstates is much slower than the frequency of phase oscillation of these states: i.e., that $w^{\rm spn}_{i\to f} \ll \omega_{if}$ (see Sect. 13.2). This is indeed the case.

13.10 Radiation from a Harmonic Oscillator

Consider an electron in a one-dimensional harmonic oscillator potential aligned along the $x$-axis. According to Sect. 5.8, the unperturbed energy eigenvalues of the system are

$$E_n = (n + 1/2)\,\hbar\,\omega_0, \tag{13.119}$$

where $\omega_0$ is the frequency of the corresponding classical oscillator. Here, the quantum number $n$ takes the values $0, 1, 2, \cdots$.
Let the ψn(x) be the (real) properly normalized unperturbed eigenstates of the system. Suppose that the electron is initially in an excited state: i.e., n > 0. In principle, the electron can decay to a lower energy state via the spontaneous emission of a photon of the appropriate frequency. Let us investigate this effect. Now, according to Eq. (13.115), the system can only make a spontaneous transition from an energy state corresponding to the quantum number n to one corresponding to the quantum number n′ if the associated electric dipole moment (dx)n,n′ = ⟨n|e x|n′⟩= e Z ∞ −∞ ψn(x) x ψn′(x) dx (13.120) is non-zero [since dif ≡(dx) 2 n,n′ for the case in hand]. However, according to Eq. (5.117), Z ∞ −∞ ψn x ψn′ dx = s ¯ h 2 me ω0 √n δn,n′+1 + √ n′ δn,n′−1  . (13.121) Since we are dealing with emission, we must have n > n′. Hence, we obtain (dx)n,n′ = e s ¯ h n 2 me ω0 δn,n′+1. (13.122) Time-Dependent Perturbation Theory 191 It is clear that (in the electric dipole approximation) we can only have spontaneous emis-sion between states whose quantum numbers differ by unity. Thus, the frequency of the photon emitted when the nth excited state decays is ωn,n−1 = En −En−1 ¯ h = ω0. (13.123) Hence, we conclude that, no matter which state decays, the emitted photon always has the same frequency as the classical oscillator. According to Eq. (13.115), the decay rate of the nth excited state is given by wn = ω 3 n,n−1 (dx) 2 n,n−1 3π ǫ0 ¯ h c3 . (13.124) It follows that wn = n e2 ω 2 0 6π ǫ0 me c3. (13.125) The mean radiated power is simply Pn = ¯ h ω0 wn = e2 ω 2 0 6π ǫ0 me c3 [En −(1/2) ¯ hω0]. (13.126) Classically, an electron in a one-dimensional oscillator potential radiates at the oscillation frequency ω0 with the mean power P = e2 ω 2 0 6π ǫ0 me c3 E, (13.127) where E is the oscillator energy. It can be seen that a quantum oscillator radiates in an almost exactly analogous manner to the equivalent classical oscillator. 
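The decay rate (13.125) is straightforward to evaluate numerically. The sketch below (an addition, not from the text) assumes standard SI values for the constants and an arbitrary illustrative oscillator frequency.

```python
import math

# Standard SI constants (assumed CODATA values, not quoted from the text)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
e    = 1.602176634e-19    # elementary charge, C
me   = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s

def decay_rate(n, omega0):
    """Eq. (13.125): w_n = n e^2 omega0^2 / (6 pi eps0 me c^3)."""
    return n * e**2 * omega0**2 / (6.0 * math.pi * eps0 * me * c**3)

omega0 = 1.0e15            # illustrative oscillator frequency, rad/s
w1 = decay_rate(1, omega0)
print(w1, decay_rate(3, omega0) / w1)
```

Since the rate grows linearly with $n$, the mean radiated power $\hbar\,\omega_0\,w_n$ tracks the oscillator energy, in accordance with the classical formula (13.127).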
The only difference is the factor (1/2) ¯ h ω0 in Eq. (13.126)—this is needed to ensure that the ground-state of the quantum oscillator does not radiate. 13.11 Selection Rules Let us now consider spontaneous transitions between the different energy levels of a hydro-gen atom. Since the perturbing Hamiltonian (13.77) does not contain any spin operators, we can neglect electron spin in our analysis. Thus, according to Sect. 9.4, the various energy eigenstates of the hydrogen atom are labeled by the familiar quantum numbers n, l, and m. According to Eqs. (13.106) and (13.115), a hydrogen atom can only make a spon-taneous transition from an energy state corresponding to the quantum numbers n, l, m to one corresponding to the quantum numbers n′, l′, m′ if the modulus squared of the associated electric dipole moment d2 = |⟨n, l, m|e x|n′, l′, m′⟩|2 + |⟨n, l, m|e y|n′, l′, m′⟩|2 + |⟨n, l, m|e z|n′, l′, m′⟩|2 (13.128) 192 QUANTUM MECHANICS is non-zero. Now, we have already seen, in Sect. 12.5, that the matrix element ⟨n, l, m|z|n′, l′, m′⟩ is only non-zero provided that m′ = m and l′ = l ± 1. It turns out that the proof that this matrix element is zero unless l′ = l ± 1 can, via a trivial modification, also be used to demonstrate that ⟨n, l, m|x|n′, l′, m′⟩and ⟨n, l, m|y|n′, l′, m′⟩are also zero unless l′ = l±1. Consider x± = x + i y. (13.129) It is easily demonstrated that [Lz, x±] = ± ¯ h x±. (13.130) Hence, ⟨n, l, m|[Lz, x+] −¯ h x+|n′, l′, m′⟩= ¯ h (m −m′ −1) ⟨n, l, m|x+|n′, l′, m′⟩= 0, (13.131) and ⟨n, l, m|[Lz, x−] + ¯ h x−|n′, l′, m′⟩= ¯ h (m −m′ + 1) ⟨n, l, m|x−|n′, l′, m′⟩= 0. (13.132) Clearly, ⟨n, l, m|x+|n′, l′, m′⟩is zero unless m′ = m −1, and ⟨n, l, m|x−|n′, l′, m′⟩is zero unless m′ = m + 1. Now, ⟨n, l, m|x|n′, l′, m′⟩and ⟨n, l, m|y|n′, l′, m′⟩are obviously both zero if ⟨n, l, m|x+|n′, l′, m′⟩and ⟨n, l, m|x−|n′, l′, m′⟩are both zero. Hence, we conclude that ⟨n, l, m|x|n′, l′, m′⟩and ⟨n, l, m|y|n′, l′, m′⟩are only non-zero if m′ = m ± 1. 
The above arguments demonstrate that spontaneous transitions between different energy levels of a hydrogen atom are only possible provided

$$l' = l \pm 1, \tag{13.133}$$
$$m' = m,\ m \pm 1. \tag{13.134}$$

These are termed the selection rules for electric dipole transitions (i.e., transitions calculated using the electric dipole approximation). Note, finally, that since the perturbing Hamiltonian does not contain any spin operators, the spin quantum number $m_s$ cannot change during a transition. Hence, we have the additional selection rule that $m_s' = m_s$.

13.12 2P → 1S Transitions in Hydrogen

Let us calculate the rate of spontaneous emission between the first excited state (i.e., $n=2$) and the ground state (i.e., $n'=1$) of a hydrogen atom. Now, the ground state is characterized by $l' = m' = 0$. Hence, in order to satisfy the selection rules (13.133) and (13.134), the excited state must have the quantum numbers $l=1$ and $m = 0, \pm 1$. Thus, we are dealing with a spontaneous transition from a 2P to a 1S state. Note, incidentally, that a spontaneous transition from a 2S to a 1S state is forbidden by our selection rules. According to Sect. 9.4, the wavefunction of a hydrogen atom takes the form

$$\psi_{n,l,m}(r,\theta,\phi) = R_{n,l}(r)\,Y_{l,m}(\theta,\phi), \tag{13.135}$$

where the radial functions $R_{n,l}$ are given in Sect. 9.4, and the spherical harmonics $Y_{l,m}$ are given in Sect. 8.7. Some straightforward, but tedious, integration reveals that

$$\langle 1,0,0|x|2,1,\pm 1\rangle = \pm\frac{2^7}{3^5}\,a_0, \tag{13.136}$$
$$\langle 1,0,0|y|2,1,\pm 1\rangle = {\rm i}\,\frac{2^7}{3^5}\,a_0, \tag{13.137}$$
$$\langle 1,0,0|z|2,1,0\rangle = \sqrt{2}\,\frac{2^7}{3^5}\,a_0, \tag{13.138}$$

where $a_0$ is the Bohr radius specified in Eq. (9.58). All of the other possible 2P → 1S matrix elements are zero because of the selection rules. It follows from Eq. (13.128) that the modulus squared of the dipole moment for the 2P → 1S transition takes the same value

$$d^2 = \frac{2^{15}}{3^{10}}\,(e\,a_0)^2 \tag{13.139}$$

for $m = 0$, $1$, or $-1$. Clearly, the transition rate is independent of the quantum number $m$.
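As a numerical cross-check (an added sketch, not in the original), the spontaneous emission formula (13.115) can be evaluated directly with this dipole moment, taking the 2P → 1S photon energy to be $(3/4)|E_0| \simeq 10.2$ eV. The SI constants below are assumed standard values.

```python
import math

# Standard SI constants (assumed CODATA values, not quoted from the text)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J s
e    = 1.602176634e-19    # elementary charge, C
c    = 2.99792458e8       # speed of light, m/s
a0   = 5.29177210903e-11  # Bohr radius, m
eV   = 1.602176634e-19    # 1 eV in joules

d2    = (2**15 / 3**10) * (e * a0)**2   # dipole moment squared, Eq. (13.139)
omega = 0.75 * 13.6 * eV / hbar         # photon angular frequency (hbar*omega = 10.2 eV)
w     = omega**3 * d2 / (3.0 * math.pi * eps0 * hbar * c**3)  # Eq. (13.115)
tau   = 1.0 / w                         # mean lifetime of the 2P state
print(f"w = {w:.3e} 1/s, tau = {tau*1e9:.2f} ns")
```

The result reproduces the rate of roughly $6.3\times 10^{8}\ {\rm s^{-1}}$ and the 1.6 ns lifetime quoted below.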
It turns out that this is a general result. Now, the energy of the eigenstate of the hydrogen atom characterized by the quantum numbers $n$, $l$, $m$ is $E = E_0/n^2$, where the ground-state energy $E_0$ is specified in Eq. (9.57). Hence, the energy of the photon emitted during a 2P → 1S transition is

$$\hbar\,\omega = E_0/4 - E_0 = -\frac{3}{4}\,E_0 = 10.2\ {\rm eV}. \tag{13.140}$$

This corresponds to a wavelength of $1.215\times 10^{-7}$ m. Finally, according to Eq. (13.115), the 2P → 1S transition rate is written

$$w_{2P\to 1S} = \frac{\omega^3\,d^2}{3\pi\,\epsilon_0\,\hbar\,c^3}, \tag{13.141}$$

which reduces to

$$w_{2P\to 1S} = \left(\frac{2}{3}\right)^{\!8}\alpha^5\,\frac{m_e\,c^2}{\hbar} = 6.27\times 10^{8}\ {\rm s}^{-1} \tag{13.142}$$

with the aid of Eqs. (13.139) and (13.140). Here, $\alpha = 1/137$ is the fine-structure constant. Hence, the mean lifetime of a hydrogen 2P state is

$$\tau_{2P} = (w_{2P\to 1S})^{-1} = 1.6\ {\rm ns}. \tag{13.143}$$

Incidentally, since the 2P state only has a finite lifetime, it follows from the energy-time uncertainty relation that the energy of this state is uncertain by an amount

$$\Delta E_{2P} \sim \frac{\hbar}{\tau_{2P}} \sim 4\times 10^{-7}\ {\rm eV}. \tag{13.144}$$

This uncertainty gives rise to a finite width of the spectral line associated with the 2P → 1S transition. This natural line-width is of order

$$\frac{\Delta\lambda}{\lambda} \sim \frac{\Delta E_{2P}}{\hbar\,\omega} \sim 4\times 10^{-8}. \tag{13.145}$$

13.13 Intensity Rules

Now, we know, from Sect. 12.8, that when we take electron spin and spin-orbit coupling into account the degeneracy of the six 2P states of the hydrogen atom is broken. In fact, these states are divided into two groups with slightly different energies. There are four states characterized by the overall angular momentum quantum number $j = 3/2$; these are called the 2P$_{3/2}$ states. The remaining two states are characterized by $j = 1/2$, and are thus called the 2P$_{1/2}$ states. The energy of the 2P$_{3/2}$ states is slightly higher than that of the 2P$_{1/2}$ states. In fact, the energy difference is

$$\Delta E = -\frac{\alpha^2}{16}\,E_0 = 4.53\times 10^{-5}\ {\rm eV}. \tag{13.146}$$

Thus, the wavelength of the spectral line associated with the 2P → 1S transition in hydrogen is split by a relative amount

$$\frac{\Delta\lambda}{\lambda} = \frac{\Delta E}{\hbar\,\omega} = 4.4\times 10^{-6}. \tag{13.147}$$

Note that this splitting is much greater than the natural line-width estimated in Eq. (13.145), so there really are two spectral lines. How does all of this affect the rate of the 2P → 1S transition? Well, we have seen that the transition rate is independent of spin, and hence of the spin quantum number $m_s$, and is also independent of the quantum number $m$. It follows that the transition rate is independent of the $z$-component of the total angular momentum quantum number $m_j = m + m_s$. However, if this is the case, then the transition rate is plainly also independent of the total angular momentum quantum number $j$. Hence, we expect the 2P$_{3/2}$ → 1S and 2P$_{1/2}$ → 1S transition rates to be the same. However, there are four 2P$_{3/2}$ states and only two 2P$_{1/2}$ states. If these states are equally populated, which we would certainly expect to be the case in thermal equilibrium, since they have almost the same energies, and since they decay to the 1S state at the same rate, it stands to reason that the spectral line associated with the 2P$_{3/2}$ → 1S transition is twice as bright as that associated with the 2P$_{1/2}$ → 1S transition.

13.14 Forbidden Transitions

Atomic transitions which are forbidden by the electric dipole selection rules (13.133) and (13.134) are unsurprisingly known as forbidden transitions. It is clear from the analysis in Sect. 13.8 that a forbidden transition is one for which the matrix element $\langle f|\boldsymbol{\epsilon}\cdot{\bf p}|i\rangle$ is zero. However, this matrix element is only an approximation to the true matrix element for radiative transitions, which takes the form $\langle f|\boldsymbol{\epsilon}\cdot{\bf p}\,\exp(\,{\rm i}\,{\bf k}\cdot{\bf r})|i\rangle$. Expanding $\exp(\,{\rm i}\,{\bf k}\cdot{\bf r})$, and keeping the first two terms, the matrix element for a forbidden transition becomes

$$\langle f|\boldsymbol{\epsilon}\cdot{\bf p}\,\exp(\,{\rm i}\,{\bf k}\cdot{\bf r})|i\rangle \simeq {\rm i}\,\langle f|(\boldsymbol{\epsilon}\cdot{\bf p})\,({\bf k}\cdot{\bf r})|i\rangle. \tag{13.148}$$

Hence, if the residual matrix element on the right-hand side of the above expression is non-zero then a "forbidden" transition can take place, albeit at a much reduced rate. In fact, in Sect.
13.9, we calculated that the typical rate of an electric dipole transition is wi→f ∼α3 ωif. (13.149) Since the transition rate is proportional to the square of the radiative matrix element, it is clear that the transition rate for a forbidden transition enabled by the residual matrix element (13.148) is smaller than that of an electric dipole transition by a factor (k r)2. Estimating r as the Bohr radius, and k as the wavenumber of a typical spectral line of hydrogen, it is easily demonstrated that wi→f ∼α5 ωif (13.150) for such a transition. Of course, there are some transitions (in particular, the 2S →1S transition) for which the true radiative matrix element ⟨f|ǫ·p exp( i k·r)|i⟩is zero. Such transitions are absolutely forbidden. Finally, it is fairly obvious that excited states which decay via forbidden transitions have much longer life-times than those which decay via electric dipole transitions. Since the natural width of a spectral line is inversely proportional to the life-time of the associ-ated decaying state, it follows that spectral lines associated with forbidden transitions are generally much sharper than those associated with electric dipole transitions. 196 QUANTUM MECHANICS Variational Methods 197 14 Variational Methods 14.1 Introduction We have seen, in Sect. 9.4, that we can solve Schr¨ odinger’s equation exactly to find the stationary eigenstates of a hydrogen atom. Unfortunately, it is not possible to find exact solutions of Schr¨ odinger’s equation for atoms more complicated than hydrogen, or for molecules. In such systems, the best that we can do is to find approximate solutions. Most of the methods which have been developed for finding such solutions employ the so-called variational principle discussed below. 14.2 Variational Principle Suppose that we wish to solve the time-independent Schr¨ odinger equation H ψ = E ψ, (14.1) where H is a known (presumably complicated) time-independent Hamiltonian. 
Let $\psi$ be a normalized trial solution to the above equation. The variational principle states, quite simply, that the ground-state energy, $E_0$, is always less than or equal to the expectation value of $H$ calculated with the trial wavefunction: i.e.,

$$E_0 \le \langle\psi|H|\psi\rangle. \tag{14.2}$$

Thus, by varying $\psi$ until the expectation value of $H$ is minimized, we can obtain an approximation to the wavefunction and energy of the ground state.

Let us prove the variational principle. Suppose that the $\psi_n$ and the $E_n$ are the true eigenstates and eigenvalues of $H$: i.e.,

$$H\,\psi_n = E_n\,\psi_n. \tag{14.3}$$

Furthermore, let

$$E_0 < E_1 < E_2 < \cdots, \tag{14.4}$$

so that $\psi_0$ is the ground state, $\psi_1$ the first excited state, etc. The $\psi_n$ are assumed to be orthonormal: i.e.,

$$\langle\psi_n|\psi_m\rangle = \delta_{nm}. \tag{14.5}$$

If our trial wavefunction $\psi$ is properly normalized then we can write

$$\psi = \sum_n c_n\,\psi_n, \tag{14.6}$$

where

$$\sum_n |c_n|^2 = 1. \tag{14.7}$$

Now, the expectation value of $H$, calculated with $\psi$, takes the form

$$\langle\psi|H|\psi\rangle = \Bigl\langle \sum_n c_n\,\psi_n \Bigr|\, H\, \Bigl| \sum_m c_m\,\psi_m \Bigr\rangle = \sum_{n,m} c_n^{\,*}\,c_m\,\langle\psi_n|H|\psi_m\rangle = \sum_{n,m} c_n^{\,*}\,c_m\,E_m\,\langle\psi_n|\psi_m\rangle = \sum_n E_n\,|c_n|^2, \tag{14.8}$$

where use has been made of Eqs. (14.3) and (14.5). So, we can write

$$\langle\psi|H|\psi\rangle = |c_0|^2\,E_0 + \sum_{n>0} |c_n|^2\,E_n. \tag{14.9}$$

However, Eq. (14.7) can be rearranged to give

$$|c_0|^2 = 1 - \sum_{n>0} |c_n|^2. \tag{14.10}$$

Combining the previous two equations, we obtain

$$\langle\psi|H|\psi\rangle = E_0 + \sum_{n>0} |c_n|^2\,(E_n - E_0). \tag{14.11}$$

Now, the second term on the right-hand side of the above expression is non-negative, since $E_n - E_0 > 0$ for all $n > 0$ [see (14.4)]. Hence, we obtain the desired result

$$\langle\psi|H|\psi\rangle \ge E_0. \tag{14.12}$$

Suppose that we have found a good approximation, $\tilde\psi_0$, to the ground-state wavefunction. If $\psi$ is a normalized trial wavefunction which is orthogonal to $\tilde\psi_0$ (i.e., $\langle\psi|\tilde\psi_0\rangle = 0$) then, by repeating the above analysis, we can easily demonstrate that

$$\langle\psi|H|\psi\rangle \ge E_1. \tag{14.13}$$

Thus, by varying $\psi$ until the expectation value of $H$ is minimized, we can obtain an approximation to the wavefunction and energy of the first excited state.
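A minimal numerical illustration of the principle (an added sketch, not from the text): for the dimensionless harmonic oscillator $H = p^2/2 + x^2/2$ (with $\hbar = m = \omega = 1$, exact ground-state energy $1/2$), a Gaussian trial function $\psi_b(x) \propto \exp(-b\,x^2)$ gives $\langle T\rangle = b/2$ and $\langle V\rangle = 1/(8b)$, so the variational energy is $E(b) = b/2 + 1/(8b)$. Scanning the parameter $b$ confirms that $E(b)$ never drops below the true ground-state energy, and touches it at $b = 1/2$.

```python
def variational_energy(b):
    """<H> for the Gaussian trial function: <T> = b/2, <V> = 1/(8 b)."""
    return 0.5 * b + 1.0 / (8.0 * b)

bs = [0.05 * k for k in range(1, 101)]     # scan the variational parameter b
energies = [variational_energy(b) for b in bs]
E_min = min(energies)
b_min = bs[energies.index(E_min)]
print(b_min, E_min)
```

Here the trial family happens to contain the exact ground state, so the bound is attained; for a less fortunate family (e.g. the helium wavefunction below), the minimum merely gives the best upper bound within the family.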
Obviously, we can con-tinue this process until we have approximations to all of the stationary eigenstates. Note, however, that the errors are clearly cumulative in this method, so that any approximations to highly excited states are unlikely to be very accurate. For this reason, the variational method is generally only used to calculate the ground-state and first few excited states of complicated quantum systems. Variational Methods 199 14.3 Helium Atom A helium atom consists of a nucleus of charge +2 e surrounded by two electrons. Let us attempt to calculate its ground-state energy. Let the nucleus lie at the origin of our coordinate system, and let the position vectors of the two electrons be r1 and r2, respectively. The Hamiltonian of the system thus takes the form H = −¯ h2 2 me  ∇2 1 + ∇2 2  − e2 4π ǫ0 2 r1 + 2 r2 − 1 |r2 −r1| ! , (14.14) where we have neglected any reduced mass effects. The terms in the above expression represent the kinetic energy of the first electron, the kinetic energy of the second electron, the electrostatic attraction between the nucleus and the first electron, the electrostatic attraction between the nucleus and the second electron, and the electrostatic repulsion be-tween the two electrons, respectively. It is the final term which causes all of the difficulties. Indeed, if this term is neglected then we can write H = H1 + H2, (14.15) where H1,2 = −¯ h2 2 me ∇2 1,2 − 2 e2 4π ǫ0 r1,2 . (14.16) In other words, the Hamiltonian just becomes the sum of separate Hamiltonians for each electron. In this case, we would expect the wavefunction to be separable: i.e., ψ(r1, r2) = ψ1(r1) ψ2(r2). (14.17) Hence, Schr¨ odinger’s equation H ψ = E ψ (14.18) reduces to H1,2 ψ1,2 = E1,2 ψ1,2, (14.19) where E = E1 + E2. (14.20) Of course, Eq. (14.19) is the Schr¨ odinger equation of a hydrogen atom whose nuclear charge is +2 e, instead of +e. It follows, from Sect. 
9.4 (making the substitution e2 →2 e2), that if both electrons are in their lowest energy states then ψ1(r1) = ψ0(r1), (14.21) ψ2(r2) = ψ0(r2), (14.22) where ψ0(r) = 4 √ 2 π a 3/2 0 exp −2 r a0 ! . (14.23) 200 QUANTUM MECHANICS Here, a0 is the Bohr radius [see Eq. (9.58)]. Note that ψ0 is properly normalized. Further-more, E1 = E2 = 4 E0, (14.24) where E0 = −13.6 eV is the hydrogen ground-state energy [see Eq. (9.57)]. Thus, our crude estimate for the ground-state energy of helium becomes E = 4 E0 + 4 E0 = 8 E0 = −108.8 eV. (14.25) Unfortunately, this estimate is significantly different from the experimentally determined value, which is −78.98 eV. This fact demonstrates that the neglected electron-electron repulsion term makes a large contribution to the helium ground-state energy. Fortunately, however, we can use the variational principle to estimate this contribution. Let us employ the separable wavefunction discussed above as our trial solution. Thus, ψ(r1, r2) = ψ0(r1) ψ0(r2) = 8 π a 3 0 exp −2 [r1 + r2] a0 ! . (14.26) The expectation value of the Hamiltonian (14.14) thus becomes ⟨H⟩= 8 E0 + ⟨Vee⟩, (14.27) where ⟨Vee⟩= ψ e2 4π ǫ0 |r2 −r1| ψ + = e2 4π ǫ0 Z |ψ(r1, r2)| 2 |r2 −r1| d3r1 d3r2. (14.28) The variation principle only guarantees that (14.27) yields an upper bound on the ground-state energy. In reality, we hope that it will give a reasonably accurate estimate of this energy. It follows from Eqs. (9.57), (14.26) and (14.28) that ⟨Vee⟩= −4 E0 π2 Z e−2 (^ r1+^ r2) |^ r1 −^ r2| d3^ r1 d3^ r2, (14.29) where ^ r1,2 = 2 r1,2/a0. Neglecting the hats, for the sake of clarity, the above expression can also be written ⟨Vee⟩= −4 E0 π2 Z e−2 (r1+r2) q r 2 1 + r 2 2 −2 r1 r2 cos θ d3r1 d3r2, (14.30) where θ is the angle subtended between vectors r1 and r2. If we perform the integral in r1 space before that in r2 space then ⟨Vee⟩= −4 E0 π2 Z e−2 r2 I(r2) d3r2, (14.31) Variational Methods 201 where I(r2) = Z e−2 r1 q r 2 1 + r 2 2 −2 r1 r2 cos θ d3r1. 
(14.32) Our first task is to evaluate the function I(r2). Let (r1, θ1, φ1) be a set of spherical polar coordinates in r1 space whose axis of symmetry runs in the direction of r2. It follows that θ = θ1. Hence, I(r2) = Z ∞ 0 Z π 0 Z 2π 0 e−2 r1 q r 2 1 + r 2 2 −2 r1 r2 cos θ1 r 2 1 dr1 sin θ1 dθ1 dφ1, (14.33) which trivially reduces to I(r2) = 2π Z ∞ 0 Z π 0 e−2 r1 q r 2 1 + r 2 2 −2 r1 r2 cos θ1 r 2 1 dr1 sin θ1 dθ1. (14.34) Making the substitution µ = cos θ1, we can see that Z π 0 1 q r 2 1 + r 2 2 −2 r1 r2 cos θ1 sin θ1 dθ1 = Z 1 −1 dµ q r 2 1 + r 2 2 −2 r1 r2 µ . (14.35) Now, Z 1 −1 dµ q r 2 1 + r 2 2 −2 r1 r2 µ =   q r 2 1 + r 2 2 −2 r1 r2 µ r1 r2   −1 +1 = (r1 + r2) −|r1 −r2| r1 r2 = 2/r1 for r1 > r2 2/r2 for r1 < r2 , (14.36) giving I(r2) = 4π 1 r2 Z r2 0 e−2 r1 r 2 1 dr1 + Z ∞ r2 e−2 r1 r1 dr1 ! . (14.37) But, Z e−β x x dx = −e−β x β2 (1 + β x), (14.38) Z e−β x x2 dx = −e−β x β3 (2 + 2 β x + β2 x2), (14.39) yielding I(r2) = π r2 h 1 −e−2 r2 (1 + r2) i . (14.40) 202 QUANTUM MECHANICS Since the function I(r2) only depends on the magnitude of r2, the integral (14.31) reduces to ⟨Vee⟩= −16 E0 π Z ∞ 0 e−2 r2 I(r2) r 2 2 dr2, (14.41) which yields ⟨Vee⟩= −16 E0 Z ∞ 0 e−2 r2 h 1 −e−2 r2 (1 + r2) i r2 dr2 = −5 2 E0. (14.42) Hence, from (14.27), our estimate for the ground-state energy of helium is ⟨H⟩= 8 E0 −5 2 E0 = 11 2 E0 = −74.8 eV. (14.43) This is remarkably close to the correct result. We can actually refine our estimate further. The trial wavefunction (14.26) essentially treats the two electrons as non-interacting particles. In reality, we would expect one elec-tron to partially shield the nuclear charge from the other, and vice versa. Hence, a better trial wavefunction might be ψ(r1, r2) = Z3 π a 3 0 exp −Z [r1 + r2] a0 ! , (14.44) where Z < 2 is effective nuclear charge number seen by each electron. Let us recalculate the ground-state energy of helium as a function of Z, using the above trial wavefunction, and then minimize the result with respect to Z. 
According to the variational principle, this should give us an even better estimate for the ground-state energy. We can rewrite the expression (14.14) for the Hamiltonian of the helium atom in the form H = H1(Z) + H2(Z) + Vee + U(Z), (14.45) where H1,2(Z) = −¯ h2 2 me ∇2 1,2 − Z e2 4π ǫ0 r1,2 (14.46) is the Hamiltonian of a hydrogen atom with nuclear charge +Z e, Vee = e2 4π ǫ0 1 |r2 −r1| (14.47) is the electron-electron repulsion term, and U(Z) = e2 4π ǫ0 [Z −2] r1 + [Z −2] r2 ! . (14.48) It follows that ⟨H⟩(Z) = 2 E0(Z) + ⟨Vee⟩(Z) + ⟨U⟩(Z), (14.49) Variational Methods 203 where E0(Z) = Z2 E0 is the ground-state energy of a hydrogen atom with nuclear charge +Z e, ⟨Vee⟩(Z) = −(5 Z/4) E0 is the value of the electron-electron repulsion term when recalculated with the wavefunction (14.44) [actually, all we need to do is to make the substitution a0 →(2/Z) a0], and ⟨U⟩(Z) = 2 (Z −2) e2 4π ǫ0 ! 1 r + . (14.50) Here, ⟨1/r⟩is the expectation value of 1/r calculated for a hydrogen atom with nuclear charge +Z e. It follows from Eq. (9.74) [with n = 1, and making the substitution a0 → a0/Z] that 1 r + = Z a0 . (14.51) Hence, ⟨U⟩(Z) = −4 Z (Z −2) E0, (14.52) since E0 = −e2/(8π ǫ0 a0). Collecting the various terms, our new expression for the expec-tation value of the Hamiltonian becomes ⟨H⟩(Z) = " 2 Z2 −5 4 Z −4 Z (Z −2) # E0 = " −2 Z2 + 27 4 Z # E0. (14.53) The value of Z which minimizes this expression is the root of d⟨H⟩ dZ = " −4 Z + 27 4 # E0 = 0. (14.54) It follows that Z = 27 16 = 1.69. (14.55) The fact that Z < 2 confirms our earlier conjecture that the electrons partially shield the nuclear charge from one another. Our new estimate for the ground-state energy of helium is ⟨H⟩(1.69) = 1 2 3 2 !6 E0 = −77.5 eV. (14.56) This is clearly an improvement on our previous estimate (14.43) [recall that the correct result is −78.98 eV]. 
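The minimization in Eq. (14.54) can be verified by a direct numerical scan of Eq. (14.53) (an added sketch, not from the text):

```python
E0 = -13.6  # hydrogen ground-state energy in eV [Eq. (9.57)]

def H_expect(Z):
    """Eq. (14.53): <H>(Z) = (-2 Z^2 + 27 Z / 4) E0."""
    return (-2.0 * Z**2 + 27.0 * Z / 4.0) * E0

Zs = [1.0 + 0.0001 * k for k in range(10001)]   # scan Z in [1, 2]
Es = [H_expect(Z) for Z in Zs]
E_min = min(Es)
Z_min = Zs[Es.index(E_min)]
print(Z_min, E_min)
```

The scan recovers $Z = 27/16 = 1.6875$ and $\langle H\rangle \simeq -77.5$ eV, matching Eqs. (14.55) and (14.56).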
Obviously, we could get even closer to the correct value of the helium ground-state energy by using a more complicated trial wavefunction with more adjustable parameters. Note, finally, that since the two electrons in a helium atom are indistinguishable fermions, the overall wavefunction must be anti-symmetric with respect to exchange of particles (see Sect. 6). Now, the overall wavefunction is the product of the spatial wavefunction and the spinor representing the spin-state. Our spatial wavefunction (14.44) is obviously symmetric with respect to exchange of particles. This means that the spinor must be anti-symmetric. 204 QUANTUM MECHANICS r2 proton electron proton z-axis z = 0 z = R r1 Figure 14.1: The hydrogen molecule ion. It is clear, from Sect. 11.4, that if the spin-state of an l = 0 system consisting of two spin one-half particles (i.e., two electrons) is anti-symmetric with respect to interchange of par-ticles then the system is in the so-called singlet state with overall spin zero. Hence, the ground-state of helium has overall electron spin zero. 14.4 Hydrogen Molecule Ion The hydrogen molecule ion consists of an electron orbiting about two protons, and is the simplest imaginable molecule. Let us investigate whether or not this molecule possesses a bound state: i.e., whether or not it possesses a ground-state whose energy is less than that of a hydrogen atom and a free proton. According to the variation principle, we can deduce that the H+ 2 ion has a bound state if we can find any trial wavefunction for which the total Hamiltonian of the system has an expectation value less than that of a hydrogen atom and a free proton. Suppose that the two protons are separated by a distance R. In fact, let them lie on the z-axis, with the first at the origin, and the second at z = R (see Fig. 14.1). In the following, we shall treat the protons as essentially stationary. This is reasonable, since the electron moves far more rapidly than the protons. 
Let us try ψ(r)± = A [ψ0(r1) ± ψ0(r2)] (14.57) Variational Methods 205 as our trial wavefunction, where ψ0(r) = 1 √π a 3/2 0 e−r/a0 (14.58) is a normalized hydrogen ground-state wavefunction centered on the origin, and r1,2 are the position vectors of the electron with respect to each of the protons (see Fig. 14.1). Obviously, this is a very simplistic wavefunction, since it is just a linear combination of hydrogen ground-state wavefunctions centered on each proton. Note, however, that the wavefunction respects the obvious symmetries in the problem. Our first task is to normalize our trial wavefunction. We require that Z |ψ±|2 d3r = 1. (14.59) Hence, from (14.57), A = I−1/2, where I = Z h |ψ0(r1)|2 + |ψ0(r2)|2 ± 2 ψ0(r1) ψ(r2) i d3r. (14.60) It follows that I = 2 (1 ± J), (14.61) with J = Z ψ0(r1) ψ0(r2) d3r. (14.62) Let us employ the standard spherical polar coordinates (r, θ, φ). Now, it is easily seen that r1 = r and r2 = (r2 + R2 −2 r R cos θ)1/2. Hence, J = 2 Z ∞ 0 Z π 0 exp h −x −(x2 + X2 −2 x X cos θ)1/2i x2 dx sin θ dθ, (14.63) where X = R/a0. Here, we have already performed the trivial φ integral. Let y = (x2 + X2 −2 x X cos θ)1/2. It follows that d(y2) = 2 y dy = 2 x X sin θ dθ, giving Z π 0 e (x2+X2−2 x X cos θ)1/2 sin θ dθ = 1 x X Z x+X |x−X| e−y y dy (14.64) = −1 x X h e−(x+X) (1 + x + X) −e−|x−X| (1 + |x −X|) i . Thus, J = −2 X e−X Z X 0 h e−2 x (1 + X + x) −(1 + X −x) i x dx −2 X Z ∞ X e−2 x h e−X (1 + X + x) −eX (1 −X + x) i x dx, (14.65) 206 QUANTUM MECHANICS which evaluates to J = e−X 1 + X + X3 3 ! . (14.66) Now, the Hamiltonian of the electron is written H = −¯ h2 2 me ∇2 − e2 4π ǫ0 1 r1 + 1 r2 ! . (14.67) Note, however, that −¯ h2 2 me ∇2 − e2 4π ǫ0 r1,2 ! ψ0(r1,2) = E0 ψ0(r1,2), (14.68) since ψ0(r1,2) are hydrogen ground-state wavefunctions. It follows that H ψ± = A " −¯ h2 2 me ∇2 − e2 4π ǫ0 1 r1 + 1 r2 !# [ψ0(r1) ± ψ0(r2)] = E0 ψ −A e2 4π ǫ0 ! "ψ0(r1) r2 ± ψ0(r2) r1 # . 
(14.69) Hence, ⟨H⟩= E0 + 4 A2 (D ± E) E0, (14.70) where D = ψ0(r1) a0 r2 ψ0(r1) , (14.71) E = ψ0(r1) a0 r1 ψ0(r2) . (14.72) Now, D = 2 Z ∞ 0 Z π 0 e−2 x (x2 + X2 −2 x X cos θ)1/2 x2 dx sin θ dθ, (14.73) which reduces to D = 4 X Z X 0 e−2 x x2 dx + 4 Z ∞ X e−2 x x dx, (14.74) giving D = 1 X  1 −[1 + X] e−2 X . (14.75) Furthermore, E = 2 Z ∞ 0 Z π 0 exp h −x −(x2 + X2 −2 x X cos θ)1/2i x dx sin θ dθ, (14.76) Variational Methods 207 which reduces to E = −2 X e−X Z X 0 h e−2 x (1 + X + x) −(1 + X −x) i dx −2 X Z ∞ X e−2 x h e−X (1 + X + x) −eX (1 −X + x) i dx, (14.77) yielding E = (1 + X) e−X. (14.78) Our expression for the expectation value of the electron Hamiltonian is ⟨H⟩= " 1 + 2 (D ± E) (1 ± J) # E0, (14.79) where J, D, and E are specified as functions of X = R/a0 in Eqs. (14.66), (14.75), and (14.78), respectively. In order to obtain the total energy of the molecule, we must add to this the potential energy of the two protons. Thus, Etotal = ⟨H⟩+ e2 4π ǫ0 R = ⟨H⟩−2 X E0, (14.80) since E0 = −e2/(8π ǫ0 a0). Hence, we can write Etotal = −F±(R/a0) E0, (14.81) where E0 is the hydrogen ground-state energy, and F±(X) = −1 + 2 X "(1 + X) e−2 X ± (1 −2 X2/3) e−X 1 ± (1 + X + X2/3) e−X # . (14.82) The functions F+(X) and F−(X) are both plotted in Fig. 14.2. Recall that in order for the H+ 2 ion to be in a bound state it must have a lower energy than a hydrogen atom and a free proton: i.e., Etotal < E0. It follows from Eq. (14.81) that a bound state corresponds to F± < −1. Clearly, the even trial wavefunction ψ+ possesses a bound state, whereas the odd trial wavefunction ψ−does not [see Eq. (14.57)]. This is hardly surprising, since the even wavefunction maximizes the electron probability density between the two protons, thereby reducing their mutual electrostatic repulsion. On the other hand, the odd wavefunction does exactly the opposite. 
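Since $F_\pm(X)$ in Eq. (14.82) is given in closed form, the minimum of $F_+$ can be located by a brute-force numerical scan (an added sketch, not from the text):

```python
import math

def F_plus(X):
    """Eq. (14.82), even trial wavefunction: E_total = -F+(R/a0) E0."""
    num = (1.0 + X) * math.exp(-2.0 * X) + (1.0 - 2.0 * X**2 / 3.0) * math.exp(-X)
    den = 1.0 + (1.0 + X + X**2 / 3.0) * math.exp(-X)
    return -1.0 + (2.0 / X) * num / den

Xs = [0.5 + 0.001 * k for k in range(5501)]     # X = R/a0 in [0.5, 6.0]
Fs = [F_plus(X) for X in Xs]
F_min = min(Fs)
X_min = Xs[Fs.index(F_min)]
print(X_min, F_min)
```

The minimum satisfies $F_+ < -1$, confirming the existence of a bound state of the even trial wavefunction.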
The binding energy of the H₂⁺ ion is defined as the difference between its energy and that of a hydrogen atom and a free proton: i.e.,
Ebind = Etotal − E0 = −(F+ + 1) E0.   (14.83)
According to the variational principle, the binding energy is less than or equal to the minimum binding energy which can be inferred from Fig. 14.2. This minimum occurs when X ≃ 2.5 and F+ ≃ −1.13. Thus, our estimates for the separation between the two protons, and the binding energy, for the H₂⁺ ion are R = 2.5 a0 = 1.33 × 10⁻¹⁰ m and Ebind = 0.13 E0 = −1.77 eV, respectively. The experimentally determined values are R = 1.06 × 10⁻¹⁰ m and Ebind = −2.8 eV, respectively. Clearly, our estimates are not particularly accurate. However, our calculation does establish, beyond any doubt, the existence of a bound state of the H₂⁺ ion, which is all that we set out to achieve.

Figure 14.2: The functions F+(X) (solid curve) and F−(X) (dashed curve).

15 Scattering Theory

15.1 Introduction

Historically, data regarding quantum phenomena has been obtained from two main sources: firstly, from the study of spectroscopic lines, and, secondly, from scattering experiments. We have already developed theories which account for some aspects of the spectrum of hydrogen, and hydrogen-like, atoms. Let us now examine the quantum theory of scattering.

15.2 Fundamentals

Consider time-independent, energy-conserving scattering in which the Hamiltonian of the system is written
H = H0 + V(r),   (15.1)
where
H0 = p²/2m ≡ −(ħ²/2m) ∇²   (15.2)
is the Hamiltonian of a free particle of mass m, and V(r) the scattering potential. This potential is assumed to only be non-zero in a fairly localized region close to the origin. Let
ψ0(r) = √n e^{i k·r}   (15.3)
represent an incident beam of particles, of number density n and velocity v = ħk/m. Of course,
H0 ψ0 = E ψ0,   (15.4)
where E = ħ²k²/2m is the particle energy.
Schrödinger's equation for the scattering problem is
(H0 + V) ψ = E ψ,   (15.5)
subject to the boundary condition ψ → ψ0 as V → 0. The above equation can be rearranged to give
(∇² + k²) ψ = (2m/ħ²) V ψ.   (15.6)
Now,
(∇² + k²) u(r) = ρ(r)   (15.7)
is known as the Helmholtz equation. The solution to this equation is well known:¹
u(r) = u0(r) − ∫ [ e^{i k |r − r′|} / (4π |r − r′|) ] ρ(r′) d³r′.   (15.8)
Here, u0(r) is any solution of (∇² + k²) u0 = 0. Hence, Eq. (15.6) can be inverted, subject to the boundary condition ψ → ψ0 as V → 0, to give
ψ(r) = ψ0(r) − (2m/ħ²) ∫ [ e^{i k |r − r′|} / (4π |r − r′|) ] V(r′) ψ(r′) d³r′.   (15.9)

¹See Griffiths, Sect. 11.4.

Let us calculate the value of the wavefunction ψ(r) well outside the scattering region. Now, if r ≫ r′ then
|r − r′| ≃ r − r̂ · r′   (15.10)
to first order in r′/r, where r̂ = r/r is a unit vector which points from the scattering region to the observation point. It is helpful to define k′ = k r̂. This is the wavevector for particles with the same energy as the incoming particles (i.e., k′ = k) which propagate from the scattering region to the observation point. Equation (15.9) reduces to
ψ(r) ≃ √n [ e^{i k·r} + (e^{i k r}/r) f(k, k′) ],   (15.11)
where
f(k, k′) = −[ m / (2π √n ħ²) ] ∫ e^{−i k′·r′} V(r′) ψ(r′) d³r′.   (15.12)
The first term on the right-hand side of Eq. (15.11) represents the incident particle beam, whereas the second term represents an outgoing spherical wave of scattered particles.

The differential scattering cross-section dσ/dΩ is defined as the number of particles per unit time scattered into an element of solid angle dΩ, divided by the incident particle flux. From Sect. 7.2, the probability flux (i.e., the particle flux) associated with a wavefunction ψ is
j = (ħ/m) Im(ψ* ∇ψ).   (15.13)
Thus, the particle flux associated with the incident wavefunction ψ0 is
j = n v,   (15.14)
where v = ħk/m is the velocity of the incident particles.
Likewise, the particle flux associated with the scattered wavefunction ψ − ψ0 is
j′ = n [ |f(k, k′)|² / r² ] v′,   (15.15)
where v′ = ħk′/m is the velocity of the scattered particles. Now,
(dσ/dΩ) dΩ = r² dΩ |j′| / |j|,   (15.16)
which yields
dσ/dΩ = |f(k, k′)|².   (15.17)
Thus, |f(k, k′)|² gives the differential cross-section for particles with incident velocity v = ħk/m to be scattered such that their final velocities are directed into a range of solid angles dΩ about v′ = ħk′/m. Note that the scattering conserves energy, so that |v′| = |v| and |k′| = |k|.

15.3 Born Approximation

Equation (15.17) is not particularly useful, as it stands, because the quantity f(k, k′) depends on the, as yet, unknown wavefunction ψ(r) [see Eq. (15.12)]. Suppose, however, that the scattering is not particularly strong. In this case, it is reasonable to suppose that the total wavefunction, ψ(r), does not differ substantially from the incident wavefunction, ψ0(r). Thus, we can obtain an expression for f(k, k′) by making the substitution ψ(r) → ψ0(r) = √n exp(i k·r) in Eq. (15.12). This procedure is called the Born approximation, and yields
f(k, k′) ≃ −(m/2πħ²) ∫ e^{i (k−k′)·r′} V(r′) d³r′.   (15.18)
Thus, f(k, k′) is proportional to the Fourier transform of the scattering potential V(r) with respect to the wavevector q = k − k′. For a spherically symmetric potential,
f(k′, k) ≃ −(m/2πħ²) ∫∫∫ exp(i q r′ cos θ′) V(r′) r′² dr′ sin θ′ dθ′ dφ′,   (15.19)
giving
f(k′, k) ≃ −(2m/ħ²q) ∫_0^∞ r′ V(r′) sin(q r′) dr′.   (15.20)
Note that f(k′, k) is just a function of q for a spherically symmetric potential. It is easily demonstrated that
q ≡ |k − k′| = 2 k sin(θ/2),   (15.21)
where θ is the angle subtended between the vectors k and k′. In other words, θ is the scattering angle. Recall that the vectors k and k′ have the same length, via energy conservation.
Consider scattering by a Yukawa potential,
V(r) = V0 exp(−μr) / (μr),   (15.22)
where V0 is a constant, and 1/μ measures the "range" of the potential. It follows from Eq. (15.20) that
f(θ) = −(2mV0/ħ²μ) · 1/(q² + μ²),   (15.23)
since
∫_0^∞ exp(−μr′) sin(q r′) dr′ = q/(q² + μ²).   (15.24)
Thus, in the Born approximation, the differential cross-section for scattering by a Yukawa potential is
dσ/dΩ ≃ (2mV0/ħ²μ)² · 1/[2k²(1 − cos θ) + μ²]²,   (15.25)
given that
q² = 4k² sin²(θ/2) = 2k²(1 − cos θ).   (15.26)

The Yukawa potential reduces to the familiar Coulomb potential as μ → 0, provided that V0/μ → Z Z′ e²/4πε0. In this limit, the Born differential cross-section becomes
dσ/dΩ ≃ (2m Z Z′ e² / 4πε0 ħ²)² · 1/[16 k⁴ sin⁴(θ/2)].   (15.27)
Recall that ħk is equivalent to |p|, so the above equation can be rewritten
dσ/dΩ ≃ (Z Z′ e² / 16πε0 E)² · 1/sin⁴(θ/2),   (15.28)
where E = p²/2m is the kinetic energy of the incident particles. Of course, Eq. (15.28) is the famous Rutherford scattering cross-section formula.

The Born approximation is valid provided that ψ(r) is not too different from ψ0(r) in the scattering region. It follows, from Eq. (15.9), that the condition for ψ(r) ≃ ψ0(r) in the vicinity of r = 0 is
(m/2πħ²) | ∫ [exp(i k r′)/r′] V(r′) d³r′ | ≪ 1.   (15.29)
Consider the special case of the Yukawa potential. At low energies (i.e., k ≪ μ) we can replace exp(i k r′) by unity, giving
(2m/ħ²) |V0|/μ² ≪ 1   (15.30)
as the condition for the validity of the Born approximation. The condition for the Yukawa potential to develop a bound state is
(2m/ħ²) |V0|/μ² ≥ 2.7,   (15.31)
where V0 is negative. Thus, if the potential is strong enough to form a bound state then the Born approximation is likely to break down. In the high-k limit, Eq. (15.29) yields
(2m/ħ²) |V0|/(μk) ≪ 1.   (15.32)
This inequality becomes progressively easier to satisfy as k increases, implying that the Born approximation is more accurate at high incident particle energies.
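Equation (15.24), on which the Yukawa result rests, is easy to verify numerically; a minimal midpoint-rule sketch in plain Python (the step size, cutoff, and parameter values are arbitrary choices):

```python
import math

def yukawa_integral(q, mu, dr=1e-3, rmax=100.0):
    """Midpoint-rule estimate of the integral of exp(-mu*r)*sin(q*r) over r in [0, inf)."""
    total, r = 0.0, 0.5 * dr
    while r < rmax:
        total += math.exp(-mu * r) * math.sin(q * r) * dr
        r += dr
    return total

q, mu = 1.3, 0.7
numeric = yukawa_integral(q, mu)
exact = q / (q**2 + mu**2)   # right-hand side of Eq. (15.24)
```

For these parameters the two values agree to better than one part in 10⁴, as expected for the step size chosen.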
15.4 Partial Waves

We can assume, without loss of generality, that the incident wavefunction is characterized by a wavevector k which is aligned parallel to the z-axis. The scattered wavefunction is characterized by a wavevector k′ which has the same magnitude as k, but, in general, points in a different direction. The direction of k′ is specified by the polar angle θ (i.e., the angle subtended between the two wavevectors), and an azimuthal angle φ about the z-axis. Equations (15.20) and (15.21) strongly suggest that for a spherically symmetric scattering potential [i.e., V(r) = V(r)] the scattering amplitude is a function of θ only: i.e.,
f(θ, φ) = f(θ).   (15.33)
It follows that neither the incident wavefunction,
ψ0(r) = √n exp(i k z) = √n exp(i k r cos θ),   (15.34)
nor the large-r form of the total wavefunction,
ψ(r) = √n [ exp(i k r cos θ) + exp(i k r) f(θ)/r ],   (15.35)
depends on the azimuthal angle φ.

Outside the range of the scattering potential, both ψ0(r) and ψ(r) satisfy the free-space Schrödinger equation
(∇² + k²) ψ = 0.   (15.36)
What is the most general solution to this equation in spherical polar coordinates which does not depend on the azimuthal angle φ? Separation of variables yields
ψ(r, θ) = Σ_l Rl(r) Pl(cos θ),   (15.37)
since the Legendre functions Pl(cos θ) form a complete set in θ-space. The Legendre functions are related to the spherical harmonics, introduced in Chapter 8, via
Pl(cos θ) = √[4π/(2l + 1)] Y_{l,0}(θ, φ).   (15.38)
Equations (15.36) and (15.37) can be combined to give
r² d²Rl/dr² + 2r dRl/dr + [k²r² − l(l + 1)] Rl = 0.   (15.39)
The two independent solutions to this equation are the spherical Bessel functions, jl(kr) and yl(kr), introduced in Sect. 9.3. Recall that
jl(z) = z^l ( −(1/z) d/dz )^l ( sin z / z ),   (15.40)
yl(z) = −z^l ( −(1/z) d/dz )^l ( cos z / z ).   (15.41)
Note that the jl(z) are well behaved in the limit z → 0, whereas the yl(z) become singular.
The asymptotic behaviour of these functions in the limit z → ∞ is
jl(z) → sin(z − lπ/2)/z,   (15.42)
yl(z) → −cos(z − lπ/2)/z.   (15.43)

We can write
exp(i k r cos θ) = Σ_l a_l jl(kr) Pl(cos θ),   (15.44)
where the a_l are constants. Note there are no yl(kr) functions in this expression, because they are not well behaved as r → 0. The Legendre functions satisfy the orthogonality relation
∫_{−1}^{1} Pn(μ) Pm(μ) dμ = δnm/(n + 1/2),   (15.45)
so we can invert the above expansion to give
a_l jl(kr) = (l + 1/2) ∫_{−1}^{1} exp(i k r μ) Pl(μ) dμ.   (15.46)
It is well known that
jl(y) = [ (−i)^l / 2 ] ∫_{−1}^{1} exp(i y μ) Pl(μ) dμ,   (15.47)
where l = 0, 1, 2, … [see M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions (Dover, New York NY, 1965), Eq. 10.1.14]. Thus,
a_l = i^l (2l + 1),   (15.48)
giving
ψ0(r) = √n exp(i k r cos θ) = √n Σ_l i^l (2l + 1) jl(kr) Pl(cos θ).   (15.49)
The above expression tells us how to decompose the incident plane-wave into a series of spherical waves. These waves are usually termed "partial waves".

The most general expression for the total wavefunction outside the scattering region is
ψ(r) = √n Σ_l [ Al jl(kr) + Bl yl(kr) ] Pl(cos θ),   (15.50)
where the Al and Bl are constants. Note that the yl(kr) functions are allowed to appear in this expansion, because its region of validity does not include the origin. In the large-r limit, the total wavefunction reduces to
ψ(r) ≃ √n Σ_l [ Al sin(kr − lπ/2)/(kr) − Bl cos(kr − lπ/2)/(kr) ] Pl(cos θ),   (15.51)
where use has been made of Eqs. (15.42) and (15.43). The above expression can also be written
ψ(r) ≃ √n Σ_l Cl sin(kr − lπ/2 + δl)/(kr) Pl(cos θ),   (15.52)
where the sine and cosine functions have been combined to give a sine function which is phase-shifted by δl. Note that Al = Cl cos δl and Bl = −Cl sin δl. Equation (15.52) yields
ψ(r) ≃ √n Σ_l Cl [ ( e^{i(kr − lπ/2 + δl)} − e^{−i(kr − lπ/2 + δl)} ) / (2 i k r) ] Pl(cos θ),   (15.53)
which contains both incoming and outgoing spherical waves.
What is the source of the incoming waves? Obviously, they must be part of the large-r asymptotic expansion of the incident wavefunction. In fact, it is easily seen from Eqs. (15.42) and (15.49) that
ψ0(r) ≃ √n Σ_l i^l (2l + 1) [ ( e^{i(kr − lπ/2)} − e^{−i(kr − lπ/2)} ) / (2 i k r) ] Pl(cos θ)   (15.54)
in the large-r limit. Now, Eqs. (15.34) and (15.35) give
[ψ(r) − ψ0(r)] / √n = [exp(i k r)/r] f(θ).   (15.55)
Note that the right-hand side consists of an outgoing spherical wave only. This implies that the coefficients of the incoming spherical waves in the large-r expansions of ψ(r) and ψ0(r) must be the same. It follows from Eqs. (15.53) and (15.54) that
Cl = (2l + 1) exp[i (δl + lπ/2)].   (15.56)
Thus, Eqs. (15.53)–(15.55) yield
f(θ) = Σ_{l=0}^{∞} [(2l + 1)/k] exp(i δl) sin δl Pl(cos θ).   (15.57)
Clearly, determining the scattering amplitude f(θ) via a decomposition into partial waves (i.e., spherical waves) is equivalent to determining the phase-shifts δl.

Now, the differential scattering cross-section dσ/dΩ is simply the modulus squared of the scattering amplitude f(θ) [see Eq. (15.17)]. The total cross-section is thus given by
σtotal = ∫ |f(θ)|² dΩ
 = (1/k²) ∮ dφ ∫_{−1}^{1} dμ Σ_l Σ_{l′} (2l + 1)(2l′ + 1) exp[i(δl − δl′)] sin δl sin δl′ Pl(μ) Pl′(μ),   (15.58)
where μ = cos θ. It follows that
σtotal = (4π/k²) Σ_l (2l + 1) sin² δl,   (15.59)
where use has been made of Eq. (15.45).

15.5 Determination of Phase-Shifts

Let us now consider how the phase-shifts δl in Eq. (15.57) can be evaluated. Consider a spherically symmetric potential V(r) which vanishes for r > a, where a is termed the range of the potential. In the region r > a, the wavefunction ψ(r) satisfies the free-space Schrödinger equation (15.36). The most general solution which is consistent with no incoming spherical waves is
ψ(r) = √n Σ_{l=0}^{∞} i^l (2l + 1) Rl(r) Pl(cos θ),   (15.60)
where
Rl(r) = exp(i δl) [ cos δl jl(kr) − sin δl yl(kr) ].
(15.61)

Note that the yl(kr) functions are allowed to appear in the above expression, because its region of validity does not include the origin (where V ≠ 0). The logarithmic derivative of the lth radial wavefunction, Rl(r), just outside the range of the potential is given by
βl+ = ka [ cos δl j′l(ka) − sin δl y′l(ka) ] / [ cos δl jl(ka) − sin δl yl(ka) ],   (15.62)
where j′l(x) denotes djl(x)/dx, etc. The above equation can be inverted to give
tan δl = [ ka j′l(ka) − βl+ jl(ka) ] / [ ka y′l(ka) − βl+ yl(ka) ].   (15.63)
Thus, the problem of determining the phase-shift δl is equivalent to that of obtaining βl+.

The most general solution to Schrödinger's equation inside the range of the potential (r < a) which does not depend on the azimuthal angle φ is
ψ(r) = √n Σ_{l=0}^{∞} i^l (2l + 1) Rl(r) Pl(cos θ),   (15.64)
where
Rl(r) = ul(r)/r,   (15.65)
and
d²ul/dr² + [ k² − l(l + 1)/r² − (2m/ħ²) V ] ul = 0.   (15.66)
The boundary condition
ul(0) = 0   (15.67)
ensures that the radial wavefunction is well behaved at the origin. We can launch a well-behaved solution of the above equation from r = 0, integrate out to r = a, and form the logarithmic derivative
βl− = [ (1/(ul/r)) d(ul/r)/dr ]_{r=a}.   (15.68)
Since ψ(r) and its first derivatives are necessarily continuous for physically acceptable wavefunctions, it follows that
βl+ = βl−.   (15.69)
The phase-shift δl is then obtainable from Eq. (15.63).

15.6 Hard Sphere Scattering

Let us test out this scheme using a particularly simple example. Consider scattering by a hard sphere, for which the potential is infinite for r < a, and zero for r > a. It follows that ψ(r) is zero in the region r < a, which implies that ul = 0 for all l. Thus,
βl− = βl+ = ∞   (15.70)
for all l. Equation (15.63) thus gives
tan δl = jl(ka)/yl(ka).   (15.71)
Consider the l = 0 partial wave, which is usually referred to as the S-wave. Equation (15.71) yields
tan δ0 = [sin(ka)/ka] / [−cos(ka)/ka] = −tan(ka),   (15.72)
where use has been made of Eqs.
(15.40) and (15.41). It follows that
δ0 = −ka.   (15.73)
The S-wave radial wavefunction is [see Eq. (15.61)]
R0(r) = exp(−i k a) [ cos(ka) sin(kr) − sin(ka) cos(kr) ]/(kr) = exp(−i k a) sin[k(r − a)]/(kr).   (15.74)
The corresponding radial wavefunction for the incident wave takes the form [see Eq. (15.49)]
R̃0(r) = sin(kr)/(kr).   (15.75)
Thus, the actual l = 0 radial wavefunction is similar to the incident l = 0 wavefunction, except that it is phase-shifted by ka.

Let us examine the low- and high-energy asymptotic limits of tan δl. Low energy implies that ka ≪ 1. In this regime, the spherical Bessel functions reduce to
jl(kr) ≃ (kr)^l / (2l + 1)!!,   (15.76)
yl(kr) ≃ −(2l − 1)!! / (kr)^{l+1},   (15.77)
where n!! = n (n − 2)(n − 4) ⋯ 1. It follows that
tan δl = −(ka)^{2l+1} / { (2l + 1) [(2l − 1)!!]² }.   (15.78)
It is clear that we can neglect δl, with l > 0, with respect to δ0. In other words, at low energy only S-wave scattering (i.e., spherically symmetric scattering) is important. It follows from Eqs. (15.17), (15.57), and (15.73) that
dσ/dΩ = sin²(ka)/k² ≃ a²   (15.79)
for ka ≪ 1. Note that the total cross-section
σtotal = ∫ (dσ/dΩ) dΩ = 4π a²   (15.80)
is four times the geometric cross-section πa² (i.e., the cross-section for classical particles bouncing off a hard sphere of radius a). However, low-energy scattering implies relatively long wavelengths, so we would not expect to obtain the classical result in this limit.

Consider the high-energy limit ka ≫ 1. At high energies, all partial waves up to lmax = ka contribute significantly to the scattering cross-section. It follows from Eq. (15.59) that
σtotal ≃ (4π/k²) Σ_{l=0}^{lmax} (2l + 1) sin² δl.   (15.81)
With so many l values contributing, it is legitimate to replace sin² δl by its average value 1/2. Thus,
σtotal ≃ Σ_{l=0}^{ka} (2π/k²) (2l + 1) ≃ 2π a².   (15.82)
This is twice the classical result, which is somewhat surprising, since we might expect to obtain the classical result in the short-wavelength limit.
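The low-energy behaviour described above can be checked directly with the explicit l = 0 and l = 1 spherical Bessel functions; a short sketch in plain Python (we set a = 1, so ka is the only parameter):

```python
import math

def j0(x): return math.sin(x) / x
def y0(x): return -math.cos(x) / x
def j1(x): return math.sin(x) / x**2 - math.cos(x) / x
def y1(x): return -math.cos(x) / x**2 - math.sin(x) / x

ka = 0.05                     # low-energy regime, ka << 1
tan_d0 = j0(ka) / y0(ka)      # Eq. (15.71), l = 0: equals -tan(ka), Eq. (15.72)
tan_d1 = j1(ka) / y1(ka)      # l = 1: approximately -(ka)^3/3, per Eq. (15.78)
delta0 = math.atan(tan_d0)    # = -ka, Eq. (15.73)
sigma = (4 * math.pi / ka**2) * math.sin(delta0)**2  # close to 4*pi*a^2 with a = 1
```

The l = 1 phase-shift is smaller than δ0 by a factor of order (ka)², confirming that only S-wave scattering matters at low energy.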
For hard sphere scattering, incident waves with impact parameters less than a must be deflected. However, in order to produce a "shadow" behind the sphere, there must also be some scattering in the forward direction, in order to produce destructive interference with the incident plane-wave. In fact, the interference is not completely destructive, and the shadow has a bright spot (the so-called "Poisson spot") in the forward direction. The effective cross-section associated with this bright spot is πa², which, when combined with the cross-section for classical reflection, πa², gives the actual cross-section of 2πa².

15.7 Low Energy Scattering

In general, at low energies (i.e., when 1/k is much larger than the range of the potential) partial waves with l > 0 make a negligible contribution to the scattering cross-section. It follows that, at these energies, with a finite-range potential, only S-wave scattering is important.

As a specific example, let us consider scattering by a finite potential well, characterized by V = V0 for r < a, and V = 0 for r ≥ a. Here, V0 is a constant. The potential is repulsive for V0 > 0, and attractive for V0 < 0. The outside wavefunction is given by [see Eq. (15.61)]
R0(r) = exp(i δ0) [ cos δ0 j0(kr) − sin δ0 y0(kr) ] = exp(i δ0) sin(kr + δ0)/(kr),   (15.83)
where use has been made of Eqs. (15.40) and (15.41). The inside wavefunction follows from Eq. (15.66). We obtain
R0(r) = B sin(k′r)/r,   (15.84)
where use has been made of the boundary condition (15.67). Here, B is a constant, and
E − V0 = ħ²k′²/2m.   (15.85)
Note that Eq. (15.84) only applies when E > V0. For E < V0, we have
R0(r) = B sinh(κr)/r,   (15.86)
where
V0 − E = ħ²κ²/2m.   (15.87)
Matching R0(r), and its radial derivative, at r = a yields
tan(ka + δ0) = (k/k′) tan(k′a)   (15.88)
for E > V0, and
tan(ka + δ0) = (k/κ) tanh(κa)   (15.89)
for E < V0.

Consider an attractive potential, for which E > V0.
Suppose that |V0| ≫ E (i.e., the depth of the potential well is much larger than the energy of the incident particles), so that k′ ≫ k. We can see from Eq. (15.88) that, unless tan(k′a) becomes extremely large, the right-hand side is much less than unity, so replacing the tangent of a small quantity with the quantity itself, we obtain
ka + δ0 ≃ (k/k′) tan(k′a).   (15.90)
This yields
δ0 ≃ ka [ tan(k′a)/(k′a) − 1 ].   (15.91)
According to Eq. (15.81), the scattering cross-section is given by
σtotal ≃ (4π/k²) sin² δ0 = 4πa² [ tan(k′a)/(k′a) − 1 ]².   (15.92)
Now,
k′a = √[ k²a² + 2m|V0|a²/ħ² ],   (15.93)
so for sufficiently small values of ka,
k′a ≃ √[ 2m|V0|a²/ħ² ].   (15.94)
It follows that the total (S-wave) scattering cross-section is independent of the energy of the incident particles (provided that this energy is sufficiently small).

Note that there are values of k′a (e.g., k′a ≃ 4.49) at which δ0 → π, and the scattering cross-section (15.92) vanishes, despite the very strong attraction of the potential. In reality, the cross-section is not exactly zero, because of contributions from l > 0 partial waves. But, at low incident energies, these contributions are small. It follows that there are certain values of V0 and k which give rise to almost perfect transmission of the incident wave. This is called the Ramsauer-Townsend effect, and has been observed experimentally.

15.8 Resonances

There is a significant exception to the independence of the cross-section on energy mentioned above. Suppose that the quantity √(2m|V0|a²/ħ²) is slightly less than π/2. As the incident energy increases, k′a, which is given by Eq. (15.93), can reach the value π/2. In this case, tan(k′a) becomes infinite, so we can no longer assume that the right-hand side of Eq. (15.88) is small. In fact, it follows from Eq. (15.88) that at the value of the incident energy at which k′a = π/2, we also have ka + δ0 = π/2, or δ0 ≃ π/2 (since we are assuming that ka ≪ 1).
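The quoted value k′a ≃ 4.49 is the first nontrivial root of tan(k′a) = k′a, at which the bracket in Eq. (15.91) vanishes; a minimal bisection sketch in plain Python:

```python
import math

# delta0 in Eq. (15.91) vanishes when tan(k'a) = k'a; the first nontrivial
# root lies just below 3*pi/2, so bracket it inside (pi, 3*pi/2).
lo, hi = math.pi + 0.1, 1.5 * math.pi - 0.01
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if math.tan(mid) < mid:
        lo = mid      # tangent still below the line; root is to the right
    else:
        hi = mid
root = 0.5 * (lo + hi)    # approximately 4.493, the k'a ~ 4.49 quoted above
```

The same bisection with wider brackets locates the higher Ramsauer-Townsend zeros, which sit just below each odd multiple of π/2.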
This implies that
σtotal = (4π/k²) sin² δ0 = 4πa² [ 1/(k²a²) ].   (15.95)
Note that the cross-section now depends on the energy. Furthermore, the magnitude of the cross-section is much larger than that given in Eq. (15.92) for k′a ≠ π/2 (since ka ≪ 1).

The origin of this rather strange behaviour is quite simple. The condition
√(2m|V0|a²/ħ²) = π/2   (15.96)
is equivalent to the condition that a spherical well of depth V0 possesses a bound state at zero energy. Thus, for a potential well which satisfies the above equation, the energy of the scattering system is essentially the same as the energy of the bound state. In this situation, an incident particle would like to form a bound state in the potential well. However, the bound state is not stable, since the system has a small positive energy. Nevertheless, this sort of resonance scattering is best understood as the capture of an incident particle to form a metastable bound state, and the subsequent decay of the bound state and release of the particle. The cross-section for resonance scattering is generally much larger than that for non-resonance scattering.

We have seen that there is a resonant effect when the phase-shift of the S-wave takes the value π/2. There is nothing special about the l = 0 partial wave, so it is reasonable to assume that there is a similar resonance when the phase-shift of the lth partial wave is π/2. Suppose that δl attains the value π/2 at the incident energy E0, so that
δl(E0) = π/2.   (15.97)
Let us expand cot δl in the vicinity of the resonant energy:
cot δl(E) = cot δl(E0) + [ d cot δl/dE ]_{E=E0} (E − E0) + ⋯
 = −[ (1/sin² δl) dδl/dE ]_{E=E0} (E − E0) + ⋯.   (15.98)
Defining
[ dδl(E)/dE ]_{E=E0} = 2/Γ,   (15.99)
we obtain
cot δl(E) = −(2/Γ)(E − E0) + ⋯.   (15.100)
Recall, from Eq. (15.59), that the contribution of the lth partial wave to the scattering cross-section is
σl = (4π/k²)(2l + 1) sin² δl = (4π/k²)(2l + 1) · 1/(1 + cot² δl).
(15.101)

Thus,
σl ≃ (4π/k²)(2l + 1) (Γ²/4) / [ (E − E0)² + Γ²/4 ].   (15.102)
This is the famous Breit-Wigner formula. The variation of the partial cross-section σl with the incident energy has the form of a classical resonance curve. The quantity Γ is the width of the resonance (in energy). We can interpret the Breit-Wigner formula as describing the absorption of an incident particle to form a metastable state, of energy E0, and lifetime τ = ħ/Γ.
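The resonance curve of Eq. (15.102) is easily explored numerically; a short sketch in plain Python (the values of E0, Γ, and k are arbitrary illustrations):

```python
import math

def sigma_l(E, E0, Gamma, k, l=0):
    """Breit-Wigner partial cross-section, Eq. (15.102)."""
    return ((4 * math.pi / k**2) * (2 * l + 1)
            * (Gamma**2 / 4) / ((E - E0)**2 + Gamma**2 / 4))

E0, Gamma, k = 5.0, 0.2, 1.0
peak = sigma_l(E0, E0, Gamma, k)              # maximum, 4*pi*(2l+1)/k^2, at E = E0
half = sigma_l(E0 + Gamma / 2, E0, Gamma, k)  # exactly half the peak value
```

The cross-section falls to half its peak value at E = E0 ± Γ/2, so Γ is the full width at half maximum, consistent with its interpretation as the resonance width.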
https://jingyan.baidu.com/article/636f38bb3cfc88d6b946104b.html
Method/Steps

The Chinese numeral places are: 个, 十, 百, 千, 万, 十万, 百万, 千万, 亿, 十亿, 百亿, 千亿, 万亿, 十万亿, 百万亿, 千万亿, 亿亿, and so on.

「兆」 is also used in mainland China, but its meaning is not fixed: it can stand for a million (10^6), for 万亿 (10^12), or for 亿亿 (10^16). Chinese number places are not arranged in equal steps of three like English (10^3, 10^6, 10^9, 10^12, ...). Apart from 千, the basic Chinese places are related by squaring: ten times ten is 百, 百 times 百 is 万, and 万 times 万 is 亿. I cannot find an established place name for 亿亿, so writing it as 「兆」 for the moment, 亿 times 亿 is 兆. The basic Chinese places are therefore 10, 10^2, 10^4, 10^8, 10^16, ...

On 「十」 versus 「一十」: the numbers 10 through 19, and multi-digit numbers beginning with them, start with 十, as in 十五 (15), 十万 (100,000), and 十亿 (10^9). In numbers of two or more digits where a "ten" occurs in the middle, 一十 is used instead, as in 一百一十 (110), 一千零一十 (1,010), and 一万零一十 (10,010).

On 「二」 versus 「两」: 两亿, 两万, 两千, and 两百 are all acceptable, but 20 can only be 二十, and 二百 is also the better choice for 200. 2,222,222,222 (in Chinese four-digit grouping, 22,2222,2222) is 二十二亿两千二百二十二万两千二百二十二.

On 「零」 versus 「〇」: within a number, always use 零; 〇 is reserved for digit placeholders in labels such as page numbers and years. However many consecutive zeros occur inside a number, they are read as a single 零: 2,014 is 两千零一十四, 200,014 is 二十万零一十四, and 201,400 is 二十万零一千四百.

Edited on 2017-03-24.
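The place-value and zero rules above can be turned into a small converter. Below is a sketch in Python (our own illustration, not from the original article; it uses 二 throughout rather than the contextual 两, and implements the single-零 compression and leading-十 rules described above):

```python
DIGITS = "零一二三四五六七八九"
UNITS = ["", "十", "百", "千"]
GROUPS = ["", "万", "亿", "万亿"]

def group_to_cn(n):
    """Read one four-digit group 1..9999, compressing internal zeros to one 零."""
    out, pending_zero = "", False
    for i in range(3, -1, -1):
        d = (n // 10**i) % 10
        if d == 0:
            pending_zero = bool(out)   # only mark zeros that sit between digits
        else:
            if pending_zero:
                out += "零"
                pending_zero = False
            out += DIGITS[d] + UNITS[i]
    return out

def to_cn(n):
    """Non-negative integer -> Chinese numeral (uses 二, not the contextual 两)."""
    if n == 0:
        return "零"
    groups = []                        # four-digit groups, least significant first
    while n:
        groups.append(n % 10000)
        n //= 10000
    out = ""
    for i in range(len(groups) - 1, -1, -1):
        g = groups[i]
        if g == 0:
            continue
        # a 零 is read when a zero digit separates this group from the one above
        if out and (g < 1000 or groups[i + 1] % 10 == 0):
            out += "零"
        out += group_to_cn(g) + GROUPS[i]
    return out[1:] if out.startswith("一十") else out   # 一十五 -> 十五
```

For example, to_cn(201400) gives 二十万零一千四百, matching the reading given above.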
https://www.vulcanchem.com/product/vc1859071
Calcium bicarbonate (3983-19-5) for sale

Calcium bicarbonate

| Property | Value |
| --- | --- |
| Brand Name | Vulcanchem |
| CAS No. | 3983-19-5 |
| Cat. No. (VCID) | VC1859071 |
| Molecular Formula | C2H2CaO6 |
| Molecular Weight | 162.11 g/mol |
| IUPAC Name | calcium;hydrogen carbonate |
| Standard InChI | InChI=1S/2CH2O3.Ca/c2*2-1(3)4;/h2*(H2,2,3,4);/q;;+2/p-2 |
| Standard InChI Key | NKWPZUCBCARRDP-UHFFFAOYSA-L |
| SMILES | C(=O)(O)[O-].C(=O)(O)[O-].[Ca+2] |

For research use only. Not for human or veterinary use.

Introduction

Chemical Properties and Structure

Calcium bicarbonate is an ionic compound composed of calcium, hydrogen, carbon, and oxygen, with the molecular formula Ca(HCO₃)₂, alternatively represented as C₂H₂CaO₆. The International Union of Pure and Applied Chemistry (IUPAC) designates it as calcium hydrogen carbonate. With a molar mass of 162.11 g/mol, calcium bicarbonate features a trigonal crystal structure when in solution.
The relative concentrations of carbon-containing species in calcium bicarbonate solutions depend on pH levels, with bicarbonate ions predominating within the range of 6.36–10.25 in fresh water systems.

Chemical Identification

The chemical identification parameters for calcium bicarbonate provide essential reference information for researchers and industry professionals working with this compound.

| Identification Parameter | Value |
| --- | --- |
| CAS Number | 3983-19-5 |
| PubChem CID | 10176262 |
| ChemSpider ID | 8351767 |
| UNII | 7PRA4BLM2L |
| Molecular Formula | Ca(HCO₃)₂ or C₂H₂CaO₆ |
| Molar Mass | 162.11 g/mol |

Table 1: Chemical identification parameters for calcium bicarbonate

Physical Properties

The physical properties of calcium bicarbonate define its behavior in various environmental and industrial settings. Understanding these properties is essential for its appropriate application and handling.

| Property | Value/Description |
| --- | --- |
| Appearance | White powder (theoretical); exists as unstable solution |
| Melting point | 1339 °C (2442 °F) (decomposes) |
| Boiling point | Not applicable (decomposes) |
| Density | 2.711 g cm⁻³ |
| State at room temperature | Exists only in solution |
| Solubility in water | 16.1 g/100 ml (0 °C); 16.6 g/100 ml (20 °C); 18.4 g/100 ml (100 °C) |
| pH | >7 (basic) |

Table 2: Physical properties of calcium bicarbonate

The solubility data indicates that calcium bicarbonate becomes more soluble at higher temperatures, contrary to the behavior of many other calcium salts. This property has significant implications for its role in natural water systems and various industrial processes.
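As a cross-check, the tabulated molecular weight follows directly from the formula C2H2CaO6; a short Python sketch (the atomic weights are standard IUPAC values, supplied here as an assumption rather than taken from this page):

```python
# Standard atomic weights (IUPAC, g/mol) for the elements in Ca(HCO3)2
ATOMIC_WEIGHT = {"Ca": 40.078, "H": 1.008, "C": 12.011, "O": 15.999}

def molar_mass(composition):
    """Sum atomic weights over a dict of element -> atom count."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

m = molar_mass({"Ca": 1, "H": 2, "C": 2, "O": 6})   # C2H2CaO6
# m is about 162.11 g/mol, matching the tabulated molecular weight
```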
Formation and Synthesis

Laboratory Synthesis

Calcium bicarbonate can be prepared through a reaction between calcium carbonate and carbonic acid, as represented by the following chemical equation:

CaCO₃ + H₂CO₃ → Ca(HCO₃)₂

Alternatively, it can be produced by bubbling an excess of carbon dioxide through an aqueous suspension of calcium carbonate until all the carbonate dissolves:

CaCO₃(s) + CO₂(g) + H₂O(l) → Ca(HCO₃)₂(aq)

Natural Formation

In natural environments, calcium bicarbonate forms when rainwater containing dissolved carbon dioxide (which forms carbonic acid) comes into contact with limestone or other calcium carbonate-containing minerals. The reaction can be represented as:

CaCO₃ + CO₂ + H₂O → Ca(HCO₃)₂

This process is fundamental to the weathering of limestone formations and the creation of karst topography around the world.

Reactions

Calcium bicarbonate participates in various chemical reactions that define its behavior in both natural and industrial settings. These reactions are particularly important in understanding its role in water hardness, scale formation, and potential applications.

Thermal Decomposition

When heated, calcium bicarbonate decomposes to form calcium carbonate, carbon dioxide, and water:

Ca(HCO₃)₂ → CaCO₃ + CO₂ + H₂O

This decomposition reaction is crucial in the formation of stalactites, stalagmites, and other cave formations, as well as in problems related to scale formation in industrial equipment.

Acid-Base Reactions

Calcium bicarbonate reacts with various acids to produce the corresponding calcium salts, carbon dioxide, and water. Some notable examples include:

With hydrochloric acid: Ca(HCO₃)₂ + 2HCl → CaCl₂ + 2CO₂ + 2H₂O

With sulfuric acid: Ca(HCO₃)₂ + H₂SO₄ → CaSO₄ + 2CO₂ + 2H₂O

With nitric acid: Ca(HCO₃)₂ + 2HNO₃ → Ca(NO₃)₂ + 2CO₂ + 2H₂O

These reactions are relevant to water treatment processes, agricultural applications, and industrial settings where calcium bicarbonate might need to be removed or transformed.
Natural Occurrence

Calcium bicarbonate is ubiquitous in natural water systems. All waters in contact with the atmosphere absorb carbon dioxide, and as these waters interact with rocks and sediments, they acquire metal ions, most commonly calcium and magnesium. Consequently, most natural waters from streams, lakes, and especially wells can be regarded as dilute solutions of these bicarbonates.

Role in Cave Formation

The chemical behavior of calcium bicarbonate plays a critical role in the formation of caves and their distinctive formations. As groundwater containing dissolved calcium bicarbonate enters cave environments, the excess carbon dioxide is released from the solution, causing the much less soluble calcium carbonate to be deposited. This process is responsible for the formation of stalactites, stalagmites, columns, and other speleothems within caves.

Water Hardness

The presence of calcium bicarbonate in water contributes significantly to water hardness. These hard waters tend to form carbonate scale in pipes and boilers, and they react with soaps to form an undesirable scum, creating challenges for both domestic and industrial water users.

Applications

Industrial Applications

Calcium bicarbonate finds applications across various industries:

- Food industry: used as a food additive for various purposes.
- Manufacturing: employed as an anti-caking agent in powdered products.
- Food processing: functions as a color stabilizer in food products.
- Water treatment: used in certain water treatment processes for pH adjustment and mineral addition.

Medical and Antimicrobial Applications

Recent research has revealed remarkable properties of calcium bicarbonate when subjected to specific treatments. Clinical applications include:

- Treatment of gastroesophageal reflux disease with calcium bicarbonate water.
- Potential use in protective oral care products, such as chewing gums.
Advanced Research Findings

Preparation of CAC-717

CAC-717 is prepared by electrifying a calcium bicarbonate solution at 4 V for 48 hours using Teflon-coated electrostatic-field electrodes. The resulting material has a pH of approximately 12.4 and contains 6.9 mM calcium bicarbonate particles (81,120 mg/l) with a mesoscopic structure (50–500 nm) observable under an electron microscope.

For storage, CAC-717 is adsorbed onto a ceramic surface and air-dried in the form of CAC-717 stones. Prior to use, these stones are placed in fresh distilled water, allowing the calcium bicarbonate to dissolve and CAC-717 to be reconstituted with the same microbicidal properties as the original solution.
https://www.vcalc.com/wiki/circle-chord-length-from-angle-and-radius
Circle Chord Length from Angle and Radius

vCalc Reviewed. Last modified by KurtHeckman on Apr 4, 2023, 11:53:30 AM. Created by KurtHeckman on Apr 27, 2018, 2:43:56 PM.

L = 2·r·sin(θ/2)

The Chord of a Circle calculator computes the length of a chord (L) on a circle based on the radius (r) of the circle and the angle of the arc (θ).

INSTRUCTIONS: Choose units and enter the following:
(θ) The angle of the arc
(r) The radius of the circle
Chord of a Circle (L): The calculator computes the length of the chord in meters. However, this can be automatically converted to other length units via the pull-down menu.
The Math

The formula for the length of a circle's chord from the radius and angle is:

L = 2·r·sin(θ/2)

where:
- L is the length of the chord
- r is the radius of the circle
- θ is the angle

Related Calculators:
- Circle Area - This computes the area of a circle given the radius (A = πr²).
- Segment Area f(r,θ) - This computes the area of an arc segment of a circle given the radius (r) and angle (θ).
- Segment Area f(r,h) - This computes the area of an arc segment of a circle given the radius (r) and the depth (h) into the circle.
- Sector Area f(r,θ) - This computes the area of a sector (pie slice) of a circle given the radius (r) and angle (θ).
- Area of Annulus - This computes the area of an annulus (ring) given the inner radius (r) and outer radius (R).
- Radius - Center to a Point - This computes the radius of a circle given the center point (h,k) and any other point (x,y) on the circle.
- Circumference - This computes the circumference of a circle given the radius (C = 2πr).
- Arc Lengths - This computes the arc length on a circle given the radius (r) and angle (θ).
- Circle within a Triangle - This computes the radius of a circle inscribed within a triangle given the length of the three sides (a,b,c) of the triangle.
- Circle around a Triangle - This computes the radius of a circle that circumscribes a triangle given the length of the three sides (a,b,c) of the triangle.
- Radius from Circumference - This computes the radius of a circle given the circumference.
- Circumference from Area - This computes the circumference of a circle given the area.
- Radius from Area - This computes the radius of a circle given the area.
- Radius from Chord - This computes the radius of a circle based on the length of a chord and the chord's center height.
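The formula is one line of code; a minimal sketch (function name is my own, not from the page):

```python
import math

def chord_length(radius, angle_rad):
    """Chord subtending angle_rad (in radians) on a circle: L = 2*r*sin(theta/2)."""
    return 2.0 * radius * math.sin(angle_rad / 2.0)

# A 90-degree arc on a unit circle gives a chord of sqrt(2),
# and a 180-degree arc gives the diameter.
print(chord_length(1.0, math.pi / 2))  # ≈ 1.41421
print(chord_length(3.0, math.pi))      # 6.0
```

Note the angle must be in radians; convert degrees with `math.radians` first, mirroring the calculator's unit pull-down.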
vCalc content is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply.
http://mathcentral.uregina.ca/QQ/database/QQ.09.99/brown2.html
(-5)², -5² and -(5)²

Name: Jennifer Brown
Who is asking: Student

Question: What is the difference between the following problems: (-5)², -5² and -(5)²? Our text book (Beginning Algebra, fourth edition, published by McGraw Hill, by Streeter, Huthison and Hoetzle) says the second and third problem are exactly the same. I don't see how that can be. Is there a mathematical rule that explains this?

Hi Jennifer,

The parentheses in the first and third expression make it clear which part of the expression is raised to the power 2. In the first expression the power of 2 applies to -5, hence

(-5)² = (-5)(-5) = 25

In the third expression the parentheses tell us that the power of 2 applies to 5, and hence

-(5)² = -(5)(5) = -25

For the second expression, without the parentheses, we use the rules of precedence which tell us which operations to perform first. This convention is to first perform any exponentiation (powers), then any divisions and multiplications, and lastly any + or - operations. Some people use memory devices to remember this convention, one of which is PEDMAS (Parentheses, Exponentiation, Division, Multiplication, Addition then Subtraction). In your problem -5², first apply the exponentiation and then the negation, hence

-5² = -(5×5) = -25

Cheers,
Penny
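The same precedence convention is built into programming languages whose exponentiation operator binds tighter than unary minus; Python's `**` is one example, so the three expressions can be checked directly:

```python
# Parentheses make the base explicit: (-5) squared.
print((-5) ** 2)   # 25

# Without parentheses, exponentiation is applied before negation,
# so -5 ** 2 is parsed as -(5 ** 2), matching the textbook rule.
print(-5 ** 2)     # -25

# The parentheses here only group the 5, so this is again -(5 ** 2).
print(-(5) ** 2)   # -25
```

This confirms the book's claim that the second and third expressions are the same, and both differ from the first.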
https://www.vedantu.com/content-files-downloadable/ncert-solutions/ncert-solutions-class-11-chemistry-chapter-13-hydrocarbons.pdf
Class XI Chemistry www.vedantu.com

NCERT Solutions for Class 11 Chemistry Chapter 13 - Hydrocarbons

NCERT Exercise

1. How do you account for the formation of ethane during chlorination of methane?
Ans: Chlorination of methane proceeds by a free radical chain mechanism which involves three steps:
- Initiation: The reaction begins with homolytic cleavage of the Cl–Cl bond, producing chlorine free radicals.
- Propagation: A chlorine free radical formed in the prior step abstracts a hydrogen atom from methane to generate a methyl radical. The methyl radical then reacts with a chlorine molecule to form methyl chloride along with the liberation of another chlorine free radical. Thus, methyl and chlorine free radicals set up a chain reaction. While HCl and CH₃Cl are formed as the major products, other higher halogenated compounds are also formed.
- Termination: When the reactants are consumed, the reaction stops and the chain terminates by the combination of free radicals: two chlorine free radicals combine to form a chlorine molecule, and two methyl free radicals combine to form ethane.
Hence, ethane is obtained as a by-product of the chlorination of methane.

2. Write IUPAC names of the following compounds:
a. Ans: The IUPAC name of the compound is 2-Methylbut-2-ene.
b. Ans: The IUPAC name of the compound is Pent-1-en-3-yne.
c. Ans: The IUPAC name of the compound is Buta-1,3-diene (1,3-Butadiene).
d. Ans: The IUPAC name of the compound is 4-Phenylbut-1-ene.
e. Ans: The IUPAC name of the compound is 2-Methylphenol.
f. Ans: The IUPAC name of the compound is 5-(2-Methylpropyl)decane.
g. Ans: The IUPAC name of the compound is 4-Ethyldeca-1,5,8-triene.

3.
For the following compounds, write structural formulas and IUPAC names for all possible isomers having the number of double or triple bonds as indicated:
(a) C₄H₈ (one double bond)
Ans: The isomers with their IUPAC names are: But-1-ene (CH₂=CH–CH₂–CH₃), But-2-ene (CH₃–CH=CH–CH₃), and 2-Methylprop-1-ene (CH₂=C(CH₃)–CH₃).
(b) C₅H₈ (one triple bond)
Ans: The isomers with their IUPAC names are: Pent-1-yne (HC≡C–CH₂–CH₂–CH₃), Pent-2-yne (CH₃–C≡C–CH₂–CH₃), and 3-Methylbut-1-yne (HC≡C–CH(CH₃)–CH₃).

4. Write IUPAC names of the products obtained by the ozonolysis of the following compounds:
(i) Pent-2-ene
Ans: The products of the reaction are ethanal and propanal.
(ii) 3,4-Dimethylhept-3-ene
Ans: The products of the reaction are Butan-2-one and Pentan-2-one.
(iii) 2-Ethylbut-1-ene
Ans: The products of the reaction are Pentan-3-one and methanal.
(iv) 1-Phenylbut-1-ene
Ans: The products of the reaction are benzaldehyde and propanal.

5. An alkene 'A' on ozonolysis gives a mixture of ethanal and pentan-3-one. Write structure and IUPAC name of 'A'.
Ans: During ozonolysis, an ozonide having a cyclic structure is formed as an intermediate, which then undergoes cleavage to give the final products. Ethanal and pentan-3-one are obtained from the same intermediate ozonide. Since the ozonide is formed by addition of ozone to 'A', the structure of 'A' is obtained by removing ozone from the ozonide and rejoining the two carbonyl carbons with a double bond: CH₃–CH=C(C₂H₅)–C₂H₅. The IUPAC name of the compound is 3-Ethylpent-2-ene.

6.
An alkene 'A' contains three C–C σ bonds, eight C–H σ bonds and one C–C π bond. 'A' on ozonolysis gives two moles of an aldehyde of molar mass 44 u. Write the IUPAC name of 'A'.
Ans: 'A' on ozonolysis gives two moles of the same aldehyde, which indicates the presence of identical structural units on both sides of the doubly bonded carbon atoms. Hence, the structure of 'A' can be represented as XC=CX. The eight C–H σ bonds mean there are 8 hydrogen atoms in 'A', and the three C–C σ bonds (together with the C=C) mean there are 4 carbon atoms. Combining this information, 'A' is CH₃–CH=CH–CH₃, whose IUPAC name is But-2-ene. Ozonolysis of but-2-ene gives two moles of ethanal, which indeed has molar mass 44 u.

7. Propanal and pentan-3-one are the ozonolysis products of an alkene. What is the structural formula of the alkene?
Ans: Let the given alkene be 'A'. Writing the reverse of the ozonolysis reaction, the products are obtained on cleavage of an ozonide 'X', so 'X' contains both carbonyl fragments in cyclic form. Since 'X' is the addition product of alkene 'A' with ozone, the structure of 'A' is obtained by joining the two carbonyl carbons with a double bond: CH₃–CH₂–CH=C(C₂H₅)–CH₂–CH₃.

8.
Write chemical equations for combustion reactions of the following hydrocarbons:
(i) Butane
Ans: 2C₄H₁₀(g) + 13O₂(g) → 8CO₂(g) + 10H₂O(g) + Heat
(ii) Pentene
Ans: 2C₅H₁₀(g) + 15O₂(g) → 10CO₂(g) + 10H₂O(g) + Heat
(iii) Hexyne
Ans: 2C₆H₁₀(g) + 17O₂(g) → 12CO₂(g) + 10H₂O(g) + Heat
(iv) Toluene
Ans: C₇H₈(g) + 9O₂(g) → 7CO₂(g) + 4H₂O(g) + Heat

9. Draw the cis and trans structures of hex-2-ene. Which isomer will have higher b.p. and why?
Ans: Hex-2-ene is represented as CH₃–CH=CH–CH₂–CH₂–CH₃ and has cis and trans geometrical isomers. The dipole moment of the cis compound is the sum of the dipole moments of the C–CH₃ and C–CH₂CH₂CH₃ bonds acting in the same direction. The dipole moment of the trans compound is the resultant of the dipole moments of the same bonds acting in opposite directions. Thus, the cis isomer is more polar than the trans isomer. The higher the polarity, the greater the intermolecular dipole-dipole interaction and the higher the boiling point. Therefore, the cis isomer will have a higher boiling point than the trans isomer.

10. Why is benzene extraordinarily stable though it contains three double bonds?
Ans: Benzene has resonating structures which account for its stability. All 6 carbon atoms in benzene are sp² hybridized. Two of the sp² hybrid orbitals of each carbon atom overlap with the sp² hybrid orbitals of the adjacent carbon atoms to form 6 C–C σ bonds in the hexagonal plane. The remaining sp² hybrid orbital on each carbon atom overlaps with the s-orbital of a hydrogen atom to form 6 C–H σ bonds. The remaining unhybridized p-orbital of each carbon atom can form 3 π bonds by lateral overlap with those of adjacent carbon atoms.
The 6 π electrons are delocalized and can move freely about the 6 carbon nuclei. Even though three double bonds are present, these delocalized π electrons stabilize benzene.

11. What are the necessary conditions for any system to be aromatic?
Ans: A compound is said to be aromatic only if it completely satisfies the following conditions:
- It should be cyclic and have a planar structure.
- The π electrons of the compound must be completely delocalized in the ring.
- The total number of π electrons present in the ring should be equal to (4n + 2), where n = 0, 1, 2, ... (Huckel's rule).

12. Explain why the following systems are not aromatic?
(i) Ans: In the given compound, one carbon atom is sp³ hybridized, which makes it tetrahedral (not planar). Since an aromatic compound must be planar, the given compound is not aromatic in nature.
(ii) Ans: In the given compound, one carbon atom is sp³ hybridized, so the ring is not planar. Also, the number of π electrons is 4, so by Huckel's rule 4n + 2 = 4 gives n = 1/2. For a compound to be aromatic, the value of n must be an integer (0, 1, 2, ...), which is not satisfied here. Therefore, it is not aromatic in nature.
(iii) Ans: For the given compound, the number of π electrons is 8, so by Huckel's rule 4n + 2 = 8 gives n = 3/2. Since n must be an integer, the compound is not aromatic in nature.

13. How will you convert benzene into
(i) p-nitrobromobenzene
Ans: Benzene is first brominated (Br₂, anhydrous FeBr₃), and the bromobenzene is then nitrated (conc. HNO₃ + conc. H₂SO₄); Br is ortho/para-directing, so p-nitrobromobenzene is obtained.
(ii) m-nitrochlorobenzene
Ans: Benzene is first nitrated (conc. HNO₃ + conc. H₂SO₄), and the nitrobenzene is then chlorinated (Cl₂, anhydrous AlCl₃); NO₂ is meta-directing, so m-nitrochlorobenzene is obtained.
(iii) p-nitrotoluene
Ans: Benzene is first methylated by Friedel-Crafts alkylation (CH₃Cl, anhydrous AlCl₃), and the toluene is then nitrated (conc. HNO₃ + conc. H₂SO₄); CH₃ is ortho/para-directing, so p-nitrotoluene is obtained.
(iv) acetophenone
Ans: Benzene undergoes Friedel-Crafts acylation with acetyl chloride (CH₃COCl, anhydrous AlCl₃) to give acetophenone.

14.
In the alkane CH₃–CH₂–C(CH₃)₂–CH₂–CH(CH₃)₂, identify 1°, 2°, 3° carbon atoms and give the number of H atoms bonded to each one of these.
Ans:
- Primary carbon atoms are those which are bonded to only one carbon atom or none, i.e. they have only 1 carbon atom as their neighbor or none (in the case of methane). The given structure has 5 primary carbon atoms, with 15 hydrogen atoms attached to them.
- Secondary carbon atoms are those which are bonded to 2 carbon atoms, i.e. they have 2 carbon atoms as their neighbors. The given structure has 2 secondary carbon atoms, with 4 hydrogen atoms attached to them.
- Tertiary carbon atoms are those which are bonded to 3 carbon atoms, i.e. they have 3 carbon atoms as their neighbors. The given structure has 1 tertiary carbon atom, with only 1 hydrogen atom attached to it.

15. What effect does branching of an alkane chain have on its boiling point?
Ans: Alkanes mainly experience intermolecular van der Waals forces. The stronger the force, the higher the boiling point of the alkane. As branching increases, the surface area of the molecule decreases, which results in a smaller area of contact. The van der Waals forces therefore decrease and can be overcome at a relatively lower temperature. Hence, the boiling point of an alkane chain decreases with an increase in branching.

16. Addition of HBr to propene yields 2-bromopropane, while in the presence of benzoyl peroxide, the same reaction yields 1-bromopropane. Explain and give a mechanism.
Ans: Addition of HBr to propene is an example of an electrophilic addition reaction. Hydrogen bromide provides the electrophile H⁺. This electrophile attacks the double bond to form either a primary or a secondary carbocation. Secondary carbocations are comparatively more stable than primary carbocations, so the secondary carbocation predominates since it forms at a faster rate.
Thus, now Br attacks the carbocation to form 2 – bromopropane as the major product. Class XI Chemistry www.vedantu.com 14 This reaction follows Markovnikov’s rule. Now, In the presence of benzoyl peroxide, an additional reaction takes place by anti-Markovnikov’s rule. The reaction follows a free radical chain mechanism as; Here, 1 – bromopropane is obtained as the major product. In the presence of peroxide, Br free radical acts as an electrophile. Therefore, two different products are obtained in addition of HBr to propene according to the absence and presence of peroxide. 17. Write down the products of ozonolysis of 1, 2-dimethylbenzene (o-xylene). How does the result support Kekule structure for benzene? Ans: o-xylene has two resonating structures showing different reactions as follows; Class XI Chemistry www.vedantu.com 15 The three products are formed i.e., methyl glyoxal, 1,2-demethylglyoxal, and glyoxal from two Kekule structures. Since all three products cannot be obtained from any one of the two structures, this proves that o-xylene is a resonance hybrid of two Kekule structures. 18. Arrange benzene, n-hexane and ethyne in decreasing order of acidic behavior. Also give reason for this behavior. Ans: Acidic character of any species is defined on the basis of its ease with which it can lose the H– atoms. The hybridization state of carbon in the given compound is given as; According to the hybridization criterion, as the s–character increases the electronegativity of carbon increases and the electrons of C–H bond pair lie closer to the C atom. The s–character increases in the order: 3 2 sp sp sp   Thus, the decreasing order of acidic behavior is Ethyne > Benzene > Hexane. 19. Why does benzene undergo electrophilic substitution reactions easily and nucleophilic substitutions with difficulty? Class XI Chemistry www.vedantu.com 16 Ans:Benzene is a planar molecule having delocalized electrons above and below the plane of the ring. 
This makes it electron-rich. As a result, it is highly attractive to electron-deficient species, i.e. electrophiles. This is the reason benzene undergoes electrophilic substitution reactions very easily. Nucleophiles, on the other hand, are electron-rich and hence are repelled by benzene. Therefore, benzene undergoes nucleophilic substitutions with much difficulty.

20. How would you convert the following compounds into benzene?
(i) Ethyne
Ans: Ethyne undergoes cyclic polymerization when passed through a red-hot iron tube at 873 K, giving benzene.
(ii) Ethene
Ans: Ethene is first converted into ethyne (addition of Br₂ to give 1,2-dibromoethane, followed by double dehydrohalogenation with alcoholic KOH and then NaNH₂), and the ethyne is then cyclically polymerized to benzene as in (i).
(iii) Hexane
Ans: Hexane undergoes aromatization on heating to about 773 K at 10-20 atm in the presence of a catalyst such as Cr₂O₃, V₂O₅ or Mo₂O₃, giving benzene.

21. Write structures of all the alkenes which on hydrogenation give 2-methylbutane.
Ans: The carbon skeleton of 2-methylbutane is C–C(C)–C–C. On the basis of this structure, the alkenes that give 2-methylbutane on hydrogenation are: 3-Methylbut-1-ene (CH₂=CH–CH(CH₃)–CH₃), 2-Methylbut-1-ene (CH₂=C(CH₃)–CH₂–CH₃), and 2-Methylbut-2-ene (CH₃–C(CH₃)=CH–CH₃).

22. Arrange the following set of compounds in order of their decreasing relative reactivity with an electrophile, E⁺:
(a) Chlorobenzene, 2,4-dinitrochlorobenzene, p-nitrochlorobenzene
Ans: Electrophiles are reagents that participate in a reaction by accepting an electron pair in order to bond to the corresponding nucleophiles. The higher the electron density on a benzene ring, the more reactive the compound is towards an electrophile. An electron-withdrawing group (EWG) deactivates the aromatic ring by decreasing its electron density, and the NO₂ group is a stronger EWG than Cl. Thus, the decreasing order of reactivity is: Chlorobenzene > p-nitrochlorobenzene > 2,4-dinitrochlorobenzene.
(b) Toluene, p-CH₃-C₆H₄-NO₂, p-O₂N-C₆H₄-NO₂
Ans: Here, CH₃ is an electron-donating group and NO₂ is an EWG. Thus, toluene has the maximum electron density and is most easily attacked by E⁺. The number of NO₂ substituents then determines the order: Toluene > p-CH₃-C₆H₄-NO₂ > p-O₂N-C₆H₄-NO₂.

23. Out of benzene, m-dinitrobenzene and toluene, which will undergo nitration most easily and why?
Ans: The ease of nitration depends on the electron density present on the compound. Nitration reactions are examples of electrophilic substitution reactions in which an electron-rich species is attacked by the nitronium ion (NO₂⁺). As we know, the CH₃ group is electron-donating and NO₂ is electron-withdrawing. Therefore, toluene has the maximum electron density among the three compounds, followed by benzene. On the other hand, m-dinitrobenzene has the least electron density and undergoes nitration with difficulty. Hence, the increasing order of ease of nitration is: m-Dinitrobenzene < Benzene < Toluene.

24. Suggest the name of a Lewis acid other than anhydrous aluminium chloride which can be used during ethylation of benzene.
Ans: The ethylation of benzene involves the addition of an ethyl group to the benzene ring. This reaction is called the Friedel-Crafts alkylation reaction and takes place in the presence of a Lewis acid. Any Lewis acid such as anhydrous FeCl₃, SnCl₄ or BF₃ can be used during the ethylation of benzene.

25. Why is the Wurtz reaction not preferred for the preparation of alkanes containing an odd number of carbon atoms? Illustrate your answer by taking one example.
Ans: The Wurtz reaction is limited to the synthesis of symmetrical alkanes (alkanes with an even number of carbon atoms). In the reaction, two molecules of the same alkyl halide react with sodium, and an alkane containing double the number of carbon atoms is formed; for example, 2CH₃CH₂Br + 2Na → CH₃CH₂CH₂CH₃ + 2NaBr. This reaction cannot be used for the preparation of unsymmetrical alkanes because if two dissimilar alkyl halides are taken as the reactants, a mixture of alkanes is obtained as the products; for example, CH₃Br and C₂H₅Br give propane together with ethane and butane. The boiling points of these alkanes are very close, so it becomes difficult to separate them.
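The balanced combustion equations in Q8 can be checked mechanically by counting atoms on each side. A minimal sketch (the simple formula parser and helper names are my own, not from the solutions; it handles plain formulas like C4H10, without parentheses):

```python
import re
from collections import Counter

def count_atoms(formula, coeff=1):
    """Count atoms in a simple formula like 'C4H10' (no parentheses)."""
    atoms = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        atoms[elem] += coeff * (int(num) if num else 1)
    return atoms

def is_balanced(reactants, products):
    """Each side is a list of (coefficient, formula) pairs."""
    left, right = Counter(), Counter()
    for c, f in reactants:
        left += count_atoms(f, c)
    for c, f in products:
        right += count_atoms(f, c)
    return left == right

# Butane combustion: 2C4H10 + 13O2 -> 8CO2 + 10H2O
print(is_balanced([(2, "C4H10"), (13, "O2")],
                  [(8, "CO2"), (10, "H2O")]))   # True
```

Running the same check on the pentene, hexyne and toluene equations from Q8 confirms that each is balanced.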
https://physics.stackexchange.com/questions/11321/why-do-two-bodies-of-different-masses-fall-at-the-same-rate-in-the-absence-of-a
Why do two bodies of different masses fall at the same rate in the absence of air resistance?

Viewed 252k times. Score: 30.

I'm far from being a physics expert and figured this would be a good place to ask a beginner question that has been confusing me for some time. According to Galileo, two bodies of different masses, dropped from the same height, will touch the floor at the same time in the absence of air resistance.

BUT Newton's second law states that a = F/m, with a the acceleration of a particle, m its mass and F the sum of forces applied to it.

I understand that acceleration represents a variation of velocity and velocity represents a variation of position. I don't comprehend why the mass, which seemingly affects the acceleration, does not affect the "time of impact". Can someone explain this to me? I feel pretty dumb right now :)

Tags: newtonian-mechanics, newtonian-gravity, mass, acceleration, free-fall

asked Jun 20, 2011 at 13:04 by merwaaan

Comments:
- Minor caveat for VERY heavy masses: physics.stackexchange.com/q/3534/2451 – Qmechanic (Jul 5, 2011)
- You are right to think of neglecting air resistance, but you also have to neglect air buoyancy due to Archimedes' principle. This is also an easily observed effect by setting the right conditions. – babou (Jun 12, 2013)

7 Answers

Answer (score 32):
Newton's gravitational force is proportional to the mass of a body, F = (GM/R²)·m, where in the case you're thinking about M is the mass of the earth, R is the radius of the earth, and G is Newton's gravitational constant. Consequently, the acceleration is a = F/m = GM/R², which is independent of the mass of the object. Hence any two objects that are subject only to the force of gravity will fall with the same acceleration and hence they will hit the ground at the same time. What I think you were missing is that the force F on the two bodies is not the same, but the accelerations are the same.

– Peter Morgan, answered Jun 20, 2011

Answer (score 30):

It is because the force at work here (gravity) is also dependent on the mass. Gravity acts on a body with mass m with F = mg. Plug this into F = ma and you get ma = mg, so a = g, and this is true for all bodies no matter what the mass is. Since they are accelerated the same and start with the same initial conditions (at rest, dropped from a height h), they will hit the floor at the same time. This is a peculiar aspect of gravity, and underlying it is the equality of inertial mass and gravitational mass (here only the ratio must be the same for this to be true, but Einstein later showed that they're really the same, i.e. the ratio is 1).

– luksen, answered Jun 20, 2011

Comments:
- This is not a good answer: it does not explain things from first principles, unlike the answers which start with the F = (GM/R²)·m equation.
– Autodidact (May 25, 2019)
- The first equation is what we are trying to prove: that for every body a = g. You assumed it is true from the start, and then anyone can do the math substituting g in place of a. – cOnnectOrTR 12 (Jul 26, 2021)

Answer (score 28):

There are two ways that mass could affect the time of impact: (1) An object which is very massive has a stronger attraction to the earth. Logically, this might make the object fall faster and so reach the ground sooner. (2) An object which is very massive is difficult to get moving (i.e. it has very high inertia). Thus, one might expect the very massive object to be more difficult to move and so lose the race.

The miracle is that in our world, these two effects exactly balance and so the two masses reach the ground at the same time.

Now let me give a simple explanation for why it's natural that this occurs. Suppose we have two very heavy masses. If we drop them separately they take some time to fall. On the other hand, if we attach them together, will they take the same length of time? Think about a sphere split into two halves: the two halves of the sphere would fall at the same speed as each other. So if you dropped them close to each other, they'd fall together. And dropping them close to each other isn't any different from screwing them together (with a massless screw) and dropping them together (there won't be any new force from the screw). So the combined sphere has to fall at the same rate as the split sphere.
– Carl Brannen, answered Jun 20, 2011

Comments:
- The beautiful explanation in the second half of your answer is more-or-less the same thought experiment which led Galileo to make his bold claim that dropping a heavy and a light ball from the leaning tower of Pisa they would reach the earth at the same time - contrary to what Aristotle had claimed. – Nadav Har'El (Jun 29, 2021)

Answer (score 10):

Because the force 'pushing' the object closer to earth is proportionally bigger for the 'heavier' object. But the heavier object also has a higher gravitational force acting on it. So these two factors perfectly compensate each other: yes, you need more force for a given acceleration, but more force is there due to the heavier mass.

– BarsMonster, answered Jun 20, 2011

Answer (score 2):

Let's say two separate masses M₁ and m₂, where M₁ >> m₂, both fall from the same instant in a gravitational field.
Force on M₁ is F₁ = G·M_earth·M₁/R²
Force on m₂ is F₂ = G·M_earth·m₂/R²
Therefore the forces satisfy F₁ >> F₂. So most people think M₁ should accelerate much faster than m₂. But as you wrote above, a = F/m, and substituting F₁, F₂, M₁ and m₂ into that formula we find:
F₁/M₁ = F₂/m₂ = G·M_earth/R²
Therefore the acceleration is independent of the masses we drop, and is a constant.
EDIT: Bother, by the time I had written this down it was answered; obviously the other authors had a different acceleration in their typing.
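The cancellation the answers describe is easy to verify numerically; a small sketch (the constants are standard approximate values, chosen here only for illustration):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
R = 6.371e6          # radius of the earth, m

def acceleration(m):
    """a = F/m with F = G*M_earth*m/R^2; the object's mass m cancels out."""
    F = G * M_EARTH * m / R**2
    return F / m

# A feather-sized mass and a boulder-sized mass get the same acceleration,
# roughly 9.8 m/s^2.
print(acceleration(0.05), acceleration(500.0))
```

The force F grows linearly with m while the acceleration F/m stays fixed at G·M_earth/R², which is exactly the point of the answers above.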
– metzgeer, answered Jun 20, 2011

Comments:
- What happens when M₁ >> m₂ is false? Does m₂'s pull on M₁ mean the acceleration grows with m₂? – Lorenzo Boccaccia (Mar 7, 2014)
- I wonder if there is an "intuitive" GR way of explaining this. One could say that in the model of a large mass warping space-time, only the distance from the center of mass that creates the disturbance affects the acceleration of another mass toward its center, but that doesn't really say "Why?". Also, if this wasn't the case, we couldn't predict orbital periods without knowing the mass of the orbiting object. Is the best we can do to say that the answer to "Why?" is "Because that is what we observe"? – Jack R. Woods (Apr 2, 2018)

Answer (score 0):

Let us think about this by contradiction. Suppose the two masses fall at different rates (say, the heavier mass falls faster). Then if you tie the two masses together, what will happen?
Solution #1: if you tie the masses together, they form an even larger mass, thus they fall faster.
Solution #2: if you tie the masses together, the lighter mass will exert a drag force on the heavier mass, thus they fall slower.
The two solutions contradict each other, so the masses must fall at the same rate.

– Zecheng Gan, answered Jun 25, 2019

Comments:
- Nothing requires #2 to be true. Drag force only depends on area, shape, velocity and fluid medium. It's possible to put two objects together in a way that doesn't increase the drag force at all (and, due to the increased weight, increases the terminal velocity).
You can also take an object of the same mass and either drop it at a new angle or change its shape, and it will fall at a different speed with the same mass. The contradiction you point out isn't really a contradiction.

Comment (Zecheng Gan, Jul 2, 2019): Well, I agree it's not rigorous... thank you for pointing that out :)

Answer (score 0, user339808, Jul 11, 2022): I am answering this in my own way; criticism is welcome. Consider two bodies of the same shape and surface area but with different masses. If you release both bodies from the same height h, with u = 0 m/s for both (initially at rest, i.e. free fall), they travel the height h in the same time. What is happening here is Newton's law of gravitation,

F = G * M * m / R^2

where G is the gravitational constant, M and m are the masses attracting each other, and R is the distance between their centers of mass (we usually just take point objects). For an object near the earth,

F = [(G * M_earth) / R^2] * m

where M_earth is the earth's mass and R is the distance between the earth's center and the object. When the object's distance from the earth's surface is much less than the earth's radius, you can simplify G * M_earth / R^2 to g = 9.8 m/s^2, so finally F = mg.
For two different masses this gravitational force of attraction is different, but g is the same for both: the acceleration of the two bodies is the same even though the magnitudes of the forces on them differ. People are often confused here; they interpret this as "greater force means greater acceleration", which is not right. A greater force may simply be due to a greater mass, with the same acceleration, so it is possible for two bodies of different masses, experiencing different forces, to have the same acceleration. Rather than asserting a1 = a2 = g, it is clearer to compute a1 = F1/m1 and a2 = F2/m2 and then observe that g = F1/m1 = F2/m2. Since the accelerations are the same, both bodies undergo the same change in velocity over equal time intervals; having started from 0 m/s at the same moment, they attain the same velocities, cover equal displacements on the way down, and hence reach the ground at the same time.

answered Jul 11, 2022 at 14:55 by user339808

Comment (LPZ, Jul 11, 2022): Please use MathJax for any math-related content, for formatting.

Comment (PM 2Ring, Nov 14, 2023): Please don't use weird abbreviations.
753
https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=901156
IOP PUBLISHING | METROLOGIA
Metrologia 46 (2009) L16–L19, doi:10.1088/0026-1394/46/3/L01

LETTER TO THE EDITOR

Molar mass and related quantities in the New SI

Barry N Taylor
National Institute of Standards and Technology¹, 100 Bureau Drive, Gaithersburg, MD 20899, USA
E-mail: barry.taylor@nist.gov

Received 16 December 2008, in final form 23 January 2009
Published 24 February 2009
Online at stacks.iop.org/Met/46/L16

Abstract
This letter addresses the calculation of molar mass and related quantities in the updated version of the SI (most often called the 'New SI' but sometimes the 'Quantum SI') currently under discussion by the International Committee for Weights and Measures and its Consultative Committee for Units and which could be adopted by the next General Conference on Weights and Measures in 2011.

1. Introduction

There is a reasonable likelihood that the next General Conference on Weights and Measures, the 24th, which is to convene in 2011, will adopt new definitions of the kilogram, ampere, kelvin and mole based on fixed values of the Planck constant h, elementary charge e, Boltzmann constant k and Avogadro constant N_A, respectively, just as the current definition of the metre is based on a fixed value of the speed of light in vacuum c [1–3]. (In this letter the current International System of Units, or SI, is simply called 'SI', while the International System of Units with the new definitions is called 'New SI'.) Whereas in the SI the definitions of the kilogram, ampere, kelvin and mole fix the values of the mass of the international prototype of the kilogram m(K), the magnetic constant μ_0, the triple point of water T_TPW and the molar mass of the carbon-12 atom M(^12C) to be exactly 1 kg, 4π × 10^−7 N A^−2, 273.16 K and 12 g mol^−1, respectively, in the New SI these quantities no longer have exact values but must be determined experimentally.
One of the consequences of M(^12C) no longer being exactly 12 g mol^−1 is that the molar mass of an entity X, M(X), can no longer be calculated from the expression M(X) = A_r(X) g mol^−1, but must be calculated from a modified form of this expression. (As usual, and as defined in (14), the quantity A_r(X), formerly called the 'atomic weight' of X, is the relative atomic mass of X.) The purpose of this letter is to present a straightforward way of calculating M(X) and related quantities that avoids the use of the molar mass factor (1 + κ) introduced in, while at the same time retaining the current definitions of the relevant quantities and constants, thereby simplifying molar mass calculations in particular and the New SI in general².

¹ NIST is part of the US Department of Commerce.
² This letter is based on document CCU/07-22 prepared by the author following the 18th meeting of the Consultative Committee on Units (CCU) held in June 2007. Since CCU/07-22 is available only on the CCU restricted documents Web site, and because of its potential interest to a broader audience, at the request of the President of the CCU, Professor I M Mills, it is being published in Metrologia in the form of a Letter to the Editor.

2. Summary of results

2.1. Definitions

For easy reference, the relevant constants, quantities and relations among them are summarized in this section. Note that all the equations that appear in (1)–(23) apply to both the SI and the New SI, and that derivations of the most important of these equations are given in the appendix.

c: speed of light in vacuum (exactly known in the SI and the New SI) (1)
h: Planck constant (exactly known in the New SI) (2)
e: elementary charge (exactly known in the New SI) (3)
m_e: electron mass (4)
α: fine-structure constant, α = μ_0 c e^2 / 2h (5)
R_∞: Rydberg constant, R_∞ = α^2 m_e c / 2h (6)
0026-1394/09/030016+04$30.00 © 2009 BIPM and IOP Publishing Ltd, Printed in the UK

N_A: Avogadro constant (number of specified entities X per mole; exactly known in the New SI) (7)
m(X): mass of entity X (8)
m(^12C): mass of the carbon-12 atom (9)
M(X): molar mass of entity X (mass per amount of substance of X), M(X) = m(X) N_A (10)
M(^12C): molar mass of the carbon-12 atom, M(^12C) = m(^12C) N_A (in the SI the definition of the mole fixes the value of M(^12C) to be 12 g mol^−1 exactly, but not in the New SI) (11)
m_u: atomic mass constant, m_u = m(^12C)/12 (12)
u: unified atomic mass unit (also called the dalton, symbol Da), 1 u = 1 Da = m_u (13)
A_r(X): relative atomic mass of entity X, A_r(X) = m(X)/m_u (14)
A_r(^12C): relative atomic mass of the carbon-12 atom, A_r(^12C) = m(^12C)/m_u = 12 exactly in the SI and the New SI (15)
A_r(e): relative atomic mass of the electron, A_r(e) = m_e/m_u (16)
M_u: molar mass constant, M_u = M(^12C)/12 (since M(^12C) = 12 g mol^−1 exactly in the SI, M_u = 1 g mol^−1 exactly in the SI, but not in the New SI) (17)
n_S(X): amount of substance of X for a sample S of entities X, n_S(X) = N_S(X)/N_A = m_S(X)/M(X), where N_S(X) is the number of entities X in the sample and m_S(X) is the mass of the sample (again, these relations hold in both the SI and the New SI) (18)

2.2. Expressions for calculating molar mass and related quantities

The relevant expression for calculating the molar mass M(X) of an entity X is

M(X) = A_r(X) M(^12C)/12 = A_r(X) M_u  (19)

with

M(^12C)/12 = M_u = 2 R_∞ N_A h / (α^2 c A_r(e)).  (20)

The expressions for the related quantities m(X), m(^12C) and 1 u = 1 Da = m_u are

m(X) = [A_r(X)/N_A] [M(^12C)/12] = A_r(X) M_u / N_A,  (21)
m(^12C) = [12/N_A] [M(^12C)/12] = 12 M_u / N_A,  (22)
1 u = 1 Da = m_u = [1/N_A] [M(^12C)/12] = M_u / N_A.  (23)

Although (19)–(23) hold for both the SI and the New SI, the SI definition of the mole is such that M(^12C)/12 = M_u = 1 g mol^−1 exactly, as already indicated.
Consequently, in the SI the combination of constants on the right-hand side of (20) has this value.

2.3. Evaluation of expressions and application

If the New SI were to be implemented today based on the results of the most recent (2006) Committee on Data for Science and Technology (CODATA) least-squares adjustment, one would have for the constants of interest

c = 299 792 458 m s^−1 (exact),
h = 6.626 068 961 × 10^−34 J s (exact),
N_A = 6.022 141 794 × 10^23 mol^−1 (exact),
R_∞ = 10 973 731.568 527(73) m^−1 [6.6 × 10^−12],
A_r(e) = 5.485 799 0943(23) × 10^−4 [4.2 × 10^−10],
α = 1/137.035 999 679(94) [6.8 × 10^−10],  (24)

where one additional digit has been included in the values of h and N_A beyond those given in the 2006 CODATA compilation to reduce rounding errors in the calculations below. To ensure that the consistency of the New SI with the SI is at an acceptable level, the fixed values of h and N_A chosen to redefine the kilogram and the mole must be such that the difference between the magnitudes (sizes) of the New SI kilogram and the SI kilogram and the difference between the magnitudes of the New SI mole and the SI mole have no practical consequences and may therefore be considered negligible. This means that in establishing the New SI, one is not free to choose arbitrary values for any of the constants in (24), in particular for h and N_A, but only values that result from a least-squares adjustment in which all quantities are expressed in their respective SI units, since such adjustments provide a set of self-consistent SI values of the constants that satisfy (20).
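The self-consistency claim here, that the 2006 CODATA values in (24) reproduce M_u = 1 g mol^−1 through (20), is easy to check numerically. A minimal sketch (uncertainties and covariances omitted):

```python
# Evaluate eq. (20): M_u = 2*R_inf*N_A*h / (alpha^2 * c * A_r(e)),
# using the 2006 CODATA values listed in (24). Uncertainty bookkeeping
# is omitted; only the central values are used.
c = 299_792_458.0            # speed of light, m/s (exact)
h = 6.626_068_961e-34        # Planck constant, J s
N_A = 6.022_141_794e23       # Avogadro constant, 1/mol
R_inf = 10_973_731.568_527   # Rydberg constant, 1/m
A_r_e = 5.485_799_0943e-4    # relative atomic mass of the electron
alpha = 1 / 137.035_999_679  # fine-structure constant

M_u = 2 * R_inf * N_A * h / (alpha**2 * c * A_r_e)   # kg/mol
print(M_u * 1e3, "g/mol")    # ~1.000 000 000 g/mol, as in (25a)
```

The result agrees with 1 g mol^−1 at the 10^−9 level, as the letter states.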
The values of the constants in (24), together with (20), (22) and (23), lead to

M(^12C)/12 = M_u = 1.000 000 0000(14) g mol^−1 [1.4 × 10^−9]  (25a)
           = [1 + 0.0(1.4) × 10^−9] g mol^−1,
m(^12C) = 1.992 646 5384(28) × 10^−26 kg [1.4 × 10^−9],  (25b)
1 u = 1 Da = m_u = 1.660 538 7820(24) × 10^−27 kg [1.4 × 10^−9],  (25c)

where the covariances among R_∞, A_r(e) and α are sufficiently small that they have a negligible effect on the uncertainty of M(^12C)/12 = M_u. Because the fixed values of h and N_A are the self-consistent recommended values resulting from the 2006 CODATA least-squares adjustment in which all quantities are expressed in SI units, the magnitudes of the New SI kilogram and mole are highly consistent with the magnitudes of the SI kilogram and mole. It is therefore no surprise to see from (25a) that in the New SI, M(^12C)/12 = M_u is equal to 1 g mol^−1 within its fractional uncertainty of 1.4 × 10^−9.

As an example of the calculation of the molar mass of a real substance, we consider silicon. Naturally occurring Si has three isotopes: ^28Si, ^29Si and ^30Si. In the most recent International Union of Pure and Applied Chemistry (IUPAC) compilation of the atomic weights of the elements, dated 2005, its relative atomic mass is given as A_r(Si) = 28.0855(3). Thus the molar mass of naturally occurring silicon would be, from (19) and the above value of M(^12C)/12 = M_u,

M(Si) = 28.0855(3) × 1.000 000 0000(14) g mol^−1 = 28.0855(3) g mol^−1.

Clearly, the numerical value and uncertainty of M(^12C)/12 = M_u have no practical effect on the value of M(Si) obtained from A_r(Si). (Because atomic weight, or more correctly relative atomic mass, is defined according to (14), namely A_r(X) = m(X)/m_u, it is a dimensionless quantity.
Thus the periodic IUPAC compilations of the atomic weights of the elements do not depend directly on a particular set of units such as the SI or the New SI.) Further, we may now answer a question such as 'What is the amount of substance of Si for a 100 g sample S of naturally occurring Si?' From (18) we have

n_S(Si) = N_S(Si)/N_A = m_S(Si)/M(Si) = 100 g / [28.0855(3) g mol^−1] = 3.56 mol.

It is expected that the recommended values resulting from the 2010 CODATA least-squares adjustment will serve as the basis for the exact values of h, e, k and N_A chosen for the new definitions if, as anticipated, they are adopted by the 24th CGPM in 2011. However, it should be recognized that CODATA adjustments of the values of the constants subsequent to that of 2010, for example that of 2014, will undoubtedly lead to small changes in M_u, m(^12C) and m_u, because the recommended values of R_∞, A_r(e) and α on which they depend (see (20)–(23)) would likely change slightly from one adjustment to the next due to new data. Nevertheless, it is highly probable that any changes in M_u, m(^12C) and m_u would be less than 2 × 10^−9 in relative value, which would be so small that they would have no practical consequences of any sort. Of course, because they would be fixed by the new definitions of the kilogram and mole, the recommended values of h and N_A would not change and hence would not themselves lead to any change in M_u, m(^12C) or m_u. This is analogous to the speed of light in vacuum: because the value of c is fixed by the definition of the metre, it does not change from one adjustment to the next.

2.4. The molar mass constant M_u

In the above discussion we have used the molar mass constant M_u, which we have defined to be equal to M(^12C)/12 exactly, in both the SI and the New SI. The convenience of adopting this constant, with this name and symbol, is that it is for molar mass the analogue of the atomic mass constant m_u, which is defined to have the value m(^12C)/12.
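The silicon calculation of section 2.3 reduces to two arithmetic steps. A minimal sketch, taking M_u as exactly 1 g mol^−1, which the letter shows is safe at the 10^−9 level (the (3) and (14) uncertainty bookkeeping is omitted):

```python
# Molar mass of naturally occurring Si via eq. (19), and the amount of
# substance of a 100 g sample via eq. (18). Uncertainties are omitted.
A_r_Si = 28.0855        # relative atomic mass of Si (IUPAC 2005), dimensionless
M_u = 1.0               # molar mass constant, g/mol (exact here by assumption)

M_Si = A_r_Si * M_u     # eq. (19): M(X) = A_r(X) * M_u  ->  28.0855 g/mol
m_sample = 100.0        # sample mass, g
n_Si = m_sample / M_Si  # eq. (18): n_S(X) = m_S(X) / M(X)  ->  ~3.56 mol
print(f"M(Si) = {M_Si} g/mol, n_S(Si) = {n_Si:.2f} mol")
```

The same two-line pattern applies to any entity X once A_r(X) is looked up.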
These two constants are related by the equation M_u = m_u N_A. It then enables us to write the molar mass of an atom (or molecule) X as in (19), M(X) = A_r(X) M_u, just as we write the mass of an entity X in u (sometimes incorrectly called atomic mass) in the form m(X) = A_r(X) m_u. To reiterate, in the SI, M_u = 1 g mol^−1 exactly, but in the New SI the value of M_u will no longer be exactly known. Although it will have this same value at the time of adoption of the New SI (see (25a)), the value will have an associated uncertainty, and, as already observed, the value may change slightly from the value 1 g mol^−1 due to future changes in the adjusted values of R_∞, A_r(e) and α due to new data. However, the fractional change of M_u from the value 1 g mol^−1 is unlikely ever to be greater than a few parts in 10^9, and this is so much smaller than the uncertainty with which chemical measurements are likely to be made that for all practical purposes chemists may still treat M_u as being exactly equal to 1 g mol^−1.

The constant M_u with the name 'molar mass constant' has not been much used in the established literature. It can, of course, always be replaced by the expression M(^12C)/12, which is how it is defined: as one-twelfth of the molar mass of carbon-12. We recommend that this constant could be used with advantage more widely than it is at present, in teaching chemistry for example, to simplify the expression for calculating the molar mass of atoms and molecules.

Appendix. Derivation of expressions

From the quotient of (10) and (11) one has

M(X) = [m(X)/m(^12C)] M(^12C),

which, with the aid of (12) and (14), becomes (19):

M(X) = A_r(X) M(^12C)/12.

From (6) one has m_e = 2 R_∞ h / (α^2 c), which may be written as

m(X) = 2 R_∞ h m(X) / (α^2 c m_e),

or, with the aid of (14) and (16), as

m(X) = 2 R_∞ h A_r(X) / (α^2 c A_r(e)).

Based on (10), this last expression leads to

M(X) = 2 R_∞ N_A h A_r(X) / (α^2 c A_r(e)).
If the entity X is the carbon-12 atom, then, with the aid of (15), this becomes (20):

M(^12C)/12 = 2 R_∞ N_A h / (α^2 c A_r(e)).

Further, we see that (21) follows from (10) and (19), (22) is the same as (11), and (23) follows from (11) and (12). For completeness, we point out that the molar mass factor (1 + κ) first introduced in, and the molar mass of carbon-12, are related by

(1 + κ) = M(^12C)/(12 g mol^−1).

Thus, with the aid of this expression, (19)–(23) could be rewritten in terms of (1 + κ). We also see from this expression that in the New SI, the difference between M(^12C) and 12 g mol^−1 carries the same information that is carried by the factor (1 + κ).

Acknowledgment

The author gratefully acknowledges helpful discussions with his colleagues I M Mills, P J Mohr, T J Quinn and E R Williams.

References

[1] BIPM 2007 Comptes Rendus des Séances de la 23rd Conférence Générale des Poids et Mesures (the relevant CGPM resolution, No 12, is available at en/pdf/Resol23CGPM-EN.pdf) (to be published)
[2] Mills I M, Mohr P J, Quinn T J, Taylor B N and Williams E R 2006 Metrologia 43 227–46
[3] BIPM 2006 The International System of Units 8th edn (Sèvres, France: Bureau International des Poids et Mesures)
[4] Mohr P J, Taylor B N and Newell D B 2008 Rev. Mod. Phys. 80 633–730
[5] Mohr P J, Taylor B N and Newell D B 2008 J. Phys. Chem. Ref. Data 37 1187–284
[6] Wieser M E 2006 Pure Appl. Chem. 78 2051–66
754
https://www.merriam-webster.com/dictionary/underhanded
underhanded (adverb; adjective)

Synonyms (Adjective): secret, covert, stealthy, furtive, clandestine, surreptitious, underhanded mean done without attracting observation. secret implies concealment on any grounds for any motive (met at a secret location). covert stresses the fact of not being open or declared (covert intelligence operations). stealthy suggests taking pains to avoid being seen or heard, especially in some misdoing (the stealthy step of a burglar). furtive implies a sly or cautious stealthiness (lovers exchanging furtive glances). clandestine implies secrecy, usually for an evil, illicit, or unauthorized purpose, and often emphasizes the fear of being discovered (a clandestine meeting of conspirators). surreptitious applies to action or behavior done secretly, often with skillful avoidance of detection and in violation of custom, law, or authority (the surreptitious stockpiling of weapons). underhanded stresses fraud or deception (an underhanded trick).

Word History: Adverb, circa 1822, in the meaning defined above. Adjective, 1853, in the meaning defined above.

Cite this Entry: "Underhanded." Merriam-Webster.com Dictionary, Merriam-Webster. Accessed 29 Sep. 2025.
755
https://electronics.stackexchange.com/questions/721023/is-the-coulomb-unit-a-constant-or-does-it-depend-on-the-wire
current - Is the coulomb unit a constant or does it depend on the wire? - Electrical Engineering Stack Exchange
Is the coulomb unit a constant or does it depend on the wire?

Asked Jul 31, 2024; viewed 1k times; score 7.

I'm completely new to electronics, and I'm a bit lost on the definition of a coulomb. I tried to find a decent definition, but they all seem to equate to something like:

    The coulomb (symbol: C) is the unit of electric charge in the International System of Units (SI). It is equal to the electric charge delivered by a 1 ampere current in 1 second and is defined in terms of the elementary charge e, at about 6.241509×10^18 e.

So the definition here is effectively defining a coulomb as a constant by saying C = 6.241509×10^18 × e, and since e is a constant, a coulomb is a constant derived from it. The reason I'm confused is that the definition also says:

    It is equal to the electric charge delivered by a 1 ampere current in 1 second

But doesn't the charge delivered by a current in a second depend on the resistance of the circuit that the current is running through? If I'm running 1 amp through two different gauges of wire, doesn't the resistance of the wire alter how much charge can flow in a second?

Tags: current, resistance, cables

asked Jul 31, 2024 at 0:21 by OngGab

Comment (greybeard, Jul 31, 2024): (Please identify the source of 3rd party material you present.)

Comment: Your definition is for one coulomb, which is an amount of charge different from any other amounts of coulombs. There's nothing magic about this.
It's the same as saying one pound, or one meter. (Scott Seidman, Jul 31, 2024)

Comment (Kuba hasn't forgotten Monica, Aug 23, 2024): As you may imagine, it would be a bit of a problem if a unit of measure varied with how you performed the measurement. It would be useless.

5 Answers

Answer (score 17, Spehro 'speff' Pefhany, Jul 31, 2024): If the current is 1 A then 1 C/second flows. The higher the resistance of the wire, the more power is lost in the wire (a thin wire may get very hot), but the rate of flow of charge is exactly the same. It will take more voltage to cause 1 A to flow through the wire if the wire has high resistance.

Comment (OngGab, Jul 31, 2024): Oh, this makes sense. So one coulomb is effectively just the 6.241509×10^18 e constant, and I was overcomplicating it by attempting to factor in the resistance of the wire (or the configuration of the circuit as a whole). So the "the higher the resistance of the wire the more power" part isn't directly related to the coulomb calculation, but it would relate to the efficiency/power factor of the circuit?

Comment (StainlessSteelRat, Jul 31, 2024): 1 C is the charge of 6.241509×10^18 electrons. No e.

Comment (Spehro 'speff' Pefhany, Jul 31, 2024): Yes, power loss. "Power factor" is a term with specific meaning in AC circuits, so not applicable here, by convention.
Note too that e is the charge of one electron, so you can think of an imaginary boundary in the wire: the ~6×10^18 is the net count of electrons that meander through that boundary every second when 1 A flows.

Answer (score 3, MikeB, Aug 1, 2024): To use the good old-fashioned plumbing analogy for electronics, a coulomb of charge is exactly equivalent to a litre of water: you can deliver it slowly through a fat pipe, or quickly through a thin pipe, but it will always be one litre. An ampere is the same, a fixed number of electrons per second, regardless of whether the wire is thin or thick, which is why it is almost the same definition as a coulomb of charge.

Answer (score 3, StainlessSteelRat, Jul 31, 2024): 1 C is the charge of 6.241509×10^18 electrons. This means the charge of one electron is 1.602177×10^−19 C (coulombs).

Answer (score 1, bindiff, Jul 31, 2024):

History

In the olden days¹, units of measure were defined by a physical object, the unit prototype. For the metric/SI systems², there were prototypes for the metre, kilogram, etc. For example, the kilogram prototype was a physical object made of a platinum-iridium alloy.
It was used as the standard for defining the kilogram since its creation in 1889. The most significant change came during the 26th General Conference on Weights and Measures (CGPM) in November 2018, where it was decided to redefine the kilogram in terms of fundamental constants of nature rather than a physical object. It was effectively the last prototype of a unit of measure in use. Since then, we use physical constants to define our units of measurement.

(Image: the kilogram prototype)

The physical constants era

Now we use physical constants to derive the sizes of the units. Physical prototypes and standard reference materials are still used to calibrate instruments that do not need extreme precision. The balance used in your favourite grocery store is not calibrated using advanced physical experiments, but using a (typically certified) copy of a prototype of the unit. For example:

The metre is defined by the distance that light travels in a vacuum in 1/299,792,458 seconds.
The kilogram is defined by a specific measurement on the Kibble balance.

The coulomb is not any different! We can say the coulomb is the electric charge delivered by a current of 1 ampere in one second. But to be precise, the official definition is³: 1 coulomb is 1/(1.602176634×10^−19) elementary charges, given that the elementary charge is the charge of a single proton or the negative of the charge of a single electron.

Footnotes

¹ Even before that, units were defined quite arbitrarily, by using dimensions of human body parts etc. Many cultures also used parts of statues as the prototypes. One notable example is the use of the "cubit," an ancient unit of length based on the forearm's length from the elbow to the tip of the middle finger. In some cases, the cubit was represented or standardized using a statue or a carved figure, where the dimensions of the statue's arm would serve as a reference for this measurement.
Statues, particularly those of significant cultural or religious importance, were often constructed with precise proportions, making them suitable for establishing standard units.

² Nobody uses imperial units, right? It is possible that I am not right.

³ This answer does not give the literal definitions; they have been reworded slightly.

answered Jul 31, 2024 by bindiff

Answer (score 1):

Your confusion may originate from the somewhat reciprocal definitions of charge and current. Electric current is a rate of change in time. Any rate of change is defined as

R = dN/dt

where N is a (local) quantity and t is time. (The d denotes a differential; you can think of it as an infinitesimally small change.) In the case of currents (water flow, electric current, ...), the change usually happens through a defined cross-section of a river, wire, etc., from one side of the section to the other. Note that the unit of electric current (the ampere) is a base SI unit, while the coulomb is not. Working our case for I = 1 A and t going from t₀ = 0 to t₁ = 1 s:

I = dQ/dt  ⇒  dQ = I·dt

Q = ∫_{t₀}^{t₁} I dt = I·(t₁ − t₀)

So one coulomb is the amount of charge flowing through a cross-section at 1 A for 1 s. Equally, one coulomb is the amount of charge flowing through the same section at 2 A for 1/2 s. Of course 1 C is a constant; it is a unit, or rather a constant (one) multiplied by a unit of measure (the coulomb). The catch behind your confusion is the distinction between quantities you can directly control or choose (voltage, power, resistance) and those you cannot (charge, current). For a constant voltage (a battery) but different conductors (different resistances), you get different currents flowing through them.
If you want to deliver 1 C of charge using that current, you need to adjust the time accordingly. There are also pseudo-units, for example the kilogram-force (kgf), defined as the force that keeps a 1 kg weight in equilibrium. The catch is that 1 kgf expressed in newtons at the pole is not equal to the same value at the equator, because of the Earth's shape and rotation.

answered Jul 31, 2024 (edited Aug 1, 2024) by Crowley

Tip: SI units named after a person have their symbols capitalised but are lowercase when spelt out: 'volt', 'ampere', 'coulomb', 'newton', etc. – Transistor, commented Jul 31, 2024
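The two quantitative claims running through these answers, Q = I·t and one coulomb being a fixed count of elementary charges, can be checked numerically. A minimal sketch; the helper names `charge` and `electron_count` are illustrative, not from any library, and the constant is the exact SI (2019) value of the elementary charge:

```python
# Exact SI (2019) value of the elementary charge, in coulombs.
E_CHARGE = 1.602176634e-19

def charge(current_amperes: float, seconds: float) -> float:
    """Charge Q = I * t delivered through a cross-section, in coulombs."""
    return current_amperes * seconds

def electron_count(charge_coulombs: float) -> float:
    """Number of elementary charges that make up a given charge."""
    return charge_coulombs / E_CHARGE

# 1 A for 1 s and 2 A for 0.5 s both deliver exactly 1 C:
assert charge(1.0, 1.0) == charge(2.0, 0.5) == 1.0

# ~6.241509e18 electrons cross the boundary per second when 1 A flows:
print(f"{electron_count(charge(1.0, 1.0)):.6e}")  # prints 6.241509e+18
```

This reproduces the figure quoted above: one coulomb corresponds to about 6.241509×10^18 electrons, regardless of whether it was delivered slowly or quickly.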
https://brainly.com/question/43476720
How do you find the common ratio of a geometric sequence if given the first term and the fourth term?

Asked by nmusevitoglu190, 11/27/2023 (Mathematics)

Community Answer:

To find the common ratio of a geometric sequence, divide the fourth term by the first term and take the cube root of the result.

Explanation

To find the common ratio of a geometric sequence when given the first term (a₁) and the fourth term (a₄), use the formula for the n-th term of a geometric sequence, aₙ = a₁ × r^(n−1). With the first and fourth terms, this becomes a₄ = a₁ × r³. To solve for the common ratio r, divide the fourth term by the first term and take the cube root of the result. The steps are:

Divide the fourth term by the first term: ratio = a₄ / a₁.

Take the cube root of that ratio to find the common ratio: r = ∛ratio.

You do not need a calculator or computer to find the common ratio, and you do not need to write a linear equation; the process involves straightforward arithmetic operations.
Answered by meeraverma

Expert-Verified Answer:

To find the common ratio of a geometric sequence from the first and fourth terms, use the formula r = ∛(a₄/a₁): divide the fourth term by the first term, then take the cube root of the result. For example, if the first term is 2 and the fourth term is 16, the common ratio is 2.

Explanation

Understand the formula for a geometric sequence: the n-th term is aₙ = a₁ × r^(n−1), where aₙ is the n-th term, a₁ is the first term, and r is the common ratio.

Identify your terms: given the first term a₁ and the fourth term a₄, the formula gives a₄ = a₁ × r³.

Express the common ratio: rearranging, r³ = a₄ / a₁.

Solve for the common ratio: taking the cube root of both sides, r = ∛(a₄/a₁).

For example, if a₁ = 2 and a₄ = 16: calculate the ratio, r³ = 16/2 = 8, then take the cube root, r = ∛8 = 2. Thus the common ratio is r = 2.

Examples & Evidence

If the first term is 3 and the fourth term is 81, then r³ = 81/3 = 27, yielding a common ratio of r = ∛27 = 3. The formula for the n-th term of a geometric sequence is a well-established result found in most algebra textbooks; working through specific numeric examples demonstrates its application effectively.
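The procedure in both answers can be sketched in a few lines of Python; the function names `common_ratio` and `nth_term` are illustrative:

```python
def common_ratio(a1: float, a4: float) -> float:
    """Common ratio r of a geometric sequence, given a1 and a4 = a1 * r**3."""
    return (a4 / a1) ** (1 / 3)  # cube root of a4/a1

def nth_term(a1: float, r: float, n: int) -> float:
    """n-th term of the sequence: a_n = a1 * r**(n - 1)."""
    return a1 * r ** (n - 1)

# Worked examples from the answers above:
print(common_ratio(2, 16))  # cube root of 16/2 = 8, so r ≈ 2
print(common_ratio(3, 81))  # cube root of 81/3 = 27, so r ≈ 3
```

One caveat: for real numbers, r³ = a₄/a₁ has exactly one real solution, so the ratio is uniquely determined here (unlike the a₂-and-a₄ case, where r² leaves a sign ambiguity). If a₄/a₁ were negative, Python's `**` with a fractional exponent would return a complex number, so a signed cube root such as `math.copysign(abs(x) ** (1/3), x)` would be needed instead.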
https://assets.cambridge.org/97813165/18625/frontmatter/9781316518625_frontmatter.pdf
Cambridge University Press & Assessment
978-1-316-51862-5 — Pedrottis’ Introduction to Optics, 4th Edition
Rayf Shiell, Iain McNab, with contributions by Matthew Romerein
Frontmatter — More Information: www.cambridge.org
© in this web service Cambridge University Press & Assessment

Pedrottis’ Introduction to Optics
Fourth edition

The fourth edition of Pedrottis’ Introduction to Optics is a comprehensive revision of a classic guide to the fascinating properties of light, now with new authors. Ideally suited for undergraduate optics courses in physics and electrical/electronic engineering departments, this edition adopts a distinctive phenomenological approach, bringing the underlying science to life through interactive simulations and beautifully revised figures. The modular structure and succinct style of previous editions have been maintained, while the content has been modernized, new topics have been added, and a greater consistency of terminology attained. For even more effective learning, a recurring theme of student engagement runs throughout the text, supported by a multifaceted pedagogical package that reinforces key concepts, develops a clear understanding of optical technologies and applications, and connects to students’ experiences and observations from everyday life.

Rayf Shiell passionately balances both research and teaching at Trent University, Canada. His primary research is in optics and ranges from probing matter using high-power laser beams to exploring the optics of the eye. He teaches a wide range of physics courses and delights in uncovering the often-surprising connections between them. His love of good pedagogy has permeated not only this present text, from which he has taught for many years, but all his work as a mentor and instructor, extending also to coaching the university rowing team.
He is co-developer of the integrated testlet, an assessment tool that employs a scaffolded answer-until-correct question structure, which features in each chapter of this text. For this work he was a co-recipient of Trent University’s 2021 Award for Educational Leadership and Innovation.

Iain McNab is a former Dean of the Faculty of Applied Science and Technology at Sheridan College, Canada; he counts among his mentors Nobel Laureates Harold Kroto and John Polanyi. At Newcastle University, UK, he was head of the physics teaching laboratories and led research in laser spectroscopy. He has taught widely in physics and engineering and was an early adopter of innovative methods such as the flipped classroom and web-based learning systems. Iain is a former Chair of the Spectroscopy Group of the Institute of Physics and currently runs the McNab Group, teaches for Canadian Laser Safety, and is a visiting Professor at Guangdong Technion Israel Institute of Technology, China.

Matthew Romerein earned his M.Sc. in Materials Science from Trent University, researching polarimetry and spectroscopy. Working closely with the authors, he endeavoured to improve the clarity, scientific accuracy, and educational impact of the figures in this book. The accompanying interactive animations were also designed by Matthew, who develops quality assurance testing software for the RF industry.

“I have been teaching optics for thirty years using various editions of Pedrotti’s Introduction to Optics, and I have never been disappointed. This latest fourth edition with new authors is a thorough revision of the original text with modernized content, but retaining the same excellence in pedagogy.
The book is a hallmark of clarity and especially accessible to students from broad backgrounds who may be encountering topics in optics and photonics for the first time. For example, when discussing the standard topics of geometrical optics, I was delighted to see the expanded discussion on sign conventions, and the explicit use of the Cartesian convention for more complex optical systems. And the inclusion of advanced topics such as frequency combs – whilst necessarily brief in a general textbook – highlights how the authors want the book to be a stepping stone for students into more specialised research topics. The extensive problem sets – often with quantitative exercises based on real-world numbers – will make this very popular with instructors, and there is enough variety in the problems to be adapted to different levels and lengths of courses. I expect this edition to be as much a success as its predecessors.” John M. Dudley, University Bourgogne Franche-Comté

“Exceptionally clear and readable! The diagrams are equally clear and well annotated, which contributes to pedagogical effectiveness.
What I most appreciate is the breadth of coverage far greater than in any introductory optics textbook I have used.” Andrew Rex, University of Puget Sound

Pedrottis’ Introduction to Optics
Fourth edition
RAYF SHIELL, Trent University, Peterborough, Ontario
IAIN MCNAB, McNab Group, Toronto, Ontario
With figure design and enhancements by MATTHEW ROMEREIN, Wide Band Workshop

Shaftesbury Road, Cambridge CB2 8EA, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
103 Penang Road, #05–06/07, Visioncrest Commercial, Singapore 238467

Cambridge University Press is part of Cambridge University Press & Assessment, a department of the University of Cambridge. We share the University’s mission to contribute to society through the pursuit of education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/highereducation/isbn/9781316518625
DOI: 10.1017/9781009000963

Third edition © Cambridge University Press 2018
Fourth edition © Cambridge University Press & Assessment 2025

This publication is in copyright.
Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press & Assessment.

This book was previously published by Pearson Education, Inc. Third edition reissued by Cambridge University Press 2018. Reprinted 2019. Fourth edition 2025. Printed in the United Kingdom by CPI Group Ltd, Croydon CR0 4YY, 2025.

A catalogue record for this publication is available from the British Library. A Cataloging-in-Publication data record for this book is available from the Library of Congress.

ISBN 978-1-316-51862-5 Hardback

Additional resources for this publication at www.cambridge.org/pedrotti4ed

Cambridge University Press & Assessment has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

To all our teachers, from all walks of life.
Brief Contents

Preface page xxiii
Theme I Introducing Light 1
1 The Optical Landscape 3
2 Ray Optics 22
3 Understanding Optical Instruments 103
4 Waves 164
5 Light Sources, Displays, and Detectors 192
Theme II A Scalar Field Approach to Light 237
6 Superposition of Waves 239
7 Interference of Light 262
8 Interferometry and Multilayer Films 296
9 Coherence 345
10 Fraunhofer Diffraction 372
11 Prisms, Diffraction Gratings, and Spectrometers 405
Theme III A Vector Field Approach to Light 445
12 Reflection and Transmission at Surfaces 447
13 Optical Fibers and Communications Technology 473
14 Mathematical Treatment of Polarization 512
15 Polarization in Practice 544
Theme IV Lasers and Laser Beams 573
16 Light–Matter Interactions 575
17 Lasers and Laser Operation 603
18 Laser Beams and Laser Cavities 646
Theme V Advanced Topics 683
19 Aberrations 685
20 Fourier Optics: Imaging and Spectroscopy 719
21 Fresnel Diffraction 747
22 Holography 781
23 Optical Properties of Materials 800
24 Nonlinear Optics and the Modulation of Light 822
25 Laser Technology and Applications 857
26 Optics of the Eye 886
Appendices
A. Physical Constants 911
B. Mathematical Formulas 913
C. Bibliography and References 916
D. Answers to Selected Questions 925
E.
Integrated Testlet Lookup Table 942
Index 944

Complete Contents

Preface page xxiii
Theme I Introducing Light 1
1 The Optical Landscape 3
Introduction 3
1.1 Wave–Particle Duality: An Overview 5
1.2 Wave–Particle Duality: Some History 5
1.3 Particle Picture of Light: Photons 7
1.4 Wave Picture of Light: Electromagnetic Waves 9
1.5 The Energy Content of Light: Radiometry 14
Questions 20
2 Ray Optics 22
Introduction 23
2.1 Light Sources and Image Formation 23
2.2 Reflection, Refraction, and Refractive Indices 25
2.2.1 Law of Reflection 26
2.2.2 Law of Refraction, and Refractive Indices 27
2.2.3 Reversibility of Light 29
2.2.4 Reflecting Surfaces 29
2.2.5 Apparent Depth and Total Internal Reflection 30
2.2.6 Paraxial Optics 31
2.3 Two Theoretical Approaches: Huygens’ Principle and Fermat’s Principle 32
2.3.1 Huygens’ Principle 32
2.3.2 Fermat’s Principle 35
2.4 Image Formation from Reflection at Plane and Spherical Surfaces 37
2.4.1 Reflection from Plane Mirrors and Corner Cube Reflectors 38
2.4.2 Reflection from Spherical Surfaces: General Observations 40
2.4.3 Ray Tracing for Paraxial Rays Reflecting from a Spherical Surface 40
2.4.4 Equation for Image Formation from a Reflecting Spherical Surface 43
2.5 Image Formation from Refraction at Plane and Spherical Surfaces 47
2.5.1 An Introduction to Lenses 47
2.5.2 Equation for Image Formation from a Refracting Spherical Surface 49
2.5.3 Refraction from Two Spherical Surfaces, and the Thin-Lens Equation 51
2.5.4 Newtonian Form of the Thin-Lens Equation 58
2.6 Vergence and Refractive Power 59
2.7 Seeking the Perfect Image 60
2.7.1 Image Quality and Aberrations 61
2.7.2 Cartesian Surfaces 62
2.8 The Matrix Method in Paraxial Optics I: An Introduction 65
2.8.1 The Cardinal Points of the Thick Lens 65
2.9 The Matrix Method in Paraxial Optics II: ABCD Matrices 71
2.9.1 The Translation Matrix 72
2.9.2 The Refraction Matrix 72
2.9.3 The Reflection Matrix 74
2.9.4 The Thick-Lens and Thin-Lens Matrices 75
2.10 The Matrix Method in Paraxial Optics III: Putting It All Together 78
2.10.1 Significance of the System Matrix Elements 80
2.10.2 Finding the Cardinal Points of an Optical System from the System Matrix 82
2.10.3 Examples of the Use of the System Matrix and the Cardinal Points 86
2.11 Computational Optics and Exact Ray Tracing 88
2.11.1 Meridional Ray Tracing 88
2.11.2 Non-Meridional Ray Tracing 93
2.11.3 Photo-Realistic Rendering 94
Questions 95
3 Understanding Optical Instruments 103
Introduction 103
3.1 Image Brightness: Aperture Stops, Entrance Pupils, and Exit Pupils 104
3.1.1 The Aperture Stop 104
3.1.2 Entrance and Exit Pupils 106
3.2 Field of View: Field Stops, Entrance Windows, and Exit Windows 113
3.2.1 Field Stops 114
3.2.2 Entrance and Exit Windows 115
3.3 Summary of Stops, Pupils, and Windows 116
3.4 Redirecting Light Using Reflecting Prisms 119
3.5 An Introduction to Aberrations 120
3.6 The Simple Magnifier/Magnifying Glass and Eyepieces 123
3.6.1 Magnifying Glasses 123
3.6.2 Eyepieces 125
3.7 Microscopes 129
3.7.1 Magnification 130
3.7.2 Numerical Aperture: Light Collecting Ability 132
3.7.3 Spatial Resolution of Microscopes 134
3.8 Superresolution Microscopy: Beyond the Diffraction Limit 136
3.8.1 Electron Microscopes 136
3.8.2 Atomic Force Microscopes 136
3.8.3 Optical Microscopes 138
3.9 Telescopes, Binoculars, and Beam Expanders 139
3.9.1 Refracting Telescopes 139
3.9.2 Reflecting Telescopes 143
3.9.3 The Schmidt Telescope 144 3.9.4 Spatial Resolution of Telescopes 146
COMPLETE CONTENTS
Cambridge University Press & Assessment 978-1-316-51862-5 — Pedrottis' Introduction to Optics 4th Edition. Rayf Shiell, Iain McNab, with contributions by Matthew Romerein. Frontmatter. www.cambridge.org © Cambridge University Press & Assessment
3.10 Cameras 146 3.10.1 The Pinhole Camera 147 3.10.2 A Basic Camera 148 3.10.3 The f-Number 149 3.10.4 Depth of Field 151 3.10.5 Lens Design 152 3.10.6 Compact Cameras 155 Questions 157
4 Waves 164 Introduction 164 4.1 The One-Dimensional Wave Equation 164 4.2 Harmonic Waves 167 4.3 A Review of Complex Numbers 171 4.4 Complex Representation of Harmonic Waves 172 4.5 Harmonic Plane Waves 172 4.6 Other Harmonic Waves 176 4.6.1 Spherical Waves 176 4.6.2 Cylindrical Waves 177 4.6.3 Gaussian Beams 178 4.7 Electromagnetic Waves 179 4.8 Polarization of Light 183 4.8.1 Pure Polarized Light: Two Examples 184 4.8.2 Unpolarized Light 186 4.9 Doppler Effect 186 Questions 187
5 Light Sources, Displays, and Detectors 192 Introduction 192 5.1 Energy Quantization of Light 193 5.1.1 Blackbody Radiation 193 5.1.2 Correlated Color Temperature 195 5.1.3 Photons 196 5.2 Energy Quantization of Matter: Atoms 197 5.3 Energy Quantization of Matter: Molecules 200 5.3.1 Diatomic Molecules 200 5.3.2 Polyatomic Molecules 202 5.4 Energy Quantization of Matter: Crystalline Solids 203 5.4.1 The Tight-Binding Model 204 5.4.2 Band Gaps and Conductivity 204 5.4.3 The Nearly Free Electron Model 205 5.4.4 Doped Semiconductors 206 5.5 Widths and Populations of Energy Levels 207 5.5.1 Energy Level Widths 207 5.5.2 The Boltzmann Distribution 208 5.6 Incoherent Light Sources 209 5.6.1 Sunlight and Skylight 210 5.6.2 Cosmic Background Radiation 211 5.6.3 Discharge Lamps 211 5.6.4 Incandescent Light Sources 212 5.6.5 Fluorescent Lamps 213 5.6.6 Light-Emitting Diodes (LEDs) 214 5.7 The Business of Lighting: Perceived Color and Cost Efficiencies 218 5.8 Optical Displays 219 5.8.1 Liquid-Crystal Displays (LCDs) 219 5.8.2 Organic Light-Emitting Diode (OLED) Displays 221 5.8.3 MicroLED Displays 222 5.9 Optical Detectors 223 5.9.1 Thermal Detectors of Radiation 224 5.9.2 Photon Detectors 225 5.10 Detection of Images, and Image Sensors 228 5.10.1 Photographic Film 228 5.10.2 Image Detector Arrays 228 5.11 Noise and Sensitivity in Optical Detectors 230 Questions 231
Theme II A Scalar Field Approach to Light 237
6 Superposition of Waves 239 Introduction 239 6.1 Superposition and the Superposition Principle 240 6.2 Superposition of Harmonic Waves of the Same Frequency 240 6.2.1 Constructive Interference: δ = m(2π) 243 6.2.2 Destructive Interference: δ = (m + ½)2π 243 6.2.3 General Interference 243 6.3 Two Extremes: Mutually Incoherent and Mutually Coherent Beams 247 6.4 Standing Waves 248 6.5 The Beat Phenomenon 252 6.6 Phase and Group Velocities 255 Questions 258
7 Interference of Light 262 Introduction 262 7.1 Two-Beam Interference 263 7.1.1 Mutually Incoherent Beams: The Interference Term Disappears 266 7.1.2 Mutually Coherent Beams: The Time Average Operation Disappears 266 7.2 Young’s Double-Slit Experiment 269 7.3 Double-Slit Interference with Virtual Sources 275 7.3.1 Lloyd’s Mirror 275 7.3.2 Fresnel’s Biprism 276 7.4 Stokes’ Relations 277 7.5 Interference from Films 278 7.5.1 Interference from an Extended Monochromatic Source 283 7.5.2 Interference from a Small Monochromatic Source 284 7.6 Fringes of Equal Thickness 285 7.7 Applications: Measuring Spherical Surfaces and Film Thicknesses 286 7.7.1 Newton’s Rings 286 7.7.2 Measuring the Thicknesses of Opaque Films 289 Questions 291
8 Interferometry and Multilayer Films 296 Introduction 296 8.1 The Michelson Interferometer 297 8.2 Applications of the Michelson Interferometer 302 8.2.1 Measuring Refractive Index of a Gas 302 8.2.2 Measuring Wavelength Differences 302 8.2.3 Observing Variations in Air Flow 303 8.3 Variations of the Michelson Interferometer 304 8.3.1 The Twyman–Green Interferometer: Quality Control of Optical Components 304 8.3.2 The Mach–Zehnder Interferometer: Comparing Two Well-Separated Paths 305 8.4 Multiple-Beam Interference from a Parallel Plate 306 8.5 The Fabry–Pérot Interferometer: Multiple-Beam Interference in Practice 309 8.6 Analysis of the Fabry–Pérot Transmittance 311 8.6.1 Coefficient of Finesse, F: A Measure of Fringe Contrast 311 8.6.2 Finesse, F: A Measure of Fringe Width 313 8.7 The Variable-Length Fabry–Pérot Interferometer: A Spectrum Analyzer 315 8.8 Resolution of a Fabry–Pérot Interferometer 317 8.9 Fixed-Length Fabry–Pérot Interferometers and Laser Modes 319 8.9.1 Laser Modes and Single-Mode Operation 322 8.9.2 Laser Frequency Stabilization 323 8.10 Interference from Multilayer Films: A Matrix Approach 325 8.11 Antireflection Coatings at Normal Incidence 329 8.11.1 Single-Layer Films 329 8.11.2 Double-Layer Films 331 8.11.3 Triple-Layer Films 334 8.12 High-Reflection Coatings at Normal Incidence 336 Questions 339
9 Coherence 345 Introduction 345 9.1 Spatial Coherence and Its Measurement 346 9.2 Temporal Coherence and Its Measurement 347 9.3 Volume of Coherence 347 9.4 Finding the Spatial Coherence Width of a Light Field 348 9.5 Finding the Temporal Coherence Length of a Light Beam 352 9.5.1 Introduction to Fourier Analysis 352 9.5.2 The Fourier Analysis of a Single
Harmonic Pulse 357 9.5.3 Bandwidth, Linewidth, and Temporal Coherence 358 9.6 Partial Coherence 362 Questions 367
10 Fraunhofer Diffraction 372 Introduction 372 10.1 Single-Slit Diffraction 374 10.2 Beam Spreading 380 10.3 Diffraction from Rectangular and Circular Apertures 382 10.3.1 Fraunhofer Diffraction Pattern from a Rectangular Aperture 382 10.3.2 Fraunhofer Diffraction Pattern from a Circular Aperture 384 10.4 Spatial Resolution 389 10.5 Double-Slit Diffraction 391 10.6 Multiple-Slit Diffraction: The Diffraction Grating 394 10.6.1 The Theory of Multiple-Slit Diffraction 394 10.6.2 Diffraction Gratings in Practice 396 Questions 400
11 Prisms, Diffraction Gratings, and Spectrometers 405 Introduction 405 11.1 The Solar Spectrum and the Fraunhofer Lines 406 11.2 Spectral Identification 408 11.3 Prisms 409 11.3.1 Angular Deviation of a Prism 409 11.3.2 Angular Dispersion of a Prism 412 11.4 Prism Spectrometers 414 11.4.1 General Overview 414 11.4.2 Spectral Resolution of a Prism 415 11.4.3 Multiple Prisms for Greater Angular Dispersion 417 11.4.4 Prisms with Special Applications 417 11.4.5 Comparison of Prism and Diffraction Gratings for Wavelength Measurement 419 11.5 Gratings 419 11.5.1 The General Grating Equation 419 11.5.2 Free Spectral Range of a Diffraction Grating 423 11.5.3 Angular Dispersion of a Grating 425 11.5.4 Spectral Resolution of a Grating 426 11.5.5 Blazed Gratings and Echelle Gratings 428 11.6 Grating Spectrometers, and the Manufacture of Gratings 431 11.6.1 Various Grating Spectrometers 431 11.6.2 The Manufacture of Diffraction Gratings 434 Questions 438
Theme III A Vector Field Approach to Light 445
12 Reflection and Transmission at Surfaces
447 Introduction 447 12.1 Maxwell’s Equations, and Boundary Conditions 448 12.2 The Fresnel Equations 452 12.2.1 Application of the Boundary Conditions 452 12.2.2 The Reflection and Transmission Coefficients 455 12.3 Examining External and Internal Reflections 457 12.4 Phase Changes on Reflection 463 12.5 Evanescent Waves 467 12.6 Reflection from Metals; Complex Refractive Indices 468 Questions 470
13 Optical Fibers and Communications Technology 473 Introduction 473 13.1 Applications of Optical Fibers 474 13.2 Communications Systems: An Introduction 475 13.3 Modulation and Digitization of Signals 478 13.4 The Optics of Propagation and Fiber Modes 481 13.4.1 The Acceptance Cone and the Skip Distance 481 13.4.2 Allowed Propagation Modes 483 13.5 Attenuation, Regeneration, and Amplification 486 13.5.1 Extrinsic Losses 486 13.5.2 Intrinsic Losses 486 13.5.3 Regeneration and Amplification 489 13.6 Pulse Broadening 490 13.6.1 Modal Dispersion, and Graded Index Fibers 490 13.6.2 Material Dispersion 494 13.6.3 Waveguide Dispersion 497 13.6.4 Polarization Mode Dispersion 498 13.6.5 Comparing Different Types of Dispersion 500 13.7 Some Optical Fiber Communication Technologies 500 Questions 507
14 Mathematical Treatment of Polarization 512 Introduction 512 14.1 Jones Vectors: Representation of Pure Polarization States 515 14.2 Jones Matrices: Representation of Polarizing Components 526 14.2.1 Linear Polarizers 527 14.2.2 Waveplates 527 14.2.3 Rotators 528 14.2.4 Jones Matrices for Linear Polarizers, Waveplates, and Rotators 528 14.3 Stokes Vectors and Mueller Matrices 536 Questions 540
15 Polarization in Practice 544 Introduction 544 15.1 Polarization Due to Selective
Absorption: Dichroism 546 15.2 Polarization Due to Selective Reflection 548 15.3 Polarization Due to Selective Scattering 550 15.4 Introduction to Birefringence and Waveplates 552 15.5 Polarization due to Birefringence 557 15.6 Chirality: Optical Activity and Circular Dichroism 560 15.7 Photoelasticity 564 Questions 566
Theme IV Lasers and Laser Beams 573
16 Light–Matter Interactions 575 Introduction 575 16.1 Transparency, Scattering, Absorption, and Emission 576 16.2 Cross-Sections, Beer’s Law, and Lineshapes 580 16.3 Einstein’s Theory of Light–Matter Interactions 582 16.3.1 Stimulated Absorption 582 16.3.2 Stimulated Emission 584 16.3.3 Spontaneous Emission 584 16.4 Rate Equations for a Two-Level System, and the Einstein Relations 585 16.4.1 Broadband Illumination 586 16.4.2 Population Inversion and Negative Temperatures 588 16.4.3 Narrowband Illumination 590 16.5 Beer’s Law Revisited, and the Gain Coefficient 593 16.6 Transition Lineshapes and Broadening Mechanisms 594 16.6.1 Homogeneous Broadening 595 16.6.2 Inhomogeneous Broadening 597 Questions 599
17 Lasers and Laser Operation 603 Introduction 603 17.1 Essential Elements of a Laser 605 17.1.1 The Pump 606 17.1.2 The Gain Medium 607 17.1.3 The Optical Cavity 607 17.1.4 The Cooling System 608 17.2 Qualitative Description of Laser Operation 608 17.3 Characteristics of Laser Light 611 17.3.1 Monochromaticity (Temporal Coherence) 612 17.3.2 Wavefront Uniformity (Spatial Coherence) 612 17.3.3 Directionality 612 17.3.4 Intensity 613 17.3.5 Focusability 614 17.3.6 Pulsed Operation 615 17.4 Introducing Real Lasers 616 17.4.1 Atomic Gas Lasers, Ion Lasers, and Molecular Gas Lasers 617 17.4.2 Excimer Lasers 617 17.4.3 Dye
Lasers 618 17.4.4 Solid-State/Dielectric Lasers 618 17.4.5 Semiconductor Lasers 619 17.5 Rate Equations for a Four-Level Laser System 621 17.5.1 Undepleted Pump Approximation 623 17.5.2 Gain in the Ideal Four-Level Medium 624 17.5.3 Light Amplification in a Gain Medium 625 17.6 Steady-State Laser Output 626 17.6.1 Steady-State Output Intensity from a Ring Cavity 627 17.6.2 Steady-State Output Intensity from a Two-Mirror Linear Cavity 630 17.7 Gain Saturation and Laser Modes 631 17.7.1 Laser Operation in Homogeneously Broadened Media 631 17.7.2 Laser Operation in Inhomogeneously Broadened Media 632 17.8 Time-Dependent Phenomena at Laser Startup 633 17.9 Pulsed Operation: Q-Switching and Mode Locking 635 17.9.1 Q-Switching 635 17.9.2 Mode Locking 637 Questions 639
18 Laser Beams and Laser Cavities 646 Introduction 646 18.1 Intensity Patterns of Laser Beams 647 18.2 The Three-Dimensional Wave Equation and Electromagnetic Waves 648 18.3 Gaussian Beams 649 18.4 Radius of Curvature and Beam Width of Gaussian Beams 651 18.5 Intensity Profile, Divergence, and Wavefronts of Gaussian Beams 653 18.5.1 Intensity Profile 653 18.5.2 Beam Divergence 654 18.5.3 Wavefronts 654 18.6 Modes of Spherical Mirror Cavities 656 18.7 Gaussian Beam Propagation Through Optical Systems 659 18.7.1 The ABCD Law 660 18.7.2 Stable and Unstable Laser Cavities 663 18.7.3 Collimation of a Gaussian Beam 667 18.7.4 Beam Transmission Through Optical Components 667 18.7.5 Focusing a Gaussian Beam 669 18.8 Higher-Order, Hermite–Gaussian, Beams 670 18.8.1 Hermite–Gaussian Beams 673 18.8.2 Field and Intensity Patterns for Hermite–Gaussian Beams 674 18.8.3 Mode Competition in Laser Cavities 675 Questions 676
Theme V Advanced Topics 683
19 Aberrations 685 Introduction 685 19.1 Ray and Wavefront Aberrations 687 19.2 Wavefront Aberrations for Refraction at a Single Spherical Surface 690 19.3 Wavefront Aberration Coefficients for a Thin Lens 695 19.4 Spherical Aberration 697 19.4.1 The Effects of Spherical Aberration 697 19.4.2 Correcting for Spherical Aberration 699 19.5 Coma 700 19.5.1 The Effects of Coma 701 19.5.2 The Optical Sine Theorem and the Abbe Sine Condition 703 19.5.3 Correcting for Coma 704 19.6 Astigmatism and Field Curvature 705 19.6.1 The Effects of Astigmatism and Field Curvature 705 19.6.2 Correcting for Astigmatism and Field Curvature 707 19.7 Distortion 708 19.7.1 The Effects of Distortion 708 19.7.2 Correcting for Distortion 709 19.8 Chromatic Aberrations 709 19.8.1 The Effects of Chromatic Aberrations 709 19.8.2 Correcting for Chromatic Aberrations 711 Questions 716
20 Fourier Optics: Imaging and Spectroscopy 719 Introduction 719 20.1 Optical Data Imaging and Processing 721 20.1.1 The Fourier Transform and Fraunhofer Diffraction 721 20.1.2 Optical Spectrum Analysis 725 20.1.3 Optical Filtering 728 20.1.4 Optical Correlation and Pattern Recognition 730 20.1.5 Imaging: The Convolution Theorem and the Optical Transfer Function 735 20.2 Fourier Transform Spectroscopy 737 20.2.1 The Basic Integral of Fourier Transform Spectroscopy 740 20.2.2 Optical Frequency Comb – Fourier Transform Spectroscopy (OFC–FTS) 743 Questions 743
21 Fresnel Diffraction 747 Introduction 747 21.1 The Fresnel–Kirchhoff Diffraction Integral 748 21.2 Criteria for Fresnel Diffraction 751 21.3 Fresnel Diffraction of Spherical Waves from Circular Apertures 752 21.4 Phase Advance of
Secondary Wavelets 757 21.5 The Fresnel Zone Plate 758 21.6 Fresnel Diffraction of Cylindrical Waves from Straight Edges 761 21.7 The Cornu Spiral 765 21.8 Applications of the Cornu Spiral 769 21.8.1 Intensity of an Unobstructed Wavefront 769 21.8.2 Fresnel Diffraction by a Straight Edge 769 21.8.3 Fresnel Diffraction by a Single Slit 772 21.8.4 Fresnel Diffraction by a Wire 773 21.9 Babinet’s Principle 774 Questions 776
22 Holography 781 Introduction 781 22.1 Conventional Photography Versus Holography 782 22.2 Hologram of a Point Source: An Inline Configuration 784 22.3 Hologram of an Extended Object: An Off-Axis Configuration 786 22.4 Some Additional Properties of Holograms 789 22.5 White-Light Holograms 789 22.5.1 Rainbow Holograms 790 22.5.2 Volume Holograms 790 22.6 Applications of Holography 792 22.6.1 Nondestructive Testing 792 22.6.2 Time-Average Holographic Interferometry 793 22.6.3 Microscopy 793 22.6.4 Ultrasonic Holograms 794 22.6.5 Holographic Data Storage 795 22.6.6 Synthetic Holograms 795 22.6.7 Holocameras 796 22.6.8 Pattern Recognition 796 22.6.9 Holographic Optical Elements 797 Questions 797
23 Optical Properties of Materials 800 Introduction 800 23.1 Polarization of a Dielectric, and the Lorentz Model 801 23.2 Propagation of Light Inside a Dielectric 805 23.2.1 The Wave Equation for Light Inside a Dielectric 805 23.2.2 The Optical Constants of a Dielectric 806 23.2.3 Dispersion in Dielectrics 810 23.3 Electron Motion in Metals, and the Drude Model 812 23.4 Propagation of Light Inside a Metal 813 23.4.1 Skin Depth 814 23.4.2 Plasma Frequency 815 23.5 Improved Theory and Measurement of Optical Properties 817 Questions 818
24 Nonlinear Optics and the
Modulation of Light 822 Introduction 822 24.1 The Nonlinear Medium 823 24.2 Second-Harmonic Generation and Optical Rectification 826 24.3 Phase Matching 828 24.4 Frequency Mixing 830 24.5 Electro-Optic Effects 831 24.5.1 The Pockels Effect 832 24.5.2 The Kerr Effect 836 24.6 Magneto-Optic Effects: The Faraday Effect 839 24.7 The Acousto-Optic Effect 845 24.8 Optical Phase Conjugation 848 24.9 Optical Nonlinearities in Optical Fibers 851 24.9.1 Stimulated Raman Scattering 851 24.9.2 Stimulated Brillouin Scattering 852 24.9.3 Self-Phase Modulation and Cross-Phase Modulation 852 24.9.4 Raman Amplification in Fibers 852 Questions 853
25 Laser Technology and Applications 857 Introduction 857 25.1 Overview of Laser Applications 858 25.2 Medical Applications of Lasers 858 25.2.1 Diode Lasers 860 25.2.2 Fiber Lasers: An Overview 860 25.2.3 Carbon Dioxide Lasers and Tm:Fiber Lasers 860 25.2.4 Nd:YAG Lasers and Nd:Fiber Lasers 863 25.2.5 Frequency-Doubled Nd:YAG Lasers and Yb:Fiber Lasers: “KTP Lasers” 863 25.2.6 Er:YAG Lasers, Ho:YAG Lasers and Tm:Fiber Lasers 863 25.2.7 Excimer Lasers 863 25.3 Remote Sensing by LIDAR 865 25.4 Ultrashort Pulse Production and Applications 867 25.4.1 The Femtosecond/Gigawatt Regime 868 25.4.2 Chirped Pulse Amplification and the Terawatt and Petawatt Regimes 868 25.4.3 High-Harmonic Generation (HHG) and Attosecond Pulses 871 25.5 Cooling, Trapping, and Optical Tweezers 871 25.5.1 Laser Cooling 872 25.5.2 Laser Trapping 873 25.5.3 Optical Tweezers 874 25.6 Optical Parametric Oscillators 874 25.6.1 OPO Tuning with a Periodically Poled Crystal 876 25.6.2 Singly Resonant OPO 876 25.6.3 Doubly Resonant OPO 877 25.6.4 Pump-Resonant OPO 877 25.7 Optical
Frequency Combs 877 25.7.1 Current and Potential Applications of Optical Frequency Combs 880 Questions 881
26 Optics of the Eye 886 Introduction 886 26.1 Biological Structure of the Eye 887 26.2 Photometry 890 26.3 Optical Representation of the Eye 893 26.4 Functions of the Eye 894 26.4.1 Accommodation 896 26.4.2 Adaptation 896 26.4.3 Depth Perception 897 26.4.4 Visual Acuity 897 26.5 Vision Correction with External Lenses 899 26.5.1 Myopia 899 26.5.2 Hyperopia 901 26.5.3 Presbyopia 903 26.5.4 Astigmatism 904 26.6 Vision Correction with Surgery 905 26.6.1 Radial Keratotomy 906 26.6.2 Laser Surface Remodeling 906 26.6.3 Lenticule Extraction 907 Questions 907
Appendices A. Physical Constants 911 B. Mathematical Formulas 913 C. Bibliography and References 916 D. Answers to Selected Questions 925 E. Integrated Testlet Lookup Table 942 Index 944

Cartwheel Galaxy, imaged by the James Webb Space Telescope (courtesy of NASA, ESA, CSA, STScI).

Preface

Introduction

The field of optics – the study and application of the infrared, visible, and ultraviolet regions of the electromagnetic spectrum – can inspire both awe and significant advancements in science and engineering.
To pick one example far from home, the James Webb Space Telescope (JWST) was launched on December 25, 2021, from French Guiana, and nearly one month later it arrived at its destination, the Sun–Earth Lagrange L2 point, 1.5 million km (~4 lunar distances) from Earth. Its infrared telescope contains an exquisitely engineered 6.5 m diameter primary mirror, itself constructed from 18 tessellated hexagonal mirrors of gold-plated beryllium, which is perpetually pointed away from both Sun and Earth. By the time of printing the JWST had already recorded unprecedented images of galaxies deep in the Universe, of a NASA spacecraft striking an asteroid 11 million km away in order to modify its orbit, and of the rings of Neptune and its largest moon, Triton. The JWST took the remarkable image of the Cartwheel Galaxy shown in the frontispiece: a beautiful double-spiral galaxy located 500 million light years away, with a diameter of nearly 150,000 light years and with a complex structure that is thought to result from a head-on collision with another galaxy. The overarching aim of this book is to answer many of the fascinating questions that arise in the field of optics and optical engineering. For the telescope just described, this includes questions such as: why operate in the infrared; how are the hexagonal mirror segments adjusted to improve image quality; and what is a three-mirror anastigmat telescope – the type of telescope used in the JWST? Closer to home, we might consider the vibrant display of colors, and the different capacities for vision, across the animal kingdom. For example, how do the morpho butterflies from Central and South America appear so spectacularly blue without requiring a single blue pigment? How do chameleons change their color without the use of pigments? And why do cats and many other predators have vertically oriented pupils while deer and other herbivores have horizontally oriented pupils?
The answers to these questions, and others like them, help us to appreciate the ubiquity of interference and imaging, and they open a portal into the field of biomimicry, or nature-inspired design. From such understanding emerge further applications of optics and optical technology, in sectors as wide-ranging as remote sensing, communications, machining, and bioengineering. We have organized this new, fourth, edition of Pedrottis’ Introduction to Optics into five themes. We have maintained a similar number of chapters to previous editions, which allows for a variety of learning pathways to satisfy the preferences of instructors and learners. In the section titled Text Organization below we indicate four particular pathways that we anticipate will be the most popular. While updating this text we strived to remain mindful of our role as guides in this field. We hope to share our passion for optics with all our readers, and we also hope that the many connections that exist between different chapters may become apparent and be savored. We welcome all feedback.

Improvements and Special Features in the Fourth Edition

We have tried to maintain the succinct style and versatile structure of previous editions, while updating the language, figures, technology, captions, and questions, with the goal of providing a clear, consistent, and communicative textbook. We aspire to excite students by light and its wonders through a book that introduces students gradually to the physical models with which optics and optical phenomena can be understood, and with which effects can be quantitatively predicted.
Particular improvements and/or additions to the fourth edition include:
• Callout Box sections titled “A closer look . . .” explore particular topics in greater detail, or with greater mathematical sophistication, than the main text. They supplement the main text while maintaining its overall flow.
• Callout Box sections titled “A glance at . . .” introduce the reader to exciting new technologies and applications and provide an invitation to further research.
• A new type of question, called an integrated testlet, is introduced, which employs scaffolded, answer-until-correct steps to assist and deepen student understanding. These reinforce concepts by taking students on a path of increasing complexity as they explore the content in each chapter. The integrated testlets can be adapted for use by instructors as “ice-breakers” or as pre-class quizzes, from which to deliver a presentation.
• Figures have been made both pedagogically sound and scientifically correct, with several of these animated and vibrantly displayed through online resources available to students from around the world.
• New material has been introduced at a level appropriate to its placement within the ordering of the chapters. In particular, we discuss and/or expand on: exact ray tracing in Chapter 2; the resolution of a microscope and the design of cellphone camera lenses in Chapter 3; the quantum behavior of matter in Chapter 5; the solar spectrum, spectroscopy, and spectroscopic instruments in Chapter 11; light–matter interactions in Chapter 16; and modern laser applications in Chapter 25.
• Students have often asked us why things are named as they are. To help answer this, we have given, as footnotes, the origin of many optical terms that are in common use.
• In general, we present phenomena first, followed by a discussion, explanation, and derivation of the quantitative predictions that stem from the analysis of these phenomena.
Through footnotes and the Box sections titled “A closer look”, we describe experimental investigations that may be completed at home, or in the classroom, which bring to life the distinct, rich, physical phenomena embedded within each chapter.
• Material has, where warranted, been consolidated from several chapters into a single chapter, to enhance the readability and consistency of approach. In particular: matrix methods within the paraxial approximation are now included within the chapter on ray optics; multiple-beam interference from a parallel plate is now included within the chapter on interferometry; and prisms operating as dispersive elements are included within a substantially revised chapter on prisms, diffraction gratings, and spectrometers.
• The text has been adapted to incorporate the changes that derive from the 2019 redefinition of the SI base units, in which the kilogram, ampere, kelvin, and mole are defined by setting exact numerical values for the Planck constant, the elementary charge, the Boltzmann constant, and the Avogadro constant. This means that some quantities that previously were measured are now necessarily exact, and vice versa.
Finally, it is appropriate to add a note about style. As we updated the text we made decisions commensurate with our aim of providing as seamless a text as possible. In particular:
• We have chosen throughout the book to use the term intensity, rather than irradiance, for the power per unit area that can strike a detector placed at some location. In our experience the term intensity is more commonly adopted by practitioners of optics, both in the laboratory and in research papers.
We expand on these terms and this particular choice in Chapter 1.
• We have added tildes to those variables that represent complex quantities, and we have strived to emphasize that the corresponding real, physical, quantity is the real part of this complex representation. Real, physical, vectors are denoted by both bold type and an over-arrow, while real, physical, scalar waves are denoted simply by the variable alone. Thus, for example, the real, physical, scalar wave ψ is the real part of its complex representation: ψ = Re(ψ̃). The real, physical, electric field vector, E⃗, is similarly the real part of its complex representation: E⃗ = Re(Ẽ). We discuss this further in Chapter 4.

Accompanying Interactive Animations

Figures in the book that are marked with the butterfly and slider icon are available online as interactive animations. These allow the reader to explore a range of values for specific parameters that are set using a slider. By exploring this parameter space, a more complete understanding of the information contained within a figure can be attained.

Integrated Testlets: A Guide to Their Use

The final question found in each chapter is an Integrated Testlet: a scaffolded, multiple-part, question, where each part should be attempted in order.1 The Integrated Testlet Lookup Table of Appendix E provides a protocol specifically designed for individual study and self-assessment. For each part of each integrated testlet, each option A…E has a unique 4-digit code surrounded by square brackets, for example: . Read the introduction at the start of the integrated testlet, and then determine your answer to part (i). Then look up the associated code in Appendix E to confirm the correctness of this answer. If the annotation next to the code is “YES”, your answer is correct, and you can progress to part (ii).
If the annotation is “NO”, the answer is incorrect and you should reassess your response and make subsequent selections as necessary until you determine the correct answer. To gain the full benefit from this, we recommend you attempt each part until you arrive at the correct answer, and understand why it is correct, before moving on to the next part. These questions enable you to work through quite complex problems, with each part of a question building on the previous answer, which you will now know to be correct.

Text Organization

The text is divided into five themes that are broadly partitioned according to the various models used to understand and harness light. As an indication of the flexibility of the text, we indicate here four possible one-semester course pathways:

1 See: A. D. Slepkov and R. C. Shiell, Phys. Rev. ST Physics Ed. Research, 10, 020120, 1–15, 2014; R. C. Shiell and A. D. Slepkov, CELT, VIII, 201–210, 2015; R. C. Shiell and I. R. McNab, Proc. of SPIE, 12723, 127230Y, 2023.
Physical Optics (= Wave Optics)
Prerequisites: Two or three semesters of introductory physics and two semesters of calculus
Chapter 1 The Optical Landscape
Chapter 2 Ray Optics (Sections 2.1 through 2.7)
Chapter 3 Understanding Optical Instruments
Chapter 4 Waves
Chapter 5 Light Sources, Displays, and Detectors
Chapter 17 Lasers and Laser Operation (Sections 17.1 through 17.4)
Chapter 6 Superposition of Waves
Chapter 7 Interference of Light
Chapter 8 Interferometry and Multilayer Films
Chapter 9 Coherence
Chapter 10 Fraunhofer Diffraction
Chapter 12 Reflection and Transmission at Surfaces
Chapter 13 Optical Fibers and Communications Technology
Chapter 14 Mathematical Treatment of Polarization
Chapter 15 Polarization in Practice

Laser Optics
Prerequisites: Two or three semesters of introductory physics and two semesters of calculus
Chapter 1 The Optical Landscape
Chapter 2 Ray Optics
Chapter 3 Understanding Optical Instruments
Chapter 19 Aberrations
Chapter 4 Waves
Chapter 5 Light Sources, Displays, and Detectors
Chapter 17 Lasers and Laser Operation (Sections 17.1 through 17.4)
Chapter 11 Prisms, Diffraction Gratings, and Spectrometers
Chapter 12 Reflection and Transmission at Surfaces
Chapter 13 Optical Fibers and Communications Technology
Chapter 6 Superposition of Waves
Chapter 7 Interference of Light
Chapter 8 Interferometry and Multilayer Films

Physical Optics (= Wave Optics)
Prerequisites: Two or three semesters of introductory physics, two semesters of calculus, and one semester of intermediate electricity and magnetism
Chapter 1 The Optical Landscape
Chapter 4 Waves
Chapter 6 Superposition of Waves
Chapter 17 Lasers and Laser Operation (Sections 17.1 through 17.4)
Chapter 7 Interference of Light
Chapter 8 Interferometry and Multilayer Films
Chapter 9 Coherence
Chapter 10 Fraunhofer Diffraction
Chapter 11 Prisms, Diffraction Gratings, and Spectrometers
Chapter 14 Mathematical Treatment of Polarization
Chapter 15 Polarization in Practice
Chapter 20 Fourier Optics: Imaging and Spectroscopy
Chapter 21 Fresnel Diffraction
Chapter 22 Holography

Laser Optics
Prerequisites: Two or three semesters of introductory physics, two semesters of calculus, and one semester of intermediate electricity and magnetism
Chapter 1 The Optical Landscape
Chapter 2 Ray Optics
Chapter 4 Waves
Chapter 6 Superposition of Waves
Chapter 16 Light–Matter Interactions
Chapter 17 Lasers and Laser Operation
Chapter 18 Laser Beams and Laser Cavities
Chapter 7 Interference of Light
Chapter 8 Interferometry and Multilayer Films
Chapter 9 Coherence
Chapter 13 Optical Fibers and Communications Technology
Chapter 10 Fraunhofer Diffraction
Chapter 14 Mathematical Treatment of Polarization
Chapter 15 Polarization in Practice
Chapter 25 Laser Technology and Applications

Other chapter sequences are possible. For example, for upper-year undergraduates who have covered two semesters of electricity and magnetism, the Laser Optics course pathway above could be modified by replacing Chapters 4 and 6 with Chapters 22 (Holography), and 24 (Nonlinear Optics and the Modulation of Light). A variety of two-semester, two-quarter, or three-quarter sequences are also possible.

From the Preface to the Third Edition

In honor of the original authors, we preserve here a portion of the preface to the third edition.
“The field of optics impacts an ever-expanding range of applications in physics, engineering, and technology. The parallel emergence of lasers, fiber optics, nonlinear devices, and a variety of semiconductor sources and detectors in the 1960s initiated a continuing period of rapid development in applied and theoretical optics. The need for a variety of updated optics texts with different approaches and emphases is apparent, both for students of optics and for practitioners who need an occasional review of the basics. With Introduction to Optics we propose to teach introductory modern optics at an intermediate level. In order to use this text, students should have a preparatory background that includes a calculus-based introductory physics sequence and at least two semesters of calculus. The material in this text encompasses the traditional areas of classical optics and many topics in modern optics. The organization of the material is intended to facilitate its use in a variety of one-semester courses, with different emphases. In addition, the text includes more than enough material to be used in a full-year optics course for physics or engineering students at the sophomore, junior, or senior undergraduate level. We wish to thank the many teachers who have inspired us with an interest in optics and teaching, and the many students who have motivated us to teach with clarity and efficiency. Frank L. Pedrotti, S.J., Leno S. Pedrotti, and Leno M. Pedrotti”

Acknowledgments: Fourth Edition

This new edition stemmed from the authors’ experiences teaching introductory through to upper-year optics and laser physics courses at various universities, including Trent University, the University of Waterloo, the University of Sussex, and
Newcastle University. As we updated the third edition of this text we were profoundly grateful to the many inquisitive students who asked us insightful (and difficult!) questions in their search for clarification of the subject. Indeed, both of us thank all the students we have taught, who have stimulated us to be the best scholars that we can be. Among our colleagues also, we acknowledge those who have taken their personal time to read draft chapters and provide excellent and honest feedback, and who are well-meaning friends indeed. We thank in particular Cécile Fradin, Jonathan Murrell, Duncan O’Dell, Jeffrey Philippson, Aaron Slepkov (the other co-developer of the integrated testlet), and Balaji Subramanian, as well as the anonymous reviewers solicited by Cambridge University Press & Assessment. We are exceptionally grateful to Matthew Romerein for so masterfully blending the skills of art and science in producing scientifically accurate figures of stunning clarity. We are very grateful to Nicholas Gibbons, Stefanie Seaton, Tineke Bryson, and the entire editorial team at Cambridge University Press & Assessment for their helpful support, and for advocating for all the benefits that science education can bring, and to Susan Parkinson for her perceptive insights and diligent copy-editing. We are grateful also to Leno M. Pedrotti for allowing us free rein with the text. We very much hope that he enjoys the result. We have made every effort to avoid errors but some will remain, for which we blame only ourselves. If anyone should find errors, we would be grateful to be notified of them so we may include them within the list of errata which will be made available online. We embrace the idea of a community of practice, and thus welcome any and all comments related to this text. 
To my parents for all they taught me; to Katherine for her enduring patience as I toiled away at “the book”; to my daughter Zara for making life so wondrous; and to the rest of my family, and close friends from undergraduate days and beyond.
Rayf Shiell

In loving memory of my parents, Joy and Donald, for encouraging me to ask “How?” and to find out “Why?”. With love to Juyean and Jin for their unending patience and support, and to the rest of my family, friends, and economic associates.
Iain McNab
https://awwalker.com/wp-content/uploads/2018/02/homslides_2_egyptian_fractions.pdf
Representation via Egyptian Fractions
Problems from the History of Mathematics, Lecture 2 — January 26, 2018
Brown University

Egyptian Fractions

An Egyptian fraction is a representation of a rational number p/q as a sum of distinct unit fractions. For example,

15/36 = 1/3 + 1/12 = 1/4 + 1/6.

Egyptian fractions were the standard means to represent rational numbers in ancient Egypt, and their use continued into early European mathematics. While it is not known why the Egyptians used this system, two probable causes are:

1. Ease of division of goods into equal parts. (To split 15 pizzas among 36 people, split 9 pizzas into quarters and 6 into sixths.)
2. Division as a concept inspired by the study of reciprocals of integers.

The Rhind Papyrus

The Rhind papyrus is an Egyptian mathematical papyrus dated to c. 1650 BC, although parts were copied from earlier texts dated to c. 1850 BC. It begins with a table known as the 2/n table, which lists Egyptian fractions for the numbers 2/n with n ≤ 101 odd.

There is some debate as to how Ahmes and other Egyptian scribes would have prepared such tables. The last entry in the 2/n table suggests that the identity

2/n = 1/n + 1/(2n) + 1/(3n) + 1/(6n)

was known, but fewer terms and smaller denominators are preferred. For general p/q with q composite, the Egyptians would attempt to write p as a sum of divisors of q. This is not always possible (e.g., 5/21). What survives is likely a mix of formulas and ad hoc results.

A Greedy Algorithm for Egyptian Fractions

It is not clear that every rational number even has an Egyptian fraction. The first proof of this result is due to Leonardo of Pisa (Fibonacci) and appears in his Liber Abaci in 1202. Fibonacci’s proof is an example of a greedy algorithm:

1. Given p/q < 1, let n₁ be minimal such that 1/n₁ ≤ p/q, i.e., 1/n₁ ≤ p/q < 1/(n₁ − 1).
2. Define p/q − 1/n₁ = (n₁p − q)/(qn₁) =: p₁/q₁.
3. Repeat from (1) with p₁/q₁ in place of p/q.

This process will terminate because p₁ < p.
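The greedy step above is easy to run with exact rational arithmetic. A minimal sketch (the function name `egyptian_greedy` is mine, not from the lecture):

```python
from fractions import Fraction

def egyptian_greedy(p, q, max_terms=10):
    """Fibonacci's greedy algorithm: at each step subtract the largest
    unit fraction 1/n with 1/n <= remainder, i.e. n = ceil(q/p)."""
    r = Fraction(p, q)
    denoms = []
    while r > 0 and len(denoms) < max_terms:
        n = -(-r.denominator // r.numerator)  # ceil(q/p) without floats
        denoms.append(n)
        r -= Fraction(1, n)  # numerator strictly decreases, so this terminates
    return denoms

# 15/36 from the slides: greedy finds 1/3 + 1/12
print(egyptian_greedy(15, 36))
# 29/61: the greedy expansion starts 1/3 + 1/8 + 1/59 + 1/7853 + ...
print(egyptian_greedy(29, 61)[:4])
```

The `ceil` trick `-(-a // b)` keeps everything in integers, and `Fraction` guarantees the termination argument (the numerator of the remainder strictly decreases) holds exactly as in the proof.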
The solutions this produces are not always very elegant. For example, the greedy algorithm gives

29/61 = 1/3 + 1/8 + 1/59 + 1/7853 + 1/96901533 + 1/21909783131182008,

whereas the much shorter expansion

29/61 = 1/3 + 1/8 + 1/60 + 1/2440

also works.

Open Problems in Egyptian Fractions

Egyptian fractions are of continued interest to number theorists today. Some open problems include the following:

1. It is known that each proper fraction with denominator q has an Egyptian fraction of length O(√(log q)). A conjecture of Erdős claims O(log log q) suffices.
2. The Erdős–Straus Conjecture: 4/n = 1/x + 1/y + 1/z has a solution in positive integers for each n ≥ 2.

Questions?
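The Erdős–Straus conjecture is easy to verify numerically for small n. A brute-force sketch (the helper name `erdos_straus` and the search bounds are mine; the bounds follow from 1/x being the largest of the three terms):

```python
from fractions import Fraction

def erdos_straus(n):
    """Find x <= y <= z with 4/n = 1/x + 1/y + 1/z, or None if the
    (bounded) search fails.  Since 1/x is the largest term,
    1/x < 4/n <= 3/x, i.e. n/4 < x <= 3n/4."""
    target = Fraction(4, n)
    for x in range(n // 4 + 1, 3 * n // 4 + 1):
        r = target - Fraction(1, x)
        if r <= 0:
            continue
        # similarly 1/y <= r and 2/y >= r bound y by ceil(1/r) and floor(2/r)
        y_lo = max(x, -(-r.denominator // r.numerator))
        y_hi = (2 * r.denominator) // r.numerator
        for y in range(y_lo, y_hi + 1):
            s = r - Fraction(1, y)
            if s > 0 and s.numerator == 1:  # leftover is itself a unit fraction
                return (x, y, s.denominator)
    return None

print(erdos_straus(5))   # one solution: 1/2 + 1/4 + 1/20 = 4/5
```

Because the search uses exact `Fraction` arithmetic, any triple it returns satisfies the equation exactly; repeats among x, y, z are allowed, as in the usual statement of the conjecture.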
https://pmc.ncbi.nlm.nih.gov/articles/PMC1174929/
Blocking protein farnesyltransferase improves nuclear blebbing in mouse fibroblasts with a targeted Hutchinson–Gilford progeria syndrome mutation

Proc Natl Acad Sci U S A. 2005 Jul 12;102(29):10291–10296. doi: 10.1073/pnas.0504641102
PMCID: PMC1174929; PMID: 16014412

Shao H. Yang, Martin O. Bergo†, Julia I. Toth, Xin Qiao, Yan Hu, Salemiz Sandoval, Margarita Meta‡, Pravin Bendale§, Michael H. Gelb§, Stephen G. Young, and Loren G. Fong¶

Division of Cardiology, Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA 90095; †Department of Internal Medicine, Bruna Stråket 16, Third Floor, Sahlgrenska University Hospital, SE-413 45 Göteborg, Sweden; ‡Musculoskeletal and Quantitative Research Group, Department of Radiology, University of California, San Francisco, CA 94107; and §Departments of Chemistry and Biochemistry, University of Washington, Seattle, WA 98195

¶To whom correspondence should be addressed at: University of California, 675 Charles East Young Drive South, MacDonald Medical Research Laboratory Building, Room 4770, Los Angeles, CA 90095. E-mail: lfong@mednet.ucla.edu.

Communicated by Daniel Steinberg, University of California at San Diego, La Jolla, CA, June 6, 2005. Received 2005 Apr 20; issue date 2005 Jul 19.
Copyright © 2005, The National Academy of Sciences. Freely available online through the PNAS open access option.

Abstract

Hutchinson–Gilford progeria syndrome (HGPS), a progeroid syndrome in children, is caused by mutations in LMNA (the gene for prelamin A and lamin C) that result in the deletion of 50 aa within prelamin A. In normal cells, prelamin A is a “CAAX protein” that is farnesylated and then processed further to generate mature lamin A, which is a structural protein of the nuclear lamina. The mutant prelamin A in HGPS, which is commonly called progerin, retains the CAAX motif that triggers farnesylation, but the 50-aa deletion prevents the subsequent processing to mature lamin A. The presence of progerin adversely affects the integrity of the nuclear lamina, resulting in misshapen nuclei and nuclear blebs.
We hypothesized that interfering with protein farnesylation would block the targeting of progerin to the nuclear envelope, and we further hypothesized that the mislocalization of progerin away from the nuclear envelope would improve the nuclear blebbing phenotype. To test this hypothesis, we created a gene-targeted mouse model of HGPS, generated genetically identical primary mouse embryonic fibroblasts, and then examined the effect of a farnesyltransferase inhibitor on nuclear blebbing. The farnesyltransferase inhibitor mislocalized progerin away from the nuclear envelope to the nucleoplasm, as determined by immunofluorescence microscopy, and resulted in a striking improvement in nuclear blebbing (P < 0.0001 by the χ² test). These studies suggest a possible treatment strategy for HGPS.

Keywords: aging, lamin A/C, laminopathy

Hutchinson–Gilford progeria syndrome (HGPS) is a progeroid syndrome characterized by a host of aging-like phenotypes, including a wizened appearance of the skin, osteoporosis, alopecia, and premature atherosclerosis (1). Children with HGPS die at a mean age of 13, generally from myocardial infarctions or strokes (1). This disease is caused by the accumulation of a mutant form of prelamin A that cannot be processed to mature lamin A (1). In normal cells, wild-type prelamin A is virtually undetectable because it is fully converted to mature lamin A, a structural protein of the nuclear lamina (2, 3). The nuclear lamina is an intermediate filament meshwork adjacent to the inner nuclear membrane that provides structural support for the nucleus (2, 3). Prelamin A contains a nuclear localization signal and terminates with a CAAX motif (2), in which C is a cysteine, the A residues are usually aliphatic amino acids, and X can be one of many different residues. CAAX motifs are also found on lamin B1, lamin B2, the Ras family of proteins, and many other cellular proteins.
The CAAX motif triggers three sequential enzymatic posttranslational modifications, beginning with protein prenylation. In the case of prelamin A, the first processing step is carried out by protein farnesyltransferase (FTase) and involves the addition of a 15-carbon farnesyl lipid to the thiol group of the cysteine within the CAAX motif. Second, the last 3 aa of the protein (i.e., -AAX) are removed by a prenylprotein-specific endoprotease. For prelamin A, this step is likely to be a redundant function of two endoplasmic reticulum membrane endoproteases, Zmpste24 and Rce1 (4). Third, the newly exposed farnesylcysteine is methylated by Icmt, a prenylprotein-specific membrane methyltransferase of the endoplasmic reticulum (5). After these CAAX-box modifications have been completed, prelamin A (in contrast to other CAAX proteins) undergoes an additional processing step. The last 15 aa of the protein (including the farnesylcysteine methyl ester) are clipped off by Zmpste24 and then degraded, leaving behind mature lamin A (4, 6, 7). The farnesylation of prelamin A is important for its targeting to the nuclear envelope (8–10). Each of the three CAAX motif modifications of prelamin A renders the C terminus of the protein more hydrophobic, facilitating its association with the inner nuclear membrane, where the protein is cleaved, releasing mature lamin A (9, 11). In the absence of farnesylation (for example, in mevinolin-treated cells), prelamin A accumulates in the nucleoplasm and does not reach the nuclear envelope (9, 11). In the setting of Zmpste24 deficiency, farnesyl prelamin A accumulates at the nuclear envelope (6, 12) and adversely affects the integrity of the nuclear envelope. The nuclei of Zmpste24-deficient fibroblasts are misshapen, containing numerous nuclear blebs (6, 12). HGPS is most commonly caused by a de novo point mutation in exon 11 of LMNA (1).
This mutation, which occurs in codon 608, activates a cryptic splice site and leads to the in-frame deletion of 50 aa within prelamin A. This deletion leaves the CAAX motif intact; hence, the mutant prelamin A (progerin) is predicted to undergo farnesylation, release of the -AAX, and carboxyl methylation. However, the site for the second endoproteolytic cleavage step is eliminated by the deletion (1). Thus, progerin cannot be processed to lamin A and likely retains a farnesylcysteine methyl ester at its C terminus. Like Zmpste24-deficient cells, HGPS fibroblasts contain misshapen nuclei with numerous blebs of the nuclear envelope (1). In human HGPS cells, the severity of nuclear blebbing is variable, depending in part on the number of times the cells have been passaged (13). A GFP-tagged progerin has been reported to accumulate along the nuclear envelope of HeLa cells and cause nuclear shape abnormalities (13). We hypothesized that the farnesylation of progerin targets the protein to the nuclear envelope, where it might weaken the nuclear lamina and cause nuclear blebbing. We further hypothesized that blocking farnesylation with an FTase inhibitor (FTI) would mislocalize progerin away from the nuclear envelope and reduce nuclear blebbing. Some researchers might argue that the latter hypothesis is unattractive, given that the FTI would also block the posttranslational processing of lamin B1 and lamin B2, potentially weakening the lamina further. However, we hypothesized that the salutary effects of blocking farnesylation of progerin would “trump” any deleterious effects of the FTI on the lamina and lead to an overall improvement in nuclear blebbing. To test the impact of blocking the farnesylation of progerin on nuclear shape, we reasoned that it would be helpful to create gene-targeted “HGPS mice” that express progerin at levels sufficient to cause nuclear blebbing. 
The existence of a gene-targeted mouse model would make it possible to study the impact of an FTI on nuclear shape in independent lines of low-passage, genetically identical mouse embryonic fibroblasts (MEFs). In this study, we adopted exactly this strategy, first by generating a mouse model of HGPS and then by using primary MEFs to define the impact of an FTI on the nuclear blebbing phenotype. Materials and Methods Gene-Targeted Mouse Model of HGPS. The DNA fragments for the arms of the gene-targeting vector were generated by long-range PCR of genomic DNA from 129/Ola ES cells. A 6-kb 5′ fragment, which spanned from the end of intron 5 to sequences 1.9 kb downstream from the 3′ UTR (encoded by exon 12) was amplified with 5′-GGCTTCCTGGTCACTGGATA-3′ and 5′-GATCTGCCTGGAAGCTGAGT-3′ and cloned into pCR2.1-TOPO-XL (Invitrogen). Next, a 5-kb EcoRI fragment (spanning from a polylinker EcoRI site in pCR2.1-TOPO-XL to an EcoRI site 0.9 kb downstream from the 3′ UTR) was cleaved from the vector and cloned into pBSK (Stratagene). To create the 5′ arm of a sequence-replacement vector, this EcoRI fragment was subjected to two sequential site-directed mutagenesis reactions (QuikChange, Stratagene). First, intron 10 was deleted with the primer 5′-GATGGAGAAGAGCTCCTCCATCACCACCGTGGTTCCCACTGCAGCGGCTCGGGGGACCCC-3′ and the reversed and complemented primer. Second, the last 150 nt of exon 11 and intron 11 were deleted with the primer 5′-GACAAGGCTGCCGGTGGAGCGGGAGCCCAGAGCTCCCAGAACTGCAGCATCATGTAATCT-3′ and the reversed and complemented primer. To create the gene-targeting vector, the mutant EcoRI fragment was cloned into the polylinker EcoRI site of pKS loxP NT-mod. The 3′ arm, consisting of sequences immediately downstream of those in the 5′ arm, was amplified with the primers 5′-GACAGCCACCTGGTCAGTTT-3′ and 5′-GTAACTCTGGCTGCCCTCAA-3′ and then cloned into pCR2.1-TOPO-XL. To complete the gene-targeting vector, this fragment was cloned into the polylinker AscI site of pKS loxP NTmod. 
The integrity of the vector (≈17 kb in length) was verified by DNA sequencing and restriction mapping. The vector was linearized with NotI and electroporated into strain 129/Ola ES cells. To identify clones carrying the targeted Lmna HG allele, we performed Southern blot analyses of EcoRI-digested genomic DNA with a 348-bp 5′-flanking probe. The probe was generated by PCR from mouse genomic DNA with the following primers: 5′-CAAGGAGCTCGGATTCTGTC-3′ and 5′-GTCAGGGAAGAGTGCAGAGG-3′. The probe detected a 10.4-kb band in the wild-type Lmna allele and a 9.3-kb band in the Lmna HG allele. Genotyping was also performed on genomic DNA by PCR with the following primers: 5′-TGAGTACAACCTGCGCTCAC-3′ and 5′-CAGACAGGAGGTGGCATGT-3′. The PCR fragment, spanning from exon 11 to exon 12, was 582 bp in the wild-type Lmna allele and 186 bp in the Lmna HG allele. Creating Lmna HG MEFs. Targeted 129/Ola ES cells were microinjected into C57BL/6 blastocysts to produce chimeric mice. To generate MEFs, chimeras were bred with C57BL/6 females. Wild-type primary MEFs (Lmna+/+) and Lmna HG/+ MEFs were prepared from the same litter of day 13.5 embryos and genotyped by PCR. Because the MEFs were generated from embryos of chimera matings, they were genetically identical (one 129/Ola chromosome and one C57BL/6 chromosome). Cells homozygous for the Lmna HG mutation were generated by subjecting Lmna HG/+ cells to several months of selection in increasing concentrations of G418 (14). The Lmna HG/HG cells were euploid, but many of the cells were differentiated, as determined by morphology. FTIs. For all experiments, we used PB-43, which is a member of the tetrahydroquinoline family of protein FTIs (15) that displays potent inhibition of rat FTase in vitro (IC 50 = 1.7 nM). PB-43 readily crosses cell membranes, as demonstrated by its ability to kill Plasmodium falciparum in human red blood cells (M.H.G., unpublished data). 
PB-43 was synthesized by described methods (15) and shown to be pure by HPLC on a reverse-phase column. The compound was dissolved in DMSO at a concentration of 10 mM and stored in aliquots at -80°C.

Treatment of Cells with the FTI and Western Blot Analyses. Adherent early-passage MEFs in six-well tissue culture plates were incubated with the vehicle control (DMSO) or the indicated concentrations of PB-43 diluted in culture medium at 37°C for 48 h. The cells were washed with PBS, and urea-soluble extracts were prepared as described in ref. 11. Cell pellets solubilized with SDS-containing buffers were also prepared and yielded results indistinguishable from those with urea extraction. Proteins were size-separated on 4–12% gradient polyacrylamide Bis-Tris gels (Invitrogen) and then electrophoretically transferred to nitrocellulose membranes for Western blotting. The following antibody dilutions were used: 1:400 anti-lamin A/C goat IgG (sc-6215, Santa Cruz Biotechnology), 1:400 anti-lamin B (sc-6217, Santa Cruz Biotechnology), 1:6,000 anti-mouse prelamin A rabbit antiserum (12, 13), 1:500 anti-Hdj-2 mouse IgG (LabVision, Fremont, CA), 1:1,000 anti-actin goat IgG (sc-1616, Santa Cruz Biotechnology), 1:6,000 horseradish peroxidase (HRP)-labeled anti-goat IgG (sc-2020, Santa Cruz Biotechnology), 1:4,000 HRP-labeled anti-mouse IgG (Amersham Biosciences), and 1:6,000 HRP-labeled anti-rabbit IgG (Amersham Biosciences). Antibody binding was detected with the ECL Plus chemiluminescence system (Amersham Biosciences) and exposure to x-ray film.

Immunofluorescence Microscopy. Primary cells of different genotypes were grown on coverslips, fixed in 3% paraformaldehyde, permeabilized with 0.2% Triton X-100, and blocked with BSA (12). Cells were incubated for 60 min with antibodies against lamin A (sc-20680) or lamin B (sc-6217) (Santa Cruz Biotechnology).
After washing, cells were stained with species-specific Cy3-conjugated secondary antibodies (Jackson ImmunoResearch) and DAPI to visualize DNA. Images were obtained on an Axiovert 40CFL microscope (Zeiss) with a ×63/1.25 oil-immersion objective and processed with AxioVision software (version 4.2; Zeiss). Nuclear shape abnormalities were scored by two independent observers, who were blinded to genotype or treatment group.

Results

We created a mouse model of HGPS that expressed large amounts of progerin. To accomplish that goal, we created a mutant Lmna allele, Lmna HG, that exclusively yields progerin (Fig. 1 A). We deleted intron 10 of Lmna, thereby eliminating lamin C synthesis; we also deleted the last 150 nt of exon 11 and intron 11, which results in the synthesis of progerin (and precludes wild-type prelamin A synthesis). Two targeted ES cell clones (from 192 G418- and FIAU-resistant clones) were identified by Southern blotting (Fig. 1 B) and used to create 22 high-percentage chimeric mice. Those mice were bred with C57BL/6 mice to produce the Lmna HG/+ mice for this study (Fig. 1 C). In addition, Lmna HG/HG cell lines were created from Lmna HG/+ ES cells by high-G418 selection (Fig. 1 D). As predicted, primary MEFs from Lmna HG/+ embryos produced progerin, along with lamin A and lamin C from the wild-type Lmna allele (Figs. 1 D and 3), whereas the Lmna HG/HG cells yielded only progerin (Fig. 1 D). Lmna expression in wild-type ES cells is low (2, 3, 16). However, the Lmna HG/HG cells, which had undergone ≈2 months of high-G418 selection and appeared to be differentiated, expressed progerin at levels comparable with those in MEFs (Fig. 1 D). Not surprisingly, the nuclei of many Lmna HG/+ MEFs were misshapen and contained large blebs (Fig. 2).

Fig. 1. Production of a mutant Lmna allele, Lmna HG, that yields progerin. (A) Gene-targeting strategy, which involves deleting intron 10, intron 11, and the last 150 nt of exon 11 of Lmna.
(B) Southern blot analysis detecting the Lmna HG allele in mouse ES cells, with EcoRI-cleaved genomic DNA and the indicated 5′ flanking probe. (C) PCR identification of the Lmna HG allele. Results with wild-type MEFs (WT), heterozygous MEFs (Lmna HG/+), homozygous ES cells (Lmna HG/HG), and heterozygous MEFs (Lmna HG/+) are shown. (D) Western blotting identification of progerin with a lamin A/C-specific monoclonal antibody. Wild-type cells, Lmna HG/+ MEFs, and Lmna HG/HG cells are shown. On SDS/PAGE gels, the electrophoretic migration of progerin in mouse Lmna HG/+ MEFs and human HGPS fibroblasts was identical (data not shown).

Fig. 3. Western blot analysis of wild-type, Lmna HG/+, and Zmpste24-/- MEFs. Cells were grown in the presence and absence of an FTI (10 μM PB-43), and Western blot analyses of cell extracts were performed with antibodies specific for prelamin A (specific for the extreme C terminus), lamin A/C (binds to both lamin A and lamin C), lamin B1, Hdj-2, and actin.

Fig. 2. Immunofluorescence microscopy of wild-type and Lmna HG/+ fibroblasts. (A–E) Immunostaining showing nuclear blebs in Lmna HG/+ MEFs. Blebs are indicated by white arrows. (F) Lmna+/+ MEFs. In this experiment, cells were stained for lamin B1 (red).

We sought to define the impact of an FTI, PB-43, on nuclear shape. To document that this compound was effective in blocking farnesylation, we treated Lmna HG/+ MEFs with the PB-43 FTI and then examined the electrophoretic migration of Hdj-2, a farnesylated protein, with Western blot analyses of SDS/PAGE gels (Fig. 3). In FTI-treated MEFs, nearly all of the Hdj-2 migrated slowly and almost none migrated normally, indicating that the FTI had been effective in blocking protein farnesylation. We also performed Western blot analyses of wild-type and Lmna HG/+ cell extracts with an antibody specific for the C terminus of prelamin A (Fig. 3).
The prelamin A-specific antibody does not normally detect prelamin A in cells because prelamin A is farnesylated and rapidly converted to mature lamin A. However, in the presence of the FTI, prelamin A processing is blocked, and prelamin A is easily detectable in both wild-type and Lmna HG/+ MEFs (Fig. 3). Western blot analyses with an antibody against the N-terminal portion of lamin A detected mature lamin A in wild-type MEFs and in Lmna HG/+ MEFs but detected the slightly larger prelamin A (and virtually no mature lamin A) in MEFs that had been treated with the FTI. The FTI did not yield clear-cut or consistent alterations in the amount of either progerin or lamin B1 in Lmna HG/+ MEFs (Fig. 3). To judge the impact of the FTI on nuclear blebbing in Lmna HG/+ MEFs, we examined both Lmna+/+ and Lmna HG/+ MEFs in a blinded fashion by immunofluorescence microscopy. Lmna HG/+ MEFs had more nuclear blebs than Lmna+/+ cells (P < 0.0001, χ2 test) (Fig. 4). The FTI did not affect nuclear shape in Lmna+/+ MEFs. However, FTI treatment of the Lmna HG/+ MEFs reduced the frequency of nuclear blebbing (P < 0.0001). This result was consistent in several independently isolated Lmna HG/+ cell lines and in several independent experiments (Fig. 4). Of note, the frequency of blebbing in FTI-treated Lmna HG/+ MEFs was not different, as determined by the χ2 statistic, from the frequency of blebbing in FTI-treated or untreated Lmna+/+ MEFs.

Fig. 4. Bar graph showing increased frequency of nuclear blebbing in Lmna HG/+ MEFs and a reduction in nuclear blebbing in Lmna HG/+ MEFs treated with an FTI (10 μM PB-43). A and B represent two independent experiments. Each black circle shows the frequency of nuclear blebbing with an independently isolated fibroblast cell line. Bars indicate the mean frequency of blebbing. The number of cells with nuclear blebs and the total number of cells examined are recorded within each bar.
In both experiments, the FTI did not change the frequency of blebbing in Lmna+/+ MEFs, as determined by the χ2 test. Lmna HG/+ MEFs exhibited more blebbing than Lmna+/+ cells (P < 0.0001, χ2 test). An FTI reduced the frequency of blebbing in Lmna HG/+ MEFs (P < 0.0001, χ2 test). The frequency of blebbing in FTI-treated Lmna HG/+ MEFs was not different from the frequency of blebbing in the treated or untreated Lmna+/+ MEFs. Very similar results, with identical levels of statistical significance, were obtained when the microscopic slides were scored by a second blinded observer.

Because Lmna HG/+ MEFs synthesize lamin A and lamin C in addition to progerin, it was impossible to use immunofluorescence microscopy of Lmna HG/+ MEFs to define the location of progerin within those cells. However, the intracellular location of progerin could be examined in experiments with Lmna HG/HG cells, which synthesize exclusively progerin. In those cells, progerin was located mainly at the nuclear envelope (Fig. 5 A and B). However, in the presence of the FTI, virtually all of the progerin was mislocalized to intensely staining aggregates within the nucleoplasm, and none of it was detectable at the nuclear envelope (Fig. 5 C and D).

Fig. 5. Immunofluorescence images showing the distribution of progerin in untreated and FTI-treated Lmna HG/HG cells. DNA was visualized with DAPI (blue), and progerin was visualized with an antibody against lamin A (red). (A and B) Untreated Lmna HG/HG cells, showing progerin along the nuclear envelope. Misshapen nuclei were common (arrow). (C and D) FTI-treated Lmna HG/HG cells, revealing intensely staining progerin aggregates (arrowheads) in the nucleoplasm.

Discussion

FTIs were developed initially as anticancer agents (17).
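As an aside on the statistics reported above: the blebbing comparisons in Fig. 4 are χ2 tests on counts of blebbed versus normal nuclei. A minimal sketch of such a 2 × 2 comparison is shown below; the counts are hypothetical placeholders for illustration only, not the paper's data.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (1 d.f., no continuity correction)
    for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, the chi-square survival function
    # reduces to erfc(sqrt(chi2 / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts (NOT the paper's data): [blebbed, non-blebbed] nuclei
# in untreated vs. FTI-treated mutant fibroblasts.
chi2, p = chi2_2x2(120, 180, 45, 255)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")
```

For real analyses, `scipy.stats.chi2_contingency(table, correction=False)` yields the same uncorrected statistic; the pure-stdlib version above just makes the arithmetic explicit.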
The concept was straightforward: to eliminate the farnesyl lipid from mutationally activated Ras proteins, thereby mislocalizing these signaling proteins away from the plasma membrane, where they “cause trouble” by stimulating uncontrolled cell division. In this study, we assessed an analogous concept with HGPS, which is a disease in which another farnesylated protein causes trouble in the cell. The synthesis of progerin leads to misshapen nuclei and frequent nuclear blebs (1). We hypothesized that blocking farnesylation would mislocalize progerin away from the nuclear envelope and ameliorate the nuclear blebbing phenotype. We generated a gene-targeted mouse model of HGPS, created genetically identical primary Lmna HG/+ MEFs as well as Lmna HG/HG cells, and tested the impact of an FTI on both nuclear blebbing and progerin localization. In untreated cells, progerin was located mainly along the nuclear envelope, whereas in FTI-treated cells the protein was mislocalized to the nucleoplasm. In the Lmna HG/+ MEFs, the FTI reduced nuclear blebbing to a baseline level observed in untreated wild-type cells. These studies visualize progerin localization independently of lamin A and lamin C, and they show that mislocalization of progerin is associated with a change in nuclear shape. Interestingly, the FTI did not adversely affect nuclear shape in wild-type fibroblasts. Nuclear blebbing is the principal cellular phenotype in HGPS (13, 18), and the FTI clearly ameliorates this phenotype. The next obvious questions are as follows. Will the nuclear shape abnormalities in Lmna HG/+ cells be accompanied by any disease phenotypes at the “whole-animal” level? And, if so, would an FTI also ameliorate those disease phenotypes, including perhaps the atherosclerotic disease that claims the lives of most humans with the disease? With regard to the first question, the Lmna HG/+ mice clearly exhibit unequivocal disease phenotypes (S.H.Y., L.G.F., and S.G.Y., unpublished data). 
By 4 months of age, all of the Lmna HG/+ mice (n = 22) exhibit growth retardation and/or bone disease, akin to homozygous Zmpste24-deficient mice (4). Detailed radiographic and pathologic analyses (e.g., assessing bone density and defining atherosclerosis susceptibility in inbred backgrounds) will take many months to complete. However, the existence of tractable phenotypes in our heterozygous gene-targeted mice makes us confident that this model could be very useful for assessing therapeutic strategies. To properly assess the impact of FTIs on disease phenotypes in mice, we first need to identify methods for the long-term oral delivery of the drugs and document that the drug is efficacious in blocking protein farnesylation in multiple tissues. Again, we anticipate that optimization of FTI delivery schemes will require considerable experimentation. In the meantime, we can only speculate about whether FTIs might be useful for treating the disease phenotypes in the Lmna HG/+ mice. Optimists would contend that it makes intuitive sense (i.e., in line with Occam's razor) that improvements in “whole-animal” disease phenotypes would parallel improvements in cellular phenotypes. Optimists would also point out that the nuclear blebbing phenotypes in cultured fibroblasts and the whole-animal disease phenotypes were clearly associated in a recent study of Zmpste24-deficient mice (12). Zmpste24-deficient fibroblasts, which accumulate wild-type farnesyl prelamin A, manifest nuclear blebbing (6, 12), and Zmpste24-deficient mice develop a variety of progeria-like phenotypes (4, 12). When prelamin A synthesis was reduced by 50% (by introducing a single copy of a Lmna knockout allele), the nuclear blebbing in fibroblasts and the disease phenotypes in mice were eliminated (12). That study was also intriguing because it proved that dramatic improvements in progeria-like disease phenotypes can occur with a 50% reduction in the amount of farnesyl prelamin A in the cell. 
Thus, one might easily imagine that an FTI could ameliorate the disease phenotypes even with incomplete inhibition of farnesylation and incomplete mislocalization of progerin (i.e., without pushing FTI doses to levels associated with side effects). However, drug toxicity may not be much of an issue because FTIs have been generally well tolerated in humans and in mice (19, 20). Genetic studies also support the idea that it should be possible to give FTIs safely on a long-term basis (21). Completely inactivating FTase in adult mice by using Cre/loxP approaches did not cause histopathological abnormalities or any noticeable disease phenotypes (21). There are also reasons to be pessimistic about the proposition that an FTI would be a panacea for HGPS. Although an FTI mislocalizes progerin and improves nuclear blebbing, one could imagine that nonfarnesylated progerin could still be toxic in vivo. Nonfarnesylated progerin is still a structurally abnormal protein, and it is sobering to remember that even single amino acid substitutions in mature lamin A and lamin C (neither of which is farnesylated) can cause a host of different genetic diseases (e.g., several forms of muscular dystrophy, cardiomyopathy, partial lipodystrophy, and mandibuloacral dysplasia) (2, 3). Note in particular that some mutations in the carboxyl-terminal region of lamin A (i.e., the region not shared with lamin C) cause human genetic diseases (22, 23). Moreover, one could argue that improved nuclear blebbing in cultured cells might not be an accurate indicator of disease phenotypes at the whole-animal level, for the simple reason that some LMNA missense mutations have been reported to cause human disease without affecting nuclear shape in cultured fibroblasts (22). Last, a pessimist would certainly point out that HGPS is a dominant disease, caused by a single mutant chromosome. Treatment with an FTI interferes with the biogenesis of mature lamin A from the normal LMNA allele. 
To our knowledge, the short- or long-term consequences of reducing lamin A biogenesis in humans are not known, although a single LMNA null allele, causing a 50% reduction in both lamin A and lamin C synthesis, causes muscular dystrophy (24). In this study, we sought to create a HGPS allele that yielded large amounts of progerin, and our strategy was successful. Although it may have been technically simpler to introduce the codon 608 point mutation identified in humans with HGPS (1), we did not follow this approach because we worried that the “point mutation” approach would not yield sufficient amounts of progerin to elicit phenotypes in the mouse. In humans with HGPS, most of the transcripts from the mutant LMNA allele actually yield wild-type lamin C and wild-type lamin A rather than progerin (1). We worried about this type of allele for mouse experimentation because experience with mouse Lmna mutations has suggested that mice may be “tougher” than humans when it comes to the dose of mutant lamin proteins required to elicit disease phenotypes. For example, humans carrying a single LMNA H222P mutation develop muscular dystrophy, whereas two copies of the H222P allele are required to elicit a muscle phenotype in mice (25). Similarly, humans heterozygous for a LMNA nonsense mutation develop muscular dystrophy (24), whereas mice with a single Lmna knockout allele are normal (26). Thus, we worried that the point mutation approach in the mouse might not yield sufficient levels of progerin to elicit phenotypes. Accordingly, we chose a gene-targeting strategy that would guarantee high levels of progerin expression. As it turned out, we generated the first Lmna mutant mice that exhibit a phenotype with a single copy of the mutant allele. Progerin is clearly the “culprit molecule” in HGPS (1, 13, 18), and our studies raise the hope that FTIs might ultimately prove to be useful for treating HGPS.
The FTI strategy appears to be well suited for testing in children because the drugs have been studied extensively, are generally well tolerated, and can be given orally. However, FTIs are not the only potential hope for HGPS. Recent studies have indicated that the nuclear blebbing phenotype in HGPS fibroblasts can be ameliorated with morpholino antisense reagents (18) or by expressing short hairpin RNA constructs (RNA interference) (Junko Oshima, personal communication). These RNAi and antisense methods directly reduce the production of progerin transcripts, so the finding of reduced nuclear blebbing is probably not particularly surprising. Nevertheless, those studies were very important because they provided hope to patients affected by HGPS and because they will stimulate interest in overcoming the practical and pharmacological obstacles to delivering RNAi and morpholino oligonucleotides to humans.

Acknowledgments

We thank Dr. Luanne Peters for counting chromosomes in the Lmna HG/HG ES cells. This work was supported in part by National Institutes of Health Grants AI054384 (to M.H.G.), R01 CA099506, and R01 AR050200 and a grant from the Progeria Research Foundation (to S.G.Y.).

Author contributions: L.G.F. designed research; S.H.Y., M.O.B., J.I.T., X.Q., Y.H., S.S., and M.M. performed research; S.H.Y., S.G.Y., and L.G.F. analyzed data; P.B. and M.H.G. contributed new reagents/analytic tools; and S.H.Y., S.G.Y., and L.G.F. wrote the paper.

Abbreviations: HGPS, Hutchinson–Gilford progeria syndrome; FTase, protein farnesyltransferase; FTI, FTase inhibitor; MEF, mouse embryonic fibroblast.

References

1. Eriksson, M., Brown, W. T., Gordon, L. B., Glynn, M. W., Singer, J., Scott, L., Erdos, M. R., Robbins, C. M., Moses, T. Y., Berglund, P., et al. (2003) Nature 423, 293-298.
2. Mounkes, L. C., Burke, B. & Stewart, C. L. (2001) Trends Cardiovasc. Med. 11, 280-285.
3. Burke, B. & Stewart, C. L. (2002) Nat. Rev. Mol. Cell Biol. 3, 575-585.
4. Bergo, M. O., Gavino, B., Ross, J., Schmidt, W. K., Hong, C., Kendall, L. V., Mohr, A., Meta, M., Genant, H., Jiang, Y., et al. (2002) Proc. Natl. Acad. Sci. USA 99, 13049-13054.
5. Dai, Q., Choy, E., Chiu, V., Romano, J., Slivka, S. R., Steitz, S. A., Michaelis, S. & Philips, M. R. (1998) J. Biol. Chem. 273, 15030-15034.
6. Pendás, A. M., Zhou, Z., Cadiñanos, J., Freije, J. M. P., Wang, J., Hultenby, K., Astudillo, A., Wernerson, A., Rodríguez, F., Tryggvason, K. & López-Otín, C. (2002) Nat. Genet. 31, 94-99.
7. Corrigan, D. P., Kuszczak, D., Rusinol, A. E., Thewke, D. P., Hrycyna, C. A., Michaelis, S. & Sinensky, M. S. (2005) Biochem. J. 387, 129-138.
8. Hennekes, H. & Nigg, E. A. (1994) J. Cell Sci. 107, 1019-1029.
9. Lutz, R. J., Trujillo, M. A., Denham, K. S., Wenger, L. & Sinensky, M. (1992) Proc. Natl. Acad. Sci. USA 89, 3000-3004.
10. Izumi, M., Vaughan, O. A., Hutchison, C. J. & Gilbert, D. M. (2000) Mol. Biol. Cell 11, 4323-4337.
11. Dalton, M. & Sinensky, M. (1995) Methods Enzymol. 250, 134-148.
12. Fong, L. G., Ng, J. K., Meta, M., Cote, N., Yang, S. H., Stewart, C. L., Sullivan, T., Burghardt, A., Majumdar, S., Reue, K., et al. (2004) Proc. Natl. Acad. Sci. USA 101, 18111-18116.
13. Goldman, R. D., Shumaker, D. K., Erdos, M. R., Eriksson, M., Goldman, A. E., Gordon, L. B., Gruenbaum, Y., Khuon, S., Mendez, M., Varga, R. & Collins, F. S. (2004) Proc. Natl. Acad. Sci. USA 101, 8963-8968.
14. Mortensen, R. M., Conner, D. A., Chao, S., Geisterfer-Lowrance, A. A. T. & Seidman, J. G. (1992) Mol. Cell. Biol. 12, 2391-2395.
15. Nallan, L., Bauer, K. D., Bendale, P., Rivas, K., Yokoyama, K., Hornéy, C. P., Pendyala, P. R., Floyd, D., Lombardo, L. J., Williams, D. K., et al. (2005) J. Med. Chem. 48, 3704-3713.
16. Raharjo, W. H., Enarson, P., Sullivan, T., Stewart, C. L. & Burke, B. (2001) J. Cell Sci. 114, 4447-4457.
17. Reiss, Y., Goldstein, J. L., Seabra, M. C., Casey, P. J. & Brown, M. S. (1990) Cell 62, 81-88.
18. Scaffidi, P. & Misteli, T. (2005) Nat. Med. 11, 440-445.
19. Caraglia, M., D'Alessandro, A. M., Marra, M., Giuberti, G., Vitale, G., Viscomi, C., Colao, A., Prete, S. D., Tagliaferri, P., Tassone, P., et al. (2004) Oncogene 23, 6900-6913.
20. Rogers, M. J. (2003) Curr. Pharm. Des. 9, 2643-2658.
21. Mijimolle, N., Velasco, J., Dubus, P., Guerra, C., Weinbaum, C. A., Casey, P. J., Campuzano, V. & Barbacid, M. (2005) Cancer Cell 7, 313-324.
22. Muchir, A., Medioni, J., Laluc, M., Massart, C., Arimura, T., van der Kooi, A. J., Desguerre, I., Mayer, M., Ferrer, X., Briault, S., et al. (2004) Muscle Nerve 30, 444-450.
23. Csoka, A. B., Cao, H., Sammak, P. J., Constantinescu, D., Schatten, G. P. & Hegele, R. A. (2004) J. Med. Genet. 41, 304-308.
24. Bonne, G., Di Barletta, M. R., Varnous, S., Bécane, H.-M., Hammouda, E.-H., Merlini, L., Muntoni, F., Greenberg, C. R., Gary, F., Urtizberea, J.-A., et al. (1999) Nat. Genet. 21, 285-288.
25. Arimura, T., Helbling-Leclerc, A., Massart, C., Varnous, S., Niel, F., Lacene, E., Fromes, Y., Toussaint, M., Mura, A. M., Keller, D. I., et al. (2005) Hum. Mol. Genet. 14, 155-169.
26. Sullivan, T., Escalante-Alcalde, D., Bhatt, H., Anver, M., Bhat, N., Nagashima, K., Stewart, C. L. & Burke, B. (1999) J. Cell Biol. 147, 913-919.

Articles from Proceedings of the National Academy of Sciences of the United States of America are provided here courtesy of National Academy of Sciences.
https://www.maths.lu.se/fileadmin/maths/personal_staff/Andreas_Jakobsson/StoicaM05.pdf
“sm2” 2004/2/ page i i i i i i i i i SPECTRAL ANALYSIS OF SIGNALS Petre Stoica and Randolph Moses PRENTICE HALL, Upper Saddle River, New Jersey 07458 “sm2” 2004/2/2 page ii i i i i i i i i Library of Congress Cataloging-in-Publication Data Spectral Analysis of Signals/Petre Stoica and Randolph Moses p. cm. Includes bibliographical references index. ISBN 0-13-113956-8 1. Spectral theory (Mathematics) I. Moses, Randolph II. Title 512’–dc21 2005 QA814.G27 00-055035 CIP Acquisitions Editor: Tom Robbins Editor-in-Chief: ? Assistant Vice President of Production and Manufacturing: ? Executive Managing Editor: ? Senior Managing Editor: ? Production Editor: ? Manufacturing Buyer: ? Manufacturing Manager: ? Marketing Manager: ? Marketing Assistant: ? Director of Marketing: ? Editorial Assistant: ? Art Director: ? Interior Designer: ? Cover Designer: ? Cover Photo: ? c ⃝2005 by Prentice Hall, Inc. Upper Saddle River, New Jersey 07458 All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher. Printed in the United States of America 10 9 8 7 6 5 4 3 2 1 ISBN 0-13-113956-8 Pearson Education LTD., London Pearson Education Australia PTY, Limited, Sydney Pearson Education Singapore, Pte. Ltd Pearson Education North Asia Ltd, Hong Kong Pearson Education Canada, Ltd., Toronto Pearson Educacion de Mexico, S.A. de C.V. Pearson Education - Japan, Tokyo Pearson Education Malaysia, Pte. Ltd “sm2” 2004/2/ page iii i i i i i i i i Contents 1 Basic Concepts 1 1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Energy Spectral Density of Deterministic Signals . . . . . . . . . . . 3 1.3 Power Spectral Density of Random Signals . . . . . . . . . . . . . . 4 1.3.1 First Definition of Power Spectral Density . . . . . . . . . . . 6 1.3.2 Second Definition of Power Spectral Density . . . . . . . . . . 7 1.4 Properties of Power Spectral Densities . . . . . . . . . . . . . . . . . 
8 1.5 The Spectral Estimation Problem . . . . . . . . . . . . . . . . . . . . 12 1.6 Complements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 1.6.1 Coherency Spectrum . . . . . . . . . . . . . . . . . . . . . . . 12 1.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2 Nonparametric Methods 22 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 2.2 Periodogram and Correlogram Methods . . . . . . . . . . . . . . . . 22 2.2.1 Periodogram . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 2.2.2 Correlogram . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 2.3 Periodogram Computation via FFT . . . . . . . . . . . . . . . . . . 25 2.3.1 Radix–2 FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 2.3.2 Zero Padding . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 2.4 Properties of the Periodogram Method . . . . . . . . . . . . . . . . . 28 2.4.1 Bias Analysis of the Periodogram . . . . . . . . . . . . . . . . 28 2.4.2 Variance Analysis of the Periodogram . . . . . . . . . . . . . 32 2.5 The Blackman–Tukey Method . . . . . . . . . . . . . . . . . . . . . . 37 2.5.1 The Blackman–Tukey Spectral Estimate . . . . . . . . . . . . 37 2.5.2 Nonnegativeness of the Blackman–Tukey Spectral Estimate . 39 2.6 Window Design Considerations . . . . . . . . . . . . . . . . . . . . . 39 2.6.1 Time–Bandwidth Product and Resolution–Variance Trade-offs in Window Design . . . . . . . . . . . . . . . . . . . . . . 40 2.6.2 Some Common Lag Windows . . . . . . . . . . . . . . . . . . 41 2.6.3 Window Design Example . . . . . . . . . . . . . . . . . . . . 45 2.6.4 Temporal Windows and Lag Windows . . . . . . . . . . . . . 47 2.7 Other Refined Periodogram Methods . . . . . . . . . . . . . . . . . . 48 2.7.1 Bartlett Method . . . . . . . . . . . . . . . . . . . . . . . . . 49 2.7.2 Welch Method . . . . . . . . . . . . . . . . . . . . . . . . . . 50 2.7.3 Daniell Method . . . . . . . . . . 
. . . . . . . . . . . . . . . . 52 2.8 Complements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 2.8.1 Sample Covariance Computation via FFT . . . . . . . . . . . 55 2.8.2 FFT–Based Computation of Windowed Blackman–Tukey Pe-riodograms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 2.8.3 Data and Frequency Dependent Temporal Windows: The Apodization Approach . . . . . . . . . . . . . . . . . . . . . . 59 iii “sm2” 2004/2/ page iv i i i i i i i i iv 2.8.4 Estimation of Cross–Spectra and Coherency Spectra . . . . . 64 2.8.5 More Time–Bandwidth Product Results . . . . . . . . . . . . 66 2.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 3 Parametric Methods for Rational Spectra 86 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 3.2 Signals with Rational Spectra . . . . . . . . . . . . . . . . . . . . . . 87 3.3 Covariance Structure of ARMA Processes . . . . . . . . . . . . . . . 88 3.4 AR Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 3.4.1 Yule–Walker Method . . . . . . . . . . . . . . . . . . . . . . . 90 3.4.2 Least Squares Method . . . . . . . . . . . . . . . . . . . . . . 91 3.5 Order–Recursive Solutions to the Yule–Walker Equations . . . . . . 94 3.5.1 Levinson–Durbin Algorithm . . . . . . . . . . . . . . . . . . . 96 3.5.2 Delsarte–Genin Algorithm . . . . . . . . . . . . . . . . . . . . 97 3.6 MA Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 3.7 ARMA Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 3.7.1 Modified Yule–Walker Method . . . . . . . . . . . . . . . . . 103 3.7.2 Two–Stage Least Squares Method . . . . . . . . . . . . . . . 106 3.8 Multivariate ARMA Signals . . . . . . . . . . . . . . . . . . . . . . . 109 3.8.1 ARMA State–Space Equations . . . . . . . . . . . . . . . . . 109 3.8.2 Subspace Parameter Estimation — Theoretical Aspects . . . 
113 3.8.3 Subspace Parameter Estimation — Implementation Aspects . 115 3.9 Complements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 3.9.1 The Partial Autocorrelation Sequence . . . . . . . . . . . . . 117 3.9.2 Some Properties of Covariance Extensions . . . . . . . . . . . 118 3.9.3 The Burg Method for AR Parameter Estimation . . . . . . . 119 3.9.4 The Gohberg–Semencul Formula . . . . . . . . . . . . . . . . 122 3.9.5 MA Parameter Estimation in Polynomial Time . . . . . . . . 125 3.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 4 Parametric Methods for Line Spectra 144 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 4.2 Models of Sinusoidal Signals in Noise . . . . . . . . . . . . . . . . . . 148 4.2.1 Nonlinear Regression Model . . . . . . . . . . . . . . . . . . . 148 4.2.2 ARMA Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 4.2.3 Covariance Matrix Model . . . . . . . . . . . . . . . . . . . . 149 4.3 Nonlinear Least Squares Method . . . . . . . . . . . . . . . . . . . . 151 4.4 High–Order Yule–Walker Method . . . . . . . . . . . . . . . . . . . . 155 4.5 Pisarenko and MUSIC Methods . . . . . . . . . . . . . . . . . . . . . 159 4.6 Min–Norm Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 4.7 ESPRIT Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 4.8 Forward–Backward Approach . . . . . . . . . . . . . . . . . . . . . . 168 4.9 Complements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 4.9.1 Mean Square Convergence of Sample Covariances for Line Spectral Processes . . . . . . . . . . . . . . . . . . . . . . . . 170 4.9.2 The Carath´ eodory Parameterization of a Covariance Matrix . 172 “sm2” 2004/2/ page v i i i i i i i i v 4.9.3 Using the Unwindowed Periodogram for Sine Wave Detection in White Noise . . . . . . . . . . . . . . . . . . . . . . . . . . 
174 4.9.4 NLS Frequency Estimation for a Sinusoidal Signal with Time-Varying Amplitude . . . . . . . . . . . . . . . . . . . . . . . . 177 4.9.5 Monotonically Descending Techniques for Function Minimiza-tion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 4.9.6 Frequency-selective ESPRIT-based Method . . . . . . . . . . 185 4.9.7 A Useful Result for Two-Dimensional (2D) Sinusoidal Signals 193 4.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 5 Filter Bank Methods 207 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 5.2 Filter Bank Interpretation of the Periodogram . . . . . . . . . . . . . 210 5.3 Refined Filter Bank Method . . . . . . . . . . . . . . . . . . . . . . . 212 5.3.1 Slepian Baseband Filters . . . . . . . . . . . . . . . . . . . . . 213 5.3.2 RFB Method for High–Resolution Spectral Analysis . . . . . 216 5.3.3 RFB Method for Statistically Stable Spectral Analysis . . . . 218 5.4 Capon Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222 5.4.1 Derivation of the Capon Method . . . . . . . . . . . . . . . . 222 5.4.2 Relationship between Capon and AR Methods . . . . . . . . 228 5.5 Filter Bank Reinterpretation of the Periodogram . . . . . . . . . . . 231 5.6 Complements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235 5.6.1 Another Relationship between the Capon and AR Methods . 235 5.6.2 Multiwindow Interpretation of Daniell and Blackman–Tukey Periodograms . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 5.6.3 Capon Method for Exponentially Damped Sinusoidal Signals 241 5.6.4 Amplitude and Phase Estimation Method (APES) . . . . . . 244 5.6.5 Amplitude and Phase Estimation Method for Gapped Data (GAPES) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 5.6.6 Extensions of Filter Bank Approaches to Two–Dimensional Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 5.7 Exercises . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . 257 6 Spatial Methods 263 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 6.2 Array Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265 6.2.1 The Modulation–Transmission–Demodulation Process . . . . 266 6.2.2 Derivation of the Model Equation . . . . . . . . . . . . . . . 268 6.3 Nonparametric Methods . . . . . . . . . . . . . . . . . . . . . . . . . 273 6.3.1 Beamforming . . . . . . . . . . . . . . . . . . . . . . . . . . . 276 6.3.2 Capon Method . . . . . . . . . . . . . . . . . . . . . . . . . . 279 6.4 Parametric Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 281 6.4.1 Nonlinear Least Squares Method . . . . . . . . . . . . . . . . 281 6.4.2 Yule–Walker Method . . . . . . . . . . . . . . . . . . . . . . . 283 6.4.3 Pisarenko and MUSIC Methods . . . . . . . . . . . . . . . . . 284 6.4.4 Min–Norm Method . . . . . . . . . . . . . . . . . . . . . . . . 285 6.4.5 ESPRIT Method . . . . . . . . . . . . . . . . . . . . . . . . . 285 “sm2” 2004/2/ page vi i i i i i i i i vi 6.5 Complements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 6.5.1 On the Minimum Norm Constraint . . . . . . . . . . . . . . . 286 6.5.2 NLS Direction-of-Arrival Estimation for a Constant-Modulus Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288 6.5.3 Capon Method: Further Insights and Derivations . . . . . . . 290 6.5.4 Capon Method for Uncertain Direction Vectors . . . . . . . . 294 6.5.5 Capon Method with Noise Gain Constraint . . . . . . . . . . 298 6.5.6 Spatial Amplitude and Phase Estimation (APES) . . . . . . 305 6.5.7 The CLEAN Algorithm . . . . . . . . . . . . . . . . . . . . . 312 6.5.8 Unstructured and Persymmetric ML Estimates of the Covari-ance Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
319

APPENDICES

A Linear Algebra and Matrix Analysis Tools  328
  A.1 Introduction  328
  A.2 Range Space, Null Space, and Matrix Rank  328
  A.3 Eigenvalue Decomposition  330
    A.3.1 General Matrices  331
    A.3.2 Hermitian Matrices  333
  A.4 Singular Value Decomposition and Projection Operators  336
  A.5 Positive (Semi)Definite Matrices  341
  A.6 Matrices with Special Structure  345
  A.7 Matrix Inversion Lemmas  347
  A.8 Systems of Linear Equations  347
    A.8.1 Consistent Systems  347
    A.8.2 Inconsistent Systems  350
  A.9 Quadratic Minimization  353

B Cramér–Rao Bound Tools  355
  B.1 Introduction  355
  B.2 The CRB for General Distributions  358
  B.3 The CRB for Gaussian Distributions  359
  B.4 The CRB for Line Spectra  364
  B.5 The CRB for Rational Spectra  365
  B.6 The CRB for Spatial Spectra  367

C Model Order Selection Tools  377
  C.1 Introduction  377
  C.2 Maximum Likelihood Parameter Estimation  378
  C.3 Useful Mathematical Preliminaries and Outlook  381
    C.3.1 Maximum A Posteriori (MAP) Selection Rule  382
    C.3.2 Kullback–Leibler Information  384
    C.3.3 Outlook: Theoretical and Practical Perspectives  385
  C.4 Direct Kullback–Leibler (KL) Approach: No-Name Rule  386
  C.5 Cross-Validatory KL Approach: The AIC Rule  387
  C.6 Generalized Cross-Validatory KL Approach: The GIC Rule  391
  C.7 Bayesian Approach: The BIC Rule  392
  C.8 Summary and the Multimodel Approach  395
    C.8.1 Summary  395
    C.8.2 The Multimodel Approach  397

D Answers to Selected Exercises  399

Bibliography  401
References Grouped by Subject  413
Index  420

List of Exercises

CHAPTER 1
1.1 Scaling of the Frequency Axis
1.2 Time–Frequency Distributions
1.3 Two Useful Z–Transform Properties
1.4 A Simple ACS Example
1.5 Alternative Proof that $|r(k)| \le r(0)$
1.6 A Double Summation Formula
1.7 Is a Truncated Autocovariance Sequence (ACS) a Valid ACS?
1.8 When Is a Sequence an Autocovariance Sequence?
1.9 Spectral Density of the Sum of Two Correlated Signals
1.10 Least Squares Spectral Approximation
1.11 Linear Filtering and the Cross–Spectrum
C1.12 Computer Generation of Autocovariance Sequences
C1.13 DTFT Computations using Two–Sided Sequences
C1.14 Relationship between the PSD and the Eigenvalues of the ACS Matrix

CHAPTER 2
2.1 Covariance Estimation for Signals with Unknown Means
2.2 Covariance Estimation for Signals with Unknown Means (cont'd)
2.3 Unbiased ACS Estimates may lead to Negative Spectral Estimates
2.4 Variance of Estimated ACS
2.5 Another Proof of the Equality $\hat\phi_p(\omega) = \hat\phi_c(\omega)$
2.6 A Compact Expression for the Sample ACS
2.7 Yet Another Proof of the Equality $\hat\phi_p(\omega) = \hat\phi_c(\omega)$
2.8 Linear Transformation Interpretation of the DFT
2.9 For White Noise the Periodogram is an Unbiased PSD Estimator
2.10 Shrinking the Periodogram
2.11 Asymptotic Maximum Likelihood Estimation of $\phi(\omega)$ from $\hat\phi_p(\omega)$
2.12 Plotting the Spectral Estimates in dB
2.13 Finite–Sample Variance/Covariance Analysis of the Periodogram
2.14 Data–Weighted ACS Estimate Interpretation of Bartlett and Welch Methods
2.15 Approximate Formula for Bandwidth Calculation
2.16 A Further Look at the Time–Bandwidth Product
2.17 Bias Considerations in Blackman–Tukey Window Design
2.18 A Property of the Bartlett Window
C2.19 Zero Padding Effects on Periodogram Estimators
C2.20 Resolution and Leakage Properties of the Periodogram
C2.21 Bias and Variance Properties of the Periodogram Spectral Estimate
C2.22 Refined Methods: Variance–Resolution Tradeoff
C2.23 Periodogram–Based Estimators applied to Measured Data

CHAPTER 3
3.1 The Minimum Phase Property
3.2 Generating the ACS from ARMA Parameters
3.3 Relationship between AR Modeling and Forward Linear Prediction
3.4 Relationship between AR Modeling and Backward Linear Prediction
3.5 Prediction Filters and Smoothing Filters
3.6 Relationship between Minimum Prediction Error and Spectral Flatness
3.7 Diagonalization of the Covariance Matrix
3.8 Stability of Yule–Walker AR Models
3.9 Three Equivalent Representations for AR Processes
3.10 An Alternative Proof of the Stability Property of Reflection Coefficients
3.11 Recurrence Properties of Reflection Coefficient Sequence for an MA Model
3.12 Asymptotic Variance of the ARMA Spectral Estimator
3.13 Filtering Interpretation of Numerator Estimators in ARMA Estimation
3.14 An Alternative Expression for ARMA Power Spectral Density
3.15 Padé Approximation
3.16 (Non)Uniqueness of Fully Parameterized ARMA Equations
C3.17 Comparison of AR, ARMA and Periodogram Methods for ARMA Signals
C3.18 AR and ARMA Estimators for Line Spectral Estimation
C3.19 Model Order Selection for AR and ARMA Processes
C3.20 AR and ARMA Estimators applied to Measured Data

CHAPTER 4
4.1 Speed Measurement by a Doppler Radar as a Frequency Determination Problem
4.2 ACS of Sinusoids with Random Amplitudes or Nonuniform Phases
4.3 A Nonergodic Sinusoidal Signal
4.4 AR Model–Based Frequency Estimation
4.5 An ARMA Model–Based Derivation of the Pisarenko Method
4.6 Frequency Estimation when Some Frequencies are Known
4.7 A Combined HOYW-ESPRIT Method for the MA Noise Case
4.8 Chebyshev Inequality and the Convergence of Sample Covariances
4.9 More about the Forward–Backward Approach
4.10 ESPRIT and Min–Norm Under the Same Umbrella
4.11 Yet Another Relationship between ESPRIT and Min–Norm
C4.12 Resolution Properties of Subspace Methods for Estimation of Line Spectra
C4.13 Model Order Selection for Sinusoidal Signals
C4.14 Line Spectral Methods applied to Measured Data

CHAPTER 5
5.1 Multiwindow Interpretation of Bartlett and Welch Methods
5.2 An Alternative Statistically Stable RFB Estimate
5.3 Another Derivation of the Capon FIR Filter
5.4 The Capon Filter is a Matched Filter
5.5 Computation of the Capon Spectrum
5.6 A Relationship between the Capon Method and MUSIC (Pseudo)Spectra
5.7 A Capon–like Implementation of MUSIC
5.8 Capon Estimate of the Parameters of a Single Sine Wave
5.9 An Alternative Derivation of the Relationship between the Capon and AR Methods
C5.10 Slepian Window Sequences
C5.11 Resolution of Refined Filter Bank Methods
C5.12 The Statistically Stable RFB Power Spectral Estimator
C5.13 The Capon Method

CHAPTER 6
6.1 Source Localization using a Sensor in Motion
6.2 Beamforming Resolution for Uniform Linear Arrays
6.3 Beamforming Resolution for Arbitrary Arrays
6.4 Beamforming Resolution for L–Shaped Arrays
6.5 Relationship between Beamwidth and Array Element Locations
6.6 Isotropic Arrays
6.7 Grating Lobes
6.8 Beamspace Processing
6.9 Beamspace Processing (cont'd)
6.10 Beamforming and MUSIC under the Same Umbrella
6.11 Subspace Fitting Interpretation of MUSIC
6.12 Subspace Fitting Interpretation of MUSIC (cont'd.)
6.13 Subspace Fitting Interpretation of MUSIC (cont'd.)
6.14 Modified MUSIC for Coherent Signals
C6.15 Comparison of Spatial Spectral Estimators
C6.16 Performance of Spatial Spectral Estimators for Coherent Source Signals
C6.17 Spatial Spectral Estimators applied to Measured Data

Preface

Spectral analysis considers the problem of determining the spectral content (i.e., the distribution of power over frequency) of a time series from a finite set of measurements, by means of either nonparametric or parametric techniques. The history of spectral analysis as an established discipline started more than a century ago with the work by Schuster on detecting cyclic behavior in time series. An interesting historical perspective on the developments in this field can be found in [Marple 1987]. This reference notes that the word "spectrum" was apparently introduced by Newton in relation to his studies of the decomposition of white light into a band of light colors, when passed through a glass prism (as illustrated on the front cover).
This word appears to be a variant of the Latin word "specter" which means "ghostly apparition". The contemporary English word that has the same meaning as the original Latin word is "spectre". Despite these roots of the word "spectrum", we hope the student will be a "vivid presence" in the course that has just started!

This text, which is a revised and expanded version of Introduction to Spectral Analysis (Prentice Hall, 1997), is designed to be used with a first course in spectral analysis that would typically be offered to senior undergraduate or first–year graduate students. The book should also be useful for self-study, as it is largely self-contained. The text is concise by design, so that it gets to the main points quickly and should hence be appealing to those who would like a fast appraisal on the classical and modern approaches of spectral analysis. In order to keep the book as concise as possible without sacrificing the rigor of presentation or skipping over essential aspects, we do not cover some advanced topics of spectral estimation in the main part of the text. However, several advanced topics are considered in the complements that appear at the end of each chapter, and also in the appendices. For an introductory course, the reader can skip the complements and refer to results in the appendices without having to understand in detail their derivation.

For the more advanced reader, we have included three appendices and a number of complement sections in each chapter. The appendices provide a summary of the main techniques and results in linear algebra, statistical accuracy bounds, and model order selection, respectively. The complements present a broad range of advanced topics in spectral analysis. Many of these are current or recent research topics in the spectral analysis literature.

At the end of each chapter we have included both analytical exercises and computer problems.
The analytical exercises are more–or–less ordered from least to most difficult; this ordering also approximately follows the chronological presentation of material in the chapters. The more difficult exercises explore advanced topics in spectral analysis and provide results which are not available in the main text. Answers to selected exercises are found in Appendix D. The computer problems are designed to illustrate the main points of the text and to provide the reader with first–hand information on the behavior and performance of the various spectral analysis techniques considered. The computer exercises also illustrate the relative performance of the methods and explore other topics such as statistical accuracy, resolution properties, and the like, that are not analytically developed in the book. We have used Matlab¹ to minimize the programming chore and to encourage the reader to "play" with other examples. We provide a set of Matlab functions for data generation and spectral estimation that form a basis for a comprehensive set of spectral estimation tools; these functions are available at the text web site www.prenhall.com/stoica.

Supplementary material may also be obtained from the text web site. We have prepared a set of overhead transparencies which can be used as a teaching aid for a spectral analysis course. We believe that these transparencies are useful not only to course instructors but also to other readers, because they summarize the principal methods and results in the text. For readers who study the topic on their own, it should be a useful exercise to refer to the main points addressed in the transparencies after completing the reading of each chapter.

As we mentioned earlier, this text is a revised and expanded version of Introduction to Spectral Analysis (Prentice Hall, 1997).
We have maintained the conciseness and accessibility of the main text; the revision has primarily focused on expanding the complements, appendices, and bibliography. Specifically, we have expanded Appendix B to include a detailed discussion of Cramér–Rao bounds for direction-of-arrival estimation. We have added Appendix C, which covers model order selection, and have added new computer exercises on order selection. We have more than doubled the number of complements from the previous book to 32, most of which present recent results in spectral analysis. We have also expanded the bibliography to include new topics along with recent results on more established topics.

The text is organized as follows. Chapter 1 introduces the spectral analysis problem, motivates the definition of power spectral density functions, and reviews some important properties of autocorrelation sequences and spectral density functions. Chapters 2 and 5 consider nonparametric spectral estimation. Chapter 2 presents classical techniques, including the periodogram, the correlogram, and their modified versions to reduce variance. We include an analysis of bias and variance of these techniques, and relate them to one another. Chapter 5 considers the more recent filter bank version of nonparametric techniques, including both data-independent and data-dependent filter design techniques. Chapters 3 and 4 consider parametric techniques; Chapter 3 focuses on continuous spectral models (Autoregressive Moving Average (ARMA) models and their AR and MA special cases), while Chapter 4 focuses on discrete spectral models (sinusoids in noise). We have placed the filter bank methods in Chapter 5, after Chapters 3 and 4, mainly because the Capon estimator has interpretations as both an averaged AR spectral estimator and as a matched filter for line spectral models, and we need the background of Chapters 3 and 4 to develop these interpretations.
The data-independent filter bank techniques in Sections 5.1–5.4 can equally well be covered directly following Chapter 2, if desired.

Chapter 6 considers the closely-related problem of spatial spectral estimation in the context of array signal processing. Both nonparametric (beamforming) and parametric methods are considered, and tied into the temporal spectral estimation techniques considered in Chapters 2, 4 and 5.

¹ Matlab® is a registered trademark of The Mathworks, Inc.

The Bibliography contains both modern and classical references (ordered both alphabetically and by subject). We include many historical references as well, for those interested in tracing the early developments of spectral analysis. However, spectral analysis is a topic with contributions from many diverse fields, including electrical and mechanical engineering, astronomy, biomedical spectroscopy, geophysics, mathematical statistics, and econometrics to name a few. As such, any attempt to accurately document the historical development of spectral analysis is doomed to failure. The bibliography reflects our own perspectives, biases, and limitations; while there is no doubt that the list is incomplete, we hope that it gives the reader an appreciation of the breadth and diversity of the spectral analysis field.

The background needed for this text includes a basic knowledge of linear algebra, discrete-time linear systems, and introductory discrete-time stochastic processes (or time series). A basic understanding of estimation theory is helpful, though not required. Appendix A develops most of the needed background results on matrices and linear algebra, Appendix B gives a tutorial introduction to the Cramér–Rao bound, and Appendix C develops the theory of model order selection. We have included concise definitions and descriptions of the required concepts and results where needed. Thus, we have tried to make the text as self-contained as possible.
We are indebted to Jian Li and Lee Potter for adopting a former version of the text in their spectral estimation classes, for their valuable feedback, and for contributing to this book in several other ways. We would like to thank Torsten Söderström for providing the initial stimulus for preparation of lecture notes that led to the book, and Hung-Chih Chiang, Peter Händel, Ari Kangas, Erlendur Karlsson, and Lee Swindlehurst for careful proofreading and comments, and for many ideas on and early drafts of the computer problems. We are grateful to Mats Bengtsson, Tryphon Georgiou, K.V.S. Hari, Andreas Jakobsson, Erchin Serpedin, and Andreas Spanias for comments and suggestions that helped us eliminate some inadvertencies and typographical errors from the previous edition of the book. We also wish to thank Wallace Anderson, Alfred Hero, Ralph Hippenstiel, Louis Scharf, and Douglas Williams, who reviewed a former version of the book and provided us with numerous useful comments and suggestions. It was a pleasure to work with the excellent staff at Prentice Hall, and we are particularly appreciative of Tom Robbins for his professional expertise.

Many of the topics described in this book are outgrowths of our research programs in statistical signal and array processing, and we wish to thank the sponsors of this research: the Swedish Foundation for Strategic Research, the Swedish Research Council, the Swedish Institute, the U.S. Army Research Laboratory, the U.S. Air Force Research Laboratory, and the U.S. Defense Advanced Research Projects Agency.

Finally, we are indebted to Anca and Liz for their continuing support and understanding throughout this project.

Petre Stoica, Uppsala University
Randy Moses, The Ohio State University

Notational Conventions

R                the set of real numbers
C                the set of complex numbers
N(A)             the null space of the matrix A (p. 328)
R(A)             the range space of the matrix A (p. 328)
Dn               the nth definition in Appendix A or B
Rn               the nth result in Appendix A
$\|x\|$          the Euclidean norm of a vector x
$*$              convolution operator
$(\cdot)^T$      transpose of a vector or matrix
$(\cdot)^c$      conjugate of a vector or matrix
$(\cdot)^*$      conjugate transpose of a vector or matrix; also used for scalars in lieu of $(\cdot)^c$
$A_{ij}$         the (i, j)th element of the matrix A
$a_i$            the ith element of the vector a
$\hat x$         an estimate of the quantity x
$A > 0\ (\ge 0)$ A is positive definite (positive semidefinite) (p. 341)
$\arg\max_x f(x)$  the value of x that maximizes f(x)
$\arg\min_x f(x)$  the value of x that minimizes f(x)
cov{x, y}        the covariance between x and y
$|x|$            the modulus of the (possibly complex) scalar x
$|A|$            the determinant of the square matrix A
diag(a)          the square diagonal matrix whose diagonal elements are the elements of the vector a
$\delta_{k,l}$   Kronecker delta: $\delta_{k,l} = 1$ if $k = l$ and $\delta_{k,l} = 0$ otherwise
$\delta(t-t_0)$  Dirac delta: $\delta(t-t_0) = 0$ for $t \ne t_0$; $\int_{-\infty}^{\infty} \delta(t-t_0)\,dt = 1$
E{x}             the expected value of x (p. 5)
f                (discrete-time) frequency: $f = \omega/2\pi$, in cycles per sampling interval (p. 8)
$\phi(\omega)$   a power spectral density function (p. 6)
Im{x}            the imaginary part of x
O(x)             on the order of x (p. 32)
p(x)             probability density function
Pr{A}            the probability of event A
r(k)             an autocovariance sequence (p. 5)
Re{x}            the real part of x
t                discrete-time index
tr(A)            the trace of the matrix A (p. 331)
var{x}           the variance of x
$w(k), W(\omega)$      a window sequence and its Fourier transform
$w_B(k), W_B(\omega)$  the Bartlett (or triangular) window sequence and its Fourier transform (p. 29)
$w_R(k), W_R(\omega)$  the rectangular (or Dirichlet) window sequence and its Fourier transform (p. 30)
$\omega$         radian (angular) frequency, in radians/sampling interval (p. 3)
$z^{-1}$         unit delay operator: $z^{-1}x(t) = x(t-1)$ (p. 10)

Abbreviations

ACS     autocovariance sequence (p. 5)
APES    amplitude and phase estimation (p. 244)
AR      autoregressive (p. 88)
ARMA    autoregressive moving-average (p. 88)
BSP     beamspace processing (p. 323)
BT      Blackman–Tukey (p. 37)
CM      Capon method (p. 222)
CCM     constrained Capon method (p. 300)
CRB     Cramér–Rao bound (p. 355)
DFT     discrete Fourier transform (p. 25)
DGA     Delsarte–Genin algorithm (p. 95)
DOA     direction of arrival (p. 264)
DTFT    discrete-time Fourier transform (p. 3)
ESP     elementspace processing (p. 323)
ESPRIT  estimation of signal parameters by rotational invariance techniques (p. 166)
EVD     eigenvalue decomposition (p. 330)
FB      forward–backward (p. 168)
FBA     filter bank approach (p. 208)
FFT     fast Fourier transform (p. 26)
FIR     finite impulse response (p. 17)
flop    floating point operation (p. 26)
GAPES   gapped amplitude and phase estimation (p. 247)
GS      Gohberg–Semencul (formula) (p. 122)
HOYW    high–order Yule–Walker (p. 155)
i.i.d.  independent, identically distributed (p. 317)
LDA     Levinson–Durbin algorithm (p. 95)
LS      least squares (p. 350)
MA      moving-average (p. 88)
MFD     matrix fraction description (p. 137)
ML      maximum likelihood (p. 356)
MLE     maximum likelihood estimate (p. 356)
MSE     mean squared error (p. 28)
MUSIC   multiple signal classification (or characterization) (p. 159)
MYW     modified Yule–Walker (p. 96)
NLS     nonlinear least squares (p. 145)
PARCOR  partial correlation (p. 96)
PSD     power spectral density (p. 5)
RFB     refined filter bank (p. 212)
QRD     Q-R decomposition (p. 351)
RCM     robust Capon method (p. 299)
SNR     signal-to-noise ratio (p. 81)
SVD     singular value decomposition (p. 336)
TLS     total least squares (p. 352)
ULA     uniform linear array (p. 271)
YW      Yule–Walker (p. 90)

CHAPTER 1
Basic Concepts

1.1 INTRODUCTION

The essence of the spectral estimation problem is captured by the following informal formulation.

From a finite record of a stationary data sequence, estimate how the total power is distributed over frequency.    (1.1.1)

Spectral analysis finds applications in many diverse fields.
In vibration monitoring, the spectral content of measured signals gives information on the wear and other characteristics of mechanical parts under study. In economics, meteorology, astronomy and several other fields, the spectral analysis may reveal "hidden periodicities" in the studied data, which are to be associated with cyclic behavior or recurring processes. In speech analysis, spectral models of voice signals are useful in better understanding the speech production process, and, in addition, can be used for both speech synthesis (or compression) and speech recognition. In radar and sonar systems, the spectral contents of the received signals provide information on the location of the sources (or targets) situated in the field of view. In medicine, spectral analysis of various signals measured from a patient, such as electrocardiogram (ECG) or electroencephalogram (EEG) signals, can provide useful material for diagnosis. In seismology, the spectral analysis of the signals recorded prior to and during a seismic event (such as a volcano eruption or an earthquake) gives useful information on the ground movement associated with such events and may help in predicting them. Seismic spectral estimation is also used to predict subsurface geologic structure in gas and oil exploration. In control systems, there is a resurging interest in spectral analysis methods as a means of characterizing the dynamical behavior of a given system, and ultimately synthesizing a controller for that system.

The previous and other applications of spectral analysis are reviewed in [Kay 1988; Marple 1987; Bloomfield 1976; Bracewell 1986; Haykin 1991; Haykin 1995; Hayes III 1996; Koopmans 1974; Priestley 1981; Percival and Walden 1993; Porat 1994; Scharf 1991; Therrien 1992; Proakis, Rader, Ling, and Nikias 1992]. The textbook [Marple 1987] also contains a well–written historical perspective on spectral estimation which is worth reading.
Many of the classical articles on spectral analysis, both application–driven and theoretical, are reprinted in [Childers 1978; Kesler 1986]; these excellent collections of reprints are well worth consulting.

There are two broad approaches to spectral analysis. One of these derives its basic idea directly from definition (1.1.1): the studied signal is applied to a bandpass filter with a narrow bandwidth, which is swept through the frequency band of interest, and the filter output power divided by the filter bandwidth is used as a measure of the spectral content of the input to the filter. This is essentially what the classical (or nonparametric) methods of spectral analysis do. These methods are described in Chapters 2 and 5 of this text (the fact that the methods of Chapter 2 can be given the above filter bank interpretation is made clear in Chapter 5). The second approach to spectral estimation, called the parametric approach, is to postulate a model for the data, which provides a means of parameterizing the spectrum, and to thereby reduce the spectral estimation problem to that of estimating the parameters in the assumed model. The parametric approach to spectral analysis is treated in Chapters 3, 4 and 6. Parametric methods may offer more accurate spectral estimates than the nonparametric ones in the cases where the data indeed satisfy the model assumed by the former methods. However, in the more likely case that the data do not satisfy the assumed models, the nonparametric methods may outperform the parametric ones owing to the sensitivity of the latter to model misspecifications. This observation has motivated renewed interest in the nonparametric approach to spectral estimation.

Many real–world signals can be characterized as being random (from the observer's viewpoint).
Briefly speaking, this means that the variation of such a signal outside the observed interval cannot be determined exactly but only specified in statistical terms of averages. In this text, we will be concerned with estimating the spectral characteristics of random signals. In spite of this fact, we find it useful to start the discussion by considering the spectral analysis of deterministic signals (which we do in the first section of this chapter).

Throughout this work, we consider discrete signals (or data sequences). Such signals are most commonly obtained by the temporal or spatial sampling of a continuous (in time or space) signal. The main motivation for focusing on discrete signals lies in the fact that spectral analysis is most often performed by a digital computer or by digital circuitry. Chapters 2 to 5 of this text deal with discrete–time signals, while Chapter 6 considers the case of discrete–space data sequences. In the interest of notational simplicity, the discrete–time variable t, as used in this text, is assumed to be measured in units of sampling interval. A similar convention is adopted for spatial signals, whenever the sampling is uniform. Accordingly, the units of frequency are cycles per sampling interval.

The signals dealt with in the text are complex–valued. Complex–valued data may appear in signal processing and spectral estimation applications, for instance, as a result of a "complex demodulation" process (this is explained in detail in Chapter 6). It should be noted that the treatment of complex–valued signals is not always more general or more difficult than the analysis of corresponding real–valued signals. A typical example which illustrates this claim is the case of sinusoidal signals considered in Chapter 4.
A real–valued sinusoidal signal, $\alpha\cos(\omega t+\varphi)$, can be rewritten as a linear combination of two complex–valued sinusoidal signals, $\alpha_1 e^{i(\omega_1 t+\varphi_1)} + \alpha_2 e^{i(\omega_2 t+\varphi_2)}$, whose parameters are constrained as follows: $\alpha_1=\alpha_2=\alpha/2$, $\varphi_1=-\varphi_2=\varphi$ and $\omega_1=-\omega_2=\omega$. Here $i=\sqrt{-1}$. The fact that we need to consider two constrained complex sine waves to treat the case of one unconstrained real sine wave shows that the real–valued case of sinusoidal signals can actually be considered to be more complicated than the complex–valued case! Fortunately, it appears that the latter case is encountered more frequently in applications, where often both the in–phase and quadrature components of the studied signal are available. (For more details and explanations on this aspect, see Chapter 6's introductory section.)

1.2 ENERGY SPECTRAL DENSITY OF DETERMINISTIC SIGNALS

Let $\{y(t);\ t=0,\pm1,\pm2,\dots\}$ denote a deterministic discrete–time data sequence. Most commonly, $\{y(t)\}$ is obtained by sampling a continuous–time signal. For notational convenience, the time index $t$ is expressed in units of sampling interval; that is, $y(t)=y_c(t\cdot T_s)$, where $y_c(\cdot)$ is the continuous-time signal and $T_s$ is the sampling time interval. Assume that $\{y(t)\}$ has finite energy, which means that

$$\sum_{t=-\infty}^{\infty}|y(t)|^2<\infty \tag{1.2.1}$$

Then, under some additional regularity conditions, the sequence $\{y(t)\}$ possesses a discrete–time Fourier transform (DTFT) defined as

$$Y(\omega)=\sum_{t=-\infty}^{\infty}y(t)e^{-i\omega t}\qquad\text{(DTFT)} \tag{1.2.2}$$

In this text we use the symbol $Y(\omega)$, in lieu of the more cumbersome $Y(e^{i\omega})$, to denote the DTFT. This notational convention is commented on a bit later, following equation (1.4.6). The corresponding inverse DTFT is then

$$y(t)=\frac{1}{2\pi}\int_{-\pi}^{\pi}Y(\omega)e^{i\omega t}\,d\omega\qquad\text{(Inverse DTFT)} \tag{1.2.3}$$

which can be verified by substituting (1.2.3) into (1.2.2). The (angular) frequency $\omega$ is measured in radians per sampling interval.
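The decomposition of a real sinusoid into two constrained complex exponentials can be checked numerically. The following sketch is not from the text (whose companion code is in Matlab); it is a minimal Python/NumPy equivalent with arbitrarily chosen example parameters:

```python
import numpy as np

# Verify numerically that a real sinusoid equals the sum of the two
# constrained complex sinusoids from the text:
#   alpha*cos(w*t + phi) = (alpha/2)*e^{i(w*t+phi)} + (alpha/2)*e^{-i(w*t+phi)}
alpha, w, phi = 1.5, 0.3 * np.pi, 0.7   # arbitrary example parameters
t = np.arange(-50, 51)                  # a symmetric stretch of time indices

real_sine = alpha * np.cos(w * t + phi)
two_complex = (alpha / 2) * np.exp(1j * (w * t + phi)) \
            + (alpha / 2) * np.exp(-1j * (w * t + phi))

# The imaginary parts of the two complex terms cancel exactly
assert np.allclose(real_sine, two_complex)
```

The two complex terms sit at frequencies $+\omega$ and $-\omega$, which is why the spectrum of a real signal is symmetric about zero frequency.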
The conversion from $\omega$ to the physical frequency variable $\bar\omega=\omega/T_s$ [rad/sec] can be done in a straightforward manner, as described in Exercise 1.1. Let

$$S(\omega)=|Y(\omega)|^2\qquad\text{(Energy Spectral Density)} \tag{1.2.4}$$

A straightforward calculation gives

$$\frac{1}{2\pi}\int_{-\pi}^{\pi}S(\omega)\,d\omega
=\frac{1}{2\pi}\int_{-\pi}^{\pi}\sum_{t=-\infty}^{\infty}\sum_{s=-\infty}^{\infty}y(t)y^*(s)e^{-i\omega(t-s)}\,d\omega
=\sum_{t=-\infty}^{\infty}\sum_{s=-\infty}^{\infty}y(t)y^*(s)\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-i\omega(t-s)}\,d\omega\right]
=\sum_{t=-\infty}^{\infty}|y(t)|^2 \tag{1.2.5}$$

To obtain the last equality in (1.2.5) we have used the fact that $\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-i\omega(t-s)}\,d\omega=\delta_{t,s}$ (the Kronecker delta). The symbol $(\cdot)^*$ will be used in this text to denote the complex–conjugate of a scalar variable or the conjugate transpose of a vector or matrix. Equation (1.2.5) can be restated as

$$\sum_{t=-\infty}^{\infty}|y(t)|^2=\frac{1}{2\pi}\int_{-\pi}^{\pi}S(\omega)\,d\omega \tag{1.2.6}$$

This equality is called Parseval's theorem. It shows that $S(\omega)$ represents the distribution of sequence energy as a function of frequency. For this reason, $S(\omega)$ is called the energy spectral density.

The previous interpretation of $S(\omega)$ also comes up in the following way. Equation (1.2.3) represents the sequence $\{y(t)\}$ as a weighted "sum" (actually, an integral) of orthonormal sequences $\{\frac{1}{\sqrt{2\pi}}e^{i\omega t}\}$ ($\omega\in[-\pi,\pi]$), with weighting $\frac{1}{\sqrt{2\pi}}Y(\omega)$. Hence, $\frac{1}{\sqrt{2\pi}}|Y(\omega)|$ "measures" the "length" of the projection of $\{y(t)\}$ on each of these basis sequences. In loose terms, therefore, $\frac{1}{\sqrt{2\pi}}|Y(\omega)|$ shows how much (or how little) of the sequence $\{y(t)\}$ can be "explained" by the orthonormal sequence $\{\frac{1}{\sqrt{2\pi}}e^{i\omega t}\}$ for some given value of $\omega$. Define

$$\rho(k)=\sum_{t=-\infty}^{\infty}y(t)y^*(t-k) \tag{1.2.7}$$

It is readily verified that

$$\sum_{k=-\infty}^{\infty}\rho(k)e^{-i\omega k}
=\sum_{k=-\infty}^{\infty}\sum_{t=-\infty}^{\infty}y(t)y^*(t-k)\,e^{-i\omega t}e^{i\omega(t-k)}
=\left[\sum_{t=-\infty}^{\infty}y(t)e^{-i\omega t}\right]\left[\sum_{s=-\infty}^{\infty}y(s)e^{-i\omega s}\right]^*=S(\omega) \tag{1.2.8}$$

which shows that $S(\omega)$ can be obtained as the DTFT of the "autocorrelation" (1.2.7) of the finite–energy sequence $\{y(t)\}$.
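Both Parseval's theorem (1.2.6) and the autocorrelation identity (1.2.8) can be verified numerically for a finite-length sequence, which is trivially of finite energy since it is zero outside the stored samples. The sketch below is not part of the text; it uses Python/NumPy with a randomly generated complex sequence and a dense frequency grid in place of the integral:

```python
import numpy as np

# A finite-energy (finite-length) complex sequence
rng = np.random.default_rng(0)
y = rng.standard_normal(32) + 1j * rng.standard_normal(32)

omega = 2 * np.pi * np.arange(4096) / 4096          # dense grid on [0, 2*pi)
t = np.arange(len(y))
Y = np.exp(-1j * np.outer(omega, t)) @ y            # DTFT samples Y(omega), (1.2.2)
S = np.abs(Y) ** 2                                  # energy spectral density (1.2.4)

# Parseval (1.2.6): sum_t |y(t)|^2 = (1/2pi) * integral of S(omega);
# averaging S over a uniform grid approximates the normalized integral
energy_time = np.sum(np.abs(y) ** 2)
energy_freq = np.mean(S)
assert np.allclose(energy_time, energy_freq)

# (1.2.8): the DTFT of the autocorrelation rho(k) reproduces S(omega).
# np.correlate conjugates its second argument, giving rho(k) for
# k = -(N-1), ..., N-1 in 'full' mode.
rho = np.correlate(y, y, mode="full")
k = np.arange(-(len(y) - 1), len(y))
S_from_rho = np.exp(-1j * np.outer(omega, k)) @ rho
assert np.allclose(S_from_rho.real, S)
```

Because $S(\omega)$ is a trigonometric polynomial of degree $N-1$ here, the grid average of $S$ equals the normalized integral exactly (up to rounding), not just approximately.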
The above definitions can be extended in a rather straightforward manner to the case of random signals treated throughout the remaining text. In fact, the only purpose for discussing the deterministic case in this section was to provide some motivation for the analogous definitions in the random case. As such, the discussion in this section has been kept brief. More insights into the meaning and properties of the previous definitions are provided by the detailed treatment of the random case in the following sections.

1.3 POWER SPECTRAL DENSITY OF RANDOM SIGNALS

Most of the signals encountered in applications are such that their variation in the future cannot be known exactly. It is only possible to make probabilistic statements about that variation. The mathematical device to describe such a signal is that of a random sequence which consists of an ensemble of possible realizations, each of which has some associated probability of occurrence. Of course, from the whole ensemble of realizations, the experimenter can usually observe only one realization of the signal, and then it might be thought that the deterministic definitions of the previous section could be carried over unchanged to the present case. However, this is not possible because the realizations of a random signal, viewed as discrete–time sequences, do not have finite energy, and hence do not possess DTFTs. A random signal usually has finite average power and, therefore, can be characterized by an average power spectral density. For simplicity reasons, in what follows we will use the name power spectral density (PSD) for that quantity.

The discrete–time signal $\{y(t);\ t=0,\pm1,\pm2,\dots\}$ is assumed to be a sequence of random variables with zero mean:

$$E\{y(t)\}=0\quad\text{for all }t \tag{1.3.1}$$

Hereafter, $E\{\cdot\}$ denotes the expectation operator (which averages over the ensemble of realizations).
The autocovariance sequence (ACS) or covariance function of $y(t)$ is defined as

$$r(k) = E\{y(t)y^*(t-k)\} \tag{1.3.2}$$

and it is assumed to depend only on the lag between the two samples averaged. The two assumptions (1.3.1) and (1.3.2) imply that $\{y(t)\}$ is a second-order stationary sequence. When it is required to distinguish between the autocovariance sequences of several signals, a lower index will be used to indicate the signal associated with a given covariance lag, such as $r_y(k)$.

The autocovariance sequence $r(k)$ enjoys some simple but useful properties:

$$r(k) = r^*(-k) \tag{1.3.3}$$

and

$$r(0) \ge |r(k)| \quad \text{for all } k \tag{1.3.4}$$

The equality (1.3.3) directly follows from definition (1.3.2) and the stationarity assumption, while (1.3.4) is a consequence of the fact that the covariance matrix of $\{y(t)\}$, defined as

$$R_m = \begin{bmatrix} r(0) & r^*(1) & \cdots & r^*(m-1) \\ r(1) & r(0) & \ddots & \vdots \\ \vdots & \ddots & \ddots & r^*(1) \\ r(m-1) & \cdots & r(1) & r(0) \end{bmatrix}
= E\left\{ \begin{bmatrix} y^*(t-1) \\ \vdots \\ y^*(t-m) \end{bmatrix} \begin{bmatrix} y(t-1) & \cdots & y(t-m) \end{bmatrix} \right\} \tag{1.3.5}$$

is positive semidefinite for all $m$. Recall that a Hermitian matrix $M$ is positive semidefinite if $a^*Ma \ge 0$ for every vector $a$ (see Section A.5 for details). Since

$$a^*R_m a = a^* E\left\{ \begin{bmatrix} y^*(t-1) \\ \vdots \\ y^*(t-m) \end{bmatrix} \begin{bmatrix} y(t-1) & \cdots & y(t-m) \end{bmatrix} \right\} a = E\{z^*(t)z(t)\} = E\{|z(t)|^2\} \ge 0 \tag{1.3.6}$$

where

$$z(t) = \begin{bmatrix} y(t-1) & \cdots & y(t-m) \end{bmatrix} a$$

we see that $R_m$ is indeed positive semidefinite for every $m$. Hence, (1.3.4) follows from the properties of positive semidefinite matrices (see Definition D11 in Appendix A and Exercise 1.5).

1.3.1 First Definition of Power Spectral Density

The PSD is defined as the DTFT of the covariance sequence:

$$\phi(\omega) = \sum_{k=-\infty}^{\infty} r(k)e^{-i\omega k} \qquad \text{(Power Spectral Density)} \tag{1.3.7}$$

Note that the previous definition (1.3.7) of $\phi(\omega)$ is similar to the definition (1.2.8) in the deterministic case.
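The ACS properties (1.3.4)–(1.3.5) can be illustrated numerically. The NumPy sketch below (not from the text; the AR(1)-type ACS $r(k) = a^{|k|}$ is chosen purely as an example of a valid ACS) checks that $r(0) \ge |r(k)|$ and that the Toeplitz matrix $R_m$ of (1.3.5) is positive semidefinite:

```python
import numpy as np

# Example of a valid ACS: r(k) = a^{|k|} with |a| < 1
# (the normalized ACS of an AR(1) process)
a, m = 0.8, 8
r = a ** np.arange(m)                      # r(0), ..., r(m-1); real, so r(-k) = r(k)

# Property (1.3.4): r(0) >= |r(k)| for all k
print(np.all(np.abs(r) <= r[0]))           # True

# Build the Toeplitz covariance matrix R_m of (1.3.5); entry (i,j) is r(i-j)
R = a ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))

# Positive semidefiniteness: all eigenvalues of the Hermitian matrix are >= 0
print(np.linalg.eigvalsh(R).min() >= 0)    # True
```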
The inverse transform, which recovers $\{r(k)\}$ from a given $\phi(\omega)$, is

$$r(k) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \phi(\omega)e^{i\omega k}\,d\omega \tag{1.3.8}$$

We readily verify that

$$\frac{1}{2\pi}\int_{-\pi}^{\pi} \phi(\omega)e^{i\omega k}\,d\omega = \sum_{p=-\infty}^{\infty} r(p)\left[\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega(k-p)}\,d\omega\right] = r(k)$$

which proves that (1.3.8) is the inverse transform for (1.3.7). Note that to obtain the first equality above, the order of integration and summation has been inverted, which is possible under weak conditions (such as under the requirement that $\phi(\omega)$ is square integrable; see Chapter 4 in [Priestley 1981] for a detailed discussion of this aspect). From (1.3.8), we obtain

$$r(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \phi(\omega)\,d\omega \tag{1.3.9}$$

Since $r(0) = E\{|y(t)|^2\}$ measures the (average) power of $\{y(t)\}$, the equality (1.3.9) shows that $\phi(\omega)$ can indeed be named PSD, as it represents the distribution of the (average) signal power over frequencies. Put another way, it follows from (1.3.9) that $\phi(\omega)\,d\omega/2\pi$ is the infinitesimal power in the band $(\omega - d\omega/2,\ \omega + d\omega/2)$, and the total power in the signal is obtained by integrating these infinitesimal contributions. Additional motivation for calling $\phi(\omega)$ a PSD is provided by the second definition of $\phi(\omega)$, given next, which resembles the usual definition (1.2.2), (1.2.4) in the deterministic case.

1.3.2 Second Definition of Power Spectral Density

The second definition of $\phi(\omega)$ is:

$$\phi(\omega) = \lim_{N\to\infty} E\left\{ \frac{1}{N}\left| \sum_{t=1}^{N} y(t)e^{-i\omega t} \right|^2 \right\} \tag{1.3.10}$$

This definition is equivalent to (1.3.7) under the mild assumption that the covariance sequence $\{r(k)\}$ decays sufficiently rapidly, so that

$$\lim_{N\to\infty} \frac{1}{N}\sum_{k=-N}^{N} |k|\,|r(k)| = 0 \tag{1.3.11}$$

The equivalence of (1.3.7) and (1.3.10) can be verified as follows:

$$\lim_{N\to\infty} E\left\{ \frac{1}{N}\left| \sum_{t=1}^{N} y(t)e^{-i\omega t} \right|^2 \right\}
= \lim_{N\to\infty} \frac{1}{N}\sum_{t=1}^{N}\sum_{s=1}^{N} E\{y(t)y^*(s)\}\,e^{-i\omega(t-s)}
= \lim_{N\to\infty} \frac{1}{N}\sum_{\tau=-(N-1)}^{N-1} (N-|\tau|)\,r(\tau)e^{-i\omega\tau}$$
$$= \sum_{\tau=-\infty}^{\infty} r(\tau)e^{-i\omega\tau} - \lim_{N\to\infty} \frac{1}{N}\sum_{\tau=-(N-1)}^{N-1} |\tau|\,r(\tau)e^{-i\omega\tau}
= \phi(\omega)$$

The second equality is proven in Exercise 1.6, and we used (1.3.11) in the last equality.
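The transform pair (1.3.7)/(1.3.8) can be checked numerically for a concrete ACS. In this illustrative NumPy sketch (not from the text), the AR(1) ACS $r(k) = a^{|k|}$ is used, whose PSD is $\phi(\omega) = (1-a^2)/|1 - ae^{-i\omega}|^2$; the inverse integral (1.3.8) is approximated by an inverse DFT over a fine frequency grid:

```python
import numpy as np

a, L, M = 0.6, 1024, 100
w = 2 * np.pi * np.arange(L) / L

# PSD of the ACS r(k) = a^{|k|}: phi(w) = (1 - a^2)/|1 - a e^{-iw}|^2
phi = (1 - a**2) / np.abs(1 - a * np.exp(-1j * w)) ** 2

# Approximate (1.3.8) by an inverse DFT over L grid points;
# this recovers r(k) up to a tiny aliasing error of order a^(L-M)
r_rec = np.fft.ifft(phi).real
r_true = a ** np.arange(M)

print(np.max(np.abs(r_rec[:M] - r_true)) < 1e-10)  # True
```

The inverse DFT actually returns the aliased sum $\sum_m r(k+mL)$, which is why a large grid length $L$ (relative to the decay of $r(k)$) is needed for an accurate recovery.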
The above definition of $\phi(\omega)$ resembles the definition (1.2.4) of energy spectral density in the deterministic case. The main difference between (1.2.4) and (1.3.10) consists of the appearance of the expectation operator in (1.3.10) and the normalization by $1/N$; the fact that the "discrete-time" variable in (1.3.10) runs over positive integers only is just for convenience and does not constitute an essential difference, compared to (1.2.2). In spite of these differences, the analogy between the deterministic formula (1.2.4) and (1.3.10) provides further motivation for calling $\phi(\omega)$ a PSD. The alternative definition (1.3.10) will also be quite useful when discussing the problem of estimating the PSD by nonparametric techniques in Chapters 2 and 5.

We can see from either of these definitions that $\phi(\omega)$ is a periodic function, with period equal to $2\pi$. Hence, $\phi(\omega)$ is completely described by its variation in the interval

$$\omega \in [-\pi, \pi] \quad \text{(radians per sampling interval)} \tag{1.3.12}$$

Alternatively, the PSD can be viewed as a function of the frequency

$$f = \frac{\omega}{2\pi} \quad \text{(cycles per sampling interval)} \tag{1.3.13}$$

which, according to (1.3.12), can be considered to take values in the interval

$$f \in [-1/2,\ 1/2] \tag{1.3.14}$$

We will generally write the PSD as a function of $\omega$ whenever possible, since this will simplify the notation.

As already mentioned, the discrete-time sequence $\{y(t)\}$ is most commonly derived by sampling a continuous-time signal. To avoid aliasing effects which might be incurred by the sampling process, the continuous-time signal should be (at least, approximately) bandlimited in the frequency domain. To ensure this, it may be necessary to low-pass filter the continuous-time signal before sampling. Let $F_0$ denote the largest ("significant") frequency component in the spectrum of the (possibly filtered) continuous signal, and let $F_s$ be the sampling frequency.
Then it follows from Shannon's sampling theorem that the continuous-time signal can be "exactly" reconstructed from its samples $\{y(t)\}$, provided that

$$F_s \ge 2F_0 \tag{1.3.15}$$

In particular, "no" frequency aliasing will occur when (1.3.15) holds (see, e.g., [Oppenheim and Schafer 1989]). Since the frequency variable $F$, associated with the continuous-time signal, is related to $f$ by the equation

$$F = f \cdot F_s \tag{1.3.16}$$

it follows that the interval of $F$ corresponding to (1.3.14) is

$$F \in \left[-\frac{F_s}{2},\ \frac{F_s}{2}\right] \quad \text{(cycles/sec)} \tag{1.3.17}$$

which is quite natural in view of (1.3.15).

1.4 PROPERTIES OF POWER SPECTRAL DENSITIES

Since $\phi(\omega)$ is a power density, it should be real-valued and nonnegative. That this is indeed the case is readily seen from definition (1.3.10) of $\phi(\omega)$. Hence,

$$\phi(\omega) \ge 0 \quad \text{for all } \omega \tag{1.4.1}$$

From (1.3.3) and (1.3.7), we obtain

$$\phi(\omega) = r(0) + 2\sum_{k=1}^{\infty} \operatorname{Re}\{r(k)e^{-i\omega k}\}$$

where $\operatorname{Re}\{\cdot\}$ denotes the real part of the bracketed quantity. If $y(t)$, and hence $r(k)$, is real valued, then it follows that

$$\phi(\omega) = r(0) + 2\sum_{k=1}^{\infty} r(k)\cos(\omega k) \tag{1.4.2}$$

which shows that $\phi(\omega)$ is an even function in such a case. In the case of complex-valued signals, however, $\phi(\omega)$ is not necessarily symmetric about the $\omega = 0$ axis. Thus:

For real-valued signals: $\phi(\omega) = \phi(-\omega)$, $\omega \in [-\pi, \pi]$.
For complex-valued signals: in general $\phi(\omega) \ne \phi(-\omega)$, $\omega \in [-\pi, \pi]$.  (1.4.3)

Remark: The reader might wonder why we did not define the ACS as

$$c(k) = E\{y(t)y^*(t+k)\}$$

Comparing with the ACS $\{r(k)\}$ used in this text, as defined in (1.3.2), we obtain $c(k) = r(-k)$. Consequently, the PSD associated with $\{c(k)\}$ is related to the PSD corresponding to $\{r(k)\}$ (see (1.3.7)) via:

$$\psi(\omega) \triangleq \sum_{k=-\infty}^{\infty} c(k)e^{-i\omega k} = \sum_{k=-\infty}^{\infty} r(k)e^{i\omega k} = \phi(-\omega)$$

It may seem arbitrary as to which definition of the ACS (and corresponding definition of PSD) we choose.
In fact, from a mathematical standpoint we can use either definition of the ACS, but the ACS definition $r(k)$ is preferred from a practical standpoint, as we now explain. First, we should stress that the PSD describes the spectral content of the ACS, as seen from equation (1.3.7). The PSD $\phi(\omega)$ is sometimes perceived as showing the (infinitesimal) power at frequency $\omega$ in the signal itself, but that is not strictly true. If the PSD represented the power in the signal itself, then we should have had $\psi(\omega) = \phi(\omega)$, because the signal's spectral content should not depend on the ACS definition. However, as shown above, in the general complex case $\psi(\omega) = \phi(-\omega) \ne \phi(\omega)$, which means that the signal-power interpretation of the PSD is not (always) correct. Indeed, the PSD $\phi(\omega)$ "measures" the power at frequency $\omega$ in the signal's ACS.

[Figure 1.1. Relationship between the PSDs of the input and output of a linear system: the input $e(t)$, with PSD $\phi_e(\omega)$, is passed through the system $H(z)$ to give the output $y(t)$, with PSD $\phi_y(\omega) = |H(\omega)|^2\phi_e(\omega)$.]

On the other hand, our motivation for considering spectral analysis is to characterize the average power at frequency $\omega$ in the signal, as given by the second definition of the PSD in equation (1.3.10). If $c(k)$ is used as the ACS, its corresponding second definition of the PSD is

$$\psi(\omega) = \lim_{N\to\infty} E\left\{ \frac{1}{N}\left| \sum_{t=1}^{N} y(t)e^{+i\omega t} \right|^2 \right\}$$

which is the average power of $y(t)$ at frequency $-\omega$. Clearly, the second PSD definition corresponding to $r(k)$ aligns with this average-power motivation, while the one for $c(k)$ does not; it is for this reason that we use the definition $r(k)$ for the ACS. ■

Next, we present a useful result which concerns the transfer of PSD through an asymptotically stable linear system. Let

$$H(z) = \sum_{k=-\infty}^{\infty} h_k z^{-k} \tag{1.4.4}$$

denote an asymptotically stable linear time-invariant system. The symbol $z^{-1}$ denotes the unit delay operator, defined by $z^{-1}y(t) = y(t-1)$.
Also, let $e(t)$ be the stationary input to the system and $y(t)$ the corresponding output, as shown in Figure 1.1. Then $\{y(t)\}$ and $\{e(t)\}$ are related via the convolution sum

$$y(t) = H(z)e(t) = \sum_{k=-\infty}^{\infty} h_k e(t-k) \tag{1.4.5}$$

The transfer function of this filter is

$$H(\omega) = \sum_{k=-\infty}^{\infty} h_k e^{-i\omega k} \tag{1.4.6}$$

Throughout the text, we will follow the convention of writing $H(z)$ for the convolution operator of a linear system and its corresponding Z-transform, and $H(\omega)$ for its transfer function. We obtain the transfer function $H(\omega)$ from $H(z)$ by the substitution $z = e^{i\omega}$:

$$H(\omega) = H(z)\big|_{z=e^{i\omega}}$$

While we recognize the slight abuse of notation in writing $H(\omega)$ instead of $H(e^{i\omega})$, and in using $z$ as both an operator and a complex variable, we prefer the simplicity of notation it affords.

From (1.4.5), we obtain

$$r_y(k) = \sum_{p=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} h_p h_m^*\, E\{e(t-p)e^*(t-m-k)\} = \sum_{p=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} h_p h_m^*\, r_e(m+k-p) \tag{1.4.7}$$

Inserting (1.4.7) in (1.3.7) gives

$$\phi_y(\omega) = \sum_{k=-\infty}^{\infty}\sum_{p=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} h_p h_m^*\, r_e(m+k-p)\,e^{-i\omega(k+m-p)}e^{i\omega m}e^{-i\omega p}
= \left[\sum_{p=-\infty}^{\infty} h_p e^{-i\omega p}\right]\left[\sum_{m=-\infty}^{\infty} h_m^* e^{i\omega m}\right]\left[\sum_{\tau=-\infty}^{\infty} r_e(\tau)e^{-i\omega\tau}\right] = |H(\omega)|^2\phi_e(\omega) \tag{1.4.8}$$

From (1.4.8), we get the following important formula

$$\phi_y(\omega) = |H(\omega)|^2\phi_e(\omega) \tag{1.4.9}$$

that will be much used in the following chapters.

Finally, we derive a property that will be of use in Chapter 5. Let the signals $y(t)$ and $x(t)$ be related by

$$y(t) = e^{i\omega_0 t}x(t) \tag{1.4.10}$$

for some given $\omega_0$. Then, it holds that

$$\phi_y(\omega) = \phi_x(\omega - \omega_0) \tag{1.4.11}$$

In other words, multiplication of a temporal sequence by $e^{i\omega_0 t}$ shifts its spectral density by the angular frequency $\omega_0$. Owing to this interpretation, the process of constructing $y(t)$ as in (1.4.10) is called complex (de)modulation. The proof of (1.4.11) is immediate: since (1.4.10) implies that

$$r_y(k) = e^{i\omega_0 k}r_x(k) \tag{1.4.12}$$

we obtain

$$\phi_y(\omega) = \sum_{k=-\infty}^{\infty} r_x(k)e^{-i(\omega-\omega_0)k} = \phi_x(\omega - \omega_0) \tag{1.4.13}$$

which is the desired result.
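The key formula (1.4.9) can be verified numerically for a simple FIR filter by computing $\phi_y$ the long way, via the output ACS (1.4.7), and comparing with $|H(\omega)|^2\phi_e(\omega)$. This is an illustrative NumPy sketch (not from the text; the filter coefficients are arbitrary example values):

```python
import numpy as np

sigma2 = 2.0                              # white-noise input: phi_e(w) = sigma^2
h = np.array([1.0, -0.4, 0.2])            # FIR filter taps h_0, h_1, h_2 (example values)

# Output ACS via (1.4.7) with r_e(k) = sigma^2*delta(k):
# r_y(k) = sigma^2 * sum_p h_p h*_{p-k}, nonzero for lags -2..2
r_y = sigma2 * np.convolve(h, np.conj(h[::-1]))
lags = np.arange(-(len(h) - 1), len(h))

# Compare phi_y(w) = sum_k r_y(k) e^{-iwk} against |H(w)|^2 * sigma^2
w = np.linspace(-np.pi, np.pi, 512)
phi_y = (r_y @ np.exp(-1j * np.outer(lags, w))).real
H = h @ np.exp(-1j * np.outer(np.arange(len(h)), w))

print(np.allclose(phi_y, sigma2 * np.abs(H) ** 2))  # True: equation (1.4.9)
```

The convolution `np.convolve(h, np.conj(h[::-1]))` is exactly the lag-by-lag sum $\sum_p h_p h^*_{p-k}$, which makes the two routes to $\phi_y$ agree to machine precision.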
1.5 THE SPECTRAL ESTIMATION PROBLEM

The spectral estimation problem can now be stated more formally as follows.

From a finite-length record $\{y(1), \ldots, y(N)\}$ of a second-order stationary random process, determine an estimate $\hat\phi(\omega)$ of its power spectral density $\phi(\omega)$, for $\omega \in [-\pi, \pi]$.  (1.5.1)

It would, of course, be desirable that $\hat\phi(\omega)$ is as close to $\phi(\omega)$ as possible. As we shall see, the main limitation on the quality of most PSD estimates is due to the quite small number of data samples usually available for processing. Note that $N$ will be used throughout this text to denote the number of points of the available data sequence. In some applications, $N$ is small since the cost of obtaining large amounts of data is prohibitive. Most commonly, the value of $N$ is limited by the fact that the signal under study can be considered second-order stationary only over short observation intervals.

As already mentioned in the introductory part of this chapter, there are two main approaches to the PSD estimation problem. The nonparametric approach, discussed in Chapters 2 and 5, proceeds to estimate the PSD by relying essentially only on the basic definitions (1.3.7) and (1.3.10) and on some properties that directly follow from these definitions. In particular, these methods do not make any assumption on the functional form of $\phi(\omega)$. This is in contrast with the parametric approach, discussed in Chapters 3, 4 and 6. That approach makes assumptions on the signal under study, which lead to a parameterized functional form of the PSD, and then proceeds by estimating the parameters in the PSD model. The parametric approach can thus be used only when there is enough information about the studied signal to allow formulation of a model. Otherwise the nonparametric approach should be used.
Interestingly enough, the nonparametric methods are close competitors to the parametric ones, even when the model form assumed by the latter is a reasonable description of reality.

1.6 COMPLEMENTS

1.6.1 Coherency Spectrum

Let

$$C_{yu}(\omega) = \frac{\phi_{yu}(\omega)}{[\phi_{yy}(\omega)\phi_{uu}(\omega)]^{1/2}} \tag{1.6.1}$$

denote the so-called complex coherency of the stationary signals $y(t)$ and $u(t)$. In the definition above, $\phi_{yu}(\omega)$ is the cross-spectrum of the two signals, and $\phi_{yy}(\omega)$ and $\phi_{uu}(\omega)$ are their respective PSDs. (We implicitly assume in (1.6.1) that $\phi_{yy}(\omega)$ and $\phi_{uu}(\omega)$ are strictly positive for all $\omega$.) Also, let

$$\epsilon(t) = y(t) - \sum_{k=-\infty}^{\infty} h_k u(t-k) \tag{1.6.2}$$

denote the residues of the least squares problem in Exercise 1.11. Hence, $\{h_k\}$ in equation (1.6.2) satisfy

$$\sum_{k=-\infty}^{\infty} h_k e^{-i\omega k} \triangleq H(\omega) = \phi_{yu}(\omega)/\phi_{uu}(\omega)$$

In what follows, we will show that

$$E\{|\epsilon(t)|^2\} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left(1 - |C_{yu}(\omega)|^2\right)\phi_{yy}(\omega)\,d\omega \tag{1.6.3}$$

where $|C_{yu}(\omega)|$ is the so-called coherency spectrum. We will deduce from (1.6.3) that the coherency spectrum shows the extent to which $y(t)$ and $u(t)$ are linearly related to one another, hence providing a motivation for the name given to $|C_{yu}(\omega)|$. We will also show that $|C_{yu}(\omega)| \le 1$, with equality for all $\omega$ values if and only if $y(t)$ and $u(t)$ are related as in equation (1.6.2) with $\epsilon(t) \equiv 0$ (in the mean square sense). Finally, we will show that $|C_{yu}(\omega)|$ is invariant to linear filtering of $u(t)$ and $y(t)$ (possibly by different filters); that is, if $\tilde u = g * u$ and $\tilde y = f * y$, where $f$ and $g$ are linear filters, then $|C_{\tilde y\tilde u}(\omega)| = |C_{yu}(\omega)|$.

Let $x(t) = \sum_{k=-\infty}^{\infty} h_k u(t-k)$. It can be shown that $u(t-k)$ and $\epsilon(t)$ are uncorrelated with one another for all $k$. (The reader is required to verify this claim; see also Exercise 1.11.) Hence $x(t)$ and $\epsilon(t)$ are also uncorrelated with each other.
As

$$y(t) = \epsilon(t) + x(t), \tag{1.6.4}$$

it then follows that

$$\phi_{yy}(\omega) = \phi_{\epsilon\epsilon}(\omega) + \phi_{xx}(\omega) \tag{1.6.5}$$

By using the fact that $\phi_{xx}(\omega) = |H(\omega)|^2\phi_{uu}(\omega)$, we can write

$$E\{|\epsilon(t)|^2\} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \phi_{\epsilon\epsilon}(\omega)\,d\omega
= \frac{1}{2\pi}\int_{-\pi}^{\pi} \left[1 - |H(\omega)|^2\frac{\phi_{uu}(\omega)}{\phi_{yy}(\omega)}\right]\phi_{yy}(\omega)\,d\omega$$
$$= \frac{1}{2\pi}\int_{-\pi}^{\pi} \left[1 - \frac{|\phi_{yu}(\omega)|^2}{\phi_{uu}(\omega)\phi_{yy}(\omega)}\right]\phi_{yy}(\omega)\,d\omega
= \frac{1}{2\pi}\int_{-\pi}^{\pi} \left[1 - |C_{yu}(\omega)|^2\right]\phi_{yy}(\omega)\,d\omega$$

which is (1.6.3). Since the left-hand side in (1.6.3) is nonnegative and the PSD function $\phi_{yy}(\omega)$ is arbitrary, we must have $|C_{yu}(\omega)| \le 1$ for all $\omega$. It can also be seen from (1.6.3) that the closer $|C_{yu}(\omega)|$ is to one, the smaller the residual variance. In particular, if $|C_{yu}(\omega)| \equiv 1$ then $\epsilon(t) \equiv 0$ (in the mean square sense), and hence $y(t)$ and $u(t)$ must be linearly related as in (1.7.11). Owing to the previous interpretation, $C_{yu}(\omega)$ is sometimes called the correlation coefficient in the frequency domain.

Next, consider the filtered signals

$$\tilde y(t) = \sum_{k=-\infty}^{\infty} f_k y(t-k) \quad \text{and} \quad \tilde u(t) = \sum_{k=-\infty}^{\infty} g_k u(t-k)$$

where the filters $\{f_k\}$ and $\{g_k\}$ are assumed to be stable. As

$$r_{\tilde y\tilde u}(p) \triangleq E\{\tilde y(t)\tilde u^*(t-p)\} = \sum_{k=-\infty}^{\infty}\sum_{j=-\infty}^{\infty} f_k g_j^*\, E\{y(t-k)u^*(t-j-p)\} = \sum_{k=-\infty}^{\infty}\sum_{j=-\infty}^{\infty} f_k g_j^*\, r_{yu}(j+p-k),$$

it follows that

$$\phi_{\tilde y\tilde u}(\omega) = \sum_{p=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\sum_{j=-\infty}^{\infty} f_k e^{-i\omega k}\, g_j^* e^{i\omega j}\, r_{yu}(j+p-k)\,e^{-i\omega(j+p-k)}
= \left[\sum_{k=-\infty}^{\infty} f_k e^{-i\omega k}\right]\left[\sum_{j=-\infty}^{\infty} g_j e^{-i\omega j}\right]^* \left[\sum_{s=-\infty}^{\infty} r_{yu}(s)e^{-i\omega s}\right]
= F(\omega)G^*(\omega)\phi_{yu}(\omega)$$

Hence

$$|C_{\tilde y\tilde u}(\omega)| = \frac{|F(\omega)|\,|G(\omega)|\,|\phi_{yu}(\omega)|}{|F(\omega)|\,\phi_{yy}^{1/2}(\omega)\,|G(\omega)|\,\phi_{uu}^{1/2}(\omega)} = |C_{yu}(\omega)|$$

which is the desired result. Observe that the latter result is similar to the invariance of the modulus of the correlation coefficient in the time domain,

$$\frac{|r_{yu}(k)|}{[r_{yy}(0)r_{uu}(0)]^{1/2}},$$

to a scaling of the two signals: $\tilde y(t) = f\cdot y(t)$ and $\tilde u(t) = g\cdot u(t)$.

1.7 EXERCISES

Exercise 1.1: Scaling of the Frequency Axis

In this text, the time variable $t$ has been expressed in units of the sampling interval $T_s$ (say). Consequently, the frequency is measured in cycles per sampling interval.
Assume we want the frequency units to be expressed in radians per second or in Hertz (Hz = cycles per second). Then we have to introduce the scaled frequency variables

$$\bar\omega = \omega/T_s, \quad \bar\omega \in [-\pi/T_s,\ \pi/T_s] \text{ rad/sec} \tag{1.7.1}$$

and $\bar f = \bar\omega/2\pi$ (in Hz). It might be thought that the PSD in the new frequency variable is obtained by inserting $\omega = \bar\omega T_s$ into $\phi(\omega)$, but this is wrong. Show that the PSD, as expressed in units of power per Hz, is in fact given by:

$$\bar\phi(\bar\omega) = T_s\phi(\bar\omega T_s) \triangleq T_s\sum_{k=-\infty}^{\infty} r(k)e^{-i\bar\omega T_s k}, \quad |\bar\omega| \le \pi/T_s \tag{1.7.2}$$

(See [Marple 1987] for more details on this scaling aspect.)

Exercise 1.2: Time-Frequency Distributions

Let $y(t)$ denote a discrete-time signal, and let $Y(\omega)$ be its discrete-time Fourier transform. As explained in Section 1.2, $Y(\omega)$ shows how the energy in the whole sequence $\{y(t)\}_{t=-\infty}^{\infty}$ is distributed over frequency. Assume that we want to determine how the energy of the signal is distributed in time and frequency. If $D(t,\omega)$ is a function that characterizes the time-frequency distribution, then it should satisfy the so-called marginal properties:

$$\sum_{t=-\infty}^{\infty} D(t,\omega) = |Y(\omega)|^2 \tag{1.7.3}$$

and

$$\frac{1}{2\pi}\int_{-\pi}^{\pi} D(t,\omega)\,d\omega = |y(t)|^2 \tag{1.7.4}$$

Use intuitive arguments to explain why the previous conditions are desirable properties of a time-frequency distribution. Next, show that the so-called Rihaczek distribution,

$$D(t,\omega) = y(t)Y^*(\omega)e^{-i\omega t} \tag{1.7.5}$$

satisfies conditions (1.7.3) and (1.7.4). (For treatments of time-frequency distributions, the reader is referred to [Therrien 1992] and [Cohen 1995].)

Exercise 1.3: Two Useful Z-Transform Properties

(a) Let $h_k$ be an absolutely summable sequence, and let $H(z) = \sum_{k=-\infty}^{\infty} h_k z^{-k}$ be its Z-transform. Find the Z-transforms of the following two sequences:
  (i) $h_{-k}$
  (ii) $g_k = \sum_{m=-\infty}^{\infty} h_m h_{m-k}^*$

(b) Show that if $z_i$ is a zero of $A(z) = 1 + a_1 z^{-1} + \cdots + a_n z^{-n}$, then $(1/z_i^*)$ is a zero of $A^*(1/z^*)$ (where $A^*(1/z^*) = [A(1/z^*)]^*$).
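The zero-reflection property in Exercise 1.3(b) is easy to confirm numerically. The NumPy sketch below (not from the text; the polynomial coefficients are arbitrary example values) compares the zeros of $A^*(1/z^*)$ with the reflected conjugates $1/z_i^*$ of the zeros of $A(z)$:

```python
import numpy as np

# A(z) = 1 + a1 z^{-1} + a2 z^{-2} with arbitrary complex example coefficients
acoef = np.array([1.0, 0.5 - 0.3j, 0.25 + 0.1j])

# Zeros of A(z): roots of z^2 + a1 z + a2 (coefficients highest power first)
z = np.roots(acoef)

# A^*(1/z^*) = conj(a2) z^2 + conj(a1) z + conj(a0) as a polynomial in z
zr = np.roots(np.conj(acoef[::-1]))

# The two zero sets are related by the reflection z -> 1/z^*
print(np.allclose(np.sort_complex(zr), np.sort_complex(1.0 / np.conj(z))))  # True
```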
Exercise 1.4: A Simple ACS Example

Let $y(t)$ be the output of a linear system as in Figure 1.1 with filter $H(z) = (1 + b_1 z^{-1})/(1 + a_1 z^{-1})$, and whose input is zero mean white noise with variance $\sigma^2$ (the ACS of such an input is $\sigma^2\delta_{k,0}$).

(a) Find $r(k)$ and $\phi(\omega)$ analytically in terms of $a_1$, $b_1$, and $\sigma^2$.
(b) Verify that $r(-k) = r^*(k)$, and that $|r(k)| \le r(0)$. Also verify that when $a_1$ and $b_1$ are real, $r(k)$ can be written as a function of $|k|$.

Exercise 1.5: Alternative Proof that $|r(k)| \le r(0)$

We stated in the text that (1.3.4) follows from (1.3.6). Provide a proof of that statement. Also, find an alternative, simple proof of (1.3.4) by using (1.3.8).

Exercise 1.6: A Double Summation Formula

A result often used in the study of discrete-time random signals is the following summation formula:

$$\sum_{t=1}^{N}\sum_{s=1}^{N} f(t-s) = \sum_{\tau=-N+1}^{N-1} (N-|\tau|)f(\tau) \tag{1.7.6}$$

where $f(\cdot)$ is an arbitrary function. Provide a proof of the above formula.

Exercise 1.7: Is a Truncated Autocovariance Sequence (ACS) a Valid ACS?

Suppose that $\{r(k)\}_{k=-\infty}^{\infty}$ is a valid ACS; thus, $\sum_{k=-\infty}^{\infty} r(k)e^{-i\omega k} \ge 0$ for all $\omega$. Is it possible that for some integer $p$ the partial (or truncated) sum

$$\sum_{k=-p}^{p} r(k)e^{-i\omega k}$$

is negative for some $\omega$? Justify your answer.

Exercise 1.8: When Is a Sequence an Autocovariance Sequence?

We showed in Section 1.3 that if $\{r(k)\}_{k=-\infty}^{\infty}$ is an ACS, then $R_m \ge 0$ for $m = 0, 1, 2, \ldots$. We also implied that the first definition of the PSD in (1.3.7) satisfies $\phi(\omega) \ge 0$ for all $\omega$; however, we did not prove this by using (1.3.7) solely. Show that

$$\phi(\omega) = \sum_{k=-\infty}^{\infty} r(k)e^{-i\omega k} \ge 0 \text{ for all } \omega$$

if and only if

$$a^*R_m a \ge 0 \text{ for every } m \text{ and for every vector } a$$

Exercise 1.9: Spectral Density of the Sum of Two Correlated Signals

Let $y(t)$ be the output of the system shown in Figure 1.2. Assume $H_1(z)$ and $H_2(z)$ are linear, asymptotically stable systems.
The inputs $e_1(t)$ and $e_2(t)$ are each zero mean white noise, with

$$E\left\{\begin{bmatrix} e_1(t) \\ e_2(t) \end{bmatrix}\begin{bmatrix} e_1^*(s) & e_2^*(s) \end{bmatrix}\right\} = \begin{bmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}\delta_{t,s}$$

[Figure 1.2. The system considered in Exercise 1.9: $e_1(t)$ drives $H_1(z)$ to produce $x_1(t)$, $e_2(t)$ drives $H_2(z)$ to produce $x_2(t)$, and $y(t) = x_1(t) + x_2(t)$.]

(a) Find the PSD of $y(t)$.
(b) Show that for $\rho = 0$, $\phi_y(\omega) = \phi_{x_1}(\omega) + \phi_{x_2}(\omega)$.
(c) Show that for $\rho = \pm1$ and $\sigma_1^2 = \sigma_2^2 = \sigma^2$, $\phi_y(\omega) = \sigma^2|H_1(\omega) \pm H_2(\omega)|^2$.

Exercise 1.10: Least Squares Spectral Approximation

Assume we are given an ACS $\{r(k)\}_{k=-\infty}^{\infty}$ or, equivalently, a PSD function $\phi(\omega)$ as in equation (1.3.7). We wish to find a finite impulse response (FIR) filter as in Figure 1.1, where $H(\omega) = h_0 + h_1 e^{-i\omega} + \ldots + h_m e^{-im\omega}$, whose input $e(t)$ is zero mean, unit variance white noise, and such that the output sequence $y(t)$ has PSD $\phi_y(\omega)$ "close to" $\phi(\omega)$. Specifically, we wish to find $h = [h_0 \ldots h_m]^T$ so that the approximation error

$$\epsilon = \frac{1}{2\pi}\int_{-\pi}^{\pi} [\phi(\omega) - \phi_y(\omega)]^2\,d\omega \tag{1.7.7}$$

is minimized.

(a) Show that $\epsilon$ is a quartic (fourth-order) function of $h$, and thus no simple closed-form solution $h$ exists to minimize (1.7.7).
(b) We attempt to reparameterize the minimization problem as follows. We note that $r_y(k) \equiv 0$ for $|k| > m$; thus,

$$\phi_y(\omega) = \sum_{k=-m}^{m} r_y(k)e^{-i\omega k} \tag{1.7.8}$$

Equation (1.7.8) and the fact that $r_y(-k) = r_y^*(k)$ mean that $\phi_y(\omega)$ is a function of $g = [r_y(0) \ldots r_y(m)]^T$. Show that the minimization problem in (1.7.7) is quadratic in $g$; it thus admits a closed-form solution. Show that the vector $g$ that minimizes $\epsilon$ in equation (1.7.7) gives

$$r_y(k) = \begin{cases} r(k), & |k| \le m \\ 0, & \text{otherwise} \end{cases} \tag{1.7.9}$$

(c) Can you identify any problems with the "solution" (1.7.9)?
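Exercises 1.7 and 1.10(c) point at the same pitfall: truncating a valid ACS need not yield a valid ACS. A small NumPy sketch (not from the text; the AR(1)-type ACS $r(k) = a^{|k|}$ is an arbitrary example) shows the truncated sum dipping below zero:

```python
import numpy as np

# Valid ACS r(k) = a^{|k|}; keep only lags |k| <= m and form the truncated "PSD"
a, m = 0.9, 2
k = np.arange(1, m + 1)
w = np.linspace(-np.pi, np.pi, 1001)

# r(k) is real, so the truncated sum is r(0) + 2*sum_k r(k) cos(wk), cf. (1.4.2)
phi_trunc = 1 + 2 * (a ** k) @ np.cos(np.outer(k, w))

print(phi_trunc.min() < 0)   # True: the truncated sequence is not a valid ACS
```

Negative values like these are exactly why the truncated-ACS "solution" (1.7.9) can produce a $\phi_y(\omega)$ that is not a legitimate PSD.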
Exercise 1.11: Linear Filtering and the Cross-Spectrum

For two stationary signals $y(t)$ and $u(t)$, with (cross)covariance sequence $r_{yu}(k) = E\{y(t)u^*(t-k)\}$, the cross-spectrum is defined as:

$$\phi_{yu}(\omega) = \sum_{k=-\infty}^{\infty} r_{yu}(k)e^{-i\omega k} \tag{1.7.10}$$

Let $y(t)$ be the output of a linear filter with input $u(t)$,

$$y(t) = \sum_{k=-\infty}^{\infty} h_k u(t-k) \tag{1.7.11}$$

Show that the input PSD, $\phi_{uu}(\omega)$, the filter transfer function

$$H(\omega) = \sum_{k=-\infty}^{\infty} h_k e^{-i\omega k}$$

and $\phi_{yu}(\omega)$ are related through the so-called Wiener-Hopf equation:

$$\phi_{yu}(\omega) = H(\omega)\phi_{uu}(\omega) \tag{1.7.12}$$

Next, consider the following least squares (LS) problem,

$$\min_{\{h_k\}} E\left\{\left| y(t) - \sum_{k=-\infty}^{\infty} h_k u(t-k) \right|^2\right\} \tag{1.7.13}$$

where now $y(t)$ and $u(t)$ are no longer necessarily related through equation (1.7.11). Show that the filter minimizing the above LS criterion is still given by the Wiener-Hopf equation, by minimizing the expectation in (1.7.13) with respect to the real and imaginary parts of $h_k$ (assume that $\phi_{uu}(\omega) > 0$ for all $\omega$).

COMPUTER EXERCISES

Exercise C1.12: Computer Generation of Autocovariance Sequences

Autocovariance sequences are two-sided sequences. In this exercise we develop computer techniques for generating two-sided ACSs. Let $y(t)$ be the output of the linear system in Figure 1.1 with filter $H(z) = (1 + b_1 z^{-1})/(1 + a_1 z^{-1})$, and whose input is zero mean white noise with variance $\sigma^2$.

(a) Find $r(k)$ analytically in terms of $a_1$, $b_1$, and $\sigma^2$ (see also Exercise 1.4).
(b) Plot $r(k)$ for $-20 \le k \le 20$ and for various values of $a_1$ and $b_1$. Notice that the tails of $r(k)$ decay at a rate dictated by $|a_1|$.
(c) When $a_1 \simeq b_1$ and $\sigma^2 = 1$, then $r(k) \simeq \delta_{k,0}$. Verify this for $a_1 = -0.95$, $b_1 = -0.9$, and for $a_1 = -0.75$, $b_1 = -0.7$.
(d) A quick way to generate (approximately) $r(k)$ on the computer is to use the fact that $r(k) = \sigma^2 h(k) * h^*(-k)$, where $h(k)$ is the impulse response of the filter in Figure 1.1 (see equation (1.4.7)) and $*$ denotes convolution.
Consider the case where

$$H(z) = \frac{1 + b_1 z^{-1} + \cdots + b_m z^{-m}}{1 + a_1 z^{-1} + \cdots + a_n z^{-n}}$$

Write a Matlab function genacs.m whose inputs are M, σ², a and b, where a and b are the vectors of denominator and numerator coefficients, respectively, and whose output is a vector of ACS coefficients from 0 to M. Your function should make use of the Matlab function filter to generate $\{h_k\}_{k=0}^{M}$, and conv to compute $r(k) = \sigma^2 h(k)*h^*(-k)$ using the truncated impulse response sequence.

(e) Test your function using $\sigma^2 = 1$, $a_1 = -0.9$ and $b_1 = 0.8$. Try $M = 20$ and $M = 150$; why is the result more accurate for larger $M$? Suggest a "rule of thumb" about a good choice of $M$ in relation to the poles of the filter.

The above method is a "quick and simple" way to compute an approximation to the ACS, but it is sometimes not very accurate because the impulse response is truncated. Methods for computing the exact ACS from $\sigma^2$, a and b are discussed in Exercise 3.2 and also in [Kinkel, Perl, Scharf, and Stubberud 1979; Demeure and Mullis 1989].

Exercise C1.13: DTFT Computations using Two-Sided Sequences

In this exercise we consider the DTFT of two-sided sequences (including autocovariance sequences), and in doing so illustrate some basic properties of autocovariance sequences.

(a) We first consider how to use the DTFT to determine $\phi(\omega)$ from $r(k)$ on a computer. We are given an ACS:

$$r(k) = \begin{cases} \dfrac{M-|k|}{M}, & |k| \le M \\ 0, & \text{otherwise} \end{cases} \tag{1.7.14}$$

Generate $r(k)$ for $M = 10$. Now, in Matlab form a vector x of length $L = 256$ as:

x = [r(0), r(1), ..., r(M), 0, ..., 0, r(-M), ..., r(-1)]

Verify that xf=fft(x) gives $\phi(\omega_k)$ for $\omega_k = 2\pi k/L$. (Note that the elements of xf should be nonnegative and real.) Explain why this particular choice of x is needed, citing appropriate circular shift and zero padding properties of the DTFT.
Note that xf often contains a very small imaginary part due to computer roundoff error; replacing xf by real(xf) truncates this imaginary component and leads to more expected results when plotting. A word of caution: do not truncate the imaginary part unless you are sure it is negligible. The command zf=real(fft(z)) when

z = [r(-M), ..., r(-1), r(0), r(1), ..., r(M), 0, ..., 0]

gives erroneous "spectral" values; try it and explain why it does not work.

(b) Alternatively, since we can readily derive the analytical expression for $\phi(\omega)$, we can instead work backwards. Form a vector

yf = [φ(0), φ(2π/L), φ(4π/L), ..., φ((L−1)2π/L)]

and find y=ifft(yf). Verify that y closely approximates the ACS.

(c) Consider the ACS $r(k)$ in Exercise C1.12; let $a_1 = -0.9$ and $b_1 = 0$, and set $\sigma^2 = 1$. Form a vector x as above, with $M = 10$, and find xf. Why is xf not a good approximation of $\phi(\omega_k)$ in this case? Repeat the experiment for $M = 127$ and $L = 256$; is the approximation better in this case? Why?

We can again work backwards from the analytical expression for $\phi(\omega)$. Form a vector yf = [φ(0), φ(2π/L), φ(4π/L), ..., φ((L−1)2π/L)] and find y=ifft(yf). Verify that y closely approximates the ACS for large L (say, L = 256), but poorly approximates the ACS for small L (say, L = 20). By citing properties of inverse DTFTs of infinite, two-sided sequences, explain how the elements of y relate to the ACS $r(k)$, and why the approximation is poor for small L. Based on this explanation, give a "rule of thumb" on the choice of L for good approximations of the ACS.

(d) We have seen above that the fft command results in spectral estimates from 0 to $2\pi$ instead of the more common range of $-\pi$ to $\pi$. The Matlab command fftshift can be used to exchange the first and second halves of the fft output to correspond to a frequency range of $-\pi$ to $\pi$.
Similarly, fftshift can be used on the output of the ifft operation to "center" the zero lag of an ACS. Experiment with fftshift to achieve both of these results. What frequency vector w is needed so that the command plot(w, fftshift(fft(x))) gives the spectral values at the proper frequencies? Similarly, what time vector t is needed to get a proper plot of the ACS with stem(t, fftshift(ifft(xf)))? Do the results depend on whether the vectors are even or odd in length?

Exercise C1.14: Relationship between the PSD and the Eigenvalues of the ACS Matrix

An interesting property of the ACS matrix $R$ in equation (1.3.5) is that for large dimensions $m$, its eigenvalues are close to the values of the PSD $\phi(\omega_k)$ for $\omega_k = 2\pi k/m$, $k = 0, 1, \ldots, m-1$ (see, e.g., [Gray 1972]). We verify this property here. Consider the ACS in Exercise C1.12, with the values $a_1 = -0.9$, $b_1 = 0.8$, and $\sigma^2 = 1$.

(a) Compute a vector phi which contains the values of $\phi(\omega_k)$ for $\omega_k = 2\pi k/m$, with $m = 256$ and $k = 0, 1, \ldots, m-1$. Plot a histogram of these values with hist(phi). Also useful is the cumulative distribution of the values of phi (plotted on a logarithmic scale), which can be found with the command semilogy((1/m:1/m:1), sort(phi)).

(b) Compute the eigenvalues of $R$ in equation (1.3.5) for various values of $m$. Plot the histogram of the eigenvalues, and their cumulative distribution. Verify that as $m$ increases, the cumulative distribution of the eigenvalues approaches the cumulative distribution of the $\phi(\omega)$ values. Similarly, the histograms also approach the histogram for $\phi(\omega)$, but it is easier to see this result using cumulative distributions than using histograms.

CHAPTER 2
Nonparametric Methods

2.1 INTRODUCTION

The nonparametric methods of spectral estimation rely entirely on the definitions (1.3.7) and (1.3.10) of the PSD to provide spectral estimates.
These methods constitute the "classical means" for PSD estimation [Jenkins and Watts 1968]. The present chapter reviews the main nonparametric methods, their properties, and the Fast Fourier Transform (FFT) algorithm for their implementation. A related discussion is found in Chapter 5, where the nonparametric approach to PSD estimation is given a filter bank interpretation.

We first introduce two common spectral estimators, the periodogram and the correlogram, derived directly from (1.3.10) and (1.3.7), respectively. These methods are then shown to be equivalent under weak conditions. The periodogram and correlogram methods provide reasonably high resolution for sufficiently long data lengths, but are poor spectral estimators because their variance is high and does not decrease with increasing data length. (In Chapter 5 we provide an interpretation of the periodogram and correlogram methods as a power estimate based on a single sample of a filtered version of the signal under study; it is thus not surprising that the periodogram or correlogram variance is large.) The high variance of the periodogram and correlogram methods motivates the development of modified methods that have lower variance, at the cost of reduced resolution. Several modified methods have been introduced, and we present some of the most popular ones. We show them all to be more or less equivalent in their properties and performance for large data lengths.

2.2 PERIODOGRAM AND CORRELOGRAM METHODS

2.2.1 Periodogram

The periodogram method relies on the definition (1.3.10) of the PSD.
Neglecting the expectation and the limit operation in (1.3.10), which cannot be performed when the only available information on the signal consists of the samples $\{y(t)\}_{t=1}^{N}$, we get

$$\hat\phi_p(\omega) = \frac{1}{N}\left| \sum_{t=1}^{N} y(t)e^{-i\omega t} \right|^2 \qquad \text{(Periodogram)} \tag{2.2.1}$$

One of the first uses of the periodogram spectral estimator (2.2.1) was in determining possible "hidden periodicities" in time series, which may be seen as a motivation for the name of this method [Schuster 1900].

2.2.2 Correlogram

The correlation-based definition (1.3.7) of the PSD leads to the correlogram spectral estimator [Blackman and Tukey 1959]:

$$\hat\phi_c(\omega) = \sum_{k=-(N-1)}^{N-1} \hat r(k)e^{-i\omega k} \qquad \text{(Correlogram)} \tag{2.2.2}$$

where $\hat r(k)$ denotes an estimate of the covariance lag $r(k)$, obtained from the available sample $\{y(1), \ldots, y(N)\}$. When no assumption is made on the signal under study, except for the stationarity assumption, there are two standard ways to obtain the sample covariances required in (2.2.2):

$$\hat r(k) = \frac{1}{N-k}\sum_{t=k+1}^{N} y(t)y^*(t-k), \quad 0 \le k \le N-1 \tag{2.2.3}$$

and

$$\hat r(k) = \frac{1}{N}\sum_{t=k+1}^{N} y(t)y^*(t-k), \quad 0 \le k \le N-1 \tag{2.2.4}$$

The sample covariances for negative lags are then constructed using the property (1.3.3) of the covariance function:

$$\hat r(-k) = \hat r^*(k), \quad k = 0, \ldots, N-1 \tag{2.2.5}$$

The estimator (2.2.3) is called the standard unbiased ACS estimate, and (2.2.4) is called the standard biased ACS estimate. The biased ACS estimate is most commonly used, for the following reasons:

• For most stationary signals, the covariance function decays rather rapidly, so that $r(k)$ is quite small for large lags $k$. Comparing the definitions (2.2.3) and (2.2.4), it can be seen that $\hat r(k)$ in (2.2.4) will be small for large $k$ (provided $N$ is reasonably large), whereas $\hat r(k)$ in (2.2.3) may take large and erratic values for large $k$, as it is obtained by averaging only a few products in such a case (in particular, only one product for $k = N-1$!).
This observation implies that (2.2.4) is likely to be a more accurate estimator of $r(k)$ than (2.2.3) for medium and large values of $k$ (compared to $N$). For small values of $k$, the two estimators in (2.2.3) and (2.2.4) can be expected to behave in a similar manner.

• The sequence $\{\hat r(k),\ k=0,\pm1,\pm2,\dots\}$ obtained with (2.2.4) is guaranteed to be positive semidefinite (as it should be; see equation (1.3.5) and the related discussion), while this is not the case for (2.2.3). This fact is especially important for PSD estimation, since a sample covariance sequence that is not positive definite, when inserted in (2.2.2), may lead to negative spectral estimates, and this is undesirable in most applications.

When the sample covariances (2.2.4) are inserted in (2.2.2), it can be shown that the so-obtained spectral estimate is identical to (2.2.1). In other words, we have the following result:

$\hat\phi_c(\omega)$ evaluated using the standard biased ACS estimates coincides with $\hat\phi_p(\omega)$. (2.2.6)

A simple proof of (2.2.6) runs as follows. Consider the signal

$$x(t)=\frac{1}{\sqrt N}\sum_{k=1}^{N}y(k)e(t-k)\qquad(2.2.7)$$

where $\{y(k)\}$ are considered to be fixed (nonrandom) constants and $e(t)$ is a white noise of unit variance: $E\{e(t)e^*(s)\}=\delta_{t,s}$ ($=1$ if $t=s$, and $=0$ otherwise). Hence $x(t)$ is the output of a filter with the following transfer function:

$$Y(\omega)=\frac{1}{\sqrt N}\sum_{k=1}^{N}y(k)e^{-i\omega k}$$

Since the PSD of the input to the filter is given by $\phi_e(\omega)=1$, it follows from (1.4.5) that

$$\phi_x(\omega)=|Y(\omega)|^2=\hat\phi_p(\omega)\qquad(2.2.8)$$

On the other hand, a straightforward calculation gives (for $k\ge 0$):

$$r_x(k)=E\{x(t)x^*(t-k)\}=\frac{1}{N}\sum_{p=1}^{N}\sum_{s=1}^{N}y(p)y^*(s)E\{e(t-p)e^*(t-k-s)\}
=\frac{1}{N}\sum_{p=1}^{N}\sum_{s=1}^{N}y(p)y^*(s)\delta_{p,k+s}
=\frac{1}{N}\sum_{p=k+1}^{N}y(p)y^*(p-k)$$

$$=\begin{cases}\hat r(k)\ \text{given by (2.2.4)}, & k=0,\dots,N-1\\ 0, & k\ge N\end{cases}\qquad(2.2.9)$$

Inserting (2.2.9) in the definition (1.3.7) of the PSD, the following alternative expression for $\phi_x(\omega)$ is obtained:

$$\phi_x(\omega)=\sum_{k=-(N-1)}^{N-1}\hat r(k)e^{-i\omega k}=\hat\phi_c(\omega)\qquad(2.2.10)$$

Comparing (2.2.8) and (2.2.10) concludes the proof of the claim (2.2.6).

The equivalence of the periodogram and correlogram spectral estimators can be used to derive their properties simultaneously. These two methods are shown in Section 2.4 to provide poor estimates of the PSD. There are two reasons for this, and both can be explained intuitively using $\hat\phi_c(\omega)$.

• The estimation errors in $\hat r(k)$ are on the order of $1/\sqrt N$ for large $N$ (see Exercise 2.4), at least for $|k|$ not too close to $N$. Because $\hat\phi_c(\omega)=\hat\phi_p(\omega)$ is a sum that involves $(2N-1)$ such covariance estimates, the difference between the true and estimated spectra will be a sum of “many small” errors. Hence there is no guarantee that the total error will die out as $N$ increases. The spectrum estimation error is even larger than what is suggested by the above discussion, because errors in $\{\hat r(k)\}$, for $|k|$ close to $N$, are typically of an order larger than $1/\sqrt N$. The consequence is that the variance of $\hat\phi_c(\omega)$ does not go to zero as $N$ increases.

• In addition, if $r(k)$ converges slowly to zero, then the periodogram estimates will be biased. Indeed, for lags $|k|\simeq N$, $\hat r(k)$ will be a poor estimate of $r(k)$ since $\hat r(k)$ is the sum of only a few lag products that are divided by $N$ (see equation (2.2.4)). Thus, $\hat r(k)$ will be much closer to zero than $r(k)$ is; in fact, $E\{\hat r(k)\}=[(N-|k|)/N]\,r(k)$, and the bias is significant for $|k|\simeq N$ if $r(k)$ is not close to zero in this region. If $r(k)$ decays rapidly to zero, the bias will be small and will not contribute significantly to the total error in $\hat\phi_c(\omega)$; however, the nonzero variance discussed above will still be present.
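The identity (2.2.6) is easy to confirm numerically; the sketch below (our own experiment, not from the text) evaluates both estimators on the Fourier grid $\omega_j = 2\pi j/N$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Periodogram (2.2.1), sampled at w_j = 2*pi*j/N via the FFT
phi_p = np.abs(np.fft.fft(y)) ** 2 / N

# Correlogram (2.2.2) built from the biased ACS estimates (2.2.4),
# folding in negative lags via r(-k) = r*(k) from (2.2.5)
r = np.array([np.sum(y[k:] * np.conj(y[:N - k])) / N for k in range(N)])
k = np.arange(1, N)
w = 2 * np.pi * np.arange(N) / N
phi_c = np.array([np.real(r[0]) + 2 * np.sum(np.real(r[1:] * np.exp(-1j * wj * k)))
                  for wj in w])
# phi_c and phi_p agree to machine precision at every grid point
```

If the unbiased estimates (2.2.3) are used instead, the two curves no longer coincide, and the correlogram can even go negative.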
Both the bias and the variance of the periodogram are discussed more quantitatively in Section 2.4.

Another intuitive explanation for the poor statistical accuracy of the periodogram and correlogram methods is given in Chapter 5, where it is shown, roughly speaking, that these methods can be viewed as procedures attempting to estimate the variance of a data sequence from a single sample.

In spite of their poor quality as spectral estimators, the periodogram and correlogram methods form the basis for the improved nonparametric spectral estimation methods to be discussed later in this chapter. As such, computation of these two basic estimators is relevant to many other nonparametric estimators derived from them. The next section addresses this computational task.

2.3 PERIODOGRAM COMPUTATION VIA FFT

In practice it is not possible to evaluate $\hat\phi_p(\omega)$ (or $\hat\phi_c(\omega)$) over a continuum of frequencies. Hence, the frequency variable must be sampled for the purpose of computing $\hat\phi_p(\omega)$. The following frequency sampling scheme is most commonly used:

$$\omega=\frac{2\pi}{N}k,\qquad k=0,\dots,N-1\qquad(2.3.1)$$

Define

$$W=e^{-i\,2\pi/N}\qquad(2.3.2)$$

Then, evaluation of $\hat\phi_p(\omega)$ (or $\hat\phi_c(\omega)$) at the frequency samples in (2.3.1) basically reduces to the computation of the following Discrete Fourier Transform (DFT):

$$Y(k)=\sum_{t=1}^{N}y(t)W^{tk},\qquad k=0,\dots,N-1\qquad(2.3.3)$$

A direct evaluation of (2.3.3) would require about $N^2$ complex multiplications and additions, which might be a prohibitive burden for large values of $N$. Any procedure that computes (2.3.3) in fewer than $N^2$ flops (1 flop = 1 complex multiplication plus 1 complex addition) is called a Fast Fourier Transform (FFT) algorithm. In recent years, there has been significant interest in developing more and more computationally efficient FFT algorithms.
In the following, we review one of the first FFT procedures, the so-called radix-2 FFT, which, while not the most computationally efficient of all, is easy to program in a computer and yet quite computationally efficient [Cooley and Tukey 1965; Proakis, Rader, Ling, and Nikias 1992].

2.3.1 Radix-2 FFT

Assume that $N$ is a power of 2:

$$N=2^m\qquad(2.3.4)$$

If this is not the case, then we can resort to zero padding, as described in the next subsection. By splitting the sum in (2.3.3) into two parts, we get

$$Y(k)=\sum_{t=1}^{N/2}y(t)W^{tk}+\sum_{t=N/2+1}^{N}y(t)W^{tk}
=\sum_{t=1}^{N/2}\left[y(t)+y(t+N/2)W^{Nk/2}\right]W^{tk}\qquad(2.3.5)$$

Next, note that

$$W^{Nk/2}=\begin{cases}1, & \text{for even } k\\ -1, & \text{for odd } k\end{cases}\qquad(2.3.6)$$

Using this simple observation in (2.3.5), we obtain:

For $k=2p=0,2,\dots$

$$Y(2p)=\sum_{t=1}^{\bar N}\left[y(t)+y(t+\bar N)\right]\bar W^{tp}\qquad(2.3.7)$$

For $k=2p+1=1,3,\dots$

$$Y(2p+1)=\sum_{t=1}^{\bar N}\left\{\left[y(t)-y(t+\bar N)\right]W^{t}\right\}\bar W^{tp}\qquad(2.3.8)$$

where $\bar N=N/2$ and $\bar W=W^2=e^{-i2\pi/\bar N}$. The above two equations are the core of the radix-2 FFT algorithm. Both of these equations represent DFTs for sequences of length $\bar N$. Computation of the sequences transformed in (2.3.7) and (2.3.8) requires roughly $\bar N$ flops. Hence, the computation of an $N$-point transform has been reduced to the evaluation of two $N/2$-point transforms plus a sequence computation requiring about $N/2$ flops. This reduction process is continued until $\bar N=1$ (which is made possible by requiring $N$ to be a power of 2).

In order to evaluate the number of flops required by a radix-2 FFT, let $c_k$ denote the computational cost (expressed in flops) of a $2^k$-point radix-2 FFT. According to the discussion in the previous paragraph, $c_k$ satisfies the following recursion:

$$c_k=2^k/2+2c_{k-1}=2^{k-1}+2c_{k-1}\qquad(2.3.9)$$

with initial condition $c_1=1$ (the number of flops required by a 2-point transform).
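Equations (2.3.7)-(2.3.8) translate directly into a recursive decimation-in-frequency FFT. The sketch below (ours) uses 0-based indexing, so the twiddle factor $W^t$ becomes $e^{-2\pi i t/N}$ for $t=0,\dots,N/2-1$:

```python
import numpy as np

def radix2_fft(y):
    """Recursive decimation-in-frequency radix-2 FFT.

    Implements (2.3.7) for the even-indexed outputs and (2.3.8) for the
    odd-indexed ones; len(y) must be a power of 2.
    """
    y = np.asarray(y, dtype=complex)
    N = len(y)
    if N == 1:
        return y.copy()
    half = N // 2
    t = np.arange(half)
    Y = np.empty(N, dtype=complex)
    Y[0::2] = radix2_fft(y[:half] + y[half:])                                  # (2.3.7)
    Y[1::2] = radix2_fft((y[:half] - y[half:]) * np.exp(-2j * np.pi * t / N))  # (2.3.8)
    return Y
```

Each recursion level costs about $N/2$ complex multiply-adds and there are $\log_2 N$ levels, which is the operation count (2.3.11) derived next.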
By iterating (2.3.9), we obtain the solution

$$c_k=k\,2^{k-1}=\tfrac{1}{2}k\,2^k\qquad(2.3.10)$$

from which it follows that $c_m=\tfrac{1}{2}m\,2^m=\tfrac{1}{2}N\log_2 N$; thus

An $N$-point radix-2 FFT requires about $\tfrac{1}{2}N\log_2 N$ flops. (2.3.11)

As a comparison, the number of complex operations required to carry out an $N$-point split-radix FFT, which at present appears to be the most practical algorithm for general-purpose computers when $N$ is a power of 2, is about $\tfrac{1}{3}N\log_2 N$ (see [Proakis, Rader, Ling, and Nikias 1992]).

2.3.2 Zero Padding

In some applications, $N$ is not a power of 2 and hence the previously described radix-2 FFT algorithm cannot be applied directly to the original data sequence. However, this is easily remedied, since we may increase the length of the given sequence by means of zero padding,

$$\{y(1),\dots,y(N),0,0,\dots\}$$

until the length of the so-obtained sequence is, say, $L$ (which is generally chosen as a power of 2). Zero padding is also useful when the frequency sampling (2.3.1) is considered too sparse to provide a good representation of the continuous-frequency estimated spectrum, for example $\hat\phi_p(\omega)$. Applying the FFT algorithm to the zero-padded data sequence gives $\hat\phi_p(\omega)$ at the frequencies

$$\omega_k=\frac{2\pi k}{L},\qquad 0\le k\le L-1$$

and may reveal finer details in the spectrum that were not visible without zero padding. Since the continuous-frequency spectral estimate $\hat\phi_p(\omega)$ is the same for both the original data sequence and the zero-padded sequence, zero padding cannot, of course, improve the spectral resolution of the periodogram methods. See [Oppenheim and Schafer 1989; Porat 1997] for further discussion.

In a zero-padded data sequence the number of nonzero data points may be considerably smaller than the total number of samples, i.e., $N\ll L$.
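Zero padding only refines the sampling grid of the same underlying $\hat\phi_p(\omega)$. A quick check (our example, with N = 16 and L = 64) shows the padded periodogram agrees exactly with the unpadded one wherever the two grids coincide:

```python
import numpy as np

N, L = 16, 64
y = np.cos(2 * np.pi * 0.2 * np.arange(N))

phi_coarse = np.abs(np.fft.fft(y)) ** 2 / N       # grid (2.3.1): N points
phi_fine = np.abs(np.fft.fft(y, n=L)) ** 2 / N    # zero padded to L points
# Since L = 4N, every 4th point of the fine grid is a coarse-grid point,
# so phi_fine[::4] reproduces phi_coarse exactly.
```

The finer grid interpolates the same continuous-frequency function; it does not narrow the $1/N$ main lobe, i.e., it does not improve resolution.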
In such a case a significant time saving can be obtained by pruning the FFT algorithm, which is done by reducing or eliminating operations on zeroes (see, e.g., [Markel 1971]). FFT pruning, along with decimation in time, can also be used to reduce the computation time when we want to evaluate the FFT only in a narrow region of the frequency domain (see [Markel 1971]).

2.4 PROPERTIES OF THE PERIODOGRAM METHOD

The analysis of the statistical properties of $\hat\phi_p(\omega)$ (or $\hat\phi_c(\omega)$) is important in that it shows the poor quality of the periodogram as an estimator of the PSD and, in addition, provides some insight into how we can modify the periodogram so as to obtain better spectral estimators. We split the analysis in two parts: bias analysis and variance analysis (see also [Priestley 1981]).

The bias and variance of an estimator are two measures often used to characterize its performance. A primary motivation is that the total squared error of the estimate is the sum of the bias squared and the variance. To see this, let $a$ denote any quantity to be estimated, and let $\hat a$ be an estimate of $a$. Then the mean squared error (MSE) of the estimate is:

$$\text{MSE}\triangleq E\{|\hat a-a|^2\}=E\{|\hat a-E\{\hat a\}+E\{\hat a\}-a|^2\}
=E\{|\hat a-E\{\hat a\}|^2\}+|E\{\hat a\}-a|^2+2\,\text{Re}\left[E\{\hat a-E\{\hat a\}\}\cdot(E\{\hat a\}-a)^*\right]$$
$$=\text{var}\{\hat a\}+|\text{bias}\{\hat a\}|^2\qquad(2.4.1)$$

(the cross term vanishes because $E\{\hat a-E\{\hat a\}\}=0$). By separately considering the bias and variance components of the MSE, we gain some additional insight into the source of error and into ways to reduce the error.

2.4.1 Bias Analysis of the Periodogram

By using the result (2.2.6), we obtain

$$E\{\hat\phi_p(\omega)\}=E\{\hat\phi_c(\omega)\}=\sum_{k=-(N-1)}^{N-1}E\{\hat r(k)\}e^{-i\omega k}\qquad(2.4.2)$$

For $\hat r(k)$ defined in (2.2.4),

$$E\{\hat r(k)\}=\left(1-\frac{k}{N}\right)r(k),\qquad k\ge 0\qquad(2.4.3)$$

and

$$E\{\hat r(-k)\}=E\{\hat r^*(k)\}=\left(1-\frac{k}{N}\right)r(-k),\qquad -k\le 0\qquad(2.4.4)$$

Hence

$$E\{\hat\phi_p(\omega)\}=\sum_{k=-(N-1)}^{N-1}\left(1-\frac{|k|}{N}\right)r(k)e^{-i\omega k}\qquad(2.4.5)$$

Define

$$w_B(k)=\begin{cases}1-\dfrac{|k|}{N}, & k=0,\pm1,\dots,\pm(N-1)\\ 0, & \text{otherwise}\end{cases}\qquad(2.4.6)$$

The above sequence is called the triangular window, or the Bartlett window. By using $w_B(k)$, we can write (2.4.5) as a DTFT:

$$E\{\hat\phi_p(\omega)\}=\sum_{k=-\infty}^{\infty}\left[w_B(k)r(k)\right]e^{-i\omega k}\qquad(2.4.7)$$

The DTFT of the product of two sequences is equal to the convolution of their respective DTFTs. Hence, (2.4.7) leads to

$$E\{\hat\phi_p(\omega)\}=\frac{1}{2\pi}\int_{-\pi}^{\pi}\phi(\psi)W_B(\omega-\psi)\,d\psi\qquad(2.4.8)$$

where $W_B(\omega)$ is the DTFT of the triangular window. For completeness, we include a direct proof of (2.4.8). Inserting (1.3.8) in (2.4.7), we get

$$E\{\hat\phi_p(\omega)\}=\sum_{k=-\infty}^{\infty}w_B(k)\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}\phi(\psi)e^{i\psi k}\,d\psi\right]e^{-i\omega k}\qquad(2.4.9)$$
$$=\frac{1}{2\pi}\int_{-\pi}^{\pi}\phi(\psi)\left[\sum_{k=-\infty}^{\infty}w_B(k)e^{-ik(\omega-\psi)}\right]d\psi\qquad(2.4.10)$$
$$=\frac{1}{2\pi}\int_{-\pi}^{\pi}\phi(\psi)W_B(\omega-\psi)\,d\psi\qquad(2.4.11)$$

which is (2.4.8).

We can find an explicit expression for $W_B(\omega)$ as follows. A straightforward calculation gives

$$W_B(\omega)=\sum_{k=-(N-1)}^{N-1}\frac{N-|k|}{N}e^{-i\omega k}\qquad(2.4.12)$$
$$=\frac{1}{N}\sum_{t=1}^{N}\sum_{s=1}^{N}e^{-i\omega(t-s)}=\frac{1}{N}\left|\sum_{t=1}^{N}e^{i\omega t}\right|^2\qquad(2.4.13)$$
$$=\frac{1}{N}\left|\frac{e^{i\omega N}-1}{e^{i\omega}-1}\right|^2=\frac{1}{N}\left|\frac{e^{i\omega N/2}-e^{-i\omega N/2}}{e^{i\omega/2}-e^{-i\omega/2}}\right|^2\qquad(2.4.14)$$

or, in final form,

$$W_B(\omega)=\frac{1}{N}\left[\frac{\sin(\omega N/2)}{\sin(\omega/2)}\right]^2\qquad(2.4.15)$$

$W_B(\omega)$ is sometimes referred to as the Fejér kernel. As an illustration, $W_B(\omega)$ is displayed as a function of $\omega$, for $N=25$, in Figure 2.1.

[Figure 2.1. $W_B(\omega)/W_B(0)$, for $N=25$ (in dB, versus angular frequency).]

The convolution formula (2.4.8) is the key equation for understanding the behavior of the mean estimated spectrum $E\{\hat\phi_p(\omega)\}$. In order to facilitate the interpretation of this equation, the reader may think of it as representing a dynamical system with “input” $\phi(\omega)$, “weighting function” $W_B(\omega)$ and “output” $E\{\hat\phi_p(\omega)\}$. Note that a similar equation would be obtained if the covariance estimator (2.2.3) were used in $\hat\phi_c(\omega)$, in lieu of (2.2.4).
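The closed form (2.4.15) can be checked against the defining sum (2.4.12). The helper below (our naming) handles the removable singularity at $\omega=0$, where $W_B(0)=N$:

```python
import numpy as np

def fejer_kernel(omega, N):
    """Bartlett-window DTFT W_B(omega) of (2.4.15); the omega = 0 limit is N."""
    omega = np.atleast_1d(np.asarray(omega, dtype=float))
    out = np.full(omega.shape, float(N))
    nz = np.abs(np.sin(omega / 2)) > 1e-12
    out[nz] = (np.sin(omega[nz] * N / 2) / np.sin(omega[nz] / 2)) ** 2 / N
    return out

# Direct evaluation of the sum (2.4.12) for N = 25 (w_B is even, so the
# imaginary parts cancel and cos suffices)
N = 25
k = np.arange(-(N - 1), N)
wB = (N - np.abs(k)) / N
omega = np.linspace(-3.0, 3.0, 7)
direct = np.array([np.sum(wB * np.cos(om * k)) for om in omega])
```

Both evaluations agree at every test frequency, confirming the algebra leading from (2.4.12) to (2.4.15).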
As in that case $E\{\hat r(k)\}=r(k)$, the corresponding $W(\omega)$ function that would appear in (2.4.8) is the DTFT of the rectangular window

$$w_R(k)=\begin{cases}1, & k=0,\pm1,\dots,\pm(N-1)\\ 0, & \text{otherwise}\end{cases}\qquad(2.4.16)$$

A straightforward calculation gives

$$W_R(\omega)=\sum_{k=-(N-1)}^{N-1}e^{-i\omega k}=2\,\text{Re}\left[\frac{e^{iN\omega}-1}{e^{i\omega}-1}\right]-1
=\frac{2\cos\left[\frac{(N-1)\omega}{2}\right]\sin\frac{N\omega}{2}}{\sin\frac{\omega}{2}}-1
=\frac{\sin\left[\left(N-\frac{1}{2}\right)\omega\right]}{\sin\frac{\omega}{2}}\qquad(2.4.17)$$

which is displayed in Figure 2.2 (for $N=25$, to facilitate comparison with $W_B(\omega)$). $W_R(\omega)$ is sometimes called the Dirichlet kernel. As can be seen, there are no “essential” differences between $W_R(\omega)$ and $W_B(\omega)$. For conciseness, in the following we focus on the use of the triangular window.

[Figure 2.2. $W_R(\omega)/W_R(0)$, for $N=25$ (in dB, versus angular frequency).]

Since we would like $E\{\hat\phi_p(\omega)\}$ to be as close to $\phi(\omega)$ as possible, it follows from (2.4.8) that $W_B(\omega)$ should be a close approximation to a Dirac impulse. The half-power (3 dB) width of the main lobe of $W_B(\omega)$ can be shown to be approximately $2\pi/N$ radians (see Exercise 2.15), so in frequency units (with $f=\omega/2\pi$)

main lobe width in frequency $f\simeq 1/N$ (2.4.18)

(Also see the calculation of the time-bandwidth product for windows in the next section, which supports (2.4.18).) It follows from (2.4.18) that $W_B(\omega)$ is a poor approximation of a Dirac impulse for small values of $N$. In addition, unlike the Dirac delta function, $W_B(\omega)$ has a large number of sidelobes.

It follows that the bias of the periodogram spectral estimate can basically be divided into two components, corresponding respectively to the nonzero main lobe width and the nonzero sidelobe height of the window function $W_B(\omega)$, as we explain below.

The principal effect of the main lobe of $W_B(\omega)$ is to smear or smooth the estimated spectrum. Assume, for instance, that $\phi(\omega)$ has two peaks separated in frequency $f$ by less than $1/N$.
Then these two peaks appear as a single broader peak in $E\{\hat\phi_p(\omega)\}$, since (see (2.4.8)) the “response” of the “system” corresponding to $W_B(\omega)$ to the first peak does not get the time to die out before the “response” to the second peak starts. This kind of effect of the main lobe on the estimated spectrum is called smearing. Owing to smearing, the periodogram-based methods cannot resolve details in the studied spectrum that are separated by less than $1/N$ in cycles per sampling interval. For this reason, $1/N$ is called the spectral resolution limit of the periodogram method.

Remark: The previous comments on resolution give us the occasion to stress that, in spite of the fact that we have seen the PSD as a function of the angular frequency ($\omega$), we generally refer to the resolution in frequency ($f$) in units of cycles per sampling interval. Of course, the “resolution in angular frequency” is determined from the “resolution in frequency” by the simple relation $\omega=2\pi f$. ■

The principal effect of the sidelobes on the estimated spectrum consists of transferring power from the frequency bands that concentrate most of the power in the signal to bands that contain less or no power. This effect is called leakage. For instance, a dominant peak in $\phi(\omega)$ may, through convolution with the sidelobes of $W_B(\omega)$, lead to an estimated spectrum that contains power in frequency bands where $\phi(\omega)$ is zero. Note that the smearing effect associated with the main lobe can also be interpreted as a form of leakage from a local peak of $\phi(\omega)$ to neighboring frequency bands.

It follows from the previous discussion that smearing and leakage are particularly critical for spectra with large amplitude ranges, such as peaky spectra. For smooth spectra, these effects are less important.
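The $1/N$ resolution limit can be illustrated with two equal-amplitude sinusoids (an illustrative experiment of ours, not from the text): at a separation of $0.3/N$ cycles per sample they merge into a single peak, while at $3/N$ they are clearly resolved:

```python
import numpy as np

def periodogram_peaks(N, df, pad=4096):
    """Count prominent peaks (local maxima above half the global maximum)
    in the zero-padded periodogram of two unit sinusoids at f = 0.2 and 0.2 + df."""
    t = np.arange(N)
    y = np.cos(2 * np.pi * 0.2 * t) + np.cos(2 * np.pi * (0.2 + df) * t)
    phi = np.abs(np.fft.fft(y, n=pad)) ** 2 / N
    band = phi[: pad // 2]                       # f in [0, 0.5)
    mid = band[1:-1]
    is_peak = (mid > band[:-2]) & (mid > band[2:]) & (mid > 0.5 * band.max())
    return int(np.count_nonzero(is_peak))
```

Zero padding (pad = 4096) only draws the spectrum on a dense grid; it is the data length N that decides whether the peaks separate.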
In particular, we see from (2.4.7) that for white noise (which has a maximally smooth spectrum) the periodogram is an unbiased spectral estimator: $E\{\hat\phi_p(\omega)\}=\phi(\omega)$ (see also Exercise 2.9).

The bias of the periodogram estimator, even though it might be severe for spectra with large dynamic ranges when the sample length is small, does not constitute the main limitation of this spectral estimator. In fact, if the bias were the only problem, then by increasing $N$ (assuming this is possible) the bias in $\hat\phi_p(\omega)$ would be eliminated. In order to see this, note from (2.4.5), for example, that

$$\lim_{N\to\infty}E\{\hat\phi_p(\omega)\}=\phi(\omega)$$

Hence, the periodogram is an asymptotically unbiased spectral estimator. The main problem of the periodogram method lies in its large variance, as explained next.

2.4.2 Variance Analysis of the Periodogram

The finite-sample variance of $\hat\phi_p(\omega)$ can be easily established only in some specific cases, such as the case of Gaussian white noise. The asymptotic variance of $\hat\phi_p(\omega)$, however, can be derived for more general signals. In the following, we present an asymptotic (for $N\gg 1$) analysis of the variance of $\hat\phi_p(\omega)$, since it turns out to be sufficient for showing the poor statistical accuracy of the periodogram (for a finite-sample analysis, see Exercise 2.13). Some preliminary discussion is required.

A sequence $\{e(t)\}$ is called complex (or circular) white noise if it satisfies

$$E\{e(t)e^*(s)\}=\sigma^2\delta_{t,s},\qquad E\{e(t)e(s)\}=0\ \text{for all } t \text{ and } s\qquad(2.4.19)$$

Note that $\sigma^2=E\{|e(t)|^2\}$ is the variance (or power) of $e(t)$. Equation (2.4.19) can be rewritten as

$$E\{\text{Re}[e(t)]\,\text{Re}[e(s)]\}=\frac{\sigma^2}{2}\delta_{t,s},\qquad
E\{\text{Im}[e(t)]\,\text{Im}[e(s)]\}=\frac{\sigma^2}{2}\delta_{t,s},\qquad
E\{\text{Re}[e(t)]\,\text{Im}[e(s)]\}=0\qquad(2.4.20)$$

Hence, the real and imaginary parts of a complex/circular white noise are real-valued white noise sequences of identical power equal to $\sigma^2/2$, and uncorrelated with one another. See Appendix B for more details on circular random sequences such as $\{e(t)\}$ above.
In what follows, we shall also make use of the symbol $O(1/N^\alpha)$, for some $\alpha>0$, to denote a random variable such that the square root of its second-order moment goes to zero at least as fast as $1/N^\alpha$ as $N$ tends to infinity.

First, we establish the asymptotic variance/covariance of $\hat\phi_p(\omega)$ in the case of Gaussian complex/circular white noise. The following result holds:

$$\lim_{N\to\infty}E\left\{\left[\hat\phi_p(\omega_1)-\phi(\omega_1)\right]\left[\hat\phi_p(\omega_2)-\phi(\omega_2)\right]\right\}=\begin{cases}\phi^2(\omega_1), & \omega_1=\omega_2\\ 0, & \omega_1\ne\omega_2\end{cases}\qquad(2.4.21)$$

Note that, for white noise, $\phi(\omega)=\sigma^2$ (for all $\omega$). Since $\lim_{N\to\infty}E\{\hat\phi_p(\omega)\}=\phi(\omega)$ (cf. the analysis in the previous subsection), in order to prove (2.4.21) it suffices to show that

$$\lim_{N\to\infty}E\{\hat\phi_p(\omega_1)\hat\phi_p(\omega_2)\}=\phi(\omega_1)\phi(\omega_2)+\phi^2(\omega_1)\delta_{\omega_1,\omega_2}\qquad(2.4.22)$$

From (2.2.1), we obtain

$$E\{\hat\phi_p(\omega_1)\hat\phi_p(\omega_2)\}=\frac{1}{N^2}\sum_{t=1}^{N}\sum_{s=1}^{N}\sum_{p=1}^{N}\sum_{m=1}^{N}E\{e(t)e^*(s)e(p)e^*(m)\}\,e^{-i\omega_1(t-s)}e^{-i\omega_2(p-m)}\qquad(2.4.23)$$

For general random processes, the evaluation of the expectation in (2.4.23) is relatively complicated. However, the following general result for Gaussian random variables can be used: if $a$, $b$, $c$, and $d$ are jointly Gaussian (complex or real) random variables, then

$$E\{abcd\}=E\{ab\}E\{cd\}+E\{ac\}E\{bd\}+E\{ad\}E\{bc\}-2E\{a\}E\{b\}E\{c\}E\{d\}\qquad(2.4.24)$$

For a proof of (2.4.24), see, e.g., [Janssen and Stoica 1988] and references therein.
Thus, if the white noise $e(t)$ is Gaussian as assumed, the fourth-order moment in (2.4.23) is found to be

$$E\{e(t)e^*(s)e(p)e^*(m)\}=E\{e(t)e^*(s)\}E\{e(p)e^*(m)\}+E\{e(t)e(p)\}E\{e(s)e(m)\}^*+E\{e(t)e^*(m)\}E\{e^*(s)e(p)\}$$
$$=\sigma^4\left(\delta_{t,s}\delta_{p,m}+\delta_{t,m}\delta_{s,p}\right)\qquad(2.4.25)$$

Inserting (2.4.25) in (2.4.23) gives

$$E\{\hat\phi_p(\omega_1)\hat\phi_p(\omega_2)\}=\sigma^4+\frac{\sigma^4}{N^2}\sum_{t=1}^{N}\sum_{s=1}^{N}e^{-i(\omega_1-\omega_2)(t-s)}
=\sigma^4+\frac{\sigma^4}{N^2}\left|\sum_{t=1}^{N}e^{i(\omega_1-\omega_2)t}\right|^2
=\sigma^4+\frac{\sigma^4}{N^2}\left[\frac{\sin[(\omega_1-\omega_2)N/2]}{\sin[(\omega_1-\omega_2)/2]}\right]^2\qquad(2.4.26)$$

The limit of the second term in (2.4.26) is $\sigma^4$ when $\omega_1=\omega_2$ and zero otherwise, and (2.4.22) follows at once.

Remark: Note that in the previous case, it was indeed possible to derive the finite-sample variance of $\hat\phi_p(\omega)$. For colored noise the above derivation becomes more difficult, and a different approach (presented below) is needed. See Exercise 2.13 for yet another approach that applies to general Gaussian signals. ■

Next, we consider the case of a much more general signal obtained by linearly filtering the Gaussian white noise sequence $\{e(t)\}$ considered above:

$$y(t)=\sum_{k=1}^{\infty}h_k e(t-k)\qquad(2.4.27)$$

whose PSD is given by

$$\phi_y(\omega)=|H(\omega)|^2\phi_e(\omega)\qquad(2.4.28)$$

(cf. (1.4.9)). Here $H(\omega)=\sum_{k=1}^{\infty}h_k e^{-i\omega k}$. The following intermediate result, concerned with signals of the above type, appears to have an independent interest. (Below, we omit the index “p” of $\hat\phi_p(\omega)$ in order to simplify the notation.)

For $N\gg 1$,
$$\hat\phi_y(\omega)=|H(\omega)|^2\hat\phi_e(\omega)+O(1/\sqrt N)\qquad(2.4.29)$$

Hence, the periodograms approximately satisfy an equation of the form of (2.4.28) that is satisfied by the true PSDs.
In order to prove (2.4.29), first observe that

$$\frac{1}{\sqrt N}\sum_{t=1}^{N}y(t)e^{-i\omega t}=\frac{1}{\sqrt N}\sum_{t=1}^{N}\sum_{k=1}^{\infty}h_k e(t-k)e^{-i\omega(t-k)}e^{-i\omega k}
=\frac{1}{\sqrt N}\sum_{k=1}^{\infty}h_k e^{-i\omega k}\sum_{p=1-k}^{N-k}e(p)e^{-i\omega p}$$
$$=\frac{1}{\sqrt N}\sum_{k=1}^{\infty}h_k e^{-i\omega k}\left[\sum_{p=1}^{N}e(p)e^{-i\omega p}+\sum_{p=1-k}^{0}e(p)e^{-i\omega p}-\sum_{p=N-k+1}^{N}e(p)e^{-i\omega p}\right]$$
$$\triangleq H(\omega)\left[\frac{1}{\sqrt N}\sum_{p=1}^{N}e(p)e^{-i\omega p}\right]+\rho(\omega)\qquad(2.4.30)$$

where

$$\rho(\omega)=\frac{1}{\sqrt N}\sum_{k=1}^{\infty}h_k e^{-i\omega k}\left[\sum_{p=1-k}^{0}e(p)e^{-i\omega p}-\sum_{p=N-k+1}^{N}e(p)e^{-i\omega p}\right]
\triangleq\frac{1}{\sqrt N}\sum_{k=1}^{\infty}h_k e^{-i\omega k}\varepsilon_k(\omega)\qquad(2.4.31)$$

Next, note that $E\{\varepsilon_k(\omega)\}=0$, $E\{\varepsilon_k(\omega)\varepsilon_j(\omega)\}=0$ for all $k$ and $j$, and $E\{\varepsilon_k(\omega)\varepsilon_j^*(\omega)\}=2\sigma^2\min(k,j)$, which imply $E\{\rho(\omega)\}=0$, $E\{\rho^2(\omega)\}=0$, and

$$E\{|\rho(\omega)|^2\}=\frac{1}{N}\sum_{k=1}^{\infty}\sum_{j=1}^{\infty}h_k e^{-i\omega k}h_j^* e^{i\omega j}E\{\varepsilon_k(\omega)\varepsilon_j^*(\omega)\}
=\frac{2\sigma^2}{N}\sum_{k=1}^{\infty}h_k e^{-i\omega k}\left[\sum_{j=1}^{k}h_j^* e^{i\omega j}j+\sum_{j=k+1}^{\infty}h_j^* e^{i\omega j}k\right]$$
$$\le\frac{2\sigma^2}{N}\sum_{k=1}^{\infty}|h_k|\left[\sum_{j=1}^{\infty}|h_j|\,j+\sum_{j=1}^{\infty}|h_j|\,k\right]
=\frac{4\sigma^2}{N}\left(\sum_{k=1}^{\infty}|h_k|\right)\left(\sum_{j=1}^{\infty}|h_j|\,j\right)$$

If $\sum_{k=1}^{\infty}k|h_k|$ is finite (which, for example, is true if $\{h_k\}$ is exponentially stable; see [Söderström and Stoica 1989]), we have

$$E\{|\rho(\omega)|^2\}\le\frac{\text{constant}}{N}\qquad(2.4.32)$$

Now, from (2.4.30) we obtain

$$\hat\phi_y(\omega)=|H(\omega)|^2\hat\phi_e(\omega)+\gamma(\omega)\qquad(2.4.33)$$

where

$$\gamma(\omega)=H^*(\omega)E^*(\omega)\rho(\omega)+H(\omega)E(\omega)\rho^*(\omega)+\rho(\omega)\rho^*(\omega)$$

and where

$$E(\omega)=\frac{1}{\sqrt N}\sum_{t=1}^{N}e(t)e^{-i\omega t}$$

Since $E(\omega)$ and $\rho(\omega)$ are linear combinations of Gaussian random variables, they are also Gaussian distributed. This means that the fourth-order moment formula (2.4.24) can be used to obtain the second-order moment of $\gamma(\omega)$. By doing so, and also by using (2.4.32) and the fact that, for example,

$$|E\{\rho(\omega)E^*(\omega)\}|\le\left[E\{|\rho(\omega)|^2\}\right]^{1/2}\left[E\{|E(\omega)|^2\}\right]^{1/2}=\frac{\text{constant}}{\sqrt N}\cdot\left[E\{\hat\phi_e(\omega)\}\right]^{1/2}=\frac{\text{constant}}{\sqrt N}$$

we can verify that $\gamma(\omega)=O(1/\sqrt N)$, and hence the proof of (2.4.29) is concluded.

The main result of this section is derived by combining (2.4.21) and (2.4.29).
The asymptotic variance/covariance result (2.4.21) is also valid for a general linear signal as defined in (2.4.27). (2.4.34)

Remark: In the introduction to Chapter 1, we mentioned that the analysis of a complex-valued signal is not always more general than the analysis of the corresponding real-valued signal; we supported this claim by the example of a complex sine wave. Here we have another instance where the claim is valid. Similarly to the complex sinusoidal signal case, the complex (or circular) white noise does not specialize, in a direct manner, to real white noise. Indeed, if we were to let $e(t)$ in (2.4.19) be real, then the two equations in (2.4.19) would conflict with each other (for $t=s$). The real white noise random process is a stationary signal which satisfies

$$E\{e(t)e(s)\}=\sigma^2\delta_{t,s}\qquad(2.4.35)$$

If we try to carry out the proof of (2.4.21) under (2.4.35), then we find that the proof has to be modified. This was expected: both $\phi(\omega)$ and $\hat\phi_p(\omega)$ are even functions in the real-valued case; hence (2.4.21) should be modified to include the case of both $\omega_1=\omega_2$ and $\omega_1=-\omega_2$. ■

It follows from (2.4.34) that for a fairly general class of signals, the periodogram values are asymptotically (for $N\gg 1$) uncorrelated random variables whose means and standard deviations are both equal to the corresponding true PSD values. Hence, the periodogram is an inconsistent spectral estimator which continues to fluctuate around the true PSD, with a nonzero variance, even if the length of the processed sample increases without bound. Furthermore, the fact that the periodogram values $\hat\phi_p(\omega)$ are uncorrelated (for large $N$) makes the periodogram exhibit an erratic behavior, similar to that of a white noise realization. These facts constitute the main limitations of the periodogram approach to PSD estimation. In the next sections, we present several modified periodogram-based methods which attempt to cure the aforementioned difficulties of the basic periodogram approach.
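The inconsistency is easy to observe in simulation (our experiment, not from the text): for circular Gaussian white noise with $\sigma^2=1$, the sample standard deviation of $\hat\phi_p$ at a fixed frequency stays near $\phi(\omega)=1$ no matter how large $N$ is:

```python
import numpy as np

rng = np.random.default_rng(2)

def periodogram_spread(N, trials=2000, bin_index=5):
    """Sample std of periodogram values of unit-power circular white noise
    at one fixed frequency bin, across independent realizations."""
    e = (rng.standard_normal((trials, N))
         + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)  # E|e|^2 = 1
    phi = np.abs(np.fft.fft(e, axis=1)[:, bin_index]) ** 2 / N
    return phi.std()

spread_64, spread_1024 = periodogram_spread(64), periodogram_spread(1024)
# Both stds hover around 1 = phi(w): sixteen times more data, same spread.
```

This is the behavior predicted by (2.4.21): the standard deviation equals the quantity being estimated, independently of $N$.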
As we shall see, the “improved methods” decrease the variance of the estimated spectrum at the expense of increasing its bias (and, hence, decreasing the average resolution).

2.5 THE BLACKMAN–TUKEY METHOD

In this section we develop the Blackman–Tukey method [Blackman and Tukey 1959] and compare it to the periodogram. In later sections we consider several other refined periodogram-based methods that, like the Blackman–Tukey (BT) method, seek to reduce the statistical variability of the estimated spectrum; we will compare these methods to one another and to the Blackman–Tukey method.

2.5.1 The Blackman–Tukey Spectral Estimate

As we have seen, the main problem with the periodogram is the high statistical variability of this spectral estimator, even for very large sample lengths. The poor statistical quality of the periodogram PSD estimator has been intuitively explained as arising from both the poor accuracy of $\hat r(k)$ in $\hat\phi_c(\omega)$ for extreme lags ($|k|\simeq N$) and the large number of (even if small) covariance estimation errors that are cumulatively summed up in $\hat\phi_c(\omega)$. Both these effects may be reduced by truncating the sum in the definition formula (2.2.2) of $\hat\phi_c(\omega)$. Following this idea leads to the Blackman–Tukey estimator, which is given by

$$\hat\phi_{BT}(\omega)=\sum_{k=-(M-1)}^{M-1}w(k)\hat r(k)e^{-i\omega k}\qquad(2.5.1)$$

where $\{w(k)\}$ is an even function (i.e., $w(-k)=w(k)$) such that $w(0)=1$, $w(k)=0$ for $|k|\ge M$, and $w(k)$ decays smoothly to zero with $k$, and where $M<N$. Since $w(k)$ in (2.5.1) weights the lags of the sample covariance sequence, it is called a lag window. If $w(k)$ in (2.5.1) is selected as the rectangular window, then we simply obtain a truncated version of $\hat\phi_c(\omega)$.
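A direct implementation of (2.5.1) can be sketched as follows (our code; the grid size and the specific choice of a triangular lag window $w(k)=1-|k|/M$ are ours, not prescribed by the text):

```python
import numpy as np

def blackman_tukey(y, M, n_grid=256):
    """Blackman-Tukey PSD estimate (2.5.1) with a triangular lag window
    w(k) = 1 - |k|/M and the biased ACS estimates (2.2.4).

    Returns the estimate on the grid omega_j = 2*pi*j/n_grid; requires M < N.
    """
    y = np.asarray(y, dtype=complex)
    N = len(y)
    assert M < N
    r = np.array([np.sum(y[k:] * np.conj(y[:N - k])) / N for k in range(M)])
    w = 1.0 - np.arange(M) / M            # w(0), ..., w(M-1)
    wr = w * r
    k = np.arange(1, M)
    omega = 2 * np.pi * np.arange(n_grid) / n_grid
    return np.array([np.real(wr[0]) + 2 * np.sum(np.real(wr[1:] * np.exp(-1j * om * k)))
                     for om in omega])
```

Because the triangular lag window is a positive semidefinite sequence (its DTFT is the nonnegative kernel (2.4.15) with $N$ replaced by $M$), the resulting estimate is guaranteed nonnegative, a point made precise in (2.5.5) below; smaller $M$ gives a smoother, lower-variance estimate at the cost of resolution.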
However, we may choose $w(k)$ in many other ways, and this flexibility may be employed to improve the accuracy of the Blackman–Tukey spectral estimator or to emphasize some of its characteristics that are of particular interest in a given application. In the following subsections, we address the principal issues which concern the problem of window selection. Before doing so, however, we rewrite (2.5.1) in an alternative form that will be used in several places in the discussion that follows.

Let $W(\omega)$ denote the DTFT of $w(k)$:

$$W(\omega)=\sum_{k=-\infty}^{\infty}w(k)e^{-i\omega k}=\sum_{k=-(M-1)}^{M-1}w(k)e^{-i\omega k}\qquad(2.5.2)$$

Making use of the DTFT property that led to (2.4.8), we can then write

$$\hat\phi_{BT}(\omega)=\sum_{k=-\infty}^{\infty}w(k)\hat r(k)e^{-i\omega k}$$

which is the DTFT of the product of the sequences $\{\dots,0,0,w(-(M-1)),\dots,w(M-1),0,0,\dots\}$ and $\{\dots,0,0,\hat r(-(N-1)),\dots,\hat r(N-1),0,0,\dots\}$, and hence equals the convolution of their DTFTs. As the DTFT of the second sequence is $\hat\phi_p(\omega)$, we obtain

$$\hat\phi_{BT}(\omega)=\hat\phi_p(\omega)*W(\omega)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\hat\phi_p(\psi)W(\omega-\psi)\,d\psi\qquad(2.5.3)$$

This equation is analogous to (2.4.8) and can be interpreted in the same way. Hence, since for most windows in common use $W(\omega)$ has a dominant, relatively narrow peak at $\omega=0$, it follows from (2.5.3) that

The Blackman–Tukey spectral estimator (2.5.1) corresponds to a “locally” weighted average of the periodogram. (2.5.4)

Since the function $W(\omega)$ in (2.5.3) acts as a window (or weighting) in the frequency domain, it is sometimes called a spectral window. As we shall see, several refined periodogram-based spectral estimators discussed in what follows can be given an interpretation similar to that afforded by (2.5.3).

The form (2.5.3) under which the Blackman–Tukey spectral estimator has been put is quite appealing from an intuitive standpoint. The main problem with the periodogram lies in its large variations about the true PSD.
The weighted average in (2.5.3), in the neighborhood of the current frequency point $\omega$, should smooth the periodogram and hence eliminate its large fluctuations. On the other hand, this smoothing by the spectral window $W(\omega)$ will also have the undesirable effect of reducing the resolution. We may expect that the smaller the $M$, the larger the reduction in variance and the lower the resolution. These qualitative arguments may be made exact by a statistical analysis of $\hat\phi_{BT}(\omega)$, similar to that in the previous section. In fact, it is clear from (2.5.3) that the mean and variance of $\hat\phi_{BT}(\omega)$ can be derived from those of $\hat\phi_p(\omega)$. Roughly speaking, the results that can be established by the analysis of $\hat\phi_{BT}(\omega)$, based on (2.5.3), show that the resolution of this spectral estimator is on the order of $1/M$, whereas its variance is on the order of $M/N$. The compromise between resolution and variance, which should be considered when choosing the window's length, is clearly seen from the above considerations. We will look at the resolution-variance tradeoff in more detail in what follows. The next discussion addresses some of the main issues which concern window design.

2.5.2 Nonnegativeness of the Blackman–Tukey Spectral Estimate

Since $\phi(\omega)\ge 0$, it is natural to also require that $\hat\phi_{BT}(\omega)\ge 0$. The lag window can be selected to achieve this desirable property of the estimated spectrum. The following result holds true:

If the lag window $\{w(k)\}$ is positive semidefinite (i.e., $W(\omega)\ge 0$), then the windowed covariance sequence $\{w(k)\hat r(k)\}$ (with $\hat r(k)$ given by (2.2.4)) is positive semidefinite too, which implies that $\hat\phi_{BT}(\omega)\ge 0$ for all $\omega$. (2.5.5)

In order to prove the above result, first note that $\hat\phi_{BT}(\omega)\ge 0$ if and only if the sequence $\{\dots,0,0,w(-(M-1))\hat r(-(M-1)),\dots,w(M-1)\hat r(M-1),0,0,\dots\}$ is positive semidefinite or, equivalently, the following Toeplitz matrix is positive semidefinite for all dimensions:

$$\begin{bmatrix}
w(0)\hat r(0) & \cdots & w(M-1)\hat r(M-1) & 0 & \cdots\\
\vdots & \ddots & & \ddots & \\
w(-M+1)\hat r(-M+1) & & \ddots & & w(M-1)\hat r(M-1)\\
0 & \ddots & & \ddots & \vdots\\
\vdots & & w(-M+1)\hat r(-M+1) & \cdots & w(0)\hat r(0)
\end{bmatrix}$$

This matrix equals the Hadamard (element-wise) product, denoted $\odot$, of the two Toeplitz matrices built in the same way from $\{w(k)\}$ and from $\{\hat r(k)\}$, respectively. By a result in matrix theory, the Hadamard product of two positive semidefinite matrices is also a positive semidefinite matrix (see Result R19 in Appendix A). Thus, the proof of (2.5.5) is concluded.

Another, perhaps simpler, proof of (2.5.5) makes use of (2.5.3) in the following way. Since the sequence $\{w(k)\}$ is real and symmetric about the point $k=0$, its DTFT $W(\omega)$ is an even, real-valued function. Furthermore, if $\{w(k)\}$ is a positive semidefinite sequence, then $W(\omega)\ge 0$ for all $\omega$ (see Exercise 1.8). By (2.5.3), $W(\omega)\ge 0$ immediately implies $\hat\phi_{BT}(\omega)\ge 0$, as $\hat\phi_p(\omega)\ge 0$ by definition.

It should be noted that some lag windows, such as the rectangular window, do not satisfy the assumption made in (2.5.5), and hence their use may lead to estimated spectra that take negative values. The Bartlett window, on the other hand, is positive semidefinite (as can be seen from (2.4.15)).

2.6 WINDOW DESIGN CONSIDERATIONS

The properties of the Blackman–Tukey estimator (and of other refined periodogram methods discussed in the next section) are directly related to the choice of the lag window. In this section, we discuss several relevant properties of windows that are useful in selecting or designing a window to use in a refined spectral estimation procedure.
2.6.1 Time–Bandwidth Product and Resolution–Variance Tradeoffs in Window Design

Most windows are such that they take only nonnegative values in both the time and frequency domains (or, if they also take negative values, these are much smaller than the positive values of the window). In addition, they peak at the origin in both domains. For this type of window, it is possible to define an equivalent time width, $N_e$, and an equivalent bandwidth, $\beta_e$, as follows:
$$N_e = \frac{\sum_{k=-(M-1)}^{M-1} w(k)}{w(0)} \qquad (2.6.1)$$
and
$$\beta_e = \frac{\frac{1}{2\pi}\int_{-\pi}^{\pi} W(\omega)\,d\omega}{W(0)} \qquad (2.6.2)$$
From the definitions of the direct and inverse DTFTs, we obtain
$$W(0) = \sum_{k=-\infty}^{\infty} w(k) = \sum_{k=-(M-1)}^{M-1} w(k) \qquad (2.6.3)$$
and
$$w(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi} W(\omega)\,d\omega \qquad (2.6.4)$$
Using (2.6.3) and (2.6.4) in (2.6.1) and (2.6.2) gives the following result.

The (equivalent) time–bandwidth product equals unity:
$$N_e \beta_e = 1 \qquad (2.6.5)$$

As already indicated, the result above applies to window-like signals. Some extended results of the time–bandwidth product type, which apply to more general classes of signals, are presented in Complement 2.8.5. It is clearly seen from (2.6.5) that a window cannot be both time-limited and band-limited. The more slowly the window decays to zero in one domain, the more concentrated it is in the other domain.

The simple result (2.6.5) has several other interesting consequences, as explained below. The equivalent temporal extent (or aperture), $N_e$, of $w(k)$ is essentially determined by the window's length. For example, for a rectangular window we have $N_e \simeq 2M$, whereas for a triangular window $N_e \simeq M$. This observation, together with (2.6.5), implies that the equivalent bandwidth $\beta_e$ is basically determined by the window's length. More precisely, $\beta_e = O(1/M)$. This fact lends support to a claim made previously that, for a window which concentrates most of its energy in its main lobe, the width of that lobe should be on the order of $1/M$.
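The identity (2.6.5) is easy to verify numerically. The sketch below (assuming NumPy; the window length and frequency-grid size are arbitrary choices) computes $N_e$ and $\beta_e$ for a triangular lag window and checks that their product is one, with $N_e \simeq M$ as stated above.

```python
import numpy as np

M = 25
k = np.arange(-(M - 1), M)              # lags -(M-1)..(M-1)
w = (M - np.abs(k)) / M                 # Bartlett (triangular) lag window

# Equivalent time width (2.6.1): Ne = sum_k w(k) / w(0)
Ne = w.sum() / w[M - 1]                 # w[M-1] is w(0) = 1

# Equivalent bandwidth (2.6.2), evaluated on a dense frequency grid;
# the grid mean of W(omega) approximates (1/2pi) * integral over [-pi, pi]
L = 4096
omega = 2 * np.pi * np.arange(L) / L
W = (w[None, :] * np.exp(-1j * omega[:, None] * k[None, :])).sum(axis=1).real
beta_e = W.mean() / W[0]

print(Ne, beta_e, Ne * beta_e)          # product should equal 1, per (2.6.5)
```

Note that $N_e \beta_e = 1$ holds exactly here, since the grid mean of $W(\omega)$ recovers $w(0)$ and $W(0)$ equals $\sum_k w(k)$, mirroring (2.6.3)–(2.6.4).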
Since the main lobe's width sets a limit on the spectral resolution achievable (as explained in Section 2.4), the above observation shows that the spectral resolution limit of a windowed method should be on the order of $1/M$. On the other hand, as explained in the previous section, the statistical variance of such a method is essentially proportional to $M/N$. Hence, we have reached the following conclusion.

The choice of the window's length should be based on a tradeoff between spectral resolution and statistical variance. (2.6.6)

As a rule of thumb, we should choose $M \leq N/10$ in order to reduce the standard deviation of the estimated spectrum at least three times, compared with the periodogram.

Once $M$ is determined, we cannot simultaneously decrease the energy in the main lobe (to reduce smearing) and the energy in the sidelobes (to reduce leakage). This follows, for example, from (2.6.4), which shows that the area of $W(\omega)$ is fixed once $w(0)$ is fixed (such as $w(0) = 1$). In other words, if we want to decrease the main lobe's width then we should accept an increase in the sidelobe energy, and vice versa. In summary:

The selection of the window's shape should be based on a tradeoff between smearing and leakage effects. (2.6.7)

The above tradeoff is usually dictated by the specific application at hand. A number of windows have been developed to address this tradeoff. In some sense, each of these windows can be seen as a design at a specific point in the resolution/leakage tradeoff curve. We consider several such windows in the next subsection.

2.6.2 Some Common Lag Windows

In this section, we list some of the most common lag windows and outline their relevant properties. Our purpose is not to provide a detailed derivation or an exhaustive listing of such windows, but rather to provide a quick reference of common windows.
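The smearing–leakage tradeoff in (2.6.7) can be observed directly by comparing the DTFTs of two lag windows. The sketch below (a NumPy illustration; the window length, grid density, and the two chosen windows are arbitrary) measures the null-to-null main lobe width and the peak sidelobe level of the rectangular and Hamming lag windows: the Hamming window buys a much lower sidelobe level at the price of a wider main lobe.

```python
import numpy as np

M = 26
k = np.arange(-(M - 1), M)
rect = np.ones(2 * M - 1)                              # rectangular lag window
hamming = 0.54 + 0.46 * np.cos(np.pi * k / (M - 1))    # Hamming lag window

def lobe_analysis(w, k, L=8192):
    """Return (null-to-null main lobe width, peak sidelobe level in dB)."""
    omega = np.pi * np.arange(L) / L
    A = np.abs((w[None, :] * np.exp(-1j * omega[:, None] * k[None, :])).sum(axis=1))
    A /= A[0]                       # normalize: W(0) -> 0 dB
    i = 1
    while A[i + 1] < A[i]:          # walk down the main lobe to its first null
        i += 1
    return 2 * omega[i], 20 * np.log10(A[i:].max())

width_rect, sl_rect = lobe_analysis(rect, k)
width_ham, sl_ham = lobe_analysis(hamming, k)
```

The measured values agree with the qualitative entries of Table 2.1: roughly $2\pi/M$ main lobe width and $-13$ dB sidelobes for the rectangular window, versus a main lobe about twice as wide but sidelobes near $-41$ dB for the Hamming window.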
More detailed information on these and other windows can be found in [Harris 1978; Kay 1988; Marple 1987; Oppenheim and Schafer 1989; Priestley 1981; Porat 1997], where many of the closed-form windows have been compiled. Table 2.1 lists some common windows along with some useful properties.

TABLE 2.1: Some Common Windows and their Properties
The windows satisfy $w(k) \equiv 0$ for $|k| \geq M$ and $w(k) = w(-k)$; the defining equations below are valid for $0 \leq k \leq M-1$.

Window      | Defining Equation                                                              | Approx. Main Lobe Width (radians) | Sidelobe Level (dB)
Rectangular | $w(k) = 1$                                                                     | $2\pi/M$ | $-13$
Bartlett    | $w(k) = \frac{M-k}{M}$                                                         | $4\pi/M$ | $-25$
Hanning     | $w(k) = 0.5 + 0.5\cos\left(\frac{\pi k}{M}\right)$                             | $4\pi/M$ | $-31$
Hamming     | $w(k) = 0.54 + 0.46\cos\left(\frac{\pi k}{M-1}\right)$                         | $4\pi/M$ | $-41$
Blackman    | $w(k) = 0.42 + 0.5\cos\left(\frac{\pi k}{M-1}\right) + 0.08\cos\left(\frac{2\pi k}{M-1}\right)$ | $6\pi/M$ | $-57$

In addition to the fixed window designs in Table 2.1, there are windows that contain a design parameter which may be varied to trade resolution against sidelobe leakage. Two such common designs are the Chebyshev window and the Kaiser window. The Chebyshev window has the property that the peak level of the sidelobe "ripples" is constant. Thus, unlike most other windows, its sidelobe level does not decrease as $\omega$ increases. The Kaiser window is defined by
$$w(k) = \frac{I_0\left(\gamma\sqrt{1 - [k/(M-1)]^2}\right)}{I_0(\gamma)}, \quad -(M-1) \leq k \leq M-1 \qquad (2.6.8)$$
where $I_0(\cdot)$ is the zeroth-order modified Bessel function of the first kind. The parameter $\gamma$ trades the main lobe width for the sidelobe leakage level; $\gamma = 0$ corresponds to a rectangular window, and $\gamma > 0$ results in lower sidelobe leakage at the expense of a broader main lobe. The approximate value of $\gamma$ needed to achieve a peak sidelobe level $B$ dB below the peak value is
$$\gamma \simeq \begin{cases} 0, & B < 21 \\ 0.584(B-21)^{0.4} + 0.0789(B-21), & 21 \leq B \leq 50 \\ 0.11(B-8.7), & B > 50 \end{cases}$$
The Kaiser window is an approximation of the optimal window described in the next subsection.
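Equation (2.6.8) and the rule of thumb for $\gamma$ translate directly into code; a sketch assuming NumPy (whose `np.i0` evaluates the zeroth-order modified Bessel function $I_0$) is shown below. The function names and parameter values are illustrative only.

```python
import numpy as np

def kaiser_lag_window(M, gamma):
    """Kaiser lag window (2.6.8), defined for -(M-1) <= k <= M-1."""
    k = np.arange(-(M - 1), M)
    return np.i0(gamma * np.sqrt(1 - (k / (M - 1)) ** 2)) / np.i0(gamma)

def kaiser_gamma(B):
    """Rule-of-thumb gamma for a peak sidelobe level B dB below the peak."""
    if B < 21:
        return 0.0
    if B <= 50:
        return 0.584 * (B - 21) ** 0.4 + 0.0789 * (B - 21)
    return 0.11 * (B - 8.7)

w0 = kaiser_lag_window(26, 0.0)   # gamma = 0 reduces to the rectangular window
w4 = kaiser_lag_window(26, 4.0)   # gamma = 4: lower sidelobes, broader main lobe
```

For $\gamma = 0$ every coefficient equals 1 (rectangular window), while for $\gamma = 4$ the window tapers from 1 at $k = 0$ down to $1/I_0(4)$ at the edges $k = \pm(M-1)$.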
It is often chosen over the fixed window designs because it has a lower sidelobe level when $\gamma$ is selected to give the same main lobe width as the corresponding fixed window (or a narrower main lobe for a given sidelobe level). The optimal window of the next subsection improves slightly on the Kaiser design.

Figure 2.3 shows plots of several windows with $M = 26$. The Kaiser window is shown for $\gamma = 1$ and $\gamma = 4$, and the Chebyshev window is designed to have a $-40$ dB sidelobe level. Figure 2.4 shows the corresponding normalized window transfer functions $W(\omega)$. Note the constant sidelobe ripple level of the Chebyshev design.

We remark that, except for the Bartlett window, none of the windows we have introduced (including the Chebyshev and Kaiser windows) has a nonnegative Fourier transform. On the other hand, it is straightforward to produce such a nonnegative definite window by convolving a window with itself. Recall that the Bartlett window is the convolution of a rectangular window with itself. We will make use of the convolution of windows with themselves in the next two subsections, both for window design and for relating temporal windows to covariance lag windows.

[Figure 2.3 (panels): Rectangular, Bartlett, Hamming, Hanning, Blackman, Kaiser ($\gamma = 1$), Kaiser ($\gamma = 4$), and Chebyshev (40 dB ripple) windows, plotted versus $k$.]
Figure 2.3. Some common window functions (shown for M = 26).
The Kaiser window uses $\gamma = 1$ and $\gamma = 4$, and the Chebyshev window is designed for a $-40$ dB sidelobe level.

[Figure 2.4 (panels): magnitude in dB of the DTFTs of the windows in Figure 2.3, plotted versus frequency.]
Figure 2.4. The DTFTs of the window functions in Figure 2.3.

2.6.3 Window Design Example

Assume a situation where it is known that the observed signal consists of a useful weak signal and a strong interference, and that both the useful signal and the interference can be assumed to be narrowband signals which are well separated in frequency. However, there is no a priori quantitative information available on the frequency separation between the desired signal and the interference. It is required to design a lag window for use in a Blackman–Tukey spectral estimation method, with the purpose of detecting and locating in frequency the useful signal.

The main problem in the application outlined above lies in the fact that the (strong) interference may completely mask the (weak) desired signal through leakage. In order to get rid of this problem, the window design should compromise smearing for leakage.
Note that the smearing effect is not of main concern in this application, as the useful signal and the interference are well separated in frequency. Hence, smearing cannot affect our ability to detect the desired signal; it will only limit, to some degree, our ability to accurately locate the signal in frequency. We consider a window sequence whose DTFT $W(\omega)$ is constructed as the squared magnitude of the DTFT of another sequence $\{v(k)\}$; in this way, we guarantee that the constructed window is positive semidefinite.

Mathematically, the above design problem can be formulated as follows. Consider a sequence $\{v(0), \ldots, v(M-1)\}$, and let
$$V(\omega) = \sum_{k=0}^{M-1} v(k) e^{-i\omega k} \qquad (2.6.9)$$
The DTFT $V(\omega)$ can be rewritten in the more compact form
$$V(\omega) = v^* a(\omega) \qquad (2.6.10)$$
where
$$v = [v(0) \;\ldots\; v(M-1)]^* \qquad (2.6.11)$$
and
$$a(\omega) = [1 \;\; e^{-i\omega} \;\ldots\; e^{-i(M-1)\omega}]^T \qquad (2.6.12)$$
Define the spectral window as
$$W(\omega) = |V(\omega)|^2 \qquad (2.6.13)$$
The corresponding lag window can be obtained from (2.6.13) as follows:
$$\sum_{k=-(M-1)}^{M-1} w(k) e^{-i\omega k} = \sum_{n=0}^{M-1} \sum_{p=0}^{M-1} v(n) v^*(p) e^{-i\omega(n-p)} = \sum_{n=0}^{M-1} \sum_{k=n-(M-1)}^{n} v(n) v^*(n-k) e^{-i\omega k} = \sum_{k=-(M-1)}^{M-1} \left[ \sum_{n=0}^{M-1} v(n) v^*(n-k) \right] e^{-i\omega k} \qquad (2.6.14)$$
which gives
$$w(k) = \sum_{n=0}^{M-1} v(n) v^*(n-k) \qquad (2.6.15)$$
The last equality in (2.6.14), and hence the equality (2.6.15), are valid under the convention that $v(k) = 0$ for $k < 0$ and $k \geq M$.

As already mentioned, this method of constructing $\{w(k)\}$ from the convolution of the sequence $\{v(k)\}$ with itself has the advantage that the lag window so obtained is always positive semidefinite or, equivalently, the corresponding spectral window satisfies $W(\omega) \geq 0$ (which is easily seen from (2.6.13)). Besides this, the design of $\{w(k)\}$ can be reduced to the selection of $\{v(k)\}$, which may be done more conveniently, as explained next. In the present application, the design objective is to reduce the leakage incurred by $\{w(k)\}$ as much as possible.
This objective can be formulated as the problem of minimizing the relative energy in the sidelobes of $W(\omega)$ or, equivalently, as the problem of maximizing the relative energy in the main lobe of $W(\omega)$:
$$\max_v \left\{ \frac{\int_{-\beta\pi}^{\beta\pi} W(\omega)\,d\omega}{\int_{-\pi}^{\pi} W(\omega)\,d\omega} \right\} \qquad (2.6.16)$$
Here, $\beta$ is a design parameter which quantifies how much smearing (or, basically equivalently, resolution) we can trade off for leakage reduction. The larger the $\beta$, the more leakage-free the optimal window derived from (2.6.16), but also the more diminished the spectral resolution associated with that window. By writing the criterion in (2.6.16) in the form
$$\frac{\frac{1}{2\pi}\int_{-\beta\pi}^{\beta\pi} |V(\omega)|^2\,d\omega}{\frac{1}{2\pi}\int_{-\pi}^{\pi} |V(\omega)|^2\,d\omega} = \frac{v^* \left[ \frac{1}{2\pi}\int_{-\beta\pi}^{\beta\pi} a(\omega) a^*(\omega)\,d\omega \right] v}{v^* v} \qquad (2.6.17)$$
(cf. (2.6.10) and Parseval's theorem, (1.2.6)), the optimization problem (2.6.16) becomes
$$\max_v \frac{v^* \Gamma v}{v^* v} \qquad (2.6.18)$$
where
$$\Gamma = \frac{1}{2\pi}\int_{-\beta\pi}^{\beta\pi} a(\omega) a^*(\omega)\,d\omega \triangleq [\gamma_{m-n}] \qquad (2.6.19)$$
and where
$$\gamma_{m-n} = \frac{1}{2\pi}\int_{-\beta\pi}^{\beta\pi} e^{-i(m-n)\omega}\,d\omega = \frac{\sin[(m-n)\beta\pi]}{(m-n)\pi} \qquad (2.6.20)$$
(note that $\gamma_0 = \beta$). By using the function
$$\mathrm{sinc}(x) \triangleq \frac{\sin x}{x}, \qquad \mathrm{sinc}(0) = 1 \qquad (2.6.21)$$
we can write (2.6.20) as
$$\gamma_{m-n} = \beta\,\mathrm{sinc}[(m-n)\beta\pi] \qquad (2.6.22)$$
The solution to the problem (2.6.18) is well known: the maximizing $v$ is given by the dominant eigenvector of $\Gamma$, associated with the maximum eigenvalue of this matrix (see Result R13 in Appendix A). To summarize:

The optimal lag window which minimizes the relative energy in the sidelobe interval $[-\pi, -\beta\pi] \cup [\beta\pi, \pi]$ is given by (2.6.15), where $v$ is the dominant eigenvector of the matrix $\Gamma$ defined in (2.6.19) and (2.6.22). (2.6.23)

Regarding the choice of the design parameter $\beta$, it is clear that $\beta$ should be larger than $1/M$ in order to allow for a significant reduction of leakage. Otherwise, by selecting for example $\beta \simeq 1/M$, we weigh the resolution issue too heavily in the design problem, with unfavorable consequences for leakage reduction.
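The eigenvector design (2.6.23) can be carried out in a few lines; a sketch assuming NumPy follows, where the choices $M = 25$ and $\beta = 0.1$ are arbitrary. Note that NumPy's `np.sinc(x)` is $\sin(\pi x)/(\pi x)$, so (2.6.22) becomes `beta * np.sinc((m - n) * beta)`.

```python
import numpy as np

M, beta = 25, 0.1                       # window length and design bandwidth

# Gamma matrix, (2.6.19)/(2.6.22): Gamma_{m,n} = beta * sinc[(m-n)*beta*pi]
idx = np.arange(M)
d = idx[:, None] - idx[None, :]
Gamma = beta * np.sinc(d * beta)

# dominant eigenvector of the symmetric matrix Gamma (eigh sorts ascending)
evals, evecs = np.linalg.eigh(Gamma)
v = evecs[:, -1]                        # eigenvector of the largest eigenvalue

# lag window (2.6.15): w(k) = sum_n v(n) v(n-k), for k = -(M-1)..(M-1)
w = np.correlate(v, v, mode="full")

# Rayleigh quotient = fraction of the window energy inside [-beta*pi, beta*pi]
concentration = evals[-1] / (v @ v)
```

For these values the resulting window concentrates well over 90% of its energy in the main band, and since $W(\omega) = |V(\omega)|^2$ by construction, the window is automatically positive semidefinite.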
Finally, we remark that a problem quite similar to the above one, although derived from different considerations, will be encountered in Chapter 5 (see also [Mullis and Scharf 1991]).

2.6.4 Temporal Windows and Lag Windows

As we have seen previously, the unwindowed periodogram coincides with the unwindowed correlogram. The Blackman–Tukey estimator is a windowed correlogram obtained using a lag window. Similarly, we can define a windowed periodogram
$$\hat\phi_W(\omega) = \frac{1}{N} \left| \sum_{t=1}^{N} v(t) y(t) e^{-i\omega t} \right|^2 \qquad (2.6.24)$$
where the weighting sequence $\{v(t)\}$ may be called a temporal window. A temporal window is sometimes called a taper. Welch [Welch 1967] was one of the first researchers to consider windowed periodogram spectral estimators (see Section 2.7.2 for a description of Welch's method), hence the subscript "W" attached to $\hat\phi(\omega)$ in (2.6.24). However, while the reason for windowing the correlogram is clearly motivated, the reason for windowing the periodogram is less obvious. In order to motivate (2.6.24), at least partially, write this equation as
$$\hat\phi_W(\omega) = \frac{1}{N} \sum_{t=1}^{N} \sum_{s=1}^{N} v(t) v^*(s) y(t) y^*(s) e^{-i\omega(t-s)} \qquad (2.6.25)$$
Next, take the expectation of both sides of (2.6.25) to obtain
$$E\left\{\hat\phi_W(\omega)\right\} = \frac{1}{N} \sum_{t=1}^{N} \sum_{s=1}^{N} v(t) v^*(s) r(t-s) e^{-i\omega(t-s)} \qquad (2.6.26)$$
Inserting
$$r(t-s) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \phi(\omega) e^{i\omega(t-s)}\,d\omega \qquad (2.6.27)$$
in (2.6.26) gives
$$E\left\{\hat\phi_W(\omega)\right\} = \frac{1}{2\pi N} \int_{-\pi}^{\pi} \phi(\psi) \left[ \sum_{t=1}^{N} \sum_{s=1}^{N} v(t) v^*(s) e^{-i(\omega-\psi)(t-s)} \right] d\psi = \frac{1}{2\pi N} \int_{-\pi}^{\pi} \phi(\psi) \left| \sum_{t=1}^{N} v(t) e^{-i(\omega-\psi)t} \right|^2 d\psi \qquad (2.6.28)$$
Define
$$W(\omega) = \frac{1}{N} \left| \sum_{t=1}^{N} v(t) e^{-i\omega t} \right|^2 \qquad (2.6.29)$$
By using this notation, we can write (2.6.28) as
$$E\left\{\hat\phi_W(\omega)\right\} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \phi(\psi) W(\omega - \psi)\,d\psi \qquad (2.6.30)$$
As equation (2.6.29) is similar to (2.6.13), the sequence whose DTFT equals $W(\omega)$ follows immediately from (2.6.15):
$$w(k) = \frac{1}{N} \sum_{n=1}^{N} v(n) v^*(n-k) \qquad (2.6.31)$$
Next, by comparing (2.6.30) and (2.5.3), we get the following result.
The windowed periodogram and the windowed correlogram have the same average behavior, provided the temporal and lag windows are related as in (2.6.31). (2.6.32)

Hence $E\{\hat\phi_W(\omega)\} = E\{\hat\phi_{BT}(\omega)\}$, provided the temporal and lag windows are matched to one another. A similarly simple relationship between $\hat\phi_W(\omega)$ and $\hat\phi_{BT}(\omega)$ themselves, however, does not seem to exist. This makes it somewhat difficult to motivate the windowed periodogram as defined in (2.6.24). The Welch periodogram, though, does not weigh all data samples as in (2.6.24), and is a useful spectral estimator (see the next section).

2.7 OTHER REFINED PERIODOGRAM METHODS

In Section 2.5 we introduced the Blackman–Tukey estimator as an alternative to the periodogram. In this section we present three other modified periodograms: the Bartlett, Welch, and Daniell methods. Like the Blackman–Tukey method, they seek to reduce the variance of the periodogram by smoothing or averaging the periodogram estimates in some way. We will relate these methods to one another and to the Blackman–Tukey method.

2.7.1 Bartlett Method

The basic idea of the Bartlett method [Bartlett 1948; Bartlett 1950] is simple: to reduce the large fluctuations of the periodogram, split the available sample of $N$ observations into $L = N/M$ subsamples of $M$ observations each, and then average the periodograms obtained from the subsamples for each value of $\omega$. Mathematically, the Bartlett method can be described as follows. Let
$$y_j(t) = y((j-1)M + t), \quad t = 1, \ldots, M, \quad j = 1, \ldots, L \qquad (2.7.1)$$
denote the observations of the $j$th subsample, and let
$$\hat\phi_j(\omega) = \frac{1}{M} \left| \sum_{t=1}^{M} y_j(t) e^{-i\omega t} \right|^2 \qquad (2.7.2)$$
denote the corresponding periodogram. The Bartlett spectral estimate is then given by
$$\hat\phi_B(\omega) = \frac{1}{L} \sum_{j=1}^{L} \hat\phi_j(\omega) \qquad (2.7.3)$$
Since the Bartlett method operates on data segments of length $M$, the resolution afforded should be on the order of $1/M$.
Hence, the spectral resolution of the Bartlett method is reduced by a factor $L$, compared to the resolution of the original periodogram method. In return for this reduction in resolution, we can expect that the Bartlett method has a reduced variance. It can, in fact, be shown that the Bartlett method reduces the variance of the periodogram by the same factor $L$ (see below). The compromise between resolution and variance when selecting $M$ (or $L$) is thus evident.

An interesting way to look at the Bartlett method and its properties is by relating it to the Blackman–Tukey method. As we know, $\hat\phi_j(\omega)$ of (2.7.2) can be rewritten as
$$\hat\phi_j(\omega) = \sum_{k=-(M-1)}^{M-1} \hat r_j(k) e^{-i\omega k} \qquad (2.7.4)$$
where $\{\hat r_j(k)\}$ is the sample covariance sequence corresponding to the $j$th subsample. Inserting (2.7.4) in (2.7.3) gives
$$\hat\phi_B(\omega) = \sum_{k=-(M-1)}^{M-1} \left[ \frac{1}{L} \sum_{j=1}^{L} \hat r_j(k) \right] e^{-i\omega k} \qquad (2.7.5)$$
We see that $\hat\phi_B(\omega)$ is similar in form to the Blackman–Tukey estimator that uses a rectangular window. The average, over $j$, of the subsample covariances $\hat r_j(k)$ is an estimate of the ACS $r(k)$. However, the ACS estimate in (2.7.5) does not make efficient use of the available data lag products $y(t)y^*(t-k)$, especially for $|k|$ near $M-1$ (see Exercise 2.14). In fact, for $k = M-1$, only about $1/M$th of the available lag products are used to form the ACS estimate in (2.7.5). We expect that the variance of these lags is higher than for the corresponding $\hat r(k)$ lags used in the Blackman–Tukey estimate and, similarly, that the variance of $\hat\phi_B(\omega)$ is higher than that of $\hat\phi_{BT}(\omega)$. In addition, the Bartlett method uses a fixed rectangular lag window, and thus has less flexibility in the resolution–leakage tradeoff than does the Blackman–Tukey method.
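The segment-and-average recipe (2.7.1)–(2.7.3) is a few lines of code. Below is a sketch assuming NumPy; the signal length, segment length, FFT grid size, and white-noise test input are arbitrary illustrative choices.

```python
import numpy as np

def bartlett_psd(y, M, L_fft=1024):
    """Bartlett estimate (2.7.1)-(2.7.3): average the periodograms of
    L = N // M nonoverlapping length-M segments on an L_fft-point grid."""
    N = len(y)
    L = N // M
    phi = np.zeros(L_fft)
    for j in range(L):
        seg = y[j * M:(j + 1) * M]
        phi += np.abs(np.fft.fft(seg, L_fft)) ** 2 / M   # (2.7.2), zero-padded
    return phi / L                                       # (2.7.3)

rng = np.random.default_rng(0)
y = rng.standard_normal(4096)       # unit-variance white noise: true PSD = 1
phi_B = bartlett_psd(y, M=64)
```

With $L = 64$ segments averaged, the estimate hovers close to the true flat spectrum, with fluctuations roughly $1/\sqrt{L}$ of those of a single periodogram, consistent with the factor-$L$ variance reduction claimed above.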
For these reasons, we conclude the following.

The Bartlett estimate, as defined in (2.7.1)–(2.7.3), is similar in form to, but typically has a slightly higher variance than, the Blackman–Tukey estimate with a rectangular lag window of length $M$. (2.7.6)

The reduction in resolution and the decrease in variance (both by a factor $L = N/M$) for the Bartlett estimate, as compared to the basic periodogram method, follow from (2.7.6) and the properties of the Blackman–Tukey spectral estimator given previously.

The main lobe of the rectangular window is narrower than that associated with most other lag windows (this follows from the observation that the rectangular window clearly has the largest equivalent time width, and the fact that the time–bandwidth product is constant; see (2.6.5)). Thus, it follows from (2.7.6) that, in the class of Blackman–Tukey estimates, the Bartlett estimator can be expected to have the least smearing (and hence the best resolution) but the most significant leakage.

2.7.2 Welch Method

The Welch method [Welch 1967] is obtained by refining the Bartlett method in two respects. First, the data segments in the Welch method are allowed to overlap. Second, each data segment is windowed prior to computing the periodogram. To describe the Welch method in mathematical form, let
$$y_j(t) = y((j-1)K + t), \quad t = 1, \ldots, M, \quad j = 1, \ldots, S \qquad (2.7.7)$$
denote the $j$th data segment. In (2.7.7), $(j-1)K$ is the starting point of the $j$th sequence of observations. If $K = M$, then the sequences do not overlap (but are contiguous) and we get the sample splitting used by the Bartlett method (which leads to $S = L = N/M$ data subsamples). However, the value recommended for $K$ in the Welch method is $K = M/2$, in which case $S \simeq 2N/M$ data segments (with 50% overlap between successive segments) are obtained.
The windowed periodogram corresponding to $y_j(t)$ is computed as
$$\hat\phi_j(\omega) = \frac{1}{MP} \left| \sum_{t=1}^{M} v(t) y_j(t) e^{-i\omega t} \right|^2 \qquad (2.7.8)$$
where $P$ denotes the "power" of the temporal window $\{v(t)\}$:
$$P = \frac{1}{M} \sum_{t=1}^{M} |v(t)|^2 \qquad (2.7.9)$$
The Welch estimate of the PSD is determined by averaging the windowed periodograms in (2.7.8):
$$\hat\phi_W(\omega) = \frac{1}{S} \sum_{j=1}^{S} \hat\phi_j(\omega) \qquad (2.7.10)$$
The reasons for the above modifications to the Bartlett method, which led to the Welch method, are simple to explain. By allowing overlap between the data segments, and hence obtaining more periodograms to average in (2.7.10), we hope to decrease the variance of the estimated PSD. By introducing the window in the periodogram computation, we may hope to get more control over the bias/resolution properties of the estimated PSD (see Section 2.6.4). Additionally, the temporal window may be used to give less weight to the data samples at the ends of each subsample, hence making the consecutive subsample sequences less correlated with one another, even though they overlap. The principal effect of this "decorrelation" should be a more effective reduction of variance via the averaging in (2.7.10).

The analysis that led to the results (2.6.30)–(2.6.32) can be modified to show that the use of windowed periodograms in the Welch method, as contrasted with the unwindowed periodograms in the Bartlett method, indeed offers more flexibility in controlling the bias properties of the estimated spectrum. The variance of the Welch spectral estimator is more difficult to analyze (except in some special cases). However, there is empirical evidence that the Welch method can offer lower variance than the Bartlett method, although the difference in the variances of the two methods is not dramatic. We can relate the Welch estimator to the Blackman–Tukey spectral estimator by a straightforward calculation, as we did for the Bartlett method.
By inserting (2.7.8) in (2.7.10), we obtain
$$\hat\phi_W(\omega) = \frac{1}{S} \sum_{j=1}^{S} \frac{1}{MP} \sum_{t=1}^{M} \sum_{k=1}^{M} v(t) v^*(k) y_j(t) y_j^*(k) e^{-i\omega(t-k)} \qquad (2.7.11)$$
For large values of $N$, and for $K = M/2$ or smaller, $S$ is sufficiently large for the average $(1/S)\sum_{j=1}^{S} y_j(t) y_j^*(k)$ to be close to the covariance $r(t-k)$. We do not replace this sum by the true covariance lag. However, we assume that the sum does not depend on both $t$ and $k$, but only on their difference $(t-k)$, at least approximately; say
$$\tilde r(t, k) = \frac{1}{S} \sum_{j=1}^{S} y_j(t) y_j^*(k) \simeq \tilde r(t-k) \qquad (2.7.12)$$
Using (2.7.12) in (2.7.11) gives
$$\hat\phi_W(\omega) \simeq \frac{1}{MP} \sum_{t=1}^{M} \sum_{k=1}^{M} v(t) v^*(k) \tilde r(t-k) e^{-i\omega(t-k)} = \frac{1}{MP} \sum_{t=1}^{M} \sum_{\tau=t-M}^{t-1} v(t) v^*(t-\tau) \tilde r(\tau) e^{-i\omega\tau} = \sum_{\tau=-(M-1)}^{M-1} \left[ \frac{1}{MP} \sum_{t=1}^{M} v(t) v^*(t-\tau) \right] \tilde r(\tau) e^{-i\omega\tau} \qquad (2.7.13)$$
By introducing
$$w(\tau) = \frac{1}{MP} \sum_{t=1}^{M} v(t) v^*(t-\tau) \qquad (2.7.14)$$
(under the convention that $v(k) = 0$ for $k < 1$ and $k > M$), we can write (2.7.13) as
$$\hat\phi_W(\omega) \simeq \sum_{\tau=-(M-1)}^{M-1} w(\tau) \tilde r(\tau) e^{-i\omega\tau} \qquad (2.7.15)$$
which is to be compared with the form of the Blackman–Tukey estimator. To summarize, the Welch estimator has been shown to approximate a Blackman–Tukey-type estimator applied to the estimated covariance sequence (2.7.12) (which may be expected to have finite-sample properties different from those of $\hat r(k)$). The Welch estimator can be computed efficiently via the FFT, and is one of the most frequently used PSD estimation methods. The previous interpretation is pleasing, even if approximate, since the Blackman–Tukey form of spectral estimator is theoretically the most favored one. This interpretation also shows that we may think of replacing the usual covariance estimates $\{\hat r(k)\}$ in the Blackman–Tukey estimator by other sample covariances, with the purpose of either reducing the computational burden or improving the statistical accuracy.
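A minimal implementation of (2.7.7)–(2.7.10) is sketched below, assuming NumPy. The text leaves the temporal window $\{v(t)\}$ general; a Hann taper is used here as one common choice, and the signal length, $M$, and FFT grid size are arbitrary. The $1/(MP)$ normalization of (2.7.8) makes the estimate unbiased for white noise.

```python
import numpy as np

def welch_psd(y, M, K=None, L_fft=1024):
    """Welch estimate (2.7.7)-(2.7.10); K is the segment hop
    (K = M/2 gives the recommended 50% overlap)."""
    if K is None:
        K = M // 2
    N = len(y)
    t = np.arange(1, M + 1)
    v = 0.5 - 0.5 * np.cos(2 * np.pi * t / (M + 1))   # Hann temporal window
    P = np.mean(np.abs(v) ** 2)                       # window "power" (2.7.9)
    S = (N - M) // K + 1                              # number of segments
    phi = np.zeros(L_fft)
    for j in range(S):
        seg = y[j * K:j * K + M]
        phi += np.abs(np.fft.fft(v * seg, L_fft)) ** 2 / (M * P)   # (2.7.8)
    return phi / S                                                 # (2.7.10)

rng = np.random.default_rng(1)
y = rng.standard_normal(4096)
phi_W = welch_psd(y, M=64)
```

With 50% overlap the number of averaged periodograms roughly doubles relative to the Bartlett split, which is the source of the additional variance reduction discussed above.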
2.7.3 Daniell Method

As shown in (2.4.21), the periodogram values $\hat\phi(\omega_k)$ corresponding to different frequency values $\omega_k$ are (asymptotically) uncorrelated random variables. One may then think of reducing the large variance of the basic periodogram estimator by averaging the periodogram over small intervals centered on the current frequency $\omega$. This is the idea behind the Daniell method [Daniell 1946]. The practical form of the Daniell estimate, which can be implemented by means of the FFT, is the following:
$$\hat\phi_D(\omega_k) = \frac{1}{2J+1} \sum_{j=k-J}^{k+J} \hat\phi_p(\omega_j) \qquad (2.7.16)$$
where
$$\omega_k = \frac{2\pi}{\tilde N}\,k, \quad k = 0, \ldots, \tilde N - 1 \qquad (2.7.17)$$
and where $\tilde N$ is (much) larger than $N$, to ensure a fine sampling of $\hat\phi_p(\omega)$. The periodogram samples needed in (2.7.16) can be obtained, for example, by applying a radix-2 FFT algorithm to the zero-padded data sequence, as described in Section 2.3.

The parameter $J$ in the Daniell method should be chosen sufficiently small to guarantee that $\phi(\omega)$ is nearly constant on the interval(s)
$$\left[ \omega - \frac{2\pi}{\tilde N} J, \;\; \omega + \frac{2\pi}{\tilde N} J \right] \qquad (2.7.18)$$
Since $\tilde N$ can in principle be chosen as large as we want, we can choose $J$ fairly large without violating the above requirement that $\phi(\omega)$ be nearly constant over the interval in (2.7.18). For the sake of illustration, let us assume that we keep the ratio $J/\tilde N$ constant, but increase both $J$ and $\tilde N$ significantly. As $J/\tilde N$ is constant, the resolution/bias properties of the Daniell estimator should be basically unaffected. On the other hand, the fact that the number of periodogram values averaged in (2.7.16) increases with increased $J$ might suggest that the variance decreases. However, we know that this should not be possible, as the variance can be decreased only at the expense of increasing the bias (and vice versa).
Indeed, in the case under discussion, the periodogram values averaged in (2.7.16) become more and more correlated as $\tilde N$ increases, and hence the variance of $\hat\phi_D(\omega)$ does not necessarily decrease with $J$ if $\tilde N$ is larger than $N$ (see, e.g., Exercise 2.13). We will return to the bias and variance properties of the Daniell method a bit later.

By introducing $\beta = 2J/\tilde N$, one can write (2.7.18) in a form that is more convenient for the discussion that follows, namely
$$[\omega - \pi\beta, \;\omega + \pi\beta] \qquad (2.7.19)$$
Equation (2.7.16) is a discrete approximation of the theoretical version of the Daniell estimator, which is given by
$$\hat\phi_D(\omega) = \frac{1}{2\pi\beta} \int_{\omega-\beta\pi}^{\omega+\beta\pi} \hat\phi_p(\psi)\,d\psi \qquad (2.7.20)$$
The larger the $\tilde N$, the smaller the difference between the approximation (2.7.16) and the continuous version (2.7.20) of the Daniell spectral estimator.

It is intuitively clear from (2.7.20) that, as $\beta$ increases, the resolution of the Daniell estimator decreases (or, essentially equivalently, the bias increases) and the variance gets lower. In fact, if we introduce
$$M = 1/\beta \qquad (2.7.21)$$
(in an approximate sense, as $1/\beta$ is not necessarily an integer), then we may expect that the resolution and the variance of the Daniell estimator are both decreased by a factor $M$, compared to the basic periodogram method. In order to support this claim, we relate the Daniell estimator to the Blackman–Tukey estimation technique. By simply comparing (2.7.20) and (2.5.3), we obtain the following result.

The Daniell estimator is a particular case of the Blackman–Tukey class of spectral estimators, corresponding to the rectangular spectral window
$$W(\omega) = \begin{cases} 1/\beta, & \omega \in [-\beta\pi, \beta\pi] \\ 0, & \text{otherwise} \end{cases} \qquad (2.7.22)$$

The above observation, along with the time–bandwidth product result and the properties of the Blackman–Tukey spectral estimator, lends support to the previously made claim about the Daniell estimator.
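The practical form (2.7.16) amounts to a circular moving average of a finely sampled periodogram. Here is a sketch assuming NumPy; the data length, $\tilde N$, $J$, and the white-noise test input are arbitrary illustrative choices. The moving average is carried out by FFT-based circular convolution with a centered box kernel, which handles the wrap-around at the edges of the frequency grid (the periodogram is periodic, so the wrap is the correct behavior).

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.standard_normal(1024)
N, N_tilde, J = len(y), 4096, 32

# finely sampled periodogram via a zero-padded FFT (Section 2.3)
phi_p = np.abs(np.fft.fft(y, N_tilde)) ** 2 / N

# Daniell estimate (2.7.16): circular average over 2J+1 adjacent bins,
# implemented as circular convolution with a centered box kernel
kernel = np.zeros(N_tilde)
kernel[:J + 1] = 1.0 / (2 * J + 1)   # lags 0..J
kernel[-J:] = 1.0 / (2 * J + 1)      # lags -J..-1 (wrapped around)
phi_D = np.fft.ifft(np.fft.fft(phi_p) * np.fft.fft(kernel)).real
```

Averaging preserves the mean level of the periodogram exactly while markedly reducing its bin-to-bin fluctuations, in line with the variance discussion above.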
Note that the Daniell estimate of the PSD is a nonnegative function by its very definition, (2.7.20), which is not necessarily the case for several members of the Blackman–Tukey class of PSD estimators.

The lag window corresponding to the $W(\omega)$ in (2.7.22) is readily evaluated as follows:
$$w(k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} W(\omega) e^{i\omega k}\,d\omega = \frac{1}{2\pi\beta} \int_{-\pi\beta}^{\pi\beta} e^{i\omega k}\,d\omega = \frac{\sin(k\pi\beta)}{k\pi\beta} = \mathrm{sinc}(k\pi\beta) \qquad (2.7.23)$$
Note that $w(k)$ does not vanish as $k$ increases, which leads to a subtle (but not essential) difference between the lag windowed forms of the Daniell and Blackman–Tukey estimators. Since the inverse DTFT of $\hat\phi_p(\omega)$ is given by the sequence $\{\ldots, 0, 0, \hat r(-(N-1)), \ldots, \hat r(N-1), 0, 0, \ldots\}$, it follows immediately from (2.7.20) that $\hat\phi_D(\omega)$ can also be written as
$$\hat\phi_D(\omega) = \sum_{k=-(N-1)}^{N-1} w(k) \hat r(k) e^{-i\omega k} \qquad (2.7.24)$$
It is seen from (2.7.24) that, like the Blackman–Tukey estimator, $\hat\phi_D(\omega)$ is a windowed version of the correlogram but, unlike the Blackman–Tukey estimator, the sum in (2.7.24) is not truncated to a value $M < N$. Hence, contrary to what might have been expected intuitively, the parameter $M$ defined in (2.7.21) cannot be exactly interpreted as a "truncation point" for the lag windowed version of $\hat\phi_D(\omega)$. However, since the equivalent bandwidth of $W(\omega)$ is clearly equal to $\beta$,
$$\beta_e = \beta$$
it follows that the equivalent time width of $w(k)$ is
$$N_e = 1/\beta_e = M$$
which shows that $M$ plays essentially the same role here as the "truncation point" in the Blackman–Tukey estimator (and, indeed, it can be verified that $w(k)$ in (2.7.23) takes small values for $|k| > M$).

In closing this section and this chapter, we point out that the periodogram-based methods for spectrum estimation are all variations on the same theme.
These methods attempt to reduce the variance of the basic periodogram estimator, at the expense of some reduction in resolution, by various means, such as: averaging periodograms derived from data subsamples (Bartlett and Welch methods); averaging periodogram values locally around the frequency of interest (Daniell method); and smoothing the periodogram (Blackman–Tukey method). The unifying theme of these methods is seen in that they are essentially special forms of the Blackman–Tukey approach. In Chapter 5 we will push the unifying theme one step further by showing that the periodogram-based methods can also be obtained as special cases of the filter bank approach to spectrum estimation described there (see also [Mullis and Scharf 1991]).

Finally, it is interesting to note that, while the modifications of the periodogram described in this chapter are indeed required when estimating a continuous PSD, the unmodified periodogram can be shown to be a satisfactory estimator (actually, the best one in large samples) for discrete (or line) spectra corresponding to sinusoidal signals. This is shown in Chapter 4.

2.8 COMPLEMENTS

2.8.1 Sample Covariance Computation via FFT

Computation of the sample covariances is a ubiquitous problem in spectral estimation and signal processing applications. In this complement we make use of the DTFT-like formula (2.2.2), relating the periodogram and the sample covariance sequence, to devise an FFT-based algorithm for computation of $\{\hat r(k)\}_{k=0}^{N-1}$. We also compare the computational requirements of such an algorithm with those corresponding to the evaluation of $\{\hat r(k)\}$ via the temporal averaging formula (2.2.4), and show that the former may be computationally more efficient than the latter if $N$ is larger than a certain value.
From (2.2.2) and (2.2.6) we have that (we omit the subscript $p$ of $\hat\phi_p(\omega)$ for notational simplicity):

$$\hat\phi(\omega) = \sum_{k=-N+1}^{N-1} \hat r(k)e^{-i\omega k} = \sum_{p=1}^{2N-1} \hat r(p-N)e^{-i\omega(p-N)}$$

or, equivalently,

$$e^{-i\omega N}\hat\phi(\omega) = \sum_{p=1}^{2N-1} \rho(p)e^{-i\omega p} \qquad (2.8.1)$$

where $\rho(p) \triangleq \hat r(p-N)$. Equation (2.8.1) has the standard form of a DFT. It is evident from (2.8.1) that in order to determine the sample covariance sequence we need at least $(2N-1)$ values of the periodogram. This is expected: the sequence $\{\hat r(k)\}_{k=0}^{N-1}$ contains $(2N-1)$ real-valued unknowns for the determination of which at least $(2N-1)$ periodogram values should be necessary (as $\hat\phi(\omega)$ is real valued). Let

$$\omega_k = \frac{2\pi}{2N-1}(k-1), \qquad k = 1,\ldots,2N-1$$

Also, let the sequence $\{y(t)\}_{t=1}^{2N-1}$ be obtained by padding the raw data sequence with $(N-1)$ zeroes. Compute

$$Y_k = \sum_{t=1}^{2N-1} y(t)e^{-i\omega_k t} \qquad (k = 1,2,\ldots,2N-1) \qquad (2.8.2)$$

by means of a $(2N-1)$-point FFT algorithm. Next, evaluate

$$\tilde\phi_k = e^{-i\omega_k N}|Y_k|^2/N \qquad (k = 1,\ldots,2N-1) \qquad (2.8.3)$$

Finally, determine the sample covariances via the "inversion" of (2.8.1):

$$\rho(p) = \frac{1}{2N-1}\sum_{k=1}^{2N-1} \tilde\phi_k e^{i\omega_k p} \qquad (2.8.4)$$

The previous computation may once again be done by using a $(2N-1)$-point FFT algorithm.

The bulk of the procedure outlined above consists of the FFT-based computation of (2.8.2) and (2.8.4). That computation requires about $2N\log_2(2N)$ flops (assuming that the radix-2 FFT algorithm is used; the required number of operations is larger than the one previously given whenever $N$ is not a power of two). The direct evaluation of the sample covariance sequence via (2.2.4) requires

$$N + (N-1) + \cdots + 1 \simeq N^2/2 \ \text{flops}$$

Hence, the FFT-based computation would be more efficient whenever

$$N > 4\log_2(2N)$$

This inequality is satisfied for $N \ge 32$. (Actually, $N$ needs to be greater than 32 because we neglected the operations needed to implement equation (2.8.3).) The previous discussion assumes that $N$ is a power of two.
If this is not the case then the relative computational efficiency of the two procedures may be different. Note, also, that there are several other issues that may affect this comparison. For instance, if only the lags $\{\hat r(k)\}_{k=0}^{M-1}$ (with $M \ll N$) are required, then the number of computations required by (2.2.4) is drastically reduced. On the other hand, the FFT-based procedure can also be implemented in a more efficient way in such a case, so that it remains computationally more efficient than a direct calculation, for instance, for $N \ge 100$ [Oppenheim and Schafer 1989]. We conclude that the various implementation details may change the value of $N$ beyond which the FFT-based procedure is more efficient than the direct approach, and hence may influence the decision as to which of the two procedures should be used in a given application.

2.8.2 FFT-Based Computation of Windowed Blackman–Tukey Periodograms

The windowed Blackman–Tukey periodogram (2.5.1), unlike its unwindowed version, is not amenable to a direct computation via a single FFT. In this complement we show that three FFTs are sufficient to evaluate (2.5.1): two FFTs for the computation of the sample covariance sequence entering the equation (2.5.1) (as described in Complement 2.8.1), and one FFT for the evaluation of (2.5.1). We also show that the computational formula for $\{\hat r(k)\}$ derived in Complement 2.8.1 can be used to obtain an FFT-based algorithm for evaluation of (2.5.1) directly in terms of $\hat\phi_p(\omega)$. We relate the latter way of computing (2.5.1) to the evaluation of $\hat\phi_{BT}(\omega)$ from the integral equation (2.5.3). Finally, we compare the two ways outlined above for evaluating the windowed Blackman–Tukey periodogram.
The windowed Blackman–Tukey periodogram can be written as

$$\hat\phi_{BT}(\omega) = \sum_{k=-(N-1)}^{N-1} w(k)\hat r(k)e^{-i\omega k} = \sum_{k=0}^{N-1} w(k)\hat r(k)e^{-i\omega k} + \sum_{k=0}^{N-1} w(k)\hat r^*(k)e^{i\omega k} - w(0)\hat r(0) = 2\,\mathrm{Re}\left\{\sum_{k=0}^{N-1} w(k)\hat r(k)e^{-i\omega k}\right\} - w(0)\hat r(0) \qquad (2.8.5)$$

where we made use of the facts that the window sequence is even and $\hat r(-k) = \hat r^*(k)$. It is now evident that an $N$-point FFT can be used to evaluate $\hat\phi_{BT}(\omega)$ at $\omega = 2\pi k/N$ $(k = 0,\ldots,N-1)$. This requires about $\tfrac12 N\log_2(N)$ flops that should be added to the $2N\log_2(2N)$ flops required to compute $\{\hat r(k)\}$ (as in Complement 2.8.1), hence giving a total of about $N[\tfrac12\log_2(N) + 2\log_2(2N)]$ flops for this way of evaluating $\hat\phi_{BT}(\omega)$.

Next, we make use of the expression (2.8.4) for $\{\hat r(k)\}$ that is derived in Complement 2.8.1,

$$\hat r(p-N) = \frac{1}{2N-1}\sum_{k=1}^{2N-1} \hat\phi(\bar\omega_k)e^{i\bar\omega_k(p-N)} \qquad (p = 1,\ldots,2N-1) \qquad (2.8.6)$$

where $\bar\omega_k = 2\pi(k-1)/(2N-1)$, $(k = 1,\ldots,2N-1)$, and where $\hat\phi(\omega)$ is the unwindowed periodogram. Inserting (2.8.6) into (2.5.1), we obtain

$$\hat\phi_{BT}(\omega) = \frac{1}{2N-1}\sum_{s=-(N-1)}^{N-1} w(s)e^{-i\omega s}\sum_{k=1}^{2N-1}\hat\phi(\bar\omega_k)e^{i\bar\omega_k s} = \frac{1}{2N-1}\sum_{k=1}^{2N-1}\hat\phi(\bar\omega_k)\left[\sum_{s=-(N-1)}^{N-1} w(s)e^{-i(\omega-\bar\omega_k)s}\right] \qquad (2.8.7)$$

which gives

$$\hat\phi_{BT}(\omega) = \frac{1}{2N-1}\sum_{k=1}^{2N-1}\hat\phi(\bar\omega_k)\,W(\omega-\bar\omega_k) \qquad (2.8.8)$$

where $W(\omega)$ is the spectral window. It might be thought that the last step in the above derivation requires that $\{w(k)\}$ is a "truncated-type" window (i.e., $w(k) = 0$ for $|k| \ge N$). However, no such requirement on $\{w(k)\}$ is needed, as explained next.
By inserting the usual expression for $\hat\phi(\omega)$ into (2.8.6) we obtain:

$$\hat r(p-N) = \frac{1}{2N-1}\sum_{k=1}^{2N-1}\left[\sum_{s=-(N-1)}^{N-1}\hat r(s)e^{-i\bar\omega_k s}\right]e^{i\bar\omega_k(p-N)} = \frac{1}{2N-1}\sum_{s=-(N-1)}^{N-1}\hat r(s)\left[\sum_{k=1}^{2N-1}e^{i\bar\omega_k(p-N-s)}\right] \triangleq \frac{1}{2N-1}\sum_{s=-(N-1)}^{N-1}\hat r(s)\,\Delta(s,p)$$

Since $\bar\omega_k(p-N-s) = 2\pi(k-1)(p-N-s)/(2N-1)$, the sum defining $\Delta(s,p)$ is a geometric sum that vanishes unless $(p-N-s)$ is a multiple of $(2N-1)$; as $|p-N-s| \le 2N-2$ for the values of $s$ and $p$ considered here, it follows that

$$\Delta(s,p) = (2N-1)\,\delta_{p-N,s}$$

from which we immediately get

$$\frac{1}{2N-1}\sum_{s=-(N-1)}^{N-1}\hat r(s)\,\Delta(s,p) = \begin{cases}\hat r(p-N), & p = 1,\ldots,2N-1\\ 0, & \text{otherwise}\end{cases} \qquad (2.8.9)$$

First, the above calculation provides a cross-checking of the derivation of equation (2.8.6) in Complement 2.8.1. Second, the result (2.8.9) implies that the values of $\hat r(p-N)$ calculated with the formula (2.8.6) are equal to zero for $p < 1$ or $p > 2N-1$. It follows that the limits for the summation over $s$ in (2.8.7) can be extended to $\pm\infty$, hence showing that (2.8.8) is valid for an arbitrary window.

In the general case there seems to be no way for evaluating (2.8.8) by means of an FFT algorithm. Hence, it appears that for a general window it is more efficient to base the computation of $\hat\phi_{BT}(\omega)$ on (2.8.5) rather than on (2.8.8). For certain windows, however, (2.8.8) may be computationally more efficient than (2.8.5). For instance, in the case of the Daniell method, which corresponds to a rectangular spectral window, (2.8.8) takes a very convenient computational form and should be preferred to (2.8.5). It should be noted that (2.8.8) can be viewed as an exact formula for evaluation of the integral in equation (2.5.3). In particular, (2.8.8) provides an exact implementation formula for the Daniell periodogram (2.7.20) (whereas (2.7.16) is only an approximation of the integral (2.7.20) that is valid for sufficiently large values of $N$).
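To make the FFT route of Complement 2.8.1 concrete, here is a brief NumPy sketch (the function names are ours, not from the text) that computes the biased sample covariances both by the temporal-averaging formula (2.2.4) and by the steps (2.8.2)–(2.8.4); the two must agree up to rounding:

```python
import numpy as np

def sample_acs_direct(y):
    # Biased sample covariances via the temporal-averaging formula (2.2.4):
    # r_hat(k) = (1/N) * sum_{t=k+1}^{N} y(t) y*(t-k), for k = 0, ..., N-1.
    y = np.asarray(y, dtype=complex)
    N = len(y)
    return np.array([np.dot(y[k:], y[:N - k].conj()) / N for k in range(N)])

def sample_acs_fft(y):
    # FFT route of Complement 2.8.1: zero-pad to L = 2N-1 points, form
    # phi_tilde_k = exp(-i w_k N) |Y_k|^2 / N as in (2.8.3) on the grid
    # w_k = 2*pi*k/(2N-1) (0-based k here), then invert the DFT as in
    # (2.8.4); rho(p) = r_hat(p - N) is read off with indices modulo L.
    y = np.asarray(y, dtype=complex)
    N = len(y)
    L = 2 * N - 1
    Y = np.fft.fft(y, n=L)                          # (2.8.2); zero padding is implicit
    w = 2.0 * np.pi * np.arange(L) / L
    phi = np.exp(-1j * w * N) * np.abs(Y) ** 2 / N  # (2.8.3)
    rho = np.fft.ifft(phi)                          # (2.8.4), incl. the 1/(2N-1) factor
    return rho[(np.arange(N) + N) % L]              # r_hat(k) = rho(k + N)
```

For real-valued data the returned covariances have negligible imaginary parts and may be cast to real. The break-even point quoted in the text ($N$ around 32) is only indicative; the constants depend heavily on the FFT library used.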
2.8.3 Data and Frequency Dependent Temporal Windows: The Apodization Approach

All windows discussed so far are both data and frequency independent; in other words, the window used is the same at any frequency of the spectrum and for any data sequence. Apparently this is a rather serious restriction. A consequence of this restriction is that for such non-adaptive windows (i.e., windows that do not adapt to the data under analysis) any attempt to reduce the leakage effect (by keeping the sidelobes low) inherently leads to a reduction of the resolution (due to the widening of the main lobe), and vice versa; see Section 2.6.1. In this complement we show how to design a data and frequency dependent temporal window that has the following desirable properties:

• It mitigates the leakage problem of the periodogram without compromising its resolution; and

• It does so with only a very marginal increase in the computational burden.

Our presentation is based on the apodization approach of [Stankwitz, Dallaire, and Fienup 1994], even though in some places we will deviate from it to some extent. Apodization is a term borrowed from optics where it has been used to mean a reduction of the sidelobes induced by diffraction.

We begin our presentation with a derivation of the temporally windowed periodogram, (2.6.24), in a least-squares (LS) framework. Consider the following weighted LS fitting problem

$$\min_a \sum_{t=1}^{N} \rho(t)\left|y(t) - ae^{i\omega t}\right|^2 \qquad (2.8.10)$$

where $\omega$ is given and so are the weights $\rho(t) \ge 0$.
It can be readily verified that the minimizer of (2.8.10) is given by

$$\hat a = \frac{\sum_{t=1}^{N}\rho(t)y(t)e^{-i\omega t}}{\sum_{t=1}^{N}\rho(t)} \qquad (2.8.11)$$

If we let

$$v(t) = \frac{\rho(t)}{\sum_{t=1}^{N}\rho(t)} \qquad (2.8.12)$$

then we can rewrite (2.8.11) as a windowed DFT

$$\hat a = \sum_{t=1}^{N} v(t)y(t)e^{-i\omega t} \qquad (2.8.13)$$

The squared magnitude of (2.8.13) appears in the windowed periodogram formula (2.6.24), which of course is not accidental as $|\hat a|^2$ should indicate the power in $y(t)$ at frequency $\omega$ (cf. (2.8.10)).

The usefulness of the LS-based derivation of (2.6.24) above lies in the fact that it reveals two constraints which must be satisfied by a temporal window:

$$v(t) \ge 0 \qquad (2.8.14)$$

which follows from $\rho(t) \ge 0$, and

$$\sum_{t=1}^{N} v(t) = 1 \qquad (2.8.15)$$

which follows from (2.8.12). The constraint (2.8.15) can also be obtained by inspection of (2.6.24); indeed, if $y(t)$ had a component with frequency $\omega$ then that component would pass undistorted (or unbiased) through the DFT in (2.6.24) if and only if (2.8.15) holds. For this reason, (2.8.15) is sometimes called the unbiasedness condition. On the other hand, the constraint (2.8.14) appears to be more difficult to obtain directly from (2.6.24).

Next, we turn our attention to window design, which is the problem of main interest here. To emphasize the dependence of the temporally windowed periodogram in (2.6.24) on $\{v(t)\}$ we use the notation $\hat\phi_v(\omega)$:

$$\hat\phi_v(\omega) = N\left|\sum_{t=1}^{N} v(t)y(t)e^{-i\omega t}\right|^2 \qquad (2.8.16)$$

Note that in (2.8.16) the squared modulus is multiplied by $N$ whereas in (2.6.24) it is divided by $N$; this difference is due to the fact that the window $\{v(t)\}$ in this complement is constrained to satisfy (2.8.15), whereas in Section 2.6 it is implicitly assumed to satisfy $\sum_{t=1}^{N} v(t) = N$. In the apodization approach the window is selected such that

$$\hat\phi_v(\omega) = \text{minimum} \qquad (2.8.17)$$

for each $\omega$ and for the given data sequence. Evidently, the apodization window will in general be both frequency and data dependent.
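The claim that (2.8.11) solves (2.8.10) is easy to verify numerically. The following sketch (all names and numerical values below are our own illustrative choices) compares the closed form against a generic least-squares solve of the equivalent problem in which each row is scaled by $\sqrt{\rho(t)}$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, omega = 32, 0.7
t = np.arange(1, N + 1)                   # t = 1, ..., N as in the text
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)
rho = rng.uniform(0.1, 1.0, size=N)       # arbitrary nonnegative weights

# Closed-form minimizer (2.8.11), i.e. the windowed DFT (2.8.13):
a_closed = np.sum(rho * y * np.exp(-1j * omega * t)) / np.sum(rho)

# Same criterion posed as an ordinary LS problem, minimize ||b - A a||^2,
# with the single regression column and the data scaled by sqrt(rho(t)):
A = (np.sqrt(rho) * np.exp(1j * omega * t))[:, None]
b = np.sqrt(rho) * y
a_ls = np.linalg.lstsq(A, b, rcond=None)[0][0]
```

Note that the induced window $v(t) = \rho(t)/\sum_t \rho(t)$ automatically satisfies the nonnegativity and unit-sum constraints (2.8.14) and (2.8.15).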
Sometimes such a window is said to be frequency and data adaptive.

Let $C$ denote the class of windows over which we perform the minimization in (2.8.17). Each window in $C$ must satisfy the constraints (2.8.14) and (2.8.15). Usually, $C$ is generated by an archetype window that depends on a number of unknown or free parameters, most commonly in a linear manner. It is important to observe that we should not use more than two free parameters to describe the windows $v(t) \in C$. Indeed, one parameter is needed to satisfy the constraint (2.8.15) and the remaining one(s) to minimize the function in (2.8.17) under the inequality constraint (2.8.14); if, in the minimization operation, $\hat\phi_v(\omega)$ depends quadratically on more than one parameter, then in general the minimum value will be zero, $\hat\phi_v(\omega) = 0$ for all $\omega$, which is not acceptable. We postpone a more detailed discussion on the parameterization of $C$ until we have presented a motivation for the apodization design criterion in (2.8.17).

Figure 2.5. An apodization window design example using a rectangular window ($v_1(t)$) and a Kaiser window ($v_2(t)$). Shown are the periodograms corresponding to $v_1(t)$ and $v_2(t)$, and to the apodization window $v(t)$ selected using (2.8.17), for a data sequence of length 16 consisting of two noise-free sinusoids.

To understand intuitively why (2.8.17) makes sense, consider an example in which the data consists of two noise-free sinusoids. In this example we use a rectangular window $\{v_1(t)\}$ and a Kaiser window $\{v_2(t)\}$. The use of these windows leads to the windowed periodograms in Figure 2.5. As is apparent from this figure, $v_1(t)$ is a "high-resolution" window that trades off leakage for resolution, whereas $v_2(t)$ compromises resolution (the two sinusoids are not resolved in the corresponding periodogram) for less leakage.
By using the apodization principle in (2.8.17) to choose between ˆ φv1(ω) and ˆ φv2(ω), at each frequency ω, we obtain the spectral estimate shown in Figure 2.5, which inherits the high resolution of ˆ φv1(ω) and the low leakage of ˆ φv2(ω). A more formal motivation of the apodization approach can be obtained as follows. Let ht = v(t)e−iωt In terms of {ht} the equality constraint (2.8.15) becomes N X t=1 hteiωt = 1 (2.8.18) and hence the apodization design problem is to minimize N X t=1 hty(t) 2 (2.8.19) “sm2” 2004/2/ page 62 i i i i i i i i 62 Chapter 2 Nonparametric Methods subject to (2.8.18) as well as (2.8.14) and any other conditions resulting from the parameterization used for {v(t)} (and therefore for {ht}). We can interpret {ht} as an FIR filter of length N, and consequently (2.8.19) is the “power” of the filter output and (2.8.18) is the (complex) gain of the filter at frequency ω. Therefore, making use of {ht}, we can describe the apodization principle in words as follows: find the (parameterized) FIR filter {ht} which passes without distortion the sinu-soid with frequency ω (see (2.8.18)) and minimizes the output power (see (2.8.19)), and thus attenuates any other frequency components in the data as much as pos-sible. The (normalized) power at the output of the filter is taken as an estimate of the power in the data at frequency ω. This interpretation can clearly serve as a motivation of the apodization approach and it sheds more light on the apodization principle. In effect, minimizing (2.8.19) subject to (2.8.18) (along with the other constraints on {ht} resulting from the parameterization used for {v(t)}) is a special case of a sound approach to spectral analysis that will be described in Section 5.4.1 (a fact apparently noted for the first time in [Lee and Munson Jr. 1995]). As already stated above, an important aspect that remains to be discussed is the parameterization of {v(t)}. 
For the apodization principle to make sense, the class $C$ of windows must be chosen carefully. In particular, as explained above, we should not use more than two parameters to describe $\{v(t)\}$ (to prevent the meaningless "spectral estimate" $\hat\phi_v(\omega) \equiv 0$). The choice of the class $C$ is also important from a computational standpoint. Indeed, the task of solving (2.8.17), for each $\omega$, and then computing the corresponding $\hat\phi_v(\omega)$ may be computationally demanding unless $C$ is carefully chosen.

In the following we will consider the class of temporal windows used in [Stankwitz, Dallaire, and Fienup 1994]:

$$v(t) = \frac{1}{N}\left[\alpha - \beta\cos\left(\frac{2\pi}{N}t\right)\right], \qquad t = 1,\ldots,N \qquad (2.8.20)$$

It can be readily checked that (2.8.20) satisfies the constraints (2.8.14) and (2.8.15) if and only if

$$\alpha = 1 \ \text{ and } \ |\beta| \le 1 \qquad (2.8.21)$$

In addition we require that

$$\beta \ge 0 \qquad (2.8.22)$$

to ensure that the peak of $v(t)$ occurs in the middle of the interval $[1,N]$; this condition guarantees that the window in (2.8.20) (with $\beta > 0$) has lower sidelobes than the rectangular window corresponding to $\beta = 0$ (the window (2.8.20) with $\beta < 0$ generally has higher sidelobes than the rectangular window, and hence $\beta < 0$ cannot be a solution to the apodization design problem).

Remark: The temporal window (2.8.20) is of the same type as the lag Hanning and Hamming windows in Table 2.1. For the latter windows the interval of interest is $[-N,N]$ and hence for the peak of these windows to occur in the middle of the interval of interest, we need $\beta \le 0$ (cf. Table 2.1). This observation explains the difference between (2.8.20) and the lag windows in Table 2.1.
■

Combining (2.8.20), (2.8.21), and (2.8.22) leads to the following (constrained) parameterization of the temporal windows:

$$v(t) = \frac{1}{N}\left[1 - \beta\cos\left(\frac{2\pi}{N}t\right)\right] = \frac{1}{N}\left[1 - \frac{\beta}{2}\left(e^{i\frac{2\pi}{N}t} + e^{-i\frac{2\pi}{N}t}\right)\right], \qquad \beta \in [0,1] \qquad (2.8.23)$$

Assume, for simplicity, that $N$ is a power of two (for the general case we refer to [DeGraaf 1994]) and that a radix-2 FFT algorithm is used to compute

$$Y(k) = \sum_{t=1}^{N} y(t)e^{-i\frac{2\pi k}{N}t}, \qquad k = 1,\ldots,N \qquad (2.8.24)$$

(see Section 2.3). Then the windowed periodogram corresponding to (2.8.23) can be conveniently computed as follows:

$$\hat\phi_v(k) = \frac{1}{N}\left|Y(k) - \frac{\beta}{2}\left[Y(k-1) + Y(k+1)\right]\right|^2, \qquad k = 2,\ldots,N-1 \qquad (2.8.25)$$

Furthermore, in (2.8.25) $\beta$ is the solution to the following apodization design problem:

$$\min_{\beta\in[0,1]} \left|Y(k) - \frac{\beta}{2}\left[Y(k-1) + Y(k+1)\right]\right|^2 \qquad (2.8.26)$$

The unconstrained minimizer of the above function is given by:

$$\beta_0 = \mathrm{Re}\left\{\frac{2Y(k)}{Y(k-1) + Y(k+1)}\right\} \qquad (2.8.27)$$

Because the function in (2.8.26) is quadratic in $\beta$, it follows that the constrained minimizer of (2.8.26) is given by

$$\beta = \begin{cases} 0, & \text{if } \beta_0 < 0\\ \beta_0, & \text{if } 0 \le \beta_0 \le 1\\ 1, & \text{if } \beta_0 > 1 \end{cases} \qquad (2.8.28)$$

Remark: It is interesting to note from (2.8.28) that a change of the value of $\alpha$ in the window expression (2.8.20) will affect the apodization (optimal) window in a more complicated way than just a simple scaling. Indeed, if we change the value of $\alpha$, for instance to $\alpha = 0.75$, then the interval for $\beta$ becomes $\beta \in [0, 0.75]$ and this modification will affect the apodization window nonlinearly via (2.8.28). ■

The apodization-based windowed periodogram is simply obtained by using $\beta$ given by (2.8.28) in (2.8.25). Hence, despite the fact that the apodization window is both frequency and data dependent (via $\beta$ in (2.8.27), (2.8.28)) the implementation of the corresponding spectral estimate is only marginally more computationally demanding than the implementation of an unwindowed periodogram.
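The complete recipe (2.8.24)–(2.8.28) fits in a few lines. The sketch below (the function name and index conventions are ours) evaluates the apodized periodogram at the interior DFT bins, where the neighboring values $Y(k-1)$ and $Y(k+1)$ are available. Since $\beta = 0$ (the rectangular window) is always feasible, the result can never exceed the unwindowed periodogram at any bin:

```python
import numpy as np

def apodized_periodogram(y):
    # Apodization-based periodogram (2.8.25)-(2.8.28) for the window class
    # v(t) = [1 - beta*cos(2*pi*t/N)]/N, with beta chosen per frequency bin.
    y = np.asarray(y, dtype=complex)
    N = len(y)
    # Y(k) in (2.8.24) uses t = 1..N; NumPy's FFT uses t = 0..N-1, so we
    # correct by the phase factor exp(-i*2*pi*k/N).  The phase matters here
    # because Y(k) enters (2.8.25) in a linear combination with Y(k +/- 1).
    k_all = np.arange(N)
    Y = np.fft.fft(y) * np.exp(-1j * 2.0 * np.pi * k_all / N)
    phi = np.empty(N - 2)
    for k in range(1, N - 1):                        # interior bins only
        S = Y[k - 1] + Y[k + 1]
        if S == 0:
            beta = 0.0                               # degenerate case: keep rectangular
        else:
            beta0 = float(np.real(2.0 * Y[k] / S))   # unconstrained minimizer (2.8.27)
            beta = min(max(beta0, 0.0), 1.0)         # clipping rule (2.8.28)
        phi[k - 1] = abs(Y[k] - 0.5 * beta * S) ** 2 / N   # (2.8.25)
    return phi
```

Because (2.8.26) is convex in $\beta$ and $\beta = 0$ reproduces the unwindowed value $|Y(k)|^2/N$, the clipped minimizer always yields $\hat\phi_v(k) \le |Y(k)|^2/N$, which is a convenient sanity check on any implementation.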
Compared with the latter, however, the apodization-based windowed periodogram has a considerably reduced leakage problem and essentially the same resolution (see [Stankwitz, Dallaire, and Fienup 1994; DeGraaf 1994] for numerical examples illustrating this fact).

2.8.4 Estimation of Cross–Spectra and Coherency Spectra

As can be seen from Complement 1.6.1, the estimation of the cross-spectrum $\phi_{yu}(\omega)$ of two stationary signals, $y(t)$ and $u(t)$, is a useful operation when studying possible linear (dynamic) relations between $y(t)$ and $u(t)$. Let $z(t)$ denote the bivariate signal

$$z(t) = [y(t)\ \ u(t)]^T$$

and let

$$\hat\phi(\omega) = \frac{1}{N}Z(\omega)Z^*(\omega) \qquad (2.8.29)$$

denote the unwindowed periodogram estimate of the spectral density matrix of $z(t)$. In equation (2.8.29),

$$Z(\omega) = \sum_{t=1}^{N} z(t)e^{-i\omega t}$$

is the DTFT of $\{z(t)\}_{t=1}^{N}$. Partition $\hat\phi(\omega)$ as

$$\hat\phi(\omega) = \begin{bmatrix}\hat\phi_{yy}(\omega) & \hat\phi_{yu}(\omega)\\ \hat\phi_{yu}^*(\omega) & \hat\phi_{uu}(\omega)\end{bmatrix} \qquad (2.8.30)$$

As indicated by the notation previously used, estimates of $\phi_{yy}(\omega)$, $\phi_{uu}(\omega)$ and of the cross-spectrum $\phi_{yu}(\omega)$ may be obtained from the corresponding elements of $\hat\phi(\omega)$.

We first show that the estimate of the coherency spectrum obtained from (2.8.30) is always such that

$$|\hat C_{yu}(\omega)| = 1 \quad \text{for all } \omega \qquad (2.8.31)$$

and hence it is useless. To see this, note that since the rank of the $2\times 2$ matrix in (2.8.30) is equal to one (see Result R22 in Appendix A), we must have

$$\hat\phi_{uu}(\omega)\hat\phi_{yy}(\omega) = |\hat\phi_{yu}(\omega)|^2$$

which readily leads to the conclusion that the coherency spectrum estimate obtained from the elements of $\hat\phi(\omega)$ is bound to satisfy (2.8.31), and hence is meaningless. This result is yet another indication that the unwindowed periodogram is a poor estimate of the PSD.

Consider next a windowed Blackman–Tukey periodogram estimate of the cross-spectrum:

$$\hat\phi_{yu}(\omega) = \sum_{k=-M}^{M} w(k)\hat r_{yu}(k)e^{-i\omega k} \qquad (2.8.32)$$

where $w(k)$ is the lag window, and $\hat r_{yu}(k)$ is some usual estimate of $r_{yu}(k)$.
Unlike ryy(k) or ruu(k), ryu(k) does not necessarily peak at k = 0 and, moreover, is not an even function in general. The choice of the lag window for estimating cross– spectra may hence be governed by different rules from those commonly used in the autospectrum estimation. The main task of a lag window is to retain the “essential part” of the covari-ance sequence in the defining equation for the spectral density. In this way the bias is kept small and the variance is also reduced as the noisy tails of the sample co-variance sequence are weighted out. For simplicity of discussion, assume that most of the area under the plot of ˆ ryu(k) is concentrated about k = k0, with |k0| ≪N. As ˆ ryu(k) is a reasonably accurate estimate of ryu(k), provided |k| ≪N, we can assume that {ˆ ryu(k)} and {ryu(k)} have similar shapes. In such a case, one can redefine (2.8.32) as ˆ φyu(ω) = M X k=−M w(k −k0)ˆ ryu(k)e−iωk where the lag window w(s) is of the type recommended for autospectrum estimation. The choice of an appropriate value for k0 in the above cross–spectral estimator is essential, for if k0 is poorly selected the following situations can occur: • If M is chosen small to reduce the variance, the bias may be significant as “essential” lags of the cross–covariance sequence may be left out. • If M is chosen large to reduce the bias, the variance may significantly be inflated as poorly estimated high–order “nonessential” lags are included into the spectral estimation formula. Finally, let us look at the cross–spectrum estimators derived from (2.8.30) and (2.8.32), respectively, with a view of establishing a relation between them. 
Partition $Z(\omega)$ as

$$Z(\omega) = \begin{bmatrix} Y(\omega)\\ U(\omega)\end{bmatrix}$$

and observe that

$$\frac{1}{2\pi N}\int_{-\pi}^{\pi} Y(\omega)U^*(\omega)e^{i\omega k}\,d\omega = \frac{1}{2\pi N}\int_{-\pi}^{\pi}\sum_{t=1}^{N}\sum_{s=1}^{N} y(t)u^*(s)e^{-i\omega(t-s)}e^{i\omega k}\,d\omega = \frac{1}{N}\sum_{t=1}^{N}\sum_{s=1}^{N} y(t)u^*(s)\delta_{k,t-s} = \frac{1}{N}\sum_{t\in[1,N]\cap[1+k,N+k]} y(t)u^*(t-k) \triangleq \hat r_{yu}(k) \qquad (2.8.33)$$

where $\hat r_{yu}(k)$ can be rewritten in the following more familiar form:

$$\hat r_{yu}(k) = \begin{cases}\dfrac{1}{N}\displaystyle\sum_{t=k+1}^{N} y(t)u^*(t-k), & k = 0,1,2,\ldots\\[2mm] \dfrac{1}{N}\displaystyle\sum_{t=1}^{N+k} y(t)u^*(t-k), & k = 0,-1,-2,\ldots\end{cases}$$

Let

$$\hat\phi_{yu}^p(\omega) = \frac{1}{N}Y(\omega)U^*(\omega)$$

denote the unwindowed cross-spectral periodogram-like estimator, given by the off-diagonal element of $\hat\phi(\omega)$ in (2.8.30). With this notation, (2.8.33) can be written more compactly as

$$\hat r_{yu}(k) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\hat\phi_{yu}^p(\mu)e^{i\mu k}\,d\mu$$

By using the above equation in (2.8.32), we obtain:

$$\hat\phi_{yu}(\omega) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\hat\phi_{yu}^p(\mu)\sum_{k=-M}^{M} w(k)e^{-i(\omega-\mu)k}\,d\mu = \frac{1}{2\pi}\int_{-\pi}^{\pi} W(\omega-\mu)\hat\phi_{yu}^p(\mu)\,d\mu \qquad (2.8.34)$$

where $W(\omega) = \sum_{k=-\infty}^{\infty} w(k)e^{-i\omega k}$ is the spectral window. The previous equation should be compared with the similar equation, (2.5.3), that holds in the case of autospectra.

For implementation purposes, one can use the following discrete approximation of (2.8.34):

$$\hat\phi_{yu}(\omega) = \frac{1}{N}\sum_{k=-N}^{N} W(\omega-\omega_k)\,\hat\phi_{yu}^p(\omega_k)$$

where $\omega_k = \frac{2\pi}{N}k$ are the Fourier frequencies. The periodogram (cross-spectral) estimate that appears in the above equation can be efficiently computed by means of an FFT algorithm.

2.8.5 More Time–Bandwidth Product Results

The time (or duration)–bandwidth product result (2.6.5) relies on the assumptions that both $w(t)$ and $W(\omega)$ have a dominant peak at the origin, that they both are real-valued, and that they take on nonnegative values only. While most window-like signals (nearly) satisfy these assumptions, many other signals do not satisfy them. In this complement we obtain time–bandwidth product results that apply to a much broader class of signals.
“sm2” 2004/2/ page 67 i i i i i i i i Section 2.8 Complements 67 We begin by showing how the result (2.6.5) can be extended to a more general class of signals. Let x(t) denote a general discrete–time sequence and let X(ω) denote its DTFT. Both x(t) and X(ω) are allowed to take negative or complex values, and neither is required to peak at the origin. Let t0 and ω0 denote the maximum points of |x(t)| and |X(ω)|, respectively. The time width (or duration) and bandwidth definitions in (2.6.1) and (2.6.2) are modified as follows: ¯ Ne = P∞ t=−∞|x(t)| |x(t0)| and ¯ βe = 1 2π R π −π |X(ω)|dω |X(ω0)| Because x(t) and X(ω) form a Fourier transform pair, we obtain |X(ω0)| = ∞ X t=−∞ x(t)e−iω0t ≤ ∞ X t=−∞ |x(t)| and |x(t0)| = 1 2π Z π −π X(ω)eiωt0dω ≤1 2π Z π −π |X(ω)|dω which implies that ¯ Ne ¯ βe ≥1 (2.8.35) The above result, similar to (2.6.5), can be used to conclude that: A sequence {x(t)} cannot be narrow in both time and frequency. (2.8.36) More precisely, if x(t) is narrow in one domain it must be wide in the other domain. However, the inequality result (2.8.35), unlike (2.6.5), does not necessarily imply that ¯ βe decreases whenever ¯ Ne increases (or vice versa). Furthermore, the result (2.8.35) — again unlike (2.6.5) — does not exclude the possibility that the signal is broad in both domains. In fact, in the general class of signals to which (2.8.35) applies there are signals which are broad in both the time and frequency domains (for such signals ˜ Ne ˜ βe ≫1); see, e.g., [Papoulis 1977]. Evidently, the significant consequence of (2.8.35) is (2.8.36), which is precisely what makes the duration– bandwidth result an important one. The duration–bandwidth product type of result (such as (2.6.5) or (2.8.35), and (2.8.40) below) has been sometimes referred to by using the generic name of uncertainty principle, in an attempt to relate it to the Heisenberg Uncertainty Prin-ciple in quantum mechanics. 
(Briefly stated, the Heisenberg Uncertainty Principle asserts that the position and velocity of a particle cannot be simultaneously speci-fied to arbitrary precision.) To support the relationship, one can argue as follows: “sm2” 2004/2/ page 68 i i i i i i i i 68 Chapter 2 Nonparametric Methods Suppose that we are given a sequence with (equivalent) duration equal to Ne and that we are asked to use a linear filtering device to determine the sequence’s spec-tral content in a certain narrow band. Because the filter impulse response cannot be longer than Ne (in fact, it should be (much) shorter!), it follows from the time– bandwidth product result that the filter’s bandwidth can be on the order of 1/Ne but not smaller. Hence, the sequence’s spectral content in fine bands on an order smaller than 1/Ne cannot be exactly determined and therefore is “uncertain”. This is in effect the type of limitation that applies to the nonparametric spectral meth-ods discussed in this chapter. However, this way of arguing is related to a specific approach to spectral estimation and not to a fundamental limitation associated with the signal itself. (As we will see in later chapters of this text, there are parametric methods of spectral analysis that can provide the “high resolution” necessary to determine the spectral content in bands that are on an order less than 1/Ne). Next, we present another, slightly more general form of time–bandwidth prod-uct result. The definitions of duration and bandwidth used to obtain (2.8.35) make full sense whenever |x(t)| and |X(ω)| are single pulse–like waveforms, though these definitions may give reasonable results in many other instances as well. There are several other possible definitions of the broadness of a waveform in either the time or frequency domain. The definition used below and the corresponding time– bandwidth product result appear to be among the most general. 
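Before proceeding, the inequality (2.8.35) is easy to verify numerically. In the sketch below (the function name and grid size are our own choices), the integral defining $\bar\beta_e$ is approximated by the mean of $|X(\omega)|$ over a dense frequency grid obtained from a zero-padded FFT; the two triangle-inequality arguments behind (2.8.35) hold exactly on such a grid as well, so the computed product is never below one:

```python
import numpy as np

def time_bandwidth_product(x, L=4096):
    # Numerical version of (2.8.35): N_e = sum|x(t)| / max|x(t)| and
    # beta_e = [(1/2pi) * integral of |X(w)|] / max|X(w)|, with the integral
    # approximated by the mean of |X| over a dense L-point frequency grid
    # (samples of the DTFT obtained via a zero-padded FFT).
    x = np.asarray(x, dtype=complex)
    X = np.fft.fft(x, n=L)                 # samples of the DTFT X(w)
    Ne = np.abs(x).sum() / np.abs(x).max()
    be = np.abs(X).mean() / np.abs(X).max()
    return Ne * be
```

A single impulse attains the bound: it is maximally narrow in time and perfectly flat in frequency, so the product equals one.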
Let ˜ x(t) = x(t) qP∞ t=−∞|x(t)|2 (2.8.37) and ˜ X(ω) = X(ω) q 1 2π R π −π |X(ω)|2dω (2.8.38) By Parseval’s theorem (see (1.2.6)) the denominators in (2.8.37) and (2.8.38) are equal to each other. Therefore, ˜ X(ω) is the DTFT of ˜ x(t) as is already indicated by notation. Observe that ∞ X t=−∞ |˜ x(t)|2 = 1 2π Z π −π | ˜ X(ω)|2dω = 1 Hence, both {|˜ x(t)|2} and {| ˜ X(ω)|2/2π} can be interpreted as probability density functions in the sense that they are nonnegative and that they sum or integrate to one. The means and variances associated with these two “probability” densities are given by the following equations. Time Domain: µ = ∞ X t=−∞ t|˜ x(t)|2 σ2 = ∞ X t=−∞ (t −µ)2|˜ x(t)|2 “sm2” 2004/2/ page 69 i i i i i i i i Section 2.8 Complements 69 Frequency Domain: ν = 1 (2π)2 Z π −π ω| ˜ X(ω)|2dω ρ2 = 1 (2π)3 Z π −π (ω −2πν)2| ˜ X(ω)|2dω The values of the “standard deviations” σ and ρ show whether the normalized functions {|˜ x(t)|} and {| ˜ X(ω)|}, respectively, are narrow or broad. Hence, we can use σ and ρ as definitions for the duration and bandwidth, respectively, of the original functions {x(t)} and {X(ω)}. In what follows, we assume that: µ = 0, ν = 0 (2.8.39) For continuous–time signals, the zero–mean assumptions can always be made to hold by appropriately translating the origin on the time and frequency axes (see, e.g., [Cohen 1995]). However, doing the same in the case of the discrete–time sequences considered here does not appear to be possible. Indeed, µ may not be integer–valued, and the support of X(ω) is finite and hence is affected by transla-tion. Consequently, in the present case the zero–mean assumption introduces some restriction; nevertheless we impose it to simplify the analysis. 
According to the discussion above and assumption (2.8.39), we define the (equivalent) time width and bandwidth of $x(t)$ as follows:

$$\tilde N_e = \left[\sum_{t=-\infty}^{\infty} t^2|\tilde x(t)|^2\right]^{1/2}$$

$$\tilde\beta_e = \frac{1}{2\pi}\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}\omega^2|\tilde X(\omega)|^2\,d\omega\right]^{1/2}$$

In the remainder of this complement, we prove the following time–bandwidth product result:

$$\tilde N_e\tilde\beta_e \ge \frac{1}{4\pi} \qquad (2.8.40)$$

which holds true under (2.8.39) and the weak additional assumption that

$$|\tilde X(\pi)| = 0 \qquad (2.8.41)$$

To prove (2.8.40), first we note that

$$\tilde X'(\omega) \triangleq \frac{d\tilde X(\omega)}{d\omega} = -i\sum_{t=-\infty}^{\infty} t\tilde x(t)e^{-i\omega t}$$

Hence, $i\tilde X'(\omega)$ is the DTFT of $\{t\tilde x(t)\}$, which implies (by Parseval's theorem) that

$$\sum_{t=-\infty}^{\infty} t^2|\tilde x(t)|^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi}|\tilde X'(\omega)|^2\,d\omega \qquad (2.8.42)$$

Consequently, by the Cauchy–Schwartz inequality for functions (see Result R23 in Appendix A),

$$\tilde N_e\tilde\beta_e = \left[\frac{1}{2\pi}\int_{-\pi}^{\pi}|\tilde X'(\omega)|^2\,d\omega\right]^{1/2}\left[\frac{1}{(2\pi)^3}\int_{-\pi}^{\pi}\omega^2|\tilde X(\omega)|^2\,d\omega\right]^{1/2} \ge \frac{1}{(2\pi)^2}\left|\int_{-\pi}^{\pi}\omega\tilde X^*(\omega)\tilde X'(\omega)\,d\omega\right| \ge \frac{1}{2(2\pi)^2}\left|\int_{-\pi}^{\pi}\omega\tilde X^*(\omega)\tilde X'(\omega)\,d\omega + \int_{-\pi}^{\pi}\omega\tilde X(\omega)\tilde X^{*\prime}(\omega)\,d\omega\right| \qquad (2.8.43)$$

(the first equality above follows from (2.8.42), and the last step from $|z| \ge |\mathrm{Re}\,z| = \tfrac12|z + z^*|$). Hence

$$\tilde N_e\tilde\beta_e \ge \frac{1}{2(2\pi)^2}\left|\int_{-\pi}^{\pi}\omega\left[\tilde X^*(\omega)\tilde X'(\omega) + \tilde X(\omega)\tilde X^{*\prime}(\omega)\right]d\omega\right| = \frac{1}{2(2\pi)^2}\left|\int_{-\pi}^{\pi}\omega\left[|\tilde X(\omega)|^2\right]'d\omega\right|$$

which, after integrating by parts and using (2.8.41), yields

$$\tilde N_e\tilde\beta_e \ge \frac{1}{2(2\pi)^2}\left|\;\omega|\tilde X(\omega)|^2\Big|_{-\pi}^{\pi} - \int_{-\pi}^{\pi}|\tilde X(\omega)|^2\,d\omega\;\right| = \frac{1}{2(2\pi)}$$

and the proof is concluded.

Remark: There is an alternative way to complete the proof above, starting from the inequality in (2.8.43). In fact, as we will see, this alternative proof yields a tighter inequality than (2.8.40).
Let ϕ(ω) denote the phase of ˜ X(ω): ˜ X(ω) = | ˜ X(ω)|eiϕ(ω) Then, ω ˜ X∗(ω) ˜ X′(ω) = ω| ˜ X(ω)| h | ˜ X(ω)| i′ + iωϕ′(ω)| ˜ X(ω)|2 = 1 2 h ω| ˜ X(ω)|2i′ −1 2| ˜ X(ω)|2 + iωϕ′(ω)| ˜ X(ω)|2 (2.8.44) Inserting (2.8.44) into (2.8.43) yields ˜ Ne ˜ βe ≥ 1 (2π)2 ω 2 | ˜ X(ω)|2 π −π −π + i2πγ (2.8.45) “sm2” 2004/2/ page 71 i i i i i i i i Section 2.9 Exercises 71 where γ = 1 2π Z π −π ωϕ′(ω)| ˜ X(ω)|2dω can be interpreted as the “covariance” of ω and ϕ′(ω) under the “probability density function” given by | ˜ X(ω)|2/(2π). From (2.8.45) we obtain at once ˜ Ne ˜ βe ≥1 4π p 1 + 4γ2 (2.8.46) which is a slightly stronger result than (2.8.40). ■ The results (2.8.40) and (2.8.46) are similar to (2.8.35), and hence the type of comments previously made about (2.8.35) applies to (2.8.40) and (2.8.46) as well. For a more general time-bandwidth product result than the one above, see [Doroslovacki 1998]; the papers [Calvez and Vilb´ e 1992] and [Ishii and Furukawa 1986] contain similar results to the one presented in this complement. 2.9 EXERCISES Exercise 2.1: Covariance Estimation for Signals with Unknown Means The sample covariance estimators (2.2.3) and (2.2.4) are based on the assump-tion that the signal mean is equal to zero. A simple calculation shows that, under the zero–mean assumption, E {˜ r(k)} = r(k) (2.9.1) and E {ˆ r(k)} = N −|k| N r(k) (2.9.2) where {˜ r(k)} denotes the sample covariance estimate in (2.2.3). Equations (2.9.1) and (2.9.2) show that ˜ r(k) is an unbiased estimate of r(k), whereas ˆ r(k) is a biased one (note, however, that the bias in ˆ r(k) is small for N ≫|k|). For this reason, {˜ r(k)} and {ˆ r(k)} are often called the unbiased and, respectively, biased sample covariances. 
Whenever the signal mean is unknown, a most natural modification of the covariance estimators (2.2.3) and (2.2.4) is as follows:

$$\tilde r(k) = \frac{1}{N-k}\sum_{t=k+1}^{N}\left[y(t)-\bar y\right]\left[y(t-k)-\bar y\right]^* \qquad (2.9.3)$$

and

$$\hat r(k) = \frac{1}{N}\sum_{t=k+1}^{N}\left[y(t)-\bar y\right]\left[y(t-k)-\bar y\right]^* \qquad (2.9.4)$$

where $\bar y$ is the sample mean

$$\bar y = \frac{1}{N}\sum_{t=1}^{N}y(t) \qquad (2.9.5)$$

Show that in the unknown mean case, the usual names of unbiased and biased sample covariances associated with (2.9.3) and (2.9.4), respectively, may no longer be appropriate. Indeed, in such a case both estimators may be biased; furthermore, $\hat r(k)$ may be less biased than $\tilde r(k)$. To simplify the calculations, assume that $y(t)$ is white noise.

Exercise 2.2: Covariance Estimation for Signals with Unknown Means (cont'd)

Show that the sample covariance sequence $\{\hat r(k)\}$ in equation (2.9.4) of Exercise 2.1 satisfies the following equality:

$$\sum_{k=-(N-1)}^{N-1}\hat r(k) = 0 \qquad (2.9.6)$$

The above equality may seem somewhat surprising. (Why should the $\{\hat r(k)\}$ satisfy such a constraint, which the true covariances do not necessarily satisfy? Note, for instance, that the latter covariance sequence may well comprise only positive elements.) However, the equality in (2.9.6) has a natural explanation when viewed in the context of periodogram-based spectral estimation. Derive and explain formula (2.9.6) in the aforementioned context.

Exercise 2.3: Unbiased ACS Estimates May Lead to Negative Spectral Estimates

We stated in Section 2.2.2 that if unbiased ACS estimates, given by equation (2.2.3), are used in the correlogram spectral estimate (2.2.2), then negative spectral estimates may result. Find an example data sequence $\{y(t)\}_{t=1}^{N}$ that gives such a negative spectral estimate.
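A quick NumPy experiment (a sketch, not the requested analytical solution of Exercise 2.1; white Gaussian noise and the parameter values are my own choices) hints at the answer: once the sample mean is removed, both estimators acquire a negative bias at nonzero lags, and the nominally "biased" form (2.9.4) is in fact the less biased of the two.

```python
import numpy as np

def mean_corrected_acs(y, k):
    """Mean-corrected sample ACS at lag k >= 0: returns the pair ((2.9.3), (2.9.4))."""
    N = len(y)
    d = y - y.mean()
    s = np.sum(d[k:] * np.conj(d[:N - k]))
    return s / (N - k), s / N

rng = np.random.default_rng(1)
N, k, runs = 32, 2, 20_000              # white noise: true r(k) = 0 for k != 0

u_sum = b_sum = 0.0
for _ in range(runs):
    u, b = mean_corrected_acs(rng.standard_normal(N), k)
    u_sum += u
    b_sum += b

mean_u, mean_b = u_sum / runs, b_sum / runs
# mean_u ~ -1/N and mean_b ~ -(N-k)/N^2: both estimators are biased, and
# |bias of (2.9.4)| < |bias of (2.9.3)|, so the usual names are misleading here
```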
Exercise 2.4: Variance of Estimated ACS

Let $\{y(t)\}_{t=1}^{N}$ be real Gaussian (for simplicity), with zero mean, ACS equal to $\{r(k)\}$, and ACS estimate (either biased or unbiased) equal to $\{\hat r(k)\}$ (given by equation (2.2.3) or (2.2.4); we treat both cases simultaneously). Assume, without loss of generality, that $k \ge 0$.

(a) Make use of equation (2.4.24) to show that

$$\mathrm{var}\{\hat r(k)\} = \alpha^2(k)\sum_{m=-(N-k-1)}^{N-k-1}(N-k-|m|)\left[r^2(m) + r(m+k)\,r(m-k)\right]$$

where

$$\alpha(k) = \begin{cases} \dfrac{1}{N-k} & \text{for unbiased ACS estimates} \\[2mm] \dfrac{1}{N} & \text{for biased ACS estimates} \end{cases}$$

Hence, for large $N$, the standard deviation of the ACS estimate is $O(1/\sqrt N)$ under weak conditions on the true ACS $\{r(k)\}$.

(b) For the special case that $y(t)$ is white Gaussian noise, show that $\mathrm{cov}\{\hat r(k),\hat r(l)\} = 0$ for $k \ne l$, and find a simple expression for $\mathrm{var}\{\hat r(k)\}$.

Exercise 2.5: Another Proof of the Equality $\hat\phi_p(\omega) = \hat\phi_c(\omega)$

The proof of the result (2.2.6) in the text introduces an auxiliary random sequence and treats the original data sequence as deterministic (nonrandom). That proof relies on several results previously derived. A more direct proof of (2.2.6) can be found using only (2.2.1), (2.2.2), and (2.2.4). Find such a proof.

Exercise 2.6: A Compact Expression for the Sample ACS

Show that the expressions for the sample ACS given in the text (equations (2.2.3) or (2.2.4) for $k \ge 0$ and (2.2.5) for $k < 0$) can be rewritten using a single formula as follows:

$$\hat r(k) = \rho\sum_{p=1}^{N}\sum_{s=1}^{N}y(p)\,y^*(s)\,\delta_{s,p-k}, \qquad k = 0, \pm1, \ldots, \pm(N-1) \qquad (2.9.7)$$

where $\rho = \frac{1}{N}$ for (2.2.4) and $\rho = \frac{1}{N-|k|}$ for (2.2.3).

Exercise 2.7: Yet Another Proof of the Equality $\hat\phi_p(\omega) = \hat\phi_c(\omega)$

Use the compact expression for the sample ACS derived in Exercise 2.6 to obtain a very simple proof of (2.2.6).

Exercise 2.8: Linear Transformation Interpretation of the DFT

Let $F$ be the $N \times N$ matrix whose $(k,t)$th element is given by $W^{kt}$, where $W$ is as defined in (2.3.2).
Then the DFT, (2.3.3), can be written as a linear transformation of the data vector $y \triangleq [y(1)\ \ldots\ y(N)]^T$:

$$Y \triangleq [Y(0)\ \ldots\ Y(N-1)]^T = F y \qquad (2.9.8)$$

Show that $F$ is an orthogonal matrix that satisfies

$$\frac{1}{N}FF^* = I \qquad (2.9.9)$$

and, as a result, that the inverse transform is

$$y = \frac{1}{N}F^*Y \qquad (2.9.10)$$

Deduce from the above that the DFT is nothing but a representation of the data vector $y$ via an orthogonal basis in $\mathbb{C}^N$ (the basis vectors are the columns of $F^*$). Also, deduce that if the sequence $\{y(t)\}$ is periodic with a period equal to $N$, then the Fourier coefficient vector, $Y$, determines the whole sequence $\{y(t)\}_{t=1,2,\ldots}$, and that in effect the inverse transform (2.9.10) can be extended to include all samples $y(1), \ldots, y(N), y(N+1), y(N+2), \ldots$

Exercise 2.9: For White Noise the Periodogram Is an Unbiased PSD Estimator

Let $y(t)$ be a zero-mean white noise with variance $\sigma^2$ and let

$$Y(\omega_k) = \frac{1}{\sqrt N}\sum_{t=0}^{N-1}y(t)\,e^{-i\omega_k t}; \qquad \omega_k = \frac{2\pi}{N}k \quad (k = 0,\ldots,N-1)$$

denote its (normalized) DFT evaluated at the Fourier frequencies.

(a) Derive the covariances

$$E\{Y(\omega_k)Y^*(\omega_r)\}, \qquad k, r = 0, \ldots, N-1$$

(b) Use the result of the previous calculation to conclude that the periodogram $\hat\phi(\omega_k) = |Y(\omega_k)|^2$ is an unbiased estimator of the PSD of $y(t)$.

(c) Explain whether the unbiasedness property holds for $\omega \ne \omega_k$ as well. Present an intuitive explanation for your finding.

Exercise 2.10: Shrinking the Periodogram

First, we introduce a simple general result on mean squared error (MSE) reduction by shrinking. Let $\hat x$ be some estimate of a true (and unknown) parameter $x$. Assume that $\hat x$ is unbiased, i.e., $E(\hat x) = x$, and let $\sigma^2_{\hat x}$ denote the MSE of $\hat x$:

$$\sigma^2_{\hat x} = E\left\{(\hat x - x)^2\right\}$$

(Since $\hat x$ is unbiased, $\sigma^2_{\hat x}$ also equals the variance of $\hat x$.) For a fixed (nonrandom) $\rho$, let $\tilde x = \rho\hat x$ be another estimate of $x$. The "shrinkage coefficient" $\rho$ can be chosen so as to make the MSE of $\tilde x$ (much) smaller than $\sigma^2_{\hat x}$.
(Note that $\tilde x$, for $\rho \ne 1$, is a biased estimate of $x$; hence $\tilde x$ trades off bias for variance.) More precisely, show that the MSE of $\tilde x$, $\sigma^2_{\tilde x}$, achieves its minimum value (with respect to $\rho$) of

$$\sigma^2_{\tilde x_o} = \rho_o\,\sigma^2_{\hat x} \qquad \text{for} \qquad \rho_o = \frac{x^2}{x^2 + \sigma^2_{\hat x}}$$

Next, consider the application of the previous result to the periodogram. As we explained in the chapter, the periodogram-based spectral estimate is asymptotically unbiased and has an asymptotic MSE equal to the squared PSD value:

$$E\{\hat\phi_p(\omega)\} \to \phi(\omega), \qquad E\left\{(\hat\phi_p(\omega)-\phi(\omega))^2\right\} \to \phi^2(\omega) \qquad \text{as } N \to \infty$$

Show that the "optimally shrunk" periodogram estimate is

$$\tilde\phi(\omega) = \hat\phi_p(\omega)/2$$

and that the MSE of $\tilde\phi(\omega)$ is half the MSE of $\hat\phi_p(\omega)$. Finally, comment on the general applicability of this extremely simple tool for MSE reduction.

Exercise 2.11: Asymptotic Maximum Likelihood Estimation of $\phi(\omega)$ from $\hat\phi_p(\omega)$

It follows from the calculations in Section 2.4 that, asymptotically in $N$, $\hat\phi_p(\omega)$ has mean $\phi(\omega)$ and variance $\phi^2(\omega)$. In this exercise we assume that $\hat\phi_p(\omega)$ is (asymptotically) Gaussian distributed (which is not necessarily the case; however, the spectral estimator derived here under the Gaussian assumption may also be used when this assumption does not hold). Hence, the asymptotic probability density function of $\hat\phi_p(\omega)$ is (we omit the index $p$ as well as the dependence on $\omega$ to simplify the notation):

$$p_\phi(\hat\phi) = \frac{1}{\sqrt{2\pi\phi^2}}\exp\left[-\frac{(\hat\phi-\phi)^2}{2\phi^2}\right]$$

Show that the maximum likelihood estimate (MLE) of $\phi$ based on $\hat\phi$, which by definition is equal to the maximizer of $p_\phi(\hat\phi)$ (see Appendices B and C for a short introduction to maximum likelihood estimation), is given by

$$\tilde\phi = \frac{\sqrt 5 - 1}{2}\,\hat\phi$$

Compare $\tilde\phi$ with the "optimally shrunk" estimate of $\phi$ derived in Exercise 2.10.
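Both results, the optimal shrinkage coefficient of Exercise 2.10 and the golden-ratio MLE factor of Exercise 2.11, can be probed numerically. The sketch below (NumPy; all parameter values and the grid-search approach are my own choices, not part of the exercises) estimates the MSE of $\rho_o\hat x$ by Monte Carlo and locates the maximizer of $p_\phi(\hat\phi)$ over a grid, which should land near $(\sqrt 5 - 1)/2 \approx 0.618$.

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Exercise 2.10: MSE reduction by shrinking ---
x, sigma, runs = 2.0, 1.0, 200_000
xhat = x + sigma * rng.standard_normal(runs)      # unbiased estimates, MSE = sigma^2 = 1
rho_o = x**2 / (x**2 + sigma**2)                  # optimal shrinkage coefficient, here 0.8
mse_hat = np.mean((xhat - x) ** 2)                # ~ sigma^2 = 1
mse_til = np.mean((rho_o * xhat - x) ** 2)        # ~ rho_o * sigma^2 = 0.8

# --- Exercise 2.11: MLE of phi from a single Gaussian observation phihat ---
phihat = 1.0
grid = np.linspace(0.01, 3.0, 300_000)
# log-likelihood up to an additive constant: -ln(phi) - (phihat - phi)^2 / (2 phi^2)
logp = -np.log(grid) - (phihat - grid) ** 2 / (2 * grid**2)
phi_mle = grid[np.argmax(logp)]                   # ~ (sqrt(5) - 1) / 2
```

Note that here x is treated as known only so the MSE can be evaluated; the shrinkage result itself does not require knowing x to state, though applying ρ_o in practice does require an estimate of the ratio x²/(x² + σ²).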
Exercise 2.12: Plotting the Spectral Estimates in dB

It has been shown in this chapter that the spectral estimate $\hat\phi(\omega)$, obtained via an improved periodogram method, is asymptotically unbiased with a variance of the form $\mu^2\phi^2(\omega)$, where $\mu$ is a constant that can be made (much) smaller than one by appropriately choosing the window. This fact implies that the confidence interval $\hat\phi(\omega) \pm \mu\phi(\omega)$, constructed around the estimated PSD, should include the true (and unknown) PSD with a large probability. Now, obtaining a confidence interval as above has a twofold drawback: first, $\phi(\omega)$ is unknown; secondly, the interval may have significantly different widths for different frequency values. Show that plotting $\hat\phi(\omega)$ in decibels eliminates the previous drawbacks. More precisely, show that when $\hat\phi(\omega)$ is expressed in dB, its asymptotic variance is $c^2\mu^2$ (with $c = 10\log_{10}e$), and hence that the confidence interval for a log-scale plot has the same width (independent of $\phi(\omega)$) for all $\omega$.

Exercise 2.13: Finite-Sample Variance/Covariance Analysis of the Periodogram

This exercise has two aims. First, it shows that in the Gaussian case the variance/covariance analysis of the periodogram can be done in an extremely simple manner (even without the assumption that the data come from a linear process, as in (2.4.26)). Secondly, the exercise asks for a finite-sample analysis which, for some purposes, may be more useful than the asymptotic analysis presented in the text. Indeed, the asymptotic analysis result (2.4.21) may be misleading if not interpreted with care. For instance, (2.4.21) says that asymptotically (for $N \to \infty$) $\hat\phi(\omega_1)$ and $\hat\phi(\omega_2)$ are uncorrelated with one another, no matter how close $\omega_1$ and $\omega_2$ are. This cannot be true in finite samples, and hence the following question naturally arises: for a given $N$, how close can $\omega_1$ be to $\omega_2$ such that $\hat\phi(\omega_1)$ and $\hat\phi(\omega_2)$ are (nearly) uncorrelated with each other?
The finite-sample analysis of this exercise can provide an answer to such questions, whereas the asymptotic analysis cannot. Let

$$a(\omega) = [e^{i\omega}\ \ldots\ e^{iN\omega}]^T, \qquad y = [y(1)\ \ldots\ y(N)]^T$$

Then the periodogram, (2.2.1), can be written as (we omit the subindex $p$ of $\hat\phi_p(\omega)$ in this exercise):

$$\hat\phi(\omega) = |a^*(\omega)y|^2/N \qquad (2.9.11)$$

Assume that $\{y(t)\}$ is a zero-mean, stationary circular Gaussian process. The "circular Gaussianity" assumption (see, e.g., Appendix B) allows us to write the fourth-order moments of $\{y(t)\}$ as (see equation (2.4.24)):

$$E\{y(t)y^*(s)y(u)y^*(v)\} = E\{y(t)y^*(s)\}E\{y(u)y^*(v)\} + E\{y(t)y^*(v)\}E\{y(u)y^*(s)\} \qquad (2.9.12)$$

Make use of (2.9.11) and (2.9.12) to show that

$$\mathrm{cov}\{\hat\phi(\mu),\hat\phi(\nu)\} \triangleq E\left\{\left[\hat\phi(\mu)-E\{\hat\phi(\mu)\}\right]\left[\hat\phi(\nu)-E\{\hat\phi(\nu)\}\right]\right\} = |a^*(\mu)Ra(\nu)|^2/N^2 \qquad (2.9.13)$$

where $R = E\{yy^*\}$. Deduce from (2.9.13) that

$$\mathrm{var}\{\hat\phi(\mu)\} = |a^*(\mu)Ra(\mu)|^2/N^2 \qquad (2.9.14)$$

Use (2.9.14) to readily rederive the variance part of the asymptotic result (2.4.21). Next, use (2.9.13) to show that the covariance between $\hat\phi(\mu)$ and $\hat\phi(\nu)$ is not significant if $|\mu-\nu| > 4\pi/N$ and also that it may be significant otherwise. Hint: to show the inequality above, make use of the Carathéodory parameterization of a covariance matrix in Section 4.9.2.

Exercise 2.14: Data-Weighted ACS Estimate Interpretation of Bartlett and Welch Methods

Consider the Bartlett estimator, and assume $LM = N$.

(a) Show that the Bartlett spectral estimate can be written as:

$$\hat\phi_B(\omega) = \sum_{k=-(M-1)}^{M-1}\tilde r(k)\,e^{-i\omega k}$$

where

$$\tilde r(k) = \sum_{t=k+1}^{N}\alpha(k,t)\,y(t)\,y^*(t-k), \qquad 0 \le k < M$$

for some $\alpha(k,t)$ to be derived. Note that this is nearly of the form of the Blackman–Tukey spectral estimator, with the exception that the "standard" biased ACS estimate that is used in the Blackman–Tukey estimator is replaced by the "generalized" ACS estimate $\tilde r(k)$.
(b) Make use of the derived expression for $\alpha(k,t)$ to conclude that the Bartlett estimator is inferior to the Blackman–Tukey estimator (especially for small $N$) because it fails to use all available lag products in forming ACS estimates.

(c) Find $\alpha(k,t)$ for the Welch method. What overlap values ($K$ in equation (2.7.7)) give lag product usage similar to the Blackman–Tukey method?

Exercise 2.15: Approximate Formula for Bandwidth Calculation

Let $W(\omega)$ denote a general spectral window that has a peak at $\omega = 0$ and is symmetric about that point. In addition, assume that the peak of $W(\omega)$ is narrow (as usually it should be). Under these assumptions, make use of a Taylor series expansion to show that an approximate formula for calculating the bandwidth $B$ of the peak of $W(\omega)$ is the following:

$$B \simeq 2\sqrt{|W(0)/W''(0)|} \qquad (2.9.15)$$

The spectral peak bandwidth $B$ is mathematically defined as follows. Let $\omega_1$ and $\omega_2$ denote the "half-power points," defined through

$$W(\omega_1) = W(\omega_2) = W(0)/2, \qquad \omega_1 < \omega_2$$

(hence the ratio $10\log_{10}(W(0)/W(\omega_j)) \simeq 3\,\mathrm{dB}$ for $j = 1,2$; we use $10\log_{10}$ rather than $20\log_{10}$ because the spectral window is applied to a power quantity, $\phi(\omega)$). Then, since $W(\omega)$ is symmetric, $\omega_2 = -\omega_1$, and

$$B \triangleq \omega_2 - \omega_1 = 2\omega_2$$

As an application of (2.9.15), show that $B \simeq 0.78\cdot 2\pi/N$ (in radians per sampling interval) or, equivalently, that $B \simeq 0.78/N$ (in cycles per sampling interval) for the Bartlett window (2.4.15). Note that this formula remains approximate even as $N \to \infty$: even though the half-power bandwidth of the window gets smaller as $N$ increases (so that one would expect the Taylor series expansion to be more accurate), the curvature of the window at $\omega = 0$ increases without bound as $N$ increases. For the Bartlett window, verify that $B \simeq 0.9\cdot 2\pi/N$ for $N$ large, which differs from the prediction in this exercise by about 16%.
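Both numbers quoted for the Bartlett window can be reproduced directly (a NumPy sketch; the value of N and the grid sizes are arbitrary choices of mine). Using $W_B(\omega) = \sum_{|k|<N}(1-|k|/N)e^{-i\omega k}$, we have $W(0) = N$ and $W''(0) = -\sum_k k^2(1-|k|/N)$, so (2.9.15) gives $B \approx 0.78\cdot 2\pi/N$; numerically locating the half-power point of the closed Fejér-kernel form gives $B \approx 0.89\cdot 2\pi/N$, the "0.9" quoted above.

```python
import numpy as np

def fejer(w, N):
    """Bartlett window transform W_B(w) = (1/N) [sin(N w / 2) / sin(w / 2)]^2."""
    return np.sin(N * w / 2.0) ** 2 / (N * np.sin(w / 2.0) ** 2)

N = 4096

# Taylor-series estimate (2.9.15): B ~ 2 sqrt(|W(0) / W''(0)|)
k = np.arange(1, N)
W0 = float(N)
W2 = -2.0 * np.sum(k**2 * (1 - k / N))       # W''(0) = -sum_k k^2 (1 - |k|/N)
B_taylor = 2 * np.sqrt(abs(W0 / W2))

# Half-power bandwidth found numerically: solve W_B(w2) = W_B(0)/2, then B = 2*w2
w = np.linspace(1e-6, 4 * np.pi / N, 100_000)
w2 = w[np.argmin(np.abs(fejer(w, N) - W0 / 2))]
B_exact = 2 * w2

# In units of 2*pi/N: B_taylor * N / (2*pi) ~ 0.78, B_exact * N / (2*pi) ~ 0.89
```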
Exercise 2.16: A Further Look at the Time–Bandwidth Product

We saw in Section 2.6.1 that the product between the equivalent time and frequency widths of a regular window equals unity. Use the formula (2.9.15) derived in Exercise 2.15 to show that the spectral peak bandwidth $B$ of a window $w(k)$ that is nonzero only for $|k| < N$ satisfies

$$B\cdot N \ge 1/\pi \quad \text{(in cycles per sampling interval)} \qquad (2.9.16)$$

This once again illustrates the "time–bandwidth product" type of result. Note that (2.9.16) involves the effective window time length and spectral peak width, as opposed to (2.6.5), which is concerned with equivalent time and frequency widths.

Exercise 2.17: Bias Considerations in Blackman–Tukey Window Design

The discussion in this chapter treated the bias of a spectral estimator and its resolution as two interrelated properties. This exercise illustrates further the strong relationship between bias and resolution. Consider $\hat\phi_{BT}(\omega)$ as in (2.5.1), and without loss of generality assume that $E\{\hat r(k)\} = r(k)$. (Generality is not lost because, if $E\{\hat r(k)\} = \alpha(k)r(k)$, then replacing $w(k)$ by $\alpha(k)w(k)$ and $\hat r(k)$ by $\hat r(k)/\alpha(k)$ results in an equivalent estimator with unbiased ACS estimates.) Find the weights $\{w(k)\}_{k=-M+1}^{M-1}$ that minimize the squared bias, as given by the error measure:

$$\epsilon = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left[\phi(\omega) - E\{\hat\phi_{BT}(\omega)\}\right]^2 d\omega \qquad (2.9.17)$$

In particular, show that the weight function that minimizes $\epsilon$ is the rectangular window. Recall that the rectangular window also has the narrowest main lobe, and hence the best resolution.

Exercise 2.18: A Property of the Bartlett Window

Let the window length, $M$, be given. Then, in the general case, the rectangular window can be expected to yield the windowed spectral estimate with the most favorable bias properties, owing to the fact that the sample covariance lags $\{\hat r(k)\}_{k=-(M-1)}^{M-1}$, appearing in (2.5.1), are left unchanged by this window (also see Exercise 2.17).
The rectangular window, however, has the drawback that it is not positive definite and hence may produce negative spectral estimates. The Bartlett window, on the other hand, is positive definite and therefore yields a spectral estimate that is positive for all frequencies. Show that the latter window is the positive definite window which is closest to the rectangular one, in the sense of minimizing the following criterion:

$$\min_{\{w(k)\}}\ \sum_{k=0}^{M-1}\left|1 - w(k)\right| \qquad \text{subject to:} \quad
\begin{array}{l}
1)\ w(k) \equiv 0 \ \text{for } |k| \ge M \\
2)\ \{w(k)\}_{k=-\infty}^{\infty} \ \text{is a positive definite sequence} \\
3)\ w(0) = 1
\end{array} \qquad (2.9.18)$$

Conclude that the Bartlett window is the positive definite window that distorts the sample covariances $\{\hat r(k)\}_{k=-(M-1)}^{M-1}$ least in the windowed spectral estimate formula. Hint: any positive definite real window $\{w(k)\}_{k=-(M-1)}^{M-1}$ can be written as

$$w(k) = \sum_{i=0}^{M-1}b_i\,b_{i+k} \qquad (b_i = 0 \ \text{for } i \ge M) \qquad (2.9.19)$$

for some real-valued parameters $\{b_i\}_{i=0}^{M-1}$. Make use of the above parameterization of the set of positive definite windows to transform (2.9.18) into an optimization problem without constraints.

COMPUTER EXERCISES

Tools for Periodogram Spectral Estimation: The text web site www.prenhall.com/stoica contains the following Matlab functions for use in computing periodogram-based spectral estimates. In each case, y is the input data vector, L controls the frequency sample spacing of the output, and the output vector phi $= \phi(\omega_k)$, where $\omega_k = \frac{2\pi k}{L}$. Matlab functions that generate the Correlogram, Blackman–Tukey, Windowed Periodogram, Bartlett, Welch, and Daniell spectral estimates are as follows:

• phi = correlogramse(y,L)
Implements the correlogram spectral estimate in equation (2.2.2).

• phi = btse(y,w,L)
Implements the Blackman–Tukey spectral estimate in equation (2.5.1); w is the vector $[w(0),\ldots,w(M-1)]^T$.
• phi = periodogramse(y,v,L)
Implements the windowed periodogram spectral estimate in equation (2.6.24); v is a vector of window function elements $[v(1),\ldots,v(N)]^T$, and should be the same size as y. If v is a vector of ones, this function implements the unwindowed periodogram spectral estimate in equation (2.2.1).

• phi = bartlettse(y,M,L)
Implements the Bartlett spectral estimate in equations (2.7.2) and (2.7.3); M is the size of each subsequence as in equation (2.7.2).

• phi = welchse(y,v,K,L)
Implements the Welch spectral estimate in equation (2.7.8); v is the window function $[v(1),\ldots,v(M)]^T$ applied to each subsequence (so the subsequence size M equals the length of v), and K is the overlap parameter, as in equation (2.7.7).

• phi = daniellse(y,J,Ntilde)
Implements the Daniell spectral estimate in equation (2.7.16); J and Ntilde correspond to $J$ and $\tilde N$ there.

Exercise C2.19: Zero Padding Effects on Periodogram Estimators

In this exercise we study the effect zero padding has on the periodogram. Consider the sequence

$$y(t) = 10\sin(0.2\cdot 2\pi t + \varphi_1) + 5\sin\big((0.2 + 1/N)\,2\pi t + \varphi_2\big) + e(t) \qquad (2.9.20)$$

where $t = 0,\ldots,N-1$, and $e(t)$ is white Gaussian noise with variance 1. Let $N = 64$ and $\varphi_1 = \varphi_2 = 0$. From the results in Chapter 4, we find the spectrum of $y(t)$ to be

$$\phi(\omega) = 50\pi\left[\delta(\omega - 0.2\cdot 2\pi) + \delta(\omega + 0.2\cdot 2\pi)\right] + 12.5\pi\left[\delta(\omega - (0.2+1/N)\cdot 2\pi) + \delta(\omega + (0.2+1/N)\cdot 2\pi)\right] + 1$$

Plot the periodogram for the sequence $\{y(t)\}$, and the sequence $\{y(t)\}$ zero padded with $N$, $3N$, $5N$, and $7N$ zeroes. Explain the difference between the five periodograms. Why does the first periodogram not give a good description of the spectral content of the signal? Note that zero padding does not change the resolution of the estimator.
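The exercise's Matlab tooling aside, the zero-padding mechanics can be sketched in a few lines of NumPy (my own normalization convention: the periodogram is always divided by the original data length, so padding only refines the frequency grid):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 64
t = np.arange(N)
y = (10 * np.sin(0.2 * 2 * np.pi * t)
     + 5 * np.sin((0.2 + 1 / N) * 2 * np.pi * t)
     + rng.standard_normal(N))                 # (2.9.20) with phi1 = phi2 = 0

def periodogram(y, n_fft):
    """Periodogram on n_fft grid points; np.fft.fft zero pads y up to n_fft."""
    return np.abs(np.fft.fft(y, n_fft)) ** 2 / len(y)

phi_64 = periodogram(y, N)       # no padding: only 64 frequency samples
phi_512 = periodogram(y, 8 * N)  # padded with 7N zeros: denser grid, same resolution

freqs = np.fft.fftfreq(8 * N)                 # grid in cycles per sampling interval
f_peak = abs(freqs[np.argmax(phi_512)])       # strongest peak lies near f = 0.2
```

The unpadded estimate samples the underlying continuous-frequency periodogram too coarsely to show the two closely spaced peaks clearly, which is the point the exercise asks you to explain.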
Exercise C2.20: Resolution and Leakage Properties of the Periodogram

We have seen from Section 2.4 that the expected value of the periodogram is the convolution of the true spectrum $\phi_y(\omega)$ with the Fourier transform of a Bartlett window, denoted $W_B(\omega)$ (see equation (2.4.15)). The shape and size of the $W_B(\omega)$ function determine the amount of smearing and leakage in the periodogram. Similarly, in Section 2.5 we introduced a windowed periodogram in (2.6.24) whose expected value is equal to the expected value of a corresponding Blackman–Tukey estimate with weights $w(k)$ given by (2.6.31). Different window functions than the rectangular window could be used in the periodogram estimate, giving rise to correspondingly different windows in the correlogram estimate. The choice of window affects the resolution and leakage properties of the periodogram (correlogram) spectral estimate.

Resolution Properties: The amount of smearing of the spectral estimate is determined by the width of the main lobe, and the amount of leakage is determined by the energy in the sidelobes. The amount of smearing is what limits the resolving power of the periodogram, and is studied empirically below. We first study the resolution properties by considering a sequence made up of two sinusoids in noise, where the two sinusoidal frequencies are "close". Consider

$$y(t) = a_1\sin(f_0\cdot 2\pi t + \varphi_1) + a_2\sin\big((f_0 + \alpha/N)\,2\pi t + \varphi_2\big) + e(t) \qquad (2.9.21)$$

where $e(t)$ is real-valued Gaussian white noise with zero mean and variance $\sigma^2$. We choose $f_0 = 0.2$ and $N = 256$, but the results are nearly independent of $f_0$ and $N$.

(a) Determine empirically the 3 dB width of the main lobe of $W_B(\omega)$ as a function of $N$, and verify equation (2.4.18). Also determine the peak sidelobe height (in dB) as a function of $N$. Note that the sidelobe level of a window function is generally independent of $N$.
Verify this by examining plots of the magnitude of $W_B(\omega)$ for several values of $N$; try both linear and dB scales in your plots.

(b) Set $\sigma^2 = 0$ (this eliminates the statistical variation in the periodogram, so that the bias properties can be isolated and studied). Set $a_1 = a_2 = 1$ and $\varphi_1 = \varphi_2 = 0$. Plot the (zero-padded) periodogram of $y(t)$ for various $\alpha$ and determine the resolution threshold (i.e., the minimum value of $\alpha$ for which the two frequency components can be resolved). How does this value of $\alpha$ compare with the predicted resolution in Section 2.4?

(c) Repeat part (b) for a Hamming-windowed correlogram estimate.

(d) For reasonably high signal-to-noise ratio (SNR) values and reasonably close signal amplitudes, the resolution thresholds in parts (b) and (c) above are not very sensitive to variations in the signal amplitudes and frequency $f_0$. However, these thresholds are sensitive to the phases $\varphi_1$ and $\varphi_2$, especially if $\alpha$ is smaller than 1. Try two pairs $(\varphi_1, \varphi_2)$ so that the two sinusoids are in phase and out of phase, respectively, at the center of the observation interval, and compare the resolution thresholds. Also, try different values of $a_1$, $a_2$, and $\sigma^2$ to verify that their values have relatively little effect on the resolution threshold.

Spectral Leakage: In this part we analyze the effects of leakage on the periodogram estimate. Leakage properties can be clearly seen when trying to estimate two sinusoidal terms that are well separated but have greatly differing amplitudes.

(a) Generate the sinusoidal sequence above for $\alpha = 4$, $\sigma^2 = 0$, and $\varphi_1 = \varphi_2 = 0$. Set $a_1 = 1$ and vary $a_2$ (choose $a_2 = 1$, 0.1, 0.01, and 0.001, for example). Compute the periodogram (using a rectangular data window), and comment on the ability to identify the second sinusoidal term from the spectral estimate.

(b) Repeat part (a) for $\alpha = 12$. Does the amplitude threshold for identifiability of the second sinusoidal term change?
(c) Explain your results in parts (a) and (b) by looking at the amplitude of the Bartlett window's Fourier transform at frequencies corresponding to $\alpha/N$ for $\alpha = 4$ and $\alpha = 12$.

(d) The Bartlett window (and many other windows) has the property that the leakage level depends on the distance between spectral components in the data, as seen in parts (a) and (b). For many practical applications it may be known what dynamic range the sinusoidal components in the data may have, and it is thus desirable to use a data window with a constant sidelobe level that can be chosen by the user. The Chebyshev window (or Taylor window) is a good choice for these applications, because the user can select the (constant) sidelobe level in the window design (see the Matlab command chebwin). Assume we know that the maximum dynamic range of sinusoidal components is 60 dB. Design a Chebyshev window $v(t)$ and corresponding Blackman–Tukey window $w(k)$ using (2.6.31) so that the two sinusoidal components of the data can be resolved for this dynamic range using (i) the Blackman–Tukey spectral estimator with window $w(k)$, and (ii) the windowed periodogram method with window $v(t)$. Plot the Fourier transform of the window and determine the spectral resolution of the window. Test your window design by computing the Blackman–Tukey and windowed periodogram estimates for two sinusoids whose amplitudes differ by 50 dB in dynamic range, and whose frequency separation is the minimum value you predicted. Compare the resolution results with your predictions. Explain why the smaller amplitude sinusoid can be detected using one of the methods but not the other.

Exercise C2.21: Bias and Variance Properties of the Periodogram Spectral Estimate

In this exercise we verify the theoretical predictions about the bias and variance properties of the periodogram spectral estimate.
We use autoregressive moving average (ARMA) signals (see Chapter 3) as test signals.

Bias Properties — Resolution and Leakage: We consider a random process $y(t)$ generated by filtering white noise:

$$y(t) = H(z)e(t)$$

where $e(t)$ is zero-mean Gaussian white noise with variance $\sigma^2 = 1$, and the filter $H(z)$ is given by:

$$H(z) = \sum_{k=1}^{2}A_k\left[\frac{1 - z_k z^{-1}}{1 - p_k z^{-1}} + \frac{1 - z_k^* z^{-1}}{1 - p_k^* z^{-1}}\right] \qquad (2.9.22)$$

with

$$p_1 = 0.99\,e^{i2\pi 0.3}, \quad p_2 = 0.99\,e^{i2\pi(0.3+\alpha)}, \quad z_1 = 0.95\,e^{i2\pi 0.3}, \quad z_2 = 0.95\,e^{i2\pi(0.3+\alpha)} \qquad (2.9.23)$$

We first let $A_1 = A_2 = 1$ and $\alpha = 0.05$.

(a) Plot the true spectrum $\phi(\omega)$. Using a sufficiently fine grid for $\omega$ so that approximation errors are small, plot the ACS using an inverse FFT of $\phi(\omega)$.

(b) For $N = 64$, plot the Fourier transform of the Bartlett window, and also plot the expected value of the periodogram estimate $\hat\phi_p(\omega)$ as given by equation (2.4.8). We see that for this example and data length, the main lobe width of the Bartlett window is wider than the distance between the spectral peaks in $\phi(\omega)$. Discuss how this relatively wide main lobe width affects the resolution properties of the estimator.

(c) Generate 50 realizations of $y(t)$, each of length $N = 64$ data points. You can generate the data by passing white noise through the filter $H(z)$ (see the Matlab commands dimpulse and filter); be sure to discard a sufficient number of initial filter output points to effectively remove the transient part of the filter output. Compute the periodogram spectral estimates for each data sequence; plot 10 spectral estimates overlaid on a single plot. Also plot the average of the 50 spectral estimates. Compare the average with the predicted expected value as found in part (b).

(d) The resolution of the spectral peaks in $\phi(\omega)$ will depend on their separation relative to the width of the Bartlett window main lobe.
Generate realizations of $y(t)$ for $N = 256$, and find the minimum value of $\alpha$ so that the spectral peaks can be resolved in the averaged spectral estimate. Compare your results with the predicted formula (2.4.18) for spectral resolution.

(e) Leakage from the Bartlett window will impact the ability to identify peaks of different amplitudes. To illustrate this, generate realizations of $y(t)$ for $N = 64$, for both $\alpha = 4/N$ and $\alpha = 12/N$. For each value of $\alpha$, set $A_1 = 1$, and vary $A_2$ to find the minimum amplitude for which the lower amplitude peak can reliably be identified from the averaged spectral estimate. Compare this value with the Bartlett window sidelobe level for $\omega = 2\pi\alpha$ and for the two values of $\alpha$. Does the window sidelobe level accurately reflect the amplitude separation required to identify the second peak?

Variance Properties: In this part we will verify that the variance of the periodogram is almost independent of the data length, and compare the empirical variance with theoretical predictions. For this part, we consider a broadband signal $y(t)$ for which the Bartlett window smearing and leakage effects are small. Consider the broadband ARMA process

$$y(t) = \frac{B_1(z)}{A_1(z)}e(t)$$

with

$$A_1(z) = 1 - 1.3817z^{-1} + 1.5632z^{-2} - 0.8843z^{-3} + 0.4096z^{-4}$$
$$B_1(z) = 1 + 0.3544z^{-1} + 0.3508z^{-2} + 0.1736z^{-3} + 0.2401z^{-4}$$

(a) Plot the true spectrum $\phi(\omega)$.

(b) Generate 50 Monte Carlo data realizations using different noise sequences, and compute the corresponding 50 periodogram spectral estimates. Plot the sample mean, the sample mean plus one sample standard deviation, and the sample mean minus one sample standard deviation spectral estimate curves. Do this for $N = 64$, 256, and 1024. Note that the variance does not decrease with $N$.

(c) Compare the sample variance to the predicted variance in equation (2.4.21). It may help to plot $\mathrm{stdev}\{\hat\phi(\omega)\}/\phi(\omega)$ and determine to what degree this curve is approximately constant. Discuss your results.
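The headline fact of the variance part, that the periodogram's variance does not shrink as N grows, can be seen even with plain white noise standing in for the ARMA process above (a NumPy sketch, my simplification: for white noise $\phi(\omega) = \sigma^2$ and (2.4.21) predicts a variance near $\phi^2 = 1$ at interior frequencies for every N):

```python
import numpy as np

rng = np.random.default_rng(3)
runs = 500
variances = {}
for N in (64, 256, 1024):
    vals = np.empty(runs)
    for i in range(runs):
        y = rng.standard_normal(N)             # white noise, phi(omega) = 1
        P = np.abs(np.fft.fft(y)) ** 2 / N     # periodogram at the Fourier frequencies
        vals[i] = P[N // 4]                    # fixed interior frequency, omega = pi/2
    variances[N] = vals.var()
# variances[64], variances[256], variances[1024] all hover near phi^2 = 1:
# more data refines the frequency grid but does not average the estimate down
```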
Exercise C2.22: Refined Methods: Variance–Resolution Tradeoff

In this exercise we apply the Blackman–Tukey and Welch estimators to both a narrowband and a broadband random process. We consider the same processes in Chapters 3 and 5 to facilitate comparison with the spectral estimation methods developed in those chapters.

Broadband ARMA Process: Generate realizations of the broadband autoregressive moving-average (ARMA) process

$$y(t) = \frac{B_1(z)}{A_1(z)}e(t)$$

with

$$A_1(z) = 1 - 1.3817z^{-1} + 1.5632z^{-2} - 0.8843z^{-3} + 0.4096z^{-4}$$
$$B_1(z) = 1 + 0.3544z^{-1} + 0.3508z^{-2} + 0.1736z^{-3} + 0.2401z^{-4}$$

Choose the number of samples as $N = 256$.

(a) Generate 50 Monte Carlo data realizations using different noise sequences, and compute the corresponding 50 spectral estimates using the following methods:

• The Blackman–Tukey spectral estimate using the Bartlett window $w_B(t)$. Try both $M = N/4$ and $M = N/16$.

• The Welch spectral estimate using the rectangular window $w_R(t)$, and using both $M = N/4$ and $M = N/16$ and overlap parameter $K = M/2$.

Plot the sample mean, the sample mean plus one sample standard deviation, and the sample mean minus one sample standard deviation spectral estimate curves. Compare with the periodogram results from Exercise C2.21, and with each other.

(b) Judging from the plots you have obtained, how has the variance decreased in the refined estimates? How does this variance decrease compare to the theoretical expectations?

(c) As discussed in the text, the value of $M$ should be chosen to compromise between low "smearing" and low variance. For the Blackman–Tukey estimate, experiment with different values of $M$ and different window functions to find a "best design" (in your judgment), and plot the corresponding spectral estimates.
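As a sanity check on the expected variance reduction, here is a minimal NumPy implementation of a Welch-type estimator in the spirit of (2.7.8) (a sketch with my own normalization, segment sizes, and a rectangular window; the book's welchse routine is the reference implementation), applied to white noise, where averaging over segments should cut the variance well below that of the raw periodogram:

```python
import numpy as np

def welch(y, M, K, window=None):
    """Welch PSD estimate: average the periodograms of length-M windowed segments
    whose starting points are K samples apart (overlap M - K)."""
    if window is None:
        window = np.ones(M)                     # rectangular window
    P = np.sum(np.abs(window) ** 2) / M         # window power normalization
    segs = []
    start = 0
    while start + M <= len(y):
        seg = window * y[start:start + M]
        segs.append(np.abs(np.fft.fft(seg)) ** 2 / (M * P))
        start += K
    return np.mean(segs, axis=0)

rng = np.random.default_rng(7)
N = 1024
y = rng.standard_normal(N)                      # white noise, true PSD = 1

phi_p = np.abs(np.fft.fft(y)) ** 2 / N          # raw periodogram
phi_w = welch(y, M=64, K=32)                    # Welch, 50 percent overlap

# phi_p fluctuates wildly around 1; phi_w also averages around 1 but with
# much smaller spread across frequencies
```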
Narrowband ARMA Process: Generate realizations of the narrowband ARMA process

$$y(t) = \frac{B_2(z)}{A_2(z)}e(t)$$

with

$$A_2(z) = 1 - 1.6408z^{-1} + 2.2044z^{-2} - 1.4808z^{-3} + 0.8145z^{-4}$$
$$B_2(z) = 1 + 1.5857z^{-1} + 0.9604z^{-2}$$

and $N = 256$. Repeat the experiments and comparisons in the broadband example for the narrowband process.

Exercise C2.23: Periodogram-Based Estimators Applied to Measured Data

Consider the data sets in the files sunspotdata.mat and lynxdata.mat. These files can be obtained from the text web site www.prenhall.com/stoica. Apply periodogram-based estimation techniques (possibly after some preprocessing; see the following) to estimate the spectral content of these data. Try to answer the following questions:

(a) Are there sinusoidal components (or periodic structure) in the data? If so, how many components and at what frequencies?

(b) Nonlinear transformations and linear or polynomial trend removal are often applied before spectral analysis of a time series. For the lynx data, compare your spectral analysis results from the original data, and the data transformed first by taking the logarithm of each sample and then by subtracting the sample mean of this logarithmic data. Does the logarithmic transformation make the data more sinusoidal in nature?

CHAPTER 3
Parametric Methods for Rational Spectra

3.1 INTRODUCTION

The principal difference between the spectral estimation methods of Chapter 2 and those in this chapter is that in Chapter 2 we made no assumption on the studied signal (except for its stationarity). The parametric or model-based methods of spectral estimation assume that the signal satisfies a generating model with known functional form, and then proceed by estimating the parameters in the assumed model. The signal's spectral characteristics of interest are then derived from the estimated model.
In those cases where the assumed model is a close approximation to reality, it is no wonder that the parametric methods provide more accurate spectral estimates than the nonparametric techniques. The nonparametric approach to PSD estimation remains useful, though, in applications where there is little or no information about the signal in question.

Our discussion of parametric methods for spectral estimation is divided into two parts. In this chapter, we discuss parametric methods for rational spectra, which form a dense set in the class of continuous spectra (see Section 3.2) [Anderson 1971; Wei 1990]; more precisely, we discuss methods for estimating the parameters in rational spectral models. The parametric methods of spectral analysis, unlike the nonparametric approaches, also require the selection of the structure (or order) of the spectral model. A review of methods that can be used to solve the structure selection problem can be found in Appendix C. Furthermore, in Appendix B we discuss the Cramér–Rao bound and the best accuracy achievable in the rational class of spectral models. However, we do not include detailed results on the statistical properties of the estimation methods discussed in the following sections since: (i) such results are readily available in the literature [Kay 1988; Priestley 1981; Söderström and Stoica 1989]; (ii) parametric methods provide consistent spectral estimates and hence (for large sample sizes, at least) the issue of statistical behavior is not so critical; and (iii) a detailed statistical analysis is beyond the scope of an introductory course.

The second part of our discussion on parametric methods is contained in the next chapter, where we consider discrete spectra such as those associated with sinusoidal signals buried in white noise.
Mixed spectra (containing both continuous and discrete spectral components, such as in the case of sinusoidal signals corrupted by colored noise) are not covered explicitly in this text, but we remark that some methods in Chapter 4 can be extended to deal with such spectra as well.

3.2 SIGNALS WITH RATIONAL SPECTRA

A rational PSD is a rational function of e^{−iω} (i.e., the ratio of two polynomials in e^{−iω}):

    φ(ω) = [Σ_{k=−m}^{m} γ_k e^{−iωk}] / [Σ_{k=−n}^{n} ρ_k e^{−iωk}]    (3.2.1)

where γ_{−k} = γ_k^* and ρ_{−k} = ρ_k^*. The Weierstrass theorem from calculus asserts that any continuous PSD can be approximated arbitrarily closely by a rational PSD of the form (3.2.1), provided the degrees m and n in (3.2.1) are chosen sufficiently large; that is, the rational PSDs form a dense set in the class of all continuous spectra. This observation partly motivates the significant interest in the model (3.2.1) for φ(ω) among the researchers in the "spectral estimation community".

It is not difficult to show that, since φ(ω) ≥ 0, the rational spectral density in (3.2.1) can be factored as follows:

    φ(ω) = |B(ω)/A(ω)|² σ²    (3.2.2)

where σ² is a positive scalar, and A(ω) and B(ω) are the polynomials

    A(ω) = 1 + a_1 e^{−iω} + · · · + a_n e^{−inω}
    B(ω) = 1 + b_1 e^{−iω} + · · · + b_m e^{−imω}    (3.2.3)

The result (3.2.2) can similarly be expressed in the z–domain. With the notation φ(z) = [Σ_{k=−m}^{m} γ_k z^{−k}] / [Σ_{k=−n}^{n} ρ_k z^{−k}], we can factor φ(z) as:

    φ(z) = σ² [B(z)B^*(1/z^*)] / [A(z)A^*(1/z^*)]    (3.2.4)

where, for example,

    A(z) = 1 + a_1 z^{−1} + · · · + a_n z^{−n}
    A^*(1/z^*) = [A(1/z^*)]^* = 1 + a_1^* z + · · · + a_n^* z^n

Recall the notational convention in this text that we write, for example, A(z) and A(ω) with the implicit understanding that when we convert from a function of z to a function of ω, we use the substitution z = e^{iω}.
We note that the zeroes and poles of φ(z) occur in symmetric pairs about the unit circle: if z_i = re^{iθ} is a zero (pole) of φ(z), then 1/z_i^* = (1/r)e^{iθ} is also a zero (pole) (see Exercise 1.3). Under the assumption that φ(z) has no pole with modulus equal to one, the region of convergence of φ(z) includes the unit circle z = e^{iω}. The result that (3.2.1) can be written as in (3.2.2) and (3.2.4) is called the spectral factorization theorem (see, e.g., [Söderström and Stoica 1989; Kay 1988]).

The next point of interest is to compare (3.2.2) and (1.4.9). This comparison leads to the following result:

    The arbitrary rational PSD in (3.2.2) can be associated with a signal obtained by filtering white noise of power σ² through the rational filter with transfer function H(ω) = B(ω)/A(ω).    (3.2.5)

The filtering referred to in (3.2.5) can be written in the time domain as

    y(t) = [B(z)/A(z)] e(t)    (3.2.6)

or, alternatively,

    A(z)y(t) = B(z)e(t)    (3.2.7)

where y(t) is the filter output, and

    z^{−1} = the unit delay operator (z^{−k}y(t) = y(t − k))
    e(t) = white noise of variance equal to σ²

Hence, by means of the spectral factorization theorem, the parameterized model of φ(ω) is turned into a model of the signal itself. The spectral estimation problem can then be reduced to a problem of signal modeling. In the following sections, we present several methods for estimating the parameters in the signal model (3.2.7) and in two of its special cases corresponding to m = 0 and, respectively, n = 0.

A signal y(t) satisfying equation (3.2.6) is called an autoregressive moving average (ARMA or ARMA(n, m)) signal. If m = 0 in (3.2.6), then y(t) is an autoregressive (AR or AR(n)) signal; and y(t) is a moving average (MA or MA(m)) signal if n = 0. For easy reference, we summarize these naming conventions below.
    ARMA :  A(z)y(t) = B(z)e(t)
    AR :    A(z)y(t) = e(t)
    MA :    y(t) = B(z)e(t)    (3.2.8)

By assumption, φ(ω) is finite for all ω values; as a result, A(z) cannot have any zero exactly on the unit circle. Furthermore, since the poles and zeroes of φ(z) occur in reciprocal pairs, as explained before, it is always possible to choose A(z) to have all its zeroes strictly inside the unit disc. The corresponding model (3.2.6) is then said to be stable. If we assume, for simplicity, that φ(ω) does not vanish at any ω, then, similarly to the above, we can choose the polynomial B(z) so that it has all its zeroes inside the open unit disc. The corresponding model (3.2.6) is said to be minimum phase (see Exercise 3.1 for a motivation for the name minimum phase).

We remark that in the previous paragraph we actually provided a sketch of the proof of the spectral factorization theorem. That discussion also showed that the spectral factorization problem associated with a rational PSD has multiple solutions, the stable and minimum phase ARMA model being only one of them. In the following, we will consider the problem of estimating the parameters in this particular ARMA equation. When the final goal is the estimation of φ(ω), focusing on the stable and minimum phase ARMA model is no restriction.

3.3 COVARIANCE STRUCTURE OF ARMA PROCESSES

In this section we derive an expression for the covariances of an ARMA process in terms of the parameters {a_i}_{i=1}^{n}, {b_i}_{i=1}^{m}, and σ². The expression provides a convenient method for estimating the ARMA parameters by replacing the true autocovariances with estimates obtained from data. Nearly all ARMA spectral estimation methods exploit this covariance structure either explicitly or implicitly, and thus it will be widely used in the remainder of the chapter.
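The factorization result (3.2.5) is easy to illustrate numerically: filter white noise through B(z)/A(z) and compare averaged periodograms with σ²|B(ω)/A(ω)|². The sketch below uses an arbitrary illustrative ARMA(1,1) model, not one from the text:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)

# Illustrative stable, minimum-phase ARMA(1,1) model (assumed coefficients).
A = [1, -0.9]     # A(z) = 1 - 0.9 z^{-1}
B = [1, 0.5]      # B(z) = 1 + 0.5 z^{-1}
sigma2 = 1.0

# Theoretical PSD from the spectral factorization: phi(w) = sigma2 |B(w)/A(w)|^2.
w, H = signal.freqz(B, A, worN=512)
phi_true = sigma2 * np.abs(H) ** 2

# Empirical check: average periodograms |Y(w)|^2 / N of filtered white noise.
N, runs = 1024, 200
acc = np.zeros(512)
for _ in range(runs):
    y = signal.lfilter(B, A, rng.standard_normal(N))
    acc += np.abs(np.fft.fft(y)[:512]) ** 2 / N   # same grid as freqz above
phi_hat = acc / runs

rel_err = np.mean(np.abs(phi_hat - phi_true) / phi_true)
print(rel_err)   # small: the averaged periodogram tracks sigma2 |H|^2
```

The agreement improves as the number of averaged realizations grows, consistent with the periodogram being (asymptotically) unbiased for the PSD of the filtered noise.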
Equation (3.2.7) can be written as

    y(t) + Σ_{i=1}^{n} a_i y(t−i) = Σ_{j=0}^{m} b_j e(t−j),  (b_0 = 1)    (3.3.1)

Multiplying (3.3.1) by y^*(t−k) and taking expectation yields

    r(k) + Σ_{i=1}^{n} a_i r(k−i) = Σ_{j=0}^{m} b_j E{e(t−j) y^*(t−k)}    (3.3.2)

Since the filter H(z) = B(z)/A(z) is asymptotically stable and causal, we can write

    H(z) = B(z)/A(z) = Σ_{k=0}^{∞} h_k z^{−k},  (h_0 = 1)

which gives

    y(t) = H(z)e(t) = Σ_{k=0}^{∞} h_k e(t−k)

Then the term E{e(t−j) y^*(t−k)} becomes

    E{e(t−j) y^*(t−k)} = E{e(t−j) Σ_{s=0}^{∞} h_s^* e^*(t−k−s)} = σ² Σ_{s=0}^{∞} h_s^* δ_{j,k+s} = σ² h_{j−k}^*

where we use the convention that h_k = 0 for k < 0. Thus, equation (3.3.2) becomes

    r(k) + Σ_{i=1}^{n} a_i r(k−i) = σ² Σ_{j=0}^{m} b_j h_{j−k}^*    (3.3.3)

In general, h_k is a nonlinear function of the {a_i} and {b_i} coefficients. However, since h_s = 0 for s < 0, equation (3.3.3) for k ≥ m + 1 reduces to

    r(k) + Σ_{i=1}^{n} a_i r(k−i) = 0,  for k > m    (3.3.4)

Equation (3.3.4) is the basis for many estimators of the AR coefficients of AR(MA) processes, as we will see.

3.4 AR SIGNALS

In the ARMA class, the autoregressive or all–pole signals constitute the type that is most frequently used in applications. The AR equation may model spectra with narrow peaks by placing zeroes of the A–polynomial in (3.2.2) (with B(ω) ≡ 1) close to the unit circle. This is an important feature, since narrowband spectra are quite common in practice. In addition, the estimation of parameters in AR signal models is a well–established topic; the estimates are found by solving a system of linear equations, and the stability of the estimated AR polynomial can be guaranteed.

We consider two methods for AR spectral estimation. The first is based directly on the linear relationship between the covariances and the AR parameters derived in equation (3.3.4); it is called the Yule–Walker method.
The second method is based on a least squares estimate of the AR parameters using the time–domain equation A(z)y(t) = e(t). This so–called least squares method is closely related to the problem of linear prediction, as we shall see.

3.4.1 Yule–Walker Method

In this section, we focus on a technique for estimating the AR parameters which is called the Yule–Walker (YW) method [Yule 1927; Walker 1931]. For AR signals, m = 0 and B(z) = 1. Thus, equation (3.3.4) holds for k > 0. Also, we have from equation (3.3.3) that

    r(0) + Σ_{i=1}^{n} a_i r(−i) = σ² Σ_{j=0}^{0} b_j h_j^* = σ²    (3.4.1)

Combining (3.4.1) and (3.3.4) for k = 1, ..., n gives the following system of linear equations:

    [ r(0)   r(−1)   · · ·  r(−n)   ] [ 1   ]   [ σ² ]
    [ r(1)   r(0)    · · ·  r(−n+1) ] [ a_1 ] = [ 0  ]
    [  ⋮       ⋱       ⋱      ⋮    ] [  ⋮  ]   [ ⋮  ]
    [ r(n)   · · ·   r(1)   r(0)    ] [ a_n ]   [ 0  ]    (3.4.2)

The above equations are called the Yule–Walker equations or Normal equations, and form the basis of many AR estimation methods. If {r(k)}_{k=0}^{n} were known, we could solve (3.4.2) for

    θ = [a_1, ..., a_n]^T    (3.4.3)

by using all but the first row of (3.4.2):

    [ r(1) ]   [ r(0)    · · ·  r(−n+1) ] [ a_1 ]   [ 0 ]
    [  ⋮  ] + [  ⋮        ⋱      ⋮     ] [  ⋮  ] = [ ⋮ ]
    [ r(n) ]   [ r(n−1)  · · ·  r(0)   ] [ a_n ]   [ 0 ]    (3.4.4)

or, with obvious definitions,

    r_n + R_n θ = 0    (3.4.5)

The solution is θ = −R_n^{−1} r_n. Once θ is found, σ² can be obtained from the first row of (3.4.2) or, equivalently, from (3.4.1).

The Yule–Walker method for AR spectral estimation is based directly on (3.4.2). Given data {y(t)}_{t=1}^{N}, we first obtain sample covariances {r̂(k)}_{k=0}^{n} using the standard biased ACS estimator (2.2.4). We insert these ACS estimates in (3.4.2) and solve for θ̂ and σ̂² as explained above in the known–covariance case. Note that the covariance matrix in (3.4.2) can be shown to be positive definite for any n, and hence the solution to (3.4.2) is unique [Söderström and Stoica 1989].
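A minimal numerical sketch of the Yule–Walker procedure; the AR(2) coefficients below are an illustrative choice, not taken from the text:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)

# AR(2) test signal with known (illustrative) coefficients:
# A(z) = 1 - 1.5 z^{-1} + 0.7 z^{-2}, unit-variance white noise input.
a_true = np.array([-1.5, 0.7])
N = 4096
y = signal.lfilter([1], np.r_[1, a_true], rng.standard_normal(N))

# Standard biased ACS estimates r_hat(k), k = 0..n  (eq. (2.2.4)).
n = 2
r = np.array([y[k:] @ y[:N - k] / N for k in range(n + 1)])

# Yule-Walker: solve R_n theta = -r_n (eqs. (3.4.4)-(3.4.5)), then get
# sigma^2 from the first row of (3.4.2), i.e. eq. (3.4.1).
Rn = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])
rn = r[1:n + 1]
theta = -np.linalg.solve(Rn, rn)
sigma2 = r[0] + rn @ theta
print(theta, sigma2)   # close to [-1.5, 0.7] and 1 for this data length
```

For real-valued data r(−k) = r(k), which is why the symmetric Toeplitz matrix above can be built from r(0), ..., r(n−1) alone.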
When the covariances are replaced by standard biased ACS estimates, the matrix can be shown to be positive definite for any sample (not necessarily generated by an AR equation) that is not identically equal to zero; see the remark in the next section for a proof. To explicitly stress the dependence of θ and σ² on the order n, we can write (3.4.2) as

    R_{n+1} [1 ; θ_n] = [σ²_n ; 0]    (3.4.6)

We will return to the above equation in Section 3.5.

3.4.2 Least Squares Method

The Yule–Walker method for estimating the AR parameters is based on equation (3.4.2) with the true covariance elements {r(k)} replaced by the sample covariances {r̂(k)}. In this section, we derive another type of AR estimator based on a least squares (LS) minimization criterion using the time–domain relation A(z)y(t) = e(t). We develop the LS estimator by considering the closely related problem of linear prediction. We then interpret the LS method as a Yule–Walker–type method that uses a different estimate of R_{n+1} in equation (3.4.6).

We first relate the Yule–Walker equations to the linear prediction problem. Let y(t) be an AR process of order n. Then y(t) satisfies

    e(t) = y(t) + Σ_{i=1}^{n} a_i y(t−i) = y(t) + φ^T(t)θ ≜ y(t) − ŷ(t)    (3.4.7)

where φ(t) = [y(t−1), ..., y(t−n)]^T and ŷ(t) = −φ^T(t)θ. We interpret ŷ(t) as a linear prediction of y(t) from the n previous samples y(t−1), ..., y(t−n), and we interpret e(t) as the corresponding prediction error. See Complement 3.9.1 and also Exercises 3.3–3.5 for more discussion on this and other related linear prediction problems. The vector θ that minimizes the prediction error variance σ²_n ≜ E{|e(t)|²} is the AR coefficient vector in (3.4.6), as we will show. From (3.4.7) we have

    σ²_n = E{|e(t)|²} = E{[y^*(t) + θ^* φ^c(t)][y(t) + φ^T(t)θ]}
         = r(0) + r_n^* θ + θ^* r_n + θ^* R_n θ    (3.4.8)

where r_n and R_n are defined in equations (3.4.4)–(3.4.5).
The vector θ that minimizes (3.4.8) is given by (see Result R34 in Appendix A)

    θ = −R_n^{−1} r_n    (3.4.9)

with corresponding minimum prediction error

    σ²_n = r(0) − r_n^* R_n^{−1} r_n    (3.4.10)

Equations (3.4.9) and (3.4.10) are exactly the Yule–Walker equations in (3.4.5) and (3.4.1) (or, equivalently, in (3.4.6)). Thus, we see that the Yule–Walker equations can be interpreted as the solution to the problem of finding the best linear predictor of y(t) from its n most recent past samples. For this reason, AR modeling is sometimes referred to as linear predictive modeling.

The Least Squares AR estimation method is based on a finite–sample approximate solution of the above minimization problem. Given a finite set of measurements {y(t)}_{t=1}^{N}, we approximate the minimization of E{|e(t)|²} by the finite–sample cost function

    f(θ) = Σ_{t=N1}^{N2} |e(t)|² = Σ_{t=N1}^{N2} |y(t) + Σ_{i=1}^{n} a_i y(t−i)|² ≜ ‖y + Yθ‖²    (3.4.11)

where y = [y(N1), y(N1+1), ..., y(N2)]^T, the rows of Y are [y(t−1), ..., y(t−n)] for t = N1, ..., N2, and where we assume y(t) = 0 for t < 1 and t > N. The vector θ that minimizes f(θ) is given by (see Result R32 in Appendix A)

    θ̂ = −(Y^* Y)^{−1}(Y^* y)    (3.4.12)

where, as seen from (3.4.11), the definitions of Y and y depend on the choice of (N1, N2) considered. If N1 = 1 and N2 = N + n we have:

    y = [ y(1), y(2), ..., y(N), 0, ..., 0 ]^T    (n trailing zeros)

    Y = [ 0        0        · · ·  0          ]
        [ y(1)     0        · · ·  0          ]
        [  ⋮        ⋱               ⋮         ]
        [ y(n)     y(n−1)   · · ·  y(1)       ]
        [ y(n+1)   y(n)     · · ·  y(2)       ]
        [  ⋮                        ⋮         ]
        [ y(N−1)   y(N−2)   · · ·  y(N−n)     ]
        [ y(N)     y(N−1)   · · ·  y(N−n+1)   ]
        [ 0        y(N)     · · ·  y(N−n+2)   ]
        [  ⋮        ⋱        ⋱      ⋮         ]
        [ 0        · · ·    0      y(N)       ]    (3.4.13)

Notice the Toeplitz structure of Y, and also that y matches this Toeplitz structure when it is appended to the left of Y; that is, [y | Y] also shares this Toeplitz structure.

The two most common choices for N1 and N2 are:

• N1 = 1, N2 = N + n (considered above). This choice yields the so–called autocorrelation method.

• N1 = n + 1, N2 = N. This choice corresponds to removing the first n and last n rows of Y and y in equation (3.4.13), and hence eliminates all the arbitrary zero values there. The estimate (3.4.12) with this choice of (N1, N2) is often named the covariance method. We refer to this method as the covariance LS method, or the LS method.

Other choices for N1 and N2 have also been suggested. For example, the prewindow method uses N1 = 1 and N2 = N, and the postwindow method uses N1 = n + 1 and N2 = N + n.

The least squares methods can be interpreted as approximate solutions to the Yule–Walker equations in (3.4.4) by recognizing that Y^*Y and Y^*y are, to within a multiplicative constant, finite–sample estimates of R_n and r_n, respectively. In fact, it is easy to show that for the autocorrelation method, the elements of (Y^*Y)/N and (Y^*y)/N are exactly the biased ACS estimates (2.2.4) used in the Yule–Walker AR estimate. Writing θ̂ in (3.4.12) as

    θ̂ = −[(Y^*Y)/N]^{−1} [(Y^*y)/N]

we see as a consequence that

    The autocorrelation method of least squares AR estimation is equivalent to the Yule–Walker method.

Remark: We can now prove a claim made in the previous subsection, namely that the matrix Y^*Y in (3.4.12), with Y given by (3.4.13), is positive definite for any sample {y(t)}_{t=1}^{N} that is not identically equal to zero. To prove this claim it is necessary and sufficient to show that rank(Y) = n. If y(1) ≠ 0, then clearly rank(Y) = n. If y(1) = 0 and y(2) ≠ 0, then again we clearly have rank(Y) = n, and so on.
■

For the LS estimator, (Y^*Y)/(N−n) and (Y^*y)/(N−n) are unbiased estimates of R_n and r_n in equations (3.4.4) and (3.4.5), and they do not use any measurement data outside the available interval 1 ≤ t ≤ N. On the other hand, the matrix (Y^*Y)/(N−n) is not Toeplitz, so the Levinson–Durbin and Delsarte–Genin algorithms in the next section cannot be used (although similar fast algorithms for the LS method have been developed; see, e.g., [Marple 1987]). As N increases, the difference between the covariance matrix estimates used by the Yule–Walker and the LS methods diminishes. Consequently, for large samples (i.e., for N ≫ 1), the YW and LS estimates of the AR parameters nearly coincide.

For small or medium sample lengths, the Yule–Walker and covariance LS methods may behave differently. First, the estimated AR model obtained with the Yule–Walker method is always guaranteed to be stable (see, e.g., [Stoica and Nehorai 1987] and Exercise 3.8), whereas the estimated LS model may be unstable. For applications in which one is interested in the AR model (and not just the AR spectral estimate), stability of the model is often an important requirement. It may therefore be thought that the potential instability of the AR model provided by the LS method is a significant drawback. In practice, however, unstable estimated LS models appear only infrequently and, moreover, when they do occur there are simple means to "stabilize" them (for instance, by reflecting the unstable poles inside the unit circle). Hence, to conclude this point, the lack of guaranteed stability is a drawback of the LS method when compared with the Yule–Walker method, but often not a serious one.

Second, the LS method has been found to be more accurate than the Yule–Walker method, in the sense that the estimated parameters of the former are on average closer to the true values than those of the latter [Marple 1987; Kay 1988].
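The covariance LS method and the Yule–Walker method can be compared on a short data record. The sketch below (illustrative AR(2) coefficients assumed) solves (3.4.12) with N1 = n+1, N2 = N, so no zeros are appended to the data:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)

# Short AR(2) data record (illustrative coefficients, as before).
a_true = np.array([-1.5, 0.7])
N = 256
y = signal.lfilter([1], np.r_[1, a_true], rng.standard_normal(N))
n = 2

# Covariance LS method: row t of Y is [y(t-1), ..., y(t-n)], t = n+1..N
# (0-based slicing below), and no arbitrary zero values appear.
Y = np.column_stack([y[n - 1 - i:N - 1 - i] for i in range(n)])
yv = y[n:N]
theta_ls, *_ = np.linalg.lstsq(Y, -yv, rcond=None)   # theta = -(Y'Y)^{-1} Y'y

# Yule-Walker estimate from biased sample covariances, for comparison.
r = np.array([y[k:] @ y[:N - k] / N for k in range(n + 1)])
Rn = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])
theta_yw = -np.linalg.solve(Rn, r[1:])

print(theta_ls, theta_yw)   # both near a_true; they differ slightly at small N
```

As N grows the two estimates coincide, in line with the discussion above; at small N the zero-padding implicit in the Yule–Walker (autocorrelation) formulation is what introduces the extra bias.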
Since the finite–sample statistical analysis of these methods is underdeveloped, a theoretical explanation of this behavior is not possible at this time; only heuristic explanations are available. One such explanation is that the assumption that y(t) = 0 outside the interval 1 ≤ t ≤ N, and the corresponding zero elements in Y and y, result in bias in the Yule–Walker estimates of the AR parameters. When N is not much greater than n, this bias can be significant.

3.5 ORDER–RECURSIVE SOLUTIONS TO THE YULE–WALKER EQUATIONS

In applications, where a priori information about the true order n is usually lacking, AR models with different orders have to be tested, and hence the Yule–Walker system of equations (3.4.6) has to be solved for n = 1 up to n = nmax (some prespecified maximum order); see Appendix C. Using a general solution method, this task requires O(nmax⁴) flops. This may be a significant computational burden if nmax is large. This is, for example, the case in applications dealing with narrowband signals, where values of 50 or even 100 for nmax are not uncommon. In such applications, it may be important to reduce the number of flops required to determine {θ_n, σ²_n} in (3.4.6). In order to do so, the special algebraic structure of (3.4.6) should be exploited, as explained next.

The matrix R_{n+1} in the Yule–Walker system of equations is highly structured: it is Hermitian and Toeplitz. The first algorithm which exploited this fact to determine {θ_n, σ²_n} for n = 1, ..., nmax in O(nmax²) flops is the Levinson–Durbin algorithm (LDA) [Levinson 1947; Durbin 1960]. The number of flops required by the LDA is on the order of nmax times smaller than that required by a general linear equation solver to determine (θ_{nmax}, σ²_{nmax}), and on the order of nmax² times smaller than that required by a general linear equation solver to determine {θ_n, σ²_n} for all n = 1, ..., nmax.
The LDA is discussed in Section 3.5.1. In Section 3.5.2 we present another algorithm, the Delsarte–Genin algorithm (DGA), also named the split–Levinson algorithm, which in the case of real–valued signals is about two times faster than the LDA [Delsarte and Genin 1986].

Both the LDA and the DGA solve equation (3.4.6) recursively in the order n. The only requirement is that the matrix there be positive definite, Hermitian, and Toeplitz. Thus, the algorithms apply equally well to the Yule–Walker AR estimator (or, equivalently, the autocorrelation least squares AR method), in which the "true" ACS elements are replaced by estimates. Hence, to cover both cases simultaneously, in the following:

    ρ_k is used to represent either r(k) or r̂(k)    (3.5.1)

By using the above convention, we have

    R_{n+1} = [ ρ_0    ρ_{−1}  · · ·  ρ_{−n}  ]   [ ρ_0    ρ_1^*  · · ·  ρ_n^*  ]
              [ ρ_1    ρ_0      ⋱      ⋮     ] = [ ρ_1    ρ_0     ⋱      ⋮     ]
              [  ⋮      ⋱        ⋱    ρ_{−1} ]   [  ⋮      ⋱       ⋱    ρ_1^*  ]
              [ ρ_n    · · ·    ρ_1   ρ_0    ]   [ ρ_n    · · ·   ρ_1   ρ_0    ]    (3.5.2)

The following notational convention will also be frequently used in this section. For a vector x = [x_1 ... x_n]^T, we define

    x̃ = [x_n^* ... x_1^*]^T

An important property of any Hermitian Toeplitz matrix R is that

    y = Rx  ⇒  ỹ = Rx̃    (3.5.3)

The result (3.5.3) follows from the following calculation:

    ỹ_i = y_{n−i+1}^* = Σ_{k=1}^{n} R_{n−i+1,k}^* x_k^* = Σ_{k=1}^{n} ρ_{n−i+1−k}^* x_k^*
        = Σ_{p=1}^{n} ρ_{p−i}^* x_{n−p+1}^* = Σ_{p=1}^{n} R_{i,p} x̃_p = (Rx̃)_i

where R_{i,j} denotes the (i, j)th element of the matrix R (so R_{i,j} = ρ_{i−j}, with ρ_{−k} = ρ_k^*).

3.5.1 Levinson–Durbin Algorithm

The basic idea of the LDA is to solve (3.4.6) recursively in n, starting from the solution for n = 1 (which is easily determined). By using (3.4.6) and the nested structure of the R matrix, we can write

    R_{n+2} [1 ; θ_n ; 0] = [σ²_n ; 0 ; α_n]    (3.5.4)

where we have used the partition R_{n+2} = [ R_{n+1} , [ρ_{n+1}^* ; r̃_n] ; [ρ_{n+1} , r̃_n^*] , ρ_0 ], and where

    r_n = [ρ_1 ... ρ_n]^T    (3.5.5)
    α_n = ρ_{n+1} + r̃_n^* θ_n    (3.5.6)

Equation (3.5.4) would be the counterpart of (3.4.6) when n is increased by one, if α_n in (3.5.4) could be nulled. To do so, let

    k_{n+1} = −α_n/σ²_n    (3.5.7)

It follows from (3.5.3) and (3.5.4) that

    R_{n+2} { [1 ; θ_n ; 0] + k_{n+1} [0 ; θ̃_n ; 1] } = [σ²_n ; 0 ; α_n] + k_{n+1} [α_n^* ; 0 ; σ²_n]
                                                       = [σ²_n + k_{n+1} α_n^* ; 0]    (3.5.8)

which has the same structure as

    R_{n+2} [1 ; θ_{n+1}] = [σ²_{n+1} ; 0]    (3.5.9)

Comparing (3.5.8) and (3.5.9) and making use of the fact that the solution to (3.4.6) is unique for any n, we reach the conclusion that

    θ_{n+1} = [θ_n ; 0] + k_{n+1} [θ̃_n ; 1]    (3.5.10)

and

    σ²_{n+1} = σ²_n (1 − |k_{n+1}|²)    (3.5.11)

constitute the solution to (3.4.6) for order (n + 1). Equations (3.5.10) and (3.5.11) form the core of the LDA. The initialization of these recursive–in–n equations is straightforward. The box below summarizes the LDA in a form that should be convenient for machine coding. The LDA has many interesting properties and uses, for which we refer to [Söderström and Stoica 1989; Marple 1987; Kay 1988]. The coefficients k_i in the LDA are often called the reflection coefficients; the −k_i are also called the partial correlation (PARCOR) coefficients. The motivation for the name "partial correlation coefficient" is developed in Complement 3.9.1.

The Levinson–Durbin Algorithm

    Initialization:
        θ_1 = −ρ_1/ρ_0 = k_1                       [1 flop]
        σ²_1 = ρ_0 − |ρ_1|²/ρ_0                    [1 flop]
    For n = 1, ..., nmax, do:
        k_{n+1} = −(ρ_{n+1} + r̃_n^* θ_n)/σ²_n      [n + 1 flops]
        σ²_{n+1} = σ²_n (1 − |k_{n+1}|²)           [2 flops]
        θ_{n+1} = [θ_n ; 0] + k_{n+1} [θ̃_n ; 1]    [n flops]

It can be seen from the box above that the LDA requires on the order of 2n flops to compute {θ_{n+1}, σ²_{n+1}} from {θ_n, σ²_n}. Hence a total of about nmax² flops is needed to compute all the solutions to the Yule–Walker system of equations, from n = 1 to n = nmax.
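The boxed recursion transcribes directly into code. The sketch below (the helper name `levinson_durbin` is ours, not the text's) assumes ACS values ρ_0, ..., ρ_nmax are supplied as an array, and checks the result against a direct solve of the Yule–Walker equations:

```python
import numpy as np

def levinson_durbin(rho, nmax):
    """Order-recursive solution of R_{n+1}[1; theta_n] = [sigma2_n; 0],
    eq. (3.4.6), for n = 1..nmax, given ACS values rho[0..nmax]."""
    theta = np.array([-rho[1] / rho[0]])              # order-1 solution, k_1
    sigma2 = rho[0] - abs(rho[1]) ** 2 / rho[0]
    for n in range(1, nmax):
        alpha = rho[n + 1] + rho[n:0:-1] @ theta      # eq. (3.5.6)
        k = -alpha / sigma2                           # reflection coeff. (3.5.7)
        theta = np.concatenate([theta, [0.0]]) + k * np.concatenate(
            [np.conj(theta[::-1]), [1.0]])            # eq. (3.5.10)
        sigma2 = sigma2 * (1 - abs(k) ** 2)           # eq. (3.5.11)
    return theta, sigma2

# Check against a direct solve of the Yule-Walker equations (3.4.4),
# using biased sample covariances of an arbitrary data record.
rng = np.random.default_rng(0)
y = rng.standard_normal(512)
N = len(y)
r = np.array([y[k:] @ y[:N - k] / N for k in range(6)])
theta, s2 = levinson_durbin(r, 5)
R5 = np.array([[r[abs(i - j)] for j in range(5)] for i in range(5)])
print(np.max(np.abs(theta + np.linalg.solve(R5, r[1:6]))))  # essentially zero
```

Each iteration touches only O(n) numbers, which is where the overall O(nmax²) flop count comes from.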
This confirms the claim that the LDA reduces the computational burden associated with a general solver by two orders of magnitude.

3.5.2 Delsarte–Genin Algorithm

In the real data case (i.e., whenever y(t) is real valued), the Delsarte–Genin algorithm (DGA), or split–Levinson algorithm, exploits some further structure of the Yule–Walker problem, not exploited by the LDA, to decrease even further the number of flops required to solve for {θ_n, σ²_n} [Delsarte and Genin 1986]. In the following, we present a derivation of the DGA which is simpler than the original derivation. As already stated, we assume that the covariance elements {ρ_k} in the Yule–Walker equations are real valued.

Let Δ_n be defined by

    R_{n+1} Δ_n = β_n [1 ; 1 ; ... ; 1]    (3.5.12)

where the scalar β_n is unspecified for the moment. As the matrix R_{n+1} is positive definite, the (n+1)–vector Δ_n is uniquely defined by (3.5.12) (once β_n is specified; as a matter of fact, note that β_n has only a scaling effect on the components of Δ_n). It follows from (3.5.12) and (3.5.3) that Δ_n is a "symmetric vector", i.e., it satisfies

    Δ_n = Δ̃_n    (3.5.13)

The key idea of the DGA is to introduce such symmetric vectors into the computations involved in the LDA, as only half of the elements of these vectors need to be computed.

Next, note that by using the nested structure of R_{n+1} and the defining equation (3.5.12), we can write:

    R_{n+1} [0 ; Δ_{n−1}] = [γ_{n−1} ; β_{n−1} ; ... ; β_{n−1}]    (3.5.14)

(using the partition R_{n+1} = [ρ_0 , r_n^T ; r_n , R_n]), where r_n is defined in (3.5.5) and

    γ_{n−1} = r_n^T Δ_{n−1}    (3.5.15)

The systems of equations (3.5.12) and (3.5.14) can be linearly combined into a system having the structure of (3.4.6).
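The defining symmetry (3.5.13) can be checked numerically: for a real positive-definite Toeplitz matrix, the solution of (3.5.12) is indeed a symmetric vector. In the sketch below β_n is set to 1, since it only scales Δ_n:

```python
import numpy as np

rng = np.random.default_rng(4)

# Build a real positive-definite Toeplitz matrix R_{n+1} from biased
# ACS estimates of an arbitrary nonzero data record.
y = rng.standard_normal(256)
N, n = len(y), 5
rho = np.array([y[k:] @ y[:N - k] / N for k in range(n + 1)])
R = np.array([[rho[abs(i - j)] for j in range(n + 1)] for i in range(n + 1)])

# Delta_n solving R_{n+1} Delta_n = beta_n * [1, ..., 1]^T, with beta_n = 1.
delta = np.linalg.solve(R, np.ones(n + 1))
print(np.max(np.abs(delta - delta[::-1])))  # ~0: Delta_n equals its reversal
```

The symmetry holds because a real Toeplitz matrix is persymmetric, so reversing a solution of (3.5.12) yields another solution, and the solution is unique.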
To do so, let

    λ_n = β_n/β_{n−1}    (3.5.16)

Then, from (3.5.12), (3.5.14) and (3.5.16), we get

    R_{n+1} { Δ_n − λ_n [0 ; Δ_{n−1}] } = [β_n − λ_n γ_{n−1} ; 0]    (3.5.17)

It will be shown that β_n can always be chosen so as to make the first element of Δ_n equal to one,

    (Δ_n)_1 = 1    (3.5.18)

In such a case, (3.5.17) has exactly the same structure as (3.4.6) and, as the solutions to these two systems of equations are unique, we are led to the following relations:

    [1 ; θ_n] = Δ_n − λ_n [0 ; Δ_{n−1}]    (3.5.19)
    σ²_n = β_n − λ_n γ_{n−1}    (3.5.20)

Furthermore, since (Δ_n)_1 = 1 and Δ_n is a symmetric vector, we must also have (Δ_n)_{n+1} = 1. This observation, along with (3.5.19) and the fact that k_n is the last element of θ_n (see (3.5.10)), gives the following expression for k_n:

    k_n = 1 − λ_n    (3.5.21)

The equations (3.5.19)–(3.5.21) express the LDA variables {θ_n, σ²_n, k_n} as functions of {Δ_n} and {β_n}. It remains to derive recursive–in–n formulas for {Δ_n} and {β_n}, and also to prove that (3.5.18) really holds. This is done in the following.

Let {β_n} be defined recursively by the following second–order difference equation:

    β_n = 2β_{n−1} − α_n β_{n−2}    (3.5.22)

where

    α_n = (β_{n−1} − γ_{n−1})/(β_{n−2} − γ_{n−2})    (3.5.23)

The initial values required to start the recursion (3.5.22) are β_0 = ρ_0 and β_1 = ρ_0 + ρ_1. With this definition of {β_n}, we claim that the vectors {Δ_n} (as defined in (3.5.12)) satisfy (3.5.18) as well as the following second–order recursion:

    Δ_n = [Δ_{n−1} ; 0] + [0 ; Δ_{n−1}] − α_n [0 ; Δ_{n−2} ; 0]    (3.5.24)

In order to prove the above claim, we first apply the result (3.5.3) to (3.5.14) to get

    R_{n+1} [Δ_{n−1} ; 0] = [β_{n−1} ; ... ; β_{n−1} ; γ_{n−1}]    (3.5.25)

Next, we note that

    R_{n+1} [0 ; Δ_{n−2} ; 0] = [γ_{n−2} ; β_{n−2} ; ... ; β_{n−2} ; γ_{n−2}]    (3.5.26)

(using the partition R_{n+1} = [ρ_0 , r_{n−1}^T , ρ_n ; r_{n−1} , R_{n−1} , r̃_{n−1} ; ρ_n , r̃_{n−1}^T , ρ_0]). The right–hand sides of equations (3.5.14), (3.5.25) and (3.5.26) can be linearly combined, as described below, to get the right–hand side of (3.5.12):

    [γ_{n−1} ; β_{n−1} ; ... ; β_{n−1}] + [β_{n−1} ; ... ; β_{n−1} ; γ_{n−1}]
        − α_n [γ_{n−2} ; β_{n−2} ; ... ; β_{n−2} ; γ_{n−2}] = β_n [1 ; ... ; 1]    (3.5.27)

The equality in (3.5.27) follows from the defining equations of β_n and α_n. This observation, in conjunction with (3.5.14), (3.5.25) and (3.5.26), gives the following system of linear equations:

    R_{n+1} { [Δ_{n−1} ; 0] + [0 ; Δ_{n−1}] − α_n [0 ; Δ_{n−2} ; 0] } = β_n [1 ; ... ; 1]    (3.5.28)

which has exactly the structure of (3.5.12). Since the solutions to (3.5.12) and (3.5.28) are unique, they must coincide, and hence (3.5.24) follows.

Next, turn to the condition (3.5.18). From (3.5.24) we see that (Δ_n)_1 = (Δ_{n−1})_1. Hence, in order to prove that (3.5.18) holds, it suffices to show that Δ_1 = [1 1]^T. The initial values β_0 = ρ_0 and β_1 = ρ_0 + ρ_1 (purposely chosen for the sequence {β_n}), when inserted in (3.5.12), give Δ_0 = 1 and Δ_1 = [1 1]^T. With this observation, the proof of (3.5.18) and (3.5.24) is finished.

The DGA consists of the equations (3.5.16) and (3.5.19)–(3.5.24). These equations include second–order recursions and appear to be more complicated than the first–order recursive equations of the LDA. In reality, owing to the symmetry of the Δ_n vectors, the DGA is computationally more efficient than the LDA (see below). The DGA equations are summarized in the box below, along with an approximate count of the number of flops required for implementation.

The Delsarte–Genin Algorithm

    DGA equations                                                     no. of (×)   no. of (+)
    Initialization:
        Δ_0 = 1, β_0 = ρ_0, γ_0 = ρ_1                                     –            –
        Δ_1 = [1 1]^T, β_1 = ρ_0 + ρ_1, γ_1 = ρ_1 + ρ_2                   –            2
    For n = 2, ..., nmax, do:
    (a) α_n = (β_{n−1} − γ_{n−1})/(β_{n−2} − γ_{n−2})                     1            2
        β_n = 2β_{n−1} − α_n β_{n−2}                                      2            1
        Δ_n = [Δ_{n−1} ; 0] + [0 ; Δ_{n−1}] − α_n [0 ; Δ_{n−2} ; 0]       ∼n/2         ∼n
        γ_n = r_{n+1}^T Δ_n = (ρ_1 + ρ_{n+1}) + (Δ_n)_2 (ρ_2 + ρ_n) + ...  ∼n/2        ∼n
    (b) λ_n = β_n/β_{n−1}                                                 1            –
        σ²_n = β_n − λ_n γ_{n−1}                                          1            1
        k_n = 1 − λ_n                                                     –            1
    (c) [1 ; θ_n] = Δ_n − λ_n [0 ; Δ_{n−1}]                               ∼n/2         ∼n

The DGA can be implemented in two principal modes, depending on the application at hand.

DGA Mode 1: In most AR modeling exercises we do not really need all of {θ_n} for n = 1, ..., nmax. We do, however, need {σ²_1, σ²_2, ...} for the purpose of order selection (see Appendix C). Assume that we determined the AR order on the basis of the σ² sequence. For simplicity, let this order be denoted by nmax. Then the only θ vector to be computed is θ_{nmax}. We may also need to compute the {k_n} sequence, since this bears useful information about the stability of the determined AR model (see, e.g., [Söderström and Stoica 1989; Kay 1988; Therrien 1992]).

In the modeling application outlined above, we need to iterate only the groups (a) and (b) of equations in the previous DGA summary. The matrix equation (c) is computed only for n = nmax. This way of implementing the DGA requires the following number of multiplications and additions:

    no. of (×) ≃ nmax²/2
    no. of (+) ≃ nmax²    (3.5.29)

Recall that, for the LDA, no. of (×) = no. of (+) ≃ nmax². Thus, the DGA is approximately two times faster than the LDA (on computers for which multiplication is much more time consuming than addition). We may remark that in some parameter estimation applications the equations in group (b) of the DGA can also be left out, but this will speed up the implementation of the DGA only slightly.

DGA Mode 2: In other applications, we need all of {θ_n} for n = 1, ..., nmax. An example of such an application is the Cholesky factorization of the inverse covariance matrix R_{nmax}^{−1} (see, e.g., Exercise 3.7 and [Söderström and Stoica 1989]).
In such a case, we need to iterate all equations in the DGA, which results in the following number of arithmetic operations:

  no. of (×) ≃ 0.75 n²_max,  no. of (+) ≃ 1.5 n²_max    (3.5.30)

This is still about 25% faster than the LDA (assuming, once again, that the computation time required for a multiplication dominates the time corresponding to an addition).

In closing this section, we note that the computational comparisons between the DGA and the LDA above neglected terms on the order O(n_max). This is acceptable if n_max is reasonably large (say, n_max ≥ 10). If n_max is small, these comparisons are no longer valid and, in fact, the LDA may be computationally more efficient than the DGA; in such low-dimensional applications the LDA is therefore to be preferred. Also recall that the LDA is the algorithm to use with complex-valued data, since the DGA does not appear to have a computationally efficient extension for complex-valued data.

3.6 MA SIGNALS

According to the definition in (3.2.8), an MA signal is obtained by filtering white noise with an all-zero filter. Owing to this all-zero structure, it is not possible to use an MA equation to model a spectrum with sharp peaks unless the MA order is chosen "sufficiently large". This is to be contrasted with the ability of the AR (or "all-pole") equation to model narrowband spectra using fairly low model orders (cf. the discussion in the previous sections). The MA model provides a good approximation for those spectra which are characterized by broad peaks and sharp nulls. Such spectra are encountered less frequently in applications than narrowband spectra, so there is somewhat limited engineering interest in using the MA signal model for spectral estimation. Another reason for this limited interest is that the MA parameter estimation problem is basically a nonlinear one, and is significantly more difficult to solve than the AR parameter estimation problem.
In any case, the types of difficulties we must face in MA and ARMA estimation problems are quite similar, and hence we may almost always prefer to use the more general ARMA model in lieu of the MA one. For these reasons, our discussion of MA spectral estimation will be brief.

One method to estimate an MA spectrum consists of two steps: (i) estimate the MA parameters {b_k}, k = 1, …, m, and σ²; and (ii) insert the estimated parameters from the first step in the MA PSD formula (see (3.2.2)):

  φ(ω) = σ² |B(ω)|²    (3.6.1)

The difficulty with this approach lies in step (i), which is a nonlinear estimation problem. Approximate linear solutions to this problem do, however, exist. One of these approximate procedures, perhaps the most used method for MA parameter estimation, is based on a two-stage least squares methodology [Durbin 1959]. It is called Durbin's method and will be described in the next section in the more general context of ARMA parameter estimation.

Another method to estimate an MA spectrum is based on the reparameterization of the PSD in terms of the covariance sequence. We see from (3.2.8) that for an MA of order m,

  r(k) = 0 for |k| > m    (3.6.2)

Owing to this simple observation, the definition of the PSD as a function of {r(k)} turns into a finite-dimensional spectral model:

  φ(ω) = Σ_{k=−m}^{m} r(k) e^{−iωk}    (3.6.3)

Hence a simple estimator of the MA PSD is obtained by inserting estimates of {r(k)}, k = 0, …, m, in (3.6.3). If the standard sample covariances {r̂(k)} are used to estimate {r(k)}, then we obtain:

  φ̂(ω) = Σ_{k=−m}^{m} r̂(k) e^{−iωk}    (3.6.4)

This spectral estimate is of the form of the Blackman–Tukey estimator (2.5.1). More precisely, (3.6.4) coincides with a Blackman–Tukey estimator using a rectangular window of length 2m + 1. This is not unexpected.
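The covariance-based MA estimator (3.6.4) is simple enough to sketch directly. The following fragment is illustrative only (the function and variable names are ours, not from the text): it forms the standard biased sample covariances and evaluates φ̂(ω) on a frequency grid for a simulated MA(1) signal.

```python
import numpy as np

# Hedged sketch of the correlogram-type MA spectral estimator (3.6.4):
# phi_hat(w) = sum_{k=-m}^{m} rhat(k) e^{-iwk}, with the standard
# (biased) sample covariances rhat(k). Real-valued data assumed.

def sample_acs(y, m):
    """Standard biased sample covariances rhat(0..m) of a real sequence y."""
    N = len(y)
    return np.array([np.dot(y[k:], y[:N - k]) / N for k in range(m + 1)])

def ma_psd_estimate(y, m, omegas):
    """MA(m) PSD estimate (3.6.4) on a grid of frequencies (rad/sample)."""
    r = sample_acs(y, m)
    phi = np.zeros(len(omegas))
    for i, w in enumerate(omegas):
        # r(-k) = r(k) for real data, so the sum is r(0) + 2 sum_k r(k) cos(wk)
        phi[i] = r[0] + 2.0 * np.sum(r[1:] * np.cos(w * np.arange(1, m + 1)))
    return phi

rng = np.random.default_rng(0)
e = rng.standard_normal(4096)
# MA(1) signal y(t) = e(t) + 0.5 e(t-1); its true PSD is
# sigma^2 |1 + 0.5 e^{-iw}|^2 = 1.25 + cos(w).
y = e + 0.5 * np.concatenate(([0.0], e[:-1]))
omegas = np.linspace(0, np.pi, 64)
phi = ma_psd_estimate(y, 1, omegas)
```

As the text warns, nothing in this construction prevents φ̂(ω) from going negative at some frequencies.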
If we impose the zero-bias restriction on the nonparametric approach to spectral estimation (to make the comparison with the parametric approach fair), then the Blackman–Tukey estimator with a rectangular window of length 2m + 1 implicitly assumes that the covariance lags outside the window interval are equal to zero. This, however, is precisely the assumption behind the MA signal model; see (3.6.2). Alternatively, if we make use of the assumption (3.6.2) in a Blackman–Tukey estimator, then we definitely end up with (3.6.4), as in such a case this is the spectral estimator in the Blackman–Tukey class with zero bias and "minimum" variance.

The analogy between the Blackman–Tukey and MA spectrum estimation methods makes it simpler to understand a problem associated with the MA spectral estimator (3.6.4). Owing to the (implicit) use of a rectangular window in (3.6.4), the so-obtained spectral estimate is not necessarily positive at all frequencies (see (2.5.5) and the discussion following that equation). Indeed, it is often noted in applications that (3.6.4) produces negative PSD estimates. In order to cure this deficiency of (3.6.4), we may use another lag window, one which is guaranteed to be positive semidefinite, in lieu of the rectangular one. This way of correcting φ̂(ω) in (3.6.4) is, of course, reminiscent of the Blackman–Tukey approach. It should be noted, however, that the so-corrected φ̂(ω) is no longer an unbiased estimator of the PSD of an MA(m) signal (see, e.g., [Moses and Beex 1986] for details on this aspect).

3.7 ARMA SIGNALS

Spectra with both sharp peaks and deep nulls cannot be modeled by either AR or MA equations of reasonably small orders. There are, of course, other instances of rational spectra that cannot be exactly described as AR or MA spectra. It is in these cases that the more general ARMA model, also called the pole–zero model, is valuable.
However, the great initial promise of ARMA spectral estimation diminishes to some extent because there is as yet no well-established algorithm, from both theoretical and practical standpoints, for ARMA parameter estimation. The "theoretically optimal ARMA estimators" are based on iterative procedures whose global convergence is not guaranteed. The "practical ARMA estimators", on the other hand, are computationally simple and often quite reliable, but their statistical accuracy may be poor in some cases. In the following, we describe two ARMA spectral estimation algorithms which have been used in applications with a reasonable degree of success (see also [Byrnes, Georgiou, and Lindquist 2000; Byrnes, Georgiou, and Lindquist 2001] for some recent results on ARMA parameter estimation).

3.7.1 Modified Yule–Walker Method

The modified Yule–Walker method is a two-stage procedure for estimating the ARMA spectral density. In the first stage we estimate the AR coefficients using equation (3.3.4). In the second stage, we use the AR coefficient and ACS estimates in equation (3.2.1) to estimate the γ_k coefficients. We describe the two steps below.

Writing equation (3.3.4) for k = m + 1, m + 2, …, m + M in matrix form gives

  [ r(m)      r(m−1)  …  r(m−n+1) ] [ a_1 ]     [ r(m+1) ]
  [ r(m+1)    r(m)    …  r(m−n+2) ] [  ⋮  ] = − [ r(m+2) ]
  [   ⋮                     ⋮     ] [ a_n ]     [   ⋮    ]
  [ r(m+M−1)  …       …  r(m−n+M) ]             [ r(m+M) ]
                                                    (3.7.1)

If we set M = n in (3.7.1) we obtain a system of n equations in n unknowns. This constitutes a generalization of the Yule–Walker system of equations that holds in the AR case. Hence, these equations are said to form the modified Yule–Walker (MYW) system of equations [Gersh 1970; Kinkel, Perl, Scharf, and Stubberud 1979; Beex and Scharf 1981; Cadzow 1982]. Replacing the theoretical covariances {r(k)} by their sample estimates {r̂(k)} in these equations leads to:

  [ r̂(m)      …  r̂(m−n+1) ] [ â_1 ]     [ r̂(m+1) ]
  [   ⋮              ⋮     ] [  ⋮  ] = − [   ⋮    ]
  [ r̂(m+n−1)  …  r̂(m)     ] [ â_n ]     [ r̂(m+n) ]
                                             (3.7.2)

The above linear system can be solved for {â_i}, which are called the modified Yule–Walker estimates of {a_i}. The square matrix in (3.7.2) can be shown to be nonsingular under mild conditions. Note that there exist fast algorithms of the Levinson type for solving non-Hermitian Toeplitz systems of equations of the form of (3.7.2); they require about twice the computational burden of the LDA (see [Marple 1987; Kay 1988; Söderström and Stoica 1989]).

The MYW AR estimate has reasonable accuracy if the zeroes of B(z) in the ARMA model are well inside the unit circle. However, (3.7.2) may give very inaccurate estimates in those cases where the poles and zeroes of the ARMA model description are closely spaced together at positions near the unit circle. Such ARMA models, with nearly coinciding poles and zeroes of modulus close to one, correspond to narrowband signals. The covariance sequence of narrowband signals decays very slowly. Indeed, as we know, the more concentrated a signal is in frequency, usually the more expanded it is in time, and vice versa. This means that there is "information" in the higher-lag covariances of the signal that can be exploited to improve the accuracy of the AR coefficient estimates.

We can exploit the additional information by choosing M > n in equation (3.7.1) and solving the so-obtained overdetermined system of equations. If we replace the true covariances in (3.7.1) with M > n by finite-sample estimates, there will in general be no exact solution. A most natural idea to overcome this problem is to solve the resultant equations

  R̂ â ≃ −r̂    (3.7.3)

in a least squares (LS) or total least squares (TLS) sense (see Appendix A).
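For concreteness, the MYW system (3.7.1)–(3.7.3) can be set up and solved in a few lines. The sketch below is illustrative (the names are ours; real data would of course use sample covariances in place of the exact lags): it accepts ACS lags, builds the M × n coefficient matrix, and solves in the least squares sense when M > n.

```python
import numpy as np

# Hedged sketch of the (overdetermined) modified Yule-Walker estimate
# (3.7.1)-(3.7.3) for a real-valued ARMA(n, m) signal.

def myw_ar_estimate(r, n, m, M=None):
    """r[k] holds the ACS lag k, k = 0..m+M; returns AR estimates a_1..a_n.

    For M = n this solves the square MYW system (3.7.2); for M > n the
    overdetermined system (3.7.3) is solved in the least squares sense.
    """
    if M is None:
        M = n
    acs = lambda k: r[abs(k)]                    # r(-k) = r(k) for real data
    # Row i, column j of the matrix in (3.7.1) is r(m + i - j)
    R = np.array([[acs(m + i - j) for j in range(n)] for i in range(M)])
    rhs = -np.array([acs(m + 1 + i) for i in range(M)], dtype=float)
    a, *_ = np.linalg.lstsq(R, rhs, rcond=None)
    return a

# ACS lags consistent with an ARMA(1,1) whose AR polynomial is
# A(z) = 1 - 0.7 z^{-1} (i.e., a_1 = -0.7): r(k) = 0.7 r(k-1) for k >= 2.
a_hat = myw_ar_estimate([3.0, 1.0, 0.7, 0.49], n=1, m=1, M=2)
```

With exact covariances the estimate recovers a_1 = −0.7 regardless of whether M = n or M > n; the distinction matters only for noisy sample covariances.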
Here, R̂ and r̂ represent the ACS matrix and vector in (3.7.1), with sample ACS estimates replacing the true ACS there. For instance, the (weighted) least squares solution to (3.7.3) is mathematically given by

  â = −(R̂* W R̂)^{−1}(R̂* W r̂)    (3.7.4)

where W is an M × M positive definite weighting matrix. (From a numerical viewpoint, equation (3.7.4) is not a particularly good way to solve (3.7.3); a more numerically sound approach is to use the QR decomposition — see Section A.8.2 for details.) The AR estimate derived from (3.7.3) with M > n is called the overdetermined modified YW estimate [Beex and Scharf 1981; Cadzow 1982]. Some notes on the choice between (3.7.2) and (3.7.3), and on the selection of M, are in order.

• Choosing M > n does not always improve the accuracy of the previous AR coefficient estimates. In fact, if the poles and zeroes are not close to the unit circle, choosing M > n can make the accuracy worse. When the ACS decays slowly to zero, however, choosing M > n generally improves the accuracy of â [Cadzow 1982; Stoica, Friedlander, and Söderström 1987b]. A qualitative explanation for this phenomenon can be seen by thinking of a finite-sample ACS estimate as the sum of its "signal" component r(k) and a "noise" component due to finite-sample estimation: r̂(k) = r(k) + n(k). If the ACS decays slowly to zero, the signal component is "large" compared with the noise component even for relatively large values of k, and including r̂(k) in the estimation of â improves accuracy. If the noise component of r̂(k) dominates, including r̂(k) in the estimation of â may decrease the accuracy of â.

• The statistical and numerical accuracies of the solution {â_i} to (3.7.3) are quite interrelated.
In more exact but still loose terms, it can be shown that the statistical accuracy of {â_i} is poor (good) if the condition number of the matrix R̂ in (3.7.3) is large (small) (see [Stoica, Friedlander, and Söderström 1987b; Söderström and Stoica 1989] and also Appendix A). This observation suggests that M should be selected so as to make the matrix in (3.7.3) reasonably well-conditioned. In order to make a connection between this rule of thumb for selecting M and the previous explanation for the poor accuracy of (3.7.2) in the case of narrowband signals, note that for slowly decaying covariance sequences the columns of the matrix in (3.7.2) are nearly linearly dependent. Hence, the condition number of the covariance matrix may be quite high in such a case, and we may need to increase M in order to lower the condition number to a reasonable value.

• The weighting matrix W in (3.7.4) can also be chosen to improve the accuracy of the AR coefficient estimates. A simple first choice is W = I, resulting in the regular (unweighted) least squares estimate. Some accuracy improvement can be obtained by choosing W to be diagonal with decreasing positive diagonal elements (to reflect the decreased confidence in higher ACS lag estimates). In addition, optimal weighting matrices have been derived (see [Stoica, Friedlander, and Söderström 1987a]); the optimal weight minimizes the covariance of â (for large N) over all choices of W. Unfortunately, the optimal weight depends on the (unknown) ARMA parameters. Thus, to use optimally weighted methods, a two-step "bootstrap" approach is used, in which a fixed W is first chosen and initial parameter estimates are obtained; these initial estimates are used to form an optimal W, and a second estimation gives the "optimal accuracy" AR coefficients.
As a general rule, the performance gain from optimal weighting is relatively small compared with the computational overhead required to compute the optimal weighting matrix. For many problems, most of the achievable accuracy improvement can be realized simply by choosing M > n and W = I. We refer the reader to [Stoica, Friedlander, and Söderström 1987a; Cadzow 1982] for a discussion of the effect of W on the accuracy of â and of optimal weighting matrices.

Once the AR estimates are obtained, we turn to the problem of estimating the MA part of the ARMA spectrum. Let

  γ_k = E{[B(z)e(t)][B(z)e(t−k)]*}    (3.7.5)

denote the covariances of the MA part. Since the PSD of this part of the ARMA signal model is given by (see (3.6.1) and (3.6.3))

  σ²|B(ω)|² = Σ_{k=−m}^{m} γ_k e^{−iωk}    (3.7.6)

it suffices to estimate {γ_k} in order to characterize the spectrum of the MA part. From (3.2.7) and (3.7.5), we obtain

  γ_k = E{[A(z)y(t)][A(z)y(t−k)]*}
      = Σ_{j=0}^{n} Σ_{p=0}^{n} a_j a_p* E{y(t−j) y*(t−k−p)}
      = Σ_{j=0}^{n} Σ_{p=0}^{n} a_j a_p* r(k+p−j)    (a_0 ≜ 1)    (3.7.7)

for k = 0, …, m. Inserting the previously calculated estimates of {a_k} and {r_k} in (3.7.7) leads to the following estimator of {γ_k}:

  γ̂_k = Σ_{j=0}^{n} Σ_{p=0}^{n} â_j â_p* r̂(k+p−j),  k = 0, …, m  (â_0 ≜ 1)
  γ̂_k = γ̂*_{−k},  k = −1, …, −m    (3.7.8)

Finally, the ARMA spectrum is estimated as

  φ̂(ω) = [ Σ_{k=−m}^{m} γ̂_k e^{−iωk} ] / |Â(ω)|²    (3.7.9)

The MA estimate used by the above ARMA spectral estimator is of the type (3.6.4) encountered in the MA context. Hence, the criticism of (3.6.4) in the previous section is still valid. In particular, the numerator in (3.7.9) is not guaranteed to be positive for all ω values, which may lead to negative ARMA spectral estimates (see, e.g., [Kinkel, Perl, Scharf, and Stubberud 1979; Moses and Beex 1986]).
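To make the second stage concrete, here is a sketch of the estimator (3.7.8)–(3.7.9) for the real-valued case (so the conjugates drop); the names are illustrative. It forms γ̂_k from the AR estimates and the ACS, then divides the MA-part spectrum by |Â(ω)|².

```python
import numpy as np

# Hedged sketch of the modified YW ARMA spectral estimator (3.7.8)-(3.7.9),
# real-valued case. Names are ours, not from the text.

def arma_psd(a, r, m, omegas):
    """a = AR estimates a_1..a_n; r[k] = ACS lag k (k = 0..m+n); m = MA order."""
    n = len(a)
    acoef = np.concatenate(([1.0], a))               # a_0 = 1, cf. (3.7.7)
    acs = lambda k: r[abs(k)]                        # r(-k) = r(k)
    gamma = np.array([sum(acoef[j] * acoef[p] * acs(k + p - j)
                          for j in range(n + 1) for p in range(n + 1))
                      for k in range(m + 1)])        # (3.7.8), k = 0..m
    phi = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        num = gamma[0] + 2.0 * np.sum(gamma[1:] * np.cos(w * np.arange(1, m + 1)))
        A = np.sum(acoef * np.exp(-1j * w * np.arange(n + 1)))
        phi[i] = num / abs(A) ** 2                   # (3.7.9)
    return phi

# Pure AR(1) sanity check: a_1 = -0.8 and sigma^2 = 1 give
# r(k) = 0.8^k / 0.36; then gamma_0 reduces to sigma^2 = 1 and
# phi(0) = 1 / |1 - 0.8|^2 = 25.
r = [1 / 0.36, 0.8 / 0.36, 0.64 / 0.36]
phi = arma_psd(np.array([-0.8]), r, m=0, omegas=np.array([0.0]))
```

As in the text, the numerator here can go negative for some ω when sample covariances are used; the sketch does nothing to prevent that.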
Since (3.7.9) relies on the modified YW method of AR parameter estimation, we call (3.7.9) the modified YW ARMA spectral estimator. Refined versions of this ARMA spectral estimator, which improve the estimation accuracy if N is sufficiently large, were proposed in [Stoica and Nehorai 1986; Stoica, Friedlander, and Söderström 1987a; Moses, Šimonytė, Stoica, and Söderström 1994]. A related ARMA spectral estimation method is outlined in Exercise 3.14.

3.7.2 Two-Stage Least Squares Method

If the noise sequence {e(t)} were known, then the problem of estimating the parameters in the ARMA model (3.2.7) would have been a simple input–output system parameter estimation problem, which could be solved by a diversity of means, of which the simplest is the least squares (LS) method. In the LS method, we express equation (3.2.7) as

  y(t) + φ^T(t) θ = e(t)    (3.7.10)

where

  φ^T(t) = [y(t−1), …, y(t−n) | −e(t−1), …, −e(t−m)]
  θ = [a_1, …, a_n | b_1, …, b_m]^T

Writing (3.7.10) in matrix form for t = L + 1, …, N (for some L > max(m, n)) gives

  z + Zθ = e    (3.7.11)

where

  Z = [ y(L)    …  y(L−n+1)  −e(L)    …  −e(L−m+1) ]
      [ y(L+1)  …  y(L−n+2)  −e(L+1)  …  −e(L−m+2) ]
      [   ⋮           ⋮         ⋮            ⋮     ]
      [ y(N−1)  …  y(N−n)    −e(N−1)  …  −e(N−m)   ]    (3.7.12)

  z = [y(L+1), y(L+2), …, y(N)]^T    (3.7.13)
  e = [e(L+1), e(L+2), …, e(N)]^T    (3.7.14)

If we knew Z, then we could solve for θ in (3.7.11) by minimizing ∥e∥². This leads to a least squares estimate similar to the AR LS estimate introduced in Section 3.4.2 (see also Result R32 in Appendix A):

  θ̂ = −(Z*Z)^{−1}(Z*z)    (3.7.15)

Of course, the {e(t)} in Z are not known. However, they may be estimated as described next. Since the ARMA model (3.2.7) is minimum phase, by assumption, it can alternatively be written as an infinite-order AR equation:

  (1 + α_1 z^{−1} + α_2 z^{−2} + …) y(t) = e(t)    (3.7.16)

where the coefficients {α_k} of 1 + α_1 z^{−1} + α_2 z^{−2} + … ≜ A(z)/B(z) converge to zero as k increases. An idea for estimating {e(t)} is to first determine the AR parameters {α_k} in (3.7.16) and next obtain {e(t)} by filtering {y(t)} as in (3.7.16). Of course, we cannot estimate an infinite number of (independent) parameters from a finite number of samples. In practice, the AR equation must be approximated by one of finite order K (say). The parameters in the truncated AR model of y(t) can be estimated by using either the YW or the LS procedure in Section 3.4.

The above discussion leads to the two-stage LS algorithm summarized in the box below. The two-stage LS parameter estimator is also discussed, for example, in [Mayne and Firoozan 1982; Söderström and Stoica 1989]. The spectral estimate is guaranteed to be positive for all frequencies by construction. Owing to the practical requirement to truncate the AR model (3.7.16), the two-stage LS estimate is biased. The bias can be made small by choosing K sufficiently large; however, K should not be too large with respect to N, or the accuracy of θ̂ in Step 2 will decrease. The difficult case for this method is apparently that of ARMA signals with zeroes close to the unit circle. In such a case, it may be necessary to select a very large value of K in order to keep the approximation (bias) errors in Step 1 at a reasonable level. The computational burden of Step 1 may then become prohibitively large. It should be noted, however, that the case of ARMA signals with zeroes near the unit circle is a difficult one for all known ARMA estimation methods [Kay 1988; Marple 1987; Söderström and Stoica 1989].

The Two-Stage Least Squares ARMA Method

Step 1. Estimate the parameters {α_k} in an AR(K) model of y(t) by the YW or covariance LS method. Let {α̂_k}, k = 1, …, K, denote the estimated parameters.
Obtain an estimate of the noise sequence {e(t)} by

  ê(t) = y(t) + Σ_{k=1}^{K} α̂_k y(t−k),  for t = K + 1, …, N.

Step 2. Replace e(t) in (3.7.12) by the ê(t) determined in Step 1. Obtain θ̂ from (3.7.15) with L = K + m. Estimate

  σ̂² = (1/(N − L)) ẽ* ẽ,  where ẽ = Z θ̂ + z

is the LS error from (3.7.11). Insert {θ̂, σ̂²} into the PSD expression (3.2.2) to estimate the ARMA spectrum.

Finally, we remark that the two-stage LS algorithm may be modified to estimate the parameters in MA models, simply by skipping the estimation of the AR parameters in Step 2. The so-obtained method was first suggested in [Durbin 1959] and is often called Durbin's method.

3.8 MULTIVARIATE ARMA SIGNALS

The multivariate analog of the ARMA signal in equation (3.2.7) is

  A(z) y(t) = B(z) e(t)    (3.8.1)

where y(t) and e(t) are n_y × 1 vectors, and A(z) and B(z) are n_y × n_y matrix polynomials in the unit delay operator. The task of estimating the matrix coefficients, {A_i, B_j} say, of the AR and MA polynomials in (3.8.1) is much more complicated than in the scalar case, for at least one reason: the representation of y(t) in (3.8.1), with all elements of {A_i, B_j} assumed unknown, may well be nonunique even though the orders of A(z) and B(z) may have been chosen correctly. More precisely, assume that we are given the spectral density matrix of an ARMA signal y(t) along with the (minimal) orders of the AR and MA polynomials in its ARMA equation. If all elements of {A_i, B_j} are considered unknown, then, unlike in the scalar case, the previous information may not be sufficient to determine the matrix coefficients {A_i, B_j} uniquely (see, e.g., [Hannan and Deistler 1988] and also Exercise 3.16). The lack of uniqueness of the representation may lead to a numerically ill-conditioned parameter estimation problem.
For instance, this would be the case with the multivariate analog of the modified Yule–Walker method discussed in Section 3.7.1. Apparently the only possible cure for the aforementioned problem consists of using a canonical parameterization of the AR and MA coefficients. Basically this amounts to setting some of the elements of {A_i, B_j} to known values, such as 0 or 1, hence reducing the number of unknowns. The problem, however, is that in order to know which elements should be set to 0 or 1 in a specific case, we need to know n_y indices (called "structure indices") which are usually difficult to determine in practice [Kailath 1980; Hannan and Deistler 1988]. The difficulty of obtaining those indices has hampered the use of canonical parameterizations in applications. For this reason we do not go into any detail of the canonical forms for ARMA signals. The nonuniqueness of the fully parameterized ARMA equation will, however, receive further attention in the next subsection.

Concerning the other approach to ARMA parameter estimation discussed in Section 3.7.2, namely the two-stage least squares method, it is worth noting that it can be extended to the multivariate case in a straightforward manner. In particular, there is no need to use a canonical parameterization in either step of the extended method (see, e.g., [Söderström and Stoica 1989]). Working out the details of the extension is left as an interesting exercise to the reader. We stress that the two-stage LS approach is perhaps the only real competitor to the subspace ARMA parameter estimation method described in the next subsections.
3.8.1 ARMA State-Space Equations

The difference equation representation in (3.8.1) can be transformed into the following state-space representation, and vice versa (see, e.g., [Aoki 1987; Kailath 1980]):

  x(t+1) = A x(t) + B e(t)    (n × 1)
  y(t) = C x(t) + e(t)    (n_y × 1)    (3.8.2)

Here, x(t) is the state vector of dimension n; A, B, and C are matrices of appropriate dimensions (with A having all eigenvalues inside the unit circle); and e(t) is white noise with zero mean and covariance matrix denoted by Q:

  E{e(t)} = 0    (3.8.3)
  E{e(t)e*(s)} = Q δ_{t,s}    (3.8.4)

where Q is positive definite by assumption. The transfer filter corresponding to (3.8.2), also called the ARMA shaping filter, is readily seen to be

  H(z) = z^{−1} C (I − A z^{−1})^{−1} B + I    (3.8.5)

By paralleling the calculation leading to (1.4.9), it is then possible to show that the ARMA power spectral density (PSD) matrix is given by

  φ(ω) = H(ω) Q H*(ω)    (3.8.6)

(The derivation of (3.8.6) is left as an exercise to the reader.) In the next subsections, we will introduce a methodology for estimating the matrices A, B, C, and Q of the state-space equation (3.8.2), and hence the ARMA power spectral density (via (3.8.5) and (3.8.6)). In this subsection, we derive a number of results that prepare the discussion in the next subsections.
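As a quick numerical illustration of (3.8.5)–(3.8.6), the PSD matrix can be evaluated directly from a given realization (A, B, C, Q). The sketch below uses names and a scalar test triple of our own choosing, and checks against the closed-form ARMA(1,1) spectrum.

```python
import numpy as np

# Hedged sketch of the ARMA PSD matrix (3.8.6), phi(w) = H(w) Q H*(w),
# with the shaping filter H(z) = z^{-1} C (I - A z^{-1})^{-1} B + I of
# (3.8.5), evaluated at z = e^{iw}. Illustrative names throughout.

def shaping_filter(A, B, C, omega):
    n = A.shape[0]
    zinv = np.exp(-1j * omega)                       # z^{-1} on the unit circle
    return zinv * C @ np.linalg.inv(np.eye(n) - A * zinv) @ B + np.eye(C.shape[0])

def arma_psd_matrix(A, B, C, Q, omega):
    H = shaping_filter(A, B, C, omega)
    return H @ Q @ H.conj().T

# Scalar sanity check: A = 0.5, B = 0.3, C = 1, Q = 1 gives the filter
# H(z) = (1 - 0.2 z^{-1}) / (1 - 0.5 z^{-1}), so phi(0) = (0.8/0.5)^2 = 2.56.
A = np.array([[0.5]]); B = np.array([[0.3]]); C = np.array([[1.0]]); Q = np.array([[1.0]])
phi0 = arma_psd_matrix(A, B, C, Q, 0.0)[0, 0].real
```

For a vector signal the same two functions apply unchanged, with A, B, C, and Q of the appropriate dimensions.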
Let

  R_k = E{y(t) y*(t−k)}    (3.8.7)
  P = E{x(t) x*(t)}    (3.8.8)

Observe that, for k ≥ 1,

  R_k = E{[C x(t+k) + e(t+k)][x*(t) C* + e*(t)]}
      = C E{x(t+k) x*(t)} C* + C E{x(t+k) e*(t)}    (3.8.9)

From equation (3.8.2), we obtain (by induction):

  x(t+k) = A^k x(t) + Σ_{ℓ=0}^{k−1} A^{k−ℓ−1} B e(t+ℓ)    (3.8.10)

which implies that

  E{x(t+k) x*(t)} = A^k P    (3.8.11)

and

  E{x(t+k) e*(t)} = A^{k−1} B Q    (3.8.12)

Inserting (3.8.11) and (3.8.12) into (3.8.9) yields

  R_k = C A^{k−1} D  (for k ≥ 1)    (3.8.13)

where

  D = A P C* + B Q    (3.8.14)

From the first equation in (3.8.2), we also readily obtain

  P = A P A* + B Q B*    (3.8.15)

and from the second equation,

  R_0 = C P C* + Q    (3.8.16)

It follows from (3.8.14) and (3.8.16) that

  B = (D − A P C*) Q^{−1}    (3.8.17)

and, respectively,

  Q = R_0 − C P C*    (3.8.18)

Finally, inserting (3.8.17) and (3.8.18) into (3.8.15) gives the following Riccati equation for P:

  P = A P A* + (D − A P C*)(R_0 − C P C*)^{−1}(D − A P C*)*    (3.8.19)

The above results lead to a number of interesting observations.

The (Non)Uniqueness Issue: It is well known that a linear nonsingular transformation of the state vector in (3.8.2) leaves the transfer function matrix associated with (3.8.2) unchanged. To be more precise, let the new state vector be given by

  x̃(t) = T x(t),  (|T| ≠ 0)    (3.8.20)

It can be verified that the state-space equations in x̃(t), corresponding to (3.8.2), are

  x̃(t+1) = Ã x̃(t) + B̃ e(t)
  y(t) = C̃ x̃(t) + e(t)    (3.8.21)

where

  Ã = T A T^{−1};  B̃ = T B;  C̃ = C T^{−1}    (3.8.22)

As {y(t)} and {e(t)} in (3.8.21) are the same as in (3.8.2), the transfer function H(z) from e(t) to y(t) must be the same for both (3.8.2) and (3.8.21). (Verifying this by direct calculation is left to the reader.) The consequence is that there exists an infinite number of triples (A, B, C) (with all matrix elements assumed unknown) that lead to the same ARMA transfer function, and hence to the same ARMA covariance sequence and PSD matrix.
For the transfer function matrix, the nonuniqueness induced by the similarity transformation (3.8.22) is the only type possible (as we know from deterministic system theory; e.g., [Kailath 1980]). For the covariance sequence and the PSD, however, other types of nonuniqueness are also possible (see, e.g., [Faurre 1976] and [Söderström and Stoica 1989, Problem 6.3]).

Most ARMA estimation methods require the use of a uniquely parameterized representation. The previous discussion has clearly shown that letting all elements of A, B, C, and Q be unknown does not lead to such a unique representation. The latter representation is obtained only if a canonical form is used. As already explained, the ARMA parameter estimation methods relying on canonical parameterizations are impractical. The subspace-based estimation approach discussed in the next subsection circumvents the canonical parameterization requirement in an interesting way: the nonuniqueness of the ARMA representation with A, B, C, and Q fully parameterized is reduced to the nonuniqueness of a certain decomposition of covariance matrices; then, by choosing a specific decomposition, a triple (A, B, C) is isolated and determined in a numerically well-posed manner.

The Minimality Issue: Let, for some integer m,

  O = col[C, CA, …, C A^{m−1}]    (3.8.23)

(the block matrix whose block rows are C, CA, …, C A^{m−1}) and

  𝒞* = [D, AD, …, A^{m−1} D]    (3.8.24)

The similarity between the above matrices and the observability and controllability matrices, respectively, from the theory of deterministic state-space equations is evident.
In fact, it follows from the aforementioned theory and from (3.8.13) that the triple (A, D, C) is a minimal representation (i.e., one with the minimum possible dimension n) of the covariance sequence {R_k} if and only if (see, e.g., [Kailath 1980; Hannan and Deistler 1988])

  rank(O) = rank(𝒞) = n  (for m ≥ n)    (3.8.25)

As shown previously, the other matrices P, Q, and B of the state-space equation (3.8.2) can be obtained from A, C, and D (see equations (3.8.19), (3.8.18), and (3.8.17), respectively). It follows that the state-space equation (3.8.2) is a minimal representation of the ARMA covariance sequence {R_k} if and only if the condition (3.8.25) is satisfied. In what follows, we assume that the "minimality condition" (3.8.25) holds true.

3.8.2 Subspace Parameter Estimation — Theoretical Aspects

We begin by showing how A, C, and D can be obtained from a sequence of theoretical ARMA covariances. Let

  R = [ R_1  R_2      …  R_m      ]
      [ R_2  R_3      …  R_{m+1}  ]
      [  ⋮    ⋮            ⋮      ]
      [ R_m  R_{m+1}  …  R_{2m−1} ]
    = E{ col[y(t), …, y(t+m−1)] [y*(t−1), …, y*(t−m)] }    (3.8.26)

denote the block-Hankel matrix of covariances. (The name given to (3.8.26) is due to its special structure: the submatrices on its block antidiagonals are identical. Such a matrix is a block extension of the standard Hankel matrix; see Definition D14 in Appendix A.) According to (3.8.13), we can factor R as follows:

  R = col[C, CA, …, C A^{m−1}] [D, AD, …, A^{m−1} D] = O 𝒞*    (3.8.27)

It follows from (3.8.25) and (3.8.27) that (see Result R4 in Appendix A)

  rank(R) = n  (for m ≥ n)    (3.8.28)

Hence, n could in principle be obtained as the rank of R.
To determine A, C, and D, let us consider the singular value decomposition (SVD) of R (see Appendix A):

  R = U Σ V*    (3.8.29)

where Σ is a nonsingular n × n diagonal matrix, and

  U*U = V*V = I  (n × n)

By comparing (3.8.27) and (3.8.29), we obtain

  O = U Σ^{1/2} T  for some nonsingular transformation matrix T    (3.8.30)

because the columns of both O and U Σ^{1/2} are bases of the range space of R. Henceforth, Σ^{1/2} denotes a square root of Σ (that is, Σ^{1/2} Σ^{1/2} = Σ). By inserting (3.8.30) in the equation O 𝒞* = U Σ V*, we also obtain:

  𝒞 = V Σ^{1/2} (T^{−1})*    (3.8.31)

Next, observe that

  O T^{−1} = col[ (C T^{−1}), (C T^{−1})(T A T^{−1}), …, (C T^{−1})(T A T^{−1})^{m−1} ]    (3.8.32)

and

  T 𝒞* = [ (T D), …, (T A T^{−1})^{m−1}(T D) ]    (3.8.33)

This implies that, by identifying O and 𝒞 with the matrices made from all possible bases of the range spaces of R and R*, respectively, we obtain the set of similarity-equivalent triples (A, D, C). Hence, picking a certain basis yields a specific triple (A, D, C) in the aforementioned set. This is how the subspace approach to ARMA state-space parameter estimation circumvents the nonuniqueness problem associated with a fully parameterized model.

In view of the previous discussion we can, for instance, set T = I in (3.8.30) and (3.8.31) and obtain C as the first n_y rows of U Σ^{1/2} and D as the first n_y columns of Σ^{1/2} V*. Then A may be obtained as the solution to the linear system of equations

  (Ū Σ^{1/2}) A = U̲ Σ^{1/2}    (3.8.34)

where Ū and U̲ are the matrices made from the first and, respectively, the last (m − 1) block rows of U. Once A, C, and D have been determined, P is obtained by solving the Riccati equation (3.8.19), and then Q and B are derived from (3.8.18) and (3.8.17). Algorithms for solving the Riccati equation are presented, for instance, in [van Overschee and de Moor 1996] and the references therein.
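A noise-free numerical illustration of the steps (3.8.26)–(3.8.34) with T = I is given below, for a scalar (n_y = 1) example with a triple (A, C, D) we chose for the demonstration; the variable names are ours. The recovered triple differs from the true one by a similarity transformation, so the check is made on the invariant products C A^{k−1} D = R_k.

```python
import numpy as np

# Hedged sketch of the subspace extraction of (A, C, D) from the
# block-Hankel covariance matrix, following (3.8.26)-(3.8.34) with
# T = I, in the scalar (ny = 1) case with exact covariances.

m = 4
A_true, C_true, D_true = 0.6, 2.0, 0.5
Rk = lambda k: C_true * A_true ** (k - 1) * D_true     # (3.8.13)

# Hankel matrix (3.8.26): R[i, j] = R_{i+j+1}
R = np.array([[Rk(i + j + 1) for j in range(m)] for i in range(m)])

U, s, Vh = np.linalg.svd(R)
n = int(np.sum(s > 1e-10 * s[0]))                      # rank(R) = n, cf. (3.8.28)
U, S, Vh = U[:, :n], np.diag(np.sqrt(s[:n])), Vh[:n, :]

O = U @ S                                              # observability factor, T = I
C_hat = O[:1, :]                                       # first ny rows
D_hat = (S @ Vh)[:, :1]                                # first ny columns
# Shift equation (3.8.34): (first m-1 rows of O) A = (last m-1 rows of O)
A_hat, *_ = np.linalg.lstsq(O[:-1, :], O[1:, :], rcond=None)
```

With sample covariances in place of exact ones (Section 3.8.3), the same steps apply with the SVD truncated at the "practical rank" and the shift equation solved in the LS or TLS sense.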
A modification of the above procedure that does not change the solution obtained in the theoretical case, but which appears to have beneficial effects on the parameter estimates obtained from finite samples, is as follows. Let us denote the two vectors appearing in (3.8.26) by

  f(t) = [y^T(t), …, y^T(t+m−1)]^T    (3.8.35)
  p(t) = [y^T(t−1), …, y^T(t−m)]^T    (3.8.36)

Let

  R_fp = E{f(t) p*(t)}    (3.8.37)

and let R_ff and R_pp be similarly defined. Redefine the matrix in (3.8.26) as

  R = R_ff^{−1/2} R_fp R_pp^{−1/2}    (3.8.38)

where R_ff^{−1/2} and R_pp^{−1/2} are the Hermitian square roots of R_ff^{−1} and R_pp^{−1} (see Appendix A). A heuristic explanation of why the previous modification should lead to better parameter estimates in finite samples is as follows. The matrix R in (3.8.26) is equal to R_fp, whereas the R in (3.8.38) can be written as R_f̃p̃, where both f̃(t) = R_ff^{−1/2} f(t) and p̃(t) = R_pp^{−1/2} p(t) have unity covariance matrices. Owing to the latter property, the cross-covariance matrix R_f̃p̃ and its singular elements are usually estimated more accurately from finite samples than are R_fp and its singular elements. This fact should eventually lead to better parameter estimates.

By making use of the factorization (3.8.27) of R_fp along with the formula (3.8.38) for the matrix R, we can write

  R = R_ff^{−1/2} R_fp R_pp^{−1/2} = R_ff^{−1/2} O 𝒞* R_pp^{−1/2} = U Σ V*    (3.8.39)

where U Σ V* is now the SVD of R in (3.8.38). Identifying R_ff^{−1/2} O with U Σ^{1/2} and R_pp^{−1/2} 𝒞 with V Σ^{1/2}, we obtain

  O = R_ff^{1/2} U Σ^{1/2}    (3.8.40)
  𝒞 = R_pp^{1/2} V Σ^{1/2}    (3.8.41)

The matrices A, C, and D can be determined from these equations as previously described. Then we can derive P, Q, and B as also indicated before.

3.8.3 Subspace Parameter Estimation — Implementation Aspects

Let R̂_fp be the sample estimate of R_fp, for example,

  R̂_fp = (1/N) Σ_{t=m+1}^{N−m+1} f(t) p*(t)    (3.8.42)

and let R̂_ff, etc., be similarly defined.
Compute ˆ R as ˆ R = ˆ R−1/2 ff ˆ Rfp ˆ R−1/2 pp (3.8.43) and its SVD. Estimate n as the “practical rank” of ˆ R: ˆ n = p-rank( ˆ R) (3.8.44) (i.e., the number of singular values of ˆ R which are significantly larger than the remaining ones; statistical tests for deciding whether a singular value of a given sample covariance matrix is significantly different from zero are discussed in, e.g., [Fuchs 1987].) Let ˆ U, ˆ Σ and ˆ V denote the matrices made from the ˆ n principal singular elements of ˆ R, corresponding to the matrices U, Σ and V in (3.8.39). Take ˆ C = the first ny rows of ˆ R1/2 ff ˆ U ˆ Σ1/2 ˆ D = the first ny columns of ˆ Σ1/2 ˆ V ∗ˆ R1/2 pp (3.8.45) Next, let ¯ Γ and Γ ¯ = the matrices made from the first and, respectively, last (m −1) block rows of ˆ R1/2 ff ˆ U ˆ Σ1/2. (3.8.46) Estimate A as ˆ A = the LS or TLS solution to ¯ ΓA ≃Γ ¯ (3.8.47) Finally, estimate P as ˆ P = the positive definite solution, if any, of the Riccati equation (3.8.19) with A, C, D and R0 replaced by their estimates (3.8.48) “sm2” 2004/2/ page 116 i i i i i i i i 116 Chapter 3 Parametric Methods for Rational Spectra and Q and B as: ˆ Q = ˆ R0 −ˆ C ˆ P ˆ C∗ ˆ B = ( ˆ D −ˆ A ˆ P ˆ C∗) ˆ Q−1 (3.8.49) In some cases, the previous procedure cannot be completed because the Riccati equa-tion has no positive definite solution or even no solution at all. (In the case of a real–valued ARMA signal, for instance, that equation may have no real–valued solution.) In such cases, we can approximately determine P as follows. (Note that only the estimation of P has to be modified; all the other parameter estimates can be obtained as described above.) A straightforward calculation making use of (3.8.11) and (3.8.12) yields: E {x(t)y∗(t −k)} = AkPC∗+ Ak−1BQ = Ak−1D (for k ≥1) (3.8.50) Hence, C∗= E {x(t)p∗(t)} (3.8.51) Let ψ = C∗R−1 pp (3.8.52) and define ϵ(t) via the equation: x(t) = ψp(t) + ϵ(t) (3.8.53) It is not difficult to verify that ϵ(t) is uncorrelated with p(t). 
Indeed, E {ϵ(t)p∗(t)} = E {[x(t) −ψp(t)]p∗(t)} = C∗−ψRpp = 0 (3.8.54) This implies that the first term in (3.8.53) is the least squares approximation of x(t) based on the past signal values in p(t) (see, e.g., [S¨ oderstr¨ om and Stoica 1989] and Appendix A). It follows from this observation that ψp(t) approaches x(t) as m increases. Hence, ψRppψ∗= C∗R−1 pp C →P (as m →∞) (3.8.55) However, in view of (3.8.41), C∗R−1 pp C = Σ (3.8.56) The conclusion is that, provided m is chosen large enough, we can approximate P as ˜ P = ˆ Σ, for m ≫1 (3.8.57) This is the alternative estimate of P which can be used in lieu of (3.8.48) whenever the latter estimation procedure fails. The estimate ˜ P approaches the true value P as N tends to infinity provided m is also increased without bound at an appropriate “sm2” 2004/2/ page 11 i i i i i i i i Section 3.9 Complements 117 rate. However, if (3.8.57) is used with too small a value of m the estimate of P so obtained may be heavily biased. The reader interested in more aspects on the subspace approach to parameter estimation for rational models should consult [Aoki 1987; van Overschee and de Moor 1996; Rao and Arun 1992; Viberg 1995] and the references therein. 3.9 COMPLEMENTS 3.9.1 The Partial Autocorrelation Sequence The sequence {kj} computed in equation (3.5.7) of the LDA has an interesting statistical interpretation, as explained next. The covariance lag ρj “measures” the degree of correlation between the data samples y(t) and y(t −j) (in the chapter ρj is equal to either r(j) or ˆ r(j); here ρj = r(j)). The normalized covariance sequence {ρj/ρ0} is often called the autocorrelation function. Now, y(t) and y(t−j) are related to one another not only “directly” but also through the intermediate samples: [y(t −1) . . . 
y(t −j + 1)]T ≜ϕ(t) Let ϵf(t) and ϵb(t −j) denote the errors of the LS linear predictions of y(t) and y(t−j), respectively, based on ϕ(t) above; in particular, ϵf(t) and ϵb(t−j) must then be uncorrelated with ϕ(t): E {ϵf(t)ϕ∗(t)} = E {ϵb(t −j)ϕ∗(t)} = 0. (Note that ϵf(t) and ϵb(t −j) are termed forward and backward prediction errors respectively; see also Exercises 3.3 and 3.4.) We show that kj = − E {ϵf(t)ϵ∗ b(t −j)} [E {|ϵf(t)|2} E {|ϵb(t −j)|2}]1/2 (3.9.1) Hence, kj is the negative of the so–called partial correlation (PARCOR) coefficient of {y(t)}, which measures the “partial correlation” between y(t) and y(t −j) after the correlation due to the intermediate values y(t −1), . . . , y(t −j + 1) has been eliminated. Let ϵf(t) = y(t) + ϕT (t)θ (3.9.2) where, similarly to (3.4.9), θ = −{E  ϕc(t)ϕT (t) }−1 E {ϕc(t) y(t)} ≜−R−1r It is readily verified (by making use of the previous definition for θ) that: E {ϕc(t)ϵf(t)} = 0 which shows that ϵf(t), as defined above, is indeed the error of the linear forward LS prediction of y(t), based on ϕ(t). Similarly, define the following linear backward LS prediction error: ϵb(t −j) = y(t −j) + ϕT (t)α “sm2” 2004/2/ page 118 i i i i i i i i 118 Chapter 3 Parametric Methods for Rational Spectra where α = −{E  ϕc(t)ϕT (t) }−1E {ϕc(t)y(t −j)} = −R−1˜ r = ˜ θ The last equality above follows from (3.5.3). We thus have E {ϕc(t)ϵb(t −j)} = 0 as required. Next, some simple calculations give: E  |ϵf(t)|2 = E  y∗(t)[y(t) + ϕT (t)θ] = ρ0 + [ρ∗ 1 . . . ρ∗ j−1]θ = σ2 j−1 E  |ϵb(t −j)|2 = E  y∗(t −j)[y(t −j) + ϕT (t)α] = ρ0 + [ρj−1 . . . ρ1]˜ θ = σ2 j−1 and E {ϵf(t)ϵ∗ b(t −j)} = E  [y(t) + ϕT (t)θ]y∗(t −j) = ρj + [ρj−1 . . . ρ1]θ = αj−1 (cf. (3.4.1) and (3.5.6)). By using the previous equations in (3.9.1), we obtain kj = −αj−1/σ2 j−1 which coincides with (3.5.7). 3.9.2 Some Properties of Covariance Extensions Assume we are given a finite sequence {r(k)}m−1 k=−(m−1) with r(−k) = r∗(k), and such that Rm in equation (3.4.6) is positive definite. 
We show that the finite sequence can be extended to an infinite sequence that is a valid ACS. Moreover, there are an infinite number of possible covariance extensions, and we derive an algorithm to construct them. One such extension, in which the reflection coefficients $k_m, k_{m+1}, \ldots$ are all zero (and thus the infinite ACS corresponds to an AR process of order less than or equal to $(m-1)$), gives the so-called Maximum Entropy extension [Burg 1975].

We begin by constructing the set of $r(m)$ values for which $R_{m+1} > 0$. Using the result of Exercise 3.7, we have

$$|R_{m+1}| = \sigma_m^2 |R_m| \tag{3.9.3}$$

From the Levinson–Durbin algorithm,

$$\sigma_m^2 = \sigma_{m-1}^2 \left(1 - |k_m|^2\right) = \sigma_{m-1}^2 \left(1 - \frac{|r(m) + \tilde r_{m-1}^* \theta_{m-1}|^2}{\sigma_{m-1}^4}\right) \tag{3.9.4}$$

Combining (3.9.3) and (3.9.4) gives

$$|R_{m+1}| = |R_m| \, \sigma_{m-1}^2 \left(1 - \frac{|r(m) + \tilde r_{m-1}^* \theta_{m-1}|^2}{\sigma_{m-1}^4}\right) \tag{3.9.5}$$

which shows that $|R_{m+1}|$ is quadratic in $r(m)$. Since $\sigma_{m-1}^2 > 0$ and $R_m$ is positive definite, it follows that $|R_{m+1}| > 0$ if and only if

$$|r(m) + \tilde r_{m-1}^* \theta_{m-1}|^2 < \sigma_{m-1}^4 \tag{3.9.6}$$

The above region is an open disk in the complex plane whose center is $-\tilde r_{m-1}^* \theta_{m-1}$ and radius is $\sigma_{m-1}^2$.

Equation (3.9.6) leads to a construction of all possible covariance extensions. Note that if $R_p > 0$ and we choose $r(p)$ inside the disk $|r(p) + \tilde r_{p-1}^* \theta_{p-1}|^2 < \sigma_{p-1}^4$, then $|R_{p+1}| > 0$. This implies $\sigma_p^2 > 0$, and the admissible disk for $r(p+1)$ has nonzero radius, so there are an infinite number of possible choices for $r(p+1)$ such that $|R_{p+2}| > 0$. Arguing inductively in this way for $p = m, m+1, \ldots$ shows that there are an infinite number of covariance extensions and provides a construction for them.

If we choose $r(p) = -\tilde r_{p-1}^* \theta_{p-1}$ for $p = m, m+1, \ldots$ (i.e., $r(p)$ is chosen to be at the center of each disk in (3.9.6)), then from (3.9.4) we see that the reflection coefficient $k_p = 0$.
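The center-of-disk choice can be verified numerically. A sketch for real-valued lags; the starting sequence is illustrative, and plain linear solves stand in for the Levinson-Durbin algorithm:

```python
import numpy as np

def me_extend(r, extra):
    """Append `extra` lags to r = [r(0), ..., r(m-1)] by placing each new
    r(p) at the center of the admissible disk (3.9.6), i.e.
    r(p) = -r~*_{p-1} theta_{p-1}, which makes every new reflection
    coefficient k_p = 0 (the Maximum Entropy extension)."""
    r = [float(x) for x in r]
    for _ in range(extra):
        p = len(r)                       # lags 0..p-1 known; choose r(p)
        Rp = np.array([[r[abs(i - j)] for j in range(p - 1)]
                       for i in range(p - 1)])
        theta = -np.linalg.solve(Rp, np.array(r[1:p]))
        r.append(float(-np.array(r[1:p])[::-1] @ theta))
    return np.array(r)

r0 = [1.0, 0.6, 0.2]                     # R_3 is positive definite for these lags
r_ext = me_extend(r0, 5)
Rbig = np.array([[r_ext[abs(i - j)] for j in range(len(r_ext))]
                 for i in range(len(r_ext))])
# every extended Toeplitz matrix remains positive definite
assert np.all(np.linalg.eigvalsh(Rbig) > 0)
```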
Thus, from the Levinson–Durbin algorithm (see equation (3.5.10)) we have θp =  θp−1 0  (3.9.7) and σ2 p = σ2 p−1 (3.9.8) Arguing inductively again, we find that kp = 0, θp =  θm−1 0  , and σ2 p = σ2 m−1 for p = m, m + 1, . . .. This extension, called the Maximum Entropy extension [Burg 1975], thus gives an ACS sequence that corresponds to an AR process of order less than or equal to (m −1). The name maximum entropy arises because the so– obtained spectrum has maximum entropy rate R π −π ln φ(ω)dω under the Gaussian assumption [Burg 1975]; the entropy rate is closely related to the numerator in the spectral flatness measure introduced in Exercise 3.6. For some recent results on the covariance extension problem and its variations, we refer to [Byrnes, Georgiou, and Lindquist 2001] and the references therein. 3.9.3 The Burg Method for AR Parameter Estimation The thesis [Burg 1975] developed a method for AR parameter estimation that is based on forward and backward prediction errors, and on direct estimation of the reflection coefficients in equation (3.9.1). In this complement, we develop the Burg estimator and discuss some of its properties. Assume we have data measurements {y(t)} for t = 1, 2, . . . , N. Similarly to Complement 3.9.1, we define the forward and backward prediction errors for a pth–order model as: ˆ ef,p(t) = y(t) + p X i=1 ˆ ap,iy(t −i), t = p + 1, . . . , N (3.9.9) ˆ eb,p(t) = y(t −p) + p X i=1 ˆ a∗ p,iy(t −p + i), t = p + 1, . . . , N (3.9.10) “sm2” 2004/2/ page 120 i i i i i i i i 120 Chapter 3 Parametric Methods for Rational Spectra (we have shifted the time index in the definition of eb(t) from that in equation (3.9.2) to reflect that ˆ eb,p(t) is computed using data up to time t; also, the fact that the coefficients in (3.9.10) are given by {ˆ a∗ p,i} follows from Complement 3.9.1). We use hats to denote estimated quantities, and we explicitly denote the order p in both the prediction error sequences and the AR coefficients. 
The AR parameters are related to the reflection coefficient ˆ kp by (see (3.5.10)) ˆ ap,i = ( ˆ ap−1,i + ˆ kpˆ a∗ p−1,p−i, i = 1, . . . , p −1 ˆ kp, i = p (3.9.11) Burg’s method considers the recursive–in–order estimation of ˆ kp given that the AR coefficients for order p −1 have been computed. In particular, Burg’s method finds ˆ kp to minimize the arithmetic mean of the forward and backward prediction error variance estimates: min ˆ kp 1 2 [ˆ ρf(p) + ˆ ρb(p)] (3.9.12) where ˆ ρf(p) = 1 N −p N X t=p+1 |ˆ ef,p(t)|2 ˆ ρb(p) = 1 N −p N X t=p+1 |ˆ eb,p(t)|2 and where {ˆ ap−1,i}p−1 i=1 are assumed to be known from the recursion at the previous order. The prediction errors satisfy the following recursive–in–order expressions ˆ ef,p(t) = ˆ ef,p−1(t) + ˆ kpˆ eb,p−1(t −1) (3.9.13) ˆ eb,p(t) = ˆ eb,p−1(t −1) + ˆ k∗ pˆ ef,p−1(t) (3.9.14) Equation (3.9.13) follows directly from (3.9.9)–(3.9.11) as ˆ ef,p(t) = y(t) + p−1 X i=1  ˆ ap−1,i + ˆ kpˆ a∗ p−1,p−i  y(t −i) + ˆ kpy(t −p) = " y(t) + p−1 X i=1 ˆ ap−1,iy(t −i) # + ˆ kp " y(t −p) + p−1 X i=1 ˆ a∗ p−1,iy(t −p + i) # = ˆ ef,p−1(t) + ˆ kpˆ eb,p−1(t −1) Similarly, ˆ eb,p(t) = y(t −p) + p−1 X i=1 [ˆ a∗ p−1,i + ˆ k∗ pˆ ap−1,p−i]y(t −p + i) + ˆ k∗ py(t) = ˆ eb,p−1(t −1) + ˆ k∗ pˆ ef,p−1(t) “sm2” 2004/2/ page 12 i i i i i i i i Section 3.9 Complements 121 which shows (3.9.14). We can use the above expressions to develop a recursive–in–order algorithm for estimating the AR coefficients. 
Note that the quantity to be minimized in (3.9.12) is quadratic in ˆ kp since 1 2 [ˆ ρf(p) + ˆ ρb(p)] = 1 2(N −p) N X t=p+1  ˆ ef,p−1(t) + ˆ kpˆ eb,p−1(t −1) 2 + ˆ eb,p−1(t −1) + ˆ k∗ pˆ ef,p−1(t) 2 = 1 2(N −p) N X t=p+1 nh |ˆ ef,p−1(t)|2 + |ˆ eb,p−1(t −1)|2i h 1 + |ˆ kp|2i +2ˆ ef,p−1(t)ˆ e∗ b,p−1(t −1)ˆ k∗ p + 2ˆ e∗ f,p−1(t)ˆ eb,p−1(t −1)ˆ kp o Using Result R34 in Appendix A, we find that the ˆ kp that minimizes the above quantity is given by ˆ kp = −2 PN t=p+1 ˆ ef,p−1(t)ˆ e∗ b,p−1(t −1) PN t=p+1 h |ˆ ef,p−1(t)|2 + |ˆ eb,p−1(t −1)|2i (3.9.15) A recursive–in–order algorithm for estimating the AR parameters, called the Burg algorithm, is as follows: The Burg Algorithm Step 0. Initialize ˆ ef,0(t) = ˆ eb,0(t) = y(t). Step 1. For p = 1, . . . , n, (a) Compute ˆ ef,p−1(t) and ˆ eb,p−1(t) for t = p + 1, . . . , N from (3.9.13) and (3.9.14). (b) Compute ˆ kp from (3.9.15). (c) Compute ˆ ap,i for i = 1, . . . , p from (3.9.11). Then ˆ θ = [ˆ ap,1, . . . , ˆ ap,p]T is the vector of AR coefficient estimates. Finally, we show that the resulting AR model is stable; this is accomplished by showing that |ˆ kp| ≤1 for p = 1, . . . , n (see Exercise 3.9). To do so, we express ˆ kp as ˆ kp = −2c∗d c∗c + d∗d (3.9.16) where c = [ˆ eb,p−1(p), . . . , ˆ eb,p−1(N −1)]T d = [ˆ ef,p−1(p + 1), . . . , ˆ ef,p−1(N)]T “sm2” 2004/2/ page 12 i i i i i i i i 122 Chapter 3 Parametric Methods for Rational Spectra Then 0 ≤∥c −eiαd∥2 = c∗c + d∗d −2 Re {eiαc∗d} for every α ∈[−π, π] = ⇒2 Re {eiαc∗d} ≤c∗c + d∗d for every α ∈[−π, π] = ⇒2|c∗d| ≤c∗c + d∗d = ⇒|ˆ kp| ≤1 The Burg algorithm is computationally simple, and is amenable to both order– recursive and time–recursive solutions. In addition, the Burg AR model estimate is guaranteed to be stable. On the other hand, the Burg method is suboptimal in that it estimates the n reflection coefficients by decoupling an n–dimensional minimization problem into the n one–dimensional minimizations in (3.9.12). 
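The algorithm listed above translates directly into numpy; the simulated AR(2) data, seed, and test tolerances below are illustrative:

```python
import numpy as np

def burg(y, n):
    """The Burg algorithm: Step 0 initializes the order-0 prediction errors;
    Step 1 computes k_p via (3.9.15), updates the errors via (3.9.13)-(3.9.14),
    and updates the AR coefficients via (3.9.11). Real-valued case."""
    f = np.asarray(y, dtype=float).copy()    # forward errors  e_{f,p-1}(t)
    b = f.copy()                             # backward errors e_{b,p-1}(t)
    a = np.zeros(0)
    k = np.zeros(n)
    for p in range(1, n + 1):
        fp, bp = f[1:], b[:-1]               # align e_f(t) with e_b(t-1)
        kp = -2.0 * (fp @ bp) / (fp @ fp + bp @ bp)       # (3.9.15)
        f, b = fp + kp * bp, bp + kp * fp                 # (3.9.13)-(3.9.14)
        a = np.concatenate([a + kp * a[::-1], [kp]])      # (3.9.11)
        k[p - 1] = kp
    return a, k

# Simulated AR(2) data: y(t) = 1.5 y(t-1) - 0.7 y(t-2) + e(t)
rng = np.random.default_rng(0)
e = rng.standard_normal(4000)
y = np.zeros(4000)
for t in range(2, 4000):
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + e[t]

a, k = burg(y[500:], 2)                      # discard transient samples
assert np.all(np.abs(k) <= 1)                # guaranteed, per the proof above
assert np.all(np.abs(np.roots(np.r_[1.0, a])) < 1)   # stable AR model
assert np.allclose(a, [-1.5, 0.7], atol=0.1)
```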
This is in contrast to the Least Squares AR method in Section 3.4.2, in which the AR coefficients are found by an n–dimensional minimization. For large N, the two algorithms give very similar performance; for short or medium data lengths, the Burg algorithm usually behaves somewhere between the LS method and the Yule– Walker method. 3.9.4 The Gohberg–Semencul Formula The Hermitian Toeplitz matrix Rn+1 in (3.4.6) is highly structured. In particular, it is completely defined by its first column (or row). As shown in Section 3.5, exploitation of the special algebraic structure of (3.4.6) makes it possible to solve this system of equations very efficiently. In this complement we show that the Toeplitz structure of Rn+1 may also be exploited to derive a closed–form expression for the inverse of this matrix. This expression is what is usually called the Gohberg– Semencul (GS) formula (or the Gohberg–Semencul–Heining formula, in recognition of the contribution also made by Heining to its discovery) [S¨ oderstr¨ om and Stoica 1989; Iohvidov 1982; B¨ ottcher and Silbermann 1983]. As will be seen, an interesting consequence of the GS formula is the fact that, even if R−1 n+1 is not Toeplitz in general, it is still completely determined by its first column. Observe from (3.4.6) that the first column of R−1 n+1 is given by [1 θ]T /σ2. In what follows, we drop the subscript n of θ for notational convenience. The derivation of the GS formula requires some preparations. 
First, note that the following nested structures of $R_{n+1}$,

$$R_{n+1} = \begin{bmatrix} \rho_0 & r_n^* \\ r_n & R_n \end{bmatrix} = \begin{bmatrix} R_n & \tilde r_n \\ \tilde r_n^* & \rho_0 \end{bmatrix}$$

along with (3.4.6) and the result (3.5.3), imply that

$$\theta = -R_n^{-1} r_n, \qquad \tilde\theta = -R_n^{-1} \tilde r_n$$
$$\sigma_n^2 = \rho_0 - r_n^* R_n^{-1} r_n = \rho_0 - \tilde r_n^* R_n^{-1} \tilde r_n$$

Next, make use of the above equations and a standard formula for the inverse of a partitioned matrix (see Result R26 in Appendix A) to write

$$R_{n+1}^{-1} = \begin{bmatrix} 0 & 0 \\ 0 & R_n^{-1} \end{bmatrix} + \begin{bmatrix} 1 \\ \theta \end{bmatrix} [\,1 \;\; \theta^*\,] / \sigma_n^2 \tag{3.9.17}$$

$$R_{n+1}^{-1} = \begin{bmatrix} R_n^{-1} & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} \tilde\theta \\ 1 \end{bmatrix} [\,\tilde\theta^* \;\; 1\,] / \sigma_n^2 \tag{3.9.18}$$

Finally, introduce the following $(n+1) \times (n+1)$ matrix

$$Z = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 1 & 0 & & \vdots \\ & \ddots & \ddots & \\ 0 & & 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & \cdots & 0 \\ I_{n \times n} & & 0 \end{bmatrix}$$

and observe that multiplication by $Z$ has the following effects: $Zx$ shifts the entries of a vector $x$ down by one position (a zero enters at the top and the last entry is discarded), and $ZXZ^T$ shifts the leading $n \times n$ block of a matrix $X$ one position down along the main diagonal, zeroing the first row and column. Owing to these effects of the linear transformation by $Z$, this matrix is called a shift or displacement operator.

We are now prepared to present a simple derivation of the GS formula. The basic idea of this derivation is to eliminate $R_n^{-1}$ from the expressions for $R_{n+1}^{-1}$ in (3.9.17) and (3.9.18) by making use of the above displacement properties of $Z$. Hence, using the expression (3.9.17) for $R_{n+1}^{-1}$, and its "dual" (3.9.18) for calculating $Z R_{n+1}^{-1} Z^T$, gives

$$R_{n+1}^{-1} - Z R_{n+1}^{-1} Z^T = \frac{1}{\sigma_n^2}\left\{ \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_n \end{bmatrix} [\,1 \; a_1^* \, \cdots \, a_n^*\,] - \begin{bmatrix} 0 \\ a_n^* \\ \vdots \\ a_1^* \end{bmatrix} [\,0 \; a_n \, \cdots \, a_1\,] \right\} \tag{3.9.19}$$

Premultiplying and postmultiplying (3.9.19) by $Z$ and $Z^T$, respectively, and then continuing to do so with the resulting equations, we obtain

$$Z R_{n+1}^{-1} Z^T - Z^2 R_{n+1}^{-1} Z^{2T} = \frac{1}{\sigma_n^2}\left\{ \begin{bmatrix} 0 \\ 1 \\ a_1 \\ \vdots \\ a_{n-1} \end{bmatrix} [\,0 \; 1 \; a_1^* \, \cdots \, a_{n-1}^*\,] - \begin{bmatrix} 0 \\ 0 \\ a_n^* \\ \vdots \\ a_2^* \end{bmatrix} [\,0 \; 0 \; a_n \, \cdots \, a_2\,] \right\} \tag{3.9.20}$$

$$\vdots$$

$$Z^n R_{n+1}^{-1} Z^{nT} - 0 = \frac{1}{\sigma_n^2} \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} [\,0 \, \cdots \, 0 \; 1\,] \tag{3.9.21}$$

In (3.9.21), use is made of the fact that $Z$ is a nilpotent matrix of order $n+1$, in the sense that $Z^{n+1} = 0$ (which can be readily verified). Now, by simply summing up the above equations (3.9.19)-(3.9.21), we derive the following expression for $R_{n+1}^{-1}$:

$$R_{n+1}^{-1} = \frac{1}{\sigma_n^2}\left\{ \begin{bmatrix} 1 & & & 0 \\ a_1 & 1 & & \\ \vdots & \ddots & \ddots & \\ a_n & \cdots & a_1 & 1 \end{bmatrix} \begin{bmatrix} 1 & a_1^* & \cdots & a_n^* \\ & 1 & \ddots & \vdots \\ & & \ddots & a_1^* \\ 0 & & & 1 \end{bmatrix} - \begin{bmatrix} 0 & & & 0 \\ a_n^* & 0 & & \\ \vdots & \ddots & \ddots & \\ a_1^* & \cdots & a_n^* & 0 \end{bmatrix} \begin{bmatrix} 0 & a_n & \cdots & a_1 \\ & 0 & \ddots & \vdots \\ & & \ddots & a_n \\ 0 & & & 0 \end{bmatrix} \right\} \tag{3.9.22}$$

which is the GS formula. Note from (3.9.22) that $R_{n+1}^{-1}$ is, indeed, completely determined by its first column, as claimed earlier.

The GS formula is inherently related to the Yule–Walker method of AR modeling, and this is one of the reasons for including it in this book. The GS formula is also useful in studying other spectral estimators, such as the Capon method, which is discussed in Chapter 5. The hope that the curious reader who studies this part will become interested in the fascinating topic of Toeplitz matrices and allied subjects is another reason for its inclusion. In particular, it is indeed fascinating to be able to derive an analytical formula for the inverse of a given matrix, as is shown above to be the case for Toeplitz matrices.

The basic ideas of the previous derivation may be extended to more general matrices. Let us explain this briefly. For a given matrix $X$, the rank of $X - ZXZ^T$ is called the displacement rank of $X$ under $Z$. As can be seen from (3.9.19), the inverse of a Hermitian Toeplitz matrix has a displacement rank equal to two.
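The GS formula is easy to verify numerically in the real-valued case. A sketch in which a direct solve stands in for the Levinson-Durbin algorithm, and the test sequence is an illustrative positive definite ACS:

```python
import numpy as np

def yule_walker(r):
    """theta = -R_n^{-1} r_n and sigma_n^2 = r(0) + r_n^T theta (cf. (3.4.6));
    a direct solve is used here instead of the Levinson-Durbin algorithm."""
    r = np.asarray(r, dtype=float)
    n = len(r) - 1
    Rn = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])
    theta = -np.linalg.solve(Rn, r[1:])
    return theta, r[0] + r[1:] @ theta

def gs_inverse(r):
    """GS formula (3.9.22), real case: (L1 L1^T - L2 L2^T)/sigma_n^2 with
    L1, L2 lower triangular Toeplitz matrices whose first columns are
    [1, a_1, ..., a_n] and [0, a_n, ..., a_1]."""
    a, sigma2 = yule_walker(r)
    n = len(a)
    c1 = np.r_[1.0, a]
    c2 = np.r_[0.0, a[::-1]]
    L1 = np.array([[c1[i - j] if i >= j else 0.0 for j in range(n + 1)]
                   for i in range(n + 1)])
    L2 = np.array([[c2[i - j] if i >= j else 0.0 for j in range(n + 1)]
                   for i in range(n + 1)])
    return (L1 @ L1.T - L2 @ L2.T) / sigma2

n = 5
r = 0.9 ** np.arange(n + 1) + 0.4 * 0.2 ** np.arange(n + 1)
R = np.array([[r[abs(i - j)] for j in range(n + 1)] for i in range(n + 1)])
assert np.allclose(gs_inverse(r), np.linalg.inv(R))
```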
Now, assume we are given a (structured) matrix X for which we are able to find a nilpotent matrix Y such that X−1 has a low displacement rank under Y ; the matrix Y does not need to have the previous form of Z. Then, paralleling the calculations in (3.9.19)–(3.9.22), we might be able to derive a simple “closed–form” expression for X−1. See [Friedlander, Morf, Kailath, and Ljung 1979] for more details on the topic of this complement. 3.9.5 MA Parameter Estimation in Polynomial Time The parameter estimation of an AR process via the LS method leads to a quadratic minimization problem that can be solved in closed form (see (3.4.11), (3.4.12)). On the other hand, for an MA process the LS criterion similar to (3.4.11), which is given by N2 X t=N1 1 B(z)y(t) 2 (3.9.23) is a highly nonlinear function of the MA parameters (and likewise for an ARMA process). A simple MA spectral estimator, that does not require solving a nonlinear minimization problem, is given by equation (3.6.4) and is repeated here: ˆ φ(ω) = ˆ m X k=−ˆ m ˆ r(k)e−iωk (3.9.24) where ˆ m is the assumed MA order and {ˆ r(k)} are the standard sample covariances. As explained in Section 3.6 the main problem associated with (3.9.24) is the fact that ˆ φ(ω) is not guaranteed to be positive for all ω ∈[0, 2π]. If the final goal of the signal processing exercise is spectral analysis then an occurrence of negative values ˆ φ(ω) < 0 (for some values of ω) is not acceptable, as the true spectral density of course satisfies φ(ω) ≥0 for all ω ∈[0, 2π]. If the goal is MA parameter estimation, then the problem induced by ˆ φ(ω) < 0 (for some values of ω) is even more serious because in such a case ˆ φ(ω) cannot be factored as in (3.6.1), and hence no MA parameter estimates can be determined directly from ˆ φ(ω). 
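A tiny m = 1 instance makes the difficulty concrete. The sample values below are illustrative, and the minimizer of the weighted fitting problem for this instance is worked out by hand (for m = 1 the nonnegativity constraint reduces to |r(1)| <= r(0)/2):

```python
import numpy as np

# Sample MA(1) covariance estimates that are NOT a valid ACS:
r0_hat, r1_hat = 1.0, 0.6
w = np.linspace(-np.pi, np.pi, 801)
phi_hat = r0_hat + 2 * r1_hat * np.cos(w)
assert phi_hat.min() < 0       # phi_hat(pi) = 1 - 1.2 < 0: not factorable

# Closest valid MA(1) sequence in the weighted LS sense with W = diag(1, 2):
# the minimizer lies on the boundary r(1) = r(0)/2 (hand-computed KKT point)
r0, r1 = 16 / 15, 8 / 15
phi = r0 + 2 * r1 * np.cos(w)
assert phi.min() >= -1e-12     # a valid (boundary) MA(1) spectrum

# Spectral factorization: r(0) = sigma2 (1 + b1^2), r(1) = sigma2 b1
b1, sigma2 = 1.0, 8 / 15
assert np.isclose(sigma2 * (1 + b1 ** 2), r0) and np.isclose(sigma2 * b1, r1)
assert np.allclose(phi, sigma2 * np.abs(1 + b1 * np.exp(-1j * w)) ** 2)
```

Here the repaired spectrum touches zero at omega = pi, which is exactly the boundary case where the estimated covariances are pushed back onto the set of valid MA(1) sequences.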
In this complement we will show how to get around the problem of ˆ φ(ω) < 0, and hence how to obtain MA parameter estimates from such an invalid MA spectral density estimate, using an indirect but computationally efficient method (see [Stoica, McKelvey, and “sm2” 2004/2/ page 126 i i i i i i i i 126 Chapter 3 Parametric Methods for Rational Spectra Mari 2000; Dumitrescu, Tabus, and Stoica 2001]). Note that obtaining MA parameter estimates from the ˆ φ(ω) in (3.9.24) is not only of interest for MA estimation, but also as a step of some ARMA estimation methods (see, e.g., (3.7.9) as well as Exercise 3.12). A sound way of tackling this problem of “factoring the unfactorable” is as follows. Let φ(ω) denote the PSD of an MA process of order m: φ(ω) = m X k=−m r(k)e−iωk ≥0, ω ∈[0, 2π] (3.9.25) We would like to determine the φ(ω) in (3.9.25) that is closest to ˆ φ(ω) in (3.9.24), in the following LS sense: min 1 2π Z π −π h ˆ φ(ω) −φ(ω) i2 dω (3.9.26) The order m in (3.9.25) may be different from the order ˆ m in (3.9.24). Without loss of generality we can assume that m ≤ˆ m (indeed, if m > ˆ m we can extend the sequence {ˆ r(k)} with zeroes to make m ≤ˆ m). Once φ(ω) has been obtained by solving (3.9.26) we can factor it by using any of a number of available spectral factorization algorithms (see, e.g., [Wilson 1969; Vostry 1975; Vostry 1976]), and in this way derive MA parameter estimates {bk} satisfying φ(ω) = σ2|B(ω)|2 (3.9.27) (see (3.6.1)). This step of obtaining {bk} and σ2 from φ(ω) can be computed in O(m2) flops. The problem that remains is to solve (3.9.26) for φ(ω) in a similarly efficient computational way. 
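The factorization step just described can be sketched by rooting z^m phi(z) and keeping the zeroes inside the unit circle. This is a simple root-finding stand-in for the iterative algorithms cited above, shown for the real-valued case with illustrative MA(2) parameters:

```python
import numpy as np

def ma_spectral_factor(r):
    """Factor a valid MA(m) covariance sequence r(0..m) as
    phi(w) = sigma2 |B(w)|^2 by rooting the degree-2m polynomial
    z^m phi(z) and keeping the m roots inside the unit circle
    (the minimum phase choice). Real-valued case."""
    r = np.asarray(r, dtype=float)
    # coefficients of z^m phi(z), highest power first: r(m)..r(0)..r(m)
    coeffs = np.r_[r[::-1], r[1:]]
    roots = np.roots(coeffs)
    inside = roots[np.abs(roots) < 1]
    b = np.poly(inside).real           # monic, so b_0 = 1
    sigma2 = r[0] / np.sum(b ** 2)     # from r(0) = sigma2 * sum_j b_j^2
    return b, sigma2

# Round trip: build r from known MA(2) parameters via (3.3.3), factor back
b_true = np.array([1.0, 0.5, 0.2])
s2_true = 1.3
m = 2
r = np.array([s2_true * b_true[k:] @ b_true[:m + 1 - k] for k in range(m + 1)])
b, s2 = ma_spectral_factor(r)
assert np.allclose(b, b_true) and np.isclose(s2, s2_true)
```

Root-finding is adequate for small m and zeroes away from the unit circle; the cited algorithms are preferable in the general case.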
As ˆ φ(ω) −φ(ω) = m X k=−m [ˆ r(k) −r(k)] e−iωk + X |k|>m ˆ r(k)e−iωk it follows from Parseval’s theorem (see (1.2.6)) that the spectral LS criterion of (3.9.26) can be rewritten as a covariance fitting criterion: 1 2π Z π −π h ˆ φ(ω) −φ(ω) i2 dω = m X k=−m ˆ r(k) −r(k) 2 + X |k|>m |ˆ r(k)|2 Consequently, the approximation problem (3.9.26) is equivalent to: min {r(k)} ∥ˆ r −r∥2 W subject to (3.9.25) (3.9.28) “sm2” 2004/2/ page 12 i i i i i i i i Section 3.9 Complements 127 where ∥x∥2 W = x∗Wx and ˆ r = ˆ r(0) . . . ˆ r(m) T r = r(0) . . . r(m) T W =      1 0 2 ... 0 2      In the following we will describe a computationally efficient and reliable algorithm for solving problem (3.9.28) (with a general W matrix) in a time that is a polynomial function of m (a more precise flop count is given below). Note that a possible way of tackling (3.9.28) would consist of writing the covariances {r(k)} as functions of the MA parameters (see (3.3.3)), which would guarantee that they satisfy (3.9.25), and then minimize the function in (3.9.28) with respect to the MA parameters. However, the so-obtained minimization problem would be, similarly to (3.9.23), nonlinear in the MA parameters (more precisely, the criterion in (3.9.28) is quartic in {bk}), which is exactly the type of problem we tried to avoid in the first place. As a preparation step for solving (3.9.28) we first derive a parameterization of the MA covariance sequence {r(k)}, which will turn out to be more convenient than the parameterization via {bk}. Let Jk denote the (m + 1) × (m + 1) matrix with ones on the (k + 1)st diagonal and zeroes everywhere else: Jk = k+1 z }| {             0 . . . 0 1 0 . . . 0 ... ... 1 . . . 0 0 . . . 0 . . . . . . . . . 0             , (m + 1) × (m + 1) (for k = 0, . . . , m). Note that J0 = I. Then the following result holds: Any MA covariance sequence {r(k)}m k=0 can be written as r(k) = tr(JkQ) for k = 0, . . . 
, m, where Q is an (m+1)×(m+1) positive semidefinite matrix. (3.9.29) To prove this result, let a(ω) = 1 eiω . . . eimωT “sm2” 2004/2/ page 128 i i i i i i i i 128 Chapter 3 Parametric Methods for Rational Spectra and observe that a(ω)a∗(ω) =       1 e−iω · · · e−imω eiω 1 ... . . . . . . ... ... e−iω eimω · · · eiω 1       = m X k=−m Jke−ikω where J−k = JT k (for k ≥0). Hence, for the sequence parameterized as in (3.9.29), we have that m X k=−m r(k)e−ikω = tr " m X k=−m JkQe−ikω # = tr [a(ω)a∗(ω)Q] = a∗(ω)Qa(ω) ≥0, for ω ∈[0, 2π] which implies that {r(k)} indeed is an MA(m) covariance sequence. To show that any MA(m) covariance sequence can be parameterized as in (3.9.29), we make use of (3.3.3) to write (for k = 0, . . . , m) r(k) = σ2 m X j=k bjb∗ j−k = σ2 b∗ 0 · · · b∗ m Jk    b0 . . . bm    = tr      Jk · σ2    b0 . . . bm    b∗ 0 · · · b∗ m      (3.9.30) Evidently (3.9.30) has the form stated in (3.9.29) with Q = σ2    b0 . . . bm    b∗ 0 · · · b∗ m With this observation, the proof of (3.9.29) is complete. We can now turn our attention to the main problem, (3.9.28). We will describe an efficient algorithm for solving (3.9.28) with a general weighting matrix W > 0 (as already stated.). For a choice of W that usually yields more accurate MA parameter estimates than the simple diagonal weighting in (3.9.28), we refer the reader to [Stoica, McKelvey, and Mari 2000]. Let µ = C(ˆ r −r) where C is the Cholesky factor of W (i.e., C is an upper triangular matrix and W = C∗C). Also, let α be a vector containing all the elements in the upper triangle of Q, including the diagonal: α = [Q1,1 Q1,2 . . . Q1,m+1 ; Q2,2 . . . Q2,m+1 ; . . . ; Qm+1,m+1]T “sm2” 2004/2/ page 129 i i i i i i i i Section 3.10 Exercises 129 Note that α defines Q; that is, the elements of Q are either elements of α or complex conjugates of elements of α. 
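Both directions of the parameterization result (3.9.29) are easy to check numerically; the MA parameters and the random positive semidefinite Q below are illustrative:

```python
import numpy as np

m = 3
b = np.array([1.0, 0.6, -0.3, 0.2])      # b_0..b_m, illustrative values
sigma2 = 2.0

def J(k, size):
    """J_k: ones on the k-th superdiagonal, zeroes elsewhere (J_0 = I)."""
    return np.eye(size, k=k)

# Direction 1: Q = sigma2 * b b^T reproduces the MA covariances (3.3.3)
Q = sigma2 * np.outer(b, b)
r_trace = np.array([np.trace(J(k, m + 1) @ Q) for k in range(m + 1)])
r_direct = np.array([sigma2 * b[k:] @ b[:m + 1 - k] for k in range(m + 1)])
assert np.allclose(r_trace, r_direct)

# Direction 2: any Q >= 0 yields a nonnegative trigonometric polynomial,
# since sum_k tr(J_k Q) e^{-ikw} = a*(w) Q a(w)
rng = np.random.default_rng(3)
G = rng.standard_normal((m + 1, m + 1))
Q = G @ G.T                               # positive semidefinite
r = np.array([np.trace(J(k, m + 1) @ Q) for k in range(m + 1)])
w = np.linspace(-np.pi, np.pi, 512)
phi = r[0] + 2 * np.real(sum(r[k] * np.exp(-1j * k * w) for k in range(1, m + 1)))
assert np.all(phi >= -1e-10)
```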
Making use of this notation and of (3.9.29) we can rewrite (3.9.28) in the following form (for real-valued sequences): min ρ,µ,α ρ subject to: ∥µ∥≤ρ Q ≥0      tr[Q] tr 1 2 J1 + JT 1  Q . . . tr 1 2 Jm + JT m  Q     + C−1µ = ˆ r (3.9.31) Note that to obtain the equality constraint in (3.9.31) we used the fact that (in the real-valued case; the complex-valued case can be treated similarly): r(k) = tr(JkQ) = tr(QT JT k ) = tr(JT k Q) = 1 2 tr (Jk + JT k )Q The reason for this seemingly artificial trick is that we need the matrices multiplying Q in (3.9.31) to be symmetric. In effect, the problem (3.9.31) has precisely the form of a semidefinite quadratic program (SQP) which can be solved efficiently by means of interior point methods (see [Sturm 1999] and also [Dumitrescu, Tabus, and Stoica 2001] and references therein). Specifically, it can be shown that an interior point method (such as the ones in [Sturm 1999]) when applied to the SQP in (3.9.31) requires O(m4) flops per iteration; furthermore, the number of iterations needed to achieve practical convergence of the method is typically quite small (and nearly independent of m), for instance between 10 and 20 iterations. The overall conclusion, therefore, is that (3.9.31), and hence the original problem (3.9.28), can be efficiently solved in O(m4) flops. Once the solution to (3.9.31) has been computed, we can obtain the corresponding MA covariances either as r = ˆ r −C−1µ or as r(k) = tr(JkQ) for k = 0, . . . , m. Numerical results obtained with the MA parameter estimation algorithm outlined above have been reported in [Dumitrescu, Tabus, and Stoica 2001] (see also [Stoica, McKelvey, and Mari 2000]). 3.10 EXERCISES Exercise 3.1: The Minimum Phase Property As stated in the text, a polynomial A(z) is said to be minimum phase if all its zeroes are inside the unit circle. In this exercise, we motivate the name minimum phase. 
Specifically, we will show that if A(z) = 1 + a1z−1 + · · · + anz−n has real-valued coefficients and has all its zeroes inside the unit circle, and if B(z) is any other polynomial in z−1 with real-valued coefficients that satisfies |B(ω)| = |A(ω)| and B(0) = A(0) (where B(ω) ≜B(z)|z=eiω), then the phase lag of B(ω), given by “sm2” 2004/2/ page 130 i i i i i i i i 130 Chapter 3 Parametric Methods for Rational Spectra −arg B(ω)), is greater than or equal to the phase lag of A(ω): −arg B(ω) ≥−arg A(ω) Since we can factor A(z) as A(z) = n Y k=1 (1 −αkz−1) and arg A(ω) = Pn k=1 arg 1 −αke−iω , we begin by proving the minimum phase property for first–order polynomials. Let C(z) = 1 −αz−1, α ≜reiθ, r < 1 D(z) = z−1 −α∗= C(z)z−1 −α∗ 1 −αz−1 ≜C(z)E(z) (3.10.1) (a) Show that the zero of D(z) is outside the unit circle, and that |D(ω)| = |C(ω)|. (b) Show that −arg E(ω) = ω + 2 tan−1  r sin(ω −θ) 1 −r cos(ω −θ)  Also, show that the above function is increasing. (c) If α is real, conclude that −arg D(ω) ≥−arg C(ω) for 0 ≤ω ≤π, which justifies the name minimum phase for C(z) in the first–order case. (d) Generalize the first–order results proven in parts (a)–(c) to polynomials A(z) and B(z) of arbitrary order; in this case, the αk are either real or occur in complex-conjugate pairs. Exercise 3.2: Generating the ACS from ARMA Parameters In this chapter we developed equations expressing the ARMA coefficients {σ2, ai, bj} in terms of the ACS {r(k)}∞ k=−∞. Find the inverse map; that is, given σ2, a1, . . . , an, b1 . . . , bm, find equations to determine {r(k)}∞ k=−∞. Exercise 3.3: Relationship between AR Modeling and Forward Linear Prediction Suppose we have a zero mean stationary process {y(t)} (not necessarily AR) with ACS {r(k)}∞ k=−∞. 
We wish to predict y(t) by a linear combination of its n past values; that is, the predicted value is given by ˆ yf(t) = n X k=1 (−ak)y(t −k) We define the forward prediction error as ef(t) = y(t) −ˆ yf(t) = n X k=0 aky(t −k) “sm2” 2004/2/ page 13 i i i i i i i i Section 3.10 Exercises 131 with a0 = 1. Show that the vector θf = [a1 . . . an]T of prediction coefficients that minimizes the prediction error variance σ2 f ≜E{|ef(t)|2} is the solution to (3.4.2). Show also that σ2 f = σ2 n, i.e., that σ2 n in (3.4.2) is the prediction error variance. Furthermore, show that if {y(t)} is an AR(p) process with p ≤n, then the prediction error is white noise, and that kj = 0 for j > p where kj is the jth reflection coefficient defined in (3.5.7). Show that, as a conse-quence, ap+1, . . . , an = 0. Hint: The calculations performed in Section 3.4.2 and in Complement 3.9.2 will be useful in solving this problem. Exercise 3.4: Relationship between AR Modeling and Backward Linear Prediction Consider the signal {y(t)} as in Exercise 3.3. This time, we will consider backward prediction; that is, we will predict y(t) from its n immediate future values: ˆ yb(t) = n X k=1 (−bk)y(t + k) with corresponding backward prediction error eb(t) = y(t) −ˆ yb(t). Such backward prediction is useful in applications where noncausal processing is permitted; for example, when the data has been prerecorded and is stored in memory or on a tape and we want to make inferences on samples that precede the observed ones. Find an expression similar to (3.4.2) for the backward prediction coefficient vector θb = [b1 . . . bn]T . Find a relationship between the θb and the corresponding forward prediction coefficient vector θf. Relate the forward and backward prediction error variances. Exercise 3.5: Prediction Filters and Smoothing Filters The smoothing filter is a practically useful variation on the theme of linear prediction. 
A result of Exercises 3.3 and 3.4 should be that for the forward and backward prediction filters A(z) = 1 + n X k=1 akz−k and B(z) = 1 + n X k=1 bkz−k, the prediction coefficients satisfy ak = b∗ k, and the prediction error variances are equal. Now consider the smoothing filter es(t) = m X k=1 cky(t −k) + y(t) + m X k=1 dky(t + k). (a) Derive a system of linear equations, similar to the forward and backward linear prediction equations, that relate the smoothing filter coefficients, the smoothing prediction error variance σ2 s = E  |es(t)|2 , and the ACS of y(t). “sm2” 2004/2/ page 13 i i i i i i i i 132 Chapter 3 Parametric Methods for Rational Spectra (b) For n = 2m, provide an example of a zero–mean stationary random process for which the minimum smoothing prediction error variance is greater than the minimum forward prediction error variance. Also provide a second example where the minimum smoothing filter prediction error variance is less than the corresponding minimum forward prediction error variance. (c) Assume m = n, but now constrain the smoothing prediction coefficients to be complex–conjugate symmetric: ck = d∗ k for k = 1, . . . , m. In this case the two prediction filters and the smoothing filter have the same number of degrees of freedom. Prove that the minimum smoothing prediction error variance is less than or equal to the minimum (forward or backward) prediction error variance. Hint: Show that the unconstrained minimum smoothing error variance solution (where we do not impose the constraint ck = d∗ k) satisfies ck = d∗ k anyway. Exercise 3.6: Relationship between Minimum Prediction Error and Spec-tral Flatness Consider a random process {y(t)} with ACS {r(k)} (y(t) is not necessarily an AR process). We find an AR(n) model for y(t) by solving (3.4.6) for σ2 n and θn. These parameters generate an AR PSD model: φAR(ω) = σ2 n |A(ω)|2 whose inverse Fourier transform we denote by {rAR(k)}∞ k=−∞. 
In this exercise we explore the relationship between $\{r(k)\}$ and $\{r_{AR}(k)\}$, and between $\phi_y(\omega)$ and $\phi_{AR}(\omega)$.

(a) Verify that the AR model has the property that
\[
r_{AR}(k) = r(k), \qquad k = 0, \dots, n.
\]

(b) We have seen from Exercise 3.3 that the AR model minimizes the $n$th-order forward prediction error variance; that is, the variance of
\[
e(t) = y(t) + a_1 y(t-1) + \dots + a_n y(t-n).
\]
For the special case that $\{y(t)\}$ is AR of order $n$ or less, we also know that $\{e(t)\}$ is white noise, so $\phi_e(\omega)$ is flat. We will extend this last property by showing that, for general $\{y(t)\}$, $\phi_e(\omega)$ is maximally flat in the sense that the AR model maximizes the spectral flatness measure given by
\[
f_e = \frac{\exp\left[\frac{1}{2\pi}\int_{-\pi}^{\pi} \ln \phi_e(\omega)\, d\omega\right]}{\frac{1}{2\pi}\int_{-\pi}^{\pi} \phi_e(\omega)\, d\omega} \tag{3.10.2}
\]
where
\[
\phi_e(\omega) = |A(\omega)|^2\, \phi_y(\omega) = \sigma_n^2\, \frac{\phi_y(\omega)}{\phi_{AR}(\omega)}.
\]
Show that the measure $f_e$ has the following "desirable" properties of a spectral flatness measure:

(i) $f_e$ is unchanged if $\phi_e(\omega)$ is multiplied by a constant.
(ii) $0 \le f_e \le 1$.
(iii) $f_e = 1$ if and only if $\phi_e(\omega) = \text{constant}$.

Hint: Use the fact that
\[
\frac{1}{2\pi}\int_{-\pi}^{\pi} \ln |A(\omega)|^2\, d\omega = 0 \tag{3.10.3}
\]
(The above result can be proven using the Cauchy integral formula.) Show that (3.10.3) implies
\[
f_e = f_y\, \frac{r_y(0)}{r_e(0)} \tag{3.10.4}
\]
and thus that minimizing $r_e(0)$ maximizes $f_e$.

Exercise 3.7: Diagonalization of the Covariance Matrix

Show that $R_{n+1}$ in equation (3.5.2) satisfies
\[
L^* R_{n+1} L = D
\]
where
\[
L = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ & 1 & & \vdots \\ \theta_n & \theta_{n-1} & \ddots & 0 \\ & & \cdots & 1 \end{bmatrix}
\qquad \text{and} \qquad
D = \mathrm{diag}\,[\sigma_n^2 \ \ \sigma_{n-1}^2 \ \dots \ \sigma_0^2]
\]
and where $\theta_k$ and $\sigma_k^2$ are defined in (3.4.6). Use this property to show that
\[
|R_{n+1}| = \prod_{k=0}^{n} \sigma_k^2
\]

Exercise 3.8: Stability of Yule–Walker AR Models

Assume that the matrix $R_{n+1}$ in equation (3.4.6) is positive definite. (This can be achieved by using the sample covariances in (2.2.4) to build $R_{n+1}$, as explained in Section 2.2.)
Then show that the AR model obtained from the Yule–Walker equations (3.4.6) is stable in the sense that the polynomial $A(z)$ has all its zeroes strictly inside the unit circle. (Most of the available proofs for this property are discussed in [Stoica and Nehorai 1987].)

Exercise 3.9: Three Equivalent Representations for AR Processes

In this chapter we have considered three ways to parameterize an AR($n$) process, but we have not explicitly shown when they are equivalent. Show that, for a nondegenerate AR($n$) process (i.e., one for which $R_{n+1}$ is positive definite), the following three parameterizations are equivalent:

(R) $r(0), \dots, r(n)$ such that $R_{n+1}$ is positive definite.
(K) $r(0), k_1, \dots, k_n$ such that $r(0) > 0$ and $|k_i| < 1$ for $i = 1, \dots, n$.
(A) $\sigma_n^2, a_1, \dots, a_n$ such that $\sigma_n^2 > 0$ and all the zeroes of $A(z)$ are inside the unit circle.

Find the mapping from each parameterization to the others (some of these have already been derived in the text and in the previous exercises).

Exercise 3.10: An Alternative Proof of the Stability Property of Reflection Coefficients

Prove that the $\hat k_p$ which minimizes (3.9.12) must be such that $|\hat k_p| \le 1$, without using the expression (3.9.15) for $\hat k_p$. Hint: Write the criterion in (3.9.12) as
\[
f(k_p) = E\left( \left\| \begin{bmatrix} 1 & k_p \\ k_p^* & 1 \end{bmatrix} z(t) \right\|^2 \right)
\]
where
\[
E(\cdot) = \frac{1}{2(N-p)} \sum_{t=p+1}^{N} (\cdot), \qquad
z(t) = \begin{bmatrix} \hat e_{f,p-1}(t) & \hat e_{b,p-1}(t-1) \end{bmatrix}^T
\]
and show that if $|k_p| > 1$ then $f(k_p) > f(1/k_p^*)$.

Exercise 3.11: Recurrence Properties of the Reflection Coefficient Sequence for an MA Model

For an AR process of order $n$, the reflection coefficients satisfy $k_i = 0$ for $i > n$ (see Exercise 3.3), and the ACS satisfies the linear recurrence relationship $A(z)\, r(k) = 0$ for $k > 0$. Since an MA process of order $m$ has the property that $r(i) = 0$ for $i > m$, we might wonder whether a recurrence relationship holds for the reflection coefficients corresponding to an MA process.
We will investigate this "conjecture" for a simple case. Consider an MA process of order 1 with parameter $b_1$. Show that $|R_n|$ satisfies the relationship
\[
|R_n| = r(0)\,|R_{n-1}| - |r(1)|^2\,|R_{n-2}|, \qquad n \ge 2
\]
Show that $k_n = (-r(1))^n / |R_n|$ and that the reflection coefficient sequence satisfies the recurrence relationship
\[
\frac{1}{k_n} = -\frac{r(0)}{r(1)}\,\frac{1}{k_{n-1}} - \frac{r^*(1)}{r(1)}\,\frac{1}{k_{n-2}} \tag{3.10.5}
\]
with appropriate initial conditions (state them). Show that the solution to (3.10.5) for $|b_1| < 1$ is
\[
k_n = \frac{(1 - |b_1|^2)(-b_1)^n}{1 - |b_1|^{2n+2}} \tag{3.10.6}
\]
This sequence decays exponentially to zero. When $b_1 = -1$, show that $k_n = 1/(n+1)$.

It has been shown that for large $n$, $B(z)\, k_n \simeq 0$, where $\simeq 0$ means that the residue is small compared to the $k_n$ terms [Georgiou 1987]. This result holds even for MA processes of order higher than 1. Unfortunately, the result is of little practical use as a means of estimating the $b_k$ coefficients, since for large $n$ the $k_n$ values are (very) small.

Exercise 3.12: Asymptotic Variance of the ARMA Spectral Estimator

Consider the ARMA spectral estimator (3.2.2) with any consistent estimate of $\sigma^2$ and $\{a_i, b_j\}$. For simplicity, assume that the ARMA parameters are real; however, the result holds for complex ARMA processes as well. Show that the asymptotic (for large data sets) variance of this spectral estimator can be written in the form
\[
E\left\{ [\hat\phi(\omega) - \phi(\omega)]^2 \right\} = C(\omega)\,\phi^2(\omega) \tag{3.10.7}
\]
where $C(\omega) = \varphi^T(\omega)\, P\, \varphi(\omega)$. Here, $P$ is the covariance matrix of the estimate of the parameter vector $[\sigma^2, a^T, b^T]^T$ and the vector $\varphi(\omega)$ has an expression that is to be found. Deduce that (3.10.7) has the same form as the asymptotic variance of the periodogram spectral estimator, but with the essential difference that in the ARMA estimator case $C(\omega)$ goes to zero as the number of data samples processed increases (and that $C(\omega)$ in (3.10.7) is a function of $\omega$).
Hint: Use a Taylor series expansion of $\hat\phi(\omega)$ as a function of the estimated parameters $\{\hat\sigma^2, \hat a_i, \hat b_j\}$ (see, e.g., Appendix B).

Exercise 3.13: Filtering Interpretation of Numerator Estimators in ARMA Estimation

An alternative method for estimating the MA part of an ARMA PSD is as follows. Assume we have estimated the AR coefficients (e.g., from equation (3.7.2) or (3.7.4)). We filter $y(t)$ by $\hat A(z)$ to form $f(t)$:
\[
f(t) = y(t) + \sum_{i=1}^{n} \hat a_i\, y(t-i), \qquad t = n+1, \dots, N.
\]
Then estimate the ARMA PSD as
\[
\hat\phi(\omega) = \frac{\sum_{k=-m}^{m} \hat r_f(k)\, e^{-i\omega k}}{|\hat A(\omega)|^2}
\]
where $\hat r_f(k)$ are the standard ACS estimates for $f(t)$. Show that the above estimator is quite similar to (3.7.8) and (3.7.9) for large $N$.

Exercise 3.14: An Alternative Expression for ARMA Power Spectral Density

Consider an ARMA($n, m$) process. Show that
\[
\phi(z) = \sigma^2\, \frac{B(z)\, B^*(1/z^*)}{A(z)\, A^*(1/z^*)}
\]
can be written as
\[
\phi(z) = \frac{C(z)}{A(z)} + \frac{C^*(1/z^*)}{A^*(1/z^*)} \tag{3.10.8}
\]
where
\[
C(z) = \sum_{k=0}^{\max(m,n)} c_k z^{-k}
\]
Show that the polynomial $C(z)$ satisfying (3.10.8) is unique, and find an expression for $c_k$ in terms of $\{a_i\}$ and $\{r(k)\}$. Equation (3.10.8) motivates an alternative estimation procedure to that in equations (3.7.8) and (3.7.9) for ARMA spectral estimation. In the alternative approach, we first estimate the AR coefficients $\{\hat a_i\}_{i=1}^{n}$ using, e.g., equation (3.7.2). We then estimate the $c_k$ coefficients using the formula found in this exercise, and finally insert the estimates $\hat a_k$ and $\hat c_k$ into the right-hand side of (3.10.8) to obtain a spectral estimate. Prove that this alternative estimator is equivalent to that in (3.7.8)–(3.7.9) under certain conditions, and find conditions on $\{\hat a_k\}$ so that they are equivalent. Also, compare (3.7.9) and (3.10.8) for ARMA($n, m$) spectral estimation when $m < n$.

Exercise 3.15: Padé Approximation

A minimum phase (or causally invertible) ARMA($n, m$) model $B(z)/A(z)$ can be equivalently represented as an AR($\infty$) model $1/C(z)$.
The approximation of a ratio of polynomials by a polynomial of higher order was considered by Padé in the late 1800s. One possible application of the Padé approximation is to obtain an ARMA spectral model by first estimating the coefficients of a high-order AR model, then solving for a (low-order) ARMA model from the estimated AR coefficients. In this exercise we investigate the model relationships and some consequences of truncating the AR model polynomial coefficients. Define:
\[
A(z) = 1 + a_1 z^{-1} + \dots + a_n z^{-n}
\]
\[
B(z) = 1 + b_1 z^{-1} + \dots + b_m z^{-m}
\]
\[
C(z) = 1 + c_1 z^{-1} + c_2 z^{-2} + \dots
\]

(a) Show that
\[
c_k = \begin{cases}
1, & k = 0 \\[2pt]
a_k - \sum_{i=1}^{m} b_i\, c_{k-i}, & 1 \le k \le n \\[2pt]
-\sum_{i=1}^{m} b_i\, c_{k-i}, & k > n
\end{cases}
\]
where we assume any polynomial coefficient is equal to zero outside its defined range.

(b) Using the equations above, derive a procedure for computing the $a_i$ and $b_j$ parameters from a given set of $\{c_k\}_{k=0}^{m+n}$ parameters. Assume $m$ and $n$ are known.

(c) The above equations give an exact representation using an infinite-order AR polynomial. In the Padé method, an approximation to $B(z)/A(z) = 1/C(z)$ is obtained by truncating (setting to zero) the $c_k$ coefficients for $k > m+n$. Suppose a stable minimum phase ARMA($n, m$) filter is approximated by an AR($m+n$) filter using the Padé approximation. Give an example to show that the resulting AR approximation is not necessarily stable.

(d) Suppose a stable AR($m+n$) filter is approximated by a ratio $B_m(z)/A_n(z)$ as in part (b). Give an example to show that the resulting ARMA approximation is not necessarily stable.
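The two mappings in parts (a) and (b) can be sketched numerically as follows (Python; the function names and the ARMA(1,1) test values are ours, for illustration only). `arma_to_ar` applies the recursion of part (a); `ar_to_arma` inverts it by solving the $m$ linear equations obtained from the $k > n$ branch, assuming the given $c_k$ admit an ARMA($n,m$) representation:

```python
import numpy as np

def arma_to_ar(a, b, K):
    """Coefficients c_0..c_K of C(z) = A(z)/B(z), the AR(inf) representation
    1/C(z) of B(z)/A(z): c_k = a_k - sum_{i=1}^m b_i c_{k-i} (a_k = 0 for k > n)."""
    c = np.zeros(K + 1)
    c[0] = 1.0
    for k in range(1, K + 1):
        ak = a[k - 1] if k <= len(a) else 0.0
        c[k] = ak - sum(b[i - 1] * c[k - i] for i in range(1, len(b) + 1) if k - i >= 0)
    return c

def ar_to_arma(c, n, m):
    """Pade step of part (b): recover (a, b) of an ARMA(n, m) model from c_0..c_{n+m}."""
    # For k = n+1, ..., n+m the recursion reads sum_{i=1}^m b_i c_{k-i} = -c_k
    M = np.array([[c[k - i] if k - i >= 0 else 0.0 for i in range(1, m + 1)]
                  for k in range(n + 1, n + m + 1)])
    b = np.linalg.solve(M, -np.asarray(c[n + 1:n + m + 1]))
    # Then a_k = c_k + sum_{i=1}^m b_i c_{k-i} for k = 1, ..., n
    a = np.array([c[k] + sum(b[i - 1] * c[k - i] for i in range(1, m + 1) if k - i >= 0)
                  for k in range(1, n + 1)])
    return a, b

# Round trip for A(z) = 1 - 0.5 z^{-1}, B(z) = 1 + 0.4 z^{-1}:
c = arma_to_ar([-0.5], [0.4], 2)        # [1, -0.9, 0.36]
a_hat, b_hat = ar_to_arma(c, 1, 1)      # recovers a = [-0.5], b = [0.4] exactly
```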
Exercise 3.16: (Non)Uniqueness of Fully Parameterized ARMA Equations

The shaping filter (or transfer function) of the ARMA equation (3.8.1) is given by the following matrix fraction:
\[
H(z) = A^{-1}(z)\, B(z), \qquad (n_y \times n_y) \tag{3.10.9}
\]
where $z$ is a dummy variable, and
\[
A(z) = I + A_1 z^{-1} + \dots + A_p z^{-p}
\]
\[
B(z) = I + B_1 z^{-1} + \dots + B_p z^{-p}
\]
(if the AR and MA orders, $n$ and $m$, are different, then $p$ above is equal to $\max(m,n)$). Assume that $A(z)$ and $B(z)$ are "fully parameterized" in the sense that all elements of the matrix coefficients $\{A_i, B_j\}$ are unknown. The matrix fraction description (MFD) (3.10.9) of the ARMA shaping filter is unique if and only if there exist no matrix polynomials $\tilde A(z)$ and $\tilde B(z)$ of degree $p$ and no matrix polynomial $L(z) \ne I$ such that
\[
\tilde A(z) = L(z)\, A(z), \qquad \tilde B(z) = L(z)\, B(z) \tag{3.10.10}
\]
This can be verified by making use of (3.10.9); see, e.g., [Kailath 1980]. Show that the above uniqueness condition is satisfied for the fully parameterized MFD if and only if
\[
\mathrm{rank}\,[A_p \ \ B_p] = n_y \tag{3.10.11}
\]
Comment on the character of this condition: is it restrictive or not?

COMPUTER EXERCISES

Tools for AR, MA, and ARMA Spectral Estimation: The text web site www.prenhall.com/stoica contains the following Matlab functions for use in computing AR, MA, and ARMA spectral estimates and selecting the model order. For the first four functions, y is the input data vector, n is the desired AR order, and m is the desired MA order (if applicable). The outputs are a, the vector $[\hat a_1, \dots, \hat a_n]^T$ of estimated AR parameters; b, the vector $[\hat b_1, \dots, \hat b_m]^T$ of MA parameters (if applicable); and sig2, the noise variance estimate $\hat\sigma^2$. Variable definitions specific to particular functions are given below.

• [a,sig2]=yulewalker(y,n) — The Yule–Walker AR method given by equation (3.4.2).
• [a,sig2]=lsar(y,n) — The covariance Least Squares AR method given by equation (3.4.12).
• [a,gamma]=mywarma(y,n,m,M) — The modified Yule–Walker based ARMA spectral estimate given by equation (3.7.9), where the AR coefficients are estimated from the overdetermined set of equations (3.7.4) with W = I. Here, M is the number of Yule–Walker equations used in (3.7.4) and gamma is the vector $[\hat\gamma_0, \dots, \hat\gamma_m]^T$.
• [a,b,sig2]=lsarma(y,n,m,K) — The two-stage Least Squares ARMA method given in Section 3.7.2; K is the number of AR parameters to estimate in Step 1 of that algorithm.
• order=armaorder(mo,sig2,N,nu) — Computes the AIC, AICc, GIC, and BIC model order selections for general parameter estimation problems (see Appendix C for details on the derivations of these methods). Here, mo is a vector of possible model orders, sig2 is the vector of estimated residual variances corresponding to the model orders in mo, N is the length of the observed data vector, and nu is a parameter in the GIC method. The output 4-element vector order contains the model orders selected using AIC, AICc, GIC, and BIC, respectively.

Exercise C3.17: Comparison of AR, ARMA and Periodogram Methods for ARMA Signals

In this exercise we examine the properties of parametric methods for PSD estimation. We will use two ARMA signals, one broadband and one narrowband, to illustrate the performance of these parametric methods.

Broadband ARMA Process: Generate realizations of the broadband ARMA process
\[
y(t) = \frac{B_1(z)}{A_1(z)}\, e(t)
\]
with $\sigma^2 = 1$ and
\[
A_1(z) = 1 - 1.3817 z^{-1} + 1.5632 z^{-2} - 0.8843 z^{-3} + 0.4096 z^{-4}
\]
\[
B_1(z) = 1 + 0.3544 z^{-1} + 0.3508 z^{-2} + 0.1736 z^{-3} + 0.2401 z^{-4}
\]
Choose the number of samples as N = 256.

(a) Estimate the PSD of the realizations by using the four AR and ARMA estimators described above. Use AR(4), AR(8), ARMA(4,4), and ARMA(8,8); for the MYW algorithm, use both M = n and M = 2n; for the LS AR(MA) algorithms, use K = 2n. Illustrate the performance by plotting ten overlaid estimates of the PSD. Also, plot the true PSD on the same diagram.
In addition, plot pole or pole–zero estimates for the various methods. (For the MYW method, the zeroes can be found by spectral factorization of the numerator; comment on the difficulties you encounter, if any.)

(b) Compare the two AR algorithms. How are they different in performance?

(c) Compare the two ARMA algorithms. How does M impact performance of the MYW algorithm? How do the accuracies of the respective pole and zero estimates compare?

(d) Use an ARMA(4,4) model for the LS ARMA algorithm, and estimate the PSD of the realizations for K = 4, 8, 12, and 16. How does K impact performance of the algorithm?

(e) Compare the lower-order estimates with the higher-order estimates. In what way(s) does increasing the model order improve or degrade estimation performance?

(f) Compare the AR to the ARMA estimates. How does the AR(8) model perform with respect to the ARMA(4,4) model and the ARMA(8,8) model?

(g) Compare your results with those using the periodogram method on the same process (from Exercise C2.21 in Chapter 2). Comment on the difference between the methods with respect to variance, bias, and any other relevant properties of the estimators you notice.

Narrowband ARMA Process: Generate realizations of the narrowband ARMA process
\[
y(t) = \frac{B_2(z)}{A_2(z)}\, e(t)
\]
with $\sigma^2 = 1$ and
\[
A_2(z) = 1 - 1.6408 z^{-1} + 2.2044 z^{-2} - 1.4808 z^{-3} + 0.8145 z^{-4}
\]
\[
B_2(z) = 1 + 1.5857 z^{-1} + 0.9604 z^{-2}
\]

(a) Repeat the experiments and comparisons in the broadband example for the narrowband process; this time, use the following model orders: AR(4), AR(8), AR(12), AR(16), ARMA(4,2), ARMA(8,4), and ARMA(12,6).

(b) Study qualitatively how the algorithm performances differ for narrowband and broadband data. Comment separately on performance near the spectral peaks and near the spectral valleys.
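One way to generate such ARMA realizations is the following sketch (Python with scipy, standing in for the Matlab tools from the text web site; the warm-up length of 500 samples is our choice, used to let the filter transient die out):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Narrowband ARMA(4,2) process of Exercise C3.17: y(t) = B2(z)/A2(z) e(t),
# with e(t) white Gaussian noise of variance sigma^2 = 1
a2 = [1, -1.6408, 2.2044, -1.4808, 0.8145]
b2 = [1, 1.5857, 0.9604]

N = 256
e = rng.standard_normal(N + 500)   # extra samples discarded as filter warm-up
y = lfilter(b2, a2, e)[500:]       # keep the last N samples
```

Since the poles of $A_2(z)$ lie close to (but inside) the unit circle, the transient decays slowly, which is why a generous warm-up segment is discarded.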
Exercise C3.18: AR and ARMA Estimators for Line Spectral Estimation

The ARMA methods can also be used to estimate line spectra (estimation of line spectra by other methods is the topic of Chapter 4). In this application, AR(MA) techniques are often said to provide super-resolution capabilities because they are able to resolve sinusoids too closely spaced in frequency to be resolved by periodogram-based methods. We again consider the four AR and ARMA estimators described above.

(a) Generate realizations of the signal
\[
y(t) = 10 \sin(0.24\pi t + \varphi_1) + 5 \sin(0.26\pi t + \varphi_2) + e(t), \qquad t = 1, \dots, N
\]
where $e(t)$ is (real) white Gaussian noise with variance $\sigma^2$, and where $\varphi_1, \varphi_2$ are independent random variables each uniformly distributed on $[0, 2\pi]$. From the results in Chapter 4, we find the spectrum of $y(t)$ to be
\[
\phi(\omega) = 50\pi\,[\delta(\omega - 0.24\pi) + \delta(\omega + 0.24\pi)] + 12.5\pi\,[\delta(\omega - 0.26\pi) + \delta(\omega + 0.26\pi)] + \sigma^2
\]

(b) Compute the "true" AR polynomial (using the true ACS sequence; see equation (4.1.6)) using the Yule–Walker equations for AR(4), AR(12), ARMA(4,4), and ARMA(12,12) models when $\sigma^2 = 1$. This experiment corresponds to estimates obtained as $N \to \infty$. Plot $1/|A(\omega)|^2$ for each case, and find the roots of $A(z)$. Which method(s) are able to resolve the two sinusoids?

(c) Consider now N = 64, and set $\sigma^2 = 0$; this corresponds to the finite data length but infinite SNR case. Compute estimated AR polynomials using the four spectral estimators and the AR and ARMA model orders described above; for the MYW technique consider both M = n and M = 2n, and for the LS ARMA technique use both K = n and K = 2n. Plot $1/|A(\omega)|^2$, overlaid, for 50 different Monte Carlo simulations (using different values of $\varphi_1$ and $\varphi_2$ for each). Also plot the zeroes of $A(z)$, overlaid, for these 50 simulations. Which method(s) are reliably able to resolve the sinusoids? Explain why.
Note that as $\sigma^2 \to 0$, $y(t)$ corresponds to a (limiting) AR(4) process. How does the choice of M or K in the ARMA methods affect resolution or accuracy of the frequency estimates?

(d) Obtain spectral estimates ($\hat\sigma^2 |\hat B(\omega)|^2 / |\hat A(\omega)|^2$ for the ARMA estimators and $\hat\sigma^2 / |\hat A(\omega)|^2$ for the AR estimators) for the four methods when N = 64 and $\sigma^2 = 1$. Plot ten overlaid spectral estimates and overlaid polynomial zeroes of the $\hat A(z)$ estimates. Experiment with different AR and ARMA model orders to see if the true frequencies are estimated more accurately; note also the appearance and severity of "spurious" sinusoids in the estimates for higher model orders. Which method(s) give reliable "super-resolution" estimation of the sinusoids? How does the model order influence the resolution properties? Which method appears to have the best resolution? You may want to experiment further by changing the SNR and the relative amplitudes of the sinusoids to gain a better understanding of the relative differences between the methods. Also, experiment with different model orders and parameters K and M to understand their impact on estimation accuracy.

(e) Compare the estimation results with periodogram-based estimates obtained from the same signals. Discuss differences in resolution, bias, and variance of the techniques.

Exercise C3.19: Model Order Selection for AR and ARMA Processes

In this exercise we examine four methods for model order selection in AR and ARMA spectral estimation. We will experiment with both broadband and narrowband processes.
As discussed in Appendix C, several important model order selection rules have the following general form (see (C.8.1)–(C.8.2)):
\[
-2 \ln p_n(y, \hat\theta_n) + \eta(n, N)\, n \tag{3.10.12}
\]
with different penalty coefficients $\eta(n, N)$ for the different methods:
\[
\text{AIC: } \eta(n, N) = 2 \qquad
\text{AICc: } \eta(n, N) = \frac{2N}{N - n - 1} \qquad
\text{GIC: } \eta(n, N) = \nu \ \ (\text{e.g., } \nu = 4) \qquad
\text{BIC: } \eta(n, N) = \ln N \tag{3.10.13}
\]
The term $\ln p_n(y, \hat\theta_n)$ is the log-likelihood of the observed data vector $y$ given the maximum likelihood (ML) estimate of the parameter vector $\theta$ for a model of order $n$ (where $n$ is the total number of estimated real-valued parameters in the model); for the case of AR, MA, and ARMA models, a large-sample approximation for $-2 \ln p_n(y, \hat\theta_n)$ that is commonly used for order selection (see, e.g., [Ljung 1987; Söderström and Stoica 1989]) is given by:
\[
-2 \ln p_n(y, \hat\theta_n) \simeq N \ln \hat\sigma_n^2 + \text{constant} \tag{3.10.14}
\]
where $\hat\sigma_n^2$ is the sample estimate of $\sigma^2$ in (3.2.2) corresponding to the model of order $n$. The selected order is the value of $n$ that minimizes (3.10.12). The order selection rules above, while derived for ML estimates of $\theta$, can be used even with approximate ML estimates of $\theta$, albeit with some loss of performance.

Broadband AR Process: Generate 100 realizations of the broadband AR process
\[
y(t) = \frac{1}{A_1(z)}\, e(t)
\]
with $\sigma^2 = 1$ and
\[
A_1(z) = 1 - 1.3817 z^{-1} + 1.5632 z^{-2} - 0.8843 z^{-3} + 0.4096 z^{-4}
\]
Choose the number of samples as N = 128. For each realization:

(a) Estimate the model parameters using the LS AR estimator, using AR model orders from 1 to 12.

(b) Find the model orders that minimize the AIC, AICc, GIC (with $\nu = 4$), and BIC criteria (see Appendix C). Note that for an AR model of order $m$, $n = m + 1$.

(c) For each of the four order selection methods, plot a histogram of the selected orders for the 100 realizations. Comment on their relative performance.

Repeat the above experiment using N = 256 and N = 1024 samples.
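The selection rule (3.10.12), with the approximation (3.10.14), is easy to script. A minimal sketch in Python; the residual-variance profile below is synthetic (made up to mimic a fit that stops improving beyond order 4), standing in for the $\hat\sigma_m^2$ values that the lsar routine would produce:

```python
import numpy as np

def select_order(orders, sig2, N, eta):
    """Return the AR order m minimizing N*ln(sig2_m) + eta(n, N)*n,
    with n = m + 1 estimated parameters; cf. (3.10.12)-(3.10.14)."""
    crit = [N * np.log(s2) + eta(m + 1, N) * (m + 1)
            for m, s2 in zip(orders, sig2)]
    return orders[int(np.argmin(crit))]

aic = lambda n, N: 2.0          # AIC penalty coefficient
bic = lambda n, N: np.log(N)    # BIC penalty coefficient

# Hypothetical residual variances for AR orders 1..6
orders = [1, 2, 3, 4, 5, 6]
sig2   = [2.0, 1.0, 0.5, 0.25, 0.249, 0.2485]

print(select_order(orders, sig2, 256, bic))  # -> 4
print(select_order(orders, sig2, 256, aic))  # -> 4
```

On real data the two criteria need not agree: AIC's fixed penalty of 2 per parameter tends to select larger orders than BIC's $\ln N$ penalty as $N$ grows.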
Discuss the relative performance of the order selection methods as N increases.

Narrowband AR Process: Repeat the above experiment using the narrowband AR process
\[
y(t) = \frac{1}{A_2(z)}\, e(t)
\]
with $\sigma^2 = 1$ and
\[
A_2(z) = 1 - 1.6408 z^{-1} + 2.2044 z^{-2} - 1.4808 z^{-3} + 0.8145 z^{-4}
\]
Compare the narrowband AR and broadband AR order selection results, and discuss the relative order selection performance for these two AR processes.

Broadband ARMA Process: Repeat the broadband AR experiment using the broadband ARMA process
\[
y(t) = \frac{B_1(z)}{A_1(z)}\, e(t)
\]
with $\sigma^2 = 1$ and
\[
A_1(z) = 1 - 1.3817 z^{-1} + 1.5632 z^{-2} - 0.8843 z^{-3} + 0.4096 z^{-4}
\]
\[
B_1(z) = 1 + 0.3544 z^{-1} + 0.3508 z^{-2} + 0.1736 z^{-3} + 0.2401 z^{-4}
\]
For the broadband ARMA process, use N = 256 and N = 1024 data samples. For each value of N, find ARMA($m, m$) models (so $n = 2m + 1$ in equation (3.10.12)) for $m = 1, \dots, 12$. Use the two-stage LS ARMA method with K = 4m to estimate the parameters.

Narrowband ARMA Process: Repeat the broadband ARMA experiment using the narrowband ARMA process
\[
y(t) = \frac{B_2(z)}{A_2(z)}\, e(t)
\]
with $\sigma^2 = 1$ and
\[
A_2(z) = 1 - 1.6408 z^{-1} + 2.2044 z^{-2} - 1.4808 z^{-3} + 0.8145 z^{-4}
\]
\[
B_2(z) = 1 + 1.1100 z^{-1} + 0.4706 z^{-2}
\]
Find ARMA($2m, m$) models for $m = 1, \dots, 6$ (so $n = 3m + 1$ in equation (3.10.12)) using the two-stage LS ARMA method with K = 8m. Compare the narrowband ARMA and broadband ARMA order selection results, and discuss the relative order selection performance for these two ARMA processes.

Exercise C3.20: AR and ARMA Estimators Applied to Measured Data

Consider the data sets in the files sunspotdata.mat and lynxdata.mat. These files can be obtained from the text web site www.prenhall.com/stoica. Apply your favorite AR and ARMA estimator(s) (for the lynx data, use both the original data and the logarithmically transformed data as in Exercise C2.23) to estimate the spectral content of these data. You will also need to determine appropriate model orders m and n (see, e.g., Exercise C3.19).
As in Exercise C2.23, try to answer the following questions: Are there sinusoidal components (or periodic structure) in the data? If so, how many components and at what frequencies? Discuss the relative strengths and weaknesses of parametric and nonparametric estimators for understanding the spectral content of these data. In particular, discuss how a combination of the two techniques can be used to estimate the spectral and periodic structure of the data.

CHAPTER 4

Parametric Methods for Line Spectra

4.1 INTRODUCTION

In several applications, particularly in communications, radar, sonar, geophysical seismology, and so forth, the signals dealt with can be well described by the following sinusoidal model:
\[
y(t) = x(t) + e(t); \qquad x(t) = \sum_{k=1}^{n} \alpha_k e^{i(\omega_k t + \varphi_k)} \tag{4.1.1}
\]
where $x(t)$ denotes the noise-free complex-valued sinusoidal signal; $\{\alpha_k\}$, $\{\omega_k\}$, $\{\varphi_k\}$ are its amplitudes, (angular) frequencies, and initial phases, respectively; and $e(t)$ is an additive observation noise. The complex-valued form (4.1.1), of course, is not encountered in practice as it stands; practical signals are real valued. However, as already mentioned in Chapter 1, in many applications both the in-phase and quadrature components of the studied signal are available. (See Chapter 6 for more details on this aspect.) In the case of a (real-valued) sinusoidal signal, this means that both the sine and the corresponding cosine components are available. These two components may be processed by arranging them in a two-dimensional vector signal or a complex-valued signal of the form of (4.1.1). Since the complex-valued description (4.1.1) of the in-phase and quadrature components of a sinusoidal signal is the most convenient one from a mathematical standpoint, we focus on it in this chapter. The noise $\{e(t)\}$ in (4.1.1) is usually assumed to be (complex-valued) circular white noise as defined in (2.4.19). We also make the white noise assumption in this chapter.
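The in-phase/quadrature construction described above amounts to nothing more than Euler's formula; a small Python sketch, with amplitude, frequency, and phase values chosen arbitrarily for illustration:

```python
import numpy as np

t = np.arange(64)
alpha, omega, phi = 2.0, 0.3 * np.pi, 0.7   # arbitrary example parameters

# In-phase and quadrature components of a real sinusoid ...
i_comp = alpha * np.cos(omega * t + phi)
q_comp = alpha * np.sin(omega * t + phi)

# ... arranged as the complex-valued signal of (4.1.1)
x = i_comp + 1j * q_comp
assert np.allclose(x, alpha * np.exp(1j * (omega * t + phi)))
```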
We may argue in the following way that the white noise assumption is not particularly restrictive. Let the continuous-time counterpart of the noise in (4.1.1) be correlated, but assume that the "correlation time" of the continuous-time noise is less than half of the shortest period of the sine wave components in the continuous-time counterpart of $x(t)$ in (4.1.1). If this mild condition is satisfied, then choosing the sampling period larger than the noise correlation time (yet smaller than half the shortest sinusoidal signal period, to avoid aliasing) results in a white discrete-time noise sequence $\{e(t)\}$. If the correlation condition above is not satisfied, but we know the shape of the noise spectrum, we can filter $y(t)$ by a linear whitening filter which makes the noise component at the filter output white; the sinusoidal components remain sinusoidal with the same frequencies, and with amplitudes and phases altered in a known way.

If the noise process is not white and has unknown spectral shape, then accurate frequency estimates can still be found if we estimate the sinusoids using the nonlinear least squares (NLS) method in Section 4.3 (see [Stoica and Nehorai 1989b], for example). Indeed, the properties of the NLS estimates in the colored and unknown noise case are quite similar to those for the white noise case, only with the sinusoidal signal amplitudes "adjusted" to give corresponding local SNRs (the signal-to-noise power ratio at each frequency $\omega_k$). This amplitude adjustment is the same as that realized by the whitening filter approach. It is important to note that these comments apply only if the NLS method is used. The other estimation methods in this chapter (e.g., the subspace-based methods) depend on the assumption that the noise is white, and may be adversely affected if the noise is not white (or is not prewhitened).
Concerning the signal in (4.1.1), we assume that $\omega_k \in [-\pi, \pi]$ and that $\alpha_k > 0$. We need to specify the sign of $\{\alpha_k\}$; otherwise we are left with a phase ambiguity. More precisely, without the condition $\alpha_k > 0$ in (4.1.1), both $\{\alpha_k, \omega_k, \varphi_k\}$ and $\{-\alpha_k, \omega_k, \varphi_k + \pi\}$ give the same signal $\{x(t)\}$, so the parameterization is not unique. As to the initial phases $\{\varphi_k\}$ in (4.1.1), one could assume that they are fixed (nonrandom) constants, which would result in $\{x(t)\}$ being a deterministic signal. In most applications, however, $\{\varphi_k\}$ are nuisance parameters and it is more convenient to assume that they are random variables. Note that if we try to mimic the conditions of a previous experiment as much as possible, we will usually be unable to ensure the same initial phases of the sine waves in the observed sinusoidal signal (this will be particularly true for received signals). Since there is usually no reason to believe that a specific set of initial phases is more likely than another one, or that two different initial phases are interrelated, we make the following assumption:

The initial phases $\{\varphi_k\}$ are independent random variables uniformly distributed on $[-\pi, \pi]$.  (4.1.2)

The covariance function and the PSD of the noisy sinusoidal signal $\{y(t)\}$ can be calculated in a straightforward manner under the assumptions made above. By using (4.1.2), we get
\[
E\left\{ e^{i\varphi_p} e^{-i\varphi_j} \right\} = 1 \qquad \text{for } p = j
\]
and, for $p \ne j$,
\[
E\left\{ e^{i\varphi_p} e^{-i\varphi_j} \right\}
= E\left\{ e^{i\varphi_p} \right\} E\left\{ e^{-i\varphi_j} \right\}
= \left( \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{i\varphi}\, d\varphi \right)
  \left( \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-i\varphi}\, d\varphi \right) = 0
\]
Thus,
\[
E\left\{ e^{i\varphi_p} e^{-i\varphi_j} \right\} = \delta_{p,j} \tag{4.1.3}
\]
Let
\[
x_p(t) = \alpha_p e^{i(\omega_p t + \varphi_p)} \tag{4.1.4}
\]
denote the $p$th sine wave in (4.1.1). It follows from (4.1.3) that
\[
E\left\{ x_p(t)\, x_j^*(t-k) \right\} = \alpha_p^2\, e^{i\omega_p k}\, \delta_{p,j} \tag{4.1.5}
\]
which, in turn, gives
\[
r(k) = E\{ y(t)\, y^*(t-k) \} = \sum_{p=1}^{n} \alpha_p^2\, e^{i\omega_p k} + \sigma^2 \delta_{k,0} \tag{4.1.6}
\]
and the derivation of the covariance function of $y(t)$ is completed.
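As a quick numerical check of (4.1.6), a Python sketch (the particular amplitudes, frequencies, and fixed phases are arbitrary choices of ours): for a noise-free two-component signal, the time average $\frac{1}{N}\sum_t x(t)\, x^*(t-k)$ approaches $\sum_p \alpha_p^2 e^{i\omega_p k}$ for large $N$, because the cross terms between distinct frequencies average out at the rate $O(1/N)$:

```python
import numpy as np

N = 32768
t = np.arange(N)
alphas = np.array([2.0, 1.0])
omegas = np.array([0.24 * np.pi, 0.26 * np.pi])
phis   = np.array([0.3, 1.1])   # fixed but arbitrary initial phases

# Noise-free x(t) = sum_p alpha_p exp(i(omega_p t + phi_p))
x = sum(a * np.exp(1j * (w * t + p)) for a, w, p in zip(alphas, omegas, phis))

def r_hat(k):
    """Time-average estimate of r(k) for the noise-free signal."""
    return np.mean(x[k:] * np.conj(x[:len(x) - k]))

for k in range(3):
    r_theory = np.sum(alphas**2 * np.exp(1j * omegas * k))  # (4.1.6) with sigma^2 = 0
    assert abs(r_hat(k) - r_theory) < 0.02
```

Note that (4.1.6) itself is an ensemble average over the random phases; the time average above agrees with it only asymptotically in $N$, which is why a loose tolerance is used.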
The PSD of $y(t)$ is given by the DTFT of $\{r(k)\}$ in (4.1.6), which is
\[
\phi(\omega) = 2\pi \sum_{p=1}^{n} \alpha_p^2\, \delta(\omega - \omega_p) + \sigma^2 \tag{4.1.7}
\]
where $\delta(\omega - \omega_p)$ is the Dirac impulse (or Dirac delta "function"), which, by definition, has the property that
\[
\int_{-\pi}^{\pi} F(\omega)\, \delta(\omega - \omega_p)\, d\omega = F(\omega_p) \tag{4.1.8}
\]
for any function $F(\omega)$ that is continuous at $\omega_p$. The expression (4.1.7) for $\phi(\omega)$ may be verified by inserting it in the inverse transform formula (1.3.8) and checking that the result is the covariance function. Doing so, we obtain
\[
\frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ 2\pi \sum_{p=1}^{n} \alpha_p^2\, \delta(\omega - \omega_p) + \sigma^2 \right] e^{i\omega k}\, d\omega
= \sum_{p=1}^{n} \alpha_p^2\, e^{i\omega_p k} + \sigma^2 \delta_{k,0} = r(k) \tag{4.1.9}
\]
which is the desired result. The PSD (4.1.7) is depicted in Figure 4.1. It consists of a "floor" of constant level equal to the noise power $\sigma^2$, along with $n$ vertical lines (or impulses) located at the sinusoidal frequencies $\{\omega_k\}$ and having zero support but nonzero areas equal to $2\pi$ times the sine wave powers $\{\alpha_k^2\}$. Owing to its appearance, as exhibited in Figure 4.1, $\phi(\omega)$ in (4.1.7) is called a line or discrete spectrum. It is evident from the previous discussion that a spectral analysis based on the parametric PSD model (4.1.7) reduces to the problem of estimating the parameters of the signal in (4.1.1). In most applications, such as those listed at the beginning of this chapter, the parameters of major interest are the locations of the spectral lines, namely the sinusoidal frequencies. In the following sections, we present a number of methods for spectral line analysis. We focus on the problem of frequency estimation, meaning determination of $\{\omega_k\}_{k=1}^{n}$ from a set of observations $\{y(t)\}_{t=1}^{N}$. Once the frequencies have been determined, estimation of the other signal parameters (or PSD parameters) becomes a simple linear regression problem.
More precisely, for given $\{\omega_k\}$ the observations $y(t)$ can be written as a linear regression function whose coefficients are equal to the remaining unknowns $\{\alpha_k e^{i\varphi_k} \triangleq \beta_k\}$:
\[
y(t) = \sum_{k=1}^{n} \beta_k e^{i\omega_k t} + e(t) \tag{4.1.10}
\]

[Figure 4.1 (not reproduced): a constant noise floor at level $\sigma^2$ on $\omega \in [-\pi, \pi]$, with impulses of area $2\pi\alpha_1^2$, $2\pi\alpha_2^2$, $2\pi\alpha_3^2$ at the frequencies $\omega_1, \omega_2, \omega_3$. Caption: "Figure 4.1. The PSD of a complex sinusoidal signal in additive white noise."]

If desired, $\{\beta_k\}$ (and hence $\{\alpha_k\}$, $\{\varphi_k\}$) in (4.1.10) can be obtained by a least squares method (as in equation (4.3.8) below). Alternatively, one may determine the signal powers $\{\alpha_k^2\}$, for given $\{\omega_k\}$, from the sample version of (4.1.6):
\[
\hat r(k) = \sum_{p=1}^{n} \alpha_p^2\, e^{i\omega_p k} + \text{residuals}, \qquad k \ge 1 \tag{4.1.11}
\]
where the residuals arise from finite-sample estimation of $r(k)$; this is, once more, a linear regression with $\{\alpha_p^2\}$ as the unknown coefficients. The solution to either linear regression problem is straightforward and is discussed in Section A.8 of Appendix A.

The methods for frequency estimation that will be described in the following sections are sometimes called high-resolution (or, even, super-resolution) techniques. This is due to their ability to resolve spectral lines separated in frequency $f = \omega/2\pi$ by less than $1/N$ cycles per sampling interval, which is the resolution limit for the classical periodogram-based methods. All of the high-resolution methods to be discussed in the following provide consistent estimates of $\{\omega_k\}$ under the assumptions we made. Their consistency will surface in the following discussion in an obvious manner, and hence we do not need to pay special attention to this aspect. Nor do we discuss in detail other statistical properties of the frequency estimates obtained by these high-resolution methods, though in Appendix B we review the Cramér–Rao bound and the best accuracy that can be achieved by such methods.
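The linear regression step (4.1.10) mentioned above is a standard least squares fit once the frequencies are known. A Python sketch (the amplitudes, phases, and frequencies are made up; the noise is omitted, so the recovery is exact):

```python
import numpy as np

N = 128
t = np.arange(N)
omegas = np.array([0.24 * np.pi, 0.26 * np.pi])                       # assumed known
betas  = np.array([2.0 * np.exp(1j * 0.5), 1.0 * np.exp(-1j * 1.2)])  # alpha_k e^{i phi_k}

# Regression matrix Z with columns e^{i omega_k t}, and noise-free observations
Z = np.exp(1j * np.outer(t, omegas))
y = Z @ betas

# Least squares estimate of beta: minimize ||y - Z beta||^2
beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)

alpha_hat = np.abs(beta_hat)     # amplitude estimates
phi_hat   = np.angle(beta_hat)   # initial phase estimates
```

Because the two frequencies are distinct, $Z$ has full column rank and the fit recovers $\{\beta_k\}$ exactly in this noise-free setting.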
For derivations and discussions of the statistical properties not addressed in this text, we refer the interested reader to [Stoica, Söderström, and Ti 1989; Stoica and Söderström 1991; Stoica, Moses, Friedlander, and Söderström 1989; Stoica and Nehorai 1989b]. Let us briefly summarize the conclusions of these analyses: All the high-resolution methods presented in the following provide very accurate frequency estimates, with only small differences in their statistical performances. Furthermore, the computational burdens associated with these methods are rather similar. Hence, selecting one of the high-resolution methods for frequency estimation is essentially a "matter of taste," even though we will identify some advantages of one of these methods, named ESPRIT, over the others.

We should point out that the comparison in the previous paragraph between the high-resolution methods and the periodogram-based techniques is unfair in the sense that periodogram-based methods do not assume any knowledge about the data, whereas high-resolution methods exploit an exact description of the studied signal. Owing to the additional information assumed, a parametric method should be expected to offer better resolution than the nonparametric method of the periodogram. On the other hand, when no two spectral lines in the spectrum are separated by less than $1/N$, the unmodified periodogram turns out to be an excellent frequency estimator which may outperform any of the high-resolution methods (as we shall see). One may ask why the unmodified periodogram is preferred over the many windowed or smoothed periodogram techniques to which we paid so much attention in Chapter 2. The explanation actually follows from the discussion in that chapter. The unmodified periodogram can be viewed as a Blackman–Tukey "windowed" estimator with a rectangular window of maximum length equal to $2N+1$.
Of all window sequences, this is exactly the one which has the narrowest main lobe, and hence the one which affords the maximum spectral resolution, a desirable property for high-resolution spectral line scenarios. It should be noted, however, that if the sinusoidal components in the signal are not too closely spaced in frequency, but their amplitudes differ significantly from one another, then a mildly windowed periodogram (to avoid leakage) may perform better than the unwindowed periodogram: in the unwindowed periodogram, the weaker sinusoids may be obscured by the leakage from the stronger ones, and hence may not be visible in a plot of the estimated spectrum.

In order to simplify the discussion in this chapter, we assume that the number of sinusoidal components, n, in (4.1.1) is known. When n is unknown, which may well be the case in many applications, it can be determined from the available data as described, for example, in [Fuchs 1988; Kay 1988; Marple 1987; Proakis, Rader, Ling, and Nikias 1992; Söderström and Stoica 1989] and in Appendix C.

4.2 MODELS OF SINUSOIDAL SIGNALS IN NOISE

The frequency estimation methods presented in this chapter rely on three different models for the noisy sinusoidal signal (4.1.1). This section introduces the three models of (4.1.1).

4.2.1 Nonlinear Regression Model

The nonlinear regression model is given by (4.1.1) itself. Note that the {ω_k} enter (4.1.1) in a nonlinear fashion, hence the name "nonlinear regression" given to this type of model for {y(t)}. The other two models for {y(t)}, to be discussed in the following, are derived from (4.1.1); they are descriptions of the data that are not as complete as (4.1.1). However, they preserve the information required to determine the frequencies {ω_k} which, as already stated, are the parameters of major interest.
Hence, in some sense, these two models are more appropriate for frequency estimation, since they do not include some of the nuisance parameters which appear in (4.1.1).

4.2.2 ARMA Model

It can be readily verified that

(1 - e^{i\omega_k} z^{-1})\, x_k(t) \equiv 0    (4.2.1)

where z^{-1} denotes the unit delay (or shift) operator introduced in Chapter 1. Hence, (1 - e^{i\omega_k} z^{-1}) is an annihilating filter for the kth component of x(t). By using this simple observation, we obtain the following homogeneous AR equation for {x(t)},

A(z)\, x(t) = 0    (4.2.2)

and the following ARMA model for the noisy data {y(t)}:

A(z)\, y(t) = A(z)\, e(t),    A(z) = \prod_{k=1}^{n} (1 - e^{i\omega_k} z^{-1})    (4.2.3)

It may be a useful exercise to derive equation (4.2.2) in a different way. The PSD of x(t) consists of n spectral lines located at {ω_k}_{k=1}^{n}. It should then be clear, in view of the relation (1.4.9) governing the transfer of a PSD through a linear system, that any filter which has zeroes at the frequencies {ω_k} is an annihilating filter for x(t). The polynomial A(z) in (4.2.3) is the simplest kind of such an annihilating filter. This polynomial bears complete information about {ω_k}, and hence the problem of estimating the frequencies can be reduced to that of determining A(z).

We remark that the ARMA model (4.2.3) has a very special form (a reason for which it is sometimes called a "degenerate" ARMA): all its poles and zeroes are located exactly on the unit circle, and its AR and MA parts are identical. It might be tempting to cancel the common poles and zeroes in (4.2.3). However, such an operation leads to the wrong conclusion that y(t) = e(t) and, therefore, should be invalid. Let us explain briefly why cancelation in (4.2.3) is not allowed.
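The annihilating-filter property (4.2.2) is easy to verify numerically. The sketch below (the frequencies are hypothetical) builds the coefficients of A(z) = Π_k (1 − e^{iω_k} z^{−1}) by polynomial multiplication and applies the filter to a noise-free x(t); after the initial transient, the output vanishes to machine precision.

```python
import numpy as np

# Noise-free signal x(t): a sum of n complex sinusoids (hypothetical frequencies)
omegas = [0.5, 1.3, 2.0]
t = np.arange(200)
x = sum(np.exp(1j * w * t) for w in omegas)

# Coefficients of A(z) = prod_k (1 - e^{i omega_k} z^{-1}), via convolution of factors
a = np.array([1.0 + 0j])
for w in omegas:
    a = np.convolve(a, [1.0, -np.exp(1j * w)])

# Filtering x(t) with A(z): sum_p a_p x(t - p); discard the first n transient samples
out = np.convolve(x, a)[len(a) - 1 : len(x)]
print(np.max(np.abs(out)))    # essentially zero
```

Each sinusoid e^{iωt} passes through the filter as e^{iωt} A(e^{iω}), and A(e^{iω_k}) = 0 by construction, so every component is annihilated exactly.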
The ARMA equation description of a signal y(t) is asymptotically equivalent to the associated transfer function description (in the sense that both give the same signal sequence, for t → ∞) if and only if the poles are situated strictly inside the unit circle. If there are poles on the unit circle, then the equivalence between these two descriptions ceases. In particular, the solution of an ARMA equation with poles on the unit circle depends strongly on the initial conditions, whereas the transfer function description does not include any dependence on initial values.

4.2.3 Covariance Matrix Model

A notation that will often be used in the following is:

a(\omega) \triangleq [1\ \ e^{-i\omega}\ \ \ldots\ \ e^{-i(m-1)\omega}]^T  (m × 1),    A = [a(\omega_1)\ \ldots\ a(\omega_n)]  (m × n)    (4.2.4)

In (4.2.4), m is a positive integer which is not yet specified. Note that the matrix A introduced above is a Vandermonde matrix, which enjoys the following rank property (see Result R24 in Appendix A):

rank(A) = n  if m ≥ n and ω_k ≠ ω_p for k ≠ p    (4.2.5)

By making use of the previous notation, along with (4.1.1) and (4.1.4), we can write

\tilde y(t) \triangleq [y(t)\ \ y(t-1)\ \ \ldots\ \ y(t-m+1)]^T = A \tilde x(t) + \tilde e(t)
\tilde x(t) = [x_1(t)\ \ldots\ x_n(t)]^T,    \tilde e(t) = [e(t)\ \ldots\ e(t-m+1)]^T    (4.2.6)

The following expression for the covariance matrix of ỹ(t) can be readily derived from (4.1.5) and (4.2.6):

R \triangleq E\{\tilde y(t)\, \tilde y^*(t)\} = A P A^* + \sigma^2 I,    P = \mathrm{diag}(\alpha_1^2, \ldots, \alpha_n^2)    (4.2.7)

The above equation constitutes the covariance matrix model of the data. As we will show later, the eigenstructure of R contains complete information on the frequencies {ω_k}, and this is exactly where the usefulness of (4.2.7) lies. From equations (4.2.6) and (4.1.5), we also derive for later use the following result:

\Gamma \triangleq E\left\{ [y(t-L-1)\ \ldots\ y(t-L-M)]^T\, [y^*(t)\ \ldots\ y^*(t-L)] \right\} = A_M P_{L+1} A_{L+1}^*    (L, M ≥ 1)    (4.2.8)

where A_K stands for A in (4.2.4) with m = K, and

P_K = \mathrm{diag}(\alpha_1^2 e^{-i\omega_1 K}, \ldots, \alpha_n^2 e^{-i\omega_n K})

As we explain in detail later, the null space of the matrix Γ (with L, M ≥ n) gives complete information on the frequencies {ω_k}.

4.3 NONLINEAR LEAST SQUARES METHOD

An intuitively appealing approach to spectral line analysis, based on the nonlinear regression model (4.1.1), consists of determining the unknown parameters as the minimizers of the following criterion:

f(\omega, \alpha, \phi) = \sum_{t=1}^{N} \left| y(t) - \sum_{k=1}^{n} \alpha_k e^{i(\omega_k t + \phi_k)} \right|^2    (4.3.1)

where ω is the vector of frequencies ω_k, and similarly for α and φ. The sinusoidal model determined as above has the smallest "sum of squares" distance to the observed data {y(t)}_{t=1}^{N}. Since f is a nonlinear function of its arguments {ω, φ, α}, the method which obtains parameter estimates by minimizing (4.3.1) is called the nonlinear least squares (NLS) method. When the (white) noise e(t) is Gaussian distributed, the minimization of (4.3.1) can also be interpreted as the method of maximum likelihood (see Appendices B and C); in that case, minimization of (4.3.1) can be shown to provide the parameter values which are most likely to "explain" the observed data sequence (see [Söderström and Stoica 1989; Kay 1988; Marple 1987]).

The criterion in (4.3.1) depends on both {α_k} and {φ_k} as well as on {ω_k}. However, it can be concentrated with respect to the nuisance parameters {α_k, φ_k}, as explained next. By making use of the following notation,

\beta_k = \alpha_k e^{i\phi_k}    (4.3.2)
\beta = [\beta_1\ \ldots\ \beta_n]^T    (4.3.3)
Y = [y(1)\ \ldots\ y(N)]^T    (4.3.4)
B = \begin{bmatrix} e^{i\omega_1} & \ldots & e^{i\omega_n} \\ \vdots & & \vdots \\ e^{iN\omega_1} & \ldots & e^{iN\omega_n} \end{bmatrix}    (4.3.5)

we can write the function f in (4.3.1) as

f = (Y - B\beta)^* (Y - B\beta)    (4.3.6)

The Vandermonde matrix B in (4.3.5) (which resembles the matrix A defined in (4.2.4)) has full column rank equal to n under the weak condition that N ≥ n; in this case, (B^*B)^{-1} exists. By using this observation, we can put (4.3.6) in the more convenient form

f = [\beta - (B^*B)^{-1}B^*Y]^* [B^*B] [\beta - (B^*B)^{-1}B^*Y] + Y^*Y - Y^*B(B^*B)^{-1}B^*Y    (4.3.7)

For any choice of ω = [ω_1, …, ω_n]^T in B (such that ω_k ≠ ω_p for k ≠ p), we can choose β to make the first term of f zero; thus, we see that the vectors ω and β which minimize f are given by

\hat\omega = \arg\max_{\omega}\, [Y^* B (B^*B)^{-1} B^* Y],    \hat\beta = (B^*B)^{-1} B^* Y \big|_{\omega = \hat\omega}    (4.3.8)

It can be shown that, as N tends to infinity, ω̂ obtained as above converges to ω (i.e., ω̂ is a consistent estimate) and, in addition, the estimation errors {ω̂_k − ω_k} have the following (asymptotic) covariance matrix:

\mathrm{Cov}(\hat\omega) = \frac{6\sigma^2}{N^3}\, \mathrm{diag}(1/\alpha_1^2, \ldots, 1/\alpha_n^2)    (4.3.9)

(see [Stoica and Nehorai 1989a; Stoica, Moses, Friedlander, and Söderström 1989]). In the case of Gaussian noise, the matrix in (4.3.9) can also be shown to equal the Cramér–Rao limit matrix, which gives a lower bound on the covariance matrix of any unbiased estimator of ω (see Appendix B). Hence, under the Gaussian hypothesis the NLS method provides the most accurate (i.e., minimum variance) frequency estimates in a fairly general class of estimators. As a matter of fact, the variance of {ω̂_k} (as given by (4.3.9)) may take quite small values for reasonably large sample lengths N and signal-to-noise ratios SNR_k = α_k²/σ². For example, for N = 300 and SNR_k = 30 dB it follows from (4.3.9) that we may expect frequency estimation errors on the order of 10^{-5}, which is comparable with the roundoff errors in a 32-bit fixed-point processor.
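As an illustration of the concentrated criterion in (4.3.8), the following sketch evaluates Y*B(B*B)^{-1}B*Y over all frequency pairs drawn from a coarse grid and picks the maximizing pair. The two frequencies, the noise level, and the grid density are hypothetical choices; in practice such a coarse-grid estimate would only initialize a local search for the sharp global maximum discussed below.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N, n = 100, 2
t = np.arange(1, N + 1)
true_w = np.array([0.9, 1.1])                  # hypothetical frequencies
y = (np.exp(1j * true_w[0] * t) + np.exp(1j * true_w[1] * t)
     + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

def criterion(w):
    """Concentrated NLS criterion Y* B (B*B)^{-1} B* Y of (4.3.8)."""
    B = np.exp(1j * np.outer(t, w))
    proj = B @ np.linalg.solve(B.conj().T @ B, B.conj().T @ y)
    return np.real(y.conj() @ proj)

# Exhaustive search over pairs of distinct points from a coarse frequency grid
grid = np.linspace(0, 2 * np.pi, 200, endpoint=False)
w_hat = max(combinations(grid, n), key=criterion)
```

Note that the criterion measures the energy of y captured by projecting onto the span of the two candidate sinusoid vectors, so the best pair must place one candidate near each true frequency.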
The NLS method has another advantage that sets it apart from the subspace-based approaches discussed in the remainder of the chapter: it does not critically depend on the assumption that the noise process is white. If the noise process is not white, the NLS method still gives consistent frequency estimates. In fact, the asymptotic covariance matrix of the frequency estimates is diagonal, with var(ω̂_k) = 6/(N³ SNR_k), where SNR_k = α_k²/φ_n(ω_k) (here φ_n(ω) is the noise PSD) is the "local" signal-to-noise ratio of the sinusoid at frequency ω_k (see [Stoica and Nehorai 1989b], for example). Interestingly enough, the NLS method remains the most accurate method (if the data length is large) even in those cases where the (Gaussian) noise is colored [Stoica and Nehorai 1989b]. This fact spurred a renewed interest in the NLS approach and in reliable algorithms for performing the minimization required in (4.3.1) (see, e.g., [Hwang and Chen 1993; Ying, Potter, and Moses 1994; Li and Stoica 1996b; Umesh and Tufts 1996] and Complement 4.9.5).

Unfortunately, the good statistical performance associated with the NLS method of frequency estimation is difficult to achieve, for the following reason. The function (4.3.8) has a complicated multimodal shape, with a very sharp global maximum corresponding to ω̂ [Stoica, Moses, Friedlander, and Söderström 1989]. Hence, finding ω̂ by a search algorithm requires very accurate initialization. Initialization procedures that provide fairly accurate approximations of the maximizer of (4.3.8) have been proposed in [Kumaresan, Scharf, and Shaw 1986; Bresler and Macovski 1986; Ziskind and Wax 1988]. However, there is no available method which is guaranteed to provide frequency estimates within the attraction domain of the global maximum ω̂ of (4.3.8). As a consequence, a search algorithm may well fail to converge to ω̂, or may even diverge.
The kind of difficulties indicated above, which must be faced when using the NLS method in applications, limits the practical interest in this approach to frequency estimation. There are, however, some instances in which the NLS approach may be turned into a practical frequency estimation method.

Consider, first, the case of a single sine wave (n = 1). A straightforward calculation shows that, in such a case, the first equation in (4.3.8) can be rewritten in the following form:

\hat\omega = \arg\max_{\omega}\, \hat\phi_p(\omega)    (4.3.10)

where φ̂_p(ω) is the periodogram (see (2.2.1)),

\hat\phi_p(\omega) = \frac{1}{N} \left| \sum_{t=1}^{N} y(t)\, e^{-i\omega t} \right|^2    (4.3.11)

Hence, the NLS estimate of the frequency of a single sine wave buried in observation noise is given precisely by the highest peak of the unmodified periodogram. Note that the above result is only approximately true (for N ≫ 1) in the case of real-valued sinusoidal signals, a fact which lends additional support to the claim made in Chapter 1 that the analysis of the case of real-valued signals faces additional complications not encountered in the complex-valued case. Each real-valued sinusoid can be written as a sum of two complex exponentials, and the treatment of the real case with n = 1 is similar to that of the complex case with n > 1 presented below.

Next, consider the case of multiple sine waves (n > 1). The key condition that makes it possible to treat this case in a manner similar to the one above is that the minimum frequency separation between the sine waves in the studied signal is larger than the periodogram's resolution limit:

\Delta\omega = \inf_{k \neq p} |\omega_k - \omega_p| > 2\pi/N    (4.3.12)

Since the estimation errors {ω̂_k − ω_k} of the NLS estimates are of order O(1/N^{3/2}) (because Cov(ω̂) = O(1/N³); see (4.3.9)), equation (4.3.12) implies a similar inequality for the NLS frequency estimates {ω̂_k}: Δω̂ > 2π/N.
It should then be possible to resolve all n sine waves in the noisy signal, and to obtain reasonable approximations {ω̃_k} to {ω̂_k}, by evaluating the function in (4.3.8) at the points of a grid corresponding to the sampling of each frequency variable as in the FFT:

\omega_k = \frac{2\pi}{N}\, j,    j = 0, \ldots, N-1  (k = 1, \ldots, n)    (4.3.13)

Of course, a direct application of such a grid method for the approximate maximization of (4.3.8) would be computationally burdensome for large values of n or N. However, it can be greatly simplified, as described in the following. The (p, k) element of the matrix B^*B occurring in (4.3.8), when evaluated at the points of the grid (4.3.13), is given by

[B^*B]_{p,k} = N  for p = k    (4.3.14)

and

[B^*B]_{p,k} = \sum_{t=1}^{N} e^{i(\omega_k - \omega_p)t} = e^{i(\omega_k - \omega_p)}\, \frac{e^{iN(\omega_k - \omega_p)} - 1}{e^{i(\omega_k - \omega_p)} - 1} = 0  for p ≠ k    (4.3.15)

which implies that the function to be maximized in (4.3.8) has, in such a case, the following form:

\sum_{k=1}^{n} \frac{1}{N} \left| \sum_{t=1}^{N} y(t)\, e^{-i\omega_k t} \right|^2    (4.3.16)

This additive decomposition into n functions of ω_1, …, ω_n (respectively) leads to the conclusion that {ω̃_k} (which, by definition, maximize (4.3.16) at the points of the grid (4.3.13)) are given by the n largest peaks of the periodogram. To show this, let us write the function in (4.3.16) as

g(\omega_1, \ldots, \omega_n) = \sum_{k=1}^{n} \hat\phi_p(\omega_k)

where φ̂_p(ω) is once again the periodogram. Observe that

\frac{\partial g}{\partial \omega_k} = \hat\phi_p'(\omega_k)  and  \frac{\partial^2 g}{\partial \omega_k\, \partial \omega_j} = \hat\phi_p''(\omega_k)\, \delta_{k,j}

Hence, the maximum points of (4.3.16) satisfy

\hat\phi_p'(\omega_k) = 0  and  \hat\phi_p''(\omega_k) < 0,  for k = 1, \ldots, n

It follows that the set of maximizers of (4.3.16) is given by all possible combinations of n elements from the set of the periodogram's peak locations. Now, recall the assumption made that the {ω_k}, and hence their estimates {ω̂_k}, are distinct. Under this assumption, the highest maximum of g(ω_1, …, ω_n) is attained at the locations of the n largest peaks of φ̂_p(ω), which is the desired result. The above findings are summarized as follows:

Under the condition (4.3.12), the unmodified periodogram resolves all the n sine waves present in the noisy signal. Furthermore, the locations {ω̃_k} of the n largest peaks in the periodogram provide O(1/N) approximations to the NLS frequency estimates {ω̂_k}. In the case of n = 1, we have ω̃_1 = ω̂_1 exactly.    (4.3.17)

The fact that the differences {ω̃_k − ω̂_k} are O(1/N) means, of course, that the computationally convenient estimates {ω̃_k} (derived from the periodogram) will generally have an inflated variance compared with that of {ω̂_k}. However, {ω̃_k} can at least be used as initial values in a numerical implementation of the NLS estimator. In any case, the above discussion indicates that, under (4.3.12), the periodogram performs quite well as a frequency estimator (which actually is the task for which it was introduced by Schuster nearly a century ago!).

In the following sections, we present several "high-resolution" methods for frequency estimation, which exploit the covariance matrix models. More precisely, all of these methods derive frequency estimates by exploiting the properties of the eigendecomposition of data covariance matrices and, in particular, the subspaces associated with those matrices. For this reason, these methods are sometimes referred to by the generic name of subspace methods. However, in spite of their common subspace theme, the methods are quite different, and we will treat them in separate sections below.
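The boxed result above is straightforward to exercise. The sketch below, with hypothetical on-grid frequencies chosen so that (4.3.12) holds, computes the unmodified periodogram via the FFT and takes the locations of its n largest values as the frequency estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 256, 3
t = np.arange(N)
# Hypothetical frequencies on the FFT grid, separated by more than 2*pi/N
true_w = 2 * np.pi * np.array([20, 50, 90]) / N
y = sum(np.exp(1j * w * t) for w in true_w) \
    + 0.2 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Unmodified periodogram evaluated on the FFT grid omega_j = 2*pi*j/N
phi = np.abs(np.fft.fft(y)) ** 2 / N

# Locations of the n largest periodogram values give O(1/N) frequency estimates
idx = np.argsort(phi)[-n:]
w_tilde = np.sort(2 * np.pi * idx / N)
```

For a finer estimate, these grid locations can then seed a local maximization of the continuous periodogram (e.g., by zero-padded FFT or a Newton step).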
The main features of these methods can be summarized as follows: (i) their statistical performance is close to the ultimate performance corresponding to the NLS method (and given by the Cramér–Rao lower bound, (4.3.9)); (ii) unlike the NLS method, these methods are not based on multidimensional search procedures; and (iii) they do not depend on a "resolution condition" such as (4.3.12), which means that they may generally have a lower resolution threshold than that of the periodogram. The chief drawback of these methods, as compared with the NLS method, is that their performance significantly degrades if the measurement noise in (4.1.1) cannot be assumed to be white.

4.4 HIGH-ORDER YULE–WALKER METHOD

The high-order Yule–Walker (HOYW) method of frequency estimation can be derived from the ARMA model of the sinusoidal data, (4.2.3), similarly to its counterpart in the rational PSD case (see Section 3.7 and [Cadzow 1982; Stoica, Söderström, and Ti 1989; Stoica, Moses, Söderström, and Li 1991]). Actually, the HOYW method is based on an ARMA model of an order L higher than the minimal order n, for a reason that will be explained shortly.

If the polynomial A(z) in (4.2.3) is multiplied by any other polynomial Ā(z), say of degree equal to L − n, then we obtain a higher-order ARMA representation of our sinusoidal data, given by

y(t) + b_1 y(t-1) + \ldots + b_L y(t-L) = e(t) + b_1 e(t-1) + \ldots + b_L e(t-L)    (4.4.1)

or B(z) y(t) = B(z) e(t), where

B(z) = 1 + \sum_{k=1}^{L} b_k z^{-k} \triangleq A(z)\, \bar A(z)    (4.4.2)

Equation (4.4.1) can be rewritten in the following more condensed form (with obvious notation):

[y(t)\ \ y(t-1)\ \ \ldots\ \ y(t-L)] \begin{bmatrix} 1 \\ b \end{bmatrix} = e(t) + \ldots + b_L e(t-L)    (4.4.3)

Premultiplying (4.4.3) by [y^*(t-L-1)\ \ldots\ y^*(t-L-M)]^T and taking the expectation leads to

\Gamma^c \begin{bmatrix} 1 \\ b \end{bmatrix} = 0    (4.4.4)

where Γ is the matrix defined in (4.2.8), the superscript c denotes complex conjugation, and M is a positive integer which is yet to be specified.
In order to obtain (4.4.4) as indicated above, we made use of the fact that E{y^*(t-k) e(t)} = 0 for k > 0. The similarity of (4.4.4) to the Yule–Walker system of equations encountered in Chapter 3 (see equation (3.7.1)) is more readily seen if (4.4.4) is rewritten in the following more detailed form:

\begin{bmatrix} r(L) & \ldots & r(1) \\ \vdots & & \vdots \\ r(L+M-1) & \ldots & r(M) \end{bmatrix} b = - \begin{bmatrix} r(L+1) \\ \vdots \\ r(L+M) \end{bmatrix}    (4.4.5)

Owing to this analogy, the set of equations (4.4.5) associated with the noisy sinusoidal signal {y(t)} is said to form a HOYW system.

The HOYW matrix equation (4.4.4) can also be obtained directly from (4.2.8). For any L ≥ n and any polynomial Ā(z) (used in the defining equation, (4.4.2), for b), the elements of the vector

A_{L+1}^T \begin{bmatrix} 1 \\ b \end{bmatrix}    (4.4.6)

are equal to zero. Indeed, the kth row of (4.4.6) is

[1\ \ e^{-i\omega_k}\ \ \ldots\ \ e^{-iL\omega_k}] \begin{bmatrix} 1 \\ b \end{bmatrix} = 1 + \sum_{p=1}^{L} b_p e^{-i\omega_k p} = A(\omega_k)\, \bar A(\omega_k) = 0,  k = 1, \ldots, n    (4.4.7)

(since A(ω_k) = 0; cf. (4.2.3)). It follows from (4.2.8) and (4.4.7) that the vector [1\ b^T]^T lies in the null space of Γ^c (see Definition D2 in Appendix A), which is the desired result, (4.4.4).

The HOYW system of equations derived above can be used for frequency estimation in the following way. By replacing the unavailable theoretical covariances {r(k)} in (4.4.5) by the sample covariances {r̂(k)}, we obtain

\begin{bmatrix} \hat r(L) & \ldots & \hat r(1) \\ \vdots & & \vdots \\ \hat r(L+M-1) & \ldots & \hat r(M) \end{bmatrix} \hat b \simeq - \begin{bmatrix} \hat r(L+1) \\ \vdots \\ \hat r(L+M) \end{bmatrix}    (4.4.8)

Owing to the estimation errors in {r̂(k)}, the matrix equation (4.4.8) cannot hold exactly in the general case, for any vector b̂, which is indicated above by the use of the "approximate equality" symbol ≃.
We can solve (4.4.8) for b̂, in a sense that is discussed in detail below, then form the polynomial

1 + \sum_{k=1}^{L} \hat b_k z^{-k}    (4.4.9)

and finally (in view of (4.2.3) and (4.4.2)) obtain the frequency estimates {ω̂_k} as the angular positions of the n roots of (4.4.9) that are located nearest the unit circle.

It may be expected that increasing the values of M and L results in improved frequency estimates. Indeed, by increasing M and L we use higher-lag covariances in (4.4.8), which may bear "additional information" on the data at hand. Increasing M and L also has a second, more subtle, effect that is explained next. Let Ω denote the M × L covariance matrix in (4.4.5) and, similarly, let Ω̂ denote the sample covariance matrix in (4.4.8). It can be seen from (4.2.8) that

rank(\Omega) = n  for M, L ≥ n    (4.4.10)

On the other hand, the matrix Ω̂ has full rank (almost surely),

rank(\hat\Omega) = \min(M, L)    (4.4.11)

owing to the random errors in {r̂(k)}. However, for reasonably large values of N the matrix Ω̂ is close to the rank-n matrix Ω, since the sample covariances {r̂(k)} converge to {r(k)} as N increases (this is shown in Complement 4.9.1). Hence, we may expect the linear system (4.4.8) to be ill-conditioned from a numerical standpoint (see the discussion in Section A.8.1 in Appendix A). In fact, there is compelling empirical evidence that any LS procedure which determines b̂ directly from (4.4.8) has very poor accuracy.

In order to overcome the previously described difficulty, we can make use of the a priori rank information (4.4.10). However, some preparations are required before we shall be able to do so. Let

\hat\Omega = U \Sigma V^* = [U_1\ \ U_2] \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix} \begin{bmatrix} V_1^* \\ V_2^* \end{bmatrix}    (4.4.12)

denote the singular value decomposition (SVD) of the matrix Ω̂, where U_1 contains the first n columns of U, U_2 the remaining M − n columns, V_1^* the first n rows of V^*, and V_2^* the remaining L − n rows (see Section A.4 in Appendix A, and [Söderström and Stoica 1989; Van Huffel and Vandewalle 1991] for general discussions of the SVD). In (4.4.12), U is an M × M unitary matrix, V is an L × L unitary matrix, and Σ is an M × L diagonal matrix.
As Ω̂ is close to a rank-n matrix, Σ_2 in (4.4.12) should be close to zero, which implies that

\hat\Omega_n \triangleq U_1 \Sigma_1 V_1^*    (4.4.13)

should be a good approximation of Ω̂. In fact, it can be proven that Ω̂_n above is the best (in the Frobenius-norm sense) rank-n approximation of Ω̂ (see Result R18 in Appendix A). Hence, in accordance with the rank information (4.4.10), we can use Ω̂_n in (4.4.8) in lieu of Ω̂. The so-obtained rank-truncated HOYW system of equations,

\hat\Omega_n \hat b \simeq - [\hat r(L+1)\ \ \ldots\ \ \hat r(L+M)]^T    (4.4.14)

can be solved in a numerically sound way by using a simple LS procedure. It is readily verified that

\hat\Omega_n^{\dagger} = V_1 \Sigma_1^{-1} U_1^*    (4.4.15)

is the pseudoinverse of Ω̂_n (see Definition D15 and Result R32). Hence, the LS solution to (4.4.14) is given by

\hat b = - V_1 \Sigma_1^{-1} U_1^*\, [\hat r(L+1)\ \ \ldots\ \ \hat r(L+M)]^T    (4.4.16)

The additional bonus for using Ω̂_n instead of Ω̂ in (4.4.8) is an improvement in the statistical accuracy of the frequency estimates obtained from (4.4.16). This improved accuracy is explained by the fact that Ω̂_n should be closer to Ω than Ω̂ is; the improved covariance matrix estimate Ω̂_n obtained by exploitation of the rank information (4.4.10), when used in the HOYW system of equations, should lead to refined frequency estimates. We remark that a total least squares (TLS) solution for b̂ can also be obtained from (4.4.8) (see Definition D17 and Result R33 in Appendix A). A TLS solution makes sense because we have errors in both Ω̂ and the right-hand-side vector of equation (4.4.8). In fact, the TLS-based estimate of b is often slightly better than the estimate discussed above, which is obtained as the LS solution to the rank-truncated system of linear equations in (4.4.14).

We next return to the selection of L and M.
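Putting the pieces of the HOYW method together, the following sketch forms the sample-covariance matrix Ω̂, truncates its SVD to rank n as in (4.4.13), solves for b̂ via (4.4.16), and roots the polynomial (4.4.9). The two-sinusoid data, the noise level, and the choice L = M = 20 are hypothetical values for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 500, 2
t = np.arange(N)
true_w = np.array([0.7, 2.1])                  # hypothetical frequencies
y = sum(np.exp(1j * w * t) for w in true_w) \
    + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Sample covariances r_hat(k) = (1/N) sum_t y(t) y*(t-k), for k = 0, ..., L+M
L = M = 20
r = np.array([np.sum(y[k:] * np.conj(y[: N - k])) / N for k in range(L + M + 1)])

# HOYW matrix Omega_hat (M x L) and right-hand side of (4.4.8)
Omega = np.array([[r[L + i - j] for j in range(L)] for i in range(M)])
rhs = -r[L + 1 : L + M + 1]

# Rank-n truncated SVD and the LS solution (4.4.16)
U, s, Vh = np.linalg.svd(Omega)
b = Vh[:n].conj().T @ ((U[:, :n].conj().T @ rhs) / s[:n])

# Roots of 1 + b_1 z^{-1} + ... + b_L z^{-L}; keep the n nearest the unit circle
roots = np.roots(np.concatenate(([1.0], b)))
signal = roots[np.argsort(np.abs(np.abs(roots) - 1.0))[:n]]
w_hat = np.sort(np.mod(np.angle(signal), 2 * np.pi))
```

The root-selection step implements Step 3 of the summary, relying on the empirical observation that the "noise roots" lie inside the unit circle.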
As M and L increase, the information brought into the estimation problem under study by the rank condition (4.4.10) becomes more and more important, and hence the corresponding increase of accuracy is more and more pronounced. (For instance, the information that a 10 × 10 noisy matrix has rank one in the noise-free case leads to more relations between the matrix elements, and hence to more "noise cleaning", than if the matrix were 2 × 2.) In fact, for M = n or L = n the rank condition is inactive, as Ω̂_n = Ω̂ in such a case. The previous discussion gives another explanation as to why the accuracy of the frequency estimates obtained from (4.4.16) may be expected to increase with increasing M and L.

The box below summarizes the HOYW frequency estimation method. It should be noted that the operation in Step 3 of the HOYW method is implicitly based on the assumption that the estimated "signal roots" (i.e., the roots of A(z) in (4.4.2)) are always closer to the unit circle than the estimated "noise roots" (i.e., the roots of Ā(z) in (4.4.2)). It can be shown that as N → ∞, all roots of Ā(z) are strictly inside the unit circle (see, e.g., Complement 6.5.1 and [Kumaresan and Tufts 1983]). While this property cannot be guaranteed in finite samples, there is empirical evidence that it holds most often. In those rare cases where it fails to hold, the HOYW method produces spurious (or false) frequency estimates. The risk of producing spurious estimates is the price paid for the improved accuracy obtained by increasing L (note that for L = n there is no "noise root", and hence no spurious estimate can occur in such a case). The risk of false frequency estimation is a problem common to all methods which estimate the frequencies from the roots of a polynomial of degree larger than n, such as the MUSIC and Min-Norm methods to be discussed in the next two sections.
The HOYW Frequency Estimation Method

Step 1. Compute the sample covariances {r̂(k)}_{k=1}^{L+M}. We may set L ≃ M and select the values of these integers so that L + M is a fraction of the sample length (such as N/3). Note that if L + M is set to a value which is too close to N, then the higher-lag covariances required in (4.4.8) cannot be estimated in a reliable way.

Step 2. Compute the SVD of Ω̂, (4.4.12), and determine b̂ with (4.4.16).

Step 3. Isolate the n roots of the polynomial (4.4.9) that are closest to the unit circle, and obtain the frequency estimates as the angular positions of these roots.

4.5 PISARENKO AND MUSIC METHODS

The MUltiple SIgnal Classification (or MUltiple SIgnal Characterization) (MUSIC) method [Schmidt 1979; Bienvenu 1979] and Pisarenko's method [Pisarenko 1973] (which is a special case of MUSIC, as explained below) are derived from the covariance model (4.2.7) with m > n. Let λ_1 ≥ λ_2 ≥ … ≥ λ_m denote the eigenvalues of R in (4.2.7), arranged in nonincreasing order; let {s_1, …, s_n} be the orthonormal eigenvectors associated with {λ_1, …, λ_n}, and {g_1, …, g_{m-n}} a set of orthonormal eigenvectors corresponding to {λ_{n+1}, …, λ_m} (see Appendix A). Since

rank(A P A^*) = n    (4.5.1)

it follows that APA^* has n strictly positive eigenvalues, the remaining m − n eigenvalues all being equal to zero. Combining this observation with the fact that (see Result R5 in Appendix A)

\lambda_k = \tilde\lambda_k + \sigma^2,  k = 1, \ldots, m    (4.5.2)

where {λ̃_k}_{k=1}^{m} are the eigenvalues of APA^* (arranged in nonincreasing order), leads to the following result:

\lambda_k > \sigma^2  for k = 1, \ldots, n;    \lambda_k = \sigma^2  for k = n+1, \ldots, m    (4.5.3)

The set of eigenvalues of R can hence be split into two subsets. Next, we show that the eigenvectors associated with each of these subsets, as introduced above, possess some interesting properties that can be used for frequency estimation. Let

S = [s_1, \ldots, s_n]  (m × n),    G = [g_1, \ldots, g_{m-n}]  (m × (m-n))    (4.5.4)

From (4.2.7) and (4.5.3), we get at once:

R G = G\, \mathrm{diag}(\lambda_{n+1}, \ldots, \lambda_m) = \sigma^2 G = A P A^* G + \sigma^2 G    (4.5.5)

The first equality in (4.5.5) follows from the definition of G and {λ_k}_{k=n+1}^{m}, the second equality follows from (4.5.3), and the third from (4.2.7). The last equality in (4.5.5) implies that APA^*G = 0, or (as the matrix AP has full column rank)

A^* G = 0    (4.5.6)

In other words, the columns {g_k} of G belong to the null space of A^*, a fact which is denoted by g_k ∈ N(A^*). Since rank(A) = n, the dimension of N(A^*) is equal to m − n, which is also the dimension of the range space of G, R(G). It follows from this observation and (4.5.6) that

R(G) = N(A^*)    (4.5.7)

In words, (4.5.7) says that the vectors {g_k} span both R(G) and N(A^*). Now, since by definition

S^* G = 0    (4.5.8)

we also have R(G) = N(S^*); hence, N(S^*) = N(A^*). Since R(S) and R(A) are the orthogonal complements of N(S^*) and N(A^*), it follows that

R(S) = R(A)    (4.5.9)

We can also derive the equality (4.5.9) directly from (4.2.7). Set

\mathring\Lambda = \mathrm{diag}(\lambda_1 - \sigma^2, \ldots, \lambda_n - \sigma^2)    (4.5.10)

From

R S = S\, \mathrm{diag}(\lambda_1, \ldots, \lambda_n) = A P A^* S + \sigma^2 S    (4.5.11)

we obtain

S = A\, (P A^* S \mathring\Lambda^{-1})    (4.5.12)

which shows that R(S) ⊂ R(A). However, R(S) and R(A) have the same dimension (equal to n); hence, (4.5.9) follows. Owing to (4.5.9) and (4.5.8), the subspaces R(S) and R(G) are sometimes called the signal subspace and noise subspace, respectively.

The following key result is obtained from (4.5.6):

The true frequency values {ω_k}_{k=1}^{n} are the only solutions of the equation a^*(\omega)\, G G^*\, a(\omega) = 0, for any m > n.    (4.5.13)

The fact that the {ω_k} satisfy the above equation follows from (4.5.6). It only remains to prove that {ω_k}_{k=1}^{n} are the only solutions to (4.5.13). Let ω̃ denote another possible solution, with ω̃ ≠ ω_k (k = 1, …, n).
In (4.5.13), GG^* is the orthogonal projector onto R(G) (see Section A.4). Hence, (4.5.13) implies that a(ω̃) is orthogonal to R(G), which means that a(ω̃) ∈ N(G^*). However, the Vandermonde vector a(ω̃) is linearly independent of {a(ω_k)}_{k=1}^{n}. Since n + 1 linearly independent vectors cannot belong to an n-dimensional subspace, which is N(G^*) in the present case, we conclude that no other solution ω̃ to (4.5.13) can exist; with this, the proof is finished.

The MUSIC algorithm uses the previous result to derive frequency estimates in the following steps.

Step 1. Compute the sample covariance matrix

\hat R = \frac{1}{N} \sum_{t=m}^{N} \tilde y(t)\, \tilde y^*(t)    (4.5.14)

and its eigendecomposition. Let Ŝ and Ĝ denote the matrices defined similarly to S and G, but made from the eigenvectors {ŝ_1, …, ŝ_n} and {ĝ_1, …, ĝ_{m-n}} of R̂.

Step 2a. (Spectral MUSIC) [Schmidt 1979; Bienvenu 1979]. Determine the frequency estimates as the locations of the n highest peaks of the function

\frac{1}{a^*(\omega)\, \hat G \hat G^*\, a(\omega)},    \omega \in [-\pi, \pi]    (4.5.15)

(Sometimes (4.5.15) is called a "pseudospectrum", since it indicates the presence of sinusoidal components in the studied signal but is not a true PSD. This fact may explain the attribute "spectral" attached to this variant of MUSIC.)

OR:

Step 2b. (Root MUSIC) [Barabell 1983]. Determine the frequency estimates as the angular positions of the n (pairs of reciprocal) roots of the equation

a^T(z^{-1})\, \hat G \hat G^*\, a(z) = 0    (4.5.16)

which are located nearest the unit circle. In (4.5.16), a(z) stands for the vector a(ω) of (4.2.4) with e^{iω} replaced by z, so a(z) = [1, z^{-1}, …, z^{-(m-1)}]^T.

For m = n + 1 (which is the minimum possible value), the MUSIC algorithm reduces to the Pisarenko method, which was the earliest proposal for an eigenanalysis-based (or subspace-based) method of frequency estimation [Pisarenko 1973].
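A compact realization of Steps 1 and 2a (spectral MUSIC) might look as follows. The data, the subvector dimension m, and the evaluation grid density are hypothetical choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n, m = 300, 2, 12
t = np.arange(N)
true_w = np.array([1.0, 1.15])                 # hypothetical frequencies
y = sum(np.exp(1j * w * t) for w in true_w) \
    + 0.3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Step 1: sample covariance (4.5.14) from snapshots y_tilde(t) = [y(t), ..., y(t-m+1)]^T
snap = np.array([y[i - m + 1 : i + 1][::-1] for i in range(m - 1, N)])
R = snap.T @ snap.conj() / snap.shape[0]

# Noise subspace: eigenvectors of the m - n smallest eigenvalues (eigh sorts ascending)
eigval, eigvec = np.linalg.eigh(R)
G = eigvec[:, : m - n]

# Step 2a: locate the n highest peaks of the pseudospectrum 1 / (a*(w) G G* a(w))
wgrid = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
A = np.exp(-1j * np.outer(np.arange(m), wgrid))    # columns are a(w)
pseudo = 1.0 / np.sum(np.abs(G.conj().T @ A) ** 2, axis=0)
is_peak = (pseudo > np.roll(pseudo, 1)) & (pseudo > np.roll(pseudo, -1))
peaks = np.where(is_peak)[0]
w_hat = np.sort(wgrid[peaks[np.argsort(pseudo[peaks])[-n:]]])
```

Note that the peak search keeps only local maxima of the pseudospectrum and then selects the n highest among them, rather than simply taking the n largest grid values (which could all cluster around a single peak).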
The Pisarenko method is MUSIC with m = n + 1. (4.5.17)

In the Pisarenko method, the estimated frequencies are determined from (4.5.16). For m = n + 1 this 2(m−1)-degree equation can be reduced to the following equation of degree m − 1 = n:

a^T(z^{-1}) \hat{g}_1 = 0 (4.5.18)

The Pisarenko frequency estimates are obtained as the angular positions of the roots of (4.5.18). The Pisarenko method is the simplest version of MUSIC from a computational standpoint. In addition, unlike MUSIC with m > n + 1, the Pisarenko procedure does not have the problem of separating the "signal roots" from the "noise roots" (see the discussion on this point at the end of Section 4.4). However, it can be shown that the accuracy of the MUSIC frequency estimates increases significantly with increasing m. Hence, the price paid for the computational simplicity of the Pisarenko method may be a relatively poor statistical accuracy. Regarding the selection of a value for m, this parameter may be chosen as large as possible, but not too close to N, in order to still allow a reliable estimation of the covariance matrix (for example, as in (4.5.14)). In some applications, the largest possible value that may be selected for m may also be limited by computational complexity considerations. Whenever the tradeoff between statistical accuracy and computational complexity is an important issue, the following simple ideas may be valuable. The finite-sample statistical accuracy of MUSIC frequency estimates may be improved by modifying the covariance estimator (4.5.14). For instance, \hat{R} is not Toeplitz whereas the true covariance matrix R is. We may correct this situation by replacing the elements in each diagonal of \hat{R} with their average. The so-corrected sample covariance matrix can be shown to be the best (in the Frobenius-norm sense) Toeplitz approximation of \hat{R}. Another modification of \hat{R}, with the same purpose of improving the finite-sample statistical accuracy, is described in Section 4.8.
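Equation (4.5.18) makes Pisarenko essentially a three-step procedure: form the (n+1) x (n+1) sample covariance, take the minimum-eigenvalue eigenvector, and root it. A NumPy sketch (the noiseless two-sinusoid test signal is our own illustrative choice):

```python
import numpy as np

def pisarenko(y, n):
    """Pisarenko sketch: MUSIC with m = n + 1; frequencies from the roots of
    a^T(z^{-1}) g1 = 0, g1 being the minimum-eigenvalue eigenvector of R-hat."""
    m = n + 1
    N = len(y)
    R = np.zeros((m, m), dtype=complex)
    for t in range(m - 1, N):
        v = y[t - m + 1 : t + 1][::-1]   # window [y(t), ..., y(t-m+1)]
        R += np.outer(v, v.conj())
    R /= N
    _, V = np.linalg.eigh(R)
    g1 = V[:, 0]                          # eigenvector of the smallest eigenvalue
    # a^T(z^{-1}) g1 = g1[0] + g1[1] z + ... + g1[n] z^n; its roots are e^{i w_k}.
    # np.roots expects descending powers, hence the reversal.
    roots = np.roots(g1[::-1])
    return np.sort(np.angle(roots) % (2 * np.pi))

t = np.arange(64)
y = np.exp(1j * 0.5 * t) + np.exp(1j * 2.0 * t)   # noiseless demo signal
w_hat = pisarenko(y, n=2)
```

On noiseless data the sample covariance has exact rank n, so the two estimated frequencies come out essentially exact.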
The computational complexity of MUSIC, for a given m, may be reduced in various ways. Quite often, m is such that m −n > n. Then, the computational burdens associated with both Spectral and Root MUSIC may be reduced by using I −ˆ S ˆ S∗in (4.5.15) or (4.5.16) in lieu of ˆ G ˆ G∗. (Note that ˆ S ˆ S∗+ ˆ G ˆ G∗= I by the very definition of the eigenvector matrices.) The computational burden of Root MUSIC may be further reduced as explained in the following. The polynomial in (4.5.16) is a self–reciprocal (or symmetric) one: its roots appear in reciprocal pairs (ρeiϕ, 1 ρeiϕ). On the unit circle z = eiω, (4.5.16) is nonnegative and hence may be “sm2” 2004/2/ page 163 i i i i i i i i Section 4.5 Pisarenko and MUSIC Methods 163 interpreted as a PSD. Owing to the properties mentioned above, (4.5.16) can be factored as aT (z−1) ˆ G ˆ G∗a(z) = α(z)α∗(1/z∗) (4.5.19) where α(z) is a polynomial of degree (m−1) with all its zeroes located within or on the unit circle. We may then determine the frequency estimates from the n roots of α(z) that are closest to the unit circle. Since there are efficient numerical procedures for spectral factorization, determining α(z) as in (4.5.19) and then computing its zeroes is usually computationally more efficient than finding the (reciprocal) roots of the 2(m −1)–degree polynomial (4.5.16). Finally, we address the issue of spurious frequency estimates. As implied by the result (4.5.13), for N →∞there is no risk of obtaining false frequency estimates. However, in finite samples such a risk always exists. Usually, this risk is quite small but it may become a real problem if m takes on large values. The key result on which the standard MUSIC algorithm, (4.5.15), is based can be used to derive a modified MUSIC which does not suffer from the spurious estimate problem. 
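The rooting of (4.5.16) can be sketched as follows. For simplicity we root the full 2(m−1)-degree polynomial directly instead of using the spectral factorization (4.5.19), and keep the n roots inside and nearest the unit circle; the noisy test signal and the choice m = 6 are our own illustrative assumptions:

```python
import numpy as np

def root_music(y, n, m):
    """Root MUSIC sketch: root z^{m-1} a^T(z^{-1}) G G^* a(z), cf. (4.5.16),
    keeping the n roots inside the unit circle that lie closest to it."""
    N = len(y)
    R = np.zeros((m, m), dtype=complex)
    for t in range(m - 1, N):
        v = y[t - m + 1 : t + 1][::-1]
        R += np.outer(v, v.conj())
    R /= N
    _, V = np.linalg.eigh(R)
    G = V[:, : m - n]
    Q = G @ G.conj().T
    # coefficient of z^l in a^T(z^{-1}) Q a(z) is the sum of the entries of Q
    # with row - column = l, i.e. the diagonal of offset -l
    c = np.array([np.trace(Q, offset=-l) for l in range(-(m - 1), m)])
    roots = np.roots(c[::-1])                       # descending powers
    inside = roots[np.abs(roots) < 1.0]             # one of each reciprocal pair
    inside = inside[np.argsort(-np.abs(inside))][:n]  # closest to the circle
    return np.sort(np.angle(inside) % (2 * np.pi))

rng = np.random.default_rng(1)
t = np.arange(300)
y = np.exp(1j * 1.2 * t) + 0.1 * (rng.standard_normal(300) + 1j * rng.standard_normal(300))
w_hat = root_music(y, n=1, m=6)
```

The self-reciprocal structure noted above is visible here: with noise, each reciprocal root pair straddles the unit circle, and the signal roots are the pairs hugging it.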
In the following, we only explain the basic ideas leading to the modified MUSIC method without going into details of its implementation (for such details, the interested reader may consult [Stoica and Sharman 1990]). Let {ck}n k=1 denote the coefficients of the polynomial A(z) defined in (4.2.3): A(z) = 1 + c1z−1 + . . . + cnz−n = n Y k=1 (1 −eiωkz−1) (4.5.20) Introduce the following matrix made from {ck}: C∗=    1 c1 . . . cn 0 ... ... ... 0 1 c1 . . . cn   , (m −n) × m (4.5.21) It is readily verified that C∗A = 0, (m −n) × n (4.5.22) where A is defined in (4.2.4). Combining (4.5.9) and (4.5.22) gives C∗S = 0, (m −n) × n (4.5.23) which is the key property here. The matrix equation (4.5.23) can be rewritten in the following form φc = µ (4.5.24) where the (m −n)n × n matrix φ and the (m −n)n × 1 vector µ are entirely determined from the elements of S, and where c = [c1 . . . cn]T (4.5.25) By replacing the elements of S in φ and µ by the corresponding entries of ˆ S, we obtain the sample version of (4.5.24) ˆ φˆ c ≃ˆ µ (4.5.26) “sm2” 2004/2/ page 164 i i i i i i i i 164 Chapter 4 Parametric Methods for Line Spectra from which an estimate ˆ c of c may be obtained by an LS or TLS algorithm; see Section A.8 for details. The frequency estimates can then be derived from the roots of the estimated polynomial (4.5.20) corresponding to ˆ c. Since this polynomial has a (minimal) degree equal to n, there is no risk for false frequency estimation. 4.6 MIN–NORM METHOD MUSIC uses (m −n) linearly independent vectors in R( ˆ G) to obtain the frequency estimates. Since any vector in R( ˆ G) is (asymptotically) orthogonal to {a(ωk)}n k=1 (cf. (4.5.7)), we may think of using only one such vector for frequency estima-tion. By doing so, we may achieve some computational saving, hopefully without sacrificing too much accuracy. The Min–Norm method proceeds to estimate the frequencies along these lines [Kumaresan and Tufts 1983]. 
Let  1 ˆ g  = the vector in R( ˆ G), with first element equal to one, that has minimum Euclidean norm. (4.6.1) Then, the Min–Norm frequency estimates are determined as (Spectral Min–Norm). The locations of the n highest peaks in the pseudospectrum 1 a∗(ω)  1 ˆ g  2 (4.6.2) or, alternatively, (Root Min–Norm). The angular positions of the n roots of the polynomial aT (z−1)  1 ˆ g  that are located nearest the unit circle. (4.6.3) It remains to determine the vector in (4.6.1) and, in particular, to show that its first element can always be normalized to one. We will later comment on the reason behind the specific selection (4.6.1) of a vector in R( ˆ G). In the following, the Euclidean norm of a vector is denoted by ∥· ∥. Partition the matrix ˆ S as ˆ S =  α∗ ¯ S  } 1 } m −1 (4.6.4) As  1 ˆ g  ∈R( ˆ G), it must satisfy the equation ˆ S∗  1 ˆ g  = 0 (4.6.5) “sm2” 2004/2/ page 16 i i i i i i i i Section 4.6 Min–Norm Method 165 which, using (4.6.4), can be rewritten as ¯ S∗ˆ g = −α (4.6.6) The minimum–norm solution to (4.6.6) is given by (see Result R31 in Appendix A): ˆ g = −¯ S( ¯ S∗¯ S)−1α (4.6.7) assuming that the inverse exists. Noting that I = ˆ S∗ˆ S = αα∗+ ¯ S∗¯ S (4.6.8) and also that one eigenvalue of I −αα∗is equal to 1 −∥α∥2 and the remaining (n −1) eigenvalues of I −αα∗are equal to 1, it follows that the inverse in (4.6.7) exists if and only if ∥α∥2 ̸= 1 (4.6.9) If the above condition is not satisfied, there will be no vector of the form of (4.6.1) in R( ˆ G). We postpone the study of (4.6.9) until we obtain a final–form expression for ˆ g. Under the condition (4.6.9), a simple calculation shows that ( ¯ S∗¯ S)−1α = (I −αα∗)−1α = α/(1 −∥α∥2) (4.6.10) Inserting (4.6.10) in (4.6.7) gives ˆ g = −¯ Sα/(1 −∥α∥2) (4.6.11) which expresses ˆ g as a function of the elements of ˆ S. We can also obtain ˆ g as a function of the entries in ˆ G. 
To do so, partition ˆ G as ˆ G =  β∗ ¯ G  (4.6.12) Since ˆ S ˆ S∗= I −ˆ G ˆ G∗by the definition of the matrices ˆ S and ˆ G, it follows that  ∥α∥2 ( ¯ Sα)∗ ¯ Sα ¯ S ¯ S∗  =  1 −∥β∥2 −( ¯ Gβ)∗ −¯ Gβ I −¯ G ¯ G∗  (4.6.13) Comparing the blocks in (4.6.13) makes it possible to express ∥α∥2 and ¯ Sα as functions of ¯ G and β, which leads to the following equivalent expression for ˆ g: ˆ g = ¯ Gβ/∥β∥2 (4.6.14) If m −n > n, then it is computationally more advantageous to obtain ˆ g from (4.6.11); otherwise, (4.6.14) should be used. Next, we return to the condition (4.6.9) that is implicitly assumed to hold in the previous derivations. As already mentioned, this condition is equivalent to rank( ¯ S∗¯ S) = n which, in turn, holds if and only if rank( ¯ S) = n (4.6.15) “sm2” 2004/2/ page 166 i i i i i i i i 166 Chapter 4 Parametric Methods for Line Spectra Now, it follows from (4.5.9) that any block of S made from more than n consec-utive rows should have rank equal to n. Hence, (4.6.15) must hold at least for N sufficiently large. With this observation, the derivation of the Min–Norm frequency estimator is complete. The statistical accuracy of the Min–Norm method is similar to that corre-sponding to MUSIC. Hence, Min–Norm achieves MUSIC’s performance at a re-duced computational cost. It should be noted that the selection (4.6.1) of the vector in R( ˆ G), used in the Min–Norm algorithm, is critical in obtaining frequency estimates with satisfactory statistical accuracy. Other choices of vectors in R( ˆ G) may give rather poor accuracy. In addition, there is empirical evidence that the use of the minimum–norm vector in R( ˆ G), as in (4.6.1), may decrease the risk of spurious frequency estimates compared with other vectors in R( ˆ G) or even with MUSIC (see Complement 6.5.1 for details on this aspect). 
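Using the closed form (4.6.14) for the min-norm vector, Root Min-Norm takes only a few lines. A NumPy sketch (the noiseless test signal and the choice m = 6 are our own illustrative assumptions):

```python
import numpy as np

def min_norm(y, n, m):
    """Root Min-Norm sketch: g-hat from (4.6.14), then root a^T(z^{-1}) [1; g-hat]
    and keep the n roots nearest the unit circle, cf. (4.6.3)."""
    N = len(y)
    R = np.zeros((m, m), dtype=complex)
    for t in range(m - 1, N):
        v = y[t - m + 1 : t + 1][::-1]
        R += np.outer(v, v.conj())
    R /= N
    _, V = np.linalg.eigh(R)
    G = V[:, : m - n]                     # noise subspace
    beta = G[0, :].conj()                 # first row of G-hat is beta^*, cf. (4.6.12)
    g = G[1:, :] @ beta / np.sum(np.abs(beta) ** 2)   # (4.6.14)
    w = np.concatenate(([1.0], g))
    roots = np.roots(w[::-1])             # roots of sum_j w_j z^j
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:n]
    return np.sort(np.angle(roots) % (2 * np.pi))

t = np.arange(100)
y = np.exp(1j * 1.0 * t) + np.exp(1j * 2.5 * t)   # noiseless demo signal
w_hat = min_norm(y, n=2, m=6)
```

The n "signal" roots sit on the unit circle in this noiseless case, while the extraneous roots of the min-norm polynomial fall inside it, which is what makes the nearest-to-circle selection work.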
4.7 ESPRIT METHOD Let A1 = [Im−1 0]A (m −1) × n (4.7.1) and A2 = [0 Im−1]A (m −1) × n (4.7.2) where Im−1 is the identity matrix of dimension (m −1) × (m −1) and [Im−1 0] and [0 Im−1] are (m −1) × m. It is readily verified that A2 = A1D (4.7.3) where D =    e−iω1 0 ... 0 e−iωn    (4.7.4) Since D is a unitary matrix, the transformation in (4.7.3) is a rotation. ES-PRIT, i.e., Estimation of Signal Parameters by Rotational Invariance Techniques ([Paulraj, Roy, and Kailath 1986; Roy and Kailath 1989]; see also [Kung, Arun, and Rao 1983]), relies on the rotational transformation (4.7.3) as we detail below. Similarly to (4.7.1) and (4.7.2), define S1 = [Im−1 0]S (4.7.5) S2 = [0 Im−1]S (4.7.6) From (4.5.12), we have that S = AC (4.7.7) where C is the n × n nonsingular matrix given by C = PA∗SΛ ◦−1 (4.7.8) (Observe that both S and A in (4.7.7) have full column rank, and hence C must be nonsingular; see Result R2 in Appendix A). The above explicit expression for C “sm2” 2004/2/ page 16 i i i i i i i i Section 4.7 ESPRIT Method 167 actually has no relevance to the present discussion. It is only (4.7.7), and the fact that C is nonsingular, that counts. By using (4.7.1)–(4.7.3) and (4.7.7), we can write S2 = A2C = A1DC = S1C−1DC = S1φ (4.7.9) where φ ≜C−1DC (4.7.10) Owing to the Vandermonde structure of A, the matrices A1 and A2 have full column rank (equal to n). In view of (4.7.7), S1 and S2 must also have full column rank. It then follows from (4.7.9) that the matrix φ is uniquely given by φ = (S∗ 1S1)−1S∗ 1S2 (4.7.11) This formula expresses φ as a function of some quantities which can be estimated from the available sample. The importance of being able to estimate φ stems from the fact that φ and D have the same eigenvalues. (This can be seen from the equation (4.7.10), which is a similarity transformation relating φ and D, along with Result R6 in Appendix A.) ESPRIT uses the previous observations to determine frequency estimates as described next. 
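Anticipating the estimator stated next, the relations above already determine the algorithm: estimate φ by least squares from (4.7.11) applied to the signal-subspace eigenvectors, then read the frequencies off its eigenvalues, which equal e^{-iω_k} by (4.7.4) and (4.7.10). A NumPy sketch (the noiseless test signal and the choice m = 6 are our own illustrative assumptions):

```python
import numpy as np

def esprit(y, n, m):
    """ESPRIT sketch: phi = (S1^* S1)^{-1} S1^* S2, cf. (4.7.11); the frequencies
    are minus the angles of its eigenvalues, since D = diag(e^{-i w_k})."""
    N = len(y)
    R = np.zeros((m, m), dtype=complex)
    for t in range(m - 1, N):
        v = y[t - m + 1 : t + 1][::-1]
        R += np.outer(v, v.conj())
    R /= N
    _, V = np.linalg.eigh(R)
    S = V[:, m - n:]                     # signal subspace (n largest eigenvalues)
    S1, S2 = S[:-1, :], S[1:, :]         # first and last m-1 rows, cf. (4.7.5)-(4.7.6)
    phi = np.linalg.lstsq(S1, S2, rcond=None)[0]   # LS solution of S1 phi = S2
    nu = np.linalg.eigvals(phi)
    return np.sort(-np.angle(nu) % (2 * np.pi))

t = np.arange(100)
y = np.exp(1j * 0.8 * t) + np.exp(1j * 2.2 * t)   # noiseless demo signal
w_hat = esprit(y, n=2, m=6)
```

Note that the n eigenvalues of φ are the frequency estimates by construction, so no signal/noise root separation is needed.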
ESPRIT estimates the frequencies {ωk}n k=1 as −arg(ˆ νk), where {ˆ νk}n k=1 are the eigenvalues of the following (consistent) estimate of the matrix φ: ˆ φ = ( ˆ S∗ 1 ˆ S1)−1 ˆ S∗ 1 ˆ S2 (4.7.12) It should be noted that the above estimate of φ is implicitly obtained by solving the following linear system of equations: ˆ S1 ˆ φ ≃ˆ S2 (4.7.13) by an LS method. It has been empirically observed that better finite–sample accu-racy may be achieved if (4.7.13) is solved for ˆ φ by a Total LS method (see Section A.8 and [Van Huffel and Vandewalle 1991] for discussions on the TLS approach). The statistical accuracy of ESPRIT is similar to that of the previously de-scribed methods: HOYW, MUSIC and Min–Norm. In fact, in most cases, ESPRIT may provide slightly more accurate frequency estimates than the other methods mentioned above; and this at similar computational cost. In addition, unlike these other methods, ESPRIT has no problem with separating the “signal roots” from the “noise roots”, as can be seen from (4.7.12). Note that this property is shared by the modified MUSIC method (discussed in Section 4.5); however, in many cases ESPRIT outperforms modified MUSIC in terms of statistical accuracy. All these considerations recommend ESPRIT as the first choice in a frequency estimation application. “sm2” 2004/2/ page 168 i i i i i i i i 168 Chapter 4 Parametric Methods for Line Spectra 4.8 FORWARD–BACKWARD APPROACH The previously described eigenanalysis–based methods (MUSIC, Min–Norm and ESPRIT) derive their frequency estimates from the eigenvectors of the sample co-variance matrix ˆ R, (4.5.14), which is restated here for easy reference: ˆ R = 1 N N X t=m    y(t) . . . y(t −m + 1)   [y∗(t) . . . y∗(t −m + 1)] (4.8.1) The ˆ R above is recognized to be the matrix that appears in the least squares (LS) estimation of the coefficients {αk} of an mth–order forward linear predictor of y∗(t + 1): ˆ y∗(t + 1) = α1y∗(t) + . . . 
+ αmy∗(t −m + 1) (4.8.2) For this reason, the methods which obtain frequency estimates from ˆ R are named forward (F) approaches. Extensive numerical experience with the aforementioned methods has shown that the corresponding frequency estimation accuracy can be enhanced by using the following modified sample covariance matrix, in lieu of ˆ R, ˜ R = 1 2( ˆ R + J ˆ RT J) (4.8.3) where J =   0 1 ... 1 0   (4.8.4) is the so–called “exchange” (or “reversal”) matrix. The second term in (4.8.3) has the following detailed form: J ˆ RT J = 1 N N X t=m    y∗(t −m + 1) . . . y∗(t)   [y(t −m + 1) . . . y(t)] (4.8.5) The matrix (4.8.5) is the one that appears in the LS estimate of the coefficients of an mth–order backward linear predictor of y(t −m): ˆ y(t −m) = µ1y(t −m + 1) + . . . + µmy(t) (4.8.6) This observation, along with the previous remark made about ˆ R, suggests the name of forward–backward (FB) approaches for methods that determine frequency estimates from ˜ R in (4.8.3). The (i, j) element of ˜ R is given by: ˜ Ri,j = 1 2N N X t=m [y(t −i)y∗(t −j) + y∗(t −m + 1 + i)y(t −m + 1 + j)] ≜T1 + T2 (i, j = 0, . . . , m −1) (4.8.7) “sm2” 2004/2/ page 169 i i i i i i i i Section 4.8 Forward–Backward Approach 169 Assume that i ≤j (the other case i ≥j can be similarly treated). Let ˆ r(j −i) denote the usual sample covariance: ˆ r(j −i) = 1 N N X t=(j−i)+1 y(t)y∗(t −(j −i)) (4.8.8) A straightforward calculation shows that the two terms T1 and T2 in (4.8.7) can be written as T1 = 1 2N N−i X p=m−i y(p)y∗(p −(j −i)) = 1 2 ˆ r(j −i) + O(1/N) (4.8.9) and T2 = 1 2N N−m+j+1 X p=j+1 y(p)y∗(p −(j −i)) = 1 2 ˆ r(j −i) + O(1/N) (4.8.10) where O(1/N) denotes a term that tends to zero as 1/N when N increases (it is here assumed that m ≪N). It follows from (4.8.7)–(4.8.10) that, for large N, the difference between ˜ Ri,j or ˆ Ri,j and the sample covariance lag ˆ r(j −i) is “small”. 
Hence, the frequency estimation methods based on ˆ R or ˜ R (or on [ˆ r(j −i)]) may be expected to have similar performances in large samples. In summary, it follows from the previous discussion that the empirically ob-served performance superiority of the forward–backward approach over the forward– only approach should only be manifest in samples with relatively small lengths. As such, this superiority cannot be easily established by theoretical means. Let us then argue heuristically. First, note that the transformation J(.)T J is such that the following equalities hold: ( ˆ R)i,j = (J ˆ RJ)m−i,m−j = (J ˆ RT J)m−j,m−i (4.8.11) and ( ˆ R)m−j,m−i = (J ˆ RT J)i,j (4.8.12) This implies that the (i, j) and (m −j, m −i) elements of ˜ R are both given by ˜ Ri,j = ˜ Rm−j,m−i = 1 2( ˆ Ri,j + ˆ Rm−j,m−i) (4.8.13) Equations (4.8.11)–(4.8.12) imply that ˜ R is invariant to the transformation J(.)T J: J ˜ RT J = ˜ R (4.8.14) Such a matrix is said to be persymmetric (also called centrosymmetric). In order to see the reason for this name, note that ˜ R is Hermitian (symmetric in the real– valued case) with respect to its main diagonal; in addition, ˜ R is symmetric about its main antidiagonal. Indeed, the equal elements ˜ Ri,j and ˜ Rm−j,m−i of ˜ R belong to the same diagonal as i −j = (m −j) −(m −i). They are also symmetrically “sm2” 2004/2/ page 170 i i i i i i i i 170 Chapter 4 Parametric Methods for Line Spectra placed with respect to the main antidiagonal; ˜ Ri,j lies on antidiagonal (i + j), ˜ Rm−j,m−i on the [2m −(j + i)]th one, and the main antidiagonal is the mth one (and m = [(i + j) + 2m −(i + j)]/2). The theoretical (and unknown) covariance matrix R is Toeplitz and hence persymmetric. Since ˜ R is persymmetric like R, whereas ˆ R is not, we may expect ˜ R to be a better estimate of R than ˆ R. In turn, this means that the frequency estimates derived from ˜ R are likely to be more accurate than those obtained from ˆ R. 
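The forward-backward correction (4.8.3) is a one-liner once the forward sample covariance is available. A NumPy sketch; the test below checks the persymmetry property (4.8.14) and Hermitian symmetry (the random data and m = 4 are our own illustrative choices):

```python
import numpy as np

def fb_covariance(y, m):
    """Forward-backward sample covariance (4.8.3): (R + J R^T J) / 2."""
    N = len(y)
    R = np.zeros((m, m), dtype=complex)
    for t in range(m - 1, N):
        v = y[t - m + 1 : t + 1][::-1]
        R += np.outer(v, v.conj())
    R /= N
    J = np.eye(m)[::-1]                  # exchange (reversal) matrix, cf. (4.8.4)
    return 0.5 * (R + J @ R.T @ J)

rng = np.random.default_rng(2)
y = rng.standard_normal(50) + 1j * rng.standard_normal(50)
Rfb = fb_covariance(y, m=4)
J = np.eye(4)[::-1]
```

Any of the subspace methods of this chapter can then be run on `Rfb` in place of the forward-only estimate.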
The impact of enforcing the persymmetric property can be seen by examining, say, the (1, 1) and (m, m) elements of ˆ R and ˜ R. Both the (1,1) and (m, m) elements of ˆ R are estimates of r(0); however, the (1,1) element does not use the first (m −1) lag products |y(1)|2, . . . , |y(m −1)|2, and the (m, m) element does not use the last (m−1) lag products |y(N −m+2)|2, . . . , |y(N)|2. If N ≫m, the omission of these lag products is negligible; for small N, however, this omission may be significant. On the other hand, all lag products of y(t) are used to form the (1, 1) and (m, m) elements of ˜ R, and in general the (i, j) element of ˜ R uses more lag products of y(t) than the corresponding element of ˆ R. For more details on the FB approach, we refer the reader to, e.g., [Rao and Hari 1993; Pillai 1989]; see also Complement 6.5.8. Finally, the reader might wonder why we do not replace ˆ R by a Toeplitz estimate, obtained for example by averaging the elements along each diagonal of ˆ R. This Toeplitz estimate would at first seem to be a better approximation of R than either ˆ R or ˜ R. The reason why we do not “Toeplitz–ize” ˆ R or ˜ R is that for finite N, and infinite signal–to–noise ratio (σ2 →0), the use of either ˆ R or ˜ R gives exact frequency estimates, whereas the Toeplitz–averaged approximation of R does not. As σ2 →0, both ˆ R and ˜ R have rank n, but the Toeplitz–averaged approximation of R has full rank in general. 4.9 COMPLEMENTS 4.9.1 Mean Square Convergence of Sample Covariances for Line Spectral Processes In this complement we prove that lim N→∞ˆ r(k) = r(k) (in a mean square sense) (4.9.1) (that is, limN→∞E  |ˆ r(k) −r(k)|2 = 0 ). The above result has already been referred to in Section 4.4, in the discussion on the rank properties of ˆ Ωand Ω. It is also the basic result from which the consistency of all covariance–based frequency estimators discussed in this chapter can be readily concluded. 
Note that a signal {y(t)} satisfying (4.9.1) is said to be second–order ergodic (see [S¨ oderstr¨ om and Stoica 1989; Brockwell and Davis 1991] for a more detailed discussion of the ergodicity property). “sm2” 2004/2/ page 17 i i i i i i i i Section 4.9 Complements 171 A straightforward calculation gives ˆ r(k) = 1 N N X t=k+1 [x(t) + e(t)][x∗(t −k) + e∗(t −k)] = 1 N N X t=k+1 [x(t)x∗(t −k) + x(t)e∗(t −k) + e(t)x∗(t −k) +e(t)e∗(t −k)] ≜T1 + T2 + T3 + T4 (4.9.2) The limit of T1 is found as follows. First note that: lim N→∞E  |T1 −rx(k)|2 = lim N→∞ ( 1 N 2 N X t=k+1 N X s=k+1 E {x(t)x∗(t −k)x∗(s)x(s −k)} − 2 N N X t=k+1 |rx(k)|2 ! + |rx(k)|2 ) = lim N→∞ ( 1 N 2 N X t=k+1 N X s=k+1 E {x(t)x∗(t −k)x∗(s)x(s −k)} ) −|rx(k)|2 Now, E {x(t)x∗(t −k)x∗(s)x(s −k)} = n X p=1 n X j=1 n X l=1 n X m=1 apajalamei(ωp−ωj)tei(ωm−ωl)s ·ei(ωj−ωm)kE  eiϕpe−iϕjeiϕme−iϕl = n X p=1 n X j=1 n X l=1 n X m=1 apajalamei(ωp−ωj)tei(ωm−ωl)s ·ei(ωj−ωm)k (δp,jδm,l + δp,lδm,j −δp,jδm,lδp,m) where the last equality follows from the assumed independence of the initial phases {ϕk}. Combining the results of the above two calculations yields: lim N→∞E  |T1 −rx(k)|2 = lim N→∞ 1 N 2 N X t=k+1 N X s=k+1 ( n X p=1 n X m=1 a2 pa2 mei(ωp−ωm)k + n X p=1 n X m=1 a2 pa2 mei(ωp−ωm)(t−s) − n X p=1 a4 p ) −|rx(k)|2 = n X p=1 n X m=1 m̸=p a2 pa2 m lim N→∞ 1 N 2 N X τ=−N (N −|τ|)ei(ωp−ωm)τ = 0 (4.9.3) It follows that T1 converges to r(k) (in the mean square sense) as N tends to infinity. “sm2” 2004/2/ page 17 i i i i i i i i 172 Chapter 4 Parametric Methods for Line Spectra The limits of T2 and T3 are equal to zero, as shown below for T2; the proof for T3 is similar. Using the fact that {x(t)} and {e(t)} are by assumption independent random signals, we get E  |T2|2 = 1 N 2 N X t=k+1 N X s=k+1 E {x(t)e∗(t −k)x∗(s)e(s −k)} = σ2 N 2 N X t=k+1 N X s=k+1 E {x(t)x∗(s)} δt,s = σ2 N 2 N X t=k+1 E  |x(t)|2 = (N −k)σ2 N 2 E  |x(t)|2 (4.9.4) which tends to zero, as N →∞. 
Hence, T2 (and, similarly, T3) converges to zero in the mean square sense. The last term, T4, in (4.9.2) converges to σ2δk,0 by the “law of large numbers” (see [S¨ oderstr¨ om and Stoica 1989; Brockwell and Davis 1991]). In fact, it is readily verified, at least under the Gaussian hypothesis, that E  |T4 −σ2δk,0|2 = 1 N 2 N X t=k+1 N X s=k+1 E {e(t)e∗(t −k)e∗(s)e(s −k)} −σ2δk,0 ( 1 N N X t=k+1 E {e(t)e∗(t −k) + e∗(t)e(t −k)} ) +σ4δk,0 = 1 N 2 N X t=k+1 N X s=k+1 [σ4δk,0 + σ4δt,s] −2σ4δk,0 1 N N X t=k+1 (δk,0) + σ4δk,0 →σ4δk,0 −2σ4δk,0 + σ4δk,0 = 0 (4.9.5) Hence, T4 converges to σ2δk,0 in the mean square sense if e(t) is Gaussian. It can be shown using the law of large numbers that T4 →σ2δk,0 in the mean square sense even if e(t) is non–Gaussian, as long as the fourth–order moment of e(t) is finite. Next, observe that since, for example, E{|T2|2} and E{|T3|2} converge to zero, then E{T2T ∗ 3 } also converges to zero (as N →∞); this is so because |E {T2T ∗ 3 } | ≤ E  |T2|2 E  |T3|2 1/2 With this observation, the proof of (4.9.1) is complete. 4.9.2 The Carath´ eodory Parameterization of a Covariance Matrix The covariance matrix model in (4.2.7) is more general than it might appear at first sight. We show that for any given covariance matrix R = {r(i −j)}m i,j=1, “sm2” 2004/2/ page 173 i i i i i i i i Section 4.9 Complements 173 there exist n ≤m, σ2 and {ωk, αk}n k=1 such that R can be written as in (4.2.7). Equation (4.2.7), associated with an arbitrary given covariance matrix R, is named the Carath´ eodory parameterization of R. Let σ2 denote the minimum eigenvalue of R. As σ2 is not necessarily unique, let ¯ n denote its multiplicity and set n = m −¯ n. Define Γ = R −σ2I The matrix Γ is positive semidefinite and Toeplitz and, hence, must be the covari-ance matrix associated with a stationary signal, say y(t): Γ = E         y(t) . . . y(t −m + 1)   [y∗(t) . . . 
y∗(t −m + 1)]      By definition, rank(Γ) = n (4.9.6) which implies that there must exist a linear combination between {y(t), . . . , y(t−n)} for all t. Moreover, both y(t) and y(t −n) must appear with nonzero coefficients in that linear combination (otherwise either {y(t) . . . y(t−n+1)} or {y(t−1) . . . y(t− n)} would be linearly related, and rank(Γ) would be less than n, which would contradict (4.9.6)). Hence y(t) obeys the following homogeneous AR equation: B(z)y(t) = 0 (4.9.7) where z−1 is the unit delay operator, and B(z) = 1 + b1z−1 + · · · + bnz−n with bn ̸= 0. Let φ(ω) denote the PSD of y(t). Then we have the following equivalences: B(z)y(t) = 0 ⇐ ⇒ Z π −π |B(ω)|2 φ(ω) dω = 0 ⇐ ⇒|B(ω)|2 φ(ω) = 0 ⇐ ⇒{If φ(ω) > 0 then B(ω) = 0} ⇐ ⇒{φ(ω) > 0 for at most n values of ω} Furthermore, {y(t), . . . y(t −n + 1) are linearly independent} ⇐ ⇒  E  |g0y(t) + . . . + gn−1y(t −n + 1)|2 > 0 for every [g0 . . . gn−1]T ̸= 0 ⇐ ⇒ nR π −π |G(ω)|2 φ(ω) dω > 0 for every G(z) = Pn−1 k=0 gkz−k ̸= 0 o ⇐ ⇒{φ(ω) > 0 for at least n distinct values of ω} It follows from the two results above that φ(ω) > 0 for exactly n distinct values of ω. Furthermore, the values of ω for which φ(ω) > 0 are given by the n roots of the “sm2” 2004/2/ page 174 i i i i i i i i 174 Chapter 4 Parametric Methods for Line Spectra equation B(ω) = 0. A signal y(t) with such a PSD consists of a sum of n sinusoidal components with an m × m covariance matrix given by Γ = APA∗ (4.9.8) (cf. (4.2.7)). In (4.9.8), the frequencies {ωk}n k=1 are defined as indicated above, and can be found from Γ using any of the subspace–based frequency estimation methods in this chapter. Once {ωk} are available, {α2 i } can be determined from Γ. (Show that.) By combining the additive decomposition R = Γ+σ2I and (4.9.8) we obtain (4.2.7). With this observation, the derivation of the Carath´ eodory parameterization is complete. 
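The construction can be checked numerically: build R = A P A^* + σ²I and recover σ² as the minimum eigenvalue of R and n as m minus its multiplicity. The values of m, σ², {ω_k}, and P below are our own illustrative choices:

```python
import numpy as np

# Carath\'eodory construction check: R = A P A^* + sigma^2 I
m, sig2 = 5, 0.3
omegas = [0.4, 1.1, 2.6]
A = np.exp(-1j * np.outer(np.arange(m), omegas))   # columns a(w_k)
P = np.diag([1.0, 0.5, 2.0])
R = A @ P @ A.conj().T + sig2 * np.eye(m)

lam = np.linalg.eigvalsh(R)             # ascending eigenvalues
sig2_hat = lam[0]                       # minimum eigenvalue of R
n_hat = m - np.sum(np.isclose(lam, sig2_hat))   # m minus its multiplicity
```

Here the minimum eigenvalue 0.3 has multiplicity 2, so the construction recovers n = 3 sinusoidal components, matching the three frequencies used to build R.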
It is interesting to note that the sinusoids–in–noise signal which “realizes” a given covariance sequence {r(0), . . . , r(m)} (as described above) also provides a positive definite extension of that sequence. More precisely, the covariance lags {r(m + 1), r(m + 2), . . .} derived from the sinusoidal signal equation, when ap-pended to {r(0), . . . , r(m)}, provide a positive definite covariance sequence of in-finite length. The AR covariance realization is the other well–known method for obtaining a positive definite extension of a given covariance sequence of finite length (see Complement 3.9.2). 4.9.3 Using the Unwindowed Periodogram for Sine Wave Detection in White Noise As shown in Section 4.3, the unwindowed periodogram is an accurate frequency es-timation method whenever the minimum frequency separation is larger than 1/N. A simple intuitive explanation as to why the unwindowed periodogram is a better frequency estimator than the windowed periodogram(s) is as follows. The principal effect of a window is to remove the tails of the sample covariance sequence from the periodogram formula; while this is appropriate for signals whose covariance sequence “rapidly” goes to zero, it is inappropriate for sinusoidal signals whose covariance sequence never dies out (for sinusoidal signals, the use of a window is expected to introduce a significant bias in the estimated spectrum). Note, however, that if the data contains sinusoidal components with significantly different ampli-tudes, then it may be advisable to use a (mildly) windowed periodogram. This will induce bias in the frequency estimates, but, on the other hand, will reduce the leakage and hence make it possible to detect the low–amplitude components. When using the (unwindowed) periodogram for frequency estimation, an im-portant problem is to infer whether any of the many peaks of the erratic peri-odogram plot can really be associated with the existence of a sinusoidal component in the data. 
In order to be more precise, consider the following two hypotheses. H0: the data consists of (complex circular Gaussian) white noise only (with un-known variance σ2). H1: the data consists of a sum of sinusoidal components and noise. Deciding between H0 and H1 constitutes the so–called (signal) detection prob-lem. A solution to the detection problem can be obtained as follows. From the cal-culations leading to the result (2.4.21) one can see that the normalized periodogram “sm2” 2004/2/ page 17 i i i i i i i i Section 4.9 Complements 175 values in (4.9.15) are independent random variables (under H0). It remains to de-rive their distribution. Let ϵr(ω) = √ 2 σ √ N N X t=1 Re[e(t)e−iωt] ϵi(ω) = √ 2 σ √ N N X t=1 Im[e(t)e−iωt] With this notation and under the null hypothesis H0, 2ˆ φp(ω)/σ2 = ϵ2 r(ω) + ϵ2 i (ω) (4.9.9) For any two complex scalars z1 and z2 we have Re(z1) Im(z2) = z1 + z∗ 1 2 z2 −z∗ 2 2i = 1 2 Im (z1z2 + z∗ 1z2) (4.9.10) and, similarly, Re(z1) Re(z2) = 1 2 Re(z1z2 + z∗ 1z2) (4.9.11) Im(z1) Im(z2) = 1 2 Re(−z1z2 + z∗ 1z2) (4.9.12) By making use of (4.9.10)–(4.9.12), we can write E {ϵr(ω)ϵi(ω)} = 1 σ2N Im ( N X t=1 N X s=1 E n e(t)e(s)e−iω(t+s) + e∗(t)e(s)eiω(t−s)o) = Im{1} = 0 E  ϵ2 r(ω) = 1 σ2N Re ( N X t=1 N X s=1 E n e(t)e(s)e−iω(t+s) + e∗(t)e(s)eiω(t−s)o) = Re{1} = 1 (4.9.13) E  ϵ2 i (ω) = 1 σ2N Re ( N X t=1 N X s=1 E n −e(t)e(s)e−iω(t+s) + e∗(t)e(s)eiω(t−s)o) = Re{1} = 1 (4.9.14) In addition, note that the random variables ϵr(ω) and ϵi(ω) are zero–mean Gaussian distributed because they are linear transformations of the Gaussian white noise sequence. Then, it follows that under H0 The random variables {2ˆ φp(ωk)/σ2}N k=1, with mink̸=j |ωk −ωj| ≥2π/N, are asymptotically independent and χ2 distributed with 2 degrees of freedom. (4.9.15) “sm2” 2004/2/ page 176 i i i i i i i i 176 Chapter 4 Parametric Methods for Line Spectra (See, e.g., [Priestley 1981] and [S¨ oderstr¨ om and Stoica 1989] for the defini-tion and properties of the χ2 distribution.) 
It is worth noting that if {ωk} are equal to the Fourier frequencies {2πk/N}N−1 k=0 , then the previous distributional result is exactly valid (i.e., it holds in samples of finite length; see, for example, equation (2.4.26)). However, this observation is not as important as it might seem at first sight, since σ2 in (4.9.15) is unknown. When the noise power in (4.9.15) is replaced by a consistent estimate ˆ σ2, the so–obtained normalized periodogram values {2ˆ φp(ωk)/ˆ σ2} (4.9.16) are χ2(2) distributed only asymptotically (for N ≫1). A consistent estimate of σ2 can be obtained as follows. From (4.9.9), (4.9.13), and (4.9.14) we have that under H0 E n ˆ φp(ωk) o = σ2 for k = 1, 2, . . . , N Since {ˆ φp(ωk)}N k=1 are independent random variables, a consistent estimate of σ2 is given by ˆ σ2 = 1 N N X k=1 ˆ φp (ωk) Inserting this expression for ˆ σ2 into (4.9.16) leads to the following “test statistic”: µk = 2N ˆ φp(ωk) N X k=1 ˆ φp(ωk) In accordance with the (asymptotic) χ2 distribution of {µk}, we have (for any given c ≥0; see, e.g., [Priestley 1981]): Pr(µk ≤c) = Z c 0 1 2 e−x/2 dx = 1 −e−c/2. (4.9.17) Let µ = max k [µk] Using (4.9.17) and the fact that {µk} are independent random variables, gives (for any c ≥0): Pr(µ > c) = 1 −Pr(µ ≤c) = 1 −Pr(µk ≤c for all k) = 1 −(1 −e−c/2)N (under H0) “sm2” 2004/2/ page 17 i i i i i i i i Section 4.9 Complements 177 This result can be used to set a bound on µ that, under H0, holds with a (high) preassigned probability 1 −α (say). More precisely, let α be given (e.g., α = 0.05) and solve for c from the equation (1 −e−c/2)N = 1 −α Then • If µ ≤c, accept H0 with an unknown risk. (That risk depends on the signal–to–noise ratio (SNR). The lower the SNR, the larger the risk of accepting H0 when it does not hold.) • If µ > c, reject H0 with a risk equal to α. 
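Evaluated at the Fourier frequencies via the FFT, the whole test is a few lines. A sketch; the threshold uses the closed form c = -2 ln(1 - (1 - α)^{1/N}) obtained by solving the equation above, and the single-sinusoid demo signal is our own choice:

```python
import numpy as np

def periodogram_sine_test(y, alpha=0.05):
    """Sketch of the H0 (white noise) test: mu = max_k 2 N phi(w_k) / sum_j phi(w_j),
    rejected at level alpha when mu exceeds c with (1 - e^{-c/2})^N = 1 - alpha."""
    N = len(y)
    phi = np.abs(np.fft.fft(y)) ** 2 / N          # periodogram at Fourier frequencies
    mu = 2.0 * N * phi.max() / phi.sum()
    c = -2.0 * np.log(1.0 - (1.0 - alpha) ** (1.0 / N))
    return mu, c, bool(mu > c)                    # True = reject H0

t = np.arange(128)
y = np.exp(2j * np.pi * 10 * t / 128)             # one sinusoid at a Fourier frequency
mu, c, reject = periodogram_sine_test(y)
```

For this noiseless sinusoid all periodogram mass falls in one bin, so mu = 2N = 256, far above the threshold c of roughly 15.6 for N = 128 and alpha = 0.05.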
It should be noted that whenever H0 is rejected by the above test, what we can really infer is that the periodogram peak in question is significant enough to make the existence of a sinusoidal component in the studied data highly probable. However, the previous test does not tell us the number of sinusoidal components in the data. In order to determine that number, the test should be continued by looking at the second highest peak in the periodogram. For a test of the significance of the second highest value of the periodogram, and so on, we refer to [Priestley 1981]. Finally, we note that in addition to the test presented in this complement, there are several other tests to decide between the hypotheses H0 and H1 above; see [Priestley 1997] for a review.

4.9.4 NLS Frequency Estimation for a Sinusoidal Signal with Time-Varying Amplitude

Consider the sinusoidal data model in (4.1.1) for the case of a single component (n = 1) but with a time-varying amplitude:

y(t) = \alpha(t) e^{i(\omega t + \varphi)} + e(t), \quad t = 1, \ldots, N (4.9.18)

where \alpha(t) \in \mathbb{R} is an arbitrary unknown envelope modulating the sinusoidal signal. The NLS estimates of \alpha(t), \omega, and \varphi are obtained by minimizing the following criterion:

f = \sum_{t=1}^{N} \left| y(t) - \alpha(t) e^{i(\omega t + \varphi)} \right|^2

(cf. (4.3.1)). In this complement we show that the above seemingly complicated minimization problem has in fact a simple solution. We also discuss briefly an FFT-based algorithm for computing that solution. The reader interested in more details on the topic of this complement can consult [Besson and Stoica 1999; Stoica, Besson, and Gershman 2001] and references therein. A straightforward calculation shows that:

f = \sum_{t=1}^{N} \left\{ |y(t)|^2 + \left[ \alpha(t) - \mathrm{Re}\left( e^{-i(\omega t + \varphi)} y(t) \right) \right]^2 - \left[ \mathrm{Re}\left( e^{-i(\omega t + \varphi)} y(t) \right) \right]^2 \right\} (4.9.19)

The minimization of (4.9.19) with respect to \alpha(t) is immediate:

\hat{\alpha}(t) = \mathrm{Re}\left( e^{-i(\hat{\omega} t + \hat{\varphi})} y(t) \right) (4.9.20)

where the NLS estimates \hat{\omega} and \hat{\varphi} are yet to be determined.
Inserting (4.9.20) into (4.9.19) shows that the NLS estimates of $\varphi$ and $\omega$ are obtained by maximizing the function

$$g = 2\sum_{t=1}^N \left[\operatorname{Re}\left(e^{-i(\omega t+\varphi)}y(t)\right)\right]^2$$

where the factor 2 has been introduced for convenience. For any complex number $c$ we have

$$[\operatorname{Re}(c)]^2 = \tfrac{1}{4}(c + c^*)^2 = \tfrac{1}{2}\left[|c|^2 + \operatorname{Re}(c^2)\right]$$

It follows that

$$g = \sum_{t=1}^N \left\{ |y(t)|^2 + \operatorname{Re}\left[e^{-2i(\omega t+\varphi)}y^2(t)\right] \right\} = \text{constant} + \left|\sum_{t=1}^N y^2(t)e^{-i2\omega t}\right| \cdot \cos\left[\arg\left(\sum_{t=1}^N y^2(t)e^{-i2\omega t}\right) - 2\varphi\right] \tag{4.9.21}$$

Clearly the maximizing $\varphi$ is given by

$$\hat\varphi = \frac{1}{2}\arg\left(\sum_{t=1}^N y^2(t)e^{-i2\hat\omega t}\right)$$

with the NLS estimate of $\omega$ given by

$$\hat\omega = \arg\max_\omega \left|\sum_{t=1}^N y^2(t)e^{-i2\omega t}\right| \tag{4.9.22}$$

It is important to note that the maximization in (4.9.22) should be conducted over $[0, \pi]$ instead of over $[0, 2\pi]$; indeed, the function in (4.9.22) is periodic with a period equal to $\pi$. The restriction of $\omega$ to $[0, \pi]$ is not a peculiar feature of the NLS approach, but rather a consequence of the generality of the problem considered in this complement. This is easily seen by making the substitution $\omega \to \omega + \pi$ in (4.9.18), which yields

$$y(t) = \tilde\alpha(t)e^{i(\omega t+\varphi)} + e(t), \quad t = 1, \ldots, N$$

where $\tilde\alpha(t) = (-1)^t\alpha(t)$ is another valid (i.e., real-valued) envelope. This simple calculation confirms the fact that $\omega$ is uniquely identifiable only in the interval $[0, \pi]$. In applications, the frequency can be made to belong to $[0, \pi]$ by using a sufficiently small sampling period. The above estimate of $\omega$ should be contrasted with the NLS estimate of $\omega$ in the constant-amplitude case (see (4.3.11), (4.3.17)):

$$\hat\omega = \arg\max_\omega \left|\sum_{t=1}^N y(t)e^{-i\omega t}\right| \quad (\text{for } \alpha(t) = \text{constant}) \tag{4.9.23}$$

There is a striking similarity between (4.9.22) and (4.9.23); the only difference between these equations is the squaring of the terms in (4.9.22). As a consequence, we can apply the FFT to the squared data sequence $\{y^2(t)\}$ to obtain the $\hat\omega$ in (4.9.22). The reader may wonder if there is an intuitive reason for the occurrence of the squared data in (4.9.22).
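As a small numerical illustration of (4.9.22) (a sketch, not the book's code; the envelope, frequency, phase, and grid size below are arbitrary choices), the maximization can be carried out on a fine grid by zero-padding the FFT of the squared data, since the FFT of $\{y^2(t)\}$ samples the criterion at $2\omega = 2\pi k/K$:

```python
import numpy as np

N = 256
t = np.arange(N)
omega, phi = 1.2, 0.3
alpha_t = np.cos(0.05 * t)                     # real, roughly zero-mean envelope
y = alpha_t * np.exp(1j * (omega * t + phi))   # noise-free for clarity

# Evaluate |sum_t y^2(t) e^{-i 2 w t}| on a fine grid via a zero-padded FFT
K = 1 << 14
Z = np.fft.fft(y ** 2, K)
k_hat = np.argmax(np.abs(Z))
omega_hat = np.pi * k_hat / K                  # (2*pi*k/K)/2, so omega in [0, pi)
phi_hat = 0.5 * np.angle(Z[k_hat])             # phase, determined modulo pi
```

Note that indexing the $K$-point FFT already restricts $\hat\omega = \pi k/K$ to $[0, \pi)$, matching the identifiability interval discussed above.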
A possible way to explain this occurrence goes as follows. Assume that $\alpha(t)$ has zero average value. Hence the DFT of $\{\alpha(t)\}$, denoted $A(\bar\omega)$, takes on small values (theoretically zero) at $\bar\omega = 0$. As the DFT of $\alpha(t)e^{i\omega t}$ is $A(\bar\omega - \omega)$, it follows that the modulus of this DFT has a valley instead of a peak at $\bar\omega = \omega$, and hence the standard periodogram (see (4.9.23)) should not be used to determine $\omega$. On the other hand, $\alpha^2(t)$ always has a nonzero average value (or DC component), and hence the modulus of the DFT of $\alpha^2(t)e^{i2\omega t}$ will typically have a peak at $\bar\omega = 2\omega$. This observation provides a heuristic reason for the squaring operation in (4.9.22).

4.9.5 Monotonically Descending Techniques for Function Minimization

As explained in Section 4.3, minimizing the NLS criterion with respect to the unknown frequencies is a rather difficult task owing to the existence of possibly many local minima and the sharpness of the global minimum. In this complement¹ we will discuss a number of methods that can be used to solve such a minimization problem. Our discussion is quite general and applies to many other functions, not just to the NLS criterion that is used as an illustrating example in what follows.

We will denote the function to be minimized by $f(\theta)$, where $\theta$ is a vector. Sometimes we will write this function as $f(x, y)$, where $[x^T, y^T]^T = \theta$. The algorithms for minimizing $f(\theta)$ discussed in this complement are iterative. We let $\theta_i$ denote the value taken by $\theta$ at the $i$th iteration (and similarly for $x$ and $y$). The common feature of the algorithms included in this complement is that they all monotonically decrease the function at each iteration:

$$f(\theta_{i+1}) \le f(\theta_i) \quad\text{for } i = 0, 1, 2, \ldots \tag{4.9.24}$$

Hereafter $\theta_0$ denotes the initial value (or estimate) of $\theta$ used by the minimization algorithm in question. Clearly (4.9.24) is an appealing property, which in effect is the

¹Based on "Cyclic minimizers, majorization techniques, and the expectation-maximization algorithm: A refresher," by P.
Stoica and Y. Selén, IEEE Signal Processing Magazine, 21(1), January 2004, pp. 112–114.

main reason for the interest in the algorithms discussed here. However, we should note that usually (4.9.24) can only guarantee convergence to a local minimum of $f(\theta)$. The goodness of the initial estimate $\theta_0$ will often determine whether the algorithm converges to the global minimum. In fact, for some of the algorithms discussed below not even convergence to a local minimum is guaranteed. For example, the EM algorithm (discussed later in this complement) can converge to saddle points or local maxima (see, e.g., [McLachlan and Krishnan 1997]). However, such behavior is rare in applications, provided that some regularity conditions are satisfied.

Cyclic Minimizer

To describe the main idea of this type of algorithm in its simplest form, let us partition $\theta$ into two subvectors:

$$\theta = \begin{bmatrix} x \\ y \end{bmatrix}$$

Then the generic iteration of a cyclic algorithm for minimizing $f(x, y)$ has the following form:

$$y_0 = \text{given; for } i = 1, 2, \ldots \text{ compute: } \quad x_i = \arg\min_x f(x, y_{i-1}), \qquad y_i = \arg\min_y f(x_i, y) \tag{4.9.25}$$

Note that (4.9.25) alternates (or cycles) between the minimization of $f(x, y)$ with respect to $x$ for given $y$ and the minimization of $f(x, y)$ with respect to $y$ for given $x$; hence the name "cyclic" given to this type of algorithm. An obvious modification of (4.9.25) allows us to start with $x_0$, if so desired. It is readily verified that the cyclic minimizer (4.9.25) possesses the property (4.9.24):

$$f(x_i, y_i) \le f(x_i, y_{i-1}) \le f(x_{i-1}, y_{i-1})$$

where the first inequality follows from the definition of $y_i$ and the second from the definition of $x_i$. The partitioning of $\theta$ into subvectors is usually done in such a way that the minimization operations in (4.9.25) (or at least one of them) are "easy" (in any case, easier than the minimization of $f$ jointly with respect to $x$ and $y$).
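A minimal sketch of the scheme (4.9.25) on a toy quadratic (the function and starting point are arbitrary illustrations, not taken from the text), where both inner minimizations have closed forms:

```python
# Cyclic (alternating) minimization of f(x, y) = x^2 + y^2 + x*y - 3*x.
# Setting the partial derivatives to zero gives the exact minimizer
# (x, y) = (2, -1), with f(2, -1) = -3.
f = lambda x, y: x**2 + y**2 + x * y - 3 * x

x, y = 0.0, 0.0                    # y_0 given, as in (4.9.25)
vals = [f(x, y)]
for _ in range(60):
    x = (3.0 - y) / 2.0            # argmin_x f(x, y): solve 2x + y - 3 = 0
    y = -x / 2.0                   # argmin_y f(x, y): solve 2y + x = 0
    vals.append(f(x, y))

# The property (4.9.24): the criterion never increases across iterations
monotone = all(v2 <= v1 + 1e-12 for v1, v2 in zip(vals, vals[1:]))
```

Each update uses only a scalar solve, which is the point of the partitioning: the joint problem is replaced by a cycle of easy subproblems.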
Quite often, to achieve this desired property we need to partition $\theta$ into more than two subvectors. The extension of (4.9.25) to such a case is straightforward and will not be discussed here. However, there is one point about this extension that we would like to make briefly: whenever $\theta$ is partitioned into three or more subvectors, we can choose the way in which the various minimization subproblems are iterated. For instance, if $\theta = [x^T, y^T, z^T]^T$, then we may iterate the minimization steps with respect to $x$ and with respect to $y$ a number of times (with $z$ fixed) before re-determining $z$, and so forth.

With reference to the NLS problem in Section 4.3, we can apply the above ideas to the following natural partitioning of the parameter vector:

$$\theta = \begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \vdots \\ \gamma_n \end{bmatrix}, \qquad \gamma_k = \begin{bmatrix} \omega_k \\ \varphi_k \\ \alpha_k \end{bmatrix} \tag{4.9.26}$$

The main virtue of this partitioning of $\theta$ is that the problem of minimizing the NLS criterion with respect to $\gamma_k$, for given $\{\gamma_j\}$ ($j = 1, \ldots, n$; $j \ne k$), can be solved via the FFT (see (4.3.10), (4.3.11)). Furthermore, the cyclic minimizer corresponding to (4.9.26) can be simply initialized with $\gamma_2 = \cdots = \gamma_n = 0$, in which case the $\gamma_1$ minimizing the NLS criterion is obtained from the highest peak of the periodogram (which should give a reasonably accurate estimate of $\gamma_1$), and so on. An elaborated cyclic algorithm, called RELAX, for the minimization of the NLS criterion based on the above ideas (see (4.9.26)) was proposed in [Li and Stoica 1996b]. Note that cyclic minimizers are sometimes called relaxation algorithms, which provides a motivation for the name given to the algorithm in [Li and Stoica 1996b].

Majorization Technique

The main idea of this type of iterative technique for minimizing a given function $f(\theta)$ is quite simple (see, e.g., [Heiser 1995] and the references therein).
Assume that, at the $i$th iteration, we can find a function $g_i(\theta)$ (the subindex $i$ indicates the dependence of this function on $\theta_i$) which possesses the following three properties:

$$g_i(\theta_i) = f(\theta_i) \tag{4.9.27}$$

$$g_i(\theta) \ge f(\theta) \tag{4.9.28}$$

and

the minimization of $g_i(\theta)$ with respect to $\theta$ is "easy" (or, in any case, easier than the minimization of $f(\theta)$).   (4.9.29)

Owing to (4.9.28), $g_i(\theta)$ is called a majorizing function for $f(\theta)$ at the $i$th iteration. In the majorization technique, the parameter vector at iteration $(i+1)$ is obtained from the minimization of $g_i(\theta)$:

$$\theta_{i+1} = \arg\min_\theta g_i(\theta) \tag{4.9.30}$$

The key property (4.9.24) is satisfied for (4.9.30), since

$$f(\theta_i) = g_i(\theta_i) \ge g_i(\theta_{i+1}) \ge f(\theta_{i+1}) \tag{4.9.31}$$

The first inequality in (4.9.31) follows from the definition of $\theta_{i+1}$ in (4.9.30), and the second inequality from (4.9.28).

Note that any parameter vector $\theta_{i+1}$ which gives a smaller value of $g_i(\theta)$ than $g_i(\theta_i)$ will satisfy (4.9.31). Consequently, whenever the minimum point of $g_i(\theta)$ (see (4.9.30)) cannot be derived in closed form, we can think of determining $\theta_{i+1}$, for example, by performing a few iterations with a gradient-based algorithm initialized at $\theta_i$ and using a line search (to guarantee that $g_i(\theta_{i+1}) \le g_i(\theta_i)$). A similar observation could be made on the cyclic minimizer in (4.9.25) when the minimization of either $f(x, y_{i-1})$ or $f(x_i, y)$ cannot be done in closed form. Modifying either (4.9.30) or (4.9.25) in this way usually reduces the computational effort of each iteration, but may slow down the convergence of the algorithm by increasing the number of iterations needed to achieve convergence.

An interesting question regarding the two algorithms discussed so far is whether we could obtain the cyclic minimizer by using the majorization principle on a certain majorizing function.
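As a concrete instance of (4.9.27)-(4.9.30) (a standard illustration, not taken from the text), consider minimizing $f(\theta) = \sum_j |y_j - \theta|$, whose minimizer is the median of $\{y_j\}$. The bound $|u| \le u^2/(2|u_i|) + |u_i|/2$, with equality at $u = u_i$, supplies a quadratic majorizing function $g_i$ whose minimizer is a weighted mean:

```python
import numpy as np

# Majorize f(theta) = sum_j |y_j - theta| using, for each term,
# |u| <= u^2 / (2|u_i|) + |u_i| / 2   (equality at u = u_i),
# so that g_i satisfies (4.9.27)-(4.9.28) and (4.9.30) is a weighted mean.
y = np.array([1.0, 2.0, 3.0, 7.0, 8.0])    # arbitrary data; median = 3
f = lambda th: np.abs(y - th).sum()

theta, eps = 0.0, 1e-9                     # eps guards a vanishing residual
vals = [f(theta)]
for _ in range(200):
    w = 1.0 / np.maximum(np.abs(y - theta), eps)
    theta = np.sum(w * y) / np.sum(w)      # argmin of the quadratic g_i
    vals.append(f(theta))

# Property (4.9.24), up to the tiny perturbation introduced by eps
monotone = all(b <= a + 1e-8 for a, b in zip(vals, vals[1:]))
```

The inequality used is just $(|u| - |u_i|)^2 \ge 0$ rearranged, which is how majorizers are often found in practice: an elementary bound that is tight at the current iterate.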
In general it appears difficult or impossible to do so; nor can the majorization technique be obtained as a special case of a cyclic minimizer. Hence, these two iterative minimization techniques appear to have "independent lives". To draw more parallels between the cyclic minimizer and the majorization technique, we remark on the fact that in the former the user has to choose the partitioning of $\theta$ that makes the minimization in, e.g., (4.9.25) "easy", whereas in the latter a function $g_i(\theta)$ has to be found that is not only "easy" to minimize but also possesses the essential property (4.9.28). Fortunately for the majorization approach, finding such functions $g_i(\theta)$ is not as hard as it may at first seem. Below we will develop a method for constructing a function $g_i(\theta)$ possessing the desired properties (4.9.27) and (4.9.28) for a general class of functions $f(\theta)$ (including the NLS criterion) that are commonly encountered in parameter estimation applications.

EM Algorithm

The NLS criterion (see (4.3.1)),

$$f(\theta) = \sum_{t=1}^N \left| y(t) - \sum_{k=1}^n \alpha_k e^{i(\omega_k t + \varphi_k)} \right|^2 \tag{4.9.32}$$

where $\theta$ is defined in (4.9.26), is obtained from the data equation (4.1.1) in which the noise $\{e(t)\}$ is assumed to be circular and white with mean zero and variance $\sigma^2$. Let us also assume that $\{e(t)\}$ is Gaussian distributed. Then the probability density function of the data vector $y = [y(1), \ldots, y(N)]^T$, for given $\theta$, is

$$p(y, \theta) = \frac{1}{(\pi\sigma^2)^N}\,e^{-f(\theta)/\sigma^2} \tag{4.9.33}$$

where $f(\theta)$ is as defined in (4.9.32) above. The method of maximum likelihood (ML) obtains an estimate of $\theta$ by maximizing (4.9.33) (see (B.1.7) in Appendix B) or, equivalently, by minimizing the so-called negative log-likelihood function:

$$-\ln p(y, \theta) = \text{constant} + N\ln\sigma^2 + \frac{f(\theta)}{\sigma^2} \tag{4.9.34}$$

Minimizing (4.9.34) with respect to $\theta$ is equivalent to minimizing (4.9.32), which shows that the NLS method is identical to the ML method under the assumption that $\{e(t)\}$ is Gaussian white noise.
ML is without a doubt the most widely studied method of parameter estimation. In what follows we assume that this is the method used for parameter estimation, and hence that the function we want to minimize with respect to $\theta$ is the negative log-likelihood:

$$f(\theta) = -\ln p(y, \theta) \tag{4.9.35}$$

Our main goal in this subsection is to show how to construct a majorizing function for the estimation criterion in (4.9.35), and how the use of the corresponding majorization technique leads to the expectation-maximization (EM) algorithm introduced in [Dempster, Laird, and Rubin 1977] (see also [McLachlan and Krishnan 1997] and [Moon 1996] for more recent and detailed accounts of the EM algorithm).

A notation that will be frequently used below concerns the expectation with respect to the distribution of a certain random vector, let us say $z$, which we will denote by $E_z\{\cdot\}$. When the distribution concerned is conditioned on another random vector, let us say $y$, we will use the notation $E_{z|y}\{\cdot\}$. If we also want to stress the dependence of the distribution (with respect to which the expectation is taken) on a certain parameter vector $\theta$, then we write $E_{z|y,\theta}\{\cdot\}$.

The main result which we will use in the following is Jensen's inequality. It asserts that for any concave function $h(x)$, where $x$ is a random vector, the following inequality holds:

$$E\{h(x)\} \le h(E\{x\}) \tag{4.9.36}$$

The proof of (4.9.36) is simple. Let $d(x)$ denote the plane tangent to $h(x)$ at the point $E\{x\}$. Then

$$E\{h(x)\} \le E\{d(x)\} = d(E\{x\}) = h(E\{x\}) \tag{4.9.37}$$

which proves (4.9.36). The inequality in (4.9.37) follows from the concavity of $h(x)$; the first equality follows from the fact that $d(x)$ is a linear function of $x$, and the second equality from the fact that $d(x)$ is tangent (and hence equal) to $h(x)$ at the point $E\{x\}$.

Remark: We note in passing that, despite its simplicity, Jensen's inequality is a powerful analysis tool.
As a simple illustration of this fact, consider a scalar random variable $x$ with a discrete probability distribution:

$$\Pr\{x = x_k\} = p_k, \quad k = 1, \ldots, M$$

Then, using (4.9.36) and the fact that the logarithm is a concave function, we obtain (assuming $x_k > 0$)

$$E\{\ln(x)\} = \sum_{k=1}^M p_k\ln(x_k) \le \ln\left[E\{x\}\right] = \ln\left[\sum_{k=1}^M p_k x_k\right]$$

or, equivalently,

$$\sum_{k=1}^M p_k x_k \ge \prod_{k=1}^M x_k^{p_k} \quad \left(\text{for } x_k > 0 \text{ and } \sum_{k=1}^M p_k = 1\right) \tag{4.9.38}$$

For $p_k = 1/M$, (4.9.38) reduces to the well-known inequality between the arithmetic and geometric means:

$$\frac{1}{M}\sum_{k=1}^M x_k \ge \left(\prod_{k=1}^M x_k\right)^{1/M}$$

which is so easily obtained in the present framework. ■

After these preparations, we turn our attention to the main question of finding a majorizing function for (4.9.35). Let $z$ be a random vector whose probability density function conditioned on $y$ is completely determined by $\theta$, and let

$$g_i(\theta) = f(\theta_i) - E_{z|y,\theta_i}\left\{\ln\left[\frac{p(y, z, \theta)}{p(y, z, \theta_i)}\right]\right\} \tag{4.9.39}$$

Clearly $g_i(\theta)$ satisfies:

$$g_i(\theta_i) = f(\theta_i) \tag{4.9.40}$$

Furthermore, it follows from Jensen's inequality (4.9.36), the concavity of the function $\ln(\cdot)$, and Bayes' rule for conditional probabilities that:

$$g_i(\theta) \ge f(\theta_i) - \ln\left[E_{z|y,\theta_i}\left\{\frac{p(y, z, \theta)}{p(y, z, \theta_i)}\right\}\right] = f(\theta_i) - \ln\left[E_{z|y,\theta_i}\left\{\frac{p(y, z, \theta)}{p(z|y, \theta_i)\,p(y, \theta_i)}\right\}\right]$$
$$= f(\theta_i) - \ln\Biggl[\frac{1}{p(y, \theta_i)}\underbrace{\int p(y, z, \theta)\,dz}_{p(y,\theta)}\Biggr] = f(\theta_i) - \ln\left[\frac{p(y, \theta)}{p(y, \theta_i)}\right] = f(\theta_i) + f(\theta) - f(\theta_i) = f(\theta) \tag{4.9.41}$$

which shows that the function $g_i(\theta)$ in (4.9.39) also satisfies the key majorization condition (4.9.28). Usually, $z$ is called the unobserved data (to distinguish it from the observed data vector $y$); the combination $(z, y)$ is called the complete data, while $y$ is called the incomplete data.
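As an illustration outside the sinusoidal setting (a standard textbook exercise, not taken from this complement), the construction above can be exercised on a two-component Gaussian mixture with unit variances and equal weights, taking the unobserved data $z$ to be the component labels. The conditional expectation then reduces to posterior membership probabilities, and its maximization to weighted means:

```python
import numpy as np

rng = np.random.default_rng(1)
# Observed (incomplete) data: mixture 0.5*N(-2,1) + 0.5*N(2,1);
# the unobserved data z are the component labels.
y = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

def loglik(mu):
    p = 0.5 * np.exp(-0.5 * (y[:, None] - mu) ** 2) / np.sqrt(2 * np.pi)
    return np.log(p.sum(axis=1)).sum()

mu = np.array([-1.0, 1.0])                 # theta_0
ll = [loglik(mu)]
for _ in range(50):
    # E-step: posterior probability of each label under theta_i
    p = np.exp(-0.5 * (y[:, None] - mu) ** 2)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: maximize the expected complete-data log-likelihood
    mu = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
    ll.append(loglik(mu))

# The negative log-likelihood is monotonically reduced, per (4.9.40)-(4.9.41)
ascending = all(b >= a - 1e-8 for a, b in zip(ll, ll[1:]))
```

The monotone increase of the log-likelihood observed here is exactly the majorization property derived above, specialized to this choice of $z$.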
It follows from (4.9.40) and (4.9.41), along with the discussion in the previous subsection about the majorization approach, that the following algorithm monotonically reduces the negative log-likelihood function at each iteration.

The Expectation-Maximization (EM) Algorithm

$$\begin{aligned}
&\theta_0 = \text{given}\\
&\text{For } i = 0, 1, 2, \ldots:\\
&\quad\text{Expectation step: evaluate } E_{z|y,\theta_i}\{\ln p(y, z, \theta)\} \triangleq g_i(\theta)\\
&\quad\text{Maximization step: compute } \theta_{i+1} = \arg\max_\theta g_i(\theta)
\end{aligned} \tag{4.9.42}$$

This is the EM algorithm in a nutshell. An important aspect of the EM algorithm, which must be considered in every application, is the choice of the unobserved data vector $z$. This choice should be made such that the maximization step of (4.9.42) is "easy" or, in any case, much easier than the maximization of the likelihood function. In general, doing so is not an easy task. In addition, the evaluation of the conditional expectation in (4.9.42) may also be rather challenging. Somewhat paradoxically, these difficulties associated with the EM algorithm may have been a cause of its considerable popularity. Indeed, the detailed derivation of the EM algorithm for a particular application is a more challenging research problem (and hence more appealing to many researchers) than, for instance, the derivation of a cyclic minimizer (which also possesses the key property (4.9.24) of the EM algorithm).

4.9.6 Frequency-Selective ESPRIT-Based Method

In several applications of spectral analysis, the user is interested only in the components lying in a small frequency band of the spectrum. A frequency-selective method deals precisely with this kind of spectral analysis: it estimates the parameters of only those sinusoidal components in the data which lie in a pre-specified band of the spectrum, with as little interference as possible from the out-of-band components and in a computationally efficient way.
To be more specific, let us consider the sinusoidal data model in (4.1.1):

$$y(t) = \sum_{k=1}^{\bar n} \beta_k e^{i\omega_k t} + e(t); \qquad \beta_k = \alpha_k e^{i\varphi_k}, \quad t = 0, \ldots, N-1 \tag{4.9.43}$$

In some applications (see, e.g., [McKelvey and Viberg 2001; Stoica, Sandgren, Selén, Vanhamme, and Van Huffel 2003] and the references therein) it would be computationally too intensive to estimate the parameters of all components in (4.9.43). For instance, this is the case when $\bar n$ takes on values close to $N$, or when $\bar n \ll N$ but we have many sets of data to process. In such applications, because of computational and other reasons (see points (i) and (ii) below for details), we focus on only those components of (4.9.43) that are of direct interest to us.

Let us assume that the components of interest lie in a pre-specified frequency band comprised by the following Fourier frequencies:

$$\left\{\frac{2\pi}{N}k_1,\ \frac{2\pi}{N}k_2,\ \ldots,\ \frac{2\pi}{N}k_M\right\} \tag{4.9.44}$$

where $\{k_1, \ldots, k_M\}$ are $M$ given (typically consecutive) integers. We assume that the number of components of (4.9.43) lying in (4.9.44), which we denote by

$$n \le \bar n \tag{4.9.45}$$

is given. If $n$ is a priori unknown, then it could be estimated from the data by the methods described in Appendix C.

Our problem is to estimate the parameters of the $n$ components of (4.9.43) that lie in the frequency band (4.9.44). Furthermore, we want to find a solution to this frequency-selective estimation problem that has the following properties:

(i) It is computationally efficient. In particular, the computational complexity of such a solution should be comparable with that of a standard ESPRIT method for a sinusoidal model with $n$ components.

(ii) It is statistically accurate. To be more specific about this aspect we will split the discussion in two parts.
From a theoretical standpoint, estimating $n < \bar n$ components of (4.9.43) (in the presence of the remaining components and noise) cannot produce more accurate estimates than estimating all $\bar n$ components. However, for a good frequency-selective method the degradation of theoretical statistical accuracy should not be significant. On the other hand, from a practical standpoint, a sound frequency-selective method may give better performance than a non-frequency-selective counterpart that deals with all $\bar n$ components of (4.9.43). This is so because some components of (4.9.43) that do not belong to (4.9.44) may not be well described by a sinusoidal model; consequently, treating such components as interference and eliminating them from the model may improve the estimation accuracy of the components of interest.

In this complement, following [McKelvey and Viberg 2001] and [Stoica, Sandgren, Selén, Vanhamme, and Van Huffel 2003], we present a frequency-selective ESPRIT-based (FRES-ESPRIT) method that possesses the above two desirable features. The following notation will be used frequently below:

$$w_k = e^{i\frac{2\pi}{N}k}, \quad k = 0, 1, \ldots, N-1 \tag{4.9.46}$$
$$u_k = [w_k, \ldots, w_k^m]^T \tag{4.9.47}$$
$$v_k = [1, w_k, \ldots, w_k^{N-1}]^T \tag{4.9.48}$$
$$y = [y(0), \ldots, y(N-1)]^T \tag{4.9.49}$$
$$Y_k = v_k^*\,y, \quad k = 0, 1, \ldots, N-1 \tag{4.9.50}$$
$$e = [e(0), \ldots, e(N-1)]^T \tag{4.9.51}$$
$$E_k = v_k^*\,e, \quad k = 0, 1, \ldots, N-1 \tag{4.9.52}$$
$$a(\omega) = \left[e^{i\omega}, \ldots, e^{im\omega}\right]^T \tag{4.9.53}$$
$$b(\omega) = \left[1, e^{i\omega}, \ldots, e^{i(N-1)\omega}\right]^T \tag{4.9.54}$$

Hereafter, $m$ is a user parameter whose choice will be discussed later on. Note that $\{Y_k\}$ is the FFT of the data.

First, we show that the following key equation involving the FFT sequence $\{Y_k\}$ holds true:

$$u_k Y_k = [a(\omega_1), \ldots, a(\omega_{\bar n})]\begin{bmatrix} \beta_1 v_k^* b(\omega_1) \\ \vdots \\ \beta_{\bar n} v_k^* b(\omega_{\bar n}) \end{bmatrix} + \Gamma u_k + u_k E_k \tag{4.9.55}$$

where $\Gamma$ is an $m \times m$ matrix defined in equation (4.9.61) below (as will become clear shortly, the exact definition of $\Gamma$ has no importance for what follows, and hence it is not repeated here). To prove (4.9.55), we first write the data vector $y$ as

$$y = \sum_{\ell=1}^{\bar n} \beta_\ell b(\omega_\ell) + e \tag{4.9.56}$$

Next, we note that (for $p = 1, \ldots, m$):

$$w_k^p\,[v_k^* b(\omega)] = \sum_{t=0}^{N-1} e^{i(\omega - \frac{2\pi}{N}k)t}\,e^{i\frac{2\pi}{N}kp} = e^{i\omega p}\sum_{t=0}^{N-1} e^{i(\omega - \frac{2\pi}{N}k)(t-p)}$$
$$= e^{i\omega p}[v_k^* b(\omega)] + e^{i\omega p}\left[\sum_{t=0}^{p-1} e^{i\omega(t-p)}e^{-i\frac{2\pi}{N}k(t-p)} - \sum_{t=N}^{N+p-1} e^{i\omega(t-p)}e^{-i\frac{2\pi}{N}k(t-p)}\right]$$
$$= e^{i\omega p}[v_k^* b(\omega)] + e^{i\omega p}\sum_{\ell=1}^{p}\left[e^{-i\omega\ell}e^{i\frac{2\pi}{N}k\ell} - e^{i\omega(N-\ell)}e^{i\frac{2\pi}{N}k\ell}\right]$$
$$= e^{i\omega p}[v_k^* b(\omega)] + \sum_{\ell=1}^{p} e^{i\omega(p-\ell)}\left(1 - e^{i\omega N}\right)w_k^\ell \tag{4.9.57}$$

Let (for $p = 1, \ldots, m$):

$$\gamma_p^*(\omega) = \left(1 - e^{i\omega N}\right)\left[e^{i\omega(p-1)}, e^{i\omega(p-2)}, \ldots, e^{i\omega}, 1, 0, \ldots, 0\right] \quad (1 \times m) \tag{4.9.58}$$

Using (4.9.58) we can rewrite (4.9.57) in the following more compact form (for $p = 1, \ldots, m$):

$$w_k^p\,[v_k^* b(\omega)] = e^{i\omega p}[v_k^* b(\omega)] + \gamma_p^*(\omega)\,u_k \tag{4.9.59}$$

or, equivalently,

$$u_k\,[v_k^* b(\omega)] = a(\omega)[v_k^* b(\omega)] + \begin{bmatrix} \gamma_1^*(\omega) \\ \vdots \\ \gamma_m^*(\omega) \end{bmatrix} u_k \tag{4.9.60}$$

From (4.9.56) and (4.9.60) it follows that

$$u_k Y_k = \sum_{\ell=1}^{\bar n} \beta_\ell\,u_k [v_k^* b(\omega_\ell)] + u_k E_k = [a(\omega_1), \ldots, a(\omega_{\bar n})]\begin{bmatrix} \beta_1 v_k^* b(\omega_1) \\ \vdots \\ \beta_{\bar n} v_k^* b(\omega_{\bar n}) \end{bmatrix} + \left\{\sum_{\ell=1}^{\bar n} \beta_\ell \begin{bmatrix} \gamma_1^*(\omega_\ell) \\ \vdots \\ \gamma_m^*(\omega_\ell) \end{bmatrix}\right\} u_k + u_k E_k \tag{4.9.61}$$

which proves (4.9.55).

In the following we let $\{\omega_k\}_{k=1}^n$ denote the frequencies of interest, i.e., those frequencies of (4.9.43) that lie in (4.9.44). To separate the terms in (4.9.55) corresponding to the components of interest from those associated with the nuisance components, we use the notation

$$A = [a(\omega_1), \ldots, a(\omega_n)] \tag{4.9.62}$$

$$x_k = \begin{bmatrix} \beta_1 v_k^* b(\omega_1) \\ \vdots \\ \beta_n v_k^* b(\omega_n) \end{bmatrix} \tag{4.9.63}$$

for the components of interest, and similarly $\tilde A$ and $\tilde x_k$ for the other components. Finally, to write equation (4.9.55) for $k = k_1, \ldots$
, $k_M$ in a compact matrix form, we need the following additional notation:

$$Y = [u_{k_1}Y_{k_1}, \ldots, u_{k_M}Y_{k_M}] \quad (m \times M) \tag{4.9.64}$$
$$E = [u_{k_1}E_{k_1}, \ldots, u_{k_M}E_{k_M}] \quad (m \times M) \tag{4.9.65}$$
$$U = [u_{k_1}, \ldots, u_{k_M}] \quad (m \times M) \tag{4.9.66}$$
$$X = [x_{k_1}, \ldots, x_{k_M}] \quad (n \times M) \tag{4.9.67}$$

and similarly for $\tilde X$. Using this notation, we can write (4.9.55) (for $k = k_1, \ldots, k_M$) as follows:

$$Y = AX + \Gamma U + \tilde A\tilde X + E \tag{4.9.68}$$

Next we assume that

$$M \ge n + m \tag{4.9.69}$$

which can be satisfied by choosing the user parameter $m$ appropriately. Under (4.9.69) (in fact only $M \ge m$ is required for this part), the orthogonal projection matrix onto the null space of $U$ is given by (see Appendix A):

$$\Pi_U^\perp = I - U^*(UU^*)^{-1}U \tag{4.9.70}$$

We will eliminate the second term in (4.9.68) by post-multiplying (4.9.68) with $\Pi_U^\perp$ (see below). However, before doing so we make the following observations about the third and fourth terms in (4.9.68):

(a) The elements of the noise term $E$ in (4.9.68) are much smaller than the elements of $AX$. In effect, it can be shown that $E_k = O(N^{1/2})$ (stochastically), whereas the order of the elements of $X$ is typically $O(N)$.

(b) Assuming that the out-of-band components are not much stronger than the components of interest, and that the frequencies of the former are not too close to the interval of interest in (4.9.44), the elements of $\tilde X$ are also much smaller than the elements of $X$.

(c) To understand what happens when the assumption made in (b) above does not hold, let us consider a generic out-of-band component $(\omega, \beta)$. The part of $y$ corresponding to this component can be written as $\beta b(\omega)$. Hence, the corresponding part in $u_k Y_k$ is given by $\beta u_k[v_k^* b(\omega)]$ and, consequently, the part of $Y$ due to this generic component is

$$\beta\,U\begin{bmatrix} v_{k_1}^* b(\omega) & & 0 \\ & \ddots & \\ 0 & & v_{k_M}^* b(\omega) \end{bmatrix} \tag{4.9.71}$$

Even if $\omega$ is relatively close to the band of interest (4.9.44), we may expect that $v_k^* b(\omega)$ does not vary significantly for $k \in [k_1, k_M]$ (in other words, the "spectral tail" of the out-of-band component may well have a small dynamic range in the interval of interest). As a consequence, the matrix in (4.9.71) will be approximately proportional to $U$, and hence it will be attenuated via post-multiplication by $\Pi_U^\perp$ (see below). A similar argument shows that the noise term in (4.9.68) is also attenuated by post-multiplying (4.9.68) with $\Pi_U^\perp$.

It follows from the above discussion and (4.9.68) that

$$Y\Pi_U^\perp \simeq AX\Pi_U^\perp \tag{4.9.72}$$

This equation resembles equation (4.7.7), on which the standard ESPRIT method is based, provided that

$$\operatorname{rank}\left(X\Pi_U^\perp\right) = n \tag{4.9.73}$$

(similarly to $\operatorname{rank}(C) = n$ for (4.7.7)). In the following we prove that (4.9.73) holds under (4.9.69) and the regularity condition that $e^{iN\omega_k} \ne 1$ (for $k = 1, \ldots, n$).

To prove (4.9.73) we first note that $\operatorname{rank}(\Pi_U^\perp) = M - m$, which implies that $M \ge m + n$ (i.e., (4.9.69)) is a necessary condition for (4.9.73) to hold. Next we show that (4.9.73) is equivalent to

$$\operatorname{rank}\begin{bmatrix} X \\ U \end{bmatrix} = m + n \tag{4.9.74}$$

To verify this equivalence, let us decompose $X$ additively as follows:

$$X = X\Pi_U + X\Pi_U^\perp = XU^*(UU^*)^{-1}U + XV^*V \tag{4.9.75}$$

where the $M \times (M - m)$ matrix $V^*$ comprises a unitary basis of $\mathcal{N}(U)$; hence $UV^* = 0$ and $VV^* = I$. Now, the matrix in (4.9.74) has the same rank as

$$\begin{bmatrix} I & -XU^*(UU^*)^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} X \\ U \end{bmatrix} = \begin{bmatrix} XV^*V \\ U \end{bmatrix} \tag{4.9.76}$$

(we used (4.9.75) to obtain (4.9.76)), which, in turn, has the same rank as

$$\begin{bmatrix} XV^*V \\ U \end{bmatrix}\begin{bmatrix} V^*VX^* & U^* \end{bmatrix} = \begin{bmatrix} XV^*VX^* & 0 \\ 0 & UU^* \end{bmatrix} \tag{4.9.77}$$

However, $\operatorname{rank}(UU^*) = m$. Hence, (4.9.74) holds if and only if

$$\operatorname{rank}(XV^*VX^*) = n$$

As

$$\operatorname{rank}(XV^*VX^*) = \operatorname{rank}(X\Pi_U^\perp X^*) = \operatorname{rank}(X\Pi_U^\perp)$$

the equivalence between (4.9.73) and (4.9.74) is proven.
It follows from the equivalence shown above and the definitions of $X$ and $U$ that we want to prove that

$$\operatorname{rank}\underbrace{\begin{bmatrix} v_{k_1}^* b(\omega_1) & \cdots & v_{k_M}^* b(\omega_1) \\ \vdots & & \vdots \\ v_{k_1}^* b(\omega_n) & \cdots & v_{k_M}^* b(\omega_n) \\ u_{k_1} & \cdots & u_{k_M} \end{bmatrix}}_{(n+m)\times M} = n + m \tag{4.9.78}$$

As

$$v_k^* b(\omega) = \sum_{t=0}^{N-1} e^{i(\omega - \frac{2\pi}{N}k)t} = \frac{1 - e^{iN(\omega - \frac{2\pi}{N}k)}}{1 - e^{i(\omega - \frac{2\pi}{N}k)}} = \left(1 - e^{iN\omega}\right)\frac{w_k}{w_k - e^{i\omega}}$$

we can rewrite the matrix in (4.9.78) as follows:

$$\begin{bmatrix} 1 - e^{iN\omega_1} & & & & & \\ & \ddots & & & 0 & \\ & & 1 - e^{iN\omega_n} & & & \\ & & & 1 & & \\ & 0 & & & \ddots & \\ & & & & & 1 \end{bmatrix}\begin{bmatrix} \dfrac{w_{k_1}}{w_{k_1} - e^{i\omega_1}} & \cdots & \dfrac{w_{k_M}}{w_{k_M} - e^{i\omega_1}} \\ \vdots & & \vdots \\ \dfrac{w_{k_1}}{w_{k_1} - e^{i\omega_n}} & \cdots & \dfrac{w_{k_M}}{w_{k_M} - e^{i\omega_n}} \\ w_{k_1} & \cdots & w_{k_M} \\ \vdots & & \vdots \\ w_{k_1}^m & \cdots & w_{k_M}^m \end{bmatrix} \tag{4.9.79}$$

Because, by assumption, $1 - e^{iN\omega_k} \ne 0$ (for $k = 1, \ldots, n$), it follows that (4.9.78) holds if and only if the second matrix in (4.9.79) has full row rank (under (4.9.69)), which holds true if and only if we cannot find numbers $\{\rho_k\}_{k=1}^{m+n}$ (not all zero) such that

$$\frac{\rho_1 z}{z - e^{i\omega_1}} + \cdots + \frac{\rho_n z}{z - e^{i\omega_n}} + \rho_{n+1}z + \cdots + \rho_{n+m}z^m = z\left[\frac{\rho_1}{z - e^{i\omega_1}} + \cdots + \frac{\rho_n}{z - e^{i\omega_n}} + \rho_{n+1} + \cdots + \rho_{n+m}z^{m-1}\right] \tag{4.9.80}$$

is equal to zero at $z = w_{k_1}, \ldots, z = w_{k_M}$. However, (4.9.80) can have only $m + n - 1 < M$ zeroes of the above form. With this observation, the proof of (4.9.73) is concluded.

To make use of (4.9.72) and (4.9.73) in an ESPRIT-like approach, we also assume that

$$m \ge n \tag{4.9.81}$$

(which is an easily satisfied condition). Then it follows from (4.9.72) and (4.9.73) that the effective rank of the "data" matrix $Y\Pi_U^\perp$ is $n$, and that

$$\hat S \simeq A\hat C \tag{4.9.82}$$

where $\hat C$ is an $n \times n$ nonsingular transformation matrix, and

$\hat S$ = the $m \times n$ matrix whose columns are the left singular vectors of $Y\Pi_U^\perp$ associated with the $n$ largest singular values.   (4.9.83)

Equation (4.9.82) is very similar to (4.7.7), and hence it can be used in an ESPRIT-like approach to estimate the frequencies $\{\omega_k\}_{k=1}^n$.
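The rank result and the ESPRIT step can be checked numerically. The sketch below is an illustration with arbitrarily chosen $N$, band, and frequencies (not from the text); it is noise-free and has no out-of-band components, so that (4.9.72) holds with equality, and it recovers the in-band frequencies from the principal left singular vectors of $Y\Pi_U^\perp$ via shift invariance:

```python
import numpy as np

N, n = 64, 2
ks = np.arange(10, 26)                         # band: M = 16 Fourier indices
M = len(ks)
m = M // 2                                     # m = floor(M/2); n <= m, M >= n + m
om = 2 * np.pi * np.array([12.3, 17.7]) / N    # in-band, with e^{iN om} != 1
beta = np.array([1.0, 0.7 * np.exp(0.4j)])

t = np.arange(N)
y = (beta[None, :] * np.exp(1j * np.outer(t, om))).sum(axis=1)

w = np.exp(2j * np.pi * ks / N)                # w_k on the band
U = w[None, :] ** np.arange(1, m + 1)[:, None] # columns u_k = [w_k, ..., w_k^m]^T
Y = U * np.fft.fft(y)[ks][None, :]             # columns u_k Y_k, with Y_k = v_k^* y
P = np.eye(M) - U.conj().T @ np.linalg.solve(U @ U.conj().T, U)  # Pi_U^perp

S, s, _ = np.linalg.svd(Y @ P)                 # singular values reveal rank n
S = S[:, :n]                                   # n principal left singular vectors
# ESPRIT step: shift invariance of S ~ A*C gives the frequencies
Phi = np.linalg.lstsq(S[:-1], S[1:], rcond=None)[0]
om_hat = np.sort(np.angle(np.linalg.eigvals(Phi)))
```

In this idealized setting the singular values of $Y\Pi_U^\perp$ drop to machine precision after the first $n$, and the eigenvalue angles reproduce $\{\omega_k\}$ essentially exactly.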
Following the frequency estimation step, the amplitudes $\{\beta_k\}_{k=1}^n$ can be estimated, for instance, as described in [McKelvey and Viberg 2001; Stoica, Sandgren, Selén, Vanhamme, and Van Huffel 2003].

An implementation detail that we would like to address, at least briefly, is the choice of $m$. We recommend choosing $m$ as the integer part of $M/2$:

$$m = \lfloor M/2\rfloor \tag{4.9.84}$$

provided that $\lfloor M/2\rfloor \in [n, M-n]$, to satisfy the assumptions in (4.9.69) and (4.9.81). To motivate the above choice of $m$, we refer to the matrix equation (4.9.72) that lies at the basis of the proposed estimation approach. Previous experience with ESPRIT, MUSIC, and other similar approaches has shown that their accuracy increases as the number of independent equations in (4.9.72) (and its counterparts) increases. The matrix $Y\Pi_U^\perp$ in (4.9.72) is $m \times M$ and its rank is generically equal to

$$\min\{\operatorname{rank}(Y),\ \operatorname{rank}(\Pi_U^\perp)\} = \min(m,\ M-m) \tag{4.9.85}$$

Evidently the above rank determines the aforementioned number of linearly independent equations in (4.9.72). Hence, for enhanced estimation accuracy we should maximize (4.9.85) with respect to $m$: the solution is clearly given by (4.9.84).

To end this complement we show that, interestingly, the proposed FRES-ESPRIT method with $M = N$ is equivalent to the standard ESPRIT method. For $M = N$ we have that

$$[b_1, \ldots, b_N] \triangleq \begin{bmatrix} w_1 & \cdots & w_N \\ w_1^2 & \cdots & w_N^2 \\ \vdots & & \vdots \\ w_1^N & \cdots & w_N^N \end{bmatrix} = \begin{bmatrix} U \\ \bar U \end{bmatrix}\begin{matrix} \}\ m \\ \}\ N-m \end{matrix} \tag{4.9.86}$$

where $U$ is as defined before (with $M = N$) and $\bar U$ is defined via (4.9.86). Note that:

$$UU^* = NI; \quad \bar U\bar U^* = NI; \quad U\bar U^* = 0; \quad U^*U + \bar U^*\bar U = NI \tag{4.9.87}$$

Hence

$$\Pi_U^\perp = I - \frac{1}{N}U^*U = \frac{1}{N}\bar U^*\bar U \tag{4.9.88}$$

Also, note that (for $p = 1, \ldots, m$):

$$w_k^p Y_k = \sum_{t=0}^{N-1} y(t)e^{-i\frac{2\pi}{N}k(t-p)} = \sum_{t=0}^{p-1} y(t)w_k^{p-t} + \sum_{t=p}^{N-1} y(t)w_k^{N+p-t}$$
$$= [y(p-1), \ldots, y(0), 0, \ldots, 0]\begin{bmatrix} w_k \\ \vdots \\ w_k^m \end{bmatrix} + [0, \ldots, 0, y(N-1), \ldots, y(p)]\begin{bmatrix} w_k \\ \vdots \\ w_k^N \end{bmatrix} \triangleq \mu_p^* u_k + \psi_p^* b_k \tag{4.9.89}$$

where $u_k$ and $b_k$ are as defined before (see (4.9.47) and (4.9.86)). Consequently, for $M = N$, the "data" matrix $Y\Pi_U^\perp$ used in the FRES-ESPRIT method can be written as (cf. (4.9.86)-(4.9.89)):

$$[u_1Y_1, \ldots, u_NY_N]\,\Pi_U^\perp = \left\{\begin{bmatrix} \mu_1^* \\ \vdots \\ \mu_m^* \end{bmatrix}[u_1, \ldots, u_N] + \begin{bmatrix} \psi_1^* \\ \vdots \\ \psi_m^* \end{bmatrix}[b_1, \ldots, b_N]\right\}\frac{1}{N}\bar U^*\bar U$$
$$= \left\{\begin{bmatrix} \mu_1^* \\ \vdots \\ \mu_m^* \end{bmatrix}U + \begin{bmatrix} \psi_1^* \\ \vdots \\ \psi_m^* \end{bmatrix}\begin{bmatrix} U \\ \bar U \end{bmatrix}\right\}\frac{1}{N}\bar U^*\bar U = \begin{bmatrix} \psi_1^* \\ \vdots \\ \psi_m^* \end{bmatrix}\begin{bmatrix} 0 \\ \bar U \end{bmatrix} = \begin{bmatrix} y(N-m) & \cdots & y(1) \\ y(N-m+1) & \cdots & y(2) \\ \vdots & & \vdots \\ y(N-1) & \cdots & y(m) \end{bmatrix}\bar U \tag{4.9.90}$$

It follows from (4.9.90) that the $n$ principal (or dominant) left singular vectors of $Y\Pi_U^\perp$ are equal to the $n$ principal eigenvectors of the following matrix (obtained by post-multiplying the right-hand side of (4.9.90) by its conjugate transpose and using the fact that $\bar U\bar U^* = NI$ from (4.9.87)):

$$\begin{bmatrix} y(N-m) & \cdots & y(1) \\ \vdots & & \vdots \\ y(N-1) & \cdots & y(m) \end{bmatrix}\begin{bmatrix} y^*(N-m) & \cdots & y^*(N-1) \\ \vdots & & \vdots \\ y^*(1) & \cdots & y^*(m) \end{bmatrix} = \sum_{t=1}^{N-m}\begin{bmatrix} y(t) \\ \vdots \\ y(t+m-1) \end{bmatrix}\left[y^*(t), \ldots, y^*(t+m-1)\right] \tag{4.9.91}$$

which is precisely the type of sample covariance matrix used in the standard ESPRIT method (compare with (4.5.14); the difference between (4.9.91) and (4.5.14) is due to some notational changes made in this complement, such as in the definition of the matrix $A$).

4.9.7 A Useful Result for Two-Dimensional (2D) Sinusoidal Signals

For a noise-free 1D sinusoidal signal,

$$y(t) = \sum_{k=1}^n \beta_k e^{i\omega_k t}, \quad t = 0, 1, 2, \ldots \tag{4.9.92}$$

a data vector of length $m$ can be written as

$$\begin{bmatrix} y(0) \\ y(1) \\ \vdots \\ y(m-1) \end{bmatrix} = \begin{bmatrix} 1 & \cdots & 1 \\ e^{i\omega_1} & \cdots & e^{i\omega_n} \\ \vdots & & \vdots \\ e^{i(m-1)\omega_1} & \cdots & e^{i(m-1)\omega_n} \end{bmatrix}\begin{bmatrix} \beta_1 \\ \vdots \\ \beta_n \end{bmatrix} \triangleq A\beta \tag{4.9.93}$$

The matrix $A$ introduced above is the complex conjugate of the one in (4.2.4).
In this complement we prefer to work with the type of A matrix in (4.9.93), to simplify the notation, but note that the following discussion applies without change to the complex conjugate of the above A as well (or, to its extension to 2D sinusoidal signals).

Let {c_k}_{k=1}^n be uniquely defined via the equation:

\[ 1 + c_1 z + \cdots + c_n z^n = \prod_{k=1}^n \left(1 - z e^{-i\omega_k}\right) \tag{4.9.94} \]

Then, it can be readily checked (see (4.5.21)) that the matrix

\[ C^* = \begin{bmatrix} 1 & c_1 & \cdots & c_n & & 0 \\ & \ddots & \ddots & & \ddots & \\ 0 & & 1 & c_1 & \cdots & c_n \end{bmatrix}, \quad (m-n) \times m \tag{4.9.95} \]

satisfies

\[ C^* A = 0 \tag{4.9.96} \]

(to verify (4.9.96) it is enough to observe from (4.9.94) that 1 + c_1 e^{i\omega_k} + \cdots + c_n e^{in\omega_k} = 0 for k = 1, \ldots, n). Furthermore, as rank(C) = m-n and dim[N(A^*)] = m-n too, it follows from (4.9.96) that

C is a basis for the null space of A^*, N(A^*)   (4.9.97)

The matrix C plays an important role in the derivation and analysis of several frequency estimators; see, e.g., Section 4.5, [Bresler and Macovski 1986], and [Stoica and Sharman 1990]. In this complement we will extend the result (4.9.97) to 2D sinusoidal signals. The derivation of a result similar to (4.9.97) for such signals is a rather more difficult problem than in the 1D case. The solution that we will present was introduced in [Clark and Scharf 1994] (see also [Clark, Eldén, and Stoica 1997]). Using the extended result we can derive parameter estimation methods for 2D sinusoidal signals in much the same manner as for 1D signals (see the cited papers and Section 4.5).

A noise-free 2D sinusoidal signal is described by the equation (compare with (4.9.92)):

\[ y(t, \bar t) = \sum_{k=1}^n \beta_k e^{i\omega_k t} e^{i\bar\omega_k \bar t}, \quad t, \bar t = 0, 1, 2, \ldots \tag{4.9.98} \]
Let

\[ \gamma_k = e^{i\omega_k}, \quad \lambda_k = e^{i\bar\omega_k} \tag{4.9.99} \]

Using this notation allows us to write (4.9.98) in a more compact form,

\[ y(t, \bar t) = \sum_{k=1}^n \beta_k \gamma_k^t \lambda_k^{\bar t} \tag{4.9.100} \]

Moreover, equation (4.9.100) (unlike (4.9.98)) also covers the case of damped (2D) sinusoidal signals, for which

\[ \gamma_k = e^{\mu_k + i\omega_k}, \quad \lambda_k = e^{\bar\mu_k + i\bar\omega_k} \tag{4.9.101} \]

with {\mu_k, \bar\mu_k} being the damping parameters (\mu_k, \bar\mu_k \le 0).

The following notation will be frequently used in this complement:

\[ g_t^* = [\gamma_1^t, \ldots, \gamma_n^t] \tag{4.9.102} \]

\[ \Gamma = \operatorname{diag}(\gamma_1, \ldots, \gamma_n) \tag{4.9.103} \]

\[ \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n) \tag{4.9.104} \]

\[ \beta = [\beta_1, \ldots, \beta_n]^T \tag{4.9.105} \]

\[ A_L = \begin{bmatrix} 1 & \cdots & 1 \\ \lambda_1 & \cdots & \lambda_n \\ \vdots & & \vdots \\ \lambda_1^{L-1} & \cdots & \lambda_n^{L-1} \end{bmatrix} \quad \text{for } L \ge n \tag{4.9.106} \]

Using (4.9.102), (4.9.104), and (4.9.105) we can write:

\[ y(t, \bar t) = g_t^* \Lambda^{\bar t} \beta \tag{4.9.107} \]

Hence, similarly to (4.9.93), we can write the m\bar m \times 1 data vector obtained from (4.9.98) for t = 0, \ldots, m-1 and \bar t = 0, \ldots, \bar m - 1 as:

\[ \begin{bmatrix} y(0,0) \\ \vdots \\ y(0, \bar m - 1) \\ \vdots \\ y(m-1, 0) \\ \vdots \\ y(m-1, \bar m - 1) \end{bmatrix} = \begin{bmatrix} g_0^* \Lambda^0 \\ \vdots \\ g_0^* \Lambda^{\bar m - 1} \\ \vdots \\ g_{m-1}^* \Lambda^0 \\ \vdots \\ g_{m-1}^* \Lambda^{\bar m - 1} \end{bmatrix} \beta \triangleq A\beta \tag{4.9.108} \]

The matrix A defined above, i.e.,

\[ A = \begin{bmatrix} g_0^* \Lambda^0 \\ \vdots \\ g_0^* \Lambda^{\bar m - 1} \\ \vdots \\ g_{m-1}^* \Lambda^0 \\ \vdots \\ g_{m-1}^* \Lambda^{\bar m - 1} \end{bmatrix}, \quad (m\bar m \times n) \tag{4.9.109} \]

plays the same role for 2D sinusoidal signals as the matrix A in (4.9.93) for 1D signals. Therefore, it is the null space of (4.9.109) that we want to characterize.
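Before tackling the 2D null space, the 1D property (4.9.96) that the construction generalizes is easy to verify numerically. The following is a minimal sketch; the frequencies, and the dimensions n and m, are arbitrary illustration values:

```python
import numpy as np

n, m = 3, 8
omega = np.array([0.3, 1.1, 2.0])          # arbitrary test frequencies
lam = np.exp(1j * omega)                   # e^{i omega_k}

# A: m x n Vandermonde matrix as in (4.9.93)
A = lam[np.newaxis, :] ** np.arange(m).reshape(-1, 1)

# c: coefficients of 1 + c_1 z + ... + c_n z^n = prod_k (1 - z e^{-i omega_k}),
# built by convolving the first-order factors (ascending powers of z)
c = np.array([1.0 + 0j])
for lk in lam:
    c = np.convolve(c, np.array([1.0, -1.0 / lk]))

# C^*: the (m - n) x m banded matrix of (4.9.95)
Cstar = np.zeros((m - n, m), dtype=complex)
for j in range(m - n):
    Cstar[j, j:j + n + 1] = c

residual = np.abs(Cstar @ A).max()         # ~0, confirming (4.9.96)
```

Each row of `Cstar @ A` evaluates the polynomial of (4.9.94) at z = e^{i omega_k}, times a power of e^{i omega_k}, which vanishes by construction.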
More precisely, we want to find a linearly parameterized basis for the null space of the matrix A^* in (4.9.109), similar to the basis C for the A^* in (4.9.93) (see (4.9.97)).

Note that using (4.9.103) we can also write y(t, \bar t) as:

\[ y(t, \bar t) = [\lambda_1^{\bar t}, \ldots, \lambda_n^{\bar t}]\, \Gamma^t \beta \tag{4.9.110} \]

This means that A can also be written as follows:

\[ A = \begin{bmatrix} A_{\bar m}\Gamma^0 \\ \vdots \\ A_{\bar m}\Gamma^{m-1} \end{bmatrix} \tag{4.9.111} \]

Similarly to (4.9.94), let us define the parameters {c_k}_{k=1}^n uniquely via the equation

\[ 1 + c_1 z + \cdots + c_n z^n = \prod_{k=1}^n \left(1 - \frac{z}{\lambda_k}\right) \tag{4.9.112} \]

Note that there is a one-to-one mapping between {c_k} and {\lambda_k} (\lambda_k \ne 0). In particular, we can obtain {\lambda_k} uniquely from {c_k} (see [Stoica and Sharman 1990] for more details on this aspect in the case of {\lambda_k = e^{i\omega_k}}). Consequently, we can see the introduction of {c_k} as a new parameterization of the problem, which replaces the parameterization via {\lambda_k}. Using {c_k} we build the following matrix, similarly to (4.9.95), assuming \bar m > n:

\[ C^* = \begin{bmatrix} 1 & c_1 & \cdots & c_n & & 0 \\ & \ddots & \ddots & & \ddots & \\ 0 & & 1 & c_1 & \cdots & c_n \end{bmatrix}, \quad (\bar m - n) \times \bar m \tag{4.9.113} \]

and note that (cf. (4.9.96))

\[ C^* A_{\bar m} = 0 \tag{4.9.114} \]

It follows from (4.9.111) and (4.9.114) that

\[ \underbrace{\begin{bmatrix} C^* & & 0 \\ & \ddots & \\ 0 & & C^* \end{bmatrix}}_{[m(\bar m - n)] \times m\bar m} A = 0 \tag{4.9.115} \]

Hence, we have found (m\bar m - mn) vectors of the sought basis for N(A^*). It remains to find (m-1)n additional (linearly independent) vectors of this basis (note that dim[N(A^*)] = m\bar m - n). To find the remaining vectors we need an approach which is rather different from that used so far.

Let us assume that

\[ \lambda_k \ne \lambda_p \quad \text{for } k \ne p \tag{4.9.116} \]

and let the vector b^* = [b_1, \ldots, b_n] be defined via the linear (interpolation) equation

\[ b^* A_n = [\gamma_1, \ldots, \gamma_n] \tag{4.9.117} \]

(with A_n as defined in (4.9.106)).
Under (4.9.116) and for given {\lambda_k} there exists a one-to-one map between {b_k} and {\gamma_k}, and hence we can view the use of {b_k} as a reparameterization of the problem (note that if (4.9.116) does not hold, i.e., \lambda_k = \lambda_p, then, for identifiability reasons, we must have \gamma_k \ne \gamma_p, and therefore no vector b that satisfies (4.9.117) can exist). From (4.9.117) we obtain easily b^* A_n \Gamma^t = [\gamma_1, \ldots, \gamma_n]\Gamma^t = g_{t+1}^*, and hence (see also (4.9.109) and (4.9.111))

\[ b^* \begin{bmatrix} g_t^*\Lambda^0 \\ \vdots \\ g_t^*\Lambda^{n-1} \end{bmatrix} = b^* A_n \Gamma^t = g_{t+1}^*\Lambda^0 \tag{4.9.118} \]

Next, we assume that

\[ \bar m \ge 2n - 1 \tag{4.9.119} \]

which is a weak condition (typically we have m, \bar m \gg n). Under (4.9.119) we can write (making use of (4.9.118)):

\[ \underbrace{\begin{bmatrix} b^* & & 0 \\ & \ddots & \\ 0 & & b^* \end{bmatrix}}_{B^*} \begin{bmatrix} g_t^*\Lambda^0 \\ \vdots \\ g_t^*\Lambda^{\bar m - 1} \end{bmatrix} - \begin{bmatrix} g_{t+1}^*\Lambda^0 \\ \vdots \\ g_{t+1}^*\Lambda^{n-1} \end{bmatrix} = 0 \tag{4.9.120} \]

where

\[ B^* = \begin{bmatrix} b_1 & b_2 & \cdots & b_n & 0 & \cdots & 0 \\ & \ddots & & \ddots & & \ddots & \\ 0 & & b_1 & b_2 & \cdots & b_n & 0 \end{bmatrix} \quad (n \times \bar m) \]

Note that, indeed, we need \bar m \ge 2n-1 to be able to write (4.9.120) (if \bar m > 2n-1 then the rightmost \bar m - (2n-1) columns of B^* are zeroes). Combining (4.9.115) and (4.9.120) yields the following matrix whose rows lie in the left null space of A:

\[ \begin{bmatrix} D & I & & & 0 \\ & D & I & & \\ & & \ddots & \ddots & \\ 0 & & & D & I \\ & & & & C^* \end{bmatrix} \ \bigg\}\ m \text{ block rows} \tag{4.9.121} \]

where

\[ D = \begin{bmatrix} C^* \\ B^* \end{bmatrix} \quad (\bar m \times \bar m) \]

(with C^* as in (4.9.113), stacked on the \bar m - n rows, and B^* as above, on the remaining n rows), and

\[ I = \begin{bmatrix} 0_{(\bar m - n) \times \bar m} \\ [\, -I_n \ \ 0 \,] \end{bmatrix} \quad (\bar m \times \bar m) \]

i.e., the first \bar m - n rows of I are zero and its last n rows are [-I_n, 0]. The matrix in (4.9.121) is of dimension [(m-1)\bar m + (\bar m - n)] \times m\bar m, that is, (m\bar m - n) \times m\bar m, and its rank is equal to m\bar m - n (i.e., it has full row rank, as c_n \ne 0). Consequently, the rows of (4.9.121) form a linearly parameterized basis for the left null space of A.
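The two defining properties behind the basis (4.9.121), namely that C^* annihilates each block A_\bar m \Gamma^t of A in (4.9.111), and that b^* maps block t onto the first n rows of block t+1 as in (4.9.120), can be verified numerically. The sketch below does so for illustrative frequencies (all numerical values are assumptions chosen for the test):

```python
import numpy as np

n, m, mbar = 2, 4, 5                      # mbar >= 2n - 1, as required by (4.9.119)
gam = np.exp(1j * np.array([0.4, 1.3]))  # gamma_k = e^{i omega_k}
lam = np.exp(1j * np.array([0.7, 2.1]))  # lambda_k = e^{i omega-bar_k}, all distinct

# Block t of A in (4.9.111): A_mbar Gamma^t, with A_mbar from (4.9.106)
A_mbar = lam[np.newaxis, :] ** np.arange(mbar).reshape(-1, 1)
blocks = [A_mbar * gam[np.newaxis, :] ** t for t in range(m)]

# c from (4.9.112) (convolution of the first-order factors) and b from (4.9.117)
c = np.array([1.0 + 0j])
for lk in lam:
    c = np.convolve(c, np.array([1.0, -1.0 / lk]))
A_n = lam[np.newaxis, :] ** np.arange(n).reshape(-1, 1)
b = np.linalg.solve(A_n.T, gam)          # row vector with b^* A_n = [gamma_1..gamma_n]

Cstar = np.zeros((mbar - n, mbar), dtype=complex)
for j in range(mbar - n):
    Cstar[j, j:j + n + 1] = c
Bstar = np.zeros((n, mbar), dtype=complex)
for j in range(n):
    Bstar[j, j:j + n] = b

# (4.9.114): C^* annihilates every block; (4.9.120): B^* shifts block t to the
# first n rows of block t + 1
res_C = max(np.abs(Cstar @ blk).max() for blk in blocks)
res_B = max(np.abs(Bstar @ blocks[t] - blocks[t + 1][:n, :]).max()
            for t in range(m - 1))
```

Both residuals come out at machine precision, which is exactly what makes the rows of (4.9.121) a left annihilator of A.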
We remind the reader that, under (4.9.116), there is a one-to-one map between {\lambda_k, \gamma_k} and the basis parameters {c_k, b_k} (see (4.9.112) and (4.9.117)). Hence, we can think of estimating {c_k, b_k} in lieu of {\lambda_k, \gamma_k}, at least in a first stage, and when doing so the linear dependence of (4.9.121) on the unknown parameters comes in quite handy. As a simple example of such an estimation method based on (4.9.121), note that the modified MUSIC procedure outlined in Section 4.5 can be easily extended to the case of 2D signals making use of (4.9.121).

Compared with the basis matrix for the 1D case (see (4.9.95)), the null space basis (4.9.121) in the 2D case is apparently much more complicated. In addition, the above 2D basis result depends on the condition (4.9.116); if (4.9.116) is even approximately violated (i.e., if there exist \lambda_k and \lambda_p with k \ne p such that \lambda_k \simeq \lambda_p) then the mapping {\gamma_k} \leftrightarrow {b_k} may become ill-conditioned, which may result in a deterioration of the estimation accuracy. Finally, we remark on the fact that for damped sinusoids, the parameterization via {b_k} and {c_k} is parsimonious. However, for undamped sinusoidal signals the parameterization via {\omega_k, \bar\omega_k} contains 2n real-valued unknowns, whereas the one based on {b_k, c_k} has 4n unknowns, or 3n unknowns if a certain conjugate symmetry property of {b_k} is exploited (see, e.g., [Stoica and Sharman 1990]); hence in such a case the use of {b_k} and, in particular, {c_k} leads to an overparameterized problem, which may also result in a (slight) accuracy degradation. This criticism of the result (4.9.121) is, however, minor, and in fact (4.9.121) is the only known basis for N(A^*).

4.10 EXERCISES

Exercise 4.1: Speed Measurement by a Doppler Radar as a Frequency Determination Problem

Assume that a radar system transmits a sinusoidal signal towards an object.
For the sake of simplicity, further assume that the object moves along a trajectory parallel to the wave propagation direction, at a constant velocity v. Let \alpha e^{i\omega t} denote the signal emitted by the radar. Show that the backscattered signal, measured by the radar system after reflection off the object, is given by:

\[ s(t) = \beta e^{i(\omega - \omega_D)t} + e(t) \tag{4.10.1} \]

where e(t) is measurement noise, \omega_D is the so-called Doppler frequency,

\[ \omega_D \triangleq 2\omega v/c \]

and

\[ \beta = \mu\alpha e^{-2i\omega r/c} \]

Here c denotes the speed of wave propagation, r is the object range, and \mu is an attenuation coefficient. Conclude from (4.10.1) that the problem of speed measurement can be reduced to one of frequency determination. The latter problem can be solved by using the methods of this chapter.

Exercise 4.2: ACS of Sinusoids with Random Amplitudes or Nonuniform Phases

In some applications, it is not reasonable to assume that the amplitudes of the sinusoidal terms are fixed or that their phases are uniformly distributed. Examples are fast fading in mobile telecommunications (where the amplitudes vary) or sinusoids that have been tracked, so that their phase is random, near zero, but not uniformly distributed. We derive the ACS for such cases. Let x(t) = \alpha e^{i(\omega_0 t + \phi)}, where \alpha and \phi are statistically independent random variables and \omega_0 is a constant. Assume that \alpha has mean \bar\alpha and variance \sigma_\alpha^2.

(a) If \phi is uniformly distributed on [-\pi, \pi], find E{x(t)} and r_x(k). Show also that if \alpha is constant, the expression for r_x(k) reduces to equation (4.1.5).

(b) If \phi is not uniformly distributed on [-\pi, \pi], express E{x(t)} in terms of the probability density function p(\phi). Find sufficient conditions on p(\phi) such that x(t) is zero mean, find r_x(k) in this case, and give an example of such a p(\phi).
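To get a feel for the magnitudes involved in Exercise 4.1, the Doppler relation \omega_D = 2\omega v/c can be evaluated for a hypothetical radar; the carrier frequency and object speed below are made-up illustration values, not quantities from the exercise:

```python
c = 3.0e8           # propagation speed (m/s); free-space value
f_carrier = 10.0e9  # hypothetical 10 GHz radar carrier
v = 30.0            # hypothetical object speed, 30 m/s (~108 km/h)

# omega_D = 2 * omega * v / c, i.e., in hertz: f_D = 2 * f_carrier * v / c
f_D = 2.0 * f_carrier * v / c
print(f_D)          # 2000.0 -- a 2 kHz shift on a 10 GHz carrier
```

The shift is a tiny fraction of the carrier, which is why the problem is naturally cast as high-resolution frequency determination.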
Exercise 4.3: A Nonergodic Sinusoidal Signal

As shown in Complement 4.9.1, the signal

\[ x(t) = \alpha e^{i(\omega t + \phi)} \]

with \alpha and \omega being nonrandom constants and \phi being uniformly distributed on [0, 2\pi], is second-order ergodic in the sense that the mean and covariances determined from an (infinitely long) temporal realization of the signal coincide with the mean and covariances obtained from an ensemble of (infinitely many) realizations. In the present exercise, assume that \alpha and \omega are independent random variables, with \omega being uniformly distributed on [0, 2\pi]; the initial-phase variable \phi may be arbitrarily distributed (in particular it can be nonrandom). Show that in such a case,

\[ E\{x(t)x^*(t-k)\} = \begin{cases} E\{\alpha^2\} & \text{for } k = 0 \\ 0 & \text{for } k \ne 0 \end{cases} \tag{4.10.2} \]

Also, show that the covariances obtained by "temporal averaging" differ from those given above, and hence deduce that the signal is not ergodic. Comment on the behavior of such a signal over the ensemble of realizations and in each realization, respectively.

Exercise 4.4: AR Model-Based Frequency Estimation

Consider the following noisy sinusoidal signal:

\[ y(t) = x(t) + e(t) \]

where x(t) = \alpha e^{i(\omega_0 t + \phi)} (with \alpha > 0 and \phi uniformly distributed on [0, 2\pi]), and where e(t) is white noise with zero mean and unit variance. An AR model of order n \ge 1 is fitted to {y(t)} using the Yule-Walker or LS method. Assuming the limiting case of an infinitely long data sample, the AR coefficients are given by the solution to (3.4.4). Show that the PSD corresponding to the AR model determined from (3.4.4) has a global peak at \omega = \omega_0. Conclude that AR modeling can be used in this case to determine the sinusoidal frequency, in spite of the fact that {y(t)} does not satisfy an AR equation of finite order (in the case of multiple sinusoids, the AR frequency estimates are biased).
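The nonergodicity claimed in Exercise 4.3 is easy to see in simulation: averaging x(t)x^*(t-k) over realizations (each with its own random \omega) drives the lag-k covariance to zero, while the temporal average within any single realization has magnitude \alpha^2. A minimal Monte Carlo sketch, with arbitrary sample sizes and an arbitrarily fixed \omega for the single realization:

```python
import cmath
import random

random.seed(0)
alpha, k, M = 1.0, 3, 4000

# Ensemble average of x(t) x*(t - k) over M realizations with omega ~ U[0, 2pi):
# for this signal x(t) x*(t - k) = alpha^2 e^{i omega k}, whose mean -> 0 (4.10.2)
ens = sum(alpha**2 * cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi) * k)
          for _ in range(M)) / M

# Temporal average within ONE realization (fixed omega): alpha^2 e^{i omega k},
# whose magnitude is alpha^2, not 0 -- hence the signal is not ergodic
omega, phi, N = 1.234, 0.5, 500
x = [alpha * cmath.exp(1j * (omega * t + phi)) for t in range(N)]
r_temp = sum(x[t] * x[t - k].conjugate() for t in range(k, N)) / (N - k)
```

The ensemble average shrinks like 1/sqrt(M), whereas |r_temp| stays at \alpha^2 no matter how long the realization is.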
Regarding the estimation of the signal power, however, show that the height of the global peak of the AR spectrum does not directly provide an "estimate" of \alpha^2.

Exercise 4.5: An ARMA Model-Based Derivation of the Pisarenko Method

Let R denote the covariance matrix (4.2.7) with m = n+1, and let g be the eigenvector of R associated with its minimum eigenvalue. The Pisarenko method determines the signal frequencies by exploiting the fact that

\[ a^*(\omega)\, g = 0 \quad \text{for } \omega = \omega_k, \ k = 1, \ldots, n \tag{4.10.3} \]

(cf. (4.5.13) and (4.5.17)). Derive the property (4.10.3) directly from the ARMA model equation (4.2.3).

Exercise 4.6: Frequency Estimation when Some Frequencies are Known

Assume that y(t) is known to have p sinusoidal components at known frequencies {\tilde\omega_k}_{k=1}^p (but with unknown amplitudes and phases), and n-p other sinusoidal components whose frequencies are unknown. Develop a modification of the HOYW method to estimate the unknown frequencies from measurements {y(t)}_{t=1}^N, without estimating the known frequencies.

Exercise 4.7: A Combined HOYW-ESPRIT Method for the MA Noise Case

The HOYW method, presented in Section 4.4 for the white noise case, is based on the matrix \Gamma in (4.2.8). Let us assume that the noise sequence {e(t)} in (4.1.1) is known to be an MA process of order m, and that m is given. A simple way to handle such a colored noise in the HOYW method consists of modifying the expression (4.2.8) of \Gamma as follows:

\[ \tilde\Gamma = E\left\{ \begin{bmatrix} y(t-L-1-m) \\ \vdots \\ y(t-L-M-m) \end{bmatrix} [y^*(t), \ldots, y^*(t-L)] \right\} \tag{4.10.4} \]

Derive an expression for \tilde\Gamma similar to the one for \Gamma in (4.2.8). Furthermore, make use of that expression in an ESPRIT-like method to estimate the frequencies {\omega_k}, instead of using it in an HOYW-like method (see Section 4.4). Discuss the advantage of the so-obtained HOYW-ESPRIT method over the HOYW method based on \tilde\Gamma.
Assuming that the noise is white (i.e., m = 0) and hence that ESPRIT is directly applicable, would you prefer using HOYW-ESPRIT (with m = 0) in lieu of ESPRIT? Why or why not?

Exercise 4.8: Chebyshev Inequality and the Convergence of Sample Covariances

Let x be a random variable with finite mean \mu and variance \sigma^2. Show that, for any positive constant c, the so-called Chebyshev inequality holds:

\[ \Pr(|x - \mu| \ge c\sigma) \le 1/c^2 \tag{4.10.5} \]

Use (4.10.5) to show that if a sample covariance lag \hat r_N (estimated from N data samples) converges to the true value r in the mean square sense, i.e.,

\[ \lim_{N\to\infty} E\left\{|\hat r_N - r|^2\right\} = 0 \tag{4.10.6} \]

then \hat r_N also converges to r in probability:

\[ \lim_{N\to\infty} \Pr(|\hat r_N - r| \ne 0) = 0 \tag{4.10.7} \]

For sinusoidal signals, the mean square convergence of {\hat r_N(k)} to {r(k)}, as N \to \infty, has been proven in Complement 4.9.1. (In this exercise, we omit the argument k in \hat r_N(k) and r(k), for notational simplicity.) Additionally, discuss the use of (4.10.5) to set bounds (which hold with a specified probability) on an arbitrary random variable with given mean and variance. Comment on the conservatism of the bounds obtained from (4.10.5) by comparing them with the bounds corresponding to a Gaussian random variable.

Exercise 4.9: More about the Forward-Backward Approach

The sample covariance matrix in (4.8.3), used by the forward-backward approach, is often a better estimate of the theoretical covariance matrix than \hat R is (as argued in Section 4.8). Another advantage of (4.8.3) is that the forward-backward sample covariance matrix is always numerically better conditioned than the usual (forward-only) sample covariance matrix \hat R. To explain this statement, let R be a Hermitian matrix (not necessarily a Toeplitz one, as the R in (4.2.7)). The "condition number" of R is defined as

\[ \operatorname{cond}(R) = \lambda_{\max}(R)/\lambda_{\min}(R) \]

where \lambda_{\max}(R) and \lambda_{\min}(R) are the maximum and minimum eigenvalues of R, respectively.
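For the last part of Exercise 4.8, the conservatism of (4.10.5) can be quantified by comparing the Chebyshev bound 1/c^2 with the exact two-sided tail probability of a Gaussian variable, which is erfc(c/sqrt(2)). A quick sketch:

```python
import math

# Chebyshev bound 1/c^2 vs the exact Gaussian value of Pr(|x - mu| >= c sigma)
for c in (2.0, 3.0):
    cheb = 1.0 / c**2
    gauss = math.erfc(c / math.sqrt(2.0))
    print(c, cheb, gauss)
# c = 2: bound 0.25   vs Gaussian tail ~0.0455 (over 5x conservative)
# c = 3: bound ~0.111 vs Gaussian tail ~0.0027 (about 40x conservative)
```

The gap widens rapidly with c, which is the expected conclusion: Chebyshev bounds hold for any distribution with the given mean and variance, and pay for that generality with looseness.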
The numerical errors that affect many algebraic operations on R, such as inversion, eigendecomposition and so on, are essentially proportional to cond(R). Hence, the smaller cond(R), the better. (See Appendix A for details on this aspect.) Next, let U be a unitary matrix (the J in (4.8.3) is a special case of such a matrix). Observe that the forward-backward covariance in equation (4.8.3) is of the form R + U^*R^T U. Prove that

\[ \operatorname{cond}(R) \ge \operatorname{cond}(R + U^*R^T U) \tag{4.10.8} \]

for any unitary matrix U. We note that the result (4.10.8) applies to any Hermitian matrix R and unitary matrix U, and thus is valid in more general cases than the forward-backward approach in Section 4.8, in which R is Toeplitz and U = J.

Exercise 4.10: ESPRIT and Min-Norm Under the Same Umbrella

ESPRIT and Min-Norm methods are seemingly quite different from one another, and hence it might seem unlikely that there is any strong relationship between them. It is the goal of this exercise to show that in fact ESPRIT and Min-Norm are quite related to each other. We will see that ESPRIT and Min-Norm are members of a well-defined class of frequency estimators. Consider the equation

\[ \hat S_2^* \hat\Psi = \hat S_1^* \tag{4.10.9} \]

where \hat S_1 and \hat S_2 are as defined in Section 4.7. The (m-1) \times (m-1) matrix \hat\Psi in (4.10.9) is the unknown. First show that the asymptotic counterpart of (4.10.9),

\[ S_2^* \Psi = S_1^* \tag{4.10.10} \]

has the property that any of its solutions \Psi has n eigenvalues equal to {e^{-i\omega_k}}_{k=1}^n. This property, along with the fact that there is an infinite number of matrices \hat\Psi satisfying (4.10.9) (see Section A.8 in Appendix A), implies that (4.10.9) generates a class of frequency estimators with an infinite number of members. As a second task, show that ESPRIT and Min-Norm belong to this class of estimators.
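Before proving (4.10.8), it can be instructive to check it numerically. The sketch below uses a random positive definite Hermitian R and a random unitary U (obtained from a QR factorization); since U^*R^T U has the same eigenvalues as R, Weyl's inequalities bound the extreme eigenvalues of the sum, which is one route to the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random Hermitian positive definite R
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = B @ B.conj().T + 0.1 * np.eye(n)

# Random unitary U via QR of a random complex matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

def cond(M):
    ev = np.linalg.eigvalsh(M)       # ascending real eigenvalues (M Hermitian)
    return ev[-1] / ev[0]

S = R + Q.conj().T @ R.T @ Q         # forward-backward-type combination in (4.10.8)
ok = cond(S) <= cond(R) + 1e-9       # (4.10.8) holds
```

Repeating with other seeds (or with U = J, the exchange matrix) gives the same verdict, as the exercise asserts.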
In other words, prove that there is a solution of (4.10.9) whose nonzero eigenvalues have exactly the same arguments as the eigenvalues of the ESPRIT matrix \hat\phi in (4.7.12), and also that there is another solution of (4.10.9) whose eigenvalues are equal to the roots of the Min-Norm polynomial in (4.6.3). For more details on the topic of this exercise, see [Hua and Sarkar 1990].

Exercise 4.11: Yet Another Relationship between ESPRIT and Min-Norm

Let the vector [\hat\rho^T, 1]^T be defined similarly to the Min-Norm vector [1, \hat g^T]^T (see (4.6.1)), with the only difference that we now constrain the last element to be equal to one. Hence, \hat\rho is the minimum-norm solution to (see (4.6.5)):

\[ \hat S^* \begin{bmatrix} \hat\rho \\ 1 \end{bmatrix} = 0 \]

Use the Min-Norm vector \hat\rho to build the following matrix

\[ \tilde\phi = \hat S^* \begin{bmatrix} 0 & I_{m-1} \\ \multicolumn{2}{c}{-\hat\rho^*} \end{bmatrix} \hat S \quad (n \times n) \]

Prove the somewhat curious fact that \tilde\phi above is equal to the ESPRIT matrix, \hat\phi, in (4.7.12).

COMPUTER EXERCISES

Tools for Frequency Estimation: The text web site www.prenhall.com/stoica contains the following Matlab functions for use in computing frequency estimates and estimating the number of sinusoidal terms. In the first four functions, y is the data vector and n is the desired number of frequency estimates. The remaining variables are described below.

• w=hoyw(y,n,L,M)  The HOYW estimator given in the box on page 159; L and M are the matrix dimensions as in (4.4.8).

• w=music(y,n,m)  The Root MUSIC estimator given by (4.5.12); m is the dimension of a(\omega). This function also implements the Pisarenko method by setting m = n+1.

• w=minnorm(y,n,m)  The Root Min-Norm estimator given by (4.6.3); m is the dimension of a(\omega).

• w=esprit(y,n,m)  The ESPRIT estimator given by (4.7.12); m is the size of the square matrix \hat R there, and S_1 and S_2 are chosen as in equations (4.7.5) and (4.7.6).
• order=sinorder(mvec,sig2,N,nu)  Computes the AIC, AICc, GIC, and BIC model order selections for sinusoidal parameter estimation problems (see Appendix C for details on the derivations of these methods). Here, mvec is a vector of candidate sinusoidal model orders, sig2 is the vector of estimated residual variances corresponding to the model orders in mvec, N is the length of the observed data vector, and nu is a parameter in the GIC method. The 4-element output vector order contains the selected model orders obtained from AIC, AICc, GIC, and BIC, respectively.

Exercise C4.12: Resolution Properties of Subspace Methods for Estimation of Line Spectra

In this exercise we test and compare the resolution properties of four subspace methods: Min-Norm, MUSIC, ESPRIT, and HOYW. Generate realizations of the sinusoidal signal

\[ y(t) = 10\sin(0.24\pi t + \phi_1) + 5\sin(0.26\pi t + \phi_2) + e(t), \quad t = 1, \ldots, N \]

where N = 64, e(t) is Gaussian white noise with variance \sigma^2, and where \phi_1, \phi_2 are independent random variables each uniformly distributed on [-\pi, \pi]. Generate 50 Monte Carlo realizations of y(t), and present the results from these experiments. The results of frequency estimation can be presented by comparing the sample means and variances of the frequency estimates from the various estimators.

(a) Find the exact ACS for y(t). Compute the "true" frequency estimates from the four methods, for n = 4 and various choices of the order m \ge 5 (and corresponding choices of M and L for HOYW). Which method(s) are able to resolve the two sinusoids, and for what values of m (or M and L)?

(b) Consider now N = 64, and set \sigma^2 = 0; this corresponds to the finite data length but infinite SNR case. Compute frequency estimates for the four techniques again using n = 4 and various choices of m, M and L. Which method(s) are reliably able to resolve the sinusoids? Explain why.
(c) Obtain frequency estimates from the four methods when N = 64 and \sigma^2 = 1. Use n = 4, and experiment with different choices of m, M and L to see the effect on estimation accuracy (e.g., try m = 5, 8, and 12 for MUSIC, Min-Norm and ESPRIT, and try L = M = 4, 8, and 12 for HOYW). Which method(s) give reliable "super-resolution" estimation of the sinusoids? Is it possible to resolve the two sinusoids in the signal? Discuss how the choices of m, M and L influence the resolution properties. Which method appears to have the best resolution? You may want to experiment further by changing the SNR and the relative amplitudes of the sinusoids to gain a better understanding of the differences between the methods.

(d) Compare the estimation results with the AR and ARMA results obtained in Exercise C3.18 in Chapter 3. What are the major differences between the techniques? Which method(s) do you prefer for this problem?

Exercise C4.13: Model Order Selection for Sinusoidal Signals

In this exercise we examine four methods for model order selection for sinusoidal signals. As discussed in Appendix C, several important model order selection rules have the following general form (see (C.8.1)-(C.8.2)):

\[ -2\ln p_n(y, \hat\theta_n) + \eta(r, N)\, r \tag{4.10.11} \]

with different penalty coefficients \eta(r, N) for the different methods:

AIC:  \eta(r, N) = 2
AICc: \eta(r, N) = 2N/(N - r - 1)
GIC:  \eta(r, N) = \nu  (e.g., \nu = 4)
BIC:  \eta(r, N) = \ln N
(4.10.12)

Here, N is the length of the observed data vector y, and for sinusoidal signals r is given by (see Appendix C):

r = 3n + 1  for AIC, AICc, and GIC
r = 5n + 1  for BIC

where n is the number of sinusoids in the model. The term \ln p_n(y, \hat\theta_n) is the log-likelihood of the observed data vector y given the maximum-likelihood (ML) estimate of the parameter vector \theta for a model order of n; it is given by (cf.
(C.2.7)-(C.2.8) in Appendix C):

\[ -2\ln p_n(y, \hat\theta_n) = N\ln\hat\sigma_n^2 + \text{constant} \tag{4.10.13} \]

where

\[ \hat\sigma_n^2 = \frac{1}{N}\sum_{t=1}^N \left| y(t) - \sum_{k=1}^n \hat\alpha_k e^{i(\hat\omega_k t + \hat\phi_k)} \right|^2 \tag{4.10.14} \]

The selected model order is the value of n that minimizes (4.10.11). The order selection rules above, while derived for ML estimates of \theta, can be used even with approximate ML estimates of \theta, albeit with some loss of performance.

Well-Separated Sinusoids:

(a) Generate 100 realizations of

\[ y(t) = 10\sin[2\pi f_0 t + \phi_1] + 5\sin[2\pi(f_0 + \Delta f)t + \phi_2] + e(t), \quad t = 1, \ldots, N \]

for f_0 = 0.24, \Delta f = 3/N, and N = 128. Here, e(t) is real-valued white noise with variance \sigma^2. For each realization, generate \phi_1 and \phi_2 as random variables uniformly distributed on [0, 2\pi].

(b) Set \sigma^2 = 10. For each realization, estimate the frequencies of n = 1, \ldots, 10 real-valued sinusoidal components using ESPRIT, and estimate the amplitudes and phases using the second equation in (4.3.8), where \hat\omega is the vector of ESPRIT frequency estimates. Note that you will need to use two complex exponentials to model each real-valued sinusoid, so the number of frequencies to estimate with ESPRIT will be 2, 4, \ldots, 20; however, the frequency estimates will be in symmetric pairs. Use m = 40 as the covariance matrix size in ESPRIT.

(c) Find the model orders that minimize AIC, AICc, GIC (with \nu = 4), and BIC. For each of the four order selection methods, plot a histogram of the selected orders for the 100 realizations. Comment on their relative performance.

(d) Repeat the above experiment using \sigma^2 = 1 and \sigma^2 = 0.1, and comment on the performance of the order selection methods as a function of SNR.

Closely-Spaced Sinusoids: Generate 100 realizations of y(t) as above, but this time using \Delta f = 0.5/N. Repeat the experiments above. In addition, compare the relative performance of the order selection methods for well-separated versus closely-spaced sinusoidal signals.
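A sinorder-like selection rule is straightforward to code from the general form N ln(sigma_hat^2) + eta(r, N) r with the penalties and parameter counts listed above. The sketch below is a hypothetical Python analog (not the text-web-site Matlab function), applied to made-up residual variances that drop sharply at n = 2 and then plateau:

```python
import math

def select_orders(mvec, sig2, N, nu=4.0):
    """Return the [AIC, AICc, GIC, BIC] order picks per (4.10.11)-(4.10.12)."""
    best = []
    for name in ("AIC", "AICc", "GIC", "BIC"):
        crits = []
        for n, s2 in zip(mvec, sig2):
            r = 5 * n + 1 if name == "BIC" else 3 * n + 1
            if name == "AIC":
                eta = 2.0
            elif name == "AICc":
                eta = 2.0 * N / (N - r - 1)
            elif name == "GIC":
                eta = nu
            else:                      # BIC
                eta = math.log(N)
            crits.append(N * math.log(s2) + eta * r)
        best.append(mvec[crits.index(min(crits))])
    return best

# Illustrative residual variances: a large drop at n = 2, then a plateau
orders = select_orders([1, 2, 3, 4, 5], [5.0, 1.0, 0.98, 0.96, 0.95], N=128)
print(orders)   # [2, 2, 2, 2]: all four rules pick n = 2 here
```

With noisier, slowly decreasing variance profiles the four rules start to disagree (AIC tending to overfit, BIC to be the most parsimonious), which is exactly the behavior the histograms in parts (c)-(d) are meant to expose.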
Exercise C4.14: Line Spectral Methods Applied to Measured Data

Apply the Min-Norm, MUSIC, ESPRIT, and HOYW frequency estimators to the data in the files sunspotdata.mat and lynxdata.mat (use both the original lynx data and the logarithmically transformed data as in Exercise C2.23). These files can be obtained from the text web site www.prenhall.com/stoica. Try to answer the following questions:

(a) Is the sinusoidal model appropriate for the data sets under study?

(b) Suggest how to choose the number of sinusoids in the model (see Exercise C4.13).

(c) What periodicities can you find in the two data sets? Compare the results you obtain here to the AR(MA) and nonparametric spectral estimation results you obtained in Exercises C2.23 and C3.20.

C H A P T E R 5
Filter Bank Methods

5.1 INTRODUCTION

The problem of estimating the PSD function \phi(\omega) of a signal from a finite number of observations N is ill posed from a statistical standpoint, unless we make some appropriate assumptions on \phi(\omega). More precisely, without any assumption on the PSD we are required to estimate an infinite number of independent values {\phi(\omega)}_{\omega=-\pi}^{\pi} from a finite number of samples. Evidently, we cannot do that in a consistent manner. In order to overcome this problem, we can either

Parameterize {\phi(\omega)} by means of a finite-dimensional model  (5.1.1)

or

Smooth the set {\phi(\omega)}_{\omega=-\pi}^{\pi} by assuming that \phi(\omega) is constant (or nearly constant) over the band [\omega - \beta\pi, \omega + \beta\pi], for some given \beta \ll 1.  (5.1.2)

The approach based on (5.1.1) leads to the parametric spectral methods of Chapters 3 and 4, for which the estimation of {\phi(\omega)} is reduced to the problem of estimating a number of parameters that is usually much smaller than the data length N. The other approach to PSD estimation, (5.1.2), leads to the methods to be described in this chapter. The nonparametric methods of Chapter 2 are also (implicitly) based on (5.1.2), as shown in Section 5.2.
The approach (5.1.2) should, of course, be used for PSD estimation when we do not have enough information about the studied signal to be able to describe it (and its PSD) by a simple model (such as the ARMA equation in Chapter 3 or the equation of superimposed sinusoidal signals in Chapter 4). On one hand, this implies that the methods derived from (5.1.2) can be used in cases where those based on (5.1.1) cannot.¹ On the other hand, we should expect to pay some price in using (5.1.2) over (5.1.1). Under the assumption in (5.1.2), \phi(\omega) is described by 2\pi/2\pi\beta = 1/\beta values. In order to estimate these values from the available data in a consistent manner, we must require that

1/\beta < N, or N\beta > 1  (5.1.3)

As \beta increases, the achievable statistical accuracy of the estimates of {\phi(\omega)} should increase (because the number of PSD values estimated from the given N data samples decreases), but the resolution decreases (because \phi(\omega) is assumed to be constant on a larger interval). This tradeoff between statistical variability and resolution is the price paid for the generality of the methods derived from (5.1.2). We already met this tradeoff in our discussion of the periodogram-based methods in Chapter 2.

¹ This statement should be interpreted with some care. One can certainly use, for instance, an ARMA spectral model even if one does not know that the studied signal is really an ARMA signal. However, in such a case one does not only have to estimate the model parameters but must also face the rather difficult task of determining the structure of the parametric model used (for example, the orders of the ARMA model). The nonparametric approach to PSD estimation does not require any structure determination step.
Note from (5.1.3) that the resolution threshold \beta of the methods based on (5.1.2) can be lowered down to 1/N only if we are going to accept a significant statistical variability for our spectral estimates (because for \beta = 1/N we will have to estimate N spectral values from the available N data samples). The parametric (or model-based) approach embodied in (5.1.1) describes the PSD by a number of parameters that is often much smaller than N, and yet it may achieve better resolution (i.e., a resolution threshold less than 1/N) compared to the approach derived from (5.1.2).

When taking the approach (5.1.2) to PSD estimation, we are basically following the "definition" (1.1.1) of the spectral estimation problem, which we restate here (in abbreviated form) for easy reference:

From a finite-length data sequence, estimate how the power is distributed over narrow spectral bands.  (5.1.4)

There is an implicit assumption in (5.1.4) that the power is (nearly) constant over "narrow spectral bands", which is a restatement of (5.1.2).

The most natural implementation of the approach to spectral estimation resulting from (5.1.2) and (5.1.4) is depicted in Figure 5.1. The bandpass filter in this figure, which sweeps through the frequency interval of interest, can be viewed as a bank of (bandpass) filters. This observation motivates the name of filter bank approach given to the PSD estimation scheme sketched in Figure 5.1. Depending on the bandpass filter chosen, we may obtain various filter bank methods of spectral estimation. Even for a given bandpass filter, we may implement the scheme of Figure 5.1 in different ways, which leads to an even richer class of methods. Examples of bandpass filters that can be used in the scheme of Figure 5.1, as well as specific ways in which they may be implemented, are given in the remainder of this chapter. First, however, we discuss a few more aspects regarding the scheme in Figure 5.1.

As a mathematical motivation of the filter bank approach (FBA) to spectral
As a mathematical motivation of the filter bank approach (FBA) to spectral estimation, we prove the following result.

Figure 5.1. The filter bank approach to PSD estimation.

Assume that: (i) φ(ω) is (nearly) constant over the filter passband; (ii) the filter gain is (nearly) one over the passband and (nearly) zero outside the passband; and (iii) the power of the filtered signal is consistently estimated. Then the PSD estimate φ̂_FB(ω) obtained with the filter bank approach is a good approximation of φ(ω). (5.1.5)

Let H(ω) denote the transfer function of the bandpass filter, and let 2πβ denote its bandwidth. Then, by using the formula (1.4.9) and the assumptions (iii), (ii) and (i) (in that order), we can write

$$\hat\phi_{FB}(\omega) \simeq \frac{1}{2\pi\beta}\int_{-\pi}^{\pi}|H(\psi)|^2\,\phi(\psi)\,d\psi \simeq \frac{1}{2\pi\beta}\int_{\omega-\beta\pi}^{\omega+\beta\pi}\phi(\psi)\,d\psi \simeq \frac{1}{2\pi\beta}\,2\pi\beta\,\phi(\omega) = \phi(\omega) \tag{5.1.6}$$

where ω denotes the center frequency of the bandpass filter. This is the result which we set out to prove.

If all three assumptions in (5.1.5) could be satisfied, then the FBA methods would produce spectral estimates with high resolution and low statistical variability. Unfortunately, these assumptions contain conflicting requirements that cannot be met simultaneously. In high-resolution applications, assumption (i) can be satisfied if we use a filter with a very sharp passband. According to the time-bandwidth product result (2.6.5), such a filter has a very long impulse response. This implies that we may be able to get only a few samples of the filtered signal (sometimes only one sample; see Section 5.2!). Hence, assumption (iii) cannot be met.
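The chain of approximations in (5.1.6) is easy to check numerically. The sketch below is our own illustration, not from the text: it assumes an idealized bandpass filter with unit gain on [ω − βπ, ω + βπ] and an arbitrarily chosen smooth AR(1)-shaped PSD, so only assumptions (i) and (ii) are exercised.

```python
import numpy as np

# Assumed smooth PSD: an AR(1) shape phi(w) = 1/|1 - a e^{-iw}|^2 with a = 0.5.
phi = lambda w: 1.0 / np.abs(1.0 - 0.5 * np.exp(-1j * w))**2

w0 = 1.0       # center frequency of the (ideal) bandpass filter
beta = 0.01    # bandwidth 2*pi*beta, narrow enough for assumption (i)

# (1/(2*pi*beta)) * integral of |H|^2 phi over the passband, with |H| = 1 there;
# on a uniform grid this normalized integral is just the mean of phi:
psi = np.linspace(w0 - beta * np.pi, w0 + beta * np.pi, 2001)
phi_fb = phi(psi).mean()

print(phi_fb, phi(w0))   # the two values agree closely, as (5.1.6) predicts
```

Shrinking `beta` further tightens the agreement, at the cost (in a real estimator) of having fewer filtered samples for the power calculation, which is exactly the conflict discussed above.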
In order to satisfy (iii), we need to average many samples of the filtered signal and, therefore, should consider a bandpass filter with a relatively short impulse response and hence a not too narrow passband. Assumption (i) may then be violated or, in other words, the resolution may be sacrificed. The above discussion has brought to light, once more, the compromise between resolution and statistical variability, and the fact that the resolution is limited by the sample length. These are the critical issues for any PSD estimation method based on the approach (5.1.2), such as those of Chapter 2 and the ones discussed in the following sections. These two issues will always surface within the nonparametric approach to spectral estimation, in many different ways depending on the specific method at hand.

5.2 FILTER BANK INTERPRETATION OF THE PERIODOGRAM

The value of the basic periodogram estimator (2.2.1) at a given frequency, say ω̃, can be expressed as

$$\hat\phi_p(\tilde\omega) = \frac{1}{N}\left|\sum_{t=1}^{N} y(t)e^{-i\tilde\omega t}\right|^2 = \frac{1}{N}\left|\sum_{t=1}^{N} y(t)e^{i\tilde\omega(N-t)}\right|^2 = \frac{1}{\beta}\left|\sum_{k=0}^{N-1} h_k\,y(N-k)\right|^2 \tag{5.2.1}$$

where β = 1/N and

$$h_k = \frac{1}{N}\,e^{i\tilde\omega k}, \qquad k = 0,\ldots,N-1 \tag{5.2.2}$$

The truncated convolution sum that appears in (5.2.1) can be written as the usual convolution sum associated with a linear causal system, if the weighting sequence in (5.2.2) is padded with zeroes:

$$y_F(N) = \sum_{k=0}^{\infty} h_k\,y(N-k) \tag{5.2.3}$$

with

$$h_k = \begin{cases} e^{i\tilde\omega k}/N & \text{for } k = 0,\ldots,N-1 \\ 0 & \text{otherwise} \end{cases} \tag{5.2.4}$$

The transfer function (or frequency response) of the linear filter corresponding to {h_k} in (5.2.4) is readily evaluated:

$$H(\omega) = \sum_{k=0}^{\infty} h_k e^{-i\omega k} = \frac{1}{N}\sum_{k=0}^{N-1} e^{i(\tilde\omega-\omega)k} = \frac{1}{N}\cdot\frac{e^{iN(\tilde\omega-\omega)}-1}{e^{i(\tilde\omega-\omega)}-1}$$
which gives

$$H(\omega) = \frac{1}{N}\,\frac{\sin[N(\tilde\omega-\omega)/2]}{\sin[(\tilde\omega-\omega)/2]}\,e^{i(N-1)(\tilde\omega-\omega)/2} \tag{5.2.5}$$

Figure 5.2 shows |H(ω)| as a function of ∆ω = ω̃ − ω, for N = 50. It can be seen that H(ω) in (5.2.5) is the transfer function of a bandpass filter with center frequency equal to ω̃. The 3 dB bandwidth of this filter can be shown to be approximately 2π/N radians per sampling interval, or 1/N cycles per sampling interval. In fact, by comparing (5.2.5) to (2.4.17) we see that H(ω) resembles the DTFT of the rectangular window, the only differences being the phase term (due to the time offset) and the window lengths ((2N − 1) in (2.4.17) versus N in (5.2.5)).

Figure 5.2. The magnitude (in dB) of the frequency response of the bandpass filter H(ω) in (5.2.5), associated with the periodogram (N = 50), plotted as a function of the angular frequency (ω̃ − ω).

Thus, we have proven the following filter bank interpretation of the basic periodogram: the periodogram φ̂_p(ω) can be exactly obtained by the FBA in Figure 5.1, where the bandpass filter's frequency response is given by (5.2.5), its bandwidth is 1/N cycles per sampling interval, and the power calculation is done from a single sample of the filtered signal. (5.2.6)

This interpretation of φ̂_p(ω) highlights a conclusion that is reached, in a different way, in Chapter 2: the unmodified periodogram sacrifices statistical accuracy for resolution. Indeed, φ̂_p(ω) uses a bandpass filter with the smallest bandwidth afforded by a time aperture of length N. In this way, it achieves a good resolution (see assumption (i) in (5.1.5)). The consequence of doing so is that only one (filtered) data sample is obtained for the power calculation stage, which explains the erratic fluctuations of φ̂_p(ω) (owing to violation of assumption (iii) in (5.1.5)).

Figure 5.3. The relationship between the PSDs of the original signal y(t) and the demodulated signal ỹ(t).
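The filter (5.2.2) and its response (5.2.5) can be verified directly. The snippet below (our own illustration, with an arbitrarily chosen center frequency) confirms the unit gain at ω̃ and the first null of the Dirichlet-kernel response at a distance 2π/N.

```python
import numpy as np

N = 50
w_c = 1.0                                   # arbitrary center frequency (w-tilde)
k = np.arange(N)
h = np.exp(1j * w_c * k) / N                # the periodogram's implicit filter (5.2.2)

def H(w):
    """Frequency response H(w) = sum_k h_k e^{-iwk}, cf. (5.2.5)."""
    return np.sum(h * np.exp(-1j * w * k))

print(abs(H(w_c)))                  # gain 1 at the center frequency
print(abs(H(w_c + 2 * np.pi / N)))  # first null, 2*pi/N away (0 up to rounding)
```

The mainlobe between the nulls at ±2π/N is what gives the quoted 3 dB bandwidth of roughly 2π/N radians per sampling interval.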
As explained in Chapter 2, the modified periodogram methods (Bartlett, Welch and Daniell) reduce the variance of the periodogram at the expense of increasing the bias (or, equivalently, worsening the resolution). The FBA interpretation of these modified methods provides an interesting explanation of their behavior. In the filter bank context, the basic idea behind all of these modified periodograms is to improve the power calculation stage, which is done so poorly within the unmodified periodogram. The Bartlett and Welch methods split the available sample into several stretches, which are separately (bandpass) filtered. In principle, the larger the number of stretches, the more samples are averaged in the power calculation stage and the smaller the variance of the estimated PSD, but the worse the resolution (owing to the inability to design an appropriately narrow bandpass filter for a small-aperture stretch).

The Daniell method, on the other hand, does not split the sample of observations but processes it as a whole. This method improves the "power calculation" in a different way. For each value of φ(ω) to be estimated, a number of different bandpass filters are employed, each with center frequency near ω. Each bandpass filter yields only one sample of the filtered signal, but as there are several bandpass filters we may get enough information for the power calculation stage. As the number of filters used increases, the variance of the estimated PSD decreases but the resolution becomes worse (since φ(ω) is implicitly assumed to be constant over a wider and wider frequency interval centered on the current ω and approximately equal to the union of the filters' passbands).

5.3 REFINED FILTER BANK METHOD

The bandpass filter used in the periodogram is nothing but one of many possible choices. Since the periodogram was not designed as a filter bank method, we may wonder whether we could not find other, better choices of the bandpass filter.
In this section, we present a refined filter bank (RFB) approach to spectral estimation. This approach was introduced in [Thomson 1982] and was further developed in [Mullis and Scharf 1991] (more recent references on this approach include [Bronez 1992; Onn and Steinhardt 1993; Riedel and Sidorenko 1995]).

For the discussion that follows, it is convenient to use a baseband filter in the filter bank approach of Figure 5.1, in lieu of the bandpass filter. Let H_BF(ω) denote the frequency response of the bandpass filter with center frequency ω̃ (say), and let the baseband filter be defined by

$$H(\omega) = H_{BF}(\omega + \tilde\omega) \tag{5.3.1}$$

(the center frequency of H(ω) is equal to zero). If the input to the FBA scheme is also modified in the following way,

$$y(t) \;\longrightarrow\; \tilde y(t) = e^{-i\tilde\omega t}\,y(t) \tag{5.3.2}$$

then, according to the complex (de)modulation formula (1.4.11), the output of the scheme is left unchanged by the translation in (5.3.1) of the passband down to baseband. To help interpret the transformations above, Figure 5.3 depicts the type of PSD translation implied by the demodulation process in (5.3.2). It is clearly seen from this figure that the problem of isolating the band around ω̃ by bandpass filtering becomes one of baseband filtering. The modified FBA scheme is shown in Figure 5.4. The baseband filter design problem is the subject of the next subsection.

Figure 5.4. The modified filter bank approach to PSD estimation.

5.3.1 Slepian Baseband Filters

In the following, we address the problem of designing a finite impulse response (FIR) baseband filter which passes the baseband

$$[-\beta\pi,\ \beta\pi] \tag{5.3.3}$$

as undistorted as possible, and which attenuates the frequencies outside the baseband as much as possible. Let

$$h = [h_0 \ \ldots \ h_{N-1}]^* \tag{5.3.4}$$

denote the impulse response of such a filter, and let

$$H(\omega) = \sum_{k=0}^{N-1} h_k e^{-i\omega k} = h^* a(\omega)$$

(where a(ω) = [1 e^{−iω} ... e^{−i(N−1)ω}]^T) be the corresponding frequency response. The two design objectives can be turned into mathematical specifications in the following way. Let the input to the filter be white noise of unit variance. Then the power of the output is

$$\frac{1}{2\pi}\int_{-\pi}^{\pi}|H(\omega)|^2\,d\omega = \sum_{k=0}^{N-1}\sum_{p=0}^{N-1} h_k h_p^*\left[\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega(p-k)}\,d\omega\right] = \sum_{k=0}^{N-1}\sum_{p=0}^{N-1} h_k h_p^*\,\delta_{k,p} = h^*h \tag{5.3.5}$$

We note in passing that equation (5.3.5) can be recognized as Parseval's theorem (1.2.6). The part of the total power (5.3.5) that lies in the baseband is given by

$$\frac{1}{2\pi}\int_{-\beta\pi}^{\beta\pi}|H(\omega)|^2\,d\omega = h^*\left\{\frac{1}{2\pi}\int_{-\beta\pi}^{\beta\pi} a(\omega)a^*(\omega)\,d\omega\right\}h \triangleq h^*\Gamma h \tag{5.3.6}$$

The (k, p) element of the N × N matrix Γ defined in (5.3.6) is given by

$$\Gamma_{k,p} = \frac{1}{2\pi}\int_{-\beta\pi}^{\beta\pi} e^{-i(k-p)\omega}\,d\omega = \frac{\sin[(k-p)\beta\pi]}{(k-p)\pi} \tag{5.3.7}$$

which, using the sinc function, can be written as

$$\Gamma_{k,p} = \beta\,\mathrm{sinc}[(k-p)\beta\pi] \triangleq \gamma_{|k-p|} \tag{5.3.8}$$

Note that the matrix Γ is symmetric and Toeplitz. Also note that this matrix has already been encountered in the window design example in Section 2.6.3. In fact, as we will shortly see, the window design strategy in that example is quite similar to the baseband filter design method employed here.
Since the filter h must be such that the power of the filtered signal in the baseband is as large as possible relative to the total power, we are led to the following optimization problem:

$$\max_h\ h^*\Gamma h \quad\text{subject to}\quad h^*h = 1 \tag{5.3.9}$$

The solution to this problem is given in Result R13 in Appendix A: the maximizing h is equal to the eigenvector of Γ corresponding to its maximum eigenvalue. Hence, we have proven the following result: the impulse response h of the "most selective" baseband filter (according to the design objectives in (5.3.9)) is given by the dominant eigenvector of Γ, and is called the first Slepian sequence. (5.3.10)

The matrix Γ played a key role in the foregoing derivation. In what follows, we look in more detail at the eigenstructure of Γ. In particular, we provide an intuitive explanation as to why the first dominant eigenvector of Γ behaves like a baseband filter. We also show that, depending on the relation between β and N, the next dominant eigenvectors of Γ might also be used as baseband filters. Our discussion of these aspects will be partly heuristic. Note that the eigenvectors of Γ are called the Slepian sequences [Slepian 1964] (as already indicated in (5.3.10)). We denote these eigenvectors by {s_k}, k = 1, ..., N.

Remark: The Slepian sequences should not be computed by the eigendecomposition of Γ. Numerically more efficient and reliable ways of computing these sequences exist (see, e.g., [Slepian 1964]), for instance as solutions to certain differential equations or as eigenvectors of certain tridiagonal matrices. ■

The theoretical eigenanalysis of Γ is a difficult problem in the case of finite N. (Of course, the eigenvectors and eigenvalues of Γ may always be computed, for given β and N; here we are interested in establishing theoretical expressions for Γ's eigenelements.)
For N sufficiently large, however, "reasonable approximations" to the eigenelements of Γ can be derived. Let a(ω) be defined as before:

$$a(\omega) = [1 \ e^{-i\omega} \ \ldots \ e^{-i(N-1)\omega}]^T \tag{5.3.11}$$

Assume that β is chosen larger than 1/N, and define

$$K = N\beta \ge 1 \tag{5.3.12}$$

(To simplify the discussion, K and N are assumed to be even integers in what follows.) With these preparations, and assuming that N is large, we can approximate the integral in (5.3.6) by a Riemann sum on the grid ω_p = 2πp/N and write Γ as

$$\Gamma \simeq \frac{1}{2\pi}\sum_{p=-K/2}^{K/2-1} a\!\left(\tfrac{2\pi}{N}p\right) a^*\!\left(\tfrac{2\pi}{N}p\right)\frac{2\pi}{N} = \frac{1}{N}\sum_{p=-K/2}^{K/2-1} a\!\left(\tfrac{2\pi}{N}p\right) a^*\!\left(\tfrac{2\pi}{N}p\right) \triangleq \Gamma_0 \tag{5.3.13}$$

The vectors {a(2πp/N)/√N}, p = −N/2 + 1, ..., N/2, part of which appear in (5.3.13), can readily be shown to form an orthonormal set:

$$\frac{1}{N}\,a^*\!\left(\tfrac{2\pi}{N}p\right) a\!\left(\tfrac{2\pi}{N}s\right) = \frac{1}{N}\sum_{k=0}^{N-1} e^{i\frac{2\pi}{N}(p-s)k} = \begin{cases} \dfrac{1}{N}\,\dfrac{e^{i2\pi(p-s)}-1}{e^{i\frac{2\pi}{N}(p-s)}-1} = 0, & s \ne p \\[2mm] 1, & s = p \end{cases} \tag{5.3.14}$$

The eigenvectors of the matrix Γ_0 on the right-hand side of (5.3.13) are therefore given by {a(2πp/N)/√N}, p = −N/2 + 1, ..., N/2, with eigenvalues of 1 (with multiplicity K) and 0 (with multiplicity N − K). The eigenvectors corresponding to the eigenvalues equal to one are the vectors a(2πp/N)/√N for the values of p appearing in the sum (5.3.13). By paralleling the calculations in (5.2.3)–(5.2.5), it is not hard to show that each of these dominant eigenvectors of Γ_0 is the impulse response of a narrow bandpass filter with bandwidth equal to about 1/N and center frequency 2πp/N; the set of these filters therefore covers the interval [−βπ, βπ]. Now, the elements of Γ approach those of Γ_0 as N increases; more precisely, |[Γ]_{i,j} − [Γ_0]_{i,j}| = O(1/N) for sufficiently large N. However, this does not mean that ∥Γ − Γ_0∥ → 0 as N → ∞ for any reasonable matrix norm, because Γ and Γ_0 are N × N matrices. Consequently, the eigenelements of Γ do not necessarily converge to those of Γ_0 as N → ∞. However, based on the previous analysis, we can at least expect that the eigenelements of Γ are not "too different" from those of Γ_0.
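The eigenvalue clustering suggested by this analysis is easy to observe numerically. The following sketch (with arbitrarily chosen N and β, and using a plain eigendecomposition despite the numerical caveat in the earlier remark) builds Γ from (5.3.8) and counts the eigenvalues near one:

```python
import numpy as np
from scipy.linalg import toeplitz, eigh

N = 128
K = 8
beta = K / N                                  # time-bandwidth product K = N*beta
m = np.arange(N)
# Gamma[k,p] = sin((k-p)*beta*pi)/((k-p)*pi); note numpy's sinc(x) = sin(pi x)/(pi x)
Gamma = toeplitz(beta * np.sinc(m * beta))
lam = eigh(Gamma, eigvals_only=True)[::-1]    # eigenvalues, in descending order

print(np.sum(lam > 0.5))    # about K eigenvalues are close to one
print(lam[0], lam[N - 1])   # largest is very near 1, smallest essentially 0
```

This is the behavior stated more precisely in the summary below: roughly K = Nβ eigenvalues near one, the remaining N − K near zero, with a narrow transition region in between.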
This observation from the theoretical analysis, backed up with empirical evidence from the computation of the eigenelements of Γ in specific cases, leads us to the following conclusion: the matrix Γ has K eigenvalues close to one and (N − K) eigenvalues close to zero, provided N is large enough, where K is given by the "time-bandwidth" product (5.3.12). The dominant eigenvectors corresponding to the K largest eigenvalues form a set of orthogonal impulse responses of K bandpass filters that approximately cover the baseband [−βπ, βπ]. (5.3.15)

As we argue in the next subsections, in some situations (specified there) we may want to use the whole set of K Slepian baseband filters, not only the dominant Slepian filter in this set.

5.3.2 RFB Method for High-Resolution Spectral Analysis

Assume that the spectral analysis problem dealt with is one in which it is important to achieve the maximum resolution afforded by the approach at hand (such a problem appears, for instance, in the case of PSDs with closely spaced peaks). Then we set

$$\beta = 1/N \quad\Longleftrightarrow\quad K = 1 \tag{5.3.16}$$

(Note that we cannot set β to a value less than 1/N, since that choice would lead to K < 1, which is meaningless; the fact that we must choose β ≥ 1/N is one of the many facets of the 1/N resolution limit of nonparametric spectral estimation.) Since K = 1, we can only use the first Slepian sequence as a bandpass filter:

$$h = s_1 \tag{5.3.17}$$

The way in which the RFB scheme based on (5.3.17) works is described in the following.
First, note from (5.3.5), (5.3.9) and (5.3.16) that

$$1 = h^*h = \frac{1}{2\pi}\int_{-\pi}^{\pi}|H(\omega)|^2\,d\omega \simeq \frac{1}{2\pi}\int_{-\beta\pi}^{\beta\pi}|H(\omega)|^2\,d\omega \simeq \beta\,|H(0)|^2 = \frac{1}{N}|H(0)|^2 \tag{5.3.18}$$

Hence, under the (idealizing) assumption that H(ω) is different from zero only in the baseband, where it takes a constant value, we have

$$|H(0)|^2 \simeq N \tag{5.3.19}$$

Next, consider the sample at the filter's output obtained by the convolution of the whole input sequence {ỹ(t)}, t = 1, ..., N, with the filter impulse response {h_k}:

$$x \triangleq \sum_{k=0}^{N-1} h_k\,\tilde y(N-k) = \sum_{t=1}^{N} h_{N-t}\,\tilde y(t) \tag{5.3.20}$$

The power of x should be approximately equal to the PSD value φ(ω̃), which is confirmed by the following calculation:

$$E\{|x|^2\} = \frac{1}{2\pi}\int_{-\pi}^{\pi}|H(\omega)|^2\,\phi_{\tilde y}(\omega)\,d\omega \simeq \frac{N}{2\pi}\int_{-\beta\pi}^{\beta\pi}\phi_{\tilde y}(\omega)\,d\omega = \frac{N}{2\pi}\int_{-\beta\pi}^{\beta\pi}\phi_y(\omega+\tilde\omega)\,d\omega \simeq \frac{N}{2\pi}\,\phi_y(\tilde\omega)\cdot 2\pi\beta = N\beta\,\phi_y(\tilde\omega) = \phi_y(\tilde\omega) \tag{5.3.21}$$

The second "equality" above follows from the properties of H(ω) (see also (5.3.19)), the third from the complex demodulation formula (1.4.11), and the fourth from the assumption that φ_y(ω) is nearly constant over the passband considered. In view of (5.3.21), the PSD estimation problem reduces to estimating the power of the filtered signal. Since only one sample, x, of that signal is available, the obvious estimate of the signal power is |x|². This leads to the following estimate of φ(ω):

$$\hat\phi(\omega) = \left|\sum_{t=1}^{N} h_{N-t}\,y(t)\,e^{-i\omega t}\right|^2 \tag{5.3.22}$$

where {h_k} is given by the first Slepian sequence (see (5.3.17)). The reason we did not divide (5.3.22) by the filter bandwidth is that |H(0)|² ≃ N by (5.3.19), which differs from assumption (ii) in (5.1.5). The spectral estimate (5.3.22) is recognized to be a windowed periodogram with temporal window {h_{N−k}}. For large values of N, it follows from the analysis in the previous section that h can be expected to be reasonably close to the vector [1 ... 1]^T/√N.
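As a concrete sketch of (5.3.22), consider our own example below (not from the text): an assumed noisy complex sinusoid, β = 1/N, and the first Slepian sequence obtained by brute-force eigendecomposition of Γ.

```python
import numpy as np
from scipy.linalg import toeplitz, eigh

N = 128
t = np.arange(1, N + 1)
rng = np.random.default_rng(0)
# assumed test signal: one complex sinusoid at frequency 0.8 plus weak noise
y = np.exp(1j * 0.8 * t) + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

beta = 1.0 / N                               # maximum resolution, K = 1
Gamma = toeplitz(beta * np.sinc(np.arange(N) * beta))
h = eigh(Gamma)[1][:, -1]                    # first Slepian sequence, h*h = 1

# phi_hat(w) = |sum_t h_{N-t} y(t) e^{-iwt}|^2, cf. (5.3.22); h reversed gives h_{N-t}
grid = np.linspace(0, np.pi, 1024)
phi_hat = np.abs(np.exp(-1j * np.outer(grid, t)) @ (h[::-1] * y))**2

print(grid[np.argmax(phi_hat)])              # peak close to the true frequency 0.8
```

The estimate is a Slepian-windowed periodogram, so the peak location is resolved to roughly 1/N, consistent with the high-resolution design goal.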
When inserting the latter vector in (5.3.22), we get the unwindowed periodogram. Hence, we reach the conclusion that for N large enough, the RFB estimate (5.3.22) behaves not too differently from the unmodified periodogram (which is quite natural in view of the fact that we wanted a high-resolution spectral estimator, and the basic periodogram is known to be such an estimator).

Remark: We warn the reader, once again, that the above discussion is heuristic. As explained before (see the discussion related to (5.3.15)), as N increases, {h_k} may be expected to be "reasonably close" to, but not necessarily converge to, 1/√N. In addition, even if {h_k} in (5.3.22) converges to 1/√N as N → ∞, the function in (5.3.22) may not converge to φ̂_p(ω) if the convergence rate of {h_k} is too slow (note that the number of coefficients {h_k} in (5.3.22) is equal to N). Hence, φ̂(ω) in (5.3.22) and the periodogram φ̂_p(ω) may differ from one another even for large values of N. ■

In any case, even though the two estimators φ̂(ω) in (5.3.22) and φ̂_p(ω) generally give different PSD values, they both base the power calculation stage of the FBA scheme on only a single sample. Hence, similarly to φ̂_p(ω), the RFB estimate (5.3.22) is expected to exhibit erratic fluctuations. The next subsection discusses a way in which the variance of the RFB spectral estimate can be reduced, at the expense of reducing the resolution of this estimate.

5.3.3 RFB Method for Statistically Stable Spectral Analysis

The FBA interpretation of the modified periodogram methods, as explained in Section 5.2, highlighted two approaches to reducing the statistical variability of the spectral estimate (5.3.22). The first approach consists of splitting the available sample {y(t)}, t = 1, ..., N, into a number of subsequences, computing (5.3.22) for each stretch, and then averaging the so-obtained values.
The problem with this way of proceeding is that the values taken by (5.3.22) for different subsequences are not guaranteed to be statistically independent. In fact, if the subsequences overlap, then those values may be strongly correlated. The consequence of this fact is that one can never be sure of the "exact" reduction in variance that is achieved by averaging, in a given situation.

The second approach to reducing the variance consists of using several bandpass filters, in lieu of only one, which operate on the whole data sample [Thomson 1982]. This approach aims at producing statistically independent samples for the power calculation stage. When this is achieved, the variance is reduced K times, where K is the number of samples averaged (which equals the number of bandpass filters used). In the following, we focus on this second approach, which appears particularly suitable for the RFB method. We set β to some value larger than 1/N, which gives (cf. (5.3.12))

$$K = N\beta > 1 \tag{5.3.23}$$

The larger β (i.e., the lower the resolution), the larger K and hence the larger the reduction in variance that can be achieved. By using the result (5.3.15), we define K baseband filters as

$$h_p = [h_{p,0} \ \ldots \ h_{p,N-1}]^* = s_p, \qquad p = 1,\ldots,K \tag{5.3.24}$$

Here h_p denotes the impulse response vector of the pth filter, and s_p is the pth dominant Slepian sequence. Note that s_p is real-valued (see Result R12 in Appendix A), and thus so is h_p. According to the discussion leading to (5.3.15), the set of filters (5.3.24) covers the baseband [−βπ, βπ], with each of these filters passing (roughly speaking) one Kth of this baseband. Let x_p be defined similarly to x in (5.3.20), but now for the pth filter:

$$x_p = \sum_{k=0}^{N-1} h_{p,k}\,\tilde y(N-k) = \sum_{t=1}^{N} h_{p,N-t}\,\tilde y(t) \tag{5.3.25}$$

The calculation (5.3.21) applies to {x_p} in exactly the same way, and hence

$$E\{|x_p|^2\} \simeq \phi_y(\tilde\omega), \qquad p = 1,\ldots,K \tag{5.3.26}$$

In addition, a straightforward calculation gives

$$E\{x_p x_k^*\} = E\left\{\left[\sum_{t=0}^{N-1} h_{p,t}\,\tilde y(N-t)\right]\left[\sum_{s=0}^{N-1} h_{k,s}^*\,\tilde y^*(N-s)\right]\right\} = \sum_{t=0}^{N-1}\sum_{s=0}^{N-1} h_{p,t}\,h_{k,s}^*\,r_{\tilde y}(s-t)$$
$$= \frac{1}{2\pi}\int_{-\pi}^{\pi}\sum_{t=0}^{N-1}\sum_{s=0}^{N-1} h_{p,t}\,h_{k,s}^*\,\phi_{\tilde y}(\omega)\,e^{i(s-t)\omega}\,d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} H_p(\omega)\,H_k^*(\omega)\,\phi_{\tilde y}(\omega)\,d\omega$$
$$\simeq \phi_{\tilde y}(0)\,h_p^*\left[\frac{1}{2\pi}\int_{-\beta\pi}^{\beta\pi} a(\omega)a^*(\omega)\,d\omega\right] h_k = \phi_y(\tilde\omega)\,h_p^*\,\Gamma\,h_k = 0 \quad\text{for } k \ne p \tag{5.3.27}$$

Thus, the random variables x_p and x_k (for p ≠ k) are approximately uncorrelated under the assumptions made. This implies, at least under the assumption that the {x_k} are Gaussian, that |x_p|² and |x_k|² are statistically independent (for p ≠ k). According to the calculations above, {|x_p|²}, p = 1, ..., K, can approximately be considered to be independent random variables, all with the same mean φ_y(ω̃). Then we can estimate φ_y(ω̃) by the average (1/K) Σ_{p=1}^K |x_p|², or

$$\hat\phi(\omega) = \frac{1}{K}\sum_{p=1}^{K}\left|\sum_{t=1}^{N} h_{p,N-t}\,y(t)\,e^{-i\omega t}\right|^2 \tag{5.3.28}$$

We may suspect that the random variables {|x_p|²} have not only the same mean but also the same variance (this can, in fact, readily be shown under the Gaussian hypothesis). Whenever this is true, the variance of the average in (5.3.28) is K times smaller than the variance of each of the variables averaged. These findings are summarized as follows: if the resolution threshold β is increased K times, from β = 1/N (the lowest value) to β = K/N, then the variance of the RFB estimate in (5.3.22) may be reduced by a factor of K by constructing the spectral estimate as in (5.3.28), where the pth baseband filter's impulse response {h_{p,t}}, t = 0, ..., N−1, is given by the pth dominant Slepian sequence (p = 1, ..., K). (5.3.29)

The RFB spectral estimator (5.3.28) can be given two interpretations. First, arguments similar to those following equation (5.3.22) suggest that for large N the RFB estimate (5.3.28) behaves similarly to the Daniell method of periodogram averaging.
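A compact sketch of (5.3.28) follows (our own illustration, not from the text). The Slepian tapers come from `scipy.signal.windows.dpss`, whose half-bandwidth parameter NW corresponds to Nβ/2 = K/2 here, and the test input is an assumed unit-variance white noise, for which φ(ω) = 1.

```python
import numpy as np
from scipy.signal.windows import dpss

N, K = 256, 4
tapers = dpss(N, K / 2, Kmax=K)      # rows: the K dominant Slepian sequences, unit norm
rng = np.random.default_rng(1)
y = rng.standard_normal(N)           # assumed white-noise input, true PSD phi(w) = 1

grid = np.linspace(0, np.pi, 256)
E = np.exp(-1j * np.outer(grid, np.arange(1, N + 1)))
# phi_hat(w) = (1/K) sum_p |sum_t h_{p,t} y(t) e^{-iwt}|^2, cf. (5.3.28)
phi_hat = np.mean(np.abs(E @ (tapers * y).T)**2, axis=1)

print(phi_hat.mean())                # fluctuates around the true value 1
```

Averaging the K single-taper periodograms is what tames the erratic fluctuations of the K = 1 estimate, at the cost of the K-fold wider implicit passband.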
For small or medium-sized values of N, the RFB and Daniell methods behave differently. In such a case, we can relate (5.3.28) to the class of multiwindow spectral estimators [Thomson 1982]. Indeed, the RFB estimate (5.3.28) can be interpreted as the average of K windowed periodograms, where the pth periodogram is computed from the raw data sequence {y(t)} windowed with the pth dominant Slepian sequence. Note that since the Slepian sequences are given by the eigenvectors of the real Toeplitz matrix Γ, they must be either symmetric, h_{p,N−t} = h_{p,t−1}, or skew-symmetric, h_{p,N−t} = −h_{p,t−1} (see Result R25 in Appendix A). This means that (5.3.28) can alternatively be written as

$$\hat\phi(\omega) = \frac{1}{K}\sum_{p=1}^{K}\left|\sum_{t=1}^{N} h_{p,t-1}\,y(t)\,e^{-i\omega t}\right|^2 \tag{5.3.30}$$

This form of the RFB estimate makes its interpretation as a multiwindow spectrum estimator more direct.

For a second interpretation of the RFB estimate (5.3.28), consider the following (Daniell-type) spectrally smoothed periodogram estimator of φ(ω̃):

$$\hat\phi(\tilde\omega) = \frac{1}{2\pi\beta}\int_{\tilde\omega-\beta\pi}^{\tilde\omega+\beta\pi}\hat\phi_p(\omega)\,d\omega = \frac{1}{2\pi\beta}\int_{-\beta\pi}^{\beta\pi}\hat\phi_p(\omega+\tilde\omega)\,d\omega = \frac{1}{2\pi\beta}\int_{-\beta\pi}^{\beta\pi}\frac{1}{N}\left|\sum_{t=1}^{N} y(t)\,e^{-i(\omega+\tilde\omega)t}\right|^2 d\omega$$
$$= \frac{1}{2\pi K}\int_{-\beta\pi}^{\beta\pi}\sum_{t=1}^{N}\sum_{s=1}^{N}\tilde y(t)\,\tilde y^*(s)\,e^{-i\omega t}\,e^{i\omega s}\,d\omega = \frac{1}{K}\,[\tilde y^*(1)\ \ldots\ \tilde y^*(N)]\;\Gamma\;[\tilde y(1)\ \ldots\ \tilde y(N)]^T \tag{5.3.31}$$

where we made use of the fact that Γ is real-valued. It follows from the result (5.3.15) that Γ can be approximated by the rank-K matrix

$$\Gamma \simeq \sum_{p=1}^{K} s_p s_p^T = \sum_{p=1}^{K} h_p h_p^T \tag{5.3.32}$$

Inserting (5.3.32) into (5.3.31) and using the fact that the Slepian sequences s_p = h_p are real-valued leads to the following PSD estimator:

$$\hat\phi(\tilde\omega) \simeq \frac{1}{K}\sum_{p=1}^{K}\left|\sum_{t=1}^{N} h_{p,t-1}\,\tilde y(t)\right|^2 \tag{5.3.33}$$

which is precisely the RFB estimator (5.3.30).
Hence, the RFB estimate of the PSD can also be interpreted as a reduced-rank smoothed periodogram.

We might think of using the full-rank smoothed periodogram (5.3.31) as an estimator of the PSD, in lieu of the reduced-rank smoothed periodogram (5.3.33), which coincides with the RFB estimate. However, from a theoretical standpoint we have no strong reason to do so. Moreover, from a practical standpoint we have clear reasons against such an idea. We can explain this briefly as follows. The K dominant eigenvectors of Γ can be precomputed with satisfactory numerical accuracy. Then, evaluation of (5.3.33) can be done by using an FFT algorithm in approximately (1/2)KN log₂ N = (1/2)βN² log₂ N flops. On the other hand, a direct evaluation of (5.3.31) would require N² flops for each value of ω, which leads to a prohibitively large total computational burden. A computationally efficient evaluation of (5.3.31) would require some factorization of Γ to be performed, such as the eigendecomposition of Γ. However, Γ is an extremely ill-conditioned matrix (recall that N − K = N(1 − β) of its eigenvalues are close to zero), which means that such a complete factorization cannot easily be performed with satisfactory numerical accuracy. In any case, even if we were able to precompute the eigendecomposition of Γ, evaluation of (5.3.31) would require (1/2)N² log₂ N flops, which is still larger, by a factor of 1/β, than what is required for (5.3.33).

5.4 CAPON METHOD

The periodogram was previously shown to be a filter bank approach that uses a bandpass filter whose impulse response vector is given by the standard Fourier transform vector (i.e., [1, e^{−iω̃}, ..., e^{−i(N−1)ω̃}]^T). In the periodogram approach there is no attempt to purposely design the bandpass filter to achieve some desired characteristics (see, however, Section 5.5).
The RFB method, on the other hand, uses a bandpass filter specifically designed to be "as selective as possible" for a white noise input (see (5.3.5) and the discussion preceding it). The RFB's filter is still data independent, in the sense that it does not adapt to the processed data in any way. Presumably, it might be valuable to take the data properties into consideration when designing the bandpass filter. In other words, the filter should be designed to be "as selective as possible" (according to a criterion to be specified) not for a fictitious white noise input, but for the input consisting of the studied data themselves. This is the basic idea behind the Capon method, which is an FBA procedure based on a data-dependent bandpass filter [Capon 1969; Lacoss 1971].

5.4.1 Derivation of the Capon Method

The Capon method (CM), in contrast to the RFB estimator (5.3.28), uses only one bandpass filter for computing one estimated spectrum value. This suggests that if the CM is to provide statistically stable spectral estimates, then it should make use of the other approach which affords this: splitting the raw sample into subsequences and averaging the results obtained from each subsequence. Indeed, as we shall see, the Capon method is essentially based on this second approach.

Consider a filter with the finite impulse response

$$h = [h_0 \ h_1 \ \ldots \ h_m]^* \tag{5.4.1}$$

where m is a positive integer that is unspecified for the moment. The output of the filter at time t, when the input is the raw data sequence {y(t)}, is given by

$$y_F(t) = \sum_{k=0}^{m} h_k\,y(t-k) = h^*\,[y(t)\ y(t-1)\ \ldots\ y(t-m)]^T \tag{5.4.2}$$

Let R denote the covariance matrix of the data vector in (5.4.2). Then the power of the filter output can be written as

$$E\{|y_F(t)|^2\} = h^* R h \tag{5.4.3}$$

where, according to the definition above,

$$R = E\left\{[y(t)\ \ldots\ y(t-m)]^T\,[y^*(t)\ \ldots\ y^*(t-m)]\right\} \tag{5.4.4}$$

The response of the filter (5.4.2) to a sinusoidal component of frequency ω (say) is determined by the filter's frequency response

$$H(\omega) = \sum_{k=0}^{m} h_k e^{-i\omega k} = h^* a(\omega) \tag{5.4.5}$$

where

$$a(\omega) = [1 \ e^{-i\omega} \ \ldots \ e^{-im\omega}]^T \tag{5.4.6}$$

If we want to make the filter as selective as possible for a frequency band around the current value ω, then we may think of minimizing the total power in (5.4.3) subject to the constraint that the filter passes the frequency ω undistorted. This idea leads to the following optimization problem:

$$\min_h\ h^* R h \quad\text{subject to}\quad h^* a(\omega) = 1 \tag{5.4.7}$$

The solution to (5.4.7) is given in Result R35 in Appendix A:

$$h = \frac{R^{-1}a(\omega)}{a^*(\omega)R^{-1}a(\omega)} \tag{5.4.8}$$

Inserting (5.4.8) into (5.4.3) gives

$$E\{|y_F(t)|^2\} = \frac{1}{a^*(\omega)R^{-1}a(\omega)} \tag{5.4.9}$$

This is the power of y(t) in a passband centered on ω. Then, assuming that the (idealized) conditions (i) and (ii) in (5.1.5) hold, we can approximately determine the value of the PSD of y(t) at the passband's center frequency as

$$\phi(\omega) \simeq \frac{E\{|y_F(t)|^2\}}{\beta} = \frac{1}{\beta\,a^*(\omega)R^{-1}a(\omega)} \tag{5.4.10}$$

where β denotes the frequency bandwidth of the filter given by (5.4.8). The division by β, as above, is sometimes omitted in the literature, but it is required to complete the FBA scheme in Figure 5.1. Note that since the bandpass filter (5.4.8) is data dependent, its bandwidth β is not necessarily data independent, nor is it necessarily frequency independent. Hence, the division by β in (5.4.10) may not represent a simple scaling of E{|y_F(t)|²}; it may change the shape of this quantity as a function of ω.

There are various possibilities for determining the bandwidth β, depending on the degree of precision we are aiming for. The simplest possibility is to set

$$\beta = \frac{1}{m+1} \tag{5.4.11}$$

This choice is motivated by the time-bandwidth product result (2.6.5), which says that for a filter whose temporal aperture is equal to (m + 1), the bandwidth should roughly be given by 1/(m + 1).
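A quick numerical sanity check of (5.4.7)–(5.4.9) is possible before continuing (our own sketch, with an assumed randomly generated Hermitian positive definite R): the filter (5.4.8) has unit gain at ω, its output power equals (5.4.9), and no other filter satisfying the constraint does better.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 6
B = rng.standard_normal((m + 1, m + 1)) + 1j * rng.standard_normal((m + 1, m + 1))
R = B @ B.conj().T + np.eye(m + 1)           # an assumed positive definite covariance

w = 0.7
a = np.exp(-1j * w * np.arange(m + 1))       # a(w) from (5.4.6)
Ria = np.linalg.solve(R, a)                  # R^{-1} a(w)
h = Ria / (a.conj() @ Ria)                   # the Capon filter (5.4.8)

print(abs(h.conj() @ a))                     # unit gain at w, as required by (5.4.7)
power = np.real(h.conj() @ R @ h)            # equals 1/(a* R^{-1} a), i.e. (5.4.9)
print(power)

# any other filter with h*a = 1, e.g. the plain Fourier vector, has more output power:
h2 = a / (a.conj() @ a)
print(np.real(h2.conj() @ R @ h2) >= power)  # True
```

The comparison with the plain Fourier vector `h2` is the essence of Capon's advantage over the periodogram filter: both pass ω undistorted, but (5.4.8) lets through the least total power.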
By inserting (5.4.11) in (5.4.10), we obtain

$$\phi(\omega) \simeq \frac{m+1}{a^*(\omega) R^{-1} a(\omega)} \qquad(5.4.12)$$

Note that if y(t) is white noise of variance σ², (5.4.12) takes the correct value: φ(ω) = σ². In the general case, however, (5.4.11) gives only a rough indication of the filter's bandwidth, as the time–bandwidth product result does not apply exactly to the present situation (see the conditions under which (2.6.5) has been derived).

An often more exact expression for β can be obtained as follows [Lagunas, Santamaria, Gasull, and Moreno 1986]. The (equivalent) bandwidth of a bandpass filter can be defined as the support of the rectangle centered on ω (the filter's center frequency) that concentrates the whole energy in the filter's frequency response. According to this definition, β can be assumed to satisfy:

$$\int_{-\pi}^{\pi} |H(\psi)|^2 \, d\psi = |H(\omega)|^2\, 2\pi\beta \qquad(5.4.13)$$

Since in the present case H(ω) = 1 (see (5.4.7)), we obtain from (5.4.13):

$$\beta = \frac{1}{2\pi}\int_{-\pi}^{\pi} |h^* a(\psi)|^2 \, d\psi = h^* \left[\frac{1}{2\pi}\int_{-\pi}^{\pi} a(\psi)\, a^*(\psi)\, d\psi\right] h \qquad(5.4.14)$$

The (k, p) element of the central matrix in the above quadratic form is given by

$$\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-i\psi(k-p)}\, d\psi = \delta_{k,p} \qquad(5.4.15)$$

With this observation and (5.4.8), (5.4.14) leads to

$$\beta = h^* h = \frac{a^*(\omega) R^{-2} a(\omega)}{\left[a^*(\omega) R^{-1} a(\omega)\right]^2} \qquad(5.4.16)$$

Note that this expression of the bandwidth is both data and frequency dependent (as was alluded to previously). Inserting (5.4.16) in (5.4.10) gives

$$\phi(\omega) \simeq \frac{a^*(\omega) R^{-1} a(\omega)}{a^*(\omega) R^{-2} a(\omega)} \qquad(5.4.17)$$

Remark: The expression for β in (5.4.16) is based on the assumption that most of the area under the curve of $|H(\psi)|^2 = |h^* a(\psi)|^2$ (for ψ ∈ [−π, π]) is located around the center frequency ω. This assumption is often, but not always, true. For instance, consider a data sequence {y(t)} consisting of a number of sinusoidal components with frequencies {ω_k} in noise with small power.
Then the Capon filter (5.4.8) with center frequency ω will likely place nulls at {ψ = ω_k} to annihilate the strong sinusoidal components in the data, but will pay little attention to the weak noise component. The consequence is that $|H(\psi)|^2$ will be nearly zero at {ψ = ω_k}, and one at ψ = ω (by (5.4.7)), but may take rather large values at other frequencies (see, for example, the numerical examples in [Li and Stoica 1996a], which demonstrate this behavior of the Capon filter). In such a case, the formula (5.4.16) may significantly overestimate the "true" bandwidth, and hence the spectral formula (5.4.17) may significantly underestimate the PSD φ(ω). ■

In the derivations above, the true data covariance matrix R has been assumed available. In order to turn the previous PSD formulas into practical spectral estimation algorithms, we must replace R in these formulas by a sample estimate, for instance by

$$\hat R = \frac{1}{N-m} \sum_{t=m+1}^{N} \begin{bmatrix} y(t) \\ \vdots \\ y(t-m) \end{bmatrix} [\,y^*(t) \; \ldots \; y^*(t-m)\,] \qquad(5.4.18)$$

Doing so, we obtain the following two spectral estimators corresponding to (5.4.12) and (5.4.17), respectively:

CM–Version 1:
$$\hat\phi(\omega) = \frac{m+1}{a^*(\omega)\,\hat R^{-1} a(\omega)} \qquad(5.4.19)$$

CM–Version 2:
$$\hat\phi(\omega) = \frac{a^*(\omega)\,\hat R^{-1} a(\omega)}{a^*(\omega)\,\hat R^{-2} a(\omega)} \qquad(5.4.20)$$

There is an implicit assumption in both (5.4.19) and (5.4.20) that $\hat R^{-1}$ exists. This assumption sets a limit on the maximum value that can be chosen for m:

$$m < N/2 \qquad(5.4.21)$$

(Observe that rank$(\hat R) \le N - m$, which is less than dim$(\hat R) = m + 1$ if (5.4.21) is violated.) The inequality (5.4.21) is important since it sets a limit on the resolution achievable by the Capon method. Indeed, since the Capon method is based on a bandpass filter with impulse response's aperture equal to m, we may expect its resolution threshold to be on the order of 1/m > 2/N (with the inequality following from (5.4.21)). As m is decreased, we can expect the resolution of the Capon method to become worse (cf. the previous discussion).
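The two estimators are short to implement. The following is a minimal sketch (assuming numpy; the function name `capon_psd` and the test signal are ours, not the book's):

```python
import numpy as np

def capon_psd(y, m, omegas, version=1):
    # CM-Version 1 (5.4.19) / CM-Version 2 (5.4.20) with Rhat from (5.4.18)
    N = len(y)
    assert m < N / 2                                   # condition (5.4.21)
    # Sample covariance matrix (5.4.18); rows of V are [y(t), y(t-1), ..., y(t-m)]
    V = np.array([y[t - m:t + 1][::-1] for t in range(m, N)])
    Rhat = V.T @ V.conj() / (N - m)
    Rinv = np.linalg.inv(Rhat)
    phi = []
    for w in omegas:
        a = np.exp(-1j * w * np.arange(m + 1))
        q1 = (a.conj() @ Rinv @ a).real
        if version == 1:
            phi.append((m + 1) / q1)                   # (5.4.19)
        else:
            q2 = (a.conj() @ Rinv @ Rinv @ a).real
            phi.append(q1 / q2)                        # (5.4.20)
    return np.array(phi)

rng = np.random.default_rng(0)
t = np.arange(256)
y = np.sqrt(2) * np.cos(0.9 * t) + 0.1 * rng.standard_normal(256)
omegas = np.linspace(0.1, np.pi - 0.1, 200)
phi = capon_psd(y, m=20, omegas=omegas, version=1)
# The estimated spectrum should peak near the sinusoid's frequency 0.9
assert abs(omegas[np.argmax(phi)] - 0.9) < 0.05
```

The choice m = 20 respects (5.4.21) for N = 256; increasing m sharpens the peak at the cost of a noisier $\hat R$, in line with the tradeoff discussed above.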
On the other hand, the accuracy with which $\hat R$ is determined increases with decreasing m (since more outer products are averaged in (5.4.18)). The main consequence of the increased accuracy of $\hat R$ is to statistically stabilize the spectral estimate (5.4.19) or (5.4.20). Hence, the choice of m should be made with the ubiquitous tradeoff between resolution and statistical accuracy in mind. It is interesting to note that for the Capon method both the filter design and power calculation stages are data dependent. The accuracy of both these stages may worsen if m is chosen too large. In applications, the maximum value that can be chosen for m might also be limited by considerations of computational complexity.

Empirical studies have shown that the ability of the Capon method to resolve fine details of a PSD, such as closely spaced peaks, is superior to the corresponding performance of the periodogram–based methods. This superiority may be attributed to the higher statistical stability of the Capon method, as explained next. For m smaller than N/2 (see (5.4.21)), we may expect the Capon method to possess worse resolution but better statistical accuracy compared with the unwindowed or "mildly windowed" periodogram method. It should be stressed that the notion of "resolution" refers to the ability of the theoretically averaged spectral estimate $E\{\hat\phi(\omega)\}$ to resolve fine details in the true PSD φ(ω). This resolution is roughly inversely proportional to the window's length or the bandpass filter impulse response's aperture. The "resolving power" corresponding to the estimate $\hat\phi(\omega)$ is more difficult to quantify, but — of course — it is what interests us the most. It should be clear that the resolving power of $\hat\phi(\omega)$ depends not only on the bias of this estimate (i.e., on $E\{\hat\phi(\omega)\}$), but also on its variance.
A spectral estimator with low bias–based resolution but high statistical accuracy may be better able to resolve fine details in a studied PSD than can a high resolution/low accuracy estimator. Since the periodogram may achieve better bias–based resolution than the Capon method, the higher (empirically observed) "resolving power" of the latter should be due to a better statistical accuracy (i.e., a lower variance).

In the context of the previous discussion, it is interesting to note that the Blackman–Tukey periodogram with a Bartlett window of length 2m + 1, which is given by (see (2.5.1)):

$$\hat\phi_{BT}(\omega) = \sum_{k=-m}^{m} \frac{m+1-|k|}{m+1}\, \hat r(k)\, e^{-i\omega k}$$

can be written in a form that bears some resemblance to the form (5.4.19) of the CM–Version 1 estimator. A straightforward calculation gives

$$\hat\phi_{BT}(\omega) = \frac{1}{m+1}\sum_{t=0}^{m}\sum_{s=0}^{m} \hat r(t-s)\, e^{-i\omega(t-s)} \qquad(5.4.22)$$
$$= \frac{1}{m+1}\, a^*(\omega)\, \hat R\, a(\omega) \qquad(5.4.23)$$

where a(ω) is as defined in (5.4.6), and $\hat R$ is the Hermitian Toeplitz sample covariance matrix

$$\hat R = \begin{bmatrix} \hat r(0) & \hat r(1) & \cdots & \hat r(m) \\ \hat r^*(1) & \hat r(0) & \ddots & \vdots \\ \vdots & \ddots & \ddots & \hat r(1) \\ \hat r^*(m) & \cdots & \hat r^*(1) & \hat r(0) \end{bmatrix}$$

Comparing the above expression for $\hat\phi_{BT}(\omega)$ with (5.4.19), it is seen that the CM–Version 1 can be obtained from the Blackman–Tukey estimator by replacing $\hat R$ in the Blackman–Tukey estimator with $\hat R^{-1}$, and then inverting the so–obtained quadratic form. Below we provide a brief explanation as to why this replacement and inversion make sense. That is, if we ignore for a moment the technically sound filter bank derivation of the Capon method, then why should the above way of obtaining CM–Version 1 from the Blackman–Tukey method provide a reasonable spectral estimator? We begin by noting that (cf.
Section 1.3.2):

$$\lim_{m\to\infty} E\left\{ \frac{1}{m+1}\left| \sum_{t=0}^{m} y(t)\, e^{-i\omega t} \right|^2 \right\} = \phi(\omega)$$

However, a simple calculation shows that

$$E\left\{ \frac{1}{m+1}\left| \sum_{t=0}^{m} y(t)\, e^{-i\omega t} \right|^2 \right\} = \frac{1}{m+1}\sum_{t=0}^{m}\sum_{s=0}^{m} r(t-s)\, e^{-i\omega t} e^{i\omega s} = \frac{1}{m+1}\, a^*(\omega)\, R\, a(\omega)$$

Hence,

$$\lim_{m\to\infty} \frac{1}{m+1}\, a^*(\omega)\, R\, a(\omega) = \phi(\omega) \qquad(5.4.24)$$

Similarly, one can show that

$$\lim_{m\to\infty} \frac{1}{m+1}\, a^*(\omega)\, R^{-1} a(\omega) = \phi^{-1}(\omega) \qquad(5.4.25)$$

(see, e.g., [Hannan and Wahlberg 1989]). Comparing (5.4.24) with (5.4.25) provides the explanation we were looking for. Observe that the CM–Version 1 estimator is a finite–sample approximation to equation (5.4.25), whereas the Blackman–Tukey estimator is a finite–sample approximation to equation (5.4.24).

The Capon method has also been compared with the AR method of spectral estimation (see Section 3.2). It has been empirically observed that the CM–Version 1 possesses less variance but worse resolution than the AR spectral estimator. This may be explained by making use of the relationship that exists between the CM–Version 1 and AR spectral estimators; see the next subsection (and also [Burg 1972]). The CM–Version 2 spectral estimator is less well studied and hence its properties are not so well understood. In the following subsection, we also relate the CM–Version 2 to the AR spectral estimator. In the case of CM–Version 2, the relationship is more involved, hence leaving less room for intuitive explanations.

5.4.2 Relationship between Capon and AR Methods

The AR method of spectral estimation has been described in Chapter 3. In the following we consider the covariance matrix estimate in (5.4.18). The AR method corresponding to this sample covariance matrix is the LS method discussed in Section 3.4.2. Let us denote the matrix $\hat R$ in (5.4.18) by $\hat R_{m+1}$ and its principal lower–right k × k block by $\hat R_k$ (k = 1, ..., m + 1); each $\hat R_k$ occupies the lower–right k × k corner of $\hat R_{m+1}$, so that

$$\hat R_1 \subset \hat R_2 \subset \cdots \subset \hat R_{m+1} = \hat R \qquad(5.4.26)$$

With this notation, the coefficient vector $\hat\theta_k$ and the residual power $\hat\sigma^2_k$ of the kth–order AR model fitted to the data {y(t)} are obtained as the solutions to the following matrix equation (refer to (3.4.6)):

$$\hat R_{k+1} \begin{bmatrix} 1 \\ \hat\theta^c_k \end{bmatrix} = \begin{bmatrix} \hat\sigma^2_k \\ 0 \end{bmatrix} \qquad(5.4.27)$$

(the complex conjugate in (5.4.27) appears owing to the fact that $\hat R_k$ above is equal to the complex conjugate of the sample covariance matrix used in Chapter 3). The nested structure of (5.4.26) along with the defining equation (5.4.27) imply:

$$\hat R_{m+1}\left[ \begin{pmatrix} 1 \\ \hat\theta^c_m \end{pmatrix} \;\; \begin{pmatrix} 0 \\ 1 \\ \hat\theta^c_{m-1} \end{pmatrix} \;\; \cdots \;\; \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} \right] = \begin{bmatrix} \hat\sigma^2_m & \times & \cdots & \times \\ 0 & \hat\sigma^2_{m-1} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \times \\ 0 & \cdots & 0 & \hat\sigma^2_0 \end{bmatrix} \qquad(5.4.28)$$

where "×" stands for undetermined elements. Let

$$\hat H = \left[ \begin{pmatrix} 1 \\ \hat\theta^c_m \end{pmatrix} \;\; \begin{pmatrix} 0 \\ 1 \\ \hat\theta^c_{m-1} \end{pmatrix} \;\; \cdots \;\; \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} \right] \qquad(5.4.29)$$

denote the (lower triangular, unit diagonal) matrix appearing on the left–hand side of (5.4.28); its kth column contains the coefficient vector $[1 \;\; (\hat\theta^c_{m-k+1})^T]^T$ of the $(m-k+1)$st–order AR model, preceded by $k-1$ zeros. It follows from (5.4.28) that

$$\hat H^* \hat R_{m+1} \hat H = \begin{bmatrix} \hat\sigma^2_m & \times & \cdots & \times \\ & \hat\sigma^2_{m-1} & \ddots & \vdots \\ & & \ddots & \times \\ 0 & & & \hat\sigma^2_0 \end{bmatrix} \qquad(5.4.30)$$

(where, once more, × denotes undetermined elements). Since $\hat H^* \hat R_{m+1} \hat H$ is a Hermitian matrix, the elements designated by "×" in (5.4.30) must be equal to zero. Hence, we have proven the following result, which is essential in establishing a relation between the AR and Capon methods of spectral estimation (this result extends the one in Exercise 3.7 to the non–Toeplitz covariance case).

The parameters $\{\hat\theta_k, \hat\sigma^2_k\}$ of the AR models of orders k = 1, 2, ..., m determine the following factorization of the inverse (sample) covariance matrix:

$$\hat R^{-1}_{m+1} = \hat H\, \hat\Sigma^{-1} \hat H^*\,; \qquad \hat\Sigma = \begin{bmatrix} \hat\sigma^2_m & & 0 \\ & \ddots & \\ 0 & & \hat\sigma^2_0 \end{bmatrix} \qquad(5.4.31)$$

Let

$$\hat A_k(\omega) = [1 \; e^{-i\omega} \; \ldots \; e^{-ik\omega}] \begin{bmatrix} 1 \\ \hat\theta_k \end{bmatrix} \qquad(5.4.32)$$

denote the polynomial corresponding to the kth–order AR model, and let

$$\hat\phi^{AR}_k(\omega) = \frac{\hat\sigma^2_k}{|\hat A_k(\omega)|^2} \qquad(5.4.33)$$

denote its associated PSD (see Chapter 3).
It is readily verified that

$$a^*(\omega)\hat H = [1 \; e^{i\omega} \; \ldots \; e^{im\omega}]\, \hat H = \left[\hat A^*_m(\omega),\; e^{i\omega}\hat A^*_{m-1}(\omega),\; \ldots,\; e^{im\omega}\hat A^*_0(\omega)\right] \qquad(5.4.34)$$

It follows from (5.4.31) and (5.4.34) that the quadratic form in the denominator of the CM–Version 1 spectral estimator can be written as

$$a^*(\omega)\,\hat R^{-1} a(\omega) = a^*(\omega)\,\hat H\,\hat\Sigma^{-1}\hat H^* a(\omega) = \sum_{k=0}^{m} |\hat A_k(\omega)|^2/\hat\sigma^2_k = \sum_{k=0}^{m} 1/\hat\phi^{AR}_k(\omega) \qquad(5.4.35)$$

which leads at once to the following result:

$$\hat\phi_{CM\text{--}1}(\omega) = \frac{1}{\displaystyle\frac{1}{m+1}\sum_{k=0}^{m} 1/\hat\phi^{AR}_k(\omega)} \qquad(5.4.36)$$

This is the desired relation between the CM–Version 1 and the AR spectral estimates. This relation says that the inverse of the CM–Version 1 spectral estimator can be obtained by averaging the inverse estimated AR spectra of orders from 0 to m. In view of the averaging operation in (5.4.36), it is not difficult to understand why the CM–Version 1 possesses less statistical variability than the AR estimator. Moreover, the fact that the CM–Version 1 has also been found to have worse resolution and bias properties than the AR spectral estimate should be due to the presence of low–order AR models in (5.4.36).

Next, consider the CM–Version 2. The previous analysis of CM–Version 1 already provides a relation between the numerator in the spectral estimate corresponding to CM–Version 2, (5.4.20), and the AR spectra. In order to obtain a similar expression for the denominator in (5.4.20), some preparations are required. The (sample) covariance matrix $\hat R$ can be used to define m + 1 AR models of order m, depending on which coefficient of the AR equation

$$\hat a_0 y(t) + \hat a_1 y(t-1) + \ldots + \hat a_m y(t-m) = \text{residuals} \qquad(5.4.37)$$

we choose to set to one. The AR model $\{\hat\theta_m, \hat\sigma^2_m\}$ used in the previous analysis corresponds to setting $\hat a_0 = 1$ in (5.4.37). However, in principle, any other AR coefficient in (5.4.37) may be normalized to one.
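The factorization (5.4.31) and the resulting identity (5.4.35) are exact linear algebra, so they can be checked numerically on real-valued data (for which the conjugates drop out). A minimal sketch, assuming numpy and an arbitrary test signal of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 128, 6
# Arbitrary correlated real data (an MA-filtered white noise sequence)
y = np.convolve(rng.standard_normal(N + 50), [1, 0.7, 0.4])[50:N + 50]

V = np.array([y[t - m:t + 1][::-1] for t in range(m, N)])
R = V.T @ V / (N - m)                                  # sample covariance (5.4.18)

omega = 0.7
a = np.exp(-1j * omega * np.arange(m + 1))
lhs = (a.conj() @ np.linalg.solve(R, a)).real          # a^*(w) Rhat^{-1} a(w)

rhs = 0.0
for k in range(m + 1):
    Rk = R[m - k:, m - k:]                             # lower-right (k+1) x (k+1) block
    x = np.linalg.solve(Rk, np.eye(k + 1)[:, 0])
    sigma2_k = 1.0 / x[0]                              # residual variance of (5.4.27)
    theta = x / x[0]                                   # [1, theta_k^T]^T
    Ak = np.exp(-1j * omega * np.arange(k + 1)) @ theta    # A_k(omega) of (5.4.32)
    rhs += abs(Ak) ** 2 / sigma2_k                     # summand of (5.4.35)

assert abs(lhs - rhs) < 1e-8 * lhs
```

The assertion confirms (5.4.35), and hence (5.4.36): the inverse Capon spectrum is the average of the inverse AR spectra of orders 0 through m.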
The mth–order LS AR model obtained by setting $\hat a_k = 1$ in (5.4.37) is denoted by $\{\hat\mu_k = \text{coefficient vector},\ \hat\gamma_k = \text{residual variance}\}$, and is given by the solution to the following linear system of equations (compare with (5.4.27)):

$$\hat R_{m+1}\,\hat\mu^c_k = \hat\gamma_k\, u_k \qquad(5.4.38)$$

where the (k + 1)st component of $\hat\mu_k$ is equal to one (k = 0, ..., m), and where $u_k$ stands for the (k + 1)st column of the (m + 1) × (m + 1) identity matrix:

$$u_k = [\,\underbrace{0 \;\ldots\; 0}_{k}\ 1\ \underbrace{0 \;\ldots\; 0}_{m-k}\,]^T \qquad(5.4.39)$$

Evidently, $[1\ \hat\theta^T_m]^T = \hat\mu_0$ and $\hat\sigma^2_m = \hat\gamma_0$.

Similarly to (5.4.32) and (5.4.33), the (estimated) PSD corresponding to the kth mth–order AR model given by (5.4.38) is obtained as

$$\hat\phi^{AR(m)}_k(\omega) = \frac{\hat\gamma_k}{|a^*(\omega)\hat\mu^c_k|^2} \qquad(5.4.40)$$

It is shown in the following calculation that the denominator in (5.4.20) can be expressed as a (weighted) average of the AR spectra in (5.4.40):

$$\sum_{k=0}^{m} \frac{1}{\hat\gamma_k\, \hat\phi^{AR(m)}_k(\omega)} = \sum_{k=0}^{m} \frac{|a^*(\omega)\hat\mu^c_k|^2}{\hat\gamma^2_k} = a^*(\omega)\left[\sum_{k=0}^{m} \frac{\hat\mu^c_k\,\hat\mu^T_k}{\hat\gamma^2_k}\right]a(\omega) = a^*(\omega)\,\hat R^{-1}\left[\sum_{k=0}^{m} u_k u^*_k\right]\hat R^{-1} a(\omega) = a^*(\omega)\,\hat R^{-2} a(\omega) \qquad(5.4.41)$$

Combining (5.4.35) and (5.4.41) gives

$$\hat\phi_{CM\text{--}2}(\omega) = \frac{\displaystyle\sum_{k=0}^{m} 1/\hat\phi^{AR}_k(\omega)}{\displaystyle\sum_{k=0}^{m} 1/\left[\hat\gamma_k\,\hat\phi^{AR(m)}_k(\omega)\right]} \qquad(5.4.42)$$

The above relation appears to be more involved, and hence more difficult to interpret, than the similar relation (5.4.36) corresponding to CM–Version 1. Nevertheless, since (5.4.42) is still obtained by averaging various AR spectra, we may expect that the CM–Version 2 estimator, like the CM–Version 1 estimator, is more statistically stable but has poorer resolution than the AR spectral estimator.

5.5 FILTER BANK REINTERPRETATION OF THE PERIODOGRAM

As we saw in Section 5.2, the basic periodogram spectral estimator can be interpreted as an FBA method with a preimposed bandpass filter (whose impulse response is equal to the Fourier transform vector). In contrast, RFB and Capon are FBA methods based on designed bandpass filters.
The filter used in the RFB method is data independent, whereas it is a function of the data covariances in the Capon method. The use of a data–dependent bandpass filter, such as in the Capon method, is intuitively appealing but it also leads to the following drawback: since we need to consistently estimate the filter impulse response, the temporal aperture of the filter should be chosen (much) smaller than the sample length, which sets a rather hard limit on the achievable spectral resolution. In addition, it appears that any other filter design methodology, except the one originally suggested by Capon, will most likely lead to a problem (such as an eigenanalysis) that should be solved for each value of the center frequency; which — of course — would be a rather prohibitive computational task. With these difficulties of the data–dependent design in mind, we may content ourselves with a "well–designed" data–independent filter.

The purpose of this section is to show that the basic periodogram and the Daniell method can be interpreted as FBA methods based on well–designed data–independent filters, similar to the RFB method. As we will see, the bandpass filters used by the aforementioned periodogram methods are obtained by combining the design procedures employed in the RFB and Capon methods. The following result is required (see R35 in Appendix A for a proof).

Let R, H, A and C be matrices of dimensions (m × m), (m × K), (m × n) and (K × n), respectively. Assume that R is positive definite and A has full column rank equal to n (hence, m ≥ n). Then the solution to the following quadratic optimization problem with linear constraints:

$$\min_H (H^* R H) \quad\text{subject to}\quad H^* A = C$$

is given by

$$H = R^{-1} A (A^* R^{-1} A)^{-1} C^* \qquad(5.5.1)$$

We can now proceed to derive our "new" FBA–based spectral estimation method (as we will see below, it turns out that this method is not really new!).
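The optimality result (5.5.1) can be checked numerically. The following sketch (numpy assumed, arbitrary real matrix sizes of our choosing) verifies that the claimed minimizer satisfies the constraint and that any other feasible point yields a larger objective in the positive semidefinite sense:

```python
import numpy as np

rng = np.random.default_rng(2)
mdim, K, n = 7, 4, 3
B = rng.standard_normal((mdim, mdim))
R = B @ B.T + mdim * np.eye(mdim)          # positive definite R
A = rng.standard_normal((mdim, n))         # full column rank with probability one
C = rng.standard_normal((K, n))

# Claimed minimizer (5.5.1): H = R^{-1} A (A^* R^{-1} A)^{-1} C^*  (real case)
RiA = np.linalg.solve(R, A)
H = RiA @ np.linalg.solve(A.T @ RiA, C.T)

assert np.allclose(H.T @ A, C)             # the constraint H^* A = C holds
# Perturb H along the null space of A^*: the perturbed point is still feasible...
D = (np.eye(mdim) - A @ np.linalg.pinv(A)) @ rng.standard_normal((mdim, K))
H0 = H + D
assert np.allclose(H0.T @ A, C)
# ...but its objective exceeds that of H in the positive semidefinite sense
gap = H0.T @ R @ H0 - H.T @ R @ H
assert np.min(np.linalg.eigvalsh((gap + gap.T) / 2)) > -1e-9
```

The gap equals $D^* R D \succeq 0$ because the cross terms vanish at the minimizer, which is the essence of the proof of (5.5.1).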
We would like this method to possess a facility for compromising between the bias and variance of the estimated PSD. As explained in the previous sections of this chapter, there are two main ways of doing this within the FBA: we either (i) use a bandpass filter with temporal aperture less than N, obtain the allowed number of samples of the filtered signal and then calculate the power from these samples; or (ii) use a set of K bandpass filters with length–N impulse responses, that cover a band centered on the current frequency value, obtain one sample of the filtered signals for each filter in the set and calculate the power by averaging these K samples. As argued in Section 5.3, approach (ii) may be more effective than (i) in reducing the variance of the estimated PSD, while keeping the bias low. In the sequel, we follow approach (ii).

Let β ≥ 1/N be the prespecified (desired) resolution and let K be defined by equation (5.3.12): K = βN. According to the time–bandwidth product result, a bandpass filter with a length–N impulse response may be expected to have a bandwidth on the order of 1/N (but not less). Hence, we can cover the preimposed passband

$$[\tilde\omega - \beta\pi,\ \tilde\omega + \beta\pi] \qquad(5.5.2)$$

(here $\tilde\omega$ stands for the current frequency value) by using $2\pi\beta/(2\pi/N) = K$ filters, which pass essentially nonoverlapping 1/N–length frequency bands in the interval (5.5.2). The requirement that the filters' passbands are (nearly) nonoverlapping is a key condition for variance reduction. In order to see this, let $x_p$ denote the sample obtained at the output of the pth filter:

$$x_p = \sum_{k=0}^{N-1} h_{p,k}\, y(N-k) = \sum_{t=1}^{N} h_{p,N-t}\, y(t) \qquad(5.5.3)$$

Here $\{h_{p,k}\}_{k=0}^{N-1}$ is the pth filter's impulse response. The associated frequency response is denoted by $H_p(\omega)$. Note that in the present case we consider bandpass filters operating on the raw data, in lieu of baseband filters operating on demodulated data (as in RFB). Assume that the center–frequency gain of each filter is normalized so that

$$H_p(\tilde\omega) = 1, \quad p = 1, \ldots, K \qquad(5.5.4)$$

Then, we can write

$$E\left\{|x_p|^2\right\} = \frac{1}{2\pi}\int_{-\pi}^{\pi} |H_p(\omega)|^2 \phi(\omega)\, d\omega \simeq \frac{1}{2\pi}\int_{\tilde\omega-\pi/N}^{\tilde\omega+\pi/N} \phi(\omega)\, d\omega \simeq \frac{2\pi/N}{2\pi}\, \phi(\tilde\omega) = \frac{1}{N}\,\phi(\tilde\omega) \qquad(5.5.5)$$

The second "equality" in (5.5.5) follows from (5.5.4) and the assumed bandpass characteristics of $H_p(\omega)$, and the third equality results from the assumption that φ(ω) is approximately constant over the passband. (Note that the angular frequency passband of $H_p(\omega)$ is 2π/N, as explained before.) In view of (5.5.5), we can estimate $\phi(\tilde\omega)$ by averaging over the squared magnitudes of the filtered samples $\{x_p\}_{p=1}^{K}$. By doing so, we may achieve a reduction in variance by a factor K, provided $\{x_p\}$ are statistically independent (see Section 5.3 for details). Under the assumption that the filters $\{H_p(\omega)\}$ pass essentially nonoverlapping frequency bands, we readily get (compare (5.3.27)):

$$E\left\{x_p x^*_k\right\} = \frac{1}{2\pi}\int_{-\pi}^{\pi} H_p(\omega) H^*_k(\omega)\phi(\omega)\, d\omega \simeq 0 \qquad(5.5.6)$$

which implies that the random variables $\{|x_p|^2\}$ are independent at least under the Gaussian hypothesis. Without the previous assumption on $\{H_p(\omega)\}$, the filtered samples $\{x_p\}$ may be strongly correlated and, therefore, a reduction in variance by a factor K cannot be guaranteed.

The conclusion from the previous (more or less heuristic) discussion is summarized in the following.

If the passbands of the filters used to cover the prespecified interval (5.5.2) do not overlap, then by using all filters' output samples — as contrasted to using the output sample of only one filter — we achieve a reduction in the variance of the estimated PSD by a factor equal to the number of filters. The maximum number of such filters that can be found is given by K = βN. (5.5.7)

By using the insights provided by the above discussion, as summarized in (5.5.7), we can now approach the bandpass filter design problem.
We sample the frequency axis as in the FFT (as almost any practical implementation of a spectral estimation method does):

$$\tilde\omega_s = \frac{2\pi}{N}\, s, \quad s = 0, \ldots, N-1 \qquad(5.5.8)$$

The frequency samples that fall within the passband (5.5.2) are readily seen to be the following:

$$\frac{2\pi}{N}(s+p), \quad p = -K/2, \ldots, 0, \ldots, K/2 - 1 \qquad(5.5.9)$$

(to simplify the discussion we assume that K is an even integer). Let

$$H = [h_1 \ \ldots \ h_K] \quad (N \times K) \qquad(5.5.10)$$

denote the matrix whose pth column is equal to the impulse response vector corresponding to the pth bandpass filter. We assume that the input to the filters is white noise (as in RFB) and design the filters so as to minimize the output power under the constraint that each filter passes undistorted one (and only one) of the frequencies in (5.5.9) (as in Capon). These design objectives lead to the following optimization problem:

$$\min_H (H^* H) \quad\text{subject to}\quad H^* A = I$$

where

$$A = \left[ a\!\left(\frac{2\pi}{N}\left(s - \frac{K}{2}\right)\right), \ \ldots, \ a\!\left(\frac{2\pi}{N}\left(s + \frac{K}{2} - 1\right)\right) \right] \qquad(5.5.11)$$

and where $a(\omega) = [1\ e^{-i\omega} \ \ldots \ e^{-i(N-1)\omega}]^T$. Note that the constraint in (5.5.11) guarantees that each frequency in the passband (5.5.9) is passed undistorted by one filter in the set, and is annihilated by all the other (K − 1) filters. In particular, observe that (5.5.11) implies (5.5.4). The solution to (5.5.11) follows at once from the result (5.5.1): the minimizing H matrix is given by

$$H = A(A^* A)^{-1} \qquad(5.5.12)$$

However, the columns in A are orthogonal, $A^*A = NI$ (see (4.3.15)); therefore, (5.5.12) simplifies to

$$H = \frac{1}{N}\, A \qquad(5.5.13)$$

which is the solution of the filter design problem previously formulated. By using (5.5.13) in (5.5.3), we get

$$|x_p|^2 = \frac{1}{N^2}\left| \sum_{t=1}^{N} e^{i(N-t)\frac{2\pi}{N}(s+p)}\, y(t) \right|^2 = \frac{1}{N^2}\left| \sum_{t=1}^{N} y(t)\, e^{-i\frac{2\pi}{N}(s+p)t} \right|^2 = \frac{1}{N}\,\hat\phi_p\!\left(\frac{2\pi}{N}(s+p)\right), \quad p = -K/2, \ldots, K/2-1 \qquad(5.5.14)$$

where the dependence of $|x_p|^2$ on s (and hence on $\tilde\omega_s$) is omitted to simplify the notation, and where $\hat\phi_p(\omega)$ is the standard periodogram.
Finally, (5.5.14) along with (5.5.5) lead to the following FBA spectral estimator:

$$\hat\phi\!\left(\frac{2\pi}{N}\,s\right) = \frac{1}{K}\sum_{p=-K/2}^{K/2-1} N\,|x_p|^2 = \frac{1}{K}\sum_{l=s-K/2}^{s+K/2-1} \hat\phi_p\!\left(\frac{2\pi}{N}\,l\right) \qquad(5.5.15)$$

which coincides with the Daniell periodogram estimator (2.7.16). Furthermore, for K = 1 (i.e., β = 1/N, which is the choice suitable for "high–resolution" applications), (5.5.15) reduces to the unmodified periodogram. Recall also that the RFB method in Section 5.3, for large data lengths, is expected to have similar performance to the Daniell method for K > 1 and to the basic periodogram for K = 1. Hence, in the family of nonparametric spectral estimation methods the periodograms "are doing well".

5.6 COMPLEMENTS

5.6.1 Another Relationship between the Capon and AR Methods

The relationship between the AR and Capon spectra established in Section 5.4.2 involves all AR spectral models of orders 0 through m. Another interesting relationship, which involves the AR spectrum of order m alone, is presented in this complement.

Let $\hat\theta = [\hat a_0 \ \hat a_1 \ \ldots \ \hat a_m]^T$ (with $\hat a_0 = 1$) denote the vector of the coefficients of the mth–order AR model fitted to the data sample covariances, and let $\hat\sigma^2$ denote the corresponding residual variance (see Chapter 3 and (5.4.27)). Then the mth–order AR spectrum is given by:

$$\hat\phi_{AR}(\omega) = \frac{\hat\sigma^2}{|a^*(\omega)\hat\theta^c|^2} = \frac{\hat\sigma^2}{\left|\sum_{k=0}^{m} \hat a_k\, e^{-i\omega k}\right|^2} \qquad(5.6.1)$$

By a simple calculation, $\hat\phi_{AR}(\omega)$ above can be rewritten in the following form:

$$\hat\phi_{AR}(\omega) = \frac{\hat\sigma^2}{\sum_{s=-m}^{m} \hat\rho(s)\, e^{i\omega s}} \qquad(5.6.2)$$

where

$$\hat\rho(s) = \sum_{k=0}^{m-s} \hat a_k\, \hat a^*_{k+s} = \hat\rho^*(-s), \quad s = 0, \ldots, m \qquad(5.6.3)$$

To show this, note that (with the convention $\hat a_k = 0$ for $k \notin [0, m]$)

$$\left|\sum_{k=0}^{m} \hat a_k\, e^{-i\omega k}\right|^2 = \sum_{k=0}^{m}\sum_{p=0}^{m} \hat a_k\, \hat a^*_p\, e^{-i\omega(k-p)} = \sum_{k=0}^{m}\sum_{s=k-m}^{k} \hat a_k\, \hat a^*_{k-s}\, e^{-i\omega s} = \sum_{k=0}^{m}\sum_{s=-m}^{m} \hat a_k\, \hat a^*_{k-s}\, e^{-i\omega s} = \sum_{s=-m}^{m}\sum_{k=0}^{m} \hat a_k\, \hat a^*_{k+s}\, e^{i\omega s} = \sum_{s=-m}^{m}\left(\sum_{k=0}^{m-s} \hat a_k\, \hat a^*_{k+s}\right) e^{i\omega s}$$

and (5.6.2)–(5.6.3) immediately follow.
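The identity behind (5.6.2)–(5.6.3) is a pure convolution identity, so it is easy to confirm numerically. A sketch assuming numpy, with arbitrary complex coefficients of our choosing (and $\hat a_0 = 1$):

```python
import numpy as np

a = np.array([1.0, -0.9 + 0.2j, 0.5j, 0.3])            # arbitrary coefficients, a_0 = 1
m = len(a) - 1
# rho(s) of (5.6.3): rho(s) = sum_k a_k a*_{k+s}
rho = np.array([np.sum(a[:m - s + 1] * np.conj(a[s:])) for s in range(m + 1)])

for omega in np.linspace(-np.pi, np.pi, 7):
    # left side: |A(omega)|^2 as in (5.6.1)
    lhs = abs(np.sum(a * np.exp(-1j * omega * np.arange(m + 1)))) ** 2
    # right side: sum over s = -m..m, using rho(-s) = rho*(s)
    rhs = rho[0].real + 2 * np.sum(rho[1:] * np.exp(1j * omega * np.arange(1, m + 1))).real
    assert abs(lhs - rhs) < 1e-10
```

The sequence $\{\hat\rho(s)\}$ is exactly the autocorrelation of the coefficient sequence $\{\hat a_k\}$, which is why the denominator of (5.6.2) has a Blackman–Tukey-like form.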
Next, assume that the (sample) covariance matrix $\hat R$ is Toeplitz. (We note in passing that this is a minor restriction for the temporal spectral estimation problem of this chapter, but it may be quite a restrictive assumption for the spatial problem of the next chapter.) Then the Capon spectrum in equation (5.4.19) (with the factor m + 1 omitted, for convenience) can be written as:

$$\hat\phi_{CM}(\omega) = \frac{\hat\sigma^2}{\sum_{s=-m}^{m} \hat\mu(s)\, e^{i\omega s}} \qquad(5.6.4)$$

where

$$\hat\mu(s) = \sum_{k=0}^{m-s} (m+1-2k-s)\, \hat a_k\, \hat a^*_{k+s} = \hat\mu^*(-s), \quad s = 0, \ldots, m \qquad(5.6.5)$$

To prove (5.6.4) we make use of the Gohberg–Semencul (GS) formula derived in Complement 3.9.4, which is repeated here for convenience:

$$\hat\sigma^2 \hat R^{-1} = \begin{bmatrix} 1 & & & 0 \\ \hat a^*_1 & \ddots & & \\ \vdots & \ddots & \ddots & \\ \hat a^*_m & \cdots & \hat a^*_1 & 1 \end{bmatrix} \begin{bmatrix} 1 & \hat a_1 & \cdots & \hat a_m \\ & \ddots & \ddots & \vdots \\ & & \ddots & \hat a_1 \\ 0 & & & 1 \end{bmatrix} - \begin{bmatrix} 0 & & & 0 \\ \hat a_m & \ddots & & \\ \vdots & \ddots & \ddots & \\ \hat a_1 & \cdots & \hat a_m & 0 \end{bmatrix} \begin{bmatrix} 0 & \hat a^*_m & \cdots & \hat a^*_1 \\ & \ddots & \ddots & \vdots \\ & & \ddots & \hat a^*_m \\ 0 & & & 0 \end{bmatrix}$$

(The above formula is in fact the complex conjugate of the GS formula in Complement 3.9.4 because the matrix $\hat R$ above is the complex conjugate of the one considered in Chapter 3.) For the sake of convenience, let $\hat a_k = 0$ for $k \notin [0, m]$. By making use of this convention, and of the GS formula, we obtain:

$$f(\omega) \triangleq \hat\sigma^2 a^*(\omega)\, \hat R^{-1} a(\omega) = \sum_{p=0}^{m}\left\{\left|\sum_{k=0}^{m} \hat a_{k-p}\, e^{-i\omega k}\right|^2 - \left|\sum_{k=0}^{m} \hat a^*_{m+1-k+p}\, e^{-i\omega k}\right|^2\right\}$$
$$= \sum_{p=0}^{m}\sum_{k=0}^{m}\sum_{\ell=0}^{m}\left(\hat a_{k-p}\, \hat a^*_{\ell-p} - \hat a^*_{m+1+p-k}\, \hat a_{m+1-\ell+p}\right) e^{i\omega(\ell-k)}$$
$$= \sum_{\ell=0}^{m}\sum_{p=0}^{m}\sum_{s=\ell-m}^{\ell}\left(\hat a_{\ell-s-p}\, \hat a^*_{\ell-p} - \hat a^*_{m+1-\ell+s+p}\, \hat a_{m+1+p-\ell}\right) e^{i\omega s} \qquad(5.6.6)$$

where the last equality has been obtained by the substitution s = ℓ − k.
Next, make the substitution j = ℓ − p in (5.6.6) to obtain:

$$f(\omega) = \sum_{\ell=0}^{m}\sum_{j=\ell-m}^{\ell}\sum_{s=\ell-m}^{\ell}\left(\hat a_{j-s}\, \hat a^*_j - \hat a_{m+1-j}\, \hat a^*_{m+1+s-j}\right) e^{i\omega s} \qquad(5.6.7)$$

Since $\hat a_{j-s} = 0$ and $\hat a^*_{m+1+s-j} = 0$ for s > j, we can extend the summation over s in (5.6.7) up to s = m. Furthermore, the summand in (5.6.7) is zero for j < 0, and hence we can truncate the summation over j to the interval [0, ℓ]. These two observations yield:

$$f(\omega) = \sum_{\ell=0}^{m}\sum_{j=0}^{\ell}\sum_{s=\ell-m}^{m}\left(\hat a_{j-s}\, \hat a^*_j - \hat a_{m+1-j}\, \hat a^*_{m+1+s-j}\right) e^{i\omega s} \qquad(5.6.8)$$

Next, decompose f(ω) additively as follows: f(ω) = T₁(ω) + T₂(ω), where

$$T_1(\omega) = \sum_{\ell=0}^{m}\sum_{j=0}^{\ell}\sum_{s=0}^{m}\left(\hat a_{j-s}\, \hat a^*_j - \hat a_{m+1-j}\, \hat a^*_{m+1+s-j}\right) e^{i\omega s}$$

$$T_2(\omega) = \sum_{\ell=0}^{m}\sum_{j=0}^{\ell}\sum_{s=\ell-m}^{-1}\left(\hat a_{j-s}\, \hat a^*_j - \hat a_{m+1-j}\, \hat a^*_{m+1+s-j}\right) e^{i\omega s}$$

(The term in T₂ corresponding to ℓ = m is zero.) Let

$$\hat\mu(s) \triangleq \sum_{\ell=0}^{m}\sum_{j=0}^{\ell}\left(\hat a_{j-s}\, \hat a^*_j - \hat a_{m+1-j}\, \hat a^*_{m+1+s-j}\right) \qquad(5.6.9)$$

By using this notation, we can write T₁(ω) as

$$T_1(\omega) = \sum_{s=0}^{m} \hat\mu(s)\, e^{i\omega s}$$

Since f(ω) is real–valued for any ω ∈ [−π, π], we must also have

$$T_2(\omega) = \sum_{s=-m}^{-1} \hat\mu^*(-s)\, e^{i\omega s}$$

As the summand in (5.6.9) does not depend on ℓ, we readily obtain

$$\hat\mu(s) = \sum_{j=0}^{m} (m+1-j)\left(\hat a_{j-s}\, \hat a^*_j - \hat a_{m+1-j}\, \hat a^*_{m+1+s-j}\right) = \sum_{k=0}^{m-s} (m+1-k-s)\, \hat a_k\, \hat a^*_{k+s} - \sum_{k=1}^{m} k\, \hat a_k\, \hat a^*_{k+s} = \sum_{k=0}^{m-s} (m+1-2k-s)\, \hat a_k\, \hat a^*_{k+s}$$

which coincides with (5.6.5). Thus, the proof of (5.6.4) is concluded.

Remark: The reader may wonder what happens with the formulas derived above if the AR model parameters are calculated by using the same sample covariance matrix as in the Capon estimator. In such a case, the parameters $\{\hat a_k\}$ in (5.6.1) and in the GS formula above should be replaced by $\{\hat a^*_k\}$ (see (5.4.27)). Consequently both (5.6.2)–(5.6.3) and (5.6.4)–(5.6.5) continue to hold but with $\{\hat a_k\}$ replaced by $\{\hat a^*_k\}$ (and $\{\hat a^*_k\}$ replaced by $\{\hat a_k\}$, of course).
■

By comparing (5.6.2) and (5.6.4) we see that the reciprocals of both $\hat\phi_{AR}(\omega)$ and $\hat\phi_{CM}(\omega)$ have the form of a Blackman–Tukey spectral estimate associated with the "covariance sequences" $\{\hat\rho(s)\}$ and $\{\hat\mu(s)\}$, respectively. The only difference between $\hat\phi_{AR}(\omega)$ and $\hat\phi_{CM}(\omega)$ is that the sequence $\{\hat\mu(s)\}$ corresponding to $\hat\phi_{CM}(\omega)$ is a "linearly tapered" version of the sequence $\{\hat\rho(s)\}$ corresponding to $\hat\phi_{AR}(\omega)$. Similar to the interpretation in Section 5.4.2, the previous observation can be used to intuitively understand why the Capon spectral estimates are smoother and have poorer resolution than the AR estimates of the same order. (For more details on this aspect and other aspects related to the discussion in this complement, see [Musicus 1985].)

We remark in passing that the name "covariance sequence" given, for example, to $\{\hat\rho(s)\}$ is not coincidental: $\{\hat\rho(s)\}$ are so–called sample inverse covariances associated with $\hat R$ and they can be shown to possess a number of interesting and useful properties (see, e.g., [Cleveland 1972; Bhansali 1980]).

The formula (5.6.4) can be used for the computation of $\hat\phi_{CM}(\omega)$, as we now show. Assuming that $\hat R$ is already available, we can use the Levinson–Durbin algorithm to compute $\{\hat a_k\}$ and $\hat\sigma^2$, and then $\{\hat\mu(s)\}$ in O(m²) flops. Then (5.6.4) can be evaluated at M Fourier frequencies (say) by using the FFT. The resulting total computational burden is on the order of O(m² + M log₂ M) flops. For commonly encountered values of m and M, this is about m times smaller than the burden associated with the eigendecomposition–based computational procedure of Exercise 5.5. Note, however, that the latter algorithm can be applied to a general $\hat R$ matrix, whereas the one derived in this complement is limited to Toeplitz covariance matrices. Finally, note that the extension of the results in this complement to two–dimensional (2D) signals can be found in [Jakobsson, Marple, and Stoica 2000].
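The computational route via (5.6.4)–(5.6.5) can be checked against the direct quadratic form for real-valued data (so that the conjugate caveat in the Remark above is immaterial). A sketch assuming numpy, with a Toeplitz matrix built from biased sample autocovariances of an arbitrary test signal:

```python
import numpy as np

rng = np.random.default_rng(3)
N, m = 400, 5
y = np.convolve(rng.standard_normal(N + 20), [1, -0.6, 0.3])[20:N + 20]  # arbitrary real data

# Toeplitz Rhat from biased sample autocovariances (a valid PSD sequence)
rhat = np.array([np.dot(y[:N - k], y[k:]) / N for k in range(m + 1)])
R = np.array([[rhat[abs(i - j)] for j in range(m + 1)] for i in range(m + 1)])

# m-th order AR model from (5.4.27); for real data the conjugates drop out
x = np.linalg.solve(R, np.eye(m + 1)[:, 0])
sigma2 = 1.0 / x[0]                                    # residual variance
acoef = x / x[0]                                       # [1, a_1, ..., a_m]

# mu(s) of (5.6.5): linearly tapered coefficient autocorrelations
mu = np.array([np.sum((m + 1 - 2 * np.arange(m - s + 1) - s)
                      * acoef[:m - s + 1] * acoef[s:]) for s in range(m + 1)])

for omega in (0.4, 1.3, 2.5):
    a = np.exp(-1j * omega * np.arange(m + 1))
    lhs = sigma2 * (a.conj() @ np.linalg.solve(R, a)).real   # sigma^2 a^* Rhat^{-1} a
    rhs = mu[0] + 2 * np.sum(mu[1:] * np.cos(omega * np.arange(1, m + 1)))
    assert abs(lhs - rhs) < 1e-8 * abs(lhs)
```

In a production implementation one would obtain `acoef` and `sigma2` via the Levinson–Durbin recursion and evaluate the denominator of (5.6.4) by an FFT over the zero-padded sequence $\{\hat\mu(s)\}$, as described above; the direct solves here are only for verification.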
5.6.2 Multiwindow Interpretation of Daniell and Blackman–Tukey Periodograms

As stated in Exercise 5.1, the Bartlett and Welch periodograms can be cast into the multiwindow framework of Section 5.3.3. In other words, they can be written in the following form (see (5.7.1)):

$$\hat\phi(\omega) = \frac{1}{K}\sum_{p=1}^{K}\left|\sum_{t=1}^{N} w_{p,t}\,y(t)e^{-i\omega t}\right|^2 \tag{5.6.10}$$

for certain temporal (or data) windows $\{w_{p,t}\}$ (also called tapers). Here, $K$ denotes the number of windows used by the method in question.

“sm2” 2004/2/ page 239

In this complement we show that the Daniell periodogram, as well as the Blackman–Tukey periodogram with some commonly used lag windows, can also be interpreted as multiwindow methods. Unlike the approximate multiwindow interpretation of a spectrally smoothed periodogram described in Section 5.3.3 (see equations (5.3.31)–(5.3.33) there), the multiwindow interpretations presented in this complement are exact. More details on the topic of this complement can be found in [McCloud, Scharf, and Mullis 1999], where it is also shown that the Blackman–Tukey periodogram with any "good" window can be cast in a multiwindow framework, but only approximately.

We begin by writing (5.6.10) as a quadratic form in the data sequence. Let

$$z(\omega) = \begin{bmatrix} y(1)e^{-i\omega} \\ \vdots \\ y(N)e^{-iN\omega} \end{bmatrix} \ (N\times 1), \qquad W = \begin{bmatrix} w_{1,1} & \cdots & w_{1,N} \\ \vdots & & \vdots \\ w_{K,1} & \cdots & w_{K,N} \end{bmatrix} \ (K\times N)$$

and let $[x]_p$ denote the $p$th element of a vector $x$. Using this notation we can rewrite (5.6.10) in the desired form:

$$\hat\phi(\omega) = \frac{1}{K}\sum_{p=1}^{K}\left|[Wz(\omega)]_p\right|^2$$

or

$$\hat\phi(\omega) = \frac{1}{K}\,z^*(\omega)W^*Wz(\omega) \tag{5.6.11}$$

which is a quadratic form in $z(\omega)$. The rank of the matrix $W^*W$ is less than or equal to $K$; typically, $\operatorname{rank}(W^*W) = K \ll N$.

Next we turn our attention to the Daniell periodogram (see (2.7.16)):

$$\hat\phi_D(\omega) = \frac{1}{2J+1}\sum_{j=-J}^{J}\hat\phi_p\!\left(\omega + j\,\frac{2\pi}{N}\right) \tag{5.6.12}$$

where $\hat\phi_p(\omega)$ is the standard periodogram given in (2.2.1):

$$\hat\phi_p(\omega) = \frac{1}{N}\left|\sum_{t=1}^{N} y(t)e^{-i\omega t}\right|^2$$

Letting

$$a_j^* = \left[e^{-i\frac{2\pi}{N}j},\ e^{-i\frac{2\pi}{N}(2j)},\ \dots,\ e^{-i\frac{2\pi}{N}(Nj)}\right] \tag{5.6.13}$$

we can write

$$\hat\phi_p\!\left(\omega + j\,\frac{2\pi}{N}\right) = \frac{1}{N}\left|\sum_{t=1}^{N} y(t)e^{-i\omega t}e^{-i\frac{2\pi}{N}(jt)}\right|^2 = \frac{1}{N}\left|a_j^* z(\omega)\right|^2 = \frac{1}{N}\,z^*(\omega)\,a_j a_j^*\,z(\omega) \tag{5.6.14}$$

which implies that

$$\hat\phi_D(\omega) = \frac{1}{N(2J+1)}\,z^*(\omega)W_D^* W_D z(\omega) \tag{5.6.15}$$

where

$$W_D = [a_{-J},\ \dots,\ a_0,\ \dots,\ a_J]^*, \qquad (2J+1)\times N \tag{5.6.16}$$

This establishes the fact that the Daniell periodogram can be interpreted as a multiwindow method using $K = 2J+1$ tapers, given by (5.6.16). Similarly to the tapers used by the seemingly more elaborate RFB approach, the Daniell periodogram tapers can also be motivated using a sound design methodology (see Section 5.5).

In the remaining part of this complement we consider the Blackman–Tukey periodogram in (2.5.1) with a window of length $M = N$:

$$\hat\phi_{BT}(\omega) = \sum_{k=-(N-1)}^{N-1} w(k)\hat r(k)e^{-i\omega k} \tag{5.6.17}$$

A commonly used class of windows, including the Hanning and Hamming windows in Table 2.1, is described by the equation:

$$w(k) = \alpha + \beta\cos(\Delta k) = \alpha + \frac{\beta}{2}e^{i\Delta k} + \frac{\beta}{2}e^{-i\Delta k} \tag{5.6.18}$$

for various parameters $\alpha$, $\beta$, and $\Delta$. Inserting (5.6.18) into (5.6.17) yields:

$$\hat\phi_{BT}(\omega) = \sum_{k=-(N-1)}^{N-1}\left[\alpha + \frac{\beta}{2}e^{i\Delta k} + \frac{\beta}{2}e^{-i\Delta k}\right]\hat r(k)e^{-i\omega k} = \alpha\,\hat\phi_p(\omega) + \frac{\beta}{2}\,\hat\phi_p(\omega-\Delta) + \frac{\beta}{2}\,\hat\phi_p(\omega+\Delta) \tag{5.6.19}$$

where $\hat\phi_p(\omega)$ is the standard periodogram given by (2.2.1) or, equivalently, by (2.2.2):

$$\hat\phi_p(\omega) = \sum_{k=-(N-1)}^{N-1}\hat r(k)e^{-i\omega k}$$

Comparing (5.6.19) with (5.6.12) (as well as (5.6.14)–(5.6.16)) allows us to rewrite $\hat\phi_{BT}(\omega)$ in the following form:

$$\hat\phi_{BT}(\omega) = \frac{1}{N}\,z^*(\omega)W_{BT}^* W_{BT}\, z(\omega) \tag{5.6.20}$$

where

$$W_{BT} = \left[\sqrt{\tfrac{\beta}{2}}\,a_{-\Delta},\ \sqrt{\alpha}\,a_0,\ \sqrt{\tfrac{\beta}{2}}\,a_\Delta\right]^*, \qquad (3\times N) \tag{5.6.21}$$

for $\alpha, \beta \ge 0$, and where $a_\Delta$ is given by (similarly to $a_j$ in (5.6.13))

$$a_\Delta^* = \left[e^{-i\Delta},\ \dots,\ e^{-i\Delta N}\right]$$

Hence, we conclude that the Blackman–Tukey periodogram with a Hamming or Hanning window (or any other window having the form of (5.6.18)) can be interpreted as a multiwindow method using $K = 3$ tapers, given by (5.6.21).
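The identity (5.6.19) is easy to verify numerically. The sketch below uses synthetic data and illustrative window parameters ($\alpha = 0.54$, $\beta = 0.46$ is a Hamming-type choice, but any values of the form (5.6.18) work), and compares the lag-domain estimate (5.6.17) with the sum of three shifted periodograms:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def acov(y, k):
    # biased sample covariance r_hat(k), for lag k >= 0
    N = len(y)
    return np.sum(y[k:] * np.conj(y[:N - k])) / N

def phi_p(y, w):
    # standard periodogram (2.2.1) at angular frequency w
    N = len(y)
    t = np.arange(1, N + 1)
    return np.abs(np.sum(y * np.exp(-1j * w * t))) ** 2 / N

def phi_bt(y, w, alpha, beta, delta):
    # Blackman-Tukey estimate (5.6.17) with the lag window (5.6.18), M = N
    N = len(y)
    ks = np.arange(-(N - 1), N)
    r = np.array([acov(y, k) if k >= 0 else np.conj(acov(y, -k))
                  for k in ks])
    win = alpha + beta * np.cos(delta * ks)
    return np.real(np.sum(win * r * np.exp(-1j * w * ks)))

alpha, beta, delta, w = 0.54, 0.46, 0.3, 1.1
lhs = phi_bt(y, w, alpha, beta, delta)
rhs = alpha * phi_p(y, w) + beta / 2 * (phi_p(y, w - delta) + phi_p(y, w + delta))
assert abs(lhs - rhs) < 1e-8 * abs(rhs)
```

As the text notes, this decomposition is also the key to the fast implementation: three evaluations of the standard periodogram replace the full lag-domain sum.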
Similarly, $\hat\phi_{BT}(\omega)$ using the Blackman window in Table 2.1 can be shown to be equivalent to a multiwindow method with $K = 7$ tapers.

Interestingly, as a byproduct of the analysis in this complement, we note from (5.6.19) that the Blackman–Tukey periodogram with a window of the form in (5.6.18) can be very efficiently computed from the values of the standard periodogram. Since the Blackman window has a form similar to (5.6.18), $\hat\phi_{BT}(\omega)$ using the Blackman window can likewise be implemented in an efficient way. This way of computing $\hat\phi_{BT}(\omega)$ is faster than the method outlined in Complement 2.8.2 for a general lag window.

5.6.3 Capon Method for Exponentially Damped Sinusoidal Signals

The signals dealt with in some applications of spectral analysis, such as magnetic resonance spectroscopy, consist of a sum of exponentially damped sinusoidal components (or damped sinusoids, for short), instead of the pure sinusoids in (4.1.1). Such signals are described by the equation

$$y(t) = \sum_{k=1}^{n}\beta_k e^{(\rho_k + i\omega_k)t} + e(t), \qquad t = 1,\dots,N \tag{5.6.22}$$

where $\beta_k$ and $\omega_k$ are the amplitude and frequency of the $k$th component (as in Chapter 4), and $\rho_k < 0$ is the so-called damping parameter. The (noise-free) signal in (5.6.22) is nonstationary and hence does not have a power spectral density. However, it possesses an amplitude spectrum, defined as follows:

$$|\beta(\rho,\omega)| = \begin{cases} |\beta_k|, & \text{for } \omega = \omega_k,\ \rho = \rho_k \ (k = 1,\dots,n) \\ 0, & \text{elsewhere} \end{cases} \tag{5.6.23}$$

Furthermore, because an exponentially damped sinusoid satisfies the finite energy condition in (1.2.1), the (noise-free) signal in (5.6.22) also possesses an energy spectrum. Similarly to (5.6.23), we can define the energy spectrum of the damped sinusoidal signal in (5.6.22) as a 2D function of $(\rho,\omega)$ that consists of $n$ pulses at $\{\rho_k,\omega_k\}$, where the height of the function at each of these points is equal to the energy of the corresponding component.
The energy of a generic component with parameters $(\beta,\rho,\omega)$ is given by

$$\sum_{t=1}^{N}\left|\beta e^{(\rho+i\omega)t}\right|^2 = |\beta|^2 e^{2\rho}\sum_{t=0}^{N-1} e^{2\rho t} = |\beta|^2 e^{2\rho}\,\frac{1 - e^{2\rho N}}{1 - e^{2\rho}} \tag{5.6.24}$$

It follows from (5.6.24) and the above discussion that the energy spectrum can be expressed as a function of the amplitude spectrum in (5.6.23) via the formula:

$$E(\rho,\omega) = |\beta(\rho,\omega)|^2 L(\rho) \tag{5.6.25}$$

where

$$L(\rho) = e^{2\rho}\,\frac{1 - e^{2\rho N}}{1 - e^{2\rho}} \tag{5.6.26}$$

The amplitude spectrum, and hence the energy spectrum, of the signal in (5.6.22) can be estimated by using an extension of the Capon method introduced in Section 5.4. To develop this extension, we consider the following data vector

$$\tilde y(t) = [y(t),\ y(t+1),\ \dots,\ y(t+m)]^T \tag{5.6.27}$$

in lieu of the data vector used in (5.4.2). First we explain why, in the case of damped sinusoidal signals, the use of (5.6.27) is preferable to that of

$$[y(t),\ y(t-1),\ \dots,\ y(t-m)]^T \tag{5.6.28}$$

(as is used in (5.4.2)). Let $h$ denote the coefficient vector of the Capon FIR filter, as in (5.4.1). Then the output of the filter using the data vector in (5.6.27) is given by:

$$\tilde y_F(t) = h^*\tilde y(t) = h^*\begin{bmatrix} y(t) \\ \vdots \\ y(t+m)\end{bmatrix}, \qquad t = 1,\dots,N-m \tag{5.6.29}$$

Hence, when performing the filtering operation as in (5.6.29), we lose $m$ samples from the end of the data string. Because the SNR of those samples is typically rather low (owing to the damping of the signal components), the data loss is not significant. In contrast, the use of (5.4.2) leads to a loss of $m$ data samples from the beginning of the data string (since (5.4.2) can be computed for $t = m+1,\dots,N$), where the SNR is higher. Hence, in the case of damped sinusoidal signals we should indeed prefer (5.6.29) to (5.4.2).

Next, we derive Capon-like estimates of the amplitude and energy spectra of (5.6.22). Let

$$\hat R = \frac{1}{N-m}\sum_{t=1}^{N-m}\tilde y(t)\tilde y^*(t) \tag{5.6.30}$$

denote the sample covariance matrix of the data vector in (5.6.27).
Then the sample variance of the filter output can be written as:

$$\frac{1}{N-m}\sum_{t=1}^{N-m}|\tilde y_F(t)|^2 = h^*\hat R h \tag{5.6.31}$$

By definition, the Capon filter minimizes (5.6.31) under the constraint that the filter passes, without distortion, a generic damped sinusoid with parameters $(\beta,\rho,\omega)$. The filter output corresponding to such a generic component is given by

$$h^*\begin{bmatrix}\beta e^{(\rho+i\omega)t} \\ \beta e^{(\rho+i\omega)(t+1)} \\ \vdots \\ \beta e^{(\rho+i\omega)(t+m)}\end{bmatrix} = \left(h^*\begin{bmatrix}1 \\ e^{\rho+i\omega} \\ \vdots \\ e^{(\rho+i\omega)m}\end{bmatrix}\right)\beta e^{(\rho+i\omega)t} \tag{5.6.32}$$

Hence, the distortionless filtering constraint can be expressed as

$$h^*a(\rho,\omega) = 1 \tag{5.6.33}$$

where

$$a(\rho,\omega) = \left[1,\ e^{\rho+i\omega},\ \dots,\ e^{(\rho+i\omega)m}\right]^T \tag{5.6.34}$$

The minimizer of the quadratic function in (5.6.31) under the linear constraint (5.6.33) is given by the familiar formula (see (5.4.7)–(5.4.8)):

$$h(\rho,\omega) = \frac{\hat R^{-1}a(\rho,\omega)}{a^*(\rho,\omega)\hat R^{-1}a(\rho,\omega)} \tag{5.6.35}$$

where we have stressed, via the notation, the dependence of $h$ on both $\rho$ and $\omega$. The output of the filter in (5.6.35) due to a possible (generic) damped sinusoid in the signal with parameters $(\beta,\rho,\omega)$ is given by (cf. (5.6.32) or (5.6.33)):

$$h^*(\rho,\omega)\tilde y(t) = \beta e^{(\rho+i\omega)t} + e_F(t), \qquad t = 1,\dots,N-m \tag{5.6.36}$$

where $e_F(t)$ denotes the filter output due to noise and to any other signal components.
For given $(\rho,\omega)$, the least-squares estimate of $\beta$ in (5.6.36) is (see, e.g., Result R32 in Appendix A):

$$\hat\beta(\rho,\omega) = \frac{\displaystyle\sum_{t=1}^{N-m} h^*(\rho,\omega)\tilde y(t)\,e^{(\rho-i\omega)t}}{\displaystyle\sum_{t=1}^{N-m} e^{2\rho t}} \tag{5.6.37}$$

Let $\tilde L(\rho)$ be defined similarly to $L(\rho)$ in (5.6.26), but with $N$ replaced by $N-m$, and let

$$\tilde Y(\rho,\omega) = \frac{1}{\tilde L(\rho)}\sum_{t=1}^{N-m}\tilde y(t)e^{(\rho-i\omega)t} \tag{5.6.38}$$

It follows from (5.6.37), along with (5.6.25), that Capon-like estimates of the amplitude spectrum and energy spectrum of the signal in (5.6.22) can be obtained, respectively, as:

$$\hat\beta(\rho,\omega) = h^*(\rho,\omega)\,\tilde Y(\rho,\omega) \tag{5.6.39}$$

and

$$\hat E(\rho,\omega) = \left|\hat\beta(\rho,\omega)\right|^2 L(\rho) \tag{5.6.40}$$

Remark: We could have estimated the amplitude $\beta$ of a generic component with parameters $(\beta,\rho,\omega)$ directly from the unfiltered data samples $\{y(t)\}_{t=1}^N$. However, the use of the Capon-filtered data in (5.6.36) usually leads to enhanced performance. The main reason for this performance gain lies in the fact that the SNR corresponding to the generic component in the filtered data is typically much higher than in the raw data, owing to the good rejection properties of the Capon filter. This higher SNR leads to more accurate amplitude estimates, in spite of the loss of $m$ data samples in the filtering operation in (5.6.36). ■

Finally, we note that the sample Capon energy or amplitude spectrum can be used to estimate the signal parameters $\{\beta_k,\rho_k,\omega_k\}$ in a standard manner. Specifically, we compute either $|\hat\beta(\rho,\omega)|$ or $\hat E(\rho,\omega)$ at the points of a fine grid covering the region of interest in the two-dimensional $(\rho,\omega)$ plane, and obtain estimates of $(\rho_k,\omega_k)$ as the locations of the $n$ largest spectral peaks; estimates of $\beta_k$ can then be derived from (5.6.37) with $(\rho,\omega)$ replaced by the estimated values of $(\rho_k,\omega_k)$. There is empirical evidence that the use of $\hat E(\rho,\omega)$ in general leads to (slightly) more accurate signal parameter estimates than the use of $|\hat\beta(\rho,\omega)|$ (see [Stoica and Sundin 2001]).
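The estimator chain (5.6.30), (5.6.35), (5.6.37)–(5.6.39) can be sketched in NumPy as follows. The function names, the test values, and the small diagonal loading (added only so that $\hat R$ is invertible for the noiseless test signal) are our illustrative assumptions, not from the text. Because the constraint $h^*a(\rho,\omega)=1$ holds by construction, a single noiseless damped sinusoid is recovered exactly, and the closed form (5.6.26) for $L(\rho)$ is checked along the way:

```python
import numpy as np

def L_factor(rho, N):
    # closed-form energy factor L(rho), eq. (5.6.26)
    return np.exp(2 * rho) * (1 - np.exp(2 * rho * N)) / (1 - np.exp(2 * rho))

def damped_capon_beta(y, m, rho, omega, load=1e-8):
    """Capon-type amplitude estimate beta_hat(rho, omega), eq. (5.6.39).
    `load` is a small diagonal loading (an implementation convenience,
    not part of the text) to keep R_hat invertible for noiseless data."""
    N = len(y)
    # forward data vectors (5.6.27): row i is y_tilde(i+1)^T, t = 1..N-m
    Yt = np.array([y[t:t + m + 1] for t in range(N - m)])
    R = Yt.T @ Yt.conj() / (N - m)                       # (5.6.30)
    a = np.exp((rho + 1j * omega) * np.arange(m + 1))    # (5.6.34)
    Ria = np.linalg.solve(R + load * np.eye(m + 1), a)
    h = Ria / (a.conj() @ Ria)                           # (5.6.35)
    t = np.arange(1, N - m + 1)
    Ytil = (Yt.T @ np.exp((rho - 1j * omega) * t)) / L_factor(rho, N - m)  # (5.6.38)
    return h.conj() @ Ytil                               # (5.6.39)

# sanity checks with one noiseless damped sinusoid (illustrative values)
beta, rho, omega, N, m = 2.0 * np.exp(0.7j), -0.02, 1.3, 100, 8
t = np.arange(1, N + 1)
y = beta * np.exp((rho + 1j * omega) * t)
assert np.isclose(np.sum(np.abs(y) ** 2), np.abs(beta) ** 2 * L_factor(rho, N))  # (5.6.24)
assert abs(damped_capon_beta(y, m, rho, omega) - beta) < 1e-6
```

In practice one would evaluate this estimate on a fine 2D grid of $(\rho,\omega)$ values and pick the $n$ largest peaks, as described above.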
For more details on the topic of this complement, including the computation of the two-dimensional spectra in (5.6.39) and (5.6.40), we refer the reader to [Stoica and Sundin 2001].

5.6.4 Amplitude and Phase Estimation Method (APES)

The design idea behind the Capon filter is based on the following two principles, as discussed in Section 5.4:

(a) the sinusoid with frequency $\omega$ (currently considered in the analysis) passes through the filter in a distortionless manner; and

(b) any other frequencies in the data (corresponding, e.g., to other sinusoidal components in the signal or to noise) are suppressed by the filter as much as possible.

The output of the filter whose input is a sinusoid with frequency $\omega$, $\{\beta e^{i\omega t}\}$, is given by (assuming forward filtering, as in (5.4.2)):

$$h^*\begin{bmatrix} e^{i\omega t} \\ e^{i\omega(t-1)} \\ \vdots \\ e^{i\omega(t-m)} \end{bmatrix}\beta = \left(h^*\begin{bmatrix} 1 \\ e^{-i\omega} \\ \vdots \\ e^{-i\omega m} \end{bmatrix}\right)\beta e^{i\omega t} \tag{5.6.41}$$

For backward filtering, as used in Complement 5.6.3, a similar result can be derived. It follows from (5.6.41) that the design objective in (a) above can be expressed mathematically via the following linear constraint on $h$:

$$h^*a(\omega) = 1 \tag{5.6.42}$$

where

$$a(\omega) = \left[1,\ e^{-i\omega},\ \dots,\ e^{-i\omega m}\right]^T \tag{5.6.43}$$

(see (5.4.5)–(5.4.7)). Regarding the second design objective, its statement in (b) above is sufficiently general to allow several different mathematical formulations. The Capon method is based on the idea that the goal in (b) is achieved if the power at the filter output is minimized (see (5.4.7)). In this complement, another way of formulating (b) mathematically is described. At a given frequency $\omega$, let us choose $h$ such that the filter output, $\{h^*\tilde y(t)\}$, where

$$\tilde y(t) = [y(t),\ y(t-1),\ \dots,\ y(t-m)]^T$$

is as close as possible in a least-squares (LS) sense to a sinusoid with frequency $\omega$ and constant amplitude $\beta$.
Mathematically, we obtain both $h$ and $\beta$, for a given $\omega$, by minimizing the LS criterion:

$$\min_{h,\beta}\ \frac{1}{N-m}\sum_{t=m+1}^{N}\left|h^*\tilde y(t) - \beta e^{i\omega t}\right|^2 \quad \text{subject to } h^*a(\omega) = 1 \tag{5.6.44}$$

Note that the estimation of the amplitude and phase (i.e., $|\beta|$ and $\arg(\beta)$) of the sinusoid with frequency $\omega$ is an intrinsic part of the method based on (5.6.44). This observation motivates the name Amplitude and Phase EStimation (APES) given to the method described by (5.6.44).

Because (5.6.44) is a linearly constrained quadratic problem, we should be able to find its solution in closed form. Let

$$g(\omega) = \frac{1}{N-m}\sum_{t=m+1}^{N}\tilde y(t)e^{-i\omega t} \tag{5.6.45}$$

Then a straightforward calculation shows that the criterion function in (5.6.44) can be rewritten as:

$$\begin{aligned}\frac{1}{N-m}\sum_{t=m+1}^{N}\left|h^*\tilde y(t) - \beta e^{i\omega t}\right|^2 &= h^*\hat R h - \beta^* h^* g(\omega) - \beta g^*(\omega)h + |\beta|^2 \\ &= |\beta - h^*g(\omega)|^2 + h^*\hat R h - |h^*g(\omega)|^2 \\ &= |\beta - h^*g(\omega)|^2 + h^*\left[\hat R - g(\omega)g^*(\omega)\right]h\end{aligned} \tag{5.6.46}$$

where

$$\hat R = \frac{1}{N-m}\sum_{t=m+1}^{N}\tilde y(t)\tilde y^*(t) \tag{5.6.47}$$

(see (5.4.18)). The minimization of (5.6.46) with respect to $\beta$ is immediate:

$$\beta(\omega) = h^*g(\omega) \tag{5.6.48}$$

Inserting (5.6.48) into (5.6.46) yields the following problem, whose solution will determine the filter coefficient vector:

$$\min_h\ h^*\hat Q(\omega)h \quad \text{subject to } h^*a(\omega) = 1 \tag{5.6.49}$$

where

$$\hat Q(\omega) = \hat R - g(\omega)g^*(\omega) \tag{5.6.50}$$

As (5.6.49) has the same form as the Capon filter design problem (see (5.4.7)), the solution to (5.6.49) is readily derived (compare with (5.4.8)):

$$h(\omega) = \frac{\hat Q^{-1}(\omega)a(\omega)}{a^*(\omega)\hat Q^{-1}(\omega)a(\omega)} \tag{5.6.51}$$

A direct implementation of (5.6.51) would require the inversion of the matrix $\hat Q(\omega)$ for each value of $\omega \in [0, 2\pi]$ considered.
To avoid such an intensive computational task, we can use the matrix inversion lemma (Result R27 in Appendix A) to express the inverse in (5.6.51) as follows:

$$\hat Q^{-1}(\omega) = \left[\hat R - g(\omega)g^*(\omega)\right]^{-1} = \hat R^{-1} + \frac{\hat R^{-1}g(\omega)g^*(\omega)\hat R^{-1}}{1 - g^*(\omega)\hat R^{-1}g(\omega)} \tag{5.6.52}$$

Inserting (5.6.52) into (5.6.51) yields the following expression for the APES filter:

$$h(\omega) = \frac{\left[1 - g^*(\omega)\hat R^{-1}g(\omega)\right]\hat R^{-1}a(\omega) + \left[g^*(\omega)\hat R^{-1}a(\omega)\right]\hat R^{-1}g(\omega)}{\left[1 - g^*(\omega)\hat R^{-1}g(\omega)\right]a^*(\omega)\hat R^{-1}a(\omega) + \left|a^*(\omega)\hat R^{-1}g(\omega)\right|^2} \tag{5.6.53}$$

From (5.6.48) and (5.6.53) we obtain the following formula for the APES estimate of the (complex) amplitude spectrum (see Complement 5.6.3 for a definition of the amplitude spectrum):

$$\beta(\omega) = \frac{a^*(\omega)\hat R^{-1}g(\omega)}{\left[1 - g^*(\omega)\hat R^{-1}g(\omega)\right]a^*(\omega)\hat R^{-1}a(\omega) + \left|a^*(\omega)\hat R^{-1}g(\omega)\right|^2} \tag{5.6.54}$$

Compared with the Capon estimate of the amplitude spectrum, given by

$$\beta(\omega) = \frac{a^*(\omega)\hat R^{-1}g(\omega)}{a^*(\omega)\hat R^{-1}a(\omega)} \tag{5.6.55}$$

we see that the APES estimate in (5.6.54) is more computationally involved, but not by much.

Remark: Our discussion has focused on the estimation of the amplitude spectrum. If the power spectrum is what we want to estimate, then we can use the APES filter, (5.6.53), in the PSD estimation approach described in Section 5.4, or we can simply take $|\beta(\omega)|^2$ (along with a possible scaling) as an estimate of the PSD. ■

The above derivation of APES is adapted from [Stoica, Li, and Li 1999]. The original derivation of APES, provided in [Li and Stoica 1996a], was different: it was based on an approximate maximum likelihood approach. We refer the reader to [Li and Stoica 1996a] for the original derivation of APES, as well as many other details on this approach to spectral analysis.

We end this complement with a brief comparison of Capon and APES from a performance standpoint.
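The consistency between the constrained LS solution (5.6.48)–(5.6.51) and the matrix-inversion-lemma formula (5.6.54) can be checked numerically. The sketch below uses synthetic data and illustrative parameter values (the variable names are ours); it computes $\beta(\omega)$ once via $\hat Q^{-1}(\omega)$ directly and once via (5.6.54), and also forms the Capon estimate (5.6.55):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, omega = 64, 6, 1.1
t = np.arange(1, N + 1)
y = 2.0 * np.exp(1j * (omega * t + 0.3)) \
    + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# forward data vectors y_tilde(t) = [y(t), y(t-1), ..., y(t-m)]^T, t = m+1..N
Yt = np.array([y[tt - 1 - m:tt][::-1] for tt in range(m + 1, N + 1)])
R = Yt.T @ Yt.conj() / (N - m)                                      # (5.6.47)
g = Yt.T @ np.exp(-1j * omega * np.arange(m + 1, N + 1)) / (N - m)  # (5.6.45)
a = np.exp(-1j * omega * np.arange(m + 1))                          # (5.6.43)

# route 1: solve (5.6.49)-(5.6.51) directly and use beta = h^* g, eq. (5.6.48)
Q = R - np.outer(g, g.conj())                                       # (5.6.50)
h = np.linalg.solve(Q, a)
h = h / (a.conj() @ h)                                              # (5.6.51)
beta_direct = h.conj() @ g

# route 2: closed forms (5.6.54) (APES) and (5.6.55) (Capon)
Rg, Ra = np.linalg.solve(R, g), np.linalg.solve(R, a)
aRg, gRg, aRa = a.conj() @ Rg, (g.conj() @ Rg).real, (a.conj() @ Ra).real
beta_apes = aRg / ((1 - gRg) * aRa + np.abs(aRg) ** 2)
beta_capon = aRg / aRa

assert np.isclose(beta_direct, beta_apes)
```

With the analysis frequency matched to the sinusoid, `beta_apes` should lie near the true amplitude $2e^{0.3i}$, while `beta_capon` is biased toward zero, consistent with the Capon-versus-APES comparison discussed in the text.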
Extensive empirical and analytical studies of these two methods (see, e.g., [Larsson, Li, and Stoica 2003] and its references) have shown that Capon has a (slightly) higher resolution than APES, and also that the Capon estimates of the frequencies of a multicomponent sinusoidal signal in noise are more accurate than the APES estimates. On the other hand, for a given set of frequency estimates $\{\hat\omega_k\}$ in the vicinity of the true frequencies, the APES estimates of the amplitudes $\{\beta_k\}$ are much more accurate than the Capon estimates; the Capon estimates are always biased towards zero, sometimes significantly so. This suggests that, at least for spectral line analysis, a better method than both Capon and APES can be obtained by combining them in the following way:

• Estimate the frequencies $\{\omega_k\}$ as the locations of the dominant peaks of the Capon spectrum.

• Estimate the amplitudes $\{\beta_k\}$ using the APES formula (5.6.54) evaluated at the frequency estimates obtained in the previous step.

The above combined Capon–APES (CAPES) method was introduced in [Jakobsson and Stoica 2000].

5.6.5 Amplitude and Phase Estimation Method for Gapped Data (GAPES)

In some applications of spectral analysis the data sequence has gaps, owing to the failure of a measuring device, or owing to the impossibility of performing measurements during some periods of time (as in astronomy). In this complement we present an extension of the Amplitude and Phase EStimation (APES) method, outlined in Complement 5.6.4, to gapped-data sequences. Gapped-data sequences are evenly sampled data strings that contain unknown samples which are usually, but not always, clustered together in groups of reasonable size. We will use the acronym GAPES to designate the extended approach.

Most of the available methods for the spectral analysis of gapped data perform (either implicitly or explicitly) an interpolation of the missing data, followed by a standard full-data spectral analysis.
The data interpolation step is critical, and it cannot be completed without making (sometimes hidden) assumptions on the data sequence. For example, one such assumption is that the data is bandlimited with a known cutoff frequency. Intuitively, these assumptions can be viewed as attempts to add extra "information" to the spectral analysis problem, which might be able to compensate for the information lost owing to the missing data samples. The problem with these assumptions, though, is that they are generally not easy to check in applications, either a priori or a posteriori. The GAPES approach presented here is based on the sole assumption that the spectral content of the missing data is similar to that of the available data. This assumption is very natural, and one could argue that it introduces no restriction at all.

We begin the derivation of GAPES by rewriting the APES least-squares fitting criterion (see equation (5.6.44) in Complement 5.6.4) in a form that is more convenient for the discussion here. Specifically, we use the notation $h(\omega)$ and $\beta(\omega)$ to stress the dependence on $\omega$ of both the APES filter and the amplitude spectrum. Also, we note that in applications the frequency variable is usually sampled as follows:

$$\omega_k = \frac{2\pi}{K}k, \qquad k = 1,\dots,K \tag{5.6.56}$$

where $K$ is an integer (much) larger than $N$. Making use of the above notation and (5.6.56), we rewrite the APES criterion as follows:

$$\min\ \sum_{k=1}^{K}\sum_{t=m+1}^{N}\left|h^*(\omega_k)\tilde y(t) - \beta(\omega_k)e^{i\omega_k t}\right|^2 \quad \text{subject to } h^*(\omega_k)a(\omega_k) = 1 \text{ for } k = 1,\dots,K \tag{5.6.57}$$

Evidently, the minimization of the criterion in (5.6.57) with respect to $\{h(\omega_k)\}$ and $\{\beta(\omega_k)\}$ reduces to the minimization of the inner sum in (5.6.57) for each $k$. Hence, in the full-data case the problem in (5.6.57) is equivalent to the standard APES problem in equation (5.6.44) in Complement 5.6.4.
However, in the gapped-data case the form of the APES criterion in (5.6.57) turns out to be more convenient than that in (5.6.44), as we will see below.

To continue, we need some additional notation. Let

$y_a$ = the vector containing the available samples in $\{y(t)\}_{t=1}^N$
$y_u$ = the vector containing the unavailable samples in $\{y(t)\}_{t=1}^N$

The main idea behind the GAPES approach is to minimize (5.6.57) with respect to both $\{h(\omega_k)\}$ and $\{\beta(\omega_k)\}$ as well as with respect to $y_u$. Such a formulation of the gapped-data problem is appealing, because it leads to: (i) an analysis filter bank $\{h(\omega_k)\}$ for which the filtered sequence is as close as possible in a LS sense to the (possible) sinusoidal component in the data that has frequency $\omega_k$, which is the main design goal in the filter bank approach to spectral analysis; and (ii) an estimate of the missing samples in $y_u$ whose spectral content mimics the spectral content of the available data as much as possible, in the LS sense of (5.6.57).

The criterion in (5.6.57) is a quartic function of the unknowns $\{h(\omega_k)\}$, $\{\beta(\omega_k)\}$, and $y_u$. Consequently, in general, its minimization requires the use of an iterative algorithm; that is, a closed-form solution is unlikely to exist. The GAPES method uses a cyclic minimizer to minimize the criterion in (5.6.57) (see Complement 4.9.5 for a general description of cyclic minimizers). A step-by-step description of GAPES is as follows:

The GAPES Algorithm

Step 0. Obtain initial estimates of $\{h(\omega_k)\}$ and $\{\beta(\omega_k)\}$.

Step 1. Use the most recent estimates of $\{h(\omega_k)\}$ and $\{\beta(\omega_k)\}$ to estimate $y_u$ via the minimization of (5.6.57).

Step 2. Use the most recent estimate of $y_u$ to estimate $\{h(\omega_k)\}$ and $\{\beta(\omega_k)\}$ via the minimization of (5.6.57).

Step 3. Check the convergence of the iteration, e.g., by checking whether the relative change of the criterion between two consecutive iterations is smaller than a preassigned value. If no, then go to Step 1.
If yes, then we have a final amplitude spectrum estimate given by $\{\hat\beta(\omega_k)\}_{k=1}^K$. If desired, this estimate can be transformed into a power spectrum estimate, as explained in Complement 5.6.4.

To reduce the computational burden of the above algorithm, we can run it with a value of $K$ that is not much larger than $N$ (e.g., $K \in [2N, 4N]$). After the iterations are terminated, the final spectral estimate can be evaluated on a (much) finer frequency grid, if desired.

A cyclic minimizer reduces the criterion function at each iteration (see the discussion in Complement 4.9.5). Furthermore, in the present case this reduction is strict, because the solutions to the minimization problems with respect to $y_u$ and to $\{h(\omega_k), \beta(\omega_k)\}$ in Steps 1 and 2 are unique under weak conditions. Combining this observation with the fact that the criterion in (5.6.57) is bounded from below by zero, we can conclude that the GAPES algorithm converges to a minimum point of (5.6.57). This minimum may be a local or a global minimum, depending in part on the quality of the initial estimates of $\{h(\omega_k), \beta(\omega_k)\}$ used in Step 0. The initialization step, as well as the remaining steps of the GAPES algorithm, are discussed in more detail below.

Step 0. A simple way to obtain initial estimates of $\{h(\omega_k), \beta(\omega_k)\}$ is to apply APES to the full-data sequence with $y_u = 0$. This way of initializing GAPES can be interpreted as permuting Step 1 with Step 2 in the algorithm and initializing the algorithm in Step 0 with $y_u = 0$.

A more elaborate initialization scheme consists of using only the available data samples to build the sample covariance matrix $\hat R$ in (5.6.47) needed in APES. Provided that there are enough samples so that the resulting $\hat R$ matrix is nonsingular, this initialization scheme usually gives more accurate estimates of $\{h(\omega_k), \beta(\omega_k)\}$ than the ones obtained by setting $y_u = 0$ (see [Stoica, Larsson, and Li 2000] for details).

Step 1.
We want to find the solution $\hat y_u$ to the problem:

$$\min_{y_u}\ \sum_{k=1}^{K}\sum_{t=m+1}^{N}\left|\hat h^*(\omega_k)\tilde y(t) - \hat\beta(\omega_k)e^{i\omega_k t}\right|^2 \tag{5.6.58}$$

where $\tilde y(t) = [y(t),\ y(t-1),\ \dots,\ y(t-m)]^T$. We will show that the above minimization problem is quadratic in $y_u$ (for given $\{\hat h(\omega_k)\}$ and $\{\hat\beta(\omega_k)\}$), and thus admits a closed-form solution. Let

$$\hat h^*(\omega_k) = [h_{0,k},\ h_{1,k},\ \dots,\ h_{m,k}]$$

and define

$$H_k = \begin{bmatrix} h_{0,k} & h_{1,k} & \cdots & h_{m,k} & & 0 \\ & \ddots & \ddots & & \ddots & \\ 0 & & h_{0,k} & h_{1,k} & \cdots & h_{m,k} \end{bmatrix}, \qquad (N-m)\times N$$

$$\mu_k = \hat\beta(\omega_k)\begin{bmatrix} e^{i\omega_k N} \\ \vdots \\ e^{i\omega_k(m+1)} \end{bmatrix}, \qquad (N-m)\times 1$$

Using this notation we can write the quadratic criterion in (5.6.58) as

$$\sum_{k=1}^{K}\left\| H_k\begin{bmatrix} y(N) \\ \vdots \\ y(1) \end{bmatrix} - \mu_k \right\|^2 \tag{5.6.59}$$

Next, we define the matrices $A_k$ and $U_k$ via the following equality:

$$H_k\begin{bmatrix} y(N) \\ \vdots \\ y(1) \end{bmatrix} = A_k y_a + U_k y_u \tag{5.6.60}$$

With this notation, the criterion in (5.6.59) becomes:

$$\sum_{k=1}^{K}\left\| U_k y_u - (\mu_k - A_k y_a) \right\|^2 \tag{5.6.61}$$

The minimizer of (5.6.61) with respect to $y_u$ is readily found to be (see Result R32 in Appendix A):

$$\hat y_u = \left[\sum_{k=1}^{K} U_k^* U_k\right]^{-1}\left[\sum_{k=1}^{K} U_k^*(\mu_k - A_k y_a)\right] \tag{5.6.62}$$

The inverse matrix above exists under weak conditions; for details, see [Stoica, Larsson, and Li 2000].

Step 2. The solution to this step can be computed by applying the APES algorithm of Complement 5.6.4 to the data sequence made from $y_a$ and $\hat y_u$.

The description of the GAPES algorithm is now complete. Numerical experience with this algorithm, reported in [Stoica, Larsson, and Li 2000], suggests that GAPES has good performance, particularly for data consisting of a mixture of sinusoidal signals embedded in noise.
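The data-interpolation step (5.6.58)–(5.6.62) can be sketched as follows. Rather than forming $H_k$, $A_k$, and $U_k$ explicitly as in (5.6.60), this sketch (our own generic formulation, not code from the text) builds the same linear system row by row and solves it by least squares; the sanity check exploits the fact that a single on-grid sinusoid is interpolated exactly whenever the filter satisfies $\hat h^*a(\omega_0) = 1$:

```python
import numpy as np

def gapes_step1(N, m, miss, y_a, filters, betas, omegas):
    """GAPES Step 1: minimize (5.6.58) over the missing samples y_u,
    for fixed filters h(omega_k) and amplitudes beta(omega_k).
    `miss` holds 0-based indices of the missing samples; `filters[k]`
    is the (m+1)-vector h(omega_k), applied as h^* y_tilde(t)."""
    avail = [i for i in range(N) if i not in miss]
    rows, rhs = [], []
    for k in range(len(omegas)):
        h = filters[k]
        for t in range(m + 1, N + 1):          # t as in the text (1-based)
            row = np.zeros(N, dtype=complex)
            for j in range(m + 1):             # h^* y_tilde(t) = sum_j conj(h_j) y(t-j)
                row[t - 1 - j] = np.conj(h[j])
            rows.append(row)
            rhs.append(betas[k] * np.exp(1j * omegas[k] * t))
    M = np.array(rows)
    rhs = np.array(rhs)
    A_a, A_u = M[:, avail], M[:, sorted(miss)]     # split as in (5.6.60)
    y_u, *_ = np.linalg.lstsq(A_u, rhs - A_a @ y_a, rcond=None)
    return y_u

# sanity check with one noiseless on-grid sinusoid (illustrative values)
N, m, omega0, beta0 = 40, 4, 2 * np.pi * 5 / 40, 1.5 * np.exp(0.2j)
t = np.arange(1, N + 1)
y = beta0 * np.exp(1j * omega0 * t)
a = np.exp(-1j * omega0 * np.arange(m + 1))
h = a / (m + 1)                                # so that h^* a(omega0) = 1
miss = {9, 10, 11, 24}                         # 0-based missing positions
avail = [i for i in range(N) if i not in miss]
y_u = gapes_step1(N, m, miss, y[avail], [h], [beta0], [omega0])
assert np.allclose(y_u, y[sorted(miss)], atol=1e-8)
```

For $K$ frequencies on the full grid (5.6.56) this brute-force assembly is of course much slower than the structured solution (5.6.62); it is meant only to make the linear-algebraic content of Step 1 concrete.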
5.6.6 Extensions of Filter Bank Approaches to Two-Dimensional Signals

The following filter bank approaches for one-dimensional (1D) signals were discussed so far in this chapter and its complements:

• the periodogram,
• the refined filter bank method,
• the Capon method, and
• the APES method.

In this complement we explain briefly how the above nonparametric spectral analysis methods can be extended to the case of two-dimensional (2D) signals. In the process, we also provide new interpretations of some of these methods, which are particularly useful when we want very simple (although somewhat heuristic) derivations of the methods in question. We will discuss the extension of each of the methods listed above in turn. Note that 2D spectral analysis finds applications in image processing, synthetic aperture radar imagery, and so forth. See [Larsson, Li, and Stoica 2003] for a review that covers the 2D methods discussed in this complement, and their application to synthetic aperture radar. The 2D extension of some parametric methods for spectral line analysis is discussed in Complement 4.9.7.

Periodogram

The 1D periodogram can be obtained by a least-squares (LS) fitting of the data $\{y(t)\}$ to a generic 1D sinusoidal sequence $\{\beta e^{i\omega t}\}$:

$$\min_\beta\ \sum_{t=1}^{N}\left|y(t) - \beta e^{i\omega t}\right|^2 \tag{5.6.63}$$

The solution to (5.6.63) is readily found to be

$$\beta(\omega) = \frac{1}{N}\sum_{t=1}^{N} y(t)e^{-i\omega t} \tag{5.6.64}$$

The squared modulus of (5.6.64) (scaled by $N$; see Section 5.2) gives the 1D periodogram

$$\frac{1}{N}\left|\sum_{t=1}^{N} y(t)e^{-i\omega t}\right|^2 \tag{5.6.65}$$

In the 2D case, let $\{y(t,\bar t\,)\}$ (for $t = 1,\dots,N$ and $\bar t = 1,\dots,\bar N$) denote the available data matrix, and let $\{\beta e^{i(\omega t + \bar\omega\bar t\,)}\}$ denote a generic 2D sinusoid.
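The 1D least-squares route (5.6.63)–(5.6.65) to the periodogram, and its evaluation on the Fourier grid via the FFT, can be checked numerically. The sketch below uses synthetic data and an illustrative grid frequency:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)
t = np.arange(1, N + 1)

k = 3
w = 2 * np.pi * k / N
# LS fit (5.6.63): min_beta sum_t |y(t) - beta e^{i w t}|^2
e = np.exp(1j * w * t)
beta_ls = np.linalg.lstsq(e[:, None], y, rcond=None)[0][0]
# closed-form solution (5.6.64)
beta_cf = np.sum(y * np.exp(-1j * w * t)) / N
assert np.isclose(beta_ls, beta_cf)
# N |beta|^2 is the periodogram (5.6.65); the 1-based time origin only
# changes a phase, so on the grid it matches |FFT|^2 / N
per = N * np.abs(beta_cf) ** 2
assert np.isclose(per, np.abs(np.fft.fft(y)[k]) ** 2 / N)
```

The same reasoning carries over to the 2D fit below, with `np.fft.fft2` playing the role of the 1D FFT.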
The LS fit of the data to the generic sinusoid, that is:

$$\min_\beta\ \sum_{t=1}^{N}\sum_{\bar t=1}^{\bar N}\left|y(t,\bar t\,) - \beta e^{i(\omega t + \bar\omega\bar t\,)}\right|^2 \iff \min_\beta\ \sum_{t=1}^{N}\sum_{\bar t=1}^{\bar N}\left|y(t,\bar t\,)e^{-i(\omega t + \bar\omega\bar t\,)} - \beta\right|^2 \tag{5.6.66}$$

has the following solution:

$$\beta(\omega,\bar\omega) = \frac{1}{N\bar N}\sum_{t=1}^{N}\sum_{\bar t=1}^{\bar N} y(t,\bar t\,)e^{-i(\omega t + \bar\omega\bar t\,)} \tag{5.6.67}$$

Similarly to the 1D case, the scaled squared magnitude of (5.6.67) yields the 2D periodogram

$$\frac{1}{N\bar N}\left|\sum_{t=1}^{N}\sum_{\bar t=1}^{\bar N} y(t,\bar t\,)e^{-i(\omega t + \bar\omega\bar t\,)}\right|^2 \tag{5.6.68}$$

which can be efficiently computed by means of a 2D FFT algorithm, as described below.

The 2D FFT algorithm computes the 2D DTFT of a sequence $\{y(t,\bar t\,)\}$ (for $t = 1,\dots,N$; $\bar t = 1,\dots,\bar N$) on a grid of frequency values defined by

$$\omega_k = \frac{2\pi k}{N},\ \ k = 0,\dots,N-1; \qquad \bar\omega_\ell = \frac{2\pi\ell}{\bar N},\ \ \ell = 0,\dots,\bar N-1$$

The 2D FFT algorithm achieves computational efficiency by making use of the 1D FFT described in Section 2.3. Let

$$Y(k,\ell) = \sum_{t=1}^{N}\sum_{\bar t=1}^{\bar N} y(t,\bar t\,)e^{-i\left(\frac{2\pi k}{N}t + \frac{2\pi\ell}{\bar N}\bar t\right)} = \sum_{t=1}^{N} e^{-i\frac{2\pi k}{N}t}\underbrace{\sum_{\bar t=1}^{\bar N} y(t,\bar t\,)e^{-i\frac{2\pi\ell}{\bar N}\bar t}}_{\triangleq\, V_t(\ell)} \tag{5.6.69}$$

$$\phantom{Y(k,\ell)} = \sum_{t=1}^{N} V_t(\ell)\,e^{-i\frac{2\pi k}{N}t} \tag{5.6.70}$$

For each $t = 1,\dots,N$, the sequence $\{V_t(\ell)\}_{\ell=0}^{\bar N-1}$ defined in (5.6.69) can be efficiently computed using a 1D FFT of length $\bar N$ (cf. Section 2.3). In addition, for each $\ell = 0,\dots,\bar N-1$, the sum in (5.6.70) can be efficiently computed using a 1D FFT of length $N$. If $N$ is a power of two, an $N$-point 1D FFT requires $\frac{N}{2}\log_2 N$ flops. Thus, if $N$ and $\bar N$ are powers of two, then the number of operations needed to compute $\{Y(k,\ell)\}$ is

$$N\,\frac{\bar N}{2}\log_2\bar N + \bar N\,\frac{N}{2}\log_2 N = \frac{N\bar N}{2}\log_2(N\bar N)\ \text{flops} \tag{5.6.71}$$

If $N$ or $\bar N$ is not a power of two, zero padding can be used.

Refined Filter Bank (RFB) Method

Similarly to the 1D case (see (5.3.30) or (5.7.1)), the 2D RFB method can be implemented as a multiwindowed periodogram (cf.
(5.6.68)):

$$\frac{1}{K}\sum_{p=1}^{K}\left|\sum_{t=1}^{N}\sum_{\bar t=1}^{\bar N} w_p(t,\bar t\,)\,y(t,\bar t\,)\,e^{-i(\omega t + \bar\omega\bar t\,)}\right|^2 \tag{5.6.72}$$

where $\{w_p(t,\bar t\,)\}_{p=1}^K$ are the 2D Slepian data windows (or tapers). The problem left is to derive 2D extensions of the 1D Slepian tapers discussed in Section 5.3.1.

The frequency response of a 2D taper $\{w(t,\bar t\,)\}$ is given by

$$\sum_{t=1}^{N}\sum_{\bar t=1}^{\bar N} w(t,\bar t\,)e^{-i(\omega t + \bar\omega\bar t\,)} \tag{5.6.73}$$

Let us define the matrices

$$W = \begin{bmatrix} w(1,1) & \cdots & w(1,\bar N) \\ \vdots & & \vdots \\ w(N,1) & \cdots & w(N,\bar N) \end{bmatrix}, \qquad B = \begin{bmatrix} e^{-i(\omega+\bar\omega)} & \cdots & e^{-i(\omega+\bar\omega\bar N)} \\ \vdots & & \vdots \\ e^{-i(\omega N+\bar\omega)} & \cdots & e^{-i(\omega N+\bar\omega\bar N)} \end{bmatrix}$$

and let $\operatorname{vec}(\cdot)$ denote the vectorization operator, which stacks the columns of its matrix argument into a single vector. Also, let

$$a(\omega) = \begin{bmatrix} e^{-i\omega} \\ \vdots \\ e^{-iN\omega} \end{bmatrix}, \qquad \bar a(\bar\omega) = \begin{bmatrix} e^{-i\bar\omega} \\ \vdots \\ e^{-i\bar N\bar\omega} \end{bmatrix} \tag{5.6.74}$$

and let the symbol $\otimes$ denote the Kronecker matrix product; the Kronecker product of two matrices, $X$ of size $m\times n$ and $Y$ of size $\bar m\times\bar n$, is an $m\bar m\times n\bar n$ matrix whose $(i,j)$ block of size $\bar m\times\bar n$ is given by $X_{ij}\cdot Y$, for $i = 1,\dots,m$ and $j = 1,\dots,n$, where $X_{ij}$ denotes the $(i,j)$th element of $X$ (see, e.g., [Horn and Johnson 1985] for the properties of $\otimes$). Finally, let

$$w = \operatorname{vec}(W) = \left[w(1,1),\dots,w(N,1)\ |\ \cdots\ |\ w(1,\bar N),\dots,w(N,\bar N)\right]^T \tag{5.6.75}$$

and

$$b(\omega,\bar\omega) = \operatorname{vec}(B) = \left[e^{-i(\omega+\bar\omega)},\dots,e^{-i(\omega N+\bar\omega)}\ |\ \cdots\ |\ e^{-i(\omega+\bar\omega\bar N)},\dots,e^{-i(\omega N+\bar\omega\bar N)}\right]^T = \bar a(\bar\omega)\otimes a(\omega) \tag{5.6.76}$$

(the last equality in (5.6.76) follows from the definition of $\otimes$). Using (5.6.75) and (5.6.76), we can write (5.6.73) as

$$w^*\,b(\omega,\bar\omega) \tag{5.6.77}$$

which is similar to the expression $h^*a(\omega)$ for the 1D frequency response in Section 5.3.1.
Hence, the analysis in Section 5.3.1 carries over to the 2D case, with the only difference that now the matrix $\Gamma$ is given by

$$\Gamma_{2D} = \frac{1}{(2\pi)^2}\int_{-\beta\pi}^{\beta\pi}\int_{-\bar\beta\pi}^{\bar\beta\pi} b(\omega,\bar\omega)b^*(\omega,\bar\omega)\,d\omega\,d\bar\omega = \frac{1}{(2\pi)^2}\int_{-\beta\pi}^{\beta\pi}\int_{-\bar\beta\pi}^{\bar\beta\pi}\left[\bar a(\bar\omega)\bar a^*(\bar\omega)\right]\otimes\left[a(\omega)a^*(\omega)\right]d\omega\,d\bar\omega$$

where we have used the fact that $(A\otimes B)(C\otimes D) = AC\otimes BD$ for any conformable matrices (see, e.g., [Horn and Johnson 1985]). Hence,

$$\Gamma_{2D} = \bar\Gamma_{1D}\otimes\Gamma_{1D} \tag{5.6.78}$$

where

$$\Gamma_{1D} = \frac{1}{2\pi}\int_{-\beta\pi}^{\beta\pi} a(\omega)a^*(\omega)\,d\omega, \qquad \bar\Gamma_{1D} = \frac{1}{2\pi}\int_{-\bar\beta\pi}^{\bar\beta\pi}\bar a(\bar\omega)\bar a^*(\bar\omega)\,d\bar\omega \tag{5.6.79}$$

The above Kronecker product expression for $\Gamma_{2D}$ implies that (see [Horn and Johnson 1985]):

(a) The eigenvalues of $\Gamma_{2D}$ are equal to the products of the eigenvalues of $\Gamma_{1D}$ and $\bar\Gamma_{1D}$.

(b) The eigenvectors of $\Gamma_{2D}$ are given by the Kronecker products of the eigenvectors of $\Gamma_{1D}$ and $\bar\Gamma_{1D}$.

The conclusion is that the computation of 2D Slepian tapers can be reduced to the computation of 1D Slepian tapers. We refer the reader to Section 5.3.1, and the references cited there, for details on 1D Slepian taper computation.

Capon and APES Methods

In the 1D case we can obtain the Capon and APES methods by a weighted LS fit of the data vectors $\{\tilde y(t)\}$, where

$$\tilde y(t) = [y(t),\ y(t-1),\ \dots,\ y(t-m)]^T \tag{5.6.80}$$

to the vectors corresponding to a generic sinusoidal signal with frequency $\omega$. Specifically, consider the LS problem:

$$\min_\beta\ \sum_{t=m+1}^{N}\left[\tilde y(t) - a(\omega)\beta e^{i\omega t}\right]^* W^{-1}\left[\tilde y(t) - a(\omega)\beta e^{i\omega t}\right] \tag{5.6.81}$$

where $W^{-1}$ is a weighting matrix, yet to be specified, and where

$$a(\omega) = \left[1,\ e^{-i\omega},\ \dots,\ e^{-im\omega}\right]^T \tag{5.6.82}$$

Note that the definition of $a(\omega)$ in (5.6.82) differs from that of $a(\omega)$ in (5.6.74).
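The Kronecker-product facts (a) and (b) invoked for $\Gamma_{2D} = \bar\Gamma_{1D}\otimes\Gamma_{1D}$ in (5.6.78) are standard linear algebra, and can be illustrated numerically with random Hermitian matrices standing in for the two 1D matrices (an illustration of the general property, not code from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_herm(n):
    # random Hermitian matrix, a stand-in for Gamma_1D / Gamma_1D-bar
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return X + X.conj().T

A, B = rand_herm(4), rand_herm(3)
lamA, VA = np.linalg.eigh(A)
lamB, VB = np.linalg.eigh(B)

G = np.kron(B, A)  # as in Gamma_2D = Gamma_1D-bar (x) Gamma_1D
# (a) the eigenvalues of the Kronecker product are the pairwise products
assert np.allclose(np.sort(np.linalg.eigvalsh(G)),
                   np.sort(np.outer(lamB, lamA).ravel()))
# (b) Kronecker products of the factor eigenvectors are eigenvectors of G
v = np.kron(VB[:, 0], VA[:, 0])
assert np.allclose(G @ v, lamB[0] * lamA[0] * v)
```

This is what reduces the 2D Slepian taper computation to two 1D eigenproblems: the dominant eigenvectors of $\Gamma_{2D}$ are Kronecker products of dominant 1D eigenvectors.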
The solution to (5.6.81) is given by

$$\beta(\omega) = \frac{a^*(\omega)W^{-1}g(\omega)}{a^*(\omega)W^{-1}a(\omega)} \tag{5.6.83}$$

where

$$g(\omega) = \frac{1}{N-m}\sum_{t=m+1}^{N}\tilde y(t)e^{-i\omega t} \tag{5.6.84}$$

For

$$W = \hat R \triangleq \frac{1}{N-m}\sum_{t=m+1}^{N}\tilde y(t)\tilde y^*(t) \tag{5.6.85}$$

the weighted LS estimate of the amplitude spectrum in (5.6.83) reduces to the Capon method (see equation (5.6.55) in Complement 5.6.4), whereas for

$$W = \hat R - g(\omega)g^*(\omega) \triangleq \hat Q(\omega) \tag{5.6.86}$$

equation (5.6.83) gives the APES method (see equations (5.6.48), (5.6.49), and (5.6.51) in Complement 5.6.4).

The extension of the above derivation to the 2D case is straightforward. By analogy with the 1D data vector in (5.6.80), let

$$\left[y(t-k,\bar t-\bar k)\right] = \begin{bmatrix} y(t,\bar t\,) & \cdots & y(t,\bar t-\bar m) \\ \vdots & & \vdots \\ y(t-m,\bar t\,) & \cdots & y(t-m,\bar t-\bar m) \end{bmatrix} \tag{5.6.87}$$

be the 2D data matrix, and let

$$\tilde y(t,\bar t\,) = \operatorname{vec}\left[y(t-k,\bar t-\bar k)\right] = \left[y(t,\bar t\,),\dots,y(t-m,\bar t\,)\ |\ \cdots\ |\ y(t,\bar t-\bar m),\dots,y(t-m,\bar t-\bar m)\right]^T \tag{5.6.88}$$

Our goal is to fit the data matrix in (5.6.87) to the matrix corresponding to a generic 2D sinusoid with frequency pair $(\omega,\bar\omega)$, that is:

$$\left[\beta e^{i[\omega(t-k)+\bar\omega(\bar t-\bar k)]}\right] = \beta\begin{bmatrix} e^{i(\omega t+\bar\omega\bar t\,)} & \cdots & e^{i[\omega t+\bar\omega(\bar t-\bar m)]} \\ \vdots & & \vdots \\ e^{i[\omega(t-m)+\bar\omega\bar t\,]} & \cdots & e^{i[\omega(t-m)+\bar\omega(\bar t-\bar m)]} \end{bmatrix} \tag{5.6.89}$$

Similarly to (5.6.88), let us vectorize (5.6.89):

$$\operatorname{vec}\left[\beta e^{i[\omega(t-k)+\bar\omega(\bar t-\bar k)]}\right] = \beta e^{i(\omega t+\bar\omega\bar t\,)}\operatorname{vec}\left[e^{-i(\omega k+\bar\omega\bar k)}\right] = \beta e^{i(\omega t+\bar\omega\bar t\,)}\,\bar a(\bar\omega)\otimes a(\omega) \tag{5.6.90}$$

As in (5.6.76), let

$$b(\omega,\bar\omega) = \bar a(\bar\omega)\otimes a(\omega), \qquad (m+1)(\bar m+1)\times 1 \tag{5.6.91}$$

We deduce from (5.6.88)–(5.6.91) that the 2D counterpart of the 1D weighted LS fitting problem in (5.6.81) is the following:

$$\min_\beta\ \sum_{t=m+1}^{N}\sum_{\bar t=\bar m+1}^{\bar N}\left[\tilde y(t,\bar t\,) - \beta e^{i(\omega t+\bar\omega\bar t\,)}b(\omega,\bar\omega)\right]^* W^{-1}\left[\tilde y(t,\bar t\,) - \beta e^{i(\omega t+\bar\omega\bar t\,)}b(\omega,\bar\omega)\right] \tag{5.6.92}$$

The solution to (5.6.92) is given by:

$$\beta(\omega,\bar\omega) = \frac{b^*(\omega,\bar\omega)W^{-1}g(\omega,\bar\omega)}{b^*(\omega,\bar\omega)W^{-1}b(\omega,\bar\omega)} \tag{5.6.93}$$

where

$$g(\omega,\bar\omega) = \frac{1}{(N-m)(\bar N-\bar m)}\sum_{t=m+1}^{N}\sum_{\bar t=\bar m+1}^{\bar N}\tilde y(t,\bar t\,)e^{-i(\omega t+\bar\omega\bar t\,)} \tag{5.6.94}$$

The 2D Capon method is given by (5.6.93) with

$$W = \frac{1}{(N-m)(\bar N-\bar m)}\sum_{t=m+1}^{N}\sum_{\bar t=\bar m+1}^{\bar N}\tilde y(t,\bar t\,)\tilde y^*(t,\bar t\,) \triangleq \hat R \tag{5.6.95}$$

whereas the 2D APES method is given by (5.6.93) with

$$W = \hat R - g(\omega,\bar\omega)g^*(\omega,\bar\omega) \triangleq \hat Q(\omega,\bar\omega) \tag{5.6.96}$$

Note that $g(\omega,\bar\omega)$ in (5.6.94) can be efficiently evaluated using a 2D FFT algorithm. However, an efficient implementation of the 2D spectral estimate in (5.6.93) is not so direct. A naive implementation may be rather time consuming, owing to the large dimensions of the vectors and matrices involved, as well as the need to evaluate $\beta(\omega,\bar\omega)$ on a 2D frequency grid. We refer the reader to [Larsson, Li, and Stoica 2003] and the references therein for a discussion of computationally efficient implementations of the 2D Capon and 2D APES spectral estimation methods.

5.7 EXERCISES

Exercise 5.1: Multiwindow Interpretation of Bartlett and Welch Methods

Equation (5.3.30) allows us to interpret the RFB method as a multiwindow (or multitaper) approach.
Indeed, according to equation (5.3.30), we can write the RFB spectral estimator as:

    \hat{\phi}(\omega) = \frac{1}{K} \sum_{p=1}^{K} \left| \sum_{t=1}^{N} w_{p,t}\, y(t) e^{-i\omega t} \right|^2    (5.7.1)

where K is the number of data windows (or tapers), and where in the case of RFB the w_{p,t} are obtained from the pth dominant Slepian sequence (p = 1, \ldots, K). Show that the Bartlett and Welch methods can also be cast into the previous multiwindow framework. Make use of the multiwindow interpretation of these methods to compare them with one another and with the RFB approach.

Exercise 5.2: An Alternative Statistically Stable RFB Estimate

In Section 5.3.3 we developed a statistically stable RFB spectral estimator using a bank of narrow bandpass filters. In Section 5.4 we derived the Capon method, which employs a shorter filter length than the RFB. In this exercise we derive the RFB analog of the Capon approach and show its correspondence with the Welch and Blackman–Tukey estimators. As an alternative technique to the filter in (5.3.4), consider a passband filter of shorter length:

    h = [h_0, \ldots, h_m]^*    (5.7.2)

for some m < N. The optimal h will be the first Slepian sequence in (5.3.10) found using a \Gamma matrix of size m \times m. In this case, the filtered output

    y_F(t) = \sum_{k=0}^{m} h_k \tilde{y}(t-k)    (5.7.3)

(with \tilde{y}(t) = y(t) e^{-i\omega t}) can be computed for t = m+1, \ldots, N. The resulting RFB spectral estimate is given by

    \hat{\phi}(\omega) = \frac{1}{N-m} \sum_{t=m+1}^{N} |y_F(t)|^2    (5.7.4)

(a) Show that the estimator in (5.7.4) is an unbiased estimate of \phi(\omega), under the standard assumptions considered in this chapter.

(b) Show that \hat{\phi}(\omega) can be written as

    \hat{\phi}(\omega) = \frac{1}{m+1}\, h^*(\omega) \hat{R}\, h(\omega)    (5.7.5)

where \hat{R} is an (m+1) \times (m+1) Hermitian (but not Toeplitz) estimate of the covariance matrix of y(t). Find the corresponding filter h(\omega).

(c) Compare (5.7.5) with the Blackman–Tukey estimate in equation (5.4.22). Discuss how the two compare when N is large.

(d) Interpret \hat{\phi}(\omega) as a Welch-type estimator.
What is the overlap parameter K in the corresponding Welch method?

Exercise 5.3: Another Derivation of the Capon FIR Filter

The Capon FIR filter design problem can be restated as follows:

    \min_h \; h^* R h \,/\, |h^* a(\omega)|^2    (5.7.6)

Make use of the Cauchy–Schwartz inequality (Result R22 in Appendix A) to obtain a simple proof of the fact that h given by (5.4.8) is a solution to the optimization problem above.

Exercise 5.4: The Capon Filter is a Matched Filter

Compare the Capon filter design problem (5.4.7) with the following classical matched filter design.

• Filter: A causal FIR filter with an (m+1)-dimensional impulse response vector denoted by h.

• Signal-in-noise model: y(t) = \alpha e^{i\omega t} + \varepsilon(t), which gives the following expression for the input vector to the filter:

    z(t) = \alpha a(\omega) e^{i\omega t} + e(t)    (5.7.7)

where a(\omega) is as defined in (5.4.6), \alpha e^{i\omega t} is a sinusoidal signal, z(t) = [y(t), y(t-1), \ldots, y(t-m)]^T, and e(t) is a possibly colored noise vector defined similarly to z(t). The signal and noise terms above are assumed to be uncorrelated.

• Design goal: Maximize the signal-to-noise ratio in the filter's output,

    \max_h \; |h^* a(\omega)|^2 \,/\, h^* Q h    (5.7.8)

where Q is the noise covariance matrix.

Show that the Capon filter is identical to the matched filter which solves the above design problem. The adjective "matched" attached to the above filter is motivated by the fact that the filter impulse response vector h depends on, and hence is "matched to", the signal term in (5.7.7).

Exercise 5.5: Computation of the Capon Spectrum

The Capon spectral estimators are defined in equations (5.4.19) and (5.4.20). The bulk of the computation of either estimator consists in the evaluation of an expression of the form a^*(\omega) Q a(\omega), where Q is a given positive definite matrix, at a number of points on the frequency axis.
Let these evaluation points be given by \{\omega_k = 2\pi k/M\}_{k=0}^{M-1} for some sufficiently large M value (which we assume to be a power of two). The direct evaluation of a^*(\omega_k) Q a(\omega_k), for k = 0, \ldots, M-1, would require O(M m^2) flops. Show that an evaluation based on the eigendecomposition of Q and the use of the FFT is usually much more efficient computationally.

Exercise 5.6: A Relationship between the Capon Method and MUSIC (Pseudo)Spectra

Assume that the covariance matrix R, entering the Capon spectrum formula, has the expression (4.2.7) in the frequency estimation application. Then, show that

    \lim_{\sigma^2 \to 0} (\sigma^2 R^{-1}) = I - A(A^*A)^{-1}A^*    (5.7.9)

Conclude that the limiting (for N \gg 1) Capon and MUSIC (pseudo)spectra, associated with the frequency estimation data, are close to one another, provided that all signal-to-noise ratios are large enough.

Exercise 5.7: A Capon-like Implementation of MUSIC

The Capon and MUSIC (pseudo)spectra, as the data length N increases, are given by the functions in equations (5.4.12) and (4.5.13), respectively. Recall that the columns of the matrix G in (4.5.13) are equal to the (m-n) eigenvectors corresponding to the smallest eigenvalues of the covariance matrix R in (5.4.12). Consider the following Capon-like pseudospectrum:

    g_k(\omega) = \lambda^k\, a^*(\omega) R^{-k} a(\omega)    (5.7.10)

where \lambda is the minimum eigenvalue of R; the covariance matrix R is assumed to have the form (4.2.7) postulated by MUSIC. Show that, under this assumption,

    \lim_{k\to\infty} g_k(\omega) = a^*(\omega) G G^* a(\omega) = (4.5.13)    (5.7.11)

(where the convergence is uniform in \omega). Explain why the convergence in (5.7.11) may be slow in difficult scenarios, such as those with closely spaced frequencies, and hence the use of (5.7.10) with a large k to approximate the MUSIC pseudospectrum may be computationally inefficient. However, the use of (5.7.10) for frequency estimation has a potential advantage over MUSIC that may outweigh its computational inefficiency. Find and comment on that advantage.
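The trick asked for in Exercise 5.5 rests on the identity a^*(\omega) Q a(\omega) = \sum_j \lambda_j |u_j^* a(\omega)|^2, where Q = \sum_j \lambda_j u_j u_j^* is the eigendecomposition of Q: each inner product u_j^* a(\omega_k), over the whole grid, is one zero-padded FFT. The NumPy sketch below is an illustration of ours (the book's accompanying tools are Matlab functions; the function name and the steering-vector convention a(\omega) = [1, e^{-i\omega}, \ldots, e^{-im\omega}]^T are our assumptions):

```python
import numpy as np

def grid_quadratic_form_fft(Q, M):
    """Evaluate a^*(w_k) Q a(w_k) on the grid w_k = 2*pi*k/M,
    with a(w) = [1, e^{-i w}, ..., e^{-i m w}]^T and Q an
    (m+1) x (m+1) Hermitian positive definite matrix.

    The eigendecomposition Q = sum_j lam_j u_j u_j^* gives
    a^*(w) Q a(w) = sum_j lam_j |u_j^* a(w)|^2, and the values
    u_j^* a(w_k) over all k form one zero-padded length-M FFT of
    conj(u_j): O((m+1) M log M) flops instead of O(M m^2)."""
    lam, U = np.linalg.eigh(Q)
    # fft(x)[k] = sum_t x[t] e^{-2 pi i k t / M} matches
    # u^* a(w_k) = sum_t conj(u[t]) e^{-i w_k t} exactly.
    F = np.fft.fft(U.conj(), n=M, axis=0)   # M x (m+1)
    return (np.abs(F) ** 2) @ lam           # real-valued, length M

# Cross-check against the direct O(M m^2) evaluation:
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
Q = B @ B.conj().T + 5 * np.eye(5)          # Hermitian positive definite
fast = grid_quadratic_form_fft(Q, 16)
for k in range(16):
    a = np.exp(-1j * (2 * np.pi * k / 16) * np.arange(5))
    assert abs(fast[k] - (a.conj() @ Q @ a).real) < 1e-8
```

The same routine applies to both Capon estimators, since only the matrix Q changes between (5.4.19) and (5.4.20).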
Exercise 5.8: Capon Estimate of the Parameters of a Single Sine Wave

Assume that the data under study consist of a sinusoidal signal observed in white noise. In such a case, the covariance matrix R is given by (cf. (4.2.7)):

    R = \alpha^2 a(\omega_0) a^*(\omega_0) + \sigma^2 I, \qquad (m \times m)

where \omega_0 denotes the true frequency value. Show that the limiting (as N \to \infty) Capon spectrum (5.4.12) peaks at \omega = \omega_0. Derive the height of the peak and show that it is not equal to \alpha^2 (as might have been expected) but is given by a function of \alpha^2, m and \sigma^2. Conclude that the Capon method can be used to obtain a consistent estimate of the frequency of a single sinusoidal signal in white noise (but not of the signal power). We note that, for two or more sinusoidal signals, the Capon frequency estimates are inconsistent. Hence the Capon frequency estimator behaves somewhat similarly to the AR frequency estimation method in this respect; see Exercise 4.4.

Exercise 5.9: An Alternative Derivation of the Relationship between the Capon and AR Methods

Make use of the equation (3.9.17) relating R_{m+1}^{-1} to R_m^{-1} to obtain a simple proof of the formula (5.4.36) relating the Capon and AR spectral estimators.

COMPUTER EXERCISES

Tools for Filter Bank Spectral Estimation: The text web site www.prenhall.com/stoica contains the following Matlab functions for use in computing filter bank spectral estimates.

• h=slepian(N,K,J) Returns the first J Slepian sequences given N and K as defined in Section 5.3; h is an N \times J matrix whose ith column gives the ith Slepian sequence.

• phi=rfb(y,K,L) The RFB spectral estimator. The vector y is the input data vector, L controls the frequency sample spacing of the output, and the output vector phi = \hat{\phi}(\omega_k) where \omega_k = 2\pi k / L. For K = 1, this function implements the high resolution RFB method in equation (5.3.22), and for K > 1 it implements the statistically stable RFB method.
• phi=capon(y,m,L) The CM Version-1 spectral estimator in equation (5.4.19); y, L, and phi are as for the RFB spectral estimator, and m is the size of the square matrix \hat{R}.

Exercise C5.10: Slepian Window Sequences

We consider the Slepian window sequences for both K = 1 (high resolution) and K = 4 (lower resolution, higher statistical stability) and compare them with classical window sequences.

(a) Evaluate and plot the first 8 Slepian window sequences and their Fourier transforms for K = 1 and 4 and for N = 32, 64, and 128 (and perhaps other values, too). Qualitatively describe the filter passbands of these first 8 Slepian sequences for K = 1 and K = 4. Which act as lowpass filters and which act as "other" types of filters?

(b) In this chapter we showed that for "large N" and K = 1, the first Slepian sequence is "reasonably close to" the rectangular window; compare the first Slepian sequence and its Fourier transform for N = 32, 64, and 128 to the rectangular window and its Fourier transform. How do they compare as a function of N? Based on this comparison, how do you expect the high resolution RFB PSD estimator to perform relative to the periodogram?

Exercise C5.11: Resolution of Refined Filter Bank Methods

We will compare the resolving power of the RFB spectral estimator with K = 1 to that of the periodogram. To do so we look at the spectral estimates of sequences which are made up of two sinusoids in noise, and where we vary the frequency difference. Generate the sequences

    y_\alpha(t) = 10 \sin(0.2 \cdot 2\pi t) + 5 \sin((0.2 + \alpha/N) 2\pi t)

for various values of \alpha near 1. Compare the resolving ability of the RFB power spectral estimate for K = 1 and of the periodogram for both N = 32 and N = 128. Discuss your results in relation to the theoretical comparisons between the two estimators. Do the results echo the theoretical predictions based on the analysis of Slepian sequences?
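As a starting point for Exercise C5.11, the following Python sketch (our own illustration; the book's tools above are Matlab functions, so the names `two_tone` and `periodogram` here are ours) generates the test sequence y_\alpha(t) and a zero-padded periodogram against which the RFB estimate can be compared:

```python
import numpy as np

def two_tone(N, alpha):
    """Noise-free form of the Exercise C5.11 test sequence:
    y_alpha(t) = 10 sin(0.2*2*pi*t) + 5 sin((0.2 + alpha/N)*2*pi*t),
    t = 1..N.  The separation alpha/N probes the ~1/N resolution
    limit of periodogram-type estimators."""
    t = np.arange(1, N + 1)
    return (10 * np.sin(0.2 * 2 * np.pi * t)
            + 5 * np.sin((0.2 + alpha / N) * 2 * np.pi * t))

def periodogram(y, nfft=4096):
    """Zero-padded periodogram phi_hat(w_k) = |Y(w_k)|^2 / N
    on the grid w_k = 2*pi*k/nfft."""
    return np.abs(np.fft.fft(y, nfft)) ** 2 / len(y)

# The dominant peak sits near the stronger tone at f = 0.2
# cycles/sample; whether the weaker tone at 0.2 + alpha/N appears
# as a separate peak depends on alpha and N.
p = periodogram(two_tone(128, 1.0))
f_peak = np.argmax(p[: len(p) // 2]) / len(p)
assert abs(f_peak - 0.2) < 0.02
```

Repeating the experiment for N = 32 versus N = 128 and for several \alpha values, and overlaying the corresponding RFB (K = 1) estimates, reproduces the comparison the exercise asks for.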
Exercise C5.12: The Statistically Stable RFB Power Spectral Estimator

In this exercise we will compare the RFB power spectral estimator when K = 4 to the Blackman–Tukey and Daniell estimators. We will use the narrowband and broadband processes considered in Exercise C2.22.

Broadband ARMA Process:

(a) Generate 50 realizations of the broadband ARMA process in Exercise C2.22, using N = 256. Estimate the spectrum using:

• The RFB method with K = 4.
• The Blackman–Tukey method with an appropriate window (such as the Bartlett window) and window length M. Choose M to obtain similar performance to the RFB method (you can select an appropriate value of M off-line and verify it in your experiments).
• The Daniell method with \tilde{N} = 8N and an appropriate choice of J. Choose J to obtain similar performance to the RFB method (you can select J off-line and verify it in your experiments).

(b) Evaluate the relative performance of the three estimators in terms of bias and variance. Are the comparisons in agreement with the theoretical predictions?

Narrowband ARMA Process: Repeat parts (a) and (b) above using 50 realizations (with N = 256) of the narrowband ARMA process in Exercise C2.22.

Exercise C5.13: The Capon Method

In this exercise we compare the Capon method to the RFB and AR methods. Consider the sinusoidal data sequence in equation (2.9.20) from Exercise C2.19, with N = 64.

(a) We first compare the data filters corresponding to a RFB method (in which the filter is data independent) with the filter corresponding to the CM Version-1 method using both m = N/4 and m = N/2 - 1; we choose the Slepian RFB method with K = 1 and K = 4 for this comparison.
For two estimation frequencies, \omega = 0 and \omega = 2\pi \cdot 0.1, plot the frequency response of the five filters (1 for K = 1 and 4 for K = 4) shown in the first block of Figure 5.1 for the two RFB methods, and also plot the response of the two Capon filters (one for each value of m; see (5.4.5) and (5.4.8)). What are their characteristic features in relation to the data? Based on these plots, discuss how data dependence can improve spectral estimation performance.

(b) Compare the two Capon estimators with the RFB estimator for both K = 1 and K = 4. Generate 50 Monte–Carlo realizations of the data and overlay plots of the 50 spectral estimates for each estimator. Discuss the similarities and differences between the RFB and Capon estimators.

(c) Compare Capon and Least Squares AR spectral estimates, again by generating 50 Monte–Carlo realizations of the data and overlaying plots of the 50 spectral estimates. Use m = 8, 16, and 30 for both the Capon method and the AR model order. How do the two methods compare in terms of resolution and variance? What are your main summarizing conclusions? Explain your results in terms of the data characteristics.

CHAPTER 6

Spatial Methods

6.1 INTRODUCTION

In this chapter, we consider the problem of locating n radiating sources by using an array of m passive sensors, as shown in Figure 6.1. The emitted energy from the sources may for example be acoustic, electromagnetic, and so on, and the receiving sensors may be any transducers that convert the received energy to electrical signals. Examples of sensors include electromagnetic antennas, hydrophones, and seismometers. This type of problem finds applications in radar and sonar systems, communications, astrophysics, biomedical research, seismology, underwater surveillance (also called passive listening) and many other fields.
This problem basically consists of determining how the "energy" is distributed over space (which may be air, water or the earth), with the source positions representing points in space with high concentrations of energy. Hence, it can be named a spatial spectral estimation problem. This name is also motivated by the fact that there are close ties between the source location problem and the problem of temporal spectral estimation treated in Chapters 1–5. In fact, as we will see, almost any of the methods encountered in the previous chapters may be used to derive a solution for the source location problem. The emphasis in this chapter will be on developing a model for the output signal of the receiving sensor array. When this model is derived, the source location problem is turned into a parameter estimation problem that is quite similar to the temporal frequency finding application discussed in Chapter 4. Hence, as we shall see, most of the methods developed for frequency estimation can be used to solve the spatial problem of source location.

The sources in Figure 6.1 generate a wave field that travels through space and is sampled, in both space and time, by the sensor array. By making an analogy with temporal sampling, we may expect that the spatial sampling done by the array provides more and more information on the incoming waves as the array's aperture increases. The array's aperture is the space occupied by the array, as measured in units of signal wavelength. It is then no surprise that an array of sensors may provide significantly enhanced location performance as compared to the use of a single antenna (which was the system used in the early applications of the source location problem).

The development of the array model in the next section is based on a number of simplifying assumptions. Some of these assumptions, which have a more general character, are listed below. The sources are assumed to be situated in the far field of the array.
Furthermore, we assume that both the sources and the sensors in the array are in the same plane and that the sources are point emitters. In addition, it is assumed that the propagation medium is homogeneous (i.e., not dispersive) so that the waves arriving at the array can be considered to be planar.

Figure 6.1. The setup of the source location problem.

Under these assumptions, the only parameter that characterizes the source locations is the so-called angle of arrival, or direction of arrival (DOA); the DOA will be formally defined later on.

The above assumptions may be relaxed at the expense of significantly complicating the array model. Note that in the general case of a near-field source and a three-dimensional array, three parameters are required to define the position of one source, for instance the azimuth, elevation and range. Nevertheless, if the assumption of planar waves is maintained then we can treat the case of several unknown parameters per source without complicating the model too much. However, in order to keep the discussion as simple as possible, we will only consider the case of one parameter per source.

In this chapter, it is also assumed that the number of sources n is known. The selection of n, when it is unknown, is a problem of significant importance for many applications, which is often referred to as the detection problem. For solutions to the detection problem (which is analogous to the problem of order selection in signal modeling), the reader is referred to [Wax and Kailath 1985; Fuchs 1988; Viberg, Ottersten, and Kailath 1991; Fuchs 1992] and Appendix C.
Finally, it is assumed that the sensors in the array can be modeled as linear (time-invariant) systems, and that their transfer characteristics as well as their locations are known. In short, we say that the array is assumed to be calibrated.

6.2 ARRAY MODEL

We begin by considering the case of a single source. Once we establish a model of the array for this case, the general model for the multiple source case is simply obtained by the superposition principle.

Suppose that a single waveform impinges upon the array and let x(t) denote the value of the signal waveform as measured at some reference point, at time t. The "reference point" may be one of the sensors in the array, or any other point placed near enough to the array so that the previously made assumption of planar wave propagation holds true. The physical signals received by the array are continuous-time waveforms and hence t is a continuous variable here, unless otherwise stated. Let \tau_k denote the time needed for the wave to travel from the reference point to sensor k (k = 1, \ldots, m). Then the output of sensor k can be written as

    \bar{y}_k(t) = \bar{h}_k(t) * x(t - \tau_k) + \bar{e}_k(t)    (6.2.1)

where \bar{h}_k(t) is the impulse response of the kth sensor, "*" denotes the convolution operation, and \bar{e}_k(t) is an additive noise. The noise may enter in equation (6.2.1) either as "thermal noise" generated by the sensor's circuitry, as "random background radiation" impinging on the array, or in other ways. In (6.2.1), \bar{h}_k(t) is assumed known and the "input" signal x(t) as well as the delay \tau_k are unknown. The parameters characterizing the source location enter in (6.2.1) through \{\tau_k\}. Hence, the source location problem is basically one of time-delay estimation for the unknown input case.

The model equation (6.2.1) can be simplified significantly if the signals are assumed to be narrowband. In order to show how this can be done, a number of preliminaries are required.
Let X(\omega) denote the Fourier transform of the (continuous-time) signal x(t):

    X(\omega) = \int_{-\infty}^{\infty} x(t) e^{-i\omega t}\, dt    (6.2.2)

(which is assumed to exist and be finite for all \omega \in (-\infty, \infty)). The inverse transform, which expresses x(t) as a linear functional of X(\omega), is given by

    x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega) e^{i\omega t}\, d\omega    (6.2.3)

Similarly, we define the transfer function \bar{H}_k(\omega) of the kth sensor as the Fourier transform of \bar{h}_k(t). In addition, let \bar{Y}_k(\omega) and \bar{E}_k(\omega) denote the Fourier transforms of the signal \bar{y}_k(t) and noise \bar{e}_k(t) in (6.2.1). By using this notation and the properties of the Fourier transform, \bar{Y}_k(\omega) can be written as

    \bar{Y}_k(\omega) = \bar{H}_k(\omega) X(\omega) e^{-i\omega\tau_k} + \bar{E}_k(\omega)    (6.2.4)

For a general class of physical signals, such as carrier modulated signals encountered in communications, the energy spectral density of x(t) has the form shown in Figure 6.2. There, \omega_c denotes the center (or carrier) frequency, which is usually the center of the frequency band occupied by the signal (hence its name). A signal having an energy spectrum of the form depicted in Figure 6.2 is called a bandpass signal (by direct analogy with the notion of bandpass filters).

Figure 6.2. The energy spectrum of a bandpass signal.

For now, we assume that the received signal x(t) is bandpass. It is clear from Figure 6.2 that the spectrum of such a signal is completely defined by the spectrum of a corresponding baseband (or lowpass) signal. The baseband spectrum, say |S(\omega)|^2, corresponding to the one in Figure 6.2, is displayed in Figure 6.3. Let s(t) denote the baseband signal associated with x(t). The process of obtaining x(t) from s(t) is called modulation, whereas the inverse process is named demodulation.
In the following we make a number of comments on the modulation and demodulation processes, which — while not being strictly relevant to the source location problem — may be helpful in clarifying some claims in the text.

Figure 6.3. The baseband spectrum that gives rise to the bandpass spectrum in Figure 6.2.

6.2.1 The Modulation–Transmission–Demodulation Process

The physical signal x(t) is real-valued and hence its spectrum |X(\omega)|^2 should be even (i.e., symmetric about \omega = 0; see, for instance, Figure 6.2). On the other hand, the spectrum of the demodulated signal s(t) may not be even (as indicated in Figure 6.3) and hence s(t) may be complex-valued. The way in which this may happen is explained as follows. The transmitted signal is, of course, obtained by modulating a real-valued signal. Hence, in the spectrum of the transmitted signal the baseband spectrum is symmetric about \omega = \omega_c. The characteristics of the transmission channel (or the propagation medium), however, most often are asymmetric about \omega = \omega_c. This results in a received bandpass signal with an associated baseband spectrum that is not even. Hence, the demodulated received signal is complex-valued. This observation supports a claim made in Chapter 1 that complex-valued signals are not uncommon in spectral estimation problems.

The Modulation Process: If s(t) is multiplied by e^{i\omega_c t}, then the Fourier transform of s(t) is translated in frequency to the right by \omega_c (assumed to be positive), as is verified by

    \int_{-\infty}^{\infty} s(t) e^{i\omega_c t} e^{-i\omega t}\, dt = \int_{-\infty}^{\infty} s(t) e^{-i(\omega - \omega_c)t}\, dt = S(\omega - \omega_c)    (6.2.5)

The above formula describes the essence of the so-called complex modulation process. (An analogous formula for random discrete-time signals is given by equation (1.4.11) in Chapter 1.) The output of the complex modulation process is always complex-valued (hence the name of this form of modulation).
If the modulated signal is real-valued, as x(t) is, then it must have an even spectrum. In such a case the translation of S(\omega) to the right by \omega_c, as in (6.2.5), must be accompanied by a translation to the left (also by \omega_c) of the folded and complex-conjugated baseband spectrum. This process results in the following expression for X(\omega):

    X(\omega) = S(\omega - \omega_c) + S^*(-(\omega + \omega_c))    (6.2.6)

It is readily verified that in the time domain, the real modulation process leading to (6.2.6) corresponds to taking the real part of the complex-modulated signal s(t) e^{i\omega_c t}:

    x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left[ S(\omega - \omega_c) + S^*(-\omega - \omega_c) \right] e^{i\omega t}\, d\omega
         = \frac{1}{2\pi} \int_{-\infty}^{\infty} S(\omega - \omega_c) e^{i(\omega - \omega_c)t} e^{i\omega_c t}\, d\omega + \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} S(-\omega - \omega_c) e^{-i(\omega + \omega_c)t} e^{i\omega_c t}\, d\omega \right]^*
         = s(t) e^{i\omega_c t} + \left[ s(t) e^{i\omega_c t} \right]^*

which gives

    x(t) = 2\, \mathrm{Re}\left[ s(t) e^{i\omega_c t} \right]    (6.2.7)

or

    x(t) = 2\alpha(t) \cos(\omega_c t + \varphi(t))    (6.2.8)

where \alpha(t) and \varphi(t) are the amplitude and phase of s(t), respectively:

    s(t) = \alpha(t) e^{i\varphi(t)}

If we let s_I(t) and s_Q(t) denote the real and imaginary parts of s(t), then we can also write (6.2.7) as

    x(t) = 2\left[ s_I(t) \cos(\omega_c t) - s_Q(t) \sin(\omega_c t) \right]    (6.2.9)

We note in passing the following terminology associated with the equivalent time-domain representations (6.2.7)–(6.2.9) of a bandpass signal: s(t) is called the complex envelope of x(t); and s_I(t) and s_Q(t) are said to be the in-phase and quadrature components of x(t).

The Demodulation Process: A calculation similar to (6.2.5) shows that the Fourier transform of x(t) e^{-i\omega_c t} is given by [S(\omega) + S^*(-\omega - 2\omega_c)], which is simply X(\omega) translated in frequency to the left by \omega_c. The baseband (or lowpass) signal s(t) can then be obtained by filtering x(t) e^{-i\omega_c t} with a baseband (or lowpass) filter whose bandwidth is matched to that of S(\omega). The hardware implementation of the demodulation process is presented later on, in block form, in Figure 6.4.
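The equivalence of the three time-domain forms (6.2.7)–(6.2.9) is easy to confirm numerically. The short check below is an illustration of ours (the carrier frequency and the particular complex envelope s(t) are arbitrary choices, not taken from the text):

```python
import numpy as np

wc = 2 * np.pi * 10.0                     # carrier frequency (arbitrary choice)
t = np.linspace(0.0, 1.0, 2000)
# A slowly varying complex envelope s(t) = alpha(t) e^{i phi(t)},
# with alpha(t) > 0 so that abs/angle recover amplitude and phase:
s = (1 + 0.3 * np.cos(2 * np.pi * t)) * np.exp(1j * 0.5 * np.sin(2 * np.pi * t))

x1 = 2 * np.real(s * np.exp(1j * wc * t))                       # (6.2.7)
x2 = 2 * np.abs(s) * np.cos(wc * t + np.angle(s))               # (6.2.8)
x3 = 2 * (s.real * np.cos(wc * t) - s.imag * np.sin(wc * t))    # (6.2.9)

# All three expressions produce the same real bandpass signal:
assert np.allclose(x1, x2) and np.allclose(x1, x3)
```

The real and imaginary parts of s used in x3 are exactly the in-phase and quadrature components s_I and s_Q of the text.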
6.2.2 Derivation of the Model Equation

Given the background of the previous subsection, we return to equation (6.2.4) describing the output of sensor k. Since x(t) is assumed to be a bandpass signal, X(\omega) is given by (6.2.6) which, when inserted in (6.2.4), leads to

    \bar{Y}_k(\omega) = \bar{H}_k(\omega) \left[ S(\omega - \omega_c) + S^*(-\omega - \omega_c) \right] e^{-i\omega\tau_k} + \bar{E}_k(\omega)    (6.2.10)

Let \tilde{y}_k(t) denote the demodulated signal:

    \tilde{y}_k(t) = \bar{y}_k(t) e^{-i\omega_c t}

It follows from (6.2.10) and the previous discussion on the demodulation process that the Fourier transform of \tilde{y}_k(t) is given by

    \tilde{Y}_k(\omega) = \bar{H}_k(\omega + \omega_c) \left[ S(\omega) + S^*(-\omega - 2\omega_c) \right] e^{-i(\omega + \omega_c)\tau_k} + \bar{E}_k(\omega + \omega_c)    (6.2.11)

When \tilde{y}_k(t) is passed through a lowpass filter with bandwidth matched to S(\omega), in the filter output (say, y_k(t)) the component in (6.2.11) centered at \omega = -2\omega_c is eliminated along with all the other frequency components that fall in the stopband of the lowpass filter. Hence, we obtain:

    Y_k(\omega) = H_k(\omega + \omega_c) S(\omega) e^{-i(\omega + \omega_c)\tau_k} + E_k(\omega + \omega_c)    (6.2.12)

where H_k(\omega + \omega_c) and E_k(\omega + \omega_c) denote the parts of \bar{H}_k(\omega + \omega_c) and \bar{E}_k(\omega + \omega_c) that fall within the lowpass filter's passband, \Omega, and where the frequency \omega is restricted to \Omega. We now make the following key assumption.

    The received signals are narrowband, so that |S(\omega)| decreases rapidly with increasing |\omega|.    (6.2.13)

Under the assumption above, (6.2.12) reduces (in an approximate way) to the following equation:

    Y_k(\omega) = H_k(\omega_c) S(\omega) e^{-i\omega_c\tau_k} + E_k(\omega + \omega_c) \qquad \text{for } \omega \in \Omega    (6.2.14)

Because H_k(\omega_c) must be different from zero, the sensor transfer function \bar{H}_k(\omega) should pass frequencies near \omega = \omega_c (as expected, since \omega_c is the center frequency of the received signal). Also note that we do not replace E_k(\omega + \omega_c) in (6.2.14) by E_k(\omega_c), since this term might not be (nearly) constant over the signal bandwidth (for instance, this would be the case when the noise term in (6.2.12) contains a narrowband interference with the same center frequency as the signal).
Remark: It is sometimes claimed that (6.2.12) can be reduced to (6.2.14) even if the signals are broadband but the sensors in the array are narrowband with center frequency \omega = \omega_c. Under such an assumption, |H_k(\omega + \omega_c)| goes quickly to zero as |\omega| increases and hence (6.2.12) becomes

    Y_k(\omega) = H_k(\omega + \omega_c) S(0) e^{-i\omega_c\tau_k} + E_k(\omega + \omega_c)    (6.2.15)

which apparently is different from (6.2.14). In order to obtain (6.2.14) from (6.2.12) under the previous conditions, we need to make some additional assumptions. Hence, if we further assume that the sensor frequency response is flat over the passband (so that H_k(\omega + \omega_c) = H_k(\omega_c)) and that the signal spectrum varies little over the sensor passband (so that S(\omega) does not differ much from S(0) over the passband in question), then we can still obtain (6.2.14) from (6.2.12).

The model of the array is derived in a straightforward manner from equation (6.2.14). The time-domain counterpart of (6.2.14) is the following:

    y_k(t) = H_k(\omega_c) e^{-i\omega_c\tau_k} s(t) + e_k(t)    (6.2.16)

where y_k(t) and e_k(t) are the inverse Fourier transforms of the corresponding terms in (6.2.14) (by a slight abuse of notation, e_k(t) is associated with E_k(\omega + \omega_c), not E_k(\omega)). The hardware implementation required to obtain \{y_k(t)\}, as defined above, is indicated in Figure 6.4. Note that the scheme in Figure 6.4 generates samples of the real and imaginary components of y_k(t). These samples are paired in the digital machine following the analog scheme of Figure 6.4 to obtain samples of the complex-valued signal y_k(t). (We stress once more that all physical analog signals are real-valued.) Note that the continuous-time signal in (6.2.16) is bandlimited: according to (6.2.14) (and the related discussion), Y_k(\omega) is approximately equal to zero for \omega \notin \Omega. Here \Omega is the support of S(\omega) (recall that the filter bandwidth is matched to the signal bandwidth), and hence it is a narrow interval.
Consequently we can sample (6.2.16) with a rather low sampling frequency. The sampled version of \{y_k(t)\} is used by the "digital processing equipment" for the purpose of DOA estimation. Of course, the digital form of \{y_k(t)\} satisfies an equation directly analogous to (6.2.16). In fact, to avoid a complication of notation by the introduction of a new discrete-time variable, from here on we consider that t in equation (6.2.16) takes discrete values

    t = 1, 2, \ldots, N    (6.2.17)

(as usual, we choose the sampling period as the unit of the time axis). We remark once again that the scheme in Figure 6.4 samples the baseband signal, which may be done using lower sampling rates compared to those needed for the bandpass signal (see also [Proakis, Rader, Ling, and Nikias 1992]).

Figure 6.4. A simplified block diagram of the analog processing in a receiving array element.

Next, we introduce the so-called array transfer vector (or direction vector):

    a(\theta) = \left[ H_1(\omega_c) e^{-i\omega_c\tau_1} \; \ldots \; H_m(\omega_c) e^{-i\omega_c\tau_m} \right]^T    (6.2.18)

Here, \theta denotes the source's direction of arrival, which is the parameter of interest in our problem. Note that since the transfer characteristics and positions of the sensors in the array are assumed to be known, the vector in (6.2.18) is a function of \theta only, as indicated by the notation (this fact will be illustrated shortly by means of a particular form of array). By making use of (6.2.18), we can write equation (6.2.16) as

    y(t) = a(\theta) s(t) + e(t)    (6.2.19)

where

    y(t) = [y_1(t) \; \ldots \; y_m(t)]^T
    e(t) = [e_1(t) \; \ldots \; e_m(t)]^T

denote the array's output vector and the additive noise vector, respectively. It should be noted that \theta enters in (6.2.18) not only through \{\tau_k\} but also through \{H_k(\omega_c)\}. In some cases, the sensors may be considered to be omnidirectional over the DOA range of interest, and then \{H_k(\omega_c)\}_{k=1}^{m} are independent of \theta. Sometimes, the sensors may also be assumed to be identical. Then by redefining the signal (H(\omega_c)s(t) is redefined as s(t)) and selecting the first sensor as the reference point, the expression (6.2.18) can be simplified to the following form:

    a(\theta) = [1 \; e^{-i\omega_c\tau_2} \; \ldots \; e^{-i\omega_c\tau_m}]^T    (6.2.20)

The extension of equation (6.2.19) to the case of multiple sources is straightforward. Since the sensors in the array were assumed to be linear elements, a direct application of the superposition principle leads to the following model of the array:

    y(t) = [a(\theta_1) \; \ldots \; a(\theta_n)] \begin{bmatrix} s_1(t) \\ \vdots \\ s_n(t) \end{bmatrix} + e(t) \triangleq A s(t) + e(t)

    \theta_k = \text{the DOA of the } k\text{th source}
    s_k(t) = \text{the signal corresponding to the } k\text{th source}    (6.2.21)

It is interesting to note that the above model equation mainly relies on the narrowband assumption (6.2.13). The planar wave assumption made in the introductory part of this chapter has not been used so far. This assumption is to be used when deriving the explicit dependence of \{\tau_k\} as a function of \theta, as is illustrated in the following for an array with a special geometry.

Uniform Linear Array: Consider the array of m identical sensors uniformly spaced on a line, depicted in Figure 6.5. Such an array is commonly referred to as a uniform linear array (ULA). Let d denote the distance between two consecutive sensors, and let \theta denote the DOA of the signal illuminating the array, as measured (counterclockwise) with respect to the normal to the line of sensors.
Then, under the planar wave hypothesis and the assumption that the first sensor in the array is chosen as the reference point, we find that

τ_k = (k − 1) d sin θ / c    for θ ∈ [−90°, 90°]    (6.2.22)

where c is the propagation velocity of the impinging waveform (for example, the speed of light in the case of electromagnetic waves). Inserting (6.2.22) into (6.2.20) gives

a(θ) = [1, e^{-iω_c d sin θ/c}, . . . , e^{-i(m−1)ω_c d sin θ/c}]^T    (6.2.23)

The restriction of θ to lie in the interval [−90°, 90°] is a limitation of ULAs: two sources at locations symmetric with respect to the array line yield identical sets of delays {τ_k} and hence cannot be distinguished from one another. In practice, this ambiguity of ULAs is eliminated by using sensors that only pass signals whose DOAs are in [−90°, 90°]. Let λ denote the signal wavelength:

λ = c/f_c,    f_c = ω_c/2π    (6.2.24)

(which is the distance traveled by the waveform in one period of the carrier). Define

f_s = f_c d sin θ / c = d sin θ / λ    (6.2.25)

and

ω_s = 2π f_s = ω_c d sin θ / c    (6.2.26)

[Figure 6.5. The uniform linear array scenario.]

With this notation, the transfer vector (6.2.23) can be rewritten as:

a(θ) = [1 e^{-iω_s} . . . e^{-i(m−1)ω_s}]^T    (6.2.27)

This is a Vandermonde vector which is completely analogous with the vector made from the uniform samples of the sinusoidal signal {e^{-iω_s t}}. Let us explore this analogy a bit further. First, by the above analogy, ω_s is called the spatial frequency.
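The ULA transfer vector (6.2.23)/(6.2.27) is simple to compute numerically. The following sketch is our own helper, not taken from the text; the function name and parameter choices are assumptions:

```python
import numpy as np

def ula_steering(theta_deg, m, d, lam):
    """Transfer (steering) vector of an m-element ULA, cf. (6.2.23).

    theta_deg: DOA in degrees measured from broadside;
    d, lam: sensor spacing and carrier wavelength (same units).
    """
    # spatial frequency omega_s = 2*pi*d*sin(theta)/lambda, cf. (6.2.26)
    omega_s = 2.0 * np.pi * d * np.sin(np.deg2rad(theta_deg)) / lam
    return np.exp(-1j * omega_s * np.arange(m))

# For a broadside source (theta = 0), omega_s = 0 and a(theta) reduces
# to the all-ones vector, as (6.2.27) predicts.
a_broadside = ula_steering(0.0, m=10, d=0.5, lam=1.0)
```

Note that the normalization a^*(θ)a(θ) = m used later in (6.3.11) holds automatically here, since every entry has unit modulus.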
Second, if we were to sample a continuous-time sinusoidal signal with frequency ω_c then, in order to avoid aliasing effects, the sampling frequency f_0 should satisfy (by the Shannon sampling theorem):

f_0 ≥ 2f_c    (6.2.28)

or, equivalently,

T_0 ≤ T_c/2    (6.2.29)

where T_0 is the sampling period and T_c is the period of the continuous-time sinusoidal signal. Now, in the ULA case considered in this example, we see from (6.2.27) that the vector a(θ) is uniquely defined (i.e., there is no "spatial aliasing") if and only if ω_s is constrained as follows:

|ω_s| ≤ π    (6.2.30)

However, (6.2.30) is equivalent to

|f_s| ≤ 1/2  ⟺  d |sin θ| ≤ λ/2    (6.2.31)

Note that the above condition on d depends on θ. In particular, for a broadside source (i.e., a source with θ = 0°), (6.2.31) imposes no constraint on d. However, in general we have no knowledge about the DOA of the source signal. Consequently, we would like (6.2.31) to hold for any θ, which leads to the following condition on d:

d ≤ λ/2    (6.2.32)

Since we may think of the ULA as performing a uniform spatial sampling of the wavefield, equation (6.2.32) simply says that the (spatial) sampling period d should be smaller than half of the signal wavelength. By analogy with (6.2.29), this result may be interpreted as a spatial Shannon sampling theorem. Equipped with the array model (6.2.21) derived previously, we can reduce the problem of DOA finding to that of estimating the parameters {θ_k} in (6.2.21). As there is a direct analogy between (6.2.21) and the model (4.2.6) for sinusoidal signals in noise, we may expect that most of the methods developed in Chapter 4 for (temporal) frequency estimation can also be used for DOA estimation. This is shown to be the case in the following sections, which briefly review the most important DOA finding methods.
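The spatial aliasing behind (6.2.30)-(6.2.32) can be checked numerically: with d > λ/2 two distinct DOAs can share the same transfer vector, whereas d = λ/2 keeps them distinct. A small sketch, with the helper and all parameter values being our own assumptions:

```python
import numpy as np

def ula_steering(theta_deg, m, d, lam):
    # ULA steering vector, cf. (6.2.23)
    omega_s = 2.0 * np.pi * d * np.sin(np.deg2rad(theta_deg)) / lam
    return np.exp(-1j * omega_s * np.arange(m))

m, lam = 8, 1.0

# d = lam violates (6.2.32): theta = +30 and theta = -30 degrees give
# omega_s = +pi and -pi, the same spatial frequency modulo 2*pi.
a_plus = ula_steering(30.0, m, d=1.0, lam=lam)
a_minus = ula_steering(-30.0, m, d=1.0, lam=lam)
ambiguous = bool(np.allclose(a_plus, a_minus))   # the DOAs are indistinguishable

# With d = lam/2 (the spatial Shannon condition) the same pair of DOAs
# yields distinct steering vectors.
b_plus = ula_steering(30.0, m, d=0.5, lam=lam)
b_minus = ula_steering(-30.0, m, d=0.5, lam=lam)
distinct = not np.allclose(b_plus, b_minus)
```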
6.3 NONPARAMETRIC METHODS

The methods to be described in this section do not make any assumption on the covariance structure of the data. As such, they may be considered to be "nonparametric". On the other hand, they assume that the functional form of the array's transfer vector a(θ) is known. Can we then still categorize them as "nonparametric methods"? The array performs a spatial sampling of the incoming wavefront, which is analogous to the temporal sampling done by the tapped-delay line implementation of a (temporal) finite impulse response (FIR) filter; see Figure 6.6. Thus, assuming that the form of a(θ) is available is no more restrictive than making the same assumption for a(ω) in Figure 6.6a. In conclusion, the functional form of a(θ) characterizes the array as a spatial sampling device, and assuming it is known should not be considered to be parametric (or model-based) information. As already mentioned, an array for which the functional form of a(θ) is known is said to be calibrated.

[Figure 6.6. Analogy between temporal sampling and filtering and the corresponding (spatial) operations performed by an array of sensors. (a) Temporal filter: a tapped-delay line with input u(t) = e^{iωt}, delay vector a(ω) = [1 e^{-iω} . . . e^{-i(m−1)ω}]^T, weights h_0, . . . , h_{m−1}, and output [h^* a(ω)]u(t). (b) Spatial filter.]
Figure 6.6 also makes an analogy between temporal FIR filtering and spatial filtering using an array of sensors. In what follows, we comment briefly on this analogy since it is of interest for the nonparametric approach to DOA finding. In the time series case, a FIR filter is defined by the relation

y_F(t) = sum_{k=0}^{m−1} h_k u(t − k) ≜ h^* y(t)    (6.3.1)

where {h_k} are the filter weights, u(t) is the input to the filter, and

h = [h_0 . . . h_{m−1}]^*    (6.3.2)
y(t) = [u(t) . . . u(t − m + 1)]^T    (6.3.3)

Similarly, we can use the spatial samples {y_k(t)}_{k=1}^m obtained with a sensor array to define a spatial filter:

y_F(t) = h^* y(t)    (6.3.4)

A temporal filter can be made to enhance or attenuate some selected frequency bands by appropriately choosing the vector h. More precisely, since the filter output for a sinusoidal input u(t) is given by

y_F(t) = [h^* a(ω)] u(t)    (6.3.5)

(where a(ω) is as defined, for instance, in Figure 6.6), then by selecting h so that h^* a(ω) is large (small) we can enhance (attenuate) the power of y_F(t) at frequency ω. In direct analogy with (6.3.5), the (noise-free) spatially filtered output (as in (6.3.4)) of an array illuminated by a narrowband wavefront with complex envelope s(t) and DOA equal to θ is given by (cf. (6.2.19)):

y_F(t) = [h^* a(θ)] s(t)    (6.3.6)

This equation clearly shows that the spatial filter can be selected to enhance (attenuate) the signals coming from a given direction θ, by making h^* a(θ) in (6.3.6) large (small). This observation lies at the basis of the DOA finding methods to be described in this section. All of these methods can be derived by using the filter bank approach of Chapter 5. More specifically, assume that a filter h has been found such that

(i) It passes undistorted the signals with a given DOA θ; and
(ii) It attenuates all the other DOAs different from θ as much as possible.
(6.3.7)

Then, the power of the spatially filtered signal in (6.3.4),

E{|y_F(t)|²} = h^* R h,    R = E{y(t) y^*(t)}    (6.3.8)

should give a good indication of the energy coming from direction θ. (Note that θ enters in (6.3.8) via h.) Hence, h^* R h should peak at the DOAs of the sources located in the array's viewing field when evaluated over the DOA range of interest. This fact may be exploited for the purpose of DOA finding. Depending on the specific way in which the (loose) design objectives in (6.3.7) are formulated, the above approach can lead to different DOA estimation methods. In the following, we present spatial extensions of the periodogram and Capon techniques. The RFB method of Chapter 5 may also be extended to the spatial processing case, provided the array's geometry is such that the transfer vector a(θ + α) can be factored as

a(θ + α) = D(θ) a(α)    (6.3.9)

where D is a unitary (possibly diagonal) matrix. Without such a property, the RFB spatial filter should be computed, for each θ, by solving an m × m eigendecomposition problem, which would be computationally prohibitive in most applications. Since it is not a priori obvious that an arbitrary array satisfies (6.3.9), we do not consider the RFB approach in what follows.¹ Finally, we remark that a spatial filter satisfying the design objectives in (6.3.7) can be viewed as forming a (reception) beam in the direction θ, as pictorially indicated in Figure 6.7. Because of this interpretation, the methods resulting from this approach to the DOA finding problem, in particular the method of the next subsection, are called beamforming methods [Van Veen and Buckley 1988; Johnson and Dudgeon 1992].
6.3.1 Beamforming

In view of (6.3.6), condition (i) of the filter design problem (6.3.7) can be formulated as:

h^* a(θ) = 1    (6.3.10)

In what follows, we assume that the transfer vector a(θ) has been normalized so that

a^*(θ) a(θ) = m    (6.3.11)

Note that in the case of an array with identical sensors, the condition (6.3.11) is automatically met (cf. (6.2.20)). Regarding condition (ii) in (6.3.7), if y(t) in (6.3.8) were spatially white with R = I, then we would obtain the following expression for the power of the filtered signal:

E{|y_F(t)|²} = h^* h    (6.3.12)

which is different from zero for every θ (note that we cannot have h = 0, because of condition (6.3.10)). This fact indicates that a spatially white signal in the array output can be considered as impinging on the array with equal power from all directions θ (in the same manner as a temporally white signal in the array output contains equal power in all frequency bands). We deduce from this observation that a natural mathematical formulation of condition (ii) would be to require that h minimizes the power in (6.3.12).

¹Referring back to Chapter 5 may prove useful for understanding these comments on RFB and for several other discussions in this section.

[Figure 6.7. The response magnitude |h^* a(θ)|, versus θ, of a spatial filter (or beamformer). Here, h = a(θ_0), where θ_0 = 25° is the DOA of interest; the array is a 10-element ULA with d = λ/2.]
Hence, we are led to the following design problem:

min_h h^* h  subject to  h^* a(θ) = 1    (6.3.13)

As (6.3.13) is a special case of the optimization problem (5.4.7) in Chapter 5, we obtain the solution to (6.3.13) from (5.4.8) as:

h = a(θ) / a^*(θ) a(θ)    (6.3.14)

By making use of (6.3.11), (6.3.14) reduces to

h = a(θ)/m    (6.3.15)

which, when inserted in (6.3.8), gives

E{|y_F(t)|²} = a^*(θ) R a(θ) / m²    (6.3.16)

The theoretical covariance matrix R in (6.3.16) cannot be (exactly) determined from the available finite sample {y(t)}_{t=1}^N and hence it must be replaced by some estimate, such as

R̂ = (1/N) sum_{t=1}^N y(t) y^*(t)    (6.3.17)

By doing so and omitting the factor 1/m² in (6.3.16), which has no influence on the DOA estimates, we obtain the beamforming method, which determines the DOAs as summarized in the next box.

The beamforming DOA estimates are given by the locations of the n highest peaks of the function

a^*(θ) R̂ a(θ)    (6.3.18)

When the estimated spatial spectrum in (6.3.18) is compared to the expression derived in Section 5.4 for the Blackman–Tukey periodogram, it is seen that beamforming is a direct (spatial) extension of the periodogram. In fact, the function in (6.3.18) may be thought of as being obtained by averaging the "spatial periodograms"

|a^*(θ) y(t)|²    (6.3.19)

over the set of available "snapshots" (t = 1, . . . , N). The connection established in the previous paragraph, between beamforming and the (averaged) periodogram, suggests that the resolution properties of the beamforming method are analogous to those of the periodogram method. In fact, by an analysis similar to that in Chapters 2 and 5, it can be shown that the beamwidth² of the spatial filter used by beamforming is approximately equal to the inverse of the array's aperture (as measured in signal wavelengths).
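As a minimal illustration of the boxed estimator, the following sketch simulates a single source received by a ULA, forms R̂ as in (6.3.17), and scans (6.3.18) over a grid of candidate DOAs. All simulation parameters (array size, noise level, random seed) are our own assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, d = 10, 200, 0.5           # d in wavelengths, so d = lambda/2
theta_true = 20.0                # assumed single-source DOA, degrees

def steer(theta_deg):
    # ULA steering vector, cf. (6.2.23)
    omega_s = 2.0 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(-1j * omega_s * np.arange(m))

# simulated snapshots y(t) = a(theta)s(t) + e(t), cf. (6.2.19)
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
e = 0.1 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
Y = np.outer(steer(theta_true), s) + e

R_hat = Y @ Y.conj().T / N        # sample covariance, cf. (6.3.17)
grid = np.arange(-90.0, 90.5, 0.5)
# beamforming spatial spectrum a*(theta) R_hat a(theta), cf. (6.3.18)
spectrum = np.array([np.real(np.vdot(steer(t), R_hat @ steer(t))) for t in grid])
theta_hat = float(grid[np.argmax(spectrum)])   # location of the highest peak
```

With a single source and a reasonable SNR, the peak location should fall close to the true DOA.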
This sets a limit on the resolution achievable with beamforming, as indicated below (see Exercise 6.2):

Beamforming DOA resolution limit ≃ wavelength / array "length"    (6.3.20)

Next, we note that as N increases, the sample spatial spectrum in (6.3.18) converges (under mild conditions) to (6.3.16), uniformly in θ. Hence the beamforming estimates of the DOAs converge to the n maximum points of (6.3.16) as N tends to infinity. If the array model (6.2.21) holds (it has not been used so far!), if the noise e(t) is spatially white with the same power σ² in all sensors, and if there is only one source (with DOA denoted by θ_0, for convenience), then R in (6.3.16) is given by

R = a(θ_0) a^*(θ_0) P + σ² I    (6.3.21)

where P = E{|s(t)|²} denotes the signal power. Hence,

a^*(θ) R a(θ) = |a^*(θ) a(θ_0)|² P + a^*(θ) a(θ) σ²
             ≤ |a^*(θ) a(θ)| |a^*(θ_0) a(θ_0)| P + σ² a^*(θ) a(θ)
             = m(mP + σ²)    (6.3.22)

where the inequality follows from the Cauchy–Schwarz lemma (see Result R22 in Appendix A) and the last equality from (6.3.11). The upper bound in (6.3.22) is achieved for a(θ) = a(θ_0) which, under mild conditions, implies θ = θ_0. In conclusion, the beamforming DOA estimate is consistent under the previous assumptions (n = 1, etc.). In the general case of multiple sources, however, the DOA estimates obtained with beamforming are inconsistent. The (asymptotic) bias of these estimates may be significant if the sources are strongly correlated or closely spaced. As explained above, beamforming is the spatial analog of the Blackman–Tukey periodogram (with a certain covariance estimate) and of the Bartlett periodogram (if we interpret the m-dimensional snapshots in (6.3.19) as "subsamples" of the available "sample" [y^T(1), . . . , y^T(N)]^T).

²The beamwidth is the spatial counterpart of the temporal notion of bandwidth associated with a bandpass filter.
Note, however, that the value of m in the periodogram methods can be chosen by the user, whereas in the beamforming method m is fixed. This difference might seem small at first, but it has a significant impact on the consistency properties of beamforming. More precisely, it can be shown that, for instance, the Bartlett periodogram estimates of temporal frequencies are consistent under the model (4.2.7), provided that m increases without bound as the number of samples N tends to infinity (e.g., we can set m = N, which yields the unmodified periodogram).³ For beamforming, on the other hand, the value of m (i.e., the number of array elements) is limited by physical considerations. This prevents beamforming from providing consistent DOA estimates in the multiple-signal case. An additional difficulty is that in the spatial scenario the signals can be correlated with one another, whereas they are always uncorrelated in the temporal frequency estimation case. Explaining why this is so and completing a consistency analysis of the beamforming DOA estimates is left as an exercise for the reader. Now, if the model (6.2.21) holds, if the minimum DOA separation is larger than the array beamwidth (which implies that m is sufficiently large), if the signals are uncorrelated, and if the noise is spatially white, then it is readily seen that the multiple-source spectrum (6.3.16) decouples (approximately) into n single-source spectra; this means that beamforming may provide reasonably accurate DOA estimates in such a case. In fact, in this case beamforming can be shown to provide an approximation to the nonlinear LS DOA estimation method discussed in Section 6.4.1; see the remark in that section.

6.3.2 Capon Method

The derivation of the Capon method for array signal processing is entirely analogous with the derivation of the Capon method for the time series data case developed in Section 5.4 [Capon 1969; Lacoss 1971].
The Capon spatial filter design problem is the following:

min_h h^* R h  subject to  h^* a(θ) = 1    (6.3.23)

Hence, objective (i) in the general design problem (6.3.7) is ensured by constraining the filter exactly as in the beamforming approach (see (6.3.10)). Objective (ii) in (6.3.7), however, is accomplished in a sounder way: by requiring the filter to minimize the output power when fed with the actual array data {y(t)}. Hence, in the Capon approach, objective (ii) is formulated in a "data-dependent" way, whereas it is formulated independently of the data in the beamforming method. As a consequence, the goal of the Capon filter steered to a certain direction θ is to attenuate any other signal that actually impinges on the array from a DOA ≠ θ, whereas the beamforming filter pays uniform attention to all other DOAs ≠ θ, even though there might be no incoming signal for many of those DOAs. The solution to (6.3.23), as derived in Section 5.4, is given by

h = R^{-1} a(θ) / a^*(θ) R^{-1} a(θ)    (6.3.24)

which, when inserted in the output power formula (6.3.8), leads to

E{|y_F(t)|²} = 1 / a^*(θ) R^{-1} a(θ)    (6.3.25)

It only remains to replace R in (6.3.25) by a sample estimate, such as R̂ in (6.3.17), to obtain the Capon DOA estimator.

³The unmodified periodogram is an inconsistent estimator for continuous PSDs (as shown in Chapter 2). However, as asserted above, the plain periodogram estimates of discrete (or line) PSDs are consistent. Showing this is left as an exercise to the reader. (Make use of the covariance matrix model (4.2.7) with m → ∞, and the fact that the Fourier (or Vandermonde) vectors, at different frequencies, become orthogonal to one another as their dimension increases.)
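A sketch of the resulting estimator: the snippet below simulates two uncorrelated sources on a ULA and evaluates the Capon spatial spectrum 1/a^*(θ)R̂^{-1}a(θ) over a DOA grid. The scenario parameters (spacing, noise level, seed) are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
m, N, d = 10, 500, 0.5
doas = [0.0, 15.0]               # two assumed uncorrelated sources

def steer(theta_deg):
    # ULA steering vector, cf. (6.2.23)
    omega_s = 2.0 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(-1j * omega_s * np.arange(m))

A = np.column_stack([steer(t) for t in doas])
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
E = 0.05 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
Y = A @ S + E                    # y(t) = A s(t) + e(t), cf. (6.2.21)

R_hat = Y @ Y.conj().T / N       # R_hat is invertible here since N >= m
R_inv = np.linalg.inv(R_hat)
grid = np.arange(-90.0, 90.25, 0.25)
# Capon spatial spectrum 1 / (a*(theta) R_hat^{-1} a(theta)), cf. (6.3.26)
capon = np.array([1.0 / np.real(np.vdot(steer(t), R_inv @ steer(t)))
                  for t in grid])
# the n = 2 largest peaks of `capon` estimate the DOAs
```

At a reasonable SNR, the spectrum should show two peaks near the true DOAs with a dip between them, illustrating the sharper response of Capon relative to beamforming.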
The Capon DOA estimates are obtained as the locations of the n largest peaks of the following function:

1 / a^*(θ) R̂^{-1} a(θ)    (6.3.26)

There is an implicit assumption in (6.3.26) that R̂^{-1} exists, but this can be ensured under weak conditions (in particular, R̂^{-1} exists with probability 1 if N ≥ m and if the noise term has a positive definite spatial covariance matrix). Note that the "spatial spectrum" in (6.3.26) corresponds to the "CM–Version 1" PSD in the time series case (see equation (5.4.12) in Section 5.4). A Capon spatial spectrum similar to the "CM–Version 2" PSD formula (see (5.4.17)) might also be derived, but it appears to be more complicated than the time series formula if the array is not a ULA. Capon DOA estimation has been empirically found to possess superior performance as compared with beamforming. The common advantage of these two nonparametric methods is that they do not assume anything about the statistical properties of the data and, therefore, they can be used in situations where we lack information about these properties. On the other hand, in the cases where such information is available, for example in the form of a covariance model of the data, a nonparametric approach does not give the performance that one can achieve with a parametric (model-based) approach. The parametric approach to DOA estimation is the subject of the next section.

6.4 PARAMETRIC METHODS

In this section, we postulate the array model (6.2.21). Furthermore, the noise e(t) is assumed to be spatially white, with components having identical variance:

E{e(t) e^*(t)} = σ² I    (6.4.1)

In addition, the signal covariance matrix

P = E{s(t) s^*(t)}    (6.4.2)

is assumed to be nonsingular (but not necessarily diagonal; hence the signals may be (partially) correlated). When the signals are fully correlated, so that P is singular, they are said to be coherent.
Finally, we assume that the signals and the noise are uncorrelated with one another. Under the previous assumptions, the theoretical covariance matrix of the array output vector is given by

R = E{y(t) y^*(t)} = A P A^* + σ² I    (6.4.3)

There is a direct analogy between the array models above, (6.2.21) and (6.4.3), and the corresponding models encountered in our discussion of the sinusoids-in-noise case in Chapter 4. More specifically, the "nonlinear regression" model (6.2.21) of the array is analogous to (4.2.6), and the array covariance model (6.4.3) is much the same as (4.2.7). The consequence of these analogies is that all methods introduced in Chapter 4 for frequency estimation can also be used for DOA estimation without any essential modification. In the following, we briefly review these methods with a view to pointing out any differences from the frequency estimation application. When the assumed array model is a good representation of reality, the parametric DOA estimation methods reviewed in the sequel provide highly accurate DOA estimates, even in adverse situations (such as low-SNR scenarios). As our main thrust in this text has been the understanding of the basic ideas behind the presented spectral estimation methodologies, we do not dwell on the details of the analysis required to establish the statistical properties of the DOA estimators discussed in the following; see, however, Appendix B for a discussion of the Cramér–Rao bound and the best accuracy achievable in DOA estimation problems. Such analysis details are available in [Stoica and Nehorai 1989a; Stoica and Nehorai 1990; Stoica and Sharman 1990; Stoica and Nehorai 1991; Viberg and Ottersten 1991; Rao and Hari 1993]. For reviews of many of the recent advances in spatial spectral analysis, the reader can consult [Pillai 1989], [Ottersten, Viberg, Stoica, and Nehorai 1993], and [Van Trees 2002].
6.4.1 Nonlinear Least Squares Method

This method determines the unknown DOAs as the minimizing elements of the following function:

f = (1/N) sum_{t=1}^N ||y(t) − A s(t)||²    (6.4.4)

Minimization with respect to {s(t)} gives (see Result R32 in Appendix A)

s(t) = (A^* A)^{-1} A^* y(t),    t = 1, . . . , N    (6.4.5)

By inserting (6.4.5) into (6.4.4), we get the following concentrated nonlinear least squares (LS) criterion:

f = (1/N) sum_{t=1}^N ||[I − A(A^* A)^{-1} A^*] y(t)||²
  = (1/N) sum_{t=1}^N y^*(t) [I − A(A^* A)^{-1} A^*] y(t)
  = tr{[I − A(A^* A)^{-1} A^*] R̂}    (6.4.6)

The second equality in (6.4.6) follows from the fact that the matrix I − A(A^* A)^{-1} A^* is idempotent (it is the orthogonal projector onto N(A^*)), and the third from the properties of the trace operator (see Result R8 in Appendix A). It follows from (6.4.6) that the nonlinear LS DOA estimates are given by

{θ̂_k} = arg max_{θ_k} tr[A(A^* A)^{-1} A^* R̂]    (6.4.7)

Remark: Similar to the frequency estimation case, it can be shown that beamforming provides an approximate solution to the previous nonlinear LS problem whenever the DOAs are known to be well separated. To see this, let us assume that we restrict the search for the maximizers of (6.4.7) to a set of well-separated DOAs (according to the a priori information that the true DOAs belong to this set). In such a set, A^* A ≃ mI under weak conditions, and hence the function in (6.4.7) can approximately be written as:

tr[A(A^* A)^{-1} A^* R̂] ≃ (1/m) sum_{k=1}^n a^*(θ_k) R̂ a(θ_k)

Paralleling the discussion following equation (4.3.16) in Chapter 4, we can show that the beamforming DOA estimates maximize the right-hand side of the above equation over the set under consideration. With this observation, the proof of the fact that the computationally efficient beamforming method provides an approximate solution to (6.4.7) in scenarios with well-separated DOAs is concluded.
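The concentrated criterion (6.4.7) can be evaluated directly through the orthogonal projector A(A^*A)^{-1}A^*. Below is a sketch (the function name and the test scenario are our own assumptions) that, for a noise-free single-source covariance of the form (6.4.3), recovers the DOA by a grid search over (6.4.7):

```python
import numpy as np

def nls_criterion(thetas_deg, R, m, d=0.5):
    """Evaluate tr[A(A*A)^{-1}A* R] from (6.4.7) for a trial DOA set.

    ULA steering vectors are assumed; d is the spacing in wavelengths.
    """
    A = np.column_stack(
        [np.exp(-2j * np.pi * d * np.sin(np.deg2rad(t)) * np.arange(m))
         for t in np.atleast_1d(thetas_deg)])
    # orthogonal projector onto the column space of A
    P_A = A @ np.linalg.solve(A.conj().T @ A, A.conj().T)
    return float(np.real(np.trace(P_A @ R)))

# Noise-free single-source covariance R = a(theta0)a*(theta0) + sigma^2 I:
m, theta0 = 8, 10.0
a0 = np.exp(-2j * np.pi * 0.5 * np.sin(np.deg2rad(theta0)) * np.arange(m))
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(m)
grid = np.arange(-90.0, 90.5, 0.5)
theta_hat = float(grid[np.argmax([nls_criterion(t, R, m) for t in grid])])
```

For n > 1 the same function accepts a list of trial DOAs, at the cost of an n-dimensional search, which is exactly why the beamforming approximation in the remark above is attractive for well-separated sources.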
One difference between (6.4.7) and the corresponding optimization problem in the frequency estimation application (see (4.3.8) in Section 4.3) lies in the fact that in the frequency estimation application only one "snapshot" of data is available, in contrast to the N snapshots available in the DOA estimation application. Another, more important difference is that for non-ULA cases the matrix A in (6.4.7) does not have the Vandermonde structure of the corresponding matrix in (4.3.8). As a consequence, several of the algorithms used to (approximately) solve the frequency estimation problem (such as those in [Kumaresan, Scharf, and Shaw 1986] and [Bresler and Macovski 1986]) are no longer applicable to solving (6.4.7), unless the array is a ULA.

6.4.2 Yule–Walker Method

The matrix Γ, which lies at the basis of the Yule–Walker method (see Section 4.4), can be constructed from any block of R in (6.4.3) that does not include diagonal elements. To be more precise, partition the array model (6.2.21) into the following two nonoverlapping parts:

y(t) = [ȳ(t); ỹ(t)] = [Ā; Ã] s(t) + [ē(t); ẽ(t)]    (6.4.8)

(where "[·; ·]" denotes vertical stacking). Since ē(t) and ẽ(t) are uncorrelated (by assumption), we have

Γ ≜ E{ȳ(t) ỹ^*(t)} = Ā P Ã^*    (6.4.9)

which is assumed to be of dimension M × L (with M + L = m). For

M > n,  L > n    (6.4.10)

(which cannot hold unless m > 2n), the rank of Γ is equal to n (under weak conditions), and the (L − n)-dimensional null space of this matrix contains complete information about the DOAs. To see this, let G be an L × (L − n) matrix whose columns form a basis of N(Γ) (G can be obtained from the SVD of Γ; see Result R15 in Appendix A).
Then we have ΓG = 0, which implies (using the fact that rank(ĀP) = n):

Ã^* G = 0

This observation can be used, in the manner of Sections 4.4 (YW) and 4.5 (MUSIC), to estimate the DOAs from a sample estimate of Γ, such as

Γ̂ = (1/N) sum_{t=1}^N ȳ(t) ỹ^*(t)    (6.4.11)

Unlike all the other methods discussed in the following, the Yule–Walker method does not impose the rather stringent condition (6.4.1). The Yule–Walker method requires only that E{ē(t) ẽ^*(t)} = 0, which is a much weaker assumption. This is a distinct advantage of the Yule–Walker method (see [Viberg, Stoica, and Ottersten 1995] for details). Its relative drawback is that it can only be used if m > 2n (all the other methods require only that m > n); in general, it has been found to provide accurate DOA estimates only in those applications involving large-aperture arrays. Interestingly enough, whenever the condition (6.4.1) holds (i.e., the noise at the array output is spatially white) we can use a modification of the above technique that does not require m > 2n [Fuchs 1996]. To see this, let

Γ̃ ≜ E{y(t) ỹ^*(t)} = R [0; I_L]    (m × L)

where ỹ(t) is as defined in (6.4.8); hence Γ̃ is made from the last L columns of R. By making use of the expression (6.4.3) for R, we obtain

Γ̃ = A P Ã^* + σ² [0; I_L]    (6.4.12)

Because the noise terms in y(t) and ỹ(t) are correlated, the noise is still present in Γ̃ (as can be seen from (6.4.12)), and hence Γ̃ is not really a YW matrix. Nevertheless, Γ̃ has a property similar to that of the YW matrix Γ above, as we now show. First observe that

Γ̃^* Γ̃ = Ã (2σ² P + P A^* A P) Ã^* + σ⁴ I

The matrix 2σ² P + P A^* A P is readily shown to be nonsingular if and only if P is nonsingular.
As Γ̃^* Γ̃ has the same form as R in (6.4.3), we conclude that (for m ≥ L > n) the L × (L − n) matrix G̃, whose columns are the eigenvectors of Γ̃^* Γ̃ corresponding to the multiple minimum eigenvalue σ⁴, satisfies

Ã^* G̃ = 0    (6.4.13)

The columns of G̃ are also equal to the (L − n) right singular vectors of Γ̃ corresponding to the multiple minimum singular value σ². For numerical precision reasons, G̃ should be computed from the singular vectors of Γ̃ rather than from the eigenvectors of Γ̃^* Γ̃ (see Section A.8.2). Because (6.4.13) has the same form as Ã^* G = 0, we can use (6.4.13) for subspace-based DOA estimation in exactly the same way as we used Ã^* G = 0 (see equation (4.5.6) and the discussion following it in Chapter 4). Note that for the method based on Γ̃ to be usable, we require only that

m ≥ L > n    (6.4.14)

instead of the more restrictive conditions {m − L > n, L > n} (see (6.4.10)) required by the YW method based on Γ. Observe that (6.4.14) can always be satisfied if m > n, whereas (6.4.10) requires that m > 2n. Finally, note that Γ is made from the first m − L rows of Γ̃, and hence Γ contains "less information" than Γ̃; this provides a quick intuitive explanation of why the method based on Γ requires more sensors to be applicable than does the method based on Γ̃.

6.4.3 Pisarenko and MUSIC Methods

The MUSIC algorithm (with Pisarenko as a special case), developed in Section 4.5 for the frequency estimation application, can be used without modification for DOA estimation [Bienvenu 1979; Schmidt 1979; Barabell 1983]. There are only minor differences between the DOA and the frequency estimation applications of MUSIC, as pointed out below. First, in the spatial application we can choose between the Spectral and Root MUSIC estimators only in the case of a ULA. For most of the other array geometries, only Spectral MUSIC is applicable.
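To make the spatial use of Spectral MUSIC concrete, here is a sketch that forms the noise subspace from the eigenvectors of R̂ belonging to its m − n smallest eigenvalues and locates the two largest peaks of the pseudospectrum 1/||G^*a(θ)||². The simulation scenario and peak-picking logic are our own assumptions, not the book's code:

```python
import numpy as np

rng = np.random.default_rng(2)
m, N, d, n = 10, 400, 0.5, 2
doas = [-20.0, 25.0]             # assumed true DOAs for the simulation

def steer(theta_deg):
    omega_s = 2.0 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(-1j * omega_s * np.arange(m))

A = np.column_stack([steer(t) for t in doas])
S = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
E = 0.1 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
Y = A @ S + E
R_hat = Y @ Y.conj().T / N

# noise subspace: eigenvectors of the m - n smallest eigenvalues of R_hat
eigvals, V = np.linalg.eigh(R_hat)          # eigenvalues in ascending order
G = V[:, : m - n]
grid = np.arange(-90.0, 90.25, 0.25)
pseudo = np.array([1.0 / np.linalg.norm(G.conj().T @ steer(t)) ** 2
                   for t in grid])

# pick the two largest local maxima of the pseudospectrum
peaks = [i for i in range(1, len(grid) - 1)
         if pseudo[i] >= pseudo[i - 1] and pseudo[i] > pseudo[i + 1]]
peaks.sort(key=lambda i: pseudo[i])
doa_hat = sorted(float(grid[i]) for i in peaks[-2:])
```

The pseudospectrum is not a power spectrum; its peaks only mark the DOAs at which a(θ) is (nearly) orthogonal to the noise subspace.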
Second, the standard MUSIC algorithm (4.5.15) breaks down in the case of coherent signals, as in that case the rank condition (4.5.1) no longer holds. (Such a situation cannot happen in the frequency estimation application, because P is always (diagonal and) nonsingular there.) However, the modified MUSIC algorithm (outlined at the end of Section 4.5) can be used when the signals are coherent, provided that the array is uniform and linear. This is so because the property (4.5.23), on which the modified MUSIC algorithm is based, continues to hold even if P is singular (see Exercise 6.14).

6.4.4 Min–Norm Method

There is no essential difference between the use of the Min–Norm method for frequency estimation and for DOA finding in the noncoherent case. As for MUSIC, in the DOA estimation application the Min–Norm method should not be used in scenarios with coherent signals, and the Root Min–Norm algorithm can only be used in the ULA case [Kumaresan and Tufts 1983]. In addition, the key property that the true DOAs are asymptotically the unique solutions of the Min–Norm estimation problem holds in the ULA case (see Complement 6.5.1), but not necessarily for other array geometries.

6.4.5 ESPRIT Method

In the ULA case, ESPRIT can be used for DOA estimation exactly as it is for frequency estimation (see Section 4.7). In the non-ULA case, ESPRIT can be used only in certain situations. More precisely, and unlike the other algorithms in this section, ESPRIT can be used for DOA finding only if the array at hand contains two identical subarrays which are displaced by a known displacement vector [Roy and Kailath 1989; Stoica and Nehorai 1991]. Mathematically, this condition can be formulated as follows. Let m̄ denote the number of sensors in the two twin subarrays, and let A_1 and A_2 denote the submatrices of A corresponding to these subarrays.
Since the sensors in the array are arbitrarily numbered, there is no restriction in assuming that A_1 is made from the first m̄ rows of A and A_2 from the last m̄:

A_1 = [I_m̄ 0] A    (m̄ × n)    (6.4.15)
A_2 = [0 I_m̄] A    (m̄ × n)    (6.4.16)

(here I_m̄ denotes the m̄ × m̄ identity matrix). Note that the two subarrays overlap if m̄ > m/2; otherwise, they might not overlap. If the array is purposely built to meet ESPRIT's subarray condition, then normally m̄ = m/2 and the two subarrays are nonoverlapping. Mathematically, the ESPRIT requirement means that

A_2 = A_1 D    (6.4.17)

where

D = diag(e^{-iω_c τ(θ_1)}, . . . , e^{-iω_c τ(θ_n)})    (6.4.18)

and where τ(θ) denotes the time needed by a wavefront impinging upon the array from the direction θ to travel between (the "reference points" of) the two twin subarrays. If the angle of arrival θ is measured with respect to the perpendicular of the line between the subarrays' center points, then a calculation similar to the one that led to (6.2.22) shows that:

τ(θ) = d sin(θ)/c    (6.4.19)

where d is the distance between the two subarrays. Hence, estimates of the DOAs can readily be derived from estimates of the diagonal elements of D in (6.4.18). Equations (6.4.17) and (6.4.18) are basically equivalent to (4.7.3) and (4.7.4) in Section 4.7, and hence the ESPRIT DOA estimation method is analogous to the ESPRIT frequency estimator. The ESPRIT DOA estimation method, like the ESPRIT frequency estimator, determines the DOA estimates by solving an n × n eigenvalue problem. There is no search involved, in contrast to the previous methods; in addition, there is no problem of separating the "signal DOAs" from the "noise DOAs", once again in contrast to the Yule–Walker, MUSIC, and Min–Norm methods. However, unlike these other methods, ESPRIT can only be used with the special array configuration described earlier.
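For a ULA, the two subarrays can be taken as sensors 1, . . . , m − 1 and 2, . . . , m (maximally overlapping, m̄ = m − 1). The sketch below follows this construction on simulated data: it estimates the signal subspace, solves E_{s1}Φ = E_{s2} in the least squares sense, and maps the eigenvalues of Φ (estimates of e^{-iω_s(θ_k)}, cf. (6.4.17)-(6.4.18)) to DOAs. The least-squares (rather than total-least-squares) solution and all scenario parameters are our own choices:

```python
import numpy as np

rng = np.random.default_rng(3)
m, N, d, n = 10, 400, 0.5, 2
doas = [-10.0, 30.0]             # assumed true DOAs

def steer(theta_deg):
    omega_s = 2.0 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(-1j * omega_s * np.arange(m))

A = np.column_stack([steer(t) for t in doas])
S = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
E = 0.1 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
Y = A @ S + E
R_hat = Y @ Y.conj().T / N

# signal subspace: eigenvectors of the n largest eigenvalues of R_hat
_, V = np.linalg.eigh(R_hat)
Es = V[:, -n:]
# rows 1..m-1 and 2..m of Es correspond to the two overlapping subarrays
Es1, Es2 = Es[:-1, :], Es[1:, :]
Phi = np.linalg.lstsq(Es1, Es2, rcond=None)[0]   # LS solution of Es1 Phi = Es2
ev = np.linalg.eigvals(Phi)                      # estimates of exp(-i*omega_s_k)
sin_hat = -np.angle(ev) / (2.0 * np.pi * d)
doa_hat = sorted(float(t) for t in np.rad2deg(np.arcsin(sin_hat)))
```

As the text notes, no search over θ is involved: the DOAs come out of an n × n eigenvalue problem.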
In particular, this requirement limits the number of resolvable sources to n < m̄ (as both A_1 and A_2 must have full column rank). Note that the two subarrays do not need to be calibrated although they need to be identical, and ESPRIT may be sensitive to differences between the two subarrays in the same way as Yule–Walker, MUSIC, and Min–Norm are sensitive to imperfections in array calibration. Finally, note that similar to the other DOA finding algorithms presented in this section (with the exception of the NLS method), ESPRIT is not usable in the case of coherent signals.

6.5 COMPLEMENTS

6.5.1 On the Minimum Norm Constraint

As explained in Section 6.4.4 the Root Min–Norm (temporal) frequency estimator, introduced in Section 4.6, can without modification be used for DOA estimation with a uniform linear array. Using the definitions and notation in Section 4.6, let ĝ = [1 ĝ_1 ... ĝ_{m−1}]^T denote the vector in R(Ĝ) that has first element equal to one and minimum Euclidean norm. Then, the Root Min–Norm DOA estimates are obtained from the roots of the polynomial

    ĝ(z) = 1 + ĝ_1 z^{−1} + · · · + ĝ_{m−1} z^{−(m−1)}    (6.5.1)

which are located nearest the unit circle. (See the description of Min–Norm in Section 4.6.) As N increases, the polynomial in (6.5.1) approaches

    g(z) = 1 + g_1 z^{−1} + · · · + g_{m−1} z^{−(m−1)}    (6.5.2)

where g = [1 g_1 ... g_{m−1}]^T is the minimum-norm vector in R(G). In this complement we show that (6.5.2) has n zeroes at {e^{−iω_k}}_{k=1}^n (the so-called "signal zeroes") and (m − n − 1) extraneous zeroes situated strictly inside the unit circle (the latter are normally called "noise zeroes"); here {ω_k}_{k=1}^n are either temporal frequencies, or spatial frequencies as in (6.2.27).

Let g = [1, g_1, ..., g_{m−1}]^T ∈ R(G). Then (4.2.4) and (4.5.6) imply that

    a*(ω_k) [1 g_1 ... g_{m−1}]^T = 0  ⟺  1 + g_1 e^{iω_k} + · · · + g_{m−1} e^{i(m−1)ω_k} = 0    (for k = 1, ..., n)    (6.5.3)

Hence, any polynomial g(z) whose coefficient vector belongs to R(G) must have zeroes at {e^{−iω_k}}_{k=1}^n, and thus it can be factored as:

    g(z) = g_s(z) g_n(z)    (6.5.4)

where

    g_s(z) = ∏_{k=1}^n (1 − e^{−iω_k} z^{−1})

The (m − n − 1)-degree polynomial g_n(z) in (6.5.4) contains the noise zeroes, and at this point is arbitrary. (As the coefficients of g_n(z) vary, the vectors made from the corresponding coefficients of g(z) span R(G).) Next, assume that g satisfies the minimum norm constraint:

    ∑_{k=0}^{m−1} |g_k|² = min    (g_0 ≜ 1)    (6.5.5)

By using Parseval's theorem (see (1.2.6)), we can rewrite (6.5.5) as follows:

    (1/2π) ∫_{−π}^{π} |g(ω)|² dω = min  ⟺  (1/2π) ∫_{−π}^{π} |g_n(ω)|² |g_s(ω)|² dω = min    (6.5.6)

(where, by convention, g(ω) = g(z)|_{z = e^{iω}}). Since g_s(z) in (6.5.4) is fixed, the minimization in (6.5.6) is over g_n(z). To proceed, some additional notation is required. Let

    g_n(z) = 1 + α_1 z^{−1} + · · · + α_{m−n−1} z^{−(m−n−1)}

and let y(t) be a signal whose PSD is equal to |g_s(ω)|²; hence, y(t) is an nth-order MA process. By making use of (1.3.9) and (1.4.9), along with the above notation, we can write (6.5.6) in the following equivalent form:

    min_{α_k} E{ |y(t) + α_1 y(t−1) + · · · + α_{m−n−1} y(t−m+n+1)|² }    (6.5.7)

The minimizing coefficients {α_k} are given by the solution to a Yule–Walker system of equations similar to (3.4.6). (To show this, parallel the calculation leading to (3.4.8) and (3.4.12).) Since the covariance matrix, of any finite dimension, associated with a moving average signal is positive definite, it follows that:

• The coefficients {α_k}, and hence {g_k}, are uniquely determined by the minimum norm constraint.
• The polynomial g_n(z) whose coefficients are obtained from (6.5.7) has all its zeroes strictly inside the unit circle (cf. Exercise 3.8).

which was to be proven.
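The result just proven is easy to check numerically. The sketch below (scenario values invented) builds the noise subspace for a ULA, forms the minimum-norm vector with first element one, and verifies that n roots of g(z) lie on the unit circle at the signal frequencies while the remaining m − n − 1 roots lie strictly inside it:

```python
import numpy as np

m, n = 8, 2
omegas = np.array([0.8, 1.7])            # hypothetical spatial frequencies
k = np.arange(m)[:, None]
A = np.exp(-1j * k * omegas[None, :])    # a(omega) = [1, e^{-i omega}, ...]^T

Q, _ = np.linalg.qr(A, mode='complete')  # last m - n columns span the noise subspace
G = Q[:, n:]
Pn = G @ G.conj().T                      # orthogonal projector onto R(G)
g = Pn[:, 0] / Pn[0, 0]                  # min-norm vector in R(G) with g[0] = 1

z = np.roots(g)                          # zeroes of g(z) = sum_k g_k z^{-k}
on_circle = np.abs(np.abs(z) - 1) < 1e-6
sig = np.sort(-np.angle(z[on_circle]))   # signal zeroes sit at z = e^{-i omega_k}
print(sig)                               # recovers the frequencies omegas
print(np.max(np.abs(z[~on_circle])))     # noise zeroes: modulus strictly below 1
```

The closed form g = P_n e_1 / (P_n)_{11} is the standard projection solution of "minimum norm in R(G) with first element one"; the uniqueness shown above is what makes it well defined.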
Thus, the choice of ĝ in the Min–Norm algorithm makes it possible to separate the signal zeroes from the noise zeroes, at least for data samples that are sufficiently long. (For small or medium-sized samples, it might happen that noise zeroes get closer to the unit circle than signal zeroes, which would lead to spurious frequency or DOA estimates.) As a final remark, note from (6.5.6) that there is little reason for g_n(z) to have zeroes in the sectors where the signal zeroes are present (since the integrand in (6.5.6) is already quite small for ω values close to {ω_k}_{k=1}^n). Hence, we can expect the extraneous zeroes to be more-or-less uniformly distributed inside the unit circle, in sectors which do not contain signal zeroes (see, e.g., [Kumaresan 1983]). For more details on the topic of this complement, see [Tufts and Kumaresan 1982; Kumaresan 1983].

6.5.2 NLS Direction-of-Arrival Estimation for a Constant-Modulus Signal

The NLS estimate of the DOA of a single signal impinging on an array of sensors is obtained by minimizing the criterion (6.4.4) with n = 1,

    ∑_{t=1}^N ‖y(t) − a(θ)s(t)‖²    (6.5.8)

with respect to {s(t)}_{t=1}^N and θ. The result is obtained from equation (6.4.7), which for n = 1 reduces to:

    θ̂ = arg max_θ a*(θ) R̂ a(θ) = arg max_θ ∑_{t=1}^N |a*(θ)y(t)|²    (6.5.9)

This, of course, is nothing but the beamforming DOA estimate for n = 1 (see (6.3.18)). Hence, as expected (see the Remark following (6.4.7) and also (4.3.11)), the NLS estimate of the DOA of an arbitrary signal coincides with the beamforming estimate.

In this complement we will solve the NLS direction-of-arrival estimation problem in (6.5.8), under the assumption that {s(t)} is a constant-modulus signal:

    s(t) = α e^{iφ(t)}    (6.5.10)

where α > 0 denotes the unknown signal amplitude and {φ(t)} is its unknown phase sequence. We assume α > 0 to avoid a phase ambiguity in {φ(t)}. Signals of this type are often encountered in communication applications with phase-modulated waveforms.
Inserting (6.5.10) in (6.5.8) yields the following criterion, which is to be minimized with respect to {φ(t)}_{t=1}^N, α, and θ:

    ∑_{t=1}^N ‖y(t) − α e^{iφ(t)} a(θ)‖² = ∑_{t=1}^N { ‖y(t)‖² + α²‖a(θ)‖² − 2α Re[ a*(θ) y(t) e^{−iφ(t)} ] }    (6.5.11)

It follows from (6.5.11) that the NLS estimate of φ(t) is given by the maximizer of the function:

    Re[ a*(θ) y(t) e^{−iφ(t)} ] = Re[ |a*(θ) y(t)| e^{i arg[a*(θ)y(t)]} e^{−iφ(t)} ] = |a*(θ) y(t)| cos( arg[a*(θ)y(t)] − φ(t) )    (6.5.12)

which is easily seen to be

    φ̂(t) = arg[ a*(θ) y(t) ],    t = 1, ..., N    (6.5.13)

From (6.5.11)–(6.5.13), along with the assumption that ‖a(θ)‖ is constant (which is also used to derive (6.5.9)), we can readily verify that the NLS estimate of θ for the constant-modulus signal case is given by:

    θ̂ = arg max_θ ∑_{t=1}^N |a*(θ) y(t)|    (6.5.14)

Finally, the NLS estimate of α is obtained by minimizing (6.5.11) (with {φ(t)} and θ replaced by (6.5.13) and (6.5.14), respectively):

    α̂ = (1 / (N ‖a(θ̂)‖²)) ∑_{t=1}^N |a*(θ̂) y(t)|    (6.5.15)

Remark: It follows easily from the above derivation that if α is known (which may be the case when the emitted signal has a known amplitude that is not significantly distorted during propagation), the NLS estimates of θ and {φ(t)} are still given by (6.5.14) and (6.5.13), respectively. ■

Interestingly, the only difference between the beamformer for an arbitrary signal, (6.5.9), and the beamformer for a constant-modulus signal, (6.5.14), is that the "squaring operation" is missing in the latter. This difference is somewhat analogous to the one pointed out in Complement 4.9.4, even though the models considered there and in this complement are rather different from one another. For more details on the subject of this complement, see [Stoica and Besson 2000] and its references.
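A small simulation of the estimators (6.5.14) and (6.5.15), using a grid search over θ for a ULA; every scenario value (array size, DOA, amplitude, noise level) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, d = 8, 200, 0.5
a = lambda th: np.exp(-2j * np.pi * d * np.arange(m) * np.sin(th))

theta0, alpha0 = np.deg2rad(18.0), 2.0
s = alpha0 * np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # constant-modulus signal (6.5.10)
E = 0.1 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
Y = np.outer(a(theta0), s) + E                          # snapshots y(t) as columns

grid = np.deg2rad(np.linspace(-90, 90, 1801))
crit = np.array([np.sum(np.abs(a(th).conj() @ Y)) for th in grid])
th_hat = grid[int(np.argmax(crit))]                     # (6.5.14)
alpha_hat = np.sum(np.abs(a(th_hat).conj() @ Y)) / (N * m)  # (6.5.15), ||a||^2 = m
print(np.rad2deg(th_hat), alpha_hat)
```

Note the criterion sums |a*(θ)y(t)| rather than its square, which is exactly the "missing squaring operation" discussed above.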
6.5.3 Capon Method: Further Insights and Derivations

The spatial filter (or beamformer) used in the beamforming method is data-independent. In contrast, the Capon spatial filter is data-dependent, or data-adaptive; see equation (6.3.24). It is this data-adaptivity that confers to the Capon method better resolution and significantly reduced leakage compared with the beamforming method. An interesting fact about the Capon method for temporal or spatial spectral analysis is that it can be derived in several ways. The standard derivation is given in Section 6.3.2. This complement presents four additional derivations of the Capon method, which are not as well known as the standard derivation. Each of the derivations presented here is based on an intuitively appealing design criterion. Collectively, they provide further insights into the features and possible interpretations of the Capon method.

APES-Like Derivation

Let θ denote a generic DOA, and consider equation (6.2.19):

    y(t) = a(θ)s(t) + e(t)    (6.5.16)

which describes the array output, y(t), as a sum of a possible signal component impinging from the generic DOA θ and a term e(t) that includes noise and any other signals with DOAs different from θ. Let σ_s² denote the power of the signal s(t) in (6.5.16), which is the main parameter we want to estimate: σ_s² as a function of θ provides an estimate of the spatial spectrum. Let us estimate the spatial filter vector, h, as well as the signal power, σ_s², by solving the following least squares (LS) problem:

    min_{h, σ_s²} E{ |h* y(t) − s(t)|² }    (6.5.17)

Of course, the signal s(t) in (6.5.17) is not known. However, as we show below, (6.5.17) does not depend on s(t) but only on its power σ_s², so the fact that s(t) in (6.5.17) is unknown does not pose a problem. Also, note that the vector h in (6.5.17) is not constrained, as it is in (6.3.24).
Assuming that s(t) in (6.5.16) is uncorrelated with the noise-plus-interference term e(t), we obtain:

    E{ y(t) s*(t) } = a(θ) σ_s²    (6.5.18)

which implies that

    E{ |h* y(t) − s(t)|² } = h* R h − h* a(θ) σ_s² − a*(θ) h σ_s² + σ_s²
     = [h − σ_s² R⁻¹ a(θ)]* R [h − σ_s² R⁻¹ a(θ)] + σ_s² [1 − σ_s² a*(θ) R⁻¹ a(θ)]    (6.5.19)

Omitting the trivial solution (h = 0, σ_s² = 0), the minimization of (6.5.19) with respect to h and σ_s² yields:

    h = R⁻¹ a(θ) / (a*(θ) R⁻¹ a(θ))    (6.5.20)
    σ_s² = 1 / (a*(θ) R⁻¹ a(θ))    (6.5.21)

which coincides with the Capon solution in (6.3.24) and (6.3.25). To obtain σ_s² in (6.5.21) we used the fact that the criterion in (6.5.19) should be greater than or equal to zero for any h and σ_s².

The LS fitting criterion in (6.5.17) is reminiscent of the APES approach discussed in Complement 5.6.4. The use of APES for array processing is discussed in Complement 6.5.6, under the assumption that {s(t)} is an unknown deterministic sequence. Interestingly, using the APES design principle in the above manner, under the assumption that the signal s(t) in (6.5.16) is stochastic, leads to the Capon method.

Inverse-Covariance Fitting Derivation

The covariance matrix of the signal term a(θ)s(t) in (6.5.16) is given by

    σ_s² a(θ) a*(θ)    (6.5.22)

We can obtain the beamforming method (see Section 6.3.1) by fitting (6.5.22) to R in a least squares sense:

    min_{σ_s²} ‖R − σ_s² a(θ) a*(θ)‖² = min_{σ_s²} { constant + σ_s⁴ [a*(θ)a(θ)]² − 2σ_s² a*(θ) R a(θ) }    (6.5.23)

As a*(θ)a(θ) = m (by assumption; see (6.3.11)), it follows from (6.5.23) that the minimizing σ_s² is given by:

    σ_s² = a*(θ) R a(θ) / m²    (6.5.24)

which coincides with the beamforming estimate of the power coming from DOA θ (see (6.3.16)).
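To contrast the two power estimates derived so far, the beamforming estimate (6.5.24) and the Capon estimate (6.5.21), the following sketch evaluates both on an invented two-source scenario; between the sources the Capon estimate shows far less leakage:

```python
import numpy as np

m, d = 10, 0.5
k = np.arange(m)
a = lambda th: np.exp(-2j * np.pi * d * k * np.sin(th))

th1, th2, th_mid = np.deg2rad(0.0), np.deg2rad(30.0), np.deg2rad(15.0)
# Theoretical R: source powers 4 and 1, noise power 0.1 (all values invented)
R = (4.0 * np.outer(a(th1), a(th1).conj())
     + 1.0 * np.outer(a(th2), a(th2).conj()) + 0.1 * np.eye(m))
Ri = np.linalg.inv(R)

bf = lambda th: np.real(a(th).conj() @ R @ a(th)) / m**2      # beamforming (6.5.24)
capon = lambda th: 1.0 / np.real(a(th).conj() @ Ri @ a(th))   # Capon (6.5.21)

print(bf(th1), capon(th1))        # both near the true power 4 at the first source
print(bf(th_mid), capon(th_mid))  # in between: Capon leaks much less
```

The data-adaptive denominator a*R⁻¹a is what suppresses the contribution of the second source when the filter is pointed away from it.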
To obtain the Capon method by following a similar idea to the one above, we fit the pseudoinverse of (6.5.22) to the inverse of R:

    min_{σ_s²} ‖R⁻¹ − [σ_s² a(θ) a*(θ)]†‖²    (6.5.25)

It is easily verified that the Moore–Penrose pseudoinverse of σ_s² a(θ) a*(θ) is given by

    [σ_s² a(θ) a*(θ)]† = (1/σ_s²) a(θ) a*(θ) / [a*(θ)a(θ)]² = (1/σ_s²) a(θ) a*(θ) / m²    (6.5.26)

This follows, for instance, from (A.8.8) and the fact that

    σ_s² a(θ) a*(θ) = σ_s² ‖a(θ)‖² [a(θ)/‖a(θ)‖] [a(θ)/‖a(θ)‖]* ≜ σ u v*    (6.5.27)

is the singular value decomposition (SVD) of σ_s² a(θ) a*(θ). Inserting (6.5.26) into (6.5.25) leads to the problem

    min_{σ_s²} ‖R⁻¹ − (1/σ_s²) a(θ) a*(θ) / m²‖²    (6.5.28)

whose solution, by analogy with (6.5.23)–(6.5.24), is given by the Capon estimate of the signal power:

    σ_s² = 1 / (a*(θ) R⁻¹ a(θ))    (6.5.29)

It is worth noting that in the present covariance fitting-based derivation, the signal power σ_s² is estimated directly, without the need to first obtain an intermediate spatial filter h. The remaining two derivations of the Capon method are of the same type.

Weighted Covariance Fitting Derivation

The least squares criterion in (6.5.23), which yields the beamforming method, does not take into account the fact that the sample estimates of the different elements of the data covariance matrix do not have the same accuracy. It was shown, e.g., in [Ottersten, Stoica, and Roy 1998] (and its references) that the following weighted LS covariance fitting criterion takes the accuracies of the different elements of the sample covariance matrix into account in an optimal manner:

    min_{σ_s²} ‖R^{−1/2} [R − σ_s² a(θ) a*(θ)] R^{−1/2}‖²    (6.5.30)

Here, R^{−1/2} denotes the Hermitian square root of R⁻¹.
By a straightforward calculation, we can rewrite the criterion in (6.5.30) in the following equivalent form:

    ‖I − σ_s² R^{−1/2} a(θ) a*(θ) R^{−1/2}‖² = constant − 2σ_s² a*(θ) R⁻¹ a(θ) + σ_s⁴ [a*(θ) R⁻¹ a(θ)]²    (6.5.31)

The minimization of (6.5.31) with respect to σ_s² yields:

    σ_s² = 1 / (a*(θ) R⁻¹ a(θ))

which coincides with the Capon solution in (6.3.26).

Constrained Covariance Fitting Derivation

The final derivation of the Capon method that we will present is also based on a covariance fitting criterion, but in a manner which is quite different from those in the previous two derivations. Our goal here is still to obtain the signal power by fitting σ_s² a(θ) a*(θ) to R, but now we explicitly impose the condition that the residual covariance matrix, R − σ_s² a(θ) a*(θ), should be positive semidefinite, and we "minimize" the approximation (or fitting) error by choosing the maximum possible value of σ_s² for which this condition holds. Mathematically, σ_s² is the solution to the following constrained covariance fitting problem:

    max_{σ_s²} σ_s²  subject to  R − σ_s² a(θ) a*(θ) ≥ 0    (6.5.32)

The solution to (6.5.32) can be obtained in the following way, which is a simplified version of the original derivation in [Marzetta 1983]. Let R^{−1/2} again denote the Hermitian square root of R⁻¹. Then, the following equivalences can be readily verified:

    R − σ_s² a(θ) a*(θ) ≥ 0
     ⟺ I − σ_s² R^{−1/2} a(θ) a*(θ) R^{−1/2} ≥ 0
     ⟺ 1 − σ_s² a*(θ) R⁻¹ a(θ) ≥ 0
     ⟺ σ_s² ≤ 1 / (a*(θ) R⁻¹ a(θ))    (6.5.33)

The third line in equation (6.5.33) follows from the fact that the eigenvalues of the matrix I − σ_s² R^{−1/2} a(θ) a*(θ) R^{−1/2} are equal to one minus the eigenvalues of σ_s² R^{−1/2} a(θ) a*(θ) R^{−1/2} (see Result R5 in Appendix A), and the latter eigenvalues are given by σ_s² a*(θ) R⁻¹ a(θ) (which is the trace of the previous matrix) along with (m − 1) zeroes. From (6.5.33) we can see that the Capon spectral estimate is the solution to the problem (6.5.32) as well.
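The chain of equivalences (6.5.33) can be confirmed numerically: at σ_s² = 1/(a*R⁻¹a) the matrix R − σ_s² a a* is singular and positive semidefinite, while any larger σ_s² produces a negative eigenvalue. A sketch with an arbitrary (invented) positive definite R:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
X = rng.standard_normal((m, 50)) + 1j * rng.standard_normal((m, 50))
R = X @ X.conj().T / 50                        # some positive definite matrix
a = np.exp(-1j * np.pi * np.arange(m) * 0.3)   # an arbitrary "steering" vector

s2 = 1.0 / np.real(a.conj() @ np.linalg.inv(R) @ a)   # Capon value (6.5.29)

min_eig = lambda t: np.min(np.linalg.eigvalsh(R - t * np.outer(a, a.conj())))
print(min_eig(s2))                             # ~ 0: boundary of the feasible set
print(min_eig(0.99 * s2) > 0, min_eig(1.01 * s2) < 0)
```

This is precisely the statement that the Capon estimate is the largest σ_s² feasible in (6.5.32).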
The equivalence between the formulation of the Capon method in (6.5.32) and the standard formulation in Section 6.3.2 can also be shown as follows. The constraint in (6.5.32) is equivalent to the requirement that

    h* [R − σ_s² a(θ) a*(θ)] h ≥ 0  for any h ∈ C^{m×1}    (6.5.34)

which, in turn, is equivalent to

    h* [R − σ_s² a(θ) a*(θ)] h ≥ 0  for any h such that h* a(θ) = 1    (6.5.35)

Clearly, (6.5.34) implies (6.5.35). To also show that (6.5.35) implies (6.5.34), let h be such that h* a(θ) = α ≠ 0; then h/α* satisfies (h/α*)* a(θ) = 1 and hence, by the assumption that (6.5.35) holds,

    (1/|α|²) h* [R − σ_s² a(θ) a*(θ)] h ≥ 0

which shows that (6.5.35) implies (6.5.34) for any h satisfying h* a(θ) ≠ 0. Now, if h is such that h* a(θ) = 0 then

    h* [R − σ_s² a(θ) a*(θ)] h = h* R h ≥ 0

because R > 0 by assumption. This observation concludes the proof that (6.5.34) is equivalent to (6.5.35). Using the equivalence of (6.5.34) and (6.5.35), we can rewrite (6.5.34) as follows:

    h* R h ≥ σ_s²  for any h such that h* a(θ) = 1    (6.5.36)

From (6.5.36) we can see that the solution to (6.5.32) is given by

    σ_s² = min_h h* R h  subject to  h* a(θ) = 1

which coincides with the standard formulation of the Capon method in (6.3.24). The formulation of the Capon method in (6.5.32) will be used in Complement 6.5.4 to extend the method to the case where the direction vector a(θ) is imprecisely known.

6.5.4 Capon Method for Uncertain Direction Vectors

The Capon method has better resolution and much better interference rejection capability (i.e., much lower leakage) than the beamforming method, provided that the direction vector, a(θ), is accurately known. However, whenever the knowledge of a(θ) is imprecise, the performance of the Capon method may become worse than that of the beamforming method. To see why this is so, consider a scenario in which the problem is to determine the power coming from a source with DOA assumed to be equal to θ_0.
Let us assume that in actuality the true DOA of the source is θ_0 + Δ. For the Capon beamformer pointed toward θ_0, the source of interest (located at θ_0 + Δ) will play the role of an interference and will be attenuated. Consequently, the power of the signal of interest will be underestimated; the larger Δ is, the larger the underestimation error. Because steering vector errors are common in applications, a robust version of the Capon method (i.e., one that is as insensitive to steering vector errors as possible) would be highly desirable.

In this complement we will present an extension of the Capon method to the case of uncertain direction vectors. Specifically, we will assume that the only knowledge we have about a(θ) is that it belongs to the following uncertainty ellipsoid:

    (a − ā)* C⁻¹ (a − ā) ≤ 1    (6.5.37)

where the vector ā and the positive definite matrix C are given. Note that both a and ā, as well as C, usually depend on θ; however, for the sake of notational convenience, we drop the θ dependence of these variables. In some applications there may be too little available information about the errors in the steering vector to make a competent choice of the full matrix C in (6.5.37). In such cases we may simply set C = εI, so that (6.5.37) becomes

    ‖a − ā‖² ≤ ε    (6.5.38)

where ε is a positive number. Let a_0 denote the true (and unknown) direction vector, and let ε_0 = ‖a_0 − ā‖², where, as before, ā is the assumed direction vector. Ideally we should choose ε = ε_0. However, it can be shown that the performance of the robust Capon method remains almost unchanged when ε is varied in a relatively large interval around ε_0 (see [Stoica, Wang, and Li 2003], [Li, Stoica, and Wang 2003]). As already stated, our goal here is to obtain a robust Capon method that is insensitive to errors in the direction (or steering) vector.
We will do so by combining the covariance fitting formulation in (6.5.32) for the standard Capon method with the steering uncertainty set in (6.5.37). Hence, we aim to derive estimates of both σ_s² and a by solving the following constrained covariance fitting problem:

    max_{a, σ_s²} σ_s²  subject to:  R − σ_s² a a* ≥ 0
                                     (a − ā)* C⁻¹ (a − ā) ≤ 1    (6.5.39)

To avoid the trivial solution (a → 0, σ_s² → ∞), we assume that a = 0 does not belong to the uncertainty ellipsoid in (6.5.39), or equivalently that

    ā* C⁻¹ ā > 1    (6.5.40)

(which is a regularity condition). Because both σ_s² and a are considered to be free parameters in the above fitting problem, there is a scaling ambiguity in the signal covariance term in (6.5.39), in the sense that both (σ_s², a) and (σ_s²/μ, μ^{1/2} a) for any μ > 0 give the same covariance term σ_s² a a*. To eliminate this ambiguity we can use the knowledge that the true steering vector satisfies the condition (see (6.3.11)):

    a* a = m    (6.5.41)

However, the constraint in (6.5.41) is non-convex, which makes the combined problem (6.5.39) and (6.5.41) somewhat more difficult to solve than (6.5.39). On the other hand, (6.5.39) (without (6.5.41)) can be solved quite efficiently, as we show below. To take advantage of this fact, we can make use of (6.5.41) to eliminate the scaling ambiguity in the following pragmatic way:

• Obtain the solution (σ̃_s², ã) of (6.5.39).
• Obtain an estimate of a which satisfies (6.5.41) by scaling ã:

    â = (√m / ‖ã‖) ã

and a corresponding estimate of σ_s² by scaling σ̃_s² such that the signal covariance term is left unchanged, i.e., σ̃_s² ã ã* = σ̂_s² â â*, which gives:

    σ̂_s² = σ̃_s² ‖ã‖² / m    (6.5.42)

To derive the solution (σ̃_s², ã) of (6.5.39) we first note that, for any fixed a, the maximizing σ_s² is given by

    σ̃_s² = 1 / (a* R⁻¹ a)    (6.5.43)

(see equation (6.5.33) in Complement 6.5.3).
This simple observation allows us to eliminate σ_s² from (6.5.39) and hence reduce (6.5.39) to the following problem:

    min_a a* R⁻¹ a  subject to:  (a − ā)* C⁻¹ (a − ā) ≤ 1    (6.5.44)

Under the regularity condition in (6.5.40), the solution ã to (6.5.44) will occur on the boundary of the constraint set, and therefore we can reformulate (6.5.44) as the following quadratic problem with a quadratic equality constraint:

    min_a a* R⁻¹ a  subject to:  (a − ā)* C⁻¹ (a − ā) = 1    (6.5.45)

This problem can be solved efficiently by using the Lagrange multiplier approach; see [Li, Stoica, and Wang 2003]. In the remaining part of this complement we derive the Lagrange multiplier solver in [Li, Stoica, and Wang 2003], but in a more self-contained way. To simplify the notation, consider (6.5.45) with C = εI as in (6.5.38):

    min_a a* R⁻¹ a  subject to:  ‖a − ā‖² = ε    (6.5.46)

(the case of C ≠ εI can be treated similarly). Define

    x = a − ā    (6.5.47)

and rewrite (6.5.46) using x in lieu of a:

    min_x x* R⁻¹ x + x* R⁻¹ ā + ā* R⁻¹ x  subject to:  ‖x‖² = ε    (6.5.48)

Owing to the constraint in (6.5.48), the x that solves (6.5.48) is also a solution to the problem:

    min_x x* (R⁻¹ + λI) x + x* R⁻¹ ā + ā* R⁻¹ x  subject to:  ‖x‖² = ε    (6.5.49)

where λ is an arbitrary constant. Let us consider a particular choice of λ, which is a solution of the equation:

    ā* (I + λR)⁻² ā = ε    (6.5.50)

and which is also such that

    R⁻¹ + λI > 0    (6.5.51)

Then, the unconstrained minimizer of the function in (6.5.49) is given by

    x = −(R⁻¹ + λI)⁻¹ R⁻¹ ā = −(I + λR)⁻¹ ā    (6.5.52)

and it satisfies the constraint in (6.5.49) (cf. (6.5.50)). It follows that x in (6.5.52), with λ given by (6.5.50) and (6.5.51), is the solution to (6.5.49) (and hence to (6.5.48)). Hence, what is left to explain is how to solve (6.5.50) under the condition (6.5.51) in an efficient manner, which we do next.
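Gathering (6.5.43), (6.5.47), and (6.5.52), together with the scaling step (6.5.42), gives the following numerical sketch of the robust estimator for the spherical uncertainty set (6.5.38). A plain bisection is used for the λ-equation (6.5.50), since g(λ) = ā*(I + λR)⁻²ā is monotonically decreasing for λ ≥ 0 and g(0) = ‖ā‖² > ε by (6.5.40); the test scenario values are invented:

```python
import numpy as np

def robust_capon(R, a_bar, eps):
    """Power estimate from (6.5.39) with C = eps*I, via (6.5.50)-(6.5.52)."""
    m = len(a_bar)
    I = np.eye(m)
    def g(lam):                          # left-hand side of (6.5.50)
        v = np.linalg.solve(I + lam * R, a_bar)
        return np.real(v.conj() @ v)
    lo, hi = 0.0, 1.0
    while g(hi) > eps:                   # bracket the root of g(lam) = eps
        hi *= 2.0
    for _ in range(200):                 # bisection: g decreases monotonically
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > eps else (lo, mid)
    lam = 0.5 * (lo + hi)
    a_t = a_bar - np.linalg.solve(I + lam * R, a_bar)           # (6.5.47), (6.5.52)
    s2_t = 1.0 / np.real(a_t.conj() @ np.linalg.solve(R, a_t))  # (6.5.43)
    return s2_t * np.real(a_t.conj() @ a_t) / m                 # scaling (6.5.42)

# Invented scenario: unit-power source with a 2-degree steering mismatch
m = 10
sv = lambda th: np.exp(-1j * np.pi * np.arange(m) * np.sin(th))
a0, a_bar = sv(np.deg2rad(10.0)), sv(np.deg2rad(12.0))
R = np.outer(a0, a0.conj()) + 0.1 * np.eye(m)
eps = np.linalg.norm(a0 - a_bar) ** 2    # the ideal choice eps = eps_0
s2_rcm = robust_capon(R, a_bar, eps)
s2_cm = 1.0 / np.real(a_bar.conj() @ np.linalg.solve(R, a_bar))
print(s2_rcm, s2_cm)   # robust estimate near the true power 1; standard Capon far below
```

The book's own solver (derived next in the text) replaces the repeated linear solves by a single eigendecomposition of R, but the fixed point computed is the same.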
Let

    R = U Λ U*    (6.5.53)

denote the eigenvalue decomposition (EVD) of R, where U*U = UU* = I and

    Λ = diag(λ_1, ..., λ_m);    λ_1 ≥ λ_2 ≥ · · · ≥ λ_m    (6.5.54)

Also, let

    b = U* ā    (6.5.55)

Using (6.5.53)–(6.5.55) we can rewrite the left-hand side of equation (6.5.50) as:

    g(λ) ≜ ā* (I + λR)⁻² ā = ā* [U (I + λΛ) U*]⁻² ā = b* (I + λΛ)⁻² b = ∑_{k=1}^m |b_k|² / (1 + λλ_k)²    (6.5.56)

where b_k is the kth element of the vector b. Note that

    ∑_{k=1}^m |b_k|² = ‖b‖² = ‖ā‖² > ε    (6.5.57)

(see (6.5.55) and (6.5.40)). It follows from (6.5.56) and (6.5.57) that λ can be a solution of the equation g(λ) = ε only if

    (1 + λλ_k)² > 1    (6.5.58)

for some value of k. At the same time, λ should be such that (see (6.5.51)):

    R⁻¹ + λI > 0  ⟺  I + λR > 0  ⟺  1 + λλ_k > 0  for k = 1, ..., m    (6.5.59)

It follows from (6.5.58) and (6.5.59) that 1 + λλ_k > 1 for at least one value of k, which implies that

    λ > 0    (6.5.60)

This inequality sets a lower bound on the solution to (6.5.50). To refine this lower bound, and also to obtain an upper bound, first observe that g(λ) is a monotonically decreasing function of λ for λ > 0. Furthermore, for

    λ_L = (‖ā‖ − √ε) / (λ_1 √ε)    (6.5.61)

we have that

    g(λ_L) > ‖b‖² / (1 + λ_L λ_1)² = (ε / ‖ā‖²) ‖ā‖² = ε    (6.5.62)

Similarly, for

    λ_U = (‖ā‖ − √ε) / (λ_m √ε) ≥ λ_L    (6.5.63)

we can verify that

    g(λ_U) < ‖b‖² / (1 + λ_U λ_m)² = ε    (6.5.64)

Summarizing the previous facts, it follows that equation (6.5.50) has a unique solution for λ that satisfies (6.5.51), which belongs to the interval [λ_L, λ_U] ⊂ (0, ∞). With this observation, the derivation of the robust version of the Capon method is complete. The following is a step-by-step summary of the Robust Capon algorithm.

The Robust Capon Algorithm

Step 1. Compute the eigendecomposition R = U Λ U* and set b = U* ā.
Step 2. Solve the equation g(λ) = ε for λ using, e.g., a Newton method, along with the fact that there is a unique solution in the interval [λ_L, λ_U].
Step 3. Compute (cf. (6.5.47), (6.5.52), (6.5.53)):

    ã = ā − U (I + λΛ)⁻¹ b    (6.5.65)

and, finally, compute the power estimate (see (6.5.42) and (6.5.43))

    σ̂_s² = ã* ã / (m ã* U Λ⁻¹ U* ã)    (6.5.66)

where, from (6.5.65), U* ã = b − (I + λΛ)⁻¹ b.

The bulk of the computation in the algorithm involves computing the EVD of R, which requires O(m³) arithmetic operations. Hence, the computational complexity of the above Robust Capon method is comparable to that of the standard Capon method. We refer the reader to [Li, Stoica, and Wang 2003] and also to [Stoica, Wang, and Li 2003] for further computational considerations and insights, as well as many numerical examples illustrating the good performance of the Robust Capon method, including its insensitivity to the choice of ε in (6.5.38) or C in (6.5.37).

6.5.5 Capon Method with Noise Gain Constraint

As explained in Complement 6.5.4, the Capon method performs poorly as a power estimator in the presence of steering vector errors (yet, it may perform fairly well as a DOA estimator, provided that the SNR is reasonably large; see [Cox 1973; Li, Stoica, and Wang 2003] and references therein). The same happens when the number of snapshots, N, is relatively small, such as when N is equal to or only slightly larger than the number of sensors, m. In fact, there is a close relationship between the cases of steering vector errors and small-sample errors; see, e.g., [Feldman and Griffiths 1994]. More precisely, the sampling estimation errors of the covariance matrix can be viewed as steering vector errors in a corresponding theoretical covariance matrix, and vice versa. For example, consider a uniform linear array and assume that the source signals are uncorrelated with one another. In this case, the theoretical covariance matrix R of the array output is Toeplitz. Assume that the sample covariance matrix R̂ is also Toeplitz.
According to the Carathéodory parameterization of Toeplitz matrices (see Complement 4.9.2), we can view R̂ as being the theoretical covariance matrix associated with a fictitious ULA on which uncorrelated signals impinge, but the powers and DOAs of the latter signals are different from those of the actual signals. Hence, the small-sample estimation errors in R̂ can be viewed as being due to steering vector errors in a corresponding theoretical covariance matrix.

The robust Capon method (RCM) presented in Complement 6.5.4 significantly outperforms the standard Capon method (CM) in power estimation applications in which the sample length is insufficient for accurate estimation of R, or in which the steering vector is imprecisely known. The RCM was introduced in [Stoica, Wang, and Li 2003; Li, Stoica, and Wang 2003]. An earlier approach, whose goal is also to enhance the performance of CM in the presence of sampling estimation errors or steering vector mismatch, is the so-called diagonal loading approach (see, e.g., [Hudson 1981; Van Trees 2002] and references therein). The main idea of diagonal loading is to replace R in the Capon formula for the spatial filter h, (6.3.24), by the following matrix:

    R + λI    (6.5.67)

where the diagonal loading factor λ > 0 is a user-selected parameter. The so-obtained filter vector h is given by

    h = (R + λI)⁻¹ a / (a* (R + λI)⁻¹ a)    (6.5.68)

The use of the diagonally loaded matrix in (6.5.67) instead of R is the reason for the name of the approach based on (6.5.68). The symbol R in this complement refers to either a theoretical covariance matrix or a sample covariance matrix. There have been several rules proposed in the literature for choosing the parameter λ in (6.5.68). Most of these rules choose λ in a rather ad hoc and data-independent manner.
As illustrated in [Li, Stoica, and Wang 2003] and its references, a data-independent selection of the diagonal loading factor cannot improve the performance for a reasonably large range of SNR values. Hence, a data-dependent choice of λ is desired. One commonly used data-dependent rule selects the diagonal loading factor λ > 0 that satisfies

    ‖h‖² = a* (R + λI)⁻² a / [a* (R + λI)⁻¹ a]² = c    (6.5.69)

where the constant c must be chosen by the user. Let us explain briefly why choosing λ via (6.5.69) makes sense intuitively. Assume that the array output vector contains a spatially white noise component whose covariance matrix is proportional to I (see (6.4.1)). Then the power at the output of the spatial filter h due to the noise component is proportional to ‖h‖²; for this reason ‖h‖² is sometimes called the (white) noise gain of h. In scenarios with a large number of (possibly closely spaced) source signals, the Capon spatial filter h in (6.3.24) may run out of "degrees of freedom" and hence may not pay enough attention to the noise in the data (unless the SNR is very low). The result is a relatively high noise gain, ‖h‖², which may well degrade the accuracy of signal power estimation. To prevent this from happening, it makes sense to limit ‖h‖² as in (6.5.69). By doing so we are left with the problem of choosing c. While the choice of c may be easier than the direct choice of λ in (6.5.68), it is far from trivial, and in fact clear-cut rules for selecting c are hardly available. In particular, a "too small" value of c may limit the noise gain unnecessarily, and result in decreased resolution and increased leakage.
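The behavior that makes the rule (6.5.69) workable can be sketched numerically: for the diagonally loaded filter (6.5.68), the white noise gain ‖h‖² decreases monotonically from the Capon value (λ = 0) toward the lower bound 1/m as λ grows. The small-sample scenario below, with N only slightly larger than m, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
m, N = 8, 12
a = np.exp(-1j * np.pi * np.arange(m) * np.sin(np.deg2rad(20.0)))
X = rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N))
R = X @ X.conj().T / N                    # sample covariance, N close to m

def noise_gain(lam):                      # ||h||^2 for h in (6.5.68)
    h = np.linalg.solve(R + lam * np.eye(m), a)
    h = h / (a.conj() @ h)                # enforce a* h = 1
    return np.real(h.conj() @ h)

gains = [noise_gain(lam) for lam in (0.0, 0.1, 1.0, 10.0, 100.0)]
print(gains)                              # decreasing toward 1/m = 0.125
```

The limit 1/m is the Cauchy–Schwarz bound for any filter with h*a = 1 and ‖a‖² = m, which is why c must be chosen at least 1/m, as the text shows below for the CCM.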
In this complement we will show that the spatial filter of the diagonally loaded Capon method in (6.5.68), (6.5.69) is the solution to the following design problem:

    min_h h* R h  subject to:  h* a = 1  and  ‖h‖² ≤ c    (6.5.70)

Because (6.5.70) is obtained by adding the noise gain constraint ‖h‖² ≤ c to the standard Capon problem in (6.3.23), we will call the method that follows from (6.5.70) the constrained Capon method (CCM). While the fact that (6.5.68), (6.5.69) is the solution to (6.5.70) is well known from the previous literature (see, e.g., [Hudson 1981]), we present a rigorous and thorough analysis of this solution. As a byproduct, the following analysis also suggests some guidelines for choosing the user parameter c in (6.5.69). Note that in general a, c, and h in (6.5.70) depend on the DOA θ; to simplify notation we will omit the functional dependence on θ here.

It is interesting to observe that the RCM, described in Complement 6.5.4, can also be cast into a diagonal loading framework. To see this, first note from (6.5.47) and (6.5.52) that the steering vector estimate used in the RCM is given by:

    a = ā − (I + λR)⁻¹ ā = (I + λR)⁻¹ [(I + λR) − I] ā = ((1/λ) R⁻¹ + I)⁻¹ ā    (6.5.71)

The RCM estimates the signal power by

    1 / (a* R⁻¹ a)    (6.5.72)

with a as given in (6.5.71) above, and hence RCM does not directly use any spatial filter. However, the power estimate in (6.5.72) is equal to h* R h, where

    h = R⁻¹ a / (a* R⁻¹ a)    (6.5.73)

and hence (6.5.72) can be viewed as being obtained by the (implicit) use of the spatial filter in (6.5.71), (6.5.73). Inserting (6.5.71) into (6.5.73) we obtain:

    h = (R + (1/λ) I)⁻¹ ā / [ ā* (R + (1/λ) I)⁻¹ R (R + (1/λ) I)⁻¹ ā ]    (6.5.74)

which, except for the scalar in the denominator, has the form in (6.5.68) of the spatial filter used by the diagonal loading approach. Note that the diagonal loading factor, 1/λ, in (6.5.74) is data-dependent.
Furthermore, the selection of λ in the RCM (see Complement 6.5.4 for details on this aspect) relies entirely on information about the uncertainty set of the steering vector, as defined, for instance, by the sphere with radius ε^{1/2} in (6.5.38). Such information is more readily available in applications than is information which would help the user select the noise gain constraint c in the CCM. Indeed, in many applications we should be able to make a more competent guess about ε than about c (for all DOAs of interest in the analysis). This appears to be a significant advantage of RCM over CCM, despite the fact that both methods can be interpreted as data-dependent diagonal loading approaches.

Remark: The reader may have noted by now that the CCM problem in (6.5.70) is similar to the combined RCM problem in (6.5.44), (6.5.41) discussed in Complement 6.5.4. This observation has two consequences. First, it follows that the combined RCM design problem in (6.5.44), (6.5.41) could be solved by an algorithm similar to the one presented below for solving the CCM problem; indeed, this is the case, as shown in [Li, Stoica, and Wang 2004]. Second, the CCM problem (6.5.70) and the combined RCM problem (6.5.44), (6.5.41) both have two constraints, and are more complicated than the RCM problem (6.5.44), which has only one constraint. Hence, the CCM algorithm described below will be (slightly) more involved computationally than the RCM algorithm outlined in Complement 6.5.4. ■

We begin the analysis of the CCM problem in (6.5.70) by deriving a feasible range for the user parameter c.
Let S denote the set of vectors h that satisfy both constraints in (6.5.70):

$$S = \left\{\, h \mid h^*a = 1 \ \text{and} \ \|h\|^2 \le c \,\right\} \tag{6.5.75}$$

By the Cauchy–Schwarz inequality (see Result R12 in Appendix A), we have that:

$$1 = |h^*a|^2 \le \|h\|^2\|a\|^2 \le cm \;\Longrightarrow\; c \ge \frac{1}{m} \tag{6.5.76}$$

where we also used the fact that (by assumption; see (6.3.11))

$$\|a\|^2 = m \tag{6.5.77}$$

The inequality in (6.5.76) sets a lower bound on c; otherwise, S is empty. To obtain an upper bound we can argue as follows. The vector h used in the CM has the following norm:

$$\|h_{\rm CM}\|^2 = \frac{a^*R^{-2}a}{(a^*R^{-1}a)^2} \tag{6.5.78}$$

As the noise gain of the CM is typically too high, we should like to choose c so that

$$c < \frac{a^*R^{-2}a}{(a^*R^{-1}a)^2} \tag{6.5.79}$$

Note that if c does not satisfy (6.5.79), then the CM spatial filter h satisfies both constraints in (6.5.70) and hence it is the solution to the CCM problem. Combining (6.5.76) and (6.5.79) yields the following interval for c:

$$c \in \left[\frac{1}{m},\; \frac{a^*R^{-2}a}{(a^*R^{-1}a)^2}\right] \tag{6.5.80}$$

Similarly to (6.5.53), let

$$R = U\Lambda U^* \tag{6.5.81}$$

be the eigenvalue decomposition (EVD) of R, where $U^*U = UU^* = I$ and

$$\Lambda = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_m \end{bmatrix}; \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m \tag{6.5.82}$$

As

$$\frac{a^*R^{-2}a}{[a^*R^{-1}a]^2} \le \frac{\|a\|^2/\lambda_m^2}{\left[\|a\|^2/\lambda_1\right]^2} = \frac{\lambda_1^2}{m\lambda_m^2} \tag{6.5.83}$$

it follows from (6.5.79) that c also satisfies:

$$mc < \frac{\lambda_1^2}{\lambda_m^2} \tag{6.5.84}$$

The above inequality will be useful later on. Next, let us define the function

$$g(h, \lambda, \mu) = h^*Rh + \lambda\left(\|h\|^2 - c\right) + \mu\left(-h^*a - a^*h + 2\right) \tag{6.5.85}$$

where $\mu \in \mathbb{R}$ is arbitrary and where

$$\lambda > 0 \tag{6.5.86}$$

Remark: We note in passing that λ and µ are the so-called Lagrange multipliers, and g(h, λ, µ) is the so-called Lagrangian function associated with the CCM problem in (6.5.70); however, to make the following derivation as self-contained as possible, we will not explicitly use any result from Lagrange multiplier theory. ■

Evidently, by the definition of g(h, λ, µ) we have that:

$$g(h, \lambda, \mu) \le h^*Rh \quad \text{for any } h \in S \tag{6.5.87}$$

and for any $\mu \in \mathbb{R}$ and λ > 0.
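The chain of bounds (6.5.76), (6.5.79), and (6.5.83) can be sanity-checked numerically. The sketch below uses made-up data; it verifies that the lower end 1/m of the interval (6.5.80) lies below the Capon filter norm (6.5.78), which in turn is bounded by $\lambda_1^2/(m\lambda_m^2)$ as in (6.5.83).

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
R = A @ A.conj().T + np.eye(m)        # Hermitian positive definite
a = np.exp(-1j * 0.9 * np.arange(m))  # ULA steering vector, ||a||^2 = m

x = np.linalg.solve(R, a)             # R^{-1} a
c_low = 1.0 / m                                            # (6.5.76)
c_high = np.real(x.conj() @ x) / np.real(a.conj() @ x)**2  # (6.5.78)/(6.5.79)

eigvals = np.linalg.eigvalsh(R)       # ascending order
lam_m, lam_1 = eigvals[0], eigvals[-1]
eig_bound = lam_1**2 / (m * lam_m**2)                      # (6.5.83)

ok = c_low < c_high <= eig_bound + 1e-12
```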
The part of (6.5.85) that depends on h can be written as

$$h^*(R + \lambda I)h - \mu h^*a - \mu a^*h = \left[h - \mu(R + \lambda I)^{-1}a\right]^*(R + \lambda I)\left[h - \mu(R + \lambda I)^{-1}a\right] - \mu^2 a^*(R + \lambda I)^{-1}a \tag{6.5.88}$$

Hence, for fixed λ and µ, the unconstrained minimizer of g(h, λ, µ) with respect to h is given by:

$$\hat h(\lambda, \mu) = \mu(R + \lambda I)^{-1}a \tag{6.5.89}$$

Let us choose µ such that (6.5.89) satisfies the first constraint in (6.5.70):

$$\hat h^*(\lambda, \hat\mu)a = 1 \;\Longleftrightarrow\; \hat\mu = \frac{1}{a^*(R + \lambda I)^{-1}a} \tag{6.5.90}$$

(which is always possible, for λ > 0). Also, let us choose λ so that (6.5.89) also satisfies the second constraint in (6.5.70) with equality, i.e.,

$$\left\|\hat h(\hat\lambda, \hat\mu)\right\|^2 = c \;\Longleftrightarrow\; \frac{a^*(R + \hat\lambda I)^{-2}a}{\left[a^*(R + \hat\lambda I)^{-1}a\right]^2} = c \tag{6.5.91}$$

We will show shortly that the above equation has a unique solution $\hat\lambda > 0$ for any c satisfying (6.5.80). Before doing so, we remark on the following important fact. Inserting (6.5.90) into (6.5.89), we get the diagonally loaded version of the Capon method (see (6.5.68)):

$$\hat h(\hat\lambda, \hat\mu) = \frac{(R + \hat\lambda I)^{-1}a}{a^*(R + \hat\lambda I)^{-1}a} \tag{6.5.92}$$

As $\hat\lambda$ satisfies (6.5.91), the above vector $\hat h(\hat\lambda, \hat\mu)$ lies on the boundary of S, and hence (see also (6.5.87)):

$$g\left(\hat h(\hat\lambda, \hat\mu), \hat\lambda, \hat\mu\right) = \hat h^*(\hat\lambda, \hat\mu)\,R\,\hat h(\hat\lambda, \hat\mu) \le h^*Rh \quad \text{for any } h \in S \tag{6.5.93}$$

From (6.5.93) we conclude that (6.5.92) is the (unique) solution to the CCM problem. It remains to show that, indeed, equation (6.5.91) has a unique solution $\hat\lambda > 0$ under (6.5.80), and also to provide a computationally convenient way of finding $\hat\lambda$.
Towards that end, we use the EVD of R in (6.5.81) (with the hat on $\hat\lambda$ omitted, for notational simplicity) to rewrite (6.5.91) as follows:

$$f(\lambda) = c \tag{6.5.94}$$

where

$$f(\lambda) = \frac{a^*(R + \lambda I)^{-2}a}{\left[a^*(R + \lambda I)^{-1}a\right]^2} = \frac{\displaystyle\sum_{k=1}^m \frac{|b_k|^2}{(\lambda_k + \lambda)^2}}{\left[\displaystyle\sum_{k=1}^m \frac{|b_k|^2}{\lambda_k + \lambda}\right]^2} \tag{6.5.95}$$

and where $b_k$ is the kth element of the vector

$$b = U^*a \tag{6.5.96}$$

Differentiation of (6.5.95) with respect to λ yields:

$$f'(\lambda) = \left\{-2\left[\sum_{k=1}^m \frac{|b_k|^2}{(\lambda_k+\lambda)^3}\right]\left[\sum_{k=1}^m \frac{|b_k|^2}{\lambda_k+\lambda}\right]^2 + 2\left[\sum_{k=1}^m \frac{|b_k|^2}{(\lambda_k+\lambda)^2}\right]\left[\sum_{k=1}^m \frac{|b_k|^2}{\lambda_k+\lambda}\right]\left[\sum_{k=1}^m \frac{|b_k|^2}{(\lambda_k+\lambda)^2}\right]\right\} \cdot \frac{1}{\left[\displaystyle\sum_{k=1}^m \frac{|b_k|^2}{\lambda_k+\lambda}\right]^4}$$

$$= -2\left\{\left[\sum_{k=1}^m \frac{|b_k|^2}{(\lambda_k+\lambda)^3}\right]\left[\sum_{k=1}^m \frac{|b_k|^2}{\lambda_k+\lambda}\right] - \left[\sum_{k=1}^m \frac{|b_k|^2}{(\lambda_k+\lambda)^2}\right]^2\right\} \cdot \frac{\displaystyle\sum_{k=1}^m \frac{|b_k|^2}{\lambda_k+\lambda}}{\left[\displaystyle\sum_{k=1}^m \frac{|b_k|^2}{\lambda_k+\lambda}\right]^4} \tag{6.5.97}$$

Making use of the Cauchy–Schwarz inequality once again, we can show that

$$\left[\sum_{k=1}^m \frac{|b_k|^2}{(\lambda_k+\lambda)^2}\right]^2 = \left[\sum_{k=1}^m \frac{|b_k|}{(\lambda_k+\lambda)^{3/2}}\cdot\frac{|b_k|}{(\lambda_k+\lambda)^{1/2}}\right]^2 < \left[\sum_{k=1}^m \frac{|b_k|^2}{(\lambda_k+\lambda)^3}\right]\left[\sum_{k=1}^m \frac{|b_k|^2}{\lambda_k+\lambda}\right] \tag{6.5.98}$$

Hence,

$$f'(\lambda) < 0 \quad \text{for any } \lambda > 0 \ \text{(and } \lambda_k \ne \lambda_p \text{ for at least one pair } k \ne p\text{)} \tag{6.5.99}$$

which means that f(λ) is a monotonically strictly decreasing function for λ > 0. Combining this observation with the fact that f(0) > c (see (6.5.79)) shows that indeed the equation f(λ) = c in (6.5.91) has a unique solution for λ > 0.

For efficiently solving the equation f(λ) = c, an upper bound on λ would also be useful. Such a bound can be obtained as follows. A simple calculation shows that

$$c = f(\lambda) < \frac{\dfrac{\|b\|^2}{(\lambda_m + \lambda)^2}}{\dfrac{\|b\|^4}{(\lambda_1 + \lambda)^2}} = \frac{(\lambda_1 + \lambda)^2}{m(\lambda_m + \lambda)^2} \;\Longrightarrow\; mc(\lambda_m + \lambda)^2 < (\lambda_1 + \lambda)^2 \tag{6.5.100}$$

where we used the fact that $\|b\|^2 = \|a\|^2 = m$. From (6.5.100) we see that λ must satisfy the inequality

$$\lambda < \frac{\lambda_1 - \sqrt{mc}\,\lambda_m}{\sqrt{mc} - 1} \triangleq \lambda_U \tag{6.5.101}$$

Note that both the numerator and the denominator in (6.5.101) are positive; see (6.5.76) and (6.5.84). The derivation of the constrained Capon method is now complete.
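A minimal sketch of the constrained Capon method derived above is given below. The example covariance matrix and steering vector are made up; bisection is used in place of Newton's method, which is safe because f(λ) is strictly decreasing on $(0, \lambda_U)$.

```python
import numpy as np

def ccm_filter(R, a, c):
    """Constrained Capon method: EVD, root of f(lam) = c, loaded filter."""
    m = len(a)
    lams, U = np.linalg.eigh(R)            # eigenvalues in ascending order
    b = U.conj().T @ a                     # (6.5.96)
    b2 = np.abs(b) ** 2

    def f(lam):                            # (6.5.95)
        return np.sum(b2 / (lams + lam)**2) / np.sum(b2 / (lams + lam))**2

    # unique root of f(lam) = c lies in (0, lam_U); f is strictly decreasing
    lam_U = (lams[-1] - np.sqrt(m * c) * lams[0]) / (np.sqrt(m * c) - 1.0)
    lo, hi = 0.0, lam_U
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > c else (lo, mid)
    lam = 0.5 * (lo + hi)

    # diagonally loaded spatial filter (6.5.92) and power estimate h* R h
    w = np.linalg.solve(R + lam * np.eye(m), a)
    h = w / (a.conj() @ w)
    return h, lam, np.real(h.conj() @ R @ h)

# made-up example covariance and ULA steering vector
rng = np.random.default_rng(3)
m = 6
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
R = B @ B.conj().T + np.eye(m)
a = np.exp(-1j * 0.5 * np.arange(m))

x = np.linalg.solve(R, a)
c_high = np.real(x.conj() @ x) / np.real(a.conj() @ x)**2  # upper end of (6.5.80)
c = 0.5 * (1.0 / m + c_high)                               # strictly inside the interval

h, lam, power = ccm_filter(R, a, c)
constraint_ok = np.isclose(np.abs(h.conj() @ a), 1.0)      # h* a = 1
norm_ok = np.isclose(np.linalg.norm(h)**2, c, rtol=1e-6)   # ||h||^2 = c at the root
```

By construction the returned filter satisfies the distortionless constraint exactly, and its squared norm equals c because λ solves (6.5.91).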
The following is a step-by-step summary of the CCM.

The Constrained Capon Algorithm

Step 1. Compute the eigendecomposition $R = U\Lambda U^*$ and set $b = U^*a$.

Step 2. Solve the equation f(λ) = c for λ using, e.g., a Newton method along with the fact that there is a unique solution which lies in the interval $(0, \lambda_U)$.

Step 3. Compute the (diagonally loaded) spatial filter vector

$$h = \frac{(R + \lambda I)^{-1}a}{a^*(R + \lambda I)^{-1}a} = \frac{U(\Lambda + \lambda I)^{-1}b}{b^*(\Lambda + \lambda I)^{-1}b}$$

where λ is found in Step 2, and estimate the signal power as $h^*Rh$.

To conclude this complement, we note that the above CCM algorithm is quite similar to the RCM algorithm presented in Complement 6.5.4. The only differences are that the equation for λ associated with the CCM is slightly more complicated, and, more importantly, that it is harder to select the c needed in the CCM (for any DOA of interest) than it is to select ε in the RCM. As we have shown, for CCM one should choose c in the interval (6.5.80). Note that for c = 1/m we get λ → ∞ and h = a/m, which is the beamforming method. For $c = a^*R^{-2}a/(a^*R^{-1}a)^2$ we obtain λ = 0 and $h = h_{\rm CM}$, which is the standard Capon method. Values of c between these two extremes should be chosen in an application-dependent manner.

6.5.6 Spatial Amplitude and Phase Estimation (APES)

As explained in Section 6.3.2, the Capon method estimates the spatial spectrum by using a spatial filter that passes the signal impinging on the array from direction θ in a distortionless manner, and at the same time attenuates signals with DOAs different from θ as much as possible. The Capon method for temporal spectral analysis is based on exactly the same idea (see Section 5.4), as is the temporal APES method described in Complement 5.6.4. In this complement we will present an extension of APES that can be used for spatial spectral analysis.

Let θ denote a generic DOA and consider the equation (6.2.19),

$$y(t) = a(\theta)s(t) + e(t), \qquad t = 1, \ldots, N \tag{6.5.102}$$

that describes the array output, y(t), as a function of a signal, s(t), possibly impinging on the array from a DOA equal to θ, and a term, e(t), that includes noise along with any other signals whose DOAs are different from θ. We assume that the array is uniform and linear, in which case a(θ) is given by

$$a(\theta) = \left[1, e^{-i\omega_s}, \ldots, e^{-i(m-1)\omega_s}\right]^T \tag{6.5.103}$$

where m denotes the number of sensors in the array, and $\omega_s = (\omega_c d \sin\theta)/c$ is the spatial frequency (see (6.2.26) and (6.2.27)). As we will explain later, the spatial extension of APES presented in this complement appears to perform well only in the case of ULAs. While this is a limitation, it is not a serious one, because there are techniques which can be used to approximately transform the direction vector of a general array into the direction vector of a fictitious ULA (see, e.g., [Doron, Doron, and Weiss 1993]). Such a technique performs a relatively simple DOA-independent linear transformation of the array output snapshots; the so-obtained linearly transformed snapshots can then be used as the input to the spatial APES method presented here. See [Abrahamsson, Jakobsson, and Stoica 2004] for details on how to use the spatial APES approach of this complement for arrays that are not uniform and linear.

Let $\sigma_s^2$ denote the power of the signal s(t) in (6.5.102), which is the main parameter we want to estimate; note that the estimated signal power $\hat\sigma_s^2$, as a function of θ, provides an estimate of the spatial spectrum.
In this complement, we assume that $\{s(t)\}_{t=1}^N$ is an unknown deterministic sequence, and hence we define $\sigma_s^2$ as

$$\sigma_s^2 = \lim_{N\to\infty} \frac{1}{N}\sum_{t=1}^N |s(t)|^2 \tag{6.5.104}$$

An important difference between equation (6.5.102) and its temporal counterpart (see, e.g., equation (5.6.81) in Complement 5.6.6) is that in (6.5.102) the signal s(t) is completely unknown, whereas in the temporal case we had $s(t) = \beta e^{i\omega t}$ and only the amplitude was unknown. Because of this difference, the use of the APES principle for spatial spectral estimation is somewhat different from its use for temporal spectral estimation.

Remark: We remind the reader that $\{s(t)\}_{t=1}^N$ is assumed to be an unknown deterministic sequence here. The case in which {s(t)} is assumed to be stochastic is considered in Complement 6.5.3. Interestingly, application of the APES principle in the stochastic signal case leads to the (standard) Capon method! ■

Let $\bar m < m$ be an integer, and define the following two vectors:

$$\bar a_k = \left[e^{-i(k-1)\omega_s}, e^{-ik\omega_s}, \ldots, e^{-i(k+\bar m-2)\omega_s}\right]^T \quad (\bar m \times 1) \tag{6.5.105}$$

$$\bar y_k(t) = \left[y_k(t), y_{k+1}(t), \ldots, y_{k+\bar m-1}(t)\right]^T \quad (\bar m \times 1) \tag{6.5.106}$$

for k = 1, …, L, with

$$L = m - \bar m + 1 \tag{6.5.107}$$

In (6.5.106), $y_k(t)$ denotes the kth element of y(t); also, we omit the dependence of $\bar a_k$ on θ to simplify notation. The choice of the user parameter $\bar m$ will be discussed later. Owing to the assumed ULA structure, the direction subvectors $\{\bar a_k\}$ satisfy the following relations:

$$\bar a_k = e^{-i(k-1)\omega_s}\,\bar a_1, \qquad k = 2, \ldots, L \tag{6.5.108}$$

Consequently, $\bar y_k(t)$ can be written as (see (6.5.102)):

$$\bar y_k(t) = \bar a_k s(t) + \bar e_k(t) = e^{-i(k-1)\omega_s}\,\bar a_1 s(t) + \bar e_k(t) \tag{6.5.109}$$

where $\bar e_k(t)$ is a noise vector defined similarly to $\bar y_k(t)$. Let h denote the ($\bar m \times 1$) coefficient vector of a spatial filter that is applied to $\{e^{i(k-1)\omega_s}\bar y_k(t)\}_{k=1}^L$.
Then it follows from (6.5.109) that h passes the signal s(t) in each of these data sets in a distortionless manner if and only if:

$$h^*\bar a_1 = 1 \tag{6.5.110}$$

Using the above observations along with the APES principle presented in Complement 5.6.4, we can determine both the spatial filter h and an estimate of the complex-valued sequence $\{s(t)\}_{t=1}^N$ (we estimate both amplitude and phase; recall that APES stands for Amplitude and Phase EStimation) by solving the following linearly constrained least squares (LS) problem:

$$\min_{h;\,\{s(t)\}} \sum_{t=1}^N \sum_{k=1}^L \left|h^*\bar y_k(t)e^{i(k-1)\omega_s} - s(t)\right|^2 \quad \text{subject to: } h^*\bar a_1 = 1 \tag{6.5.111}$$

The quadratic criterion in (6.5.111) expresses our desire to make the outputs of the spatial filter, $\{h^*\bar y_k(t)e^{i(k-1)\omega_s}\}_{k=1}^L$, resemble a signal s(t) (that is independent of k) as much as possible, in a least squares sense. Said another way, the above LS criterion expresses our goal to make the filter h attenuate any signal in $\{\bar y_k(t)e^{i(k-1)\omega_s}\}_{k=1}^L$ whose DOA is different from θ, as much as possible. The linear constraint in (6.5.111) forces the spatial filter h to pass the signal s(t) undistorted.
To derive a solution to (6.5.111), let

$$g(t) = \frac{1}{L}\sum_{k=1}^L \bar y_k(t)e^{i(k-1)\omega_s} \tag{6.5.112}$$

and observe that

$$\frac{1}{L}\sum_{k=1}^L \left|h^*\bar y_k(t)e^{i(k-1)\omega_s} - s(t)\right|^2 = |s(t)|^2 + h^*\left[\frac{1}{L}\sum_{k=1}^L \bar y_k(t)\bar y_k^*(t)\right]h - h^*g(t)s^*(t) - g^*(t)h\,s(t)$$

$$= h^*\left[\frac{1}{L}\sum_{k=1}^L \bar y_k(t)\bar y_k^*(t)\right]h - h^*g(t)g^*(t)h + \left|s(t) - h^*g(t)\right|^2 \tag{6.5.113}$$

Hence, the sequence {s(t)} that minimizes (6.5.111), for fixed h, is given by

$$\hat s(t) = h^*g(t) \tag{6.5.114}$$

Inserting (6.5.114) into (6.5.111) (see also (6.5.113)) we obtain the reduced problem:

$$\min_h h^*\hat Q h \quad \text{subject to: } h^*\bar a_1 = 1 \tag{6.5.115}$$

where

$$\hat Q = \hat R - \hat G, \qquad \hat R = \frac{1}{N}\sum_{t=1}^N \frac{1}{L}\sum_{k=1}^L \bar y_k(t)\bar y_k^*(t), \qquad \hat G = \frac{1}{N}\sum_{t=1}^N g(t)g^*(t) \tag{6.5.116}$$

The solution to the quadratic problem with linear constraints in (6.5.115) can be obtained by using Result R35 in Appendix A:

$$\hat h = \frac{\hat Q^{-1}\bar a_1}{\bar a_1^*\hat Q^{-1}\bar a_1} \tag{6.5.117}$$

Using (6.5.117) in (6.5.114) we can obtain an estimate of the signal sequence, which may be of interest in some applications, as well as an estimate of the signal power:

$$\hat\sigma_s^2 = \frac{1}{N}\sum_{t=1}^N |\hat s(t)|^2 = \hat h^*\hat G\hat h \tag{6.5.118}$$

The above equation, as a function of the DOA θ, provides an estimate of the spatial spectrum. The matrix $\hat Q$ in (6.5.116) can be rewritten in the following form:

$$\hat Q = \frac{1}{N}\sum_{t=1}^N \frac{1}{L}\sum_{k=1}^L \left[e^{i(k-1)\omega_s}\bar y_k(t) - g(t)\right]\left[e^{i(k-1)\omega_s}\bar y_k(t) - g(t)\right]^* \tag{6.5.119}$$

It follows from (6.5.119) that $\hat Q$ is always positive semidefinite. For L = 1 (or, equivalently, $\bar m = m$) we have $\hat Q = 0$, because $g(t) = \bar y_1(t)$ for t = 1, …, N. Thus, for L = 1 (6.5.117) is not valid. This is expected: indeed, for L = 1 we can make (6.5.111) equal to zero, for any h, by choosing $\hat s(t) = h^*\bar y_1(t)$; consequently, the problem of minimizing (6.5.111) with respect to $(h; \{s(t)\}_{t=1}^N)$ is underdetermined for L = 1, and hence an infinite number of solutions exist. To prevent this from happening, we should choose L ≥ 2 (or, equivalently, $\bar m \le m - 1$).
For L ≥ 2 the ($\bar m \times \bar m$) matrix $\hat Q$ is a sum of NL outer products; if $NL \ge \bar m$, which is a weak condition, $\hat Q$ is almost surely strictly positive definite and hence nonsingular.

From a performance point of view, it turns out that a good choice of $\bar m$ is its maximum possible value:

$$\bar m = m - 1 \;\Longleftrightarrow\; L = 2 \tag{6.5.120}$$

A numerical study of performance, reported in [Gini and Lombardini 2002], supports the above choice of $\bar m$, and also suggests that the spatial APES method may outperform the Capon method in both spatial spectrum estimation and DOA estimation applications. The APES spatial filter is, however, more difficult to compute than is the Capon spatial filter, owing to the dependence of $\hat Q$ in (6.5.117) on the DOA.

In the remainder of this complement we will explain why the APES method may be expected to outperform the Capon method. In doing so we assume that $\bar m = m - 1$ (and thus L = 2), as in (6.5.120). Intuitively, this choice of $\bar m$ provides the APES filter with the maximum possible number of degrees of freedom, and hence it makes sense that it should lead to better resolution and interference rejection capability than would smaller values of $\bar m$.
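The spatial APES estimator (6.5.112)–(6.5.118) can be sketched in a few lines. In the toy example below, the snapshot data, source frequency, and noise level are all made up; with $\bar m = m - 1$ (i.e., L = 2), the estimated power at the true spatial frequency should be close to the unit source power and much larger than the estimate at a frequency where no source is present.

```python
import numpy as np

def spatial_apes(Y, omega_s, m_bar):
    """Spatial APES power estimate at spatial frequency omega_s.
    Y: m x N array of ULA snapshots; implements (6.5.112)-(6.5.118)."""
    m, N = Y.shape
    L = m - m_bar + 1                               # (6.5.107); require L >= 2
    a1 = np.exp(-1j * omega_s * np.arange(m_bar))   # \bar{a}_1 from (6.5.105)

    # phase-aligned subvectors e^{i(k-1)w_s} \bar{y}_k(t), and their mean g(t)
    Yk = np.stack([Y[k:k + m_bar, :] * np.exp(1j * k * omega_s) for k in range(L)])
    g = Yk.mean(axis=0)                             # (6.5.112), m_bar x N

    R_hat = sum(Yk[k] @ Yk[k].conj().T for k in range(L)) / (N * L)   # (6.5.116)
    G_hat = g @ g.conj().T / N
    Q_hat = R_hat - G_hat

    w = np.linalg.solve(Q_hat, a1)
    h = w / (a1.conj() @ w)                         # (6.5.117), h* a1 = 1
    s_hat = h.conj() @ g                            # (6.5.114)
    return np.mean(np.abs(s_hat) ** 2)              # (6.5.118)

# toy scenario: one unit-power source at omega0 plus weak noise (made up)
rng = np.random.default_rng(4)
m, N, omega0 = 8, 200, 0.8
a_full = np.exp(-1j * omega0 * np.arange(m))
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
Y = np.outer(a_full, s) + 0.05 * (rng.standard_normal((m, N))
                                  + 1j * rng.standard_normal((m, N)))

p_on = spatial_apes(Y, omega0, m - 1)        # m_bar = m - 1, i.e. L = 2
p_off = spatial_apes(Y, omega0 + 1.0, m - 1)
```

Scanning omega_s over a grid and plotting the returned power gives the APES spatial spectrum estimate.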
For L = 2 we have

$$g(t) = \frac{1}{2}\left[\bar y_1(t) + e^{i\omega_s}\bar y_2(t)\right] \tag{6.5.121}$$

and hence

$$\hat Q = \frac{1}{2N}\sum_{t=1}^N \left\{\frac{1}{4}\left[\bar y_1(t) - e^{i\omega_s}\bar y_2(t)\right]\left[\bar y_1(t) - e^{i\omega_s}\bar y_2(t)\right]^* + \frac{1}{4}\left[e^{i\omega_s}\bar y_2(t) - \bar y_1(t)\right]\left[e^{i\omega_s}\bar y_2(t) - \bar y_1(t)\right]^*\right\}$$

$$= \frac{1}{4N}\sum_{t=1}^N \left[\bar y_1(t) - e^{i\omega_s}\bar y_2(t)\right]\left[\bar y_1(t) - e^{i\omega_s}\bar y_2(t)\right]^* \tag{6.5.122}$$

It follows that the APES spatial filter is the solution to the problem (see (6.5.115))

$$\min_h \sum_{t=1}^N \left|h^*\left[\bar y_1(t) - e^{i\omega_s}\bar y_2(t)\right]\right|^2 \quad \text{subject to: } h^*\bar a_1 = 1 \tag{6.5.123}$$

and that the APES signal estimate is given by (see (6.5.114))

$$\hat s(t) = \frac{1}{2}h^*\left[\bar y_1(t) + e^{i\omega_s}\bar y_2(t)\right] \tag{6.5.124}$$

On the other hand, the Capon spatial filter is obtained as the solution to the problem

$$\min_h \sum_{t=1}^N |h^*y(t)|^2 \quad \text{subject to: } h^*a = 1 \tag{6.5.125}$$

and the Capon signal estimate is given by

$$\hat s(t) = h^*y(t) \tag{6.5.126}$$

To explain the main differences between the APES and Capon approaches, let us assume that, in addition to the signal of interest (SOI) s(t) impinging on the array from the DOA under consideration, θ, there is an interference signal i(t) that impinges on the array from another DOA, denoted $\theta_i$. We consider the situation in which only one interference signal is present to simplify the discussion, but the case of multiple interference signals can be treated similarly. The array output vector in (6.5.102) and the subvectors in (6.5.109) become

$$y(t) = a(\theta)s(t) + b(\theta_i)i(t) + e(t) \tag{6.5.127}$$
$$\bar y_1(t) = \bar a_1(\theta)s(t) + \bar b_1(\theta_i)i(t) + \bar e_1(t) \tag{6.5.128}$$
$$\bar y_2(t) = \bar a_2(\theta)s(t) + \bar b_2(\theta_i)i(t) + \bar e_2(t) \tag{6.5.129}$$

where the quantities b, $\bar b_1$, and $\bar b_2$ are defined similarly to a, $\bar a_1$, and $\bar a_2$. We have shown the dependence of the various quantities on θ and $\theta_i$ in equations (6.5.127)–(6.5.129), but will drop the DOA dependence in the remainder of the derivation to simplify notation.
For the above scenario, the Capon method is known to have poor performance in either of the following two situations:

(i) The SOI steering vector is imprecisely known, for example owing to pointing or calibration errors.

(ii) The SOI is highly correlated or coherent with the interference, which happens in multipath propagation or smart jamming scenarios.

To explain the difficulty of the Capon method in case (i), let us assume that the true steering vector of the SOI is $a_0 \ne a$. Then, by design, the Capon filter will be such that $|h^*a_0| \simeq 0$ (where ≃ 0 denotes a "small" value). Therefore, the SOI, whose steering vector is different from the assumed vector a, is treated as an interference signal and is attenuated or cancelled. As a consequence, the power of the SOI will be significantly underestimated, unless special measures are taken to make the Capon method robust against steering vector errors (see Complements 6.5.4 and 6.5.5).

The performance degradation of the Capon method in case (ii) is also easy to understand. Assume that the interference is coherent with the SOI, and hence that $i(t) = \rho s(t)$ for some nonzero constant ρ. Then (6.5.127) can be rewritten as

$$y(t) = (a + \rho b)s(t) + e(t) \tag{6.5.130}$$

which shows that the SOI steering vector is given by $(a + \rho b)$ in lieu of the assumed vector a. Consequently, the Capon filter will by design be such that $|h^*(a + \rho b)| \simeq 0$, and therefore the SOI will be attenuated or cancelled in the filter output $h^*y(t)$, as in case (i). In fact, case (ii) can be considered as an extreme example of case (i), in which the SOI steering vector errors can be significant. Modifying the Capon method to work well in the case of coherent multipath signals is thus a more difficult problem than modifying it to be robust to small steering vector errors. Next, let us consider the APES method in case (ii).
From (6.5.128) and (6.5.129), along with (6.5.108), we get

$$\bar y_1(t) - e^{i\omega_s}\bar y_2(t) = \left(\bar a_1 - e^{i\omega_s}\bar a_2\right)s(t) + \left(\bar b_1 - e^{i\omega_s}\bar b_2\right)i(t) + \bar e_1(t) - e^{i\omega_s}\bar e_2(t)$$

$$= \left[1 - e^{i(\omega_s-\omega_i)}\right]\bar b_1\,i(t) + \bar e_1(t) - e^{i\omega_s}\bar e_2(t) \tag{6.5.131}$$

and

$$\frac{1}{2}\left[\bar y_1(t) + e^{i\omega_s}\bar y_2(t)\right] = \frac{1}{2}\left(\bar a_1 + e^{i\omega_s}\bar a_2\right)s(t) + \frac{1}{2}\left(\bar b_1 + e^{i\omega_s}\bar b_2\right)i(t) + \frac{1}{2}\left[\bar e_1(t) + e^{i\omega_s}\bar e_2(t)\right]$$

$$= \bar a_1 s(t) + \frac{1}{2}\left[1 + e^{i(\omega_s-\omega_i)}\right]\bar b_1\,i(t) + \frac{1}{2}\left[\bar e_1(t) + e^{i\omega_s}\bar e_2(t)\right] \tag{6.5.132}$$

where $\omega_i = (\omega_c d \sin\theta_i)/c$ denotes the spatial frequency of the interference. It follows from (6.5.131) and the design criterion in (6.5.123) that the APES spatial filter will be such that

$$\left|1 - e^{i(\omega_s-\omega_i)}\right| \cdot \left|h^*\bar b_1\right| \simeq 0 \tag{6.5.133}$$

Hence, because the SOI is absent from the data vector in (6.5.131), the APES filter is able to cancel the interference only, despite the fact that the interference and the SOI are coherent. This interference rejection property of the APES filter (i.e., $|h^*\bar b_1| \simeq 0$) is precisely what is needed when estimating the SOI from the data in (6.5.132). To summarize, the APES method circumvents the problem in case (ii) by implicitly eliminating the signal from the data that is used to derive the spatial filter.

However, if there is more than one coherent interference in the observed data, then APES also breaks down, similarly to the Capon method. The reason is that the vector multiplying i(t) in (6.5.131) is no longer proportional to the vector multiplying i(t) in (6.5.132), and hence a filter h that, by design, cancels the interference i(t) in (6.5.131) is not guaranteed to have the desirable effect of cancelling i(t) in (6.5.132); the details are left to the interested reader.

Remark: A similar argument to the one above explains why APES will not work well for non-ULA array geometries, in spite of the fact that it can be extended to such geometries in a relatively straightforward manner.
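The signal cancellation step that makes (6.5.131) signal-free rests entirely on the shift relation (6.5.108). The sketch below (with arbitrary, made-up values of m and $\omega_s$) confirms that $\bar a_1 - e^{i\omega_s}\bar a_2 = 0$ for a ULA, so the term multiplying s(t) vanishes exactly.

```python
import numpy as np

m, omega_s = 8, 0.7
a = np.exp(-1j * omega_s * np.arange(m))   # full ULA steering vector (6.5.103)
a1, a2 = a[:m - 1], a[1:]                  # \bar{a}_1, \bar{a}_2 for m_bar = m - 1

# (6.5.108) gives a2 = e^{-i omega_s} a1, hence a1 - e^{i omega_s} a2 = 0,
# which is why the signal term cancels in (6.5.131)
residual = np.linalg.norm(a1 - np.exp(1j * omega_s) * a2)
```

The same two-line computation with the interference frequency $\omega_i$ in place of $\omega_s$ shows that the interference term does not cancel unless $\omega_i = \omega_s$, which is the content of (6.5.133).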
Specifically, for non-ULA geometries, the steering vectors of the interference terms in the data sets used to obtain h and to estimate s(t), respectively, are not proportional to one another. As a consequence, the design objective does not provide the APES filter with the desired capability of attenuating the interference terms in the data that is used to estimate {s(t)}. ■

Next consider the APES method in case (i). To simplify the discussion, let us assume that there are no calibration errors but only a pointing error, so that the true spatial frequency of the SOI is $\omega_s^0 \ne \omega_s$. Then equation (6.5.131) becomes

$$\bar y_1(t) - e^{i\omega_s}\bar y_2(t) = \left[1 - e^{i(\omega_s-\omega_s^0)}\right]\bar a_1^0\,s(t) + \left[1 - e^{i(\omega_s-\omega_i)}\right]\bar b_1\,i(t) + \bar e_1(t) - e^{i\omega_s}\bar e_2(t) \tag{6.5.134}$$

It follows that in case (i) the APES spatial filter tends to cancel the SOI as well, in addition to cancelling the interference. However, the pointing errors are usually quite small, and therefore the residual term of s(t) in (6.5.134) is small as well. Hence, the SOI may well pass through the APES filter (i.e., $|h^*\bar a_1^0|$ may be reasonably close to $|h^*\bar a_1| = 1$), because the filter uses most of its degrees of freedom to cancel the much stronger interference term in (6.5.134). As a consequence, APES is less sensitive to steering vector errors than is the Capon method.

The above discussion also explains why APES can provide better power estimates than the Capon method, even in "ideal" cases in which there are no multipath signals that are coherent with the SOI and no steering vector errors, but the number of snapshots N is not very large. Indeed, as argued in Complement 6.5.5, the finite-sample effects associated with practical values of N can be viewed as inducing both correlation among the signals and steering vector errors, to which the APES method is less sensitive than the Capon method, as explained above.
We also note that the power of the elements of the noise vector in the data in (6.5.131), which is used to derive the APES filter, is larger than the power of the noise elements in the raw data y(t) that is used to compute the Capon filter. Somewhat counterintuitively, this is another potential advantage of the APES method over the Capon method. Indeed, the increased noise power in the data used by APES has a regularizing effect on the APES filter, which keeps the filter noise gain down, whereas the Capon filter is known to have a relatively large noise gain that can have a detrimental effect on signal power estimation (see Complement 6.5.5).

On the downside, APES has been found to have a slightly lower resolution than the Capon method (see, e.g., [Jakobsson and Stoica 2000]). Our previous discussion also provides a simple explanation of this result: when the interference and the SOI are closely spaced (i.e., when $\omega_s \simeq \omega_i$), the first factor in (6.5.133) becomes rather small, which may allow the second factor to increase somewhat. This explains why the beamwidth of the APES spatial filter may be larger than that of the Capon filter, and hence why APES may have a slightly lower resolution.

6.5.7 The CLEAN Algorithm

The CLEAN algorithm is a semi-parametric method that can be used for spatial spectral estimation. As we will see, this algorithm can be introduced in a nonparametric fashion (see [Högbom 1974]), yet its performance depends heavily on an implicit parametric assumption about the structure of the spatial covariance matrix; thus, CLEAN lies in between the classes of nonparametric and parametric approaches, and it can be called a semi-parametric approach. There is a significant literature about CLEAN and its many applications in diverse areas, including array signal processing, image processing, and astronomy (see, e.g., [Cornwell and Bridle 1996] and its references).
Our discussion of CLEAN will focus on its application to spatial spectral analysis and DOA estimation. First, we present an intuitive motivation of CLEAN. Consider the beamforming spatial spectral estimate in (6.3.18):

$$\hat\phi_1(\theta) = a^*(\theta)\hat R a(\theta) \tag{6.5.135}$$

where a(θ) and $\hat R$ are defined as in Section 6.3.1. Let

$$\hat\theta_1 = \arg\max_\theta \hat\phi_1(\theta) \tag{6.5.136}$$
$$\hat\sigma_1^2 = \frac{1}{m^2}\hat\phi_1(\hat\theta_1) \tag{6.5.137}$$

In words, $\hat\sigma_1^2$ is the scaled height of the highest peak of $\hat\phi_1(\theta)$, and $\hat\theta_1$ is its corresponding DOA (see (6.3.16) and (6.3.18)). As we know, the beamforming method suffers from resolution and leakage problems. However, the dominant peak of the beamforming spectrum, $\hat\phi_1(\theta)$, is likely to indicate that there is a source, or possibly several closely spaced sources, at or in the vicinity of $\hat\theta_1$. The covariance matrix of the part of the array output due to a source signal with DOA equal to $\hat\theta_1$ and power equal to $\hat\sigma_1^2$ is given by (see, e.g., (6.2.19)):

$$\hat\sigma_1^2\,a(\hat\theta_1)a^*(\hat\theta_1) \tag{6.5.138}$$

Consequently, the expected term in $\hat\phi_1(\theta)$ due to (6.5.138) is

$$\hat\sigma_1^2\left|a^*(\theta)a(\hat\theta_1)\right|^2 \tag{6.5.139}$$

We partly eliminate the term (6.5.139) from $\hat\phi_1(\theta)$, and hence define a new spectrum

$$\hat\phi_2(\theta) = \hat\phi_1(\theta) - \rho\hat\sigma_1^2\left|a^*(\theta)a(\hat\theta_1)\right|^2 \tag{6.5.140}$$

where ρ is a user parameter that satisfies

$$\rho \in (0, 1] \tag{6.5.141}$$

The reason for using a value of ρ < 1 in (6.5.140) can be explained as follows.

(a) The assumption that there is a source with parameters $(\hat\sigma_1^2, \hat\theta_1)$ corresponding to the maximum peak of the beamforming spectrum, which led to (6.5.140), may not necessarily be true. For example, there may be several sources clustered around $\hat\theta_1$ that were not resolved by the beamforming method. Subtracting only a (small) part of the beamforming response to a source signal with parameters $(\hat\sigma_1^2, \hat\theta_1)$ leaves "some power" at and around $\hat\theta_1$.
Hence, the algorithm will likely return to this DOA region of the beamforming spectrum in future iterations, when it may have a better chance to resolve the power around $\hat\theta_1$ into its true constituent components.

(b) Even if there is indeed a single source at or close to $\hat\theta_1$, the estimation of its parameters may be affected by leakage from other sources; this leakage will be particularly strong when the source signal in question is correlated with other source signals. In such a case, (6.5.139) is a poor estimate of the contribution of the source in question to the beamforming spectrum. By subtracting only a part of (6.5.139) from $\hat\phi_1(\theta)$, we give the algorithm a chance to improve the parameter estimates of the source at or close to $\hat\theta_1$ in future iterations, similarly to what we said in (a) above.

(c) In both situations above, and possibly in other cases as well, in which (6.5.139) is a poor approximation of the part of the beamforming spectrum that is due to the source(s) at or around $\hat\theta_1$, subtracting (6.5.139) from $\hat\phi_1(\theta)$ fully (i.e., using ρ = 1) may yield a spatial spectrum that takes on negative values at some DOAs (which it should not). Using ρ < 1 in (6.5.140) reduces the likelihood that this undesirable event happens too early in the iterative process of the CLEAN algorithm (see below).

The calculation of $\hat\phi_2(\theta)$, as in (6.5.140), completes the first iteration of CLEAN. In the second iteration, we proceed similarly but using $\hat\phi_2(\theta)$ instead of $\hat\phi_1(\theta)$. Hence, we let

$$\hat\theta_2 = \arg\max_\theta \hat\phi_2(\theta) \tag{6.5.142}$$
$$\hat\sigma_2^2 = \frac{1}{m^2}\hat\phi_2(\hat\theta_2) \tag{6.5.143}$$

and

$$\hat\phi_3(\theta) = \hat\phi_2(\theta) - \rho\hat\sigma_2^2\left|a^*(\theta)a(\hat\theta_2)\right|^2 \tag{6.5.144}$$

Continuing the iterations in the same manner as above yields the CLEAN algorithm, a compact description of which is as follows:

The CLEAN Algorithm

Initialization: $\hat\phi_1(\theta) = a^*(\theta)\hat R a(\theta)$

For k = 1, 2, … do:

$$\hat\theta_k = \arg\max_\theta \hat\phi_k(\theta)$$
$$\hat\sigma_k^2 = \frac{1}{m^2}\hat\phi_k(\hat\theta_k)$$
$$\hat\phi_{k+1}(\theta) = \hat\phi_k(\theta) - \rho\hat\sigma_k^2\left|a^*(\theta)a(\hat\theta_k)\right|^2$$

We continue the iterative process in the CLEAN algorithm until either we complete a prespecified number of iterations or $\hat\phi_k(\theta)$ for some k has become (too) negative at some DOAs (see, e.g., [Högbom 1974; Cornwell and Bridle 1996]). Regarding the choice of ρ in the CLEAN algorithm, while there are no clear guidelines about how this choice should be made to enhance the performance of the CLEAN algorithm in a given application, ρ ∈ [0.1, 0.25] is usually recommended (see, e.g., [Högbom 1974; Cornwell and Bridle 1996; Schwarz 1978b]). We will make further comments on the choice of ρ later in this complement.

In the CLEAN literature, the beamforming spectral estimate $\hat\phi_1(\theta)$ that forms the starting point of CLEAN is called the "dirty" spectrum, due to its mainlobe smearing and sidelobe leakage problems. The discrete spatial spectral estimate $\{\rho\hat\sigma_k^2, \hat\theta_k\}_{k=1,2,\ldots}$ provided by the algorithm (or a suitably smoothed version of it) is called the "clean" spectrum. The iterative process that yields the "clean" spectrum is, then, called the CLEAN algorithm.

It is interesting to observe that the above derivation of CLEAN is not based on a parametric model of the array output or of its covariance matrix, of the type considered in (6.2.21) or (6.4.3). More precisely, we have not made any assumption that there is a finite number of point source signals impinging on the array, nor that the noise is spatially white. However, we have used the assumption that the covariance matrix due to a source signal has the form in (6.5.138), which cannot be true unless the signals impinging on the array are uncorrelated with one another. CLEAN is known to have poor performance if this parametric assumption does not hold. Hence, CLEAN is a combined nonparametric-parametric approach, which we call semi-parametric for short.
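A minimal sketch of the CLEAN iteration described above is given below. The two-source scenario, array size, DOA grid, and the choice ρ = 0.2 are made up for illustration; the first extracted component should land near the stronger source's DOA.

```python
import numpy as np

def clean(R_hat, steering, thetas, rho=0.2, n_iter=30):
    """CLEAN iterations; steering(theta) returns a(theta) for the array.
    Returns the discrete 'clean' spectrum {(rho * sigma_k^2, theta_k)}."""
    m = R_hat.shape[0]
    A = np.column_stack([steering(th) for th in thetas])   # m x grid
    components = []
    Rk = R_hat.copy()
    for _ in range(n_iter):
        # phi_k(theta) = a*(theta) R_k a(theta) over the whole grid
        phi = np.real(np.einsum('pg,pq,qg->g', A.conj(), Rk, A))
        j = int(np.argmax(phi))
        sig2 = phi[j] / m**2
        if sig2 <= 0:                      # spectrum has gone negative: stop
            break
        components.append((rho * sig2, thetas[j]))
        ak = A[:, j]
        Rk = Rk - rho * sig2 * np.outer(ak, ak.conj())
    return components, Rk

# toy ULA scenario (made up): two uncorrelated sources plus weak white noise
m = 10
steer = lambda th: np.exp(-1j * np.pi * np.sin(th) * np.arange(m))
R = (4.0 * np.outer(steer(0.3), steer(0.3).conj())
     + 1.0 * np.outer(steer(-0.5), steer(-0.5).conj())
     + 0.01 * np.eye(m))
grid = np.linspace(-np.pi / 2, np.pi / 2, 361)
comps, _ = clean(R, steer, grid, rho=0.2, n_iter=25)
top_theta = comps[0][1]    # DOA of the first (strongest) extracted component
```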
Next, we present a more formal derivation of the CLEAN algorithm. Consider the following semi-parametric model of the array output covariance matrix:

$$R = \sigma_1^2 a(\theta_1)a^*(\theta_1) + \sigma_2^2 a(\theta_2)a^*(\theta_2) + \cdots \tag{6.5.145}$$

As implied by the previous discussion, this is the covariance model assumed by CLEAN. Let us fit (6.5.145) to the sample covariance matrix $\hat R$ in a least squares sense:

$$\min_{\{\sigma_k^2, \theta_k\}} \left\|\hat R - \sigma_1^2 a(\theta_1)a^*(\theta_1) - \sigma_2^2 a(\theta_2)a^*(\theta_2) - \cdots\right\|^2 \tag{6.5.146}$$

We will show that CLEAN is a sequential algorithm for approximately minimizing the above LS covariance fitting criterion. We begin by assuming that the initial estimates of $\sigma_2^2, \sigma_3^2, \ldots$ are equal to zero (in which case $\theta_2, \theta_3, \ldots$ are immaterial). Consequently, we obtain an estimate of the pair $(\sigma_1^2, \theta_1)$ by minimizing (6.5.146) with $\sigma_2^2 = \sigma_3^2 = \cdots = 0$:

$$\min_{\sigma_1^2, \theta_1} \left\|\hat R - \sigma_1^2 a(\theta_1)a^*(\theta_1)\right\|^2 \tag{6.5.147}$$

As shown in Complement 6.5.3, the solution to (6.5.147) is given by

$$\hat\theta_1 = \arg\max_\theta \hat\phi_1(\theta); \qquad \hat\sigma_1^2 = \frac{1}{m^2}\hat\phi_1(\hat\theta_1) \tag{6.5.148}$$

where $\hat\phi_1(\theta)$ is as defined previously. We reduce the above power estimate by using $\rho\hat\sigma_1^2$ in lieu of $\hat\sigma_1^2$. The reasons for this reduction are discussed in points (a)–(c) above; in particular, we would like the residual covariance matrix $\hat R - \rho\hat\sigma_1^2 a(\hat\theta_1)a^*(\hat\theta_1)$ to be positive definite. We will discuss this aspect in more detail after completing the derivation of CLEAN.

Next, we obtain an estimate of the pair $(\sigma_2^2, \theta_2)$ by minimizing (6.5.146) with $\sigma_1^2 = \rho\hat\sigma_1^2$, $\theta_1 = \hat\theta_1$, and $\sigma_3^2 = \sigma_4^2 = \cdots = 0$:

$$\min_{\sigma_2^2, \theta_2} \left\|\hat R - \rho\hat\sigma_1^2 a(\hat\theta_1)a^*(\hat\theta_1) - \sigma_2^2 a(\theta_2)a^*(\theta_2)\right\|^2 \tag{6.5.149}$$

The solution to (6.5.149) can be shown to be (similarly to solving (6.5.147)):

$$\hat\theta_2 = \arg\max_\theta \hat\phi_2(\theta); \qquad \hat\sigma_2^2 = \frac{1}{m^2}\hat\phi_2(\hat\theta_2) \tag{6.5.150}$$

where

$$\hat\phi_2(\theta) = a^*(\theta)\left[\hat R - \rho\hat\sigma_1^2 a(\hat\theta_1)a^*(\hat\theta_1)\right]a(\theta) = \hat\phi_1(\theta) - \rho\hat\sigma_1^2\left|a^*(\theta)a(\hat\theta_1)\right|^2 \tag{6.5.151}$$

Observe that (6.5.148) and (6.5.150) coincide with (6.5.136)–(6.5.137) and (6.5.142)–(6.5.143).
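The closed-form solution (6.5.148) for fixed θ, i.e., the power estimate $\hat\sigma_1^2 = \hat\phi_1(\hat\theta_1)/m^2$, can be checked against a brute-force minimization of the rank-one fitting criterion (6.5.147). The sample covariance and steering vector below are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
m = 6
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
R_hat = A @ A.conj().T / m                 # made-up sample covariance
a = np.exp(-1j * 0.4 * np.arange(m))       # fixed steering vector, ||a||^2 = m

# closed form: minimizing ||R_hat - s2 a a*||^2 over s2 (theta fixed) gives
# s2 = a* R_hat a / m^2, which is (6.5.148) evaluated at this theta
s2_closed = np.real(a.conj() @ R_hat @ a) / m**2

# brute-force check of the same minimization on a fine grid of s2 values
grid = np.linspace(0.0, 2.0 * s2_closed, 4001)
cost = [np.linalg.norm(R_hat - s2 * np.outer(a, a.conj()))**2 for s2 in grid]
s2_grid = grid[int(np.argmin(cost))]
```

Expanding the Frobenius norm gives $\|\hat R\|^2 - 2\sigma^2 a^*\hat R a + \sigma^4 m^2$, whose minimizer in $\sigma^2$ is exactly $a^*\hat R a/m^2$; the grid search should agree with this to within the grid spacing.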
Evidently, continuing the above iterative process, for which (6.5.148) and (6.5.150) are the first two steps, leads to the CLEAN algorithm on page 314.

The above derivation of CLEAN sheds some light on the properties of this algorithm. First, note that the LS covariance fitting criterion in (6.5.146) is decreased at each iteration of CLEAN. For instance, consider the first iteration. A straightforward calculation shows that:
$$\left\|\hat R - \rho\hat\sigma_1^2 a(\hat\theta_1)a^*(\hat\theta_1)\right\|^2 = \|\hat R\|^2 - 2\rho\hat\sigma_1^2\, a^*(\hat\theta_1)\hat R a(\hat\theta_1) + m^2\rho^2\hat\sigma_1^4 = \|\hat R\|^2 - \rho(2-\rho)m^2\hat\sigma_1^4 \qquad (6.5.152)$$
Clearly, (6.5.152) is less than $\|\hat R\|^2$ for any $\rho \in (0, 2)$, and the maximum decrease occurs for $\rho = 1$ (as expected). A similar calculation shows that the criterion in (6.5.146) monotonically decreases as we continue the iterative process, for any $\rho \in (0, 2)$, and that at each iteration the maximum decrease occurs for $\rho = 1$. As a consequence, we might think of choosing $\rho = 1$, but this is not advisable. The reason is that our goal is not only to decrease the fitting criterion (6.5.146) as much and as fast as possible, but also to ensure that the residual covariance matrices
$$\hat R_{k+1} = \hat R_k - \rho\hat\sigma_k^2\, a(\hat\theta_k)a^*(\hat\theta_k); \qquad \hat R_1 = \hat R \qquad (6.5.153)$$
remain positive definite for $k = 1, 2, \ldots$; otherwise, fitting $\sigma_{k+1}^2 a(\theta_{k+1})a^*(\theta_{k+1})$ to $\hat R_{k+1}$ would make little statistical sense. By a calculation similar to that in equation (6.5.33) of Complement 6.5.3, it can be shown that the condition $\hat R_{k+1} > 0$ is equivalent to
$$\rho < \frac{1}{\hat\sigma_k^2\, a^*(\hat\theta_k)\hat R_k^{-1} a(\hat\theta_k)} \qquad (6.5.154)$$
Note that the right-hand side of (6.5.154) is bounded above by one, because by the Cauchy–Schwarz inequality:
$$\hat\sigma_k^2\, a^*(\hat\theta_k)\hat R_k^{-1}a(\hat\theta_k) = \frac{1}{m^2}\left[a^*(\hat\theta_k)\hat R_k a(\hat\theta_k)\right]\left[a^*(\hat\theta_k)\hat R_k^{-1} a(\hat\theta_k)\right] = \frac{1}{m^2}\left\|\hat R_k^{1/2} a(\hat\theta_k)\right\|^2\left\|\hat R_k^{-1/2} a(\hat\theta_k)\right\|^2 \geq \frac{1}{m^2}\left|a^*(\hat\theta_k)\hat R_k^{1/2}\hat R_k^{-1/2} a(\hat\theta_k)\right|^2 = \frac{1}{m^2}\left|a^*(\hat\theta_k)a(\hat\theta_k)\right|^2 = 1$$
Also note that, depending on the scenario under consideration, satisfaction of the inequality in (6.5.154) for $k = 1, 2, \ldots$
may require choosing a value for $\rho$ much less than one. In summary, the above discussion has provided a precise argument for choosing $\rho < 1$ (or even $\rho \ll 1$) in the CLEAN algorithm.

The LS covariance fitting derivation of CLEAN also makes the semi-parametric nature of CLEAN more transparent. Specifically, the discussion has shown that CLEAN fits the semi-parametric covariance model in (6.5.145) to the sample covariance matrix $\hat R$.

Finally, note that although there is a significant literature on CLEAN, its statistical properties are not well understood; in fact, other than the preliminary study of CLEAN reported in [Schwarz 1978b] there appear to be very few statistical studies in the literature. The derivation of CLEAN based on the LS covariance fitting criterion in (6.5.146) may also be useful to understand the statistical properties of CLEAN. However, we will not attempt to provide a statistical analysis of CLEAN in this complement.

6.5.8 Unstructured and Persymmetric ML Estimates of the Covariance Matrix

Let $\{y(t)\}_{t=1,2,\ldots}$ be a sequence of independent and identically distributed (i.i.d.) $m \times 1$ random vectors with mean zero and covariance matrix $R$. The array output given by equation (6.2.21) is an example of such a sequence, under the assumption that the signal $s(t)$ and the noise $e(t)$ in (6.2.21) are temporally white. Furthermore, let $y(t)$ be circularly Gaussian distributed (see Section B.3 in Appendix B), in which case its probability density function is given by
$$p(y(t)) = \frac{1}{\pi^m |R|}\, e^{-y^*(t) R^{-1} y(t)} \qquad (6.5.155)$$
Assume that $N$ observations of $\{y(t)\}$ are available:
$$\{y(1), \ldots, y(N)\} \qquad (6.5.156)$$
Owing to the i.i.d. assumption made on the sequence $\{y(t)\}_{t=1,2,\ldots}$, the probability density function of the sample in (6.5.156) is given by:
$$p(y(1), \ldots, y(N)) = \prod_{t=1}^{N} p(y(t)) = \frac{1}{\pi^{mN}|R|^N}\, e^{-\sum_{t=1}^{N} y^*(t) R^{-1} y(t)} \qquad (6.5.157)$$
The maximum likelihood (ML) estimate of the covariance matrix $R$, based on the sample in (6.5.156), is given by the maximizer of the likelihood function in (6.5.157) (see Section B.1 in Appendix B) or, equivalently, by the minimizer of the negative log-likelihood function:
$$-\ln p(y(1), \ldots, y(N)) = mN\ln(\pi) + N\ln|R| + \sum_{t=1}^{N} y^*(t) R^{-1} y(t) \qquad (6.5.158)$$
The part of (6.5.158) that depends on $R$ is given by (after multiplying by $\frac{1}{N}$)
$$\ln|R| + \frac{1}{N}\sum_{t=1}^{N} y^*(t) R^{-1} y(t) = \ln|R| + \mathrm{tr}\left(R^{-1}\hat R\right) \qquad (6.5.159)$$
where
$$\hat R = \frac{1}{N}\sum_{t=1}^{N} y(t)y^*(t) \qquad (m \times m) \qquad (6.5.160)$$
In this complement we discuss the minimization of (6.5.159) with respect to $R$, which yields the ML estimate of $R$, under either of the following two assumptions:

A: $R$ has no assumed structure; or

B: $R$ is persymmetric.

As explained in Section 4.8, $R$ is persymmetric (or centrosymmetric) if and only if
$$J R^T J = R \iff R = \frac{1}{2}\left(R + J R^T J\right) \qquad (6.5.161)$$
where $J$ is the so-called reversal matrix defined in (4.8.4).

Remark: If $y(t)$ is the output of an array that is uniform and linear and the source signals are uncorrelated with one another, then the covariance matrix $R$ is Toeplitz, and hence persymmetric.
■

We will show that the unstructured ML estimate of $R$, denoted $\hat R_{U,ML}$, is given by the standard sample covariance matrix in (6.5.160),
$$\hat R_{U,ML} = \hat R \qquad (6.5.162)$$
whereas the persymmetric ML estimate of $R$, denoted $\hat R_{P,ML}$, is given by
$$\hat R_{P,ML} = \frac{1}{2}\left(\hat R + J\hat R^T J\right) \qquad (6.5.163)$$
To prove (6.5.162) we need to show that (see (6.5.159)):
$$\ln|R| + \mathrm{tr}\left(R^{-1}\hat R\right) \geq \ln|\hat R| + m \quad \text{for any } R > 0 \qquad (6.5.164)$$
Let $\hat C$ be a square root of $\hat R$ (see Definition D12 in Appendix A) and note that
$$\mathrm{tr}\left(R^{-1}\hat R\right) = \mathrm{tr}\left(R^{-1}\hat C\hat C^*\right) = \mathrm{tr}\left(\hat C^* R^{-1}\hat C\right) \qquad (6.5.165)$$
Using (6.5.165) in (6.5.164) we obtain the following series of equivalences:
$$(6.5.164) \iff \mathrm{tr}\left(\hat C^* R^{-1}\hat C\right) - \ln\left|R^{-1}\hat R\right| \geq m \iff \mathrm{tr}\left(\hat C^* R^{-1}\hat C\right) - \ln\left|\hat C^* R^{-1}\hat C\right| \geq m \iff \sum_{k=1}^{m}\left(\lambda_k - \ln\lambda_k - 1\right) \geq 0 \qquad (6.5.166)$$
where $\{\lambda_k\}$ are the eigenvalues of the matrix $\hat C^* R^{-1}\hat C$. Next we show, with reference to (6.5.166), that
$$f(\lambda) \triangleq \lambda - \ln\lambda - 1 \geq 0 \quad \text{for any } \lambda > 0 \qquad (6.5.167)$$
To verify (6.5.167), observe that
$$f'(\lambda) = 1 - \frac{1}{\lambda}; \qquad f''(\lambda) = \frac{1}{\lambda^2}$$
Hence, the function $f(\lambda)$ in (6.5.167) has a unique minimum at $\lambda = 1$, and $f(1) = 0$; this proves (6.5.167). With this observation, the proof of (6.5.166), and therefore of (6.5.162), is complete.

The proof of (6.5.163) is even simpler. In view of (6.5.161), we have that
$$\mathrm{tr}\left(R^{-1}\hat R\right) = \mathrm{tr}\left[\left(J R^T J\right)^{-1}\hat R\right] = \mathrm{tr}\left(R^{-T} J\hat R J\right) = \mathrm{tr}\left(R^{-1} J\hat R^T J\right) \qquad (6.5.168)$$
Hence, the function to be minimized with respect to $R$ (under the constraint (6.5.161)) can be written as:
$$\ln|R| + \mathrm{tr}\left[R^{-1}\cdot\frac{1}{2}\left(\hat R + J\hat R^T J\right)\right] \qquad (6.5.169)$$
As shown earlier in this complement, the unstructured minimizer of (6.5.169) is given by
$$R = \frac{1}{2}\left(\hat R + J\hat R^T J\right) \qquad (6.5.170)$$
Because (6.5.170) satisfies the persymmetry constraint, by construction, it also gives the constrained minimizer of the negative log-likelihood function, and hence the proof of (6.5.163) is concluded as well.
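Both estimates are one-liners in practice. The sketch below forms (6.5.160) and (6.5.163) from a data matrix; it is an illustration, with arbitrary demo dimensions, not reference code.

```python
import numpy as np

def ml_cov_estimates(Y):
    """Unstructured and persymmetric ML covariance estimates from an
    m x N data matrix Y = [y(1), ..., y(N)]; a sketch of (6.5.160)
    and (6.5.163)."""
    m, N = Y.shape
    R_u = Y @ Y.conj().T / N                 # (6.5.160): sample covariance
    J = np.fliplr(np.eye(m))                 # reversal (exchange) matrix
    R_p = 0.5 * (R_u + J @ R_u.T @ J)        # (6.5.163): persymmetric average
    return R_u, R_p
```

By construction `R_p` satisfies the persymmetry constraint (6.5.161), and `R_u` minimizes the negative log-likelihood term $\ln|R| + \mathrm{tr}(R^{-1}\hat R)$ over all $R > 0$.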
The reader interested in more details on the topic of this complement, including a comparison of the statistical estimation errors associated with $\hat R_{U,ML}$ and $\hat R_{P,ML}$, can consult [Jansson and Stoica 1999].

6.6 EXERCISES

Exercise 6.1: Source Localization using a Sensor in Motion

This exercise illustrates how the directions of arrival of planar waves can be determined by using a single moving sensor. Conceptually this problem is related to that of DOA estimation by sensor array methods. Indeed, we can think of a sensor in motion as creating a synthetic aperture similar to the one corresponding to a physical array of spatially distributed sensors.

Assume that the sensor has a linear motion with constant speed equal to $v$. Also, assume that the sources are far field point emitters at fixed locations in the same plane as the sensor. Let $\theta_k$ denote the $k$th DOA parameter (defined as the angle between the direction of wave propagation and the normal to the sensor trajectory). Finally, assume that the sources emit sinusoidal signals $\{\alpha_k e^{i\omega t}\}_{k=1}^n$ with the same (center) frequency $\omega$. These signals may be reflections of a probing sinusoidal signal from different point scatterers of a target, in which case it is not restrictive to assume that they all have the same frequency.

Show that, under the previous assumptions and after elimination of the high-frequency component corresponding to the frequency $\omega$, the sensor output signal can be written as
$$s(t) = \sum_{k=1}^{n} \alpha_k e^{i\omega_k^D t} + e(t) \qquad (6.6.1)$$
where $e(t)$ is measurement noise, and where $\omega_k^D$ is the $k$th Doppler frequency defined by:
$$\omega_k^D = -\frac{v\,\omega}{c}\sin\theta_k$$
with $c$ denoting the velocity of signal propagation. Conclude from (6.6.1) that the DOA estimation problem associated with the scenario under consideration can be solved by using the estimation methods discussed in this chapter and in Chapter 4 (provided that the sensor speed $v$ can be accurately determined).
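The Doppler-to-DOA mapping of Exercise 6.1 is easy to exercise numerically: simulate the demodulated output (6.6.1) for one source, estimate the Doppler frequency from an FFT peak, and invert it for the angle. All numeric values below (speeds, frequency, sample counts) are made-up demo values, not from the text.

```python
import numpy as np

v, c = 30.0, 1500.0               # sensor speed, propagation speed (demo values)
omega = 2 * np.pi * 1000.0        # center frequency (rad/s)
theta_true = np.deg2rad(20.0)
omega_d = -v * omega / c * np.sin(theta_true)    # Doppler frequency, eq. below (6.6.1)

fs, N = 200.0, 4096               # sampling rate (Hz), snapshot count
t = np.arange(N) / fs
s = np.exp(1j * omega_d * t)      # noise-free demodulated sensor output

# Estimate omega_d as the strongest FFT bin (bin frequencies in rad/s),
# then invert omega_d = -(v*omega/c) sin(theta) for the DOA
freqs = 2 * np.pi * np.fft.fftfreq(N, d=1 / fs)
omega_hat = freqs[np.argmax(np.abs(np.fft.fft(s)))]
theta_hat = np.arcsin(-omega_hat * c / (v * omega))
```

The angle error here is set by the FFT bin width, i.e. by the synthetic aperture length, which is exactly the point of the exercise.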
Exercise 6.2: Beamforming Resolution for Uniform Linear Arrays

Consider a ULA comprising $m$ sensors, with inter-element spacing equal to $d$. Let $\lambda$ denote the wavelength of the signals impinging on the array. According to the discussion in Chapter 2, the spatial frequency resolution of the beamforming used with the above ULA is given by
$$\Delta\omega_s = \frac{2\pi}{m} \iff \Delta f_s = \frac{1}{m} \qquad (6.6.2)$$
Make use of the previous observation to show that the DOA resolution of beamforming for signals coming from broadside is
$$\Delta\theta \simeq \sin^{-1}(1/L) \qquad (6.6.3)$$
where $L$ is the array's length measured in wavelengths:
$$L = \frac{(m-1)d}{\lambda} \qquad (6.6.4)$$
Explain how (6.6.3) approximately reduces to (6.3.20), for sufficiently large $L$. Next, show that for signals impinging from an arbitrary direction angle $\theta$, the DOA resolution of beamforming is approximately:
$$\Delta\theta \simeq \frac{1}{L|\cos\theta|} \qquad (6.6.5)$$
Hence, for signals coming from nearly end-fire directions, the DOA resolution is much worse than what is suggested in (6.3.20).

Exercise 6.3: Beamforming Resolution for Arbitrary Arrays

The beampattern
$$W(\theta) = |a^*(\theta)a(\theta_0)|^2 \quad \text{(some } \theta_0\text{)}$$
has the same shape as a spectral window: it has a peak at $\theta = \theta_0$, is symmetric about that point, and the peak is narrow (for large enough values of $m$). Consequently the beamwidth of the array with direction vector $a(\theta)$ can approximately be derived by using the window bandwidth formula proven in Exercise 2.15:
$$\Delta\theta \simeq 2\sqrt{|W(\theta_0)/W''(\theta_0)|} \qquad (6.6.6)$$
Now, the array's beamwidth and the resolution of beamforming are closely related. To see this, consider the case where the array output covariance matrix is given by (6.4.3). Let $n = 2$, and assume that $P = I$ (for simplicity of explanation). The average beamforming spectral function is then given by:
$$a^*(\theta)Ra(\theta) = |a^*(\theta)a(\theta_1)|^2 + |a^*(\theta)a(\theta_2)|^2 + m\sigma^2$$
which clearly shows that the sources with DOAs $\theta_1$ and $\theta_2$ are resolvable by beamforming if and only if $|\theta_1 - \theta_2|$ is larger than the array's beamwidth.
Consequently, we can approximately determine the beamforming resolution by using (6.6.6). Specialize equation (6.6.6) to a ULA and compare to the results obtained in Exercise 6.2.

Exercise 6.4: Beamforming Resolution for L-Shaped Arrays

Consider an $m$-element array, with $m$ odd, shaped as an "L" with element spacing $d$. Thus, the array elements are located at points $(0, 0), (0, d), \ldots, (0, d(m-1)/2)$ and $(d, 0), \ldots, (d(m-1)/2, 0)$. Using the results in Exercise 6.3, find the DOA resolution of beamforming for signals coming from an angle $\theta$. What is the minimum and maximum resolution, and for what angles are these extremal resolutions realized? Compare your results with the $m$-element ULA case in Exercise 6.2.

Exercise 6.5: Relationship between Beamwidth and Array Element Locations

Consider an $m$-element planar array with elements located at $r_k = [x_k, y_k]^T$ for $k = 1, \ldots, m$. Assume that the array is centered at the origin, so $\sum_{k=1}^{m} r_k = 0$. Use equation (6.6.6) to show that the array beamwidth at direction $\theta_0$ is given by
$$\Delta\theta \simeq \sqrt{2}\,\frac{\lambda}{2\pi}\,\frac{1}{D(\theta_0)} \qquad (6.6.7)$$
where $D(\theta_0)$ is the root mean square distance of the array elements to the origin in the direction orthogonal to $\theta_0$ (see Figure 6.8):
$$D(\theta_0) = \sqrt{\frac{1}{m}\sum_{k=1}^{m} d_k^2(\theta_0)}, \qquad d_k(\theta_0) = x_k\sin\theta_0 - y_k\cos\theta_0$$
As in Exercise 2.15, the beamwidth approximation in equation (6.6.7) slightly underestimates the true beamwidth; a better approximation is given by:
$$\Delta\theta \simeq 1.15\,\sqrt{2}\,\frac{\lambda}{2\pi}\,\frac{1}{D(\theta_0)} \qquad (6.6.8)$$
Figure 6.8. Array element projected distances from the origin for DOA angle $\theta_0$ (see Exercise 6.5).

Exercise 6.6: Isotropic Arrays

An array whose beamwidth is the same for all directions is said to be isotropic. Consider an $m$-element planar array with elements located at $r_k = [x_k, y_k]^T$ for $k = 1, \ldots, m$ and centered at the origin ($\sum_{k=1}^{m} r_k = 0$) as in Exercise 6.5.
Show that the array beamwidth (as given by (6.6.7)) is the same for all DOAs if and only if
$$R^T R = cI_2 \qquad (6.6.9)$$
where
$$R = \begin{bmatrix} x_1 & y_1 \\ x_2 & y_2 \\ \vdots & \vdots \\ x_m & y_m \end{bmatrix}$$
and where $c$ is a positive constant. (See [Baysal and Moses 2003] for additional details and properties of isotropic arrays.)

Exercise 6.7: Grating Lobes

The results of Exercise 6.2 might suggest that an $m$-element ULA can have very high resolution simply by using a large array element spacing $d$. However, there is an ambiguity associated with choosing $d > \lambda/2$; this drawback is sometimes referred to as the problem of grating lobes. Identify this drawback, and discuss what ambiguities exist as a function of $d$ (refer to the discussion on ULAs in Section 6.2.2). One potential remedy to this drawback is to use two ULAs: one with $m_1$ elements and element spacing $d_1 = \lambda/2$, and another with $m_2$ elements and element spacing $d_2$. Discuss how to choose $m_1$, $m_2$, and $d_2$ to both avoid ambiguities and increase resolution over a conventional ULA with element spacing $d = \lambda/2$ and $m_1 + m_2$ elements. Consider as an example using a 10-element ULA with $d_2 = 3\lambda/2$ for the second ULA; find $m_1$ to resolve ambiguities in this array. Finally, discuss any potential drawbacks of the two-array approach.

Exercise 6.8: Beamspace Processing

Consider an array comprising many sensors ($m \gg 1$). Such an array should be able to resolve sources that are quite closely spaced (cf. (6.3.20) and the discussion in Exercise 6.3). There is, however, a price to be paid for the high-resolution performance achieved by using many sensors: the computational burden associated with the elementspace processing (ESP) (i.e., the direct processing of the output of all sensors) may be prohibitively high, and the involved circuitry (A–D converters, etc.) may be quite expensive. Let $B^*$ be an $\bar m \times m$ matrix with $\bar m < m$, and consider the transformed output vector $B^* y(t)$.
The latter vector satisfies the following equation (cf. (6.2.21)):
$$B^* y(t) = B^* A s(t) + B^* e(t) \qquad (6.6.10)$$
The transformation matrix $B^*$ above can be interpreted as a beamformer or spatial filter acting on $y(t)$. Determination of the DOAs of the signals impinging on the array using $B^* y(t)$ is called beamspace processing (BSP). Since $\bar m < m$, BSP should have a lower computational burden than ESP. The critical question is then how to choose the beamformer $B$ so as not to significantly degrade the performance achievable by ESP.

Assume that a certain DOA sector is known to contain the source(s) of interest (whose DOAs are designated by the generic variable $\theta_0$). By using this information, design a matrix $B^*$ which passes the signals from direction $\theta_0$ approximately undistorted. Choose $B$ in such a way that the noise in beamspace, $B^* e(t)$, is still spatially white. For a given sector size, discuss the tradeoff between the computational burden associated with BSP and the distorting effect of the beamformer on the desired signals. Finally, use the results of Exercise 6.3 to show that the resolution of beamforming in elementspace and beamspace are nearly the same, under the previous conditions.

Exercise 6.9: Beamspace Processing (cont'd)

In this exercise, for simplicity, we consider the Beamspace Processing (BSP) equation (6.6.10) for the case of a single source ($n = 1$):
$$B^* y(t) = B^* a(\theta)s(t) + B^* e(t) \qquad (6.6.11)$$
The Elementspace Processing (ESP) counterpart of (6.6.11) is (cf. (6.2.19))
$$y(t) = a(\theta)s(t) + e(t) \qquad (6.6.12)$$
Assume that $\|a(\theta)\|^2 = m$ (see (6.3.11)), and that the $\bar m \times m$ matrix $B^*$ is unitary (i.e., $B^* B = I$). Furthermore, assume that
$$a(\theta) \in \mathcal{R}(B) \qquad (6.6.13)$$
To satisfy (6.6.13) we need knowledge about a DOA sector that contains $\theta$, which is usually assumed to be available in BSP applications; note that the narrower this sector, the smaller the value we can choose for $\bar m$.
As $\bar m$ decreases, the implementation advantages of BSP compared with ESP become more significant. However, the DOA estimation performance achievable by BSP might be expected to decrease as $\bar m$ decreases. As indicated in Exercise 6.8, this is not necessarily the case. In the present exercise we lend further support to the fact that the estimation performances of ESP and BSP can be quite similar to one another, provided that the condition (6.6.13) is satisfied.

To be specific, define the array SNR for (6.6.12) as
$$\frac{E\left\{\|a(\theta)s(t)\|^2\right\}}{E\left\{\|e(t)\|^2\right\}} = \frac{mP}{m\sigma^2} = \frac{P}{\sigma^2} \qquad (6.6.14)$$
where $P$ denotes the power of $s(t)$. Show that the "array SNR" for the BSP equation, (6.6.11), is $m/\bar m$ times that in (6.6.14). Conclude that this increase in the array SNR associated with BSP may well counterbalance the presumably negative impact on DOA performance caused by the decrease from $m$ to $\bar m$ in the number of observed output signals.

Exercise 6.10: Beamforming and MUSIC under the Same Umbrella

Define the scalars $Y_t^*(\theta) = a^*(\theta)y(t)$, $t = 1, \ldots, N$. By using previous notation, we can write the beamforming spatial spectrum in (6.3.18) as follows:
$$Y^*(\theta)\, W\, Y(\theta) \qquad (6.6.15)$$
where $W = (1/N)I$ (for beamforming) and
$$Y(\theta) = [Y_1(\theta) \ldots Y_N(\theta)]^T$$
Show that the MUSIC spatial pseudospectrum
$$a^*(\theta)\hat S\hat S^* a(\theta) \qquad (6.6.16)$$
(see Sections 4.5 and 6.4.3) can also be put in the form (6.6.15), for a certain "weighting matrix" $W$. The columns of the matrix $\hat S$ in (6.6.16) are the $n$ principal eigenvectors of the sample covariance matrix $\hat R$ in (6.3.17).

Exercise 6.11: Subspace Fitting Interpretation of MUSIC

In words, the result (4.5.9) (on which MUSIC for both frequency and DOA estimation is based) says that the direction vectors $\{a(\theta_k)\}$ belong to the subspace
Therefore, we can think of estimating the DOAs by choosing θ (a generic DOA variable) so that the distance between a(θ) and the closest vector in the span of ˆ S is minimized: min β,θ ∥a(θ) −ˆ Sβ∥2 (6.6.17) where ∥·∥denotes the Euclidean vector norm. Note that the dummy vector variable β in (6.6.17) is defined in such a way so that ˆ Sβ is closest to a(θ) in Euclidean norm. Show that the DOA estimation method derived from the subspace fitting criterion (6.6.17) is the same as MUSIC. Exercise 6.12: Subspace Fitting Interpretation of MUSIC (cont’d.) The result (4.5.9) can also be invoked to arrive at the following subspace fitting criterion: min B,θ ∥A(θ) −ˆ SB∥2 F (6.6.18) where ∥· ∥F stands for the Frobenius matrix norm, and θ is now the vector of all DOA parameters. This criterion seems to be a more general version of equation (6.6.17) in Exercise 6.11. Show that the minimization of the multidimensional subspace fitting criterion in (6.6.18), with respect to the DOA vector θ, still leads to the one–dimensional MUSIC method. Hint: It will be useful to refer to the type of result proven in equations (4.3.12)–(4.3.16) in Section 4.3. Exercise 6.13: Subspace Fitting Interpretation of MUSIC (cont’d.) The subspace fitting interpretations of the previous two exercises provide some insights into the properties of the MUSIC estimator. Assume, for instance, that two or more source signals are coherent. Make use of the subspace fitting interpretation in Exercise 6.12 to show that MUSIC cannot be expected to yield meaningful results in such a case. Follow the line of your argument explaining why MUSIC fails in the case of coherent signals, to suggest a subspace fitting criterion that works in such a case. Discuss the computational complexity of the method based on the latter criterion. Exercise 6.14: Modified MUSIC for Coherent Signals Consider an m–element ULA. 
Assume that $n$ signals impinge on the array at angles $\{\theta_k\}_{k=1}^n$, and also that some signals are coherent (so that the signal covariance matrix $P$ is singular). Derive a modified MUSIC DOA estimator for this case, analogous to the modified MUSIC frequency estimator in Section 4.5, and show that this method is capable of determining the $n$ DOAs even in the coherent signal case.

COMPUTER EXERCISES

Tools for Array Signal Processing: The text web site www.prenhall.com/stoica contains the following Matlab functions for use in DOA estimation.

• Y=uladata(theta,P,N,sig2,m,d)
Generates an $m \times N$ data matrix $Y = [y(1), \ldots, y(N)]$ for a ULA with $n$ sources arriving at angles (in degrees from $-90°$ to $90°$) given by the elements of the $n \times 1$ vector theta. The source signals are zero mean Gaussian with covariance matrix $P = E\{s(t)s^*(t)\}$. The noise component is spatially white Gaussian with covariance $\sigma^2 I$, where $\sigma^2$ = sig2. The element spacing is equal to d in wavelengths.

• phi=beamform(Y,L,d)
Implements the beamforming spatial spectral estimate in equation (6.3.18) for an $m$-element ULA with sensor spacing d in wavelengths. The $m \times N$ matrix Y is as defined above. The parameter L controls the DOA sampling, and phi is the spatial spectral estimate phi $= [\hat\phi(\theta_1), \ldots, \hat\phi(\theta_L)]$ where $\theta_k = -\frac{\pi}{2} + \frac{\pi k}{L}$.

• phi=capon_sp(Y,L,d)
Implements the Capon spatial spectral estimator in equation (6.3.26); the input and output parameters are defined as those in beamform.

• theta=root_music_doa(Y,n,d)
Implements the Root MUSIC method in Section 4.5, adapted for spatial spectral estimation using a ULA. The parameters Y and d are as in beamform, and theta is the vector containing the n DOA estimates $[\hat\theta_1, \ldots, \hat\theta_n]^T$.

• theta=esprit_doa(Y,n,d)
Implements the ESPRIT method for a ULA. The parameters Y and d are as in beamform, and theta is the vector containing the n DOA estimates $[\hat\theta_1, \ldots, \hat\theta_n]^T$.
The two subarrays for ESPRIT are made from the first $m-1$ and last $m-1$ elements of the array.

Exercise C6.15: Comparison of Spatial Spectral Estimators

Simulate the following scenario. Two signals with wavelength $\lambda$ impinge on an array of sensors from DOAs $\theta_1 = 0°$ and a $\theta_2$ that will be varied. The signals are mutually uncorrelated complex Gaussian with unit power, so that $P = E\{s(t)s^*(t)\} = I$. The array is a 10-element ULA with element spacing $d = \lambda/2$. The measurements are corrupted by additive complex Gaussian white noise with unit power. A total of $N = 100$ snapshots are collected.

(a) Let $\theta_2 = 15°$. Compare the results of the beamforming, Capon, Root MUSIC, and ESPRIT methods for this example. The results can be shown by plotting the spatial spectrum estimates from beamforming and Capon for 50 Monte Carlo experiments; for Root MUSIC and ESPRIT, plot vertical lines of equal height located at the DOA estimates from the 50 Monte Carlo experiments. How do the methods compare? Are the properties of the various estimators analogous to the time series case for two sinusoids in noise?

(b) Repeat for $\theta_2 = 7.5°$.

Exercise C6.16: Performance of Spatial Spectral Estimators for Coherent Source Signals

In this exercise we will see what happens when the source signals are fully correlated (or coherent). Use the same parameters and estimation methods as in Exercise C6.15 with $\theta_2 = 15°$, but with
$$P = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$
Note that the sources are coherent as $\mathrm{rank}(P) = 1$. Compare the results of the four methods for this case, again by plotting the spatial spectrum and "DOA line spectrum" estimates (as in Exercise C6.15) for 50 Monte Carlo experiments from each estimator. Which method appears to be the best in this case?
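For readers working in Python rather than Matlab, a rough analogue of the `uladata` generator described above might look like the following. This is a hypothetical re-implementation that only mirrors the stated interface, not the reference code, and it assumes a nonsingular source covariance `P` (the coherent case of Exercise C6.16 would need a different matrix square root than Cholesky).

```python
import numpy as np

def uladata(theta_deg, P, N, sig2, m, d):
    """Simulate Y = A s(t) + e(t) for a ULA: m sensors, spacing d in
    wavelengths, sources at theta_deg (degrees), source covariance P,
    spatially white circular Gaussian noise of power sig2.
    (Sketch only; seeded internally for reproducibility.)"""
    rng = np.random.default_rng(0)
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    n = len(theta)
    # m x n steering matrix for element spacing d (in wavelengths)
    A = np.exp(2j * np.pi * d * np.outer(np.arange(m), np.sin(theta)))
    # zero-mean circular Gaussian sources with covariance P (assumed nonsingular)
    C = np.linalg.cholesky(np.atleast_2d(P))
    S = C @ (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
    E = np.sqrt(sig2 / 2) * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
    return A @ S + E
```

With one unit-power source and noise power 0.1, each sensor's output power should be close to 1.1, which gives a quick consistency check.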
Exercise C6.17: Spatial Spectral Estimators Applied to Measured Data

Apply the four DOA estimators from Exercise C6.15 to the real data in the file submarine.mat, which can be found at the text web site www.prenhall.com/stoica. These data are underwater measurements collected by the Swedish Defense Agency in the Baltic Sea. The 6-element array of hydrophones used in the experiment can be assumed to be a ULA with inter-element spacing equal to 0.9m. The wavelength of the signal is approximately 5.32m. Can you find the "submarine(s)"?

APPENDIX A
Linear Algebra and Matrix Analysis Tools

A.1 INTRODUCTION

In this appendix we provide a review of the linear algebra terms and matrix properties used in the text. For the sake of brevity we do not present proofs for all results stated in the following, nor do we discuss related results which are not needed in the previous chapters. For most of the results included, however, we do provide proofs and motivation. The reader interested in finding out more about the topic of this appendix can consult the books [Stewart 1973; Horn and Johnson 1985; Strang 1988; Horn and Johnson 1989; Golub and Van Loan 1989], to which we also refer for the proofs omitted here.

A.2 RANGE SPACE, NULL SPACE, AND MATRIX RANK

Let $A$ be an $m \times n$ matrix with possibly complex-valued elements, $A \in \mathbb{C}^{m\times n}$, and let $(\cdot)^T$ and $(\cdot)^*$ denote the transpose and the conjugate transpose operators, respectively.

Definition D1: The range space of $A$, also called the column space, is the subspace spanned by (all linear combinations of) the columns of $A$:
$$\mathcal{R}(A) = \left\{\alpha \in \mathbb{C}^{m\times 1} \mid \alpha = A\beta \text{ for } \beta \in \mathbb{C}^{n\times 1}\right\} \qquad (A.2.1)$$
The range space of $A^T$ is usually called the row space of $A$, for obvious reasons.

Definition D2: The null space of $A$, also called the kernel, is the following subspace:
$$\mathcal{N}(A) = \left\{\beta \in \mathbb{C}^{n\times 1} \mid A\beta = 0\right\} \qquad (A.2.2)$$
The previous definitions are all that we need to introduce the matrix rank and its basic properties.
We return to the range and null subspaces in Section A.4, where we discuss the singular value decomposition. In particular, we derive there some convenient bases and useful projectors associated with the previous matrix subspaces.

Definition D3: The following are equivalent definitions of the rank of $A$, $r \triangleq \mathrm{rank}(A)$.

(i) $r$ is equal to the maximum number of linearly independent columns of $A$. The latter number is by definition the dimension of $\mathcal{R}(A)$; hence
$$r = \dim \mathcal{R}(A) \qquad (A.2.3)$$
(ii) $r$ is equal to the maximum number of linearly independent rows of $A$:
$$r = \dim \mathcal{R}(A^T) = \dim \mathcal{R}(A^*) \qquad (A.2.4)$$
(iii) $r$ is the dimension of the nonzero determinant of maximum size that can be built from the elements of $A$.

The equivalence between the definitions (i) and (ii) above is an important and pleasing result (without which one would have to consider the row rank and column rank of a matrix separately!).

Definition D4: $A$ is said to be:
• Rank deficient whenever $r < \min(m, n)$.
• Full column rank if $r = n \leq m$.
• Full row rank if $r = m \leq n$.
• Nonsingular whenever $r = m = n$.

Result R1: Premultiplication or postmultiplication of $A$ by a nonsingular matrix does not change the rank of $A$.

Proof: This fact follows directly from the definition of $\mathrm{rank}(A)$, because the aforementioned multiplications do not change the number of linearly independent columns (or rows) of $A$.

Result R2: Let $A \in \mathbb{C}^{m\times n}$ and $B \in \mathbb{C}^{n\times p}$ be two conformable matrices of rank $r_A$ and $r_B$, respectively. Then:
$$\mathrm{rank}(AB) \leq \min(r_A, r_B) \qquad (A.2.5)$$
Proof: We can prove this assertion by using the definition of the rank once again. Indeed, premultiplication of $B$ by $A$ cannot increase the number of linearly independent columns of $B$, hence $\mathrm{rank}(AB) \leq r_B$. Similarly, postmultiplication of $A$ by $B$ cannot increase the number of linearly independent columns of $A^T$, which means that $\mathrm{rank}(AB) \leq r_A$.
Result R3: Let $A \in \mathbb{C}^{m\times m}$ be given by
$$A = \sum_{k=1}^{N} x_k y_k^*$$
where $x_k, y_k \in \mathbb{C}^{m\times 1}$. Then,
$$\mathrm{rank}(A) \leq \min(m, N)$$
Proof: Since $A$ can be rewritten as
$$A = [x_1 \ldots x_N]\begin{bmatrix} y_1^* \\ \vdots \\ y_N^* \end{bmatrix}$$
the result follows from R2.

Result R4: Let $A \in \mathbb{C}^{m\times n}$ with $n \leq m$, let $B \in \mathbb{C}^{n\times p}$, and let
$$\mathrm{rank}(A) = n \qquad (A.2.6)$$
Then
$$\mathrm{rank}(AB) = \mathrm{rank}(B) \qquad (A.2.7)$$
Proof: Assumption (A.2.6) implies that $A$ contains a nonsingular $n \times n$ submatrix, the postmultiplication of which by $B$ gives a block of rank equal to $\mathrm{rank}(B)$ (cf. R1). Hence,
$$\mathrm{rank}(AB) \geq \mathrm{rank}(B)$$
However, by R2,
$$\mathrm{rank}(AB) \leq \mathrm{rank}(B)$$
and hence (A.2.7) follows.

A.3 EIGENVALUE DECOMPOSITION

Definition D5: We say that the matrix $A \in \mathbb{C}^{m\times m}$ is Hermitian if $A^* = A$. In the real-valued case, such an $A$ is said to be symmetric.

Definition D6: A matrix $U \in \mathbb{C}^{m\times m}$ is said to be unitary (orthogonal if $U$ is real-valued) whenever
$$U^* U = U U^* = I$$
If $U \in \mathbb{C}^{m\times n}$, with $m > n$, is such that $U^* U = I$, then we say that $U$ is semiunitary.

Next, we present a number of definitions and results pertaining to the matrix eigenvalue decomposition (EVD), first for general matrices and then for Hermitian ones.

A.3.1 General Matrices

Definition D7: A scalar $\lambda \in \mathbb{C}$ and a (nonzero) vector $x \in \mathbb{C}^{m\times 1}$ are an eigenvalue and its associated eigenvector of a matrix $A \in \mathbb{C}^{m\times m}$ if
$$Ax = \lambda x \qquad (A.3.1)$$
In particular, an eigenvalue $\lambda$ is a solution of the so-called characteristic equation of $A$:
$$|A - \lambda I| = 0 \qquad (A.3.2)$$
and $x$ is a vector in $\mathcal{N}(A - \lambda I)$. The pair $(\lambda, x)$ is called an eigenpair. Observe that if $\{(\lambda_i, x_i)\}_{i=1}^p$ are $p$ eigenpairs of $A$ (with $p \leq m$) then we can write the defining equations $Ax_i = \lambda_i x_i$ ($i = 1, \ldots, p$) in the following compact form:
$$AX = X\Lambda \qquad (A.3.3)$$
where
$$X = [x_1 \ldots x_p] \quad \text{and} \quad \Lambda = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_p \end{bmatrix}$$
Result R5: Let $(\lambda, x)$ be an eigenpair of $A \in \mathbb{C}^{m\times m}$. If $B = A + \alpha I$, with $\alpha \in \mathbb{C}$, then $(\lambda + \alpha, x)$ is an eigenpair of $B$.
Proof: The result follows from the fact that $Ax = \lambda x \Rightarrow (A + \alpha I)x = (\lambda + \alpha)x$.

Result R6: The matrices $A$ and $B \triangleq Q^{-1}AQ$, where $Q$ is any nonsingular matrix, share the same eigenvalues. ($B$ is said to be related to $A$ by a similarity transformation.)

Proof: Indeed, the equation
$$|B - \lambda I| = |Q^{-1}(A - \lambda I)Q| = |Q^{-1}|\,|A - \lambda I|\,|Q| = 0$$
is equivalent to $|A - \lambda I| = 0$.

In general there is no simple relationship between the elements $\{A_{ij}\}$ of $A$ and its eigenvalues $\{\lambda_k\}$. However, the trace of $A$, which is the sum of the diagonal elements of $A$, is related in a simple way to the eigenvalues, as described next.

Definition D8: The trace of a square matrix $A \in \mathbb{C}^{m\times m}$ is defined as
$$\mathrm{tr}(A) = \sum_{i=1}^{m} A_{ii} \qquad (A.3.4)$$
Result R7: If $\{\lambda_i\}_{i=1}^m$ are the eigenvalues of $A \in \mathbb{C}^{m\times m}$, then
$$\mathrm{tr}(A) = \sum_{i=1}^{m} \lambda_i \qquad (A.3.5)$$
Proof: We can write
$$|\lambda I - A| = \prod_{i=1}^{m} (\lambda - \lambda_i) \qquad (A.3.6)$$
The right-hand side of (A.3.6) is a polynomial in $\lambda$ whose $\lambda^{m-1}$ coefficient is $-\sum_{i=1}^{m} \lambda_i$. From the definition of the determinant (see, e.g., [Strang 1988]) we find that the left-hand side of (A.3.6) is a polynomial whose $\lambda^{m-1}$ coefficient is $-\sum_{i=1}^{m} A_{ii} = -\mathrm{tr}(A)$. This proves the result.

Interestingly, while the matrix product is not commutative, the trace is invariant to commuting the factors in a matrix product, as shown next.

Result R8: Let $A \in \mathbb{C}^{m\times n}$ and $B \in \mathbb{C}^{n\times m}$. Then:
$$\mathrm{tr}(AB) = \mathrm{tr}(BA) \qquad (A.3.7)$$
Proof: A straightforward calculation, based on the definition of $\mathrm{tr}(\cdot)$ in (A.3.4), shows that
$$\mathrm{tr}(AB) = \sum_{i=1}^{m}\sum_{j=1}^{n} A_{ij}B_{ji} = \sum_{j=1}^{n}\sum_{i=1}^{m} B_{ji}A_{ij} = \sum_{j=1}^{n} [BA]_{jj} = \mathrm{tr}(BA)$$
We can also prove (A.3.7) by using Result R7. Along the way we will obtain some other useful results. First we note the following.

Result R9: Let $A, B \in \mathbb{C}^{m\times m}$ and let $\alpha \in \mathbb{C}$. Then
$$|AB| = |A|\,|B|, \qquad |\alpha A| = \alpha^m |A|$$
Proof: The identities follow directly from the definition of the determinant; see, e.g., [Strang 1988].

Next we prove the following results.
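Results R7 and R8 admit a quick numerical spot-check (demo dimensions chosen arbitrarily): the trace equals the sum of the eigenvalues, and $\mathrm{tr}(AB) = \mathrm{tr}(BA)$ even when the factors are non-square, so that $AB$ and $BA$ have different sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
B = rng.standard_normal((m, n))
C = rng.standard_normal((n, m))

trace_A = np.trace(A)                      # sum of diagonal entries, (A.3.4)
eig_sum = np.sum(np.linalg.eigvals(A))     # sum of eigenvalues, R7
t1 = np.trace(B @ C)                       # trace of an m x m product
t2 = np.trace(C @ B)                       # trace of an n x n product, R8
```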
Result R10: Let $A \in \mathbb{C}^{m\times n}$ and $B \in \mathbb{C}^{n\times m}$. Then:
$$|I - AB| = |I - BA| \qquad (A.3.8)$$
Proof: It is straightforward to verify that:
$$\begin{bmatrix} I & A \\ 0 & I \end{bmatrix}\begin{bmatrix} I & -A \\ -B & I \end{bmatrix}\begin{bmatrix} I & 0 \\ B & I \end{bmatrix} = \begin{bmatrix} I - AB & 0 \\ 0 & I \end{bmatrix} \qquad (A.3.9)$$
and
$$\begin{bmatrix} I & 0 \\ B & I \end{bmatrix}\begin{bmatrix} I & -A \\ -B & I \end{bmatrix}\begin{bmatrix} I & A \\ 0 & I \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I - BA \end{bmatrix} \qquad (A.3.10)$$
Because the matrices on the left-hand sides of (A.3.9) and (A.3.10) have the same determinant, equal to
$$\begin{vmatrix} I & -A \\ -B & I \end{vmatrix}$$
it follows that the right-hand sides must also have the same determinant, which concludes the proof.

Result R11: Let $A \in \mathbb{C}^{m\times n}$ and $B \in \mathbb{C}^{n\times m}$. The nonzero eigenvalues of $AB$ and of $BA$ are identical.

Proof: Let $\lambda \neq 0$ be an eigenvalue of $AB$. Then,
$$0 = |AB - \lambda I| = (-\lambda)^m\left|I - AB/\lambda\right| = (-\lambda)^m\left|I - BA/\lambda\right| = (-\lambda)^{m-n}\,|BA - \lambda I|$$
where the third equality follows from R10. Hence, $\lambda$ is also an eigenvalue of $BA$.

We can now obtain R8 as a simple corollary of R11, by using the property (A.3.5) of the trace operator.

A.3.2 Hermitian Matrices

An important property of the class of Hermitian matrices, which does not necessarily hold for general matrices, is the following.

Result R12:
(i) All eigenvalues of $A = A^* \in \mathbb{C}^{m\times m}$ are real-valued.
(ii) The $m$ eigenvectors of $A = A^* \in \mathbb{C}^{m\times m}$ form an orthonormal set. In other words, the matrix whose columns are the eigenvectors of $A$ is unitary.

It follows from (i) and (ii) and from (A.3.3) that for a Hermitian matrix we can write:
$$AU = U\Lambda$$
where $U^* U = U U^* = I$ and the diagonal elements of $\Lambda$ are real numbers. Equivalently,
$$A = U\Lambda U^* \qquad (A.3.11)$$
which is the so-called eigenvalue decomposition (EVD) of $A = A^*$. The EVD of a Hermitian matrix is a special case of the singular value decomposition of a general matrix discussed in the next section. The following is a useful result associated with Hermitian matrices.

Result R13: Let $A = A^* \in \mathbb{C}^{m\times m}$ and let $v \in \mathbb{C}^{m\times 1}$ ($v \neq 0$). Also, let the eigenvalues of $A$ be arranged in nonincreasing order: $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m$.
Then:
$$\lambda_m \le \frac{v^*Av}{v^*v} \le \lambda_1 \qquad (A.3.12)$$

The ratio in (A.3.12) is called the Rayleigh quotient. As this ratio is invariant to the multiplication of $v$ by any complex number, we can rewrite (A.3.12) in the form:
$$\lambda_m \le v^*Av \le \lambda_1 \quad \text{for any } v \in \mathbb{C}^{m \times 1} \text{ with } v^*v = 1 \qquad (A.3.13)$$
The equalities in (A.3.13) are evidently achieved when $v$ is equal to the eigenvector of $A$ associated with $\lambda_m$ and $\lambda_1$, respectively.

Proof: Let the EVD of $A$ be given by (A.3.11), and let
$$w = U^*v = \begin{bmatrix} w_1 \\ \vdots \\ w_m \end{bmatrix}$$
We need to prove that
$$\lambda_m \le w^*\Lambda w = \sum_{k=1}^{m} \lambda_k |w_k|^2 \le \lambda_1$$
for any $w \in \mathbb{C}^{m \times 1}$ satisfying $w^*w = \sum_{k=1}^{m} |w_k|^2 = 1$. However, this is readily verified as follows:
$$\lambda_1 - \sum_{k=1}^{m} \lambda_k |w_k|^2 = \sum_{k=1}^{m} (\lambda_1 - \lambda_k)|w_k|^2 \ge 0$$
and
$$\sum_{k=1}^{m} \lambda_k |w_k|^2 - \lambda_m = \sum_{k=1}^{m} (\lambda_k - \lambda_m)|w_k|^2 \ge 0$$
and the proof is concluded.

The following result is an extension of R13.

Result R14: Let $V \in \mathbb{C}^{m \times n}$, with $m > n$, be a semiunitary matrix (i.e., $V^*V = I$), and let $A = A^* \in \mathbb{C}^{m \times m}$ have its eigenvalues ordered as in R13. Then:
$$\sum_{k=m-n+1}^{m} \lambda_k \le \mathrm{tr}(V^*AV) \le \sum_{k=1}^{n} \lambda_k \qquad (A.3.14)$$
where the equalities are achieved, for instance, when the columns of $V$ are the eigenvectors of $A$ corresponding to $(\lambda_{m-n+1}, \ldots, \lambda_m)$ and, respectively, to $(\lambda_1, \ldots, \lambda_n)$. The ratio
$$\frac{\mathrm{tr}(V^*AV)}{\mathrm{tr}(V^*V)} = \frac{\mathrm{tr}(V^*AV)}{n}$$
is sometimes called the extended Rayleigh quotient.

Proof: Let $A = U\Lambda U^*$ (cf. (A.3.11)), and let
$$S = U^*V \triangleq \begin{bmatrix} s_1^* \\ \vdots \\ s_m^* \end{bmatrix} \quad (m \times n)$$
(hence $s_k^*$ is the $k$th row of $S$). By making use of the above notation, we can write:
$$\mathrm{tr}(V^*AV) = \mathrm{tr}(V^*U\Lambda U^*V) = \mathrm{tr}(S^*\Lambda S) = \mathrm{tr}(\Lambda SS^*) = \sum_{k=1}^{m} \lambda_k c_k \qquad (A.3.15)$$
where
$$c_k \triangleq s_k^* s_k, \quad k = 1, \ldots, m \qquad (A.3.16)$$
Clearly,
$$c_k \ge 0, \quad k = 1, \ldots, m \qquad (A.3.17)$$
and
$$\sum_{k=1}^{m} c_k = \mathrm{tr}(SS^*) = \mathrm{tr}(S^*S) = \mathrm{tr}(V^*UU^*V) = \mathrm{tr}(V^*V) = \mathrm{tr}(I) = n \qquad (A.3.18)$$
Furthermore,
$$c_k \le 1, \quad k = 1, \ldots, m \qquad (A.3.19)$$
To see this, let $G \in \mathbb{C}^{m \times (m-n)}$ be such that the matrix $[S \; G]$ is unitary; and let $g_k^*$ denote the $k$th row of $G$.
Then, by construction,
$$[s_k^* \; g_k^*] \begin{bmatrix} s_k \\ g_k \end{bmatrix} = c_k + g_k^* g_k = 1 \;\Rightarrow\; c_k = 1 - g_k^* g_k \le 1$$
which is (A.3.19). Finally, by combining (A.3.15) with (A.3.17)-(A.3.19) we can readily verify that $\mathrm{tr}(V^*AV)$ satisfies (A.3.14), where the equalities are achieved for
$$c_1 = \cdots = c_{m-n} = 0; \quad c_{m-n+1} = \cdots = c_m = 1$$
and, respectively,
$$c_1 = \cdots = c_n = 1; \quad c_{n+1} = \cdots = c_m = 0$$
These conditions on $\{c_k\}$ are satisfied if, for example, $S$ is equal to $[0 \; I]^T$ and $[I \; 0]^T$, respectively. With this observation, the proof is concluded.

Result R13 is clearly a special case of Result R14. The only reason for considering R13 separately is that the simpler result R13 is more often used in the text than R14.

A.4 SINGULAR VALUE DECOMPOSITION AND PROJECTION OPERATORS

For any matrix $A \in \mathbb{C}^{m \times n}$ there exist unitary matrices $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ and a diagonal matrix $\Sigma \in \mathbb{R}^{m \times n}$ with nonnegative diagonal elements, such that
$$A = U\Sigma V^* \qquad (A.4.1)$$
By appropriate permutation, the diagonal elements of $\Sigma$ can be arranged in a nonincreasing order:
$$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(m,n)}$$
The factorization (A.4.1) is called the singular value decomposition (SVD) of $A$ and its existence is a significant result from both a theoretical and practical standpoint. We reiterate that the matrices $U$, $\Sigma$, and $V$ in (A.4.1) satisfy:
$$U^*U = UU^* = I \quad (m \times m)$$
$$V^*V = VV^* = I \quad (n \times n)$$
$$\Sigma_{ij} = \begin{cases} \sigma_i \ge 0 & \text{for } i = j \\ 0 & \text{for } i \ne j \end{cases}$$
The following terminology is most commonly associated with the SVD:
• The left singular vectors of $A$ are the columns of $U$. These singular vectors are also the eigenvectors of the matrix $AA^*$.
• The right singular vectors of $A$ are the columns of $V$. These vectors are also the eigenvectors of the matrix $A^*A$.
• The singular values of $A$ are the diagonal elements $\{\sigma_i\}$ of $\Sigma$. Note that $\{\sigma_i\}$ are the square roots of the largest $\min(m,n)$ eigenvalues of $AA^*$ or $A^*A$.
• The singular triple of $A$ is the triple (singular value, left singular vector, right singular vector) $(\sigma_k, u_k, v_k)$, where $u_k$ ($v_k$) is the $k$th column of $U$ ($V$).

If $\mathrm{rank}(A) = r \le \min(m,n)$ then one can show that:
$$\sigma_k > 0, \;\; k = 1, \ldots, r; \qquad \sigma_k = 0, \;\; k = r+1, \ldots, \min(m,n)$$
Hence, for a matrix of rank $r$ the SVD can be written as:
$$A = [\underbrace{U_1}_{r} \;\; \underbrace{U_2}_{m-r}] \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^* \\ V_2^* \end{bmatrix} \begin{matrix} \scriptstyle r \\ \scriptstyle n-r \end{matrix} = U_1\Sigma_1V_1^* \qquad (A.4.2)$$
where $\Sigma_1 \in \mathbb{R}^{r \times r}$ is nonsingular. The factorization of $A$ in (A.4.2) has a number of important consequences.

Result R15: Consider the SVD of $A \in \mathbb{C}^{m \times n}$ in (A.4.2), where $r \le \min(m,n)$. Then:
(i) $U_1$ is an orthonormal basis of $R(A)$
(ii) $U_2$ is an orthonormal basis of $N(A^*)$
(iii) $V_1$ is an orthonormal basis of $R(A^*)$
(iv) $V_2$ is an orthonormal basis of $N(A)$.

Proof: We see that (iii) and (iv) follow from the properties (i) and (ii) applied to $A^*$. To prove (i) and (ii), we need to show that:
$$R(A) = R(U_1) \qquad (A.4.3)$$
and, respectively,
$$N(A^*) = R(U_2) \qquad (A.4.4)$$
To show (A.4.3), note that
$$\alpha \in R(A) \Rightarrow \text{there exists } \beta \text{ such that } \alpha = A\beta \Rightarrow \alpha = U_1(\Sigma_1V_1^*\beta) = U_1\gamma \Rightarrow \alpha \in R(U_1)$$
so $R(A) \subset R(U_1)$. Also,
$$\alpha \in R(U_1) \Rightarrow \text{there exists } \beta \text{ such that } \alpha = U_1\beta$$
From (A.4.2), $U_1 = AV_1\Sigma_1^{-1}$; it follows that
$$\alpha = A(V_1\Sigma_1^{-1}\beta) = A\rho \Rightarrow \alpha \in R(A)$$
which shows $R(U_1) \subset R(A)$. Combining $R(U_1) \subset R(A)$ with $R(A) \subset R(U_1)$ gives (A.4.3). Similarly,
$$\alpha \in N(A^*) \Rightarrow A^*\alpha = 0 \Rightarrow V_1\Sigma_1U_1^*\alpha = 0 \Rightarrow \Sigma_1^{-1}V_1^*V_1\Sigma_1U_1^*\alpha = 0 \Rightarrow U_1^*\alpha = 0$$
Now, any vector $\alpha$ can be written as
$$\alpha = [U_1 \; U_2]\begin{bmatrix} \gamma \\ \beta \end{bmatrix}$$
since $[U_1 \; U_2]$ is nonsingular. However, $0 = U_1^*\alpha = U_1^*U_1\gamma + U_1^*U_2\beta = \gamma$, so $\gamma = 0$, and thus $\alpha = U_2\beta$. Thus, $N(A^*) \subset R(U_2)$. Finally,
$$\alpha \in R(U_2) \Rightarrow \text{there exists } \beta \text{ such that } \alpha = U_2\beta$$
Then
$$A^*\alpha = V_1\Sigma_1U_1^*U_2\beta = 0 \Rightarrow \alpha \in N(A^*)$$
which leads to (A.4.4).

The previous result, readily derived by using the SVD, has a number of interesting corollaries which complement the discussion on range and null subspaces in Section A.2.
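As a numerical illustration (a NumPy sketch added here, not part of the original text), the subspace identities of Result R15 can be checked on a random rank-deficient matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 6, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r by construction

U, s, Vh = np.linalg.svd(A)
U1, U2 = U[:, :r], U[:, r:]
V2 = Vh[r:].T

assert np.allclose(s[r:], 0.0)          # sigma_{r+1} = ... = 0
assert np.allclose(A @ V2, 0.0)         # V2 spans N(A)   (R15, iv)
assert np.allclose(A.T @ U2, 0.0)       # U2 spans N(A*)  (R15, ii)
assert np.allclose(U1 @ U1.T @ A, A)    # columns of A lie in R(U1) = R(A)  (R15, i)
```

The last assertion uses the projector $U_1U_1^*$ onto $R(U_1)$: since projecting the columns of $A$ onto $R(U_1)$ leaves them unchanged, $R(A) \subset R(U_1)$.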
Result R16: For any $A \in \mathbb{C}^{m \times n}$ the subspaces $R(A)$ and $N(A^*)$ are orthogonal to each other and together they span $\mathbb{C}^m$. Consequently, we say that $N(A^*)$ is the orthogonal complement of $R(A)$ in $\mathbb{C}^m$, and vice versa. In particular, we have:
$$\dim N(A^*) = m - r \qquad (A.4.5)$$
$$\dim N(A) = n - r \qquad (A.4.6)$$
(Recall that $\dim R(A) = \dim R(A^*) = r$.)

Proof: This result is a direct corollary of R15.

The SVD of a matrix also provides a convenient representation for the projectors onto the range and null spaces of $A$ and $A^*$.

Definition D9: Let $y \in \mathbb{C}^{m \times 1}$ be an arbitrary vector. By definition the orthogonal projector onto $R(A)$ is the matrix $\Pi$ such that (i) $R(\Pi) = R(A)$ and (ii) the Euclidean distance between $y$ and $\Pi y \in R(A)$ is minimum:
$$\|y - \Pi y\|^2 = \min \text{ over } R(A)$$
Hereafter, $\|x\|^2 = x^*x$ denotes the Euclidean vector norm.

Result R17: Let $A \in \mathbb{C}^{m \times n}$. The orthogonal projector onto $R(A)$ is given by
$$\Pi = U_1U_1^* \qquad (A.4.7)$$
whereas the orthogonal projector onto $N(A^*)$ is
$$\Pi^\perp = I - U_1U_1^* = U_2U_2^* \qquad (A.4.8)$$

Proof: Let $y \in \mathbb{C}^{m \times 1}$ be an arbitrary vector. As $R(A) = R(U_1)$, according to R15, we can find the vector in $R(A)$ that is of minimal distance from $y$ by solving the problem:
$$\min_\beta \|y - U_1\beta\|^2 \qquad (A.4.9)$$
Because
$$\|y - U_1\beta\|^2 = (\beta^* - y^*U_1)(\beta - U_1^*y) + y^*(I - U_1U_1^*)y = \|\beta - U_1^*y\|^2 + \|U_2^*y\|^2$$
it readily follows that the solution to the minimization problem (A.4.9) is given by $\beta = U_1^*y$. Hence the vector $U_1U_1^*y$ is the orthogonal projection of $y$ onto $R(A)$, and the minimum distance from $y$ to $R(A)$ is $\|U_2^*y\|$. This proves (A.4.7). Then (A.4.8) follows immediately from (A.4.7) and the fact that $N(A^*) = R(U_2)$. Note, for instance, that for the projection of $y$ onto $R(A)$ the error vector is $y - U_1U_1^*y = U_2U_2^*y$, which is in $R(U_2)$ and is therefore orthogonal to $R(A)$ by R15. For this reason, $\Pi$ is given the name "orthogonal projector" in D9 and R17.
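To make Result R17 concrete, the following NumPy sketch (an addition, not from the book) verifies that $\Pi = U_1U_1^*$ is idempotent and reproduces the least squares fit of $y$ in $R(A)$:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))              # full column rank (almost surely)
U1, _, _ = np.linalg.svd(A, full_matrices=False)
Pi = U1 @ U1.T                               # orthogonal projector onto R(A), eq. (A.4.7)

assert np.allclose(Pi @ Pi, Pi)              # idempotent, cf. D10 below

y = rng.standard_normal(6)
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(Pi @ y, A @ x_ls)         # Pi*y is the closest point in R(A) to y
assert np.allclose(A.T @ (y - Pi @ y), 0.0)  # the error vector is orthogonal to R(A)
```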
As an aside, we remark that the orthogonal projectors in (A.4.7) and (A.4.8) are idempotent matrices; see the next definition.

Definition D10: The matrix $A \in \mathbb{C}^{m \times m}$ is idempotent if
$$A^2 = A \qquad (A.4.10)$$

Furthermore, observe by making use of R11 that the idempotent matrix in (A.4.7), for example, has $r$ eigenvalues equal to 1 and $(m - r)$ eigenvalues equal to zero. This is a general property of idempotent matrices: their eigenvalues are either zero or one.

Finally we present a result that even alone would be enough to make the SVD an essential matrix analysis tool.

Result R18: Let $A \in \mathbb{C}^{m \times n}$, with elements $A_{ij}$. Let the SVD of $A$ (with the singular values arranged in a nonincreasing order) be given by:
$$A = [\underbrace{U_1}_{p} \;\; \underbrace{U_2}_{m-p}] \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix} \begin{bmatrix} V_1^* \\ V_2^* \end{bmatrix} \begin{matrix} \scriptstyle p \\ \scriptstyle n-p \end{matrix} \qquad (A.4.11)$$
where $p \le \min(m, n)$ is an integer. Let
$$\|A\|^2 = \mathrm{tr}(A^*A) = \sum_{i=1}^{m}\sum_{j=1}^{n} |A_{ij}|^2 = \sum_{k=1}^{\min(m,n)} \sigma_k^2 \qquad (A.4.12)$$
denote the square of the so-called Frobenius norm. Then the best rank-$p$ approximant of $A$ in the Frobenius norm metric, that is, the solution to
$$\min_B \|A - B\|^2 \quad \text{subject to } \mathrm{rank}(B) = p, \qquad (A.4.13)$$
is given by
$$B_0 = U_1\Sigma_1V_1^* \qquad (A.4.14)$$
Furthermore, $B_0$ above is the unique solution to the approximation problem (A.4.13) if and only if $\sigma_p > \sigma_{p+1}$.

Proof: It follows from R4 and (A.4.2) that we can parameterize $B$ in (A.4.13) as:
$$B = CD^* \qquad (A.4.15)$$
where $C \in \mathbb{C}^{m \times p}$ and $D \in \mathbb{C}^{n \times p}$ are full column rank matrices. The previous parameterization of $B$ is of course nonunique but, as we will see, this fact does not introduce any problem. By making use of (A.4.15) we can rewrite the problem (A.4.13) in the following form:
$$\min_{C,D} \|A - CD^*\|^2, \quad \mathrm{rank}(C) = \mathrm{rank}(D) = p \qquad (A.4.16)$$
The reparameterized problem is essentially constraint free. Indeed, the full column rank condition that must be satisfied by $C$ and $D$ can be easily handled, see below.
First, we minimize (A.4.16) with respect to $D$, for a given $C$. To that end, observe that:
$$\|A - CD^*\|^2 = \mathrm{tr}\{[D - A^*C(C^*C)^{-1}](C^*C)[D^* - (C^*C)^{-1}C^*A] + A^*[I - C(C^*C)^{-1}C^*]A\} \qquad (A.4.17)$$
By result (iii) in Definition D11 in the next section, the matrix $[D - A^*C(C^*C)^{-1}](C^*C)[D^* - (C^*C)^{-1}C^*A]$ is positive semidefinite for any $D$. This observation implies that (A.4.17) is minimized with respect to $D$ for
$$D_0 = A^*C(C^*C)^{-1} \qquad (A.4.18)$$
and the corresponding minimum value of (A.4.17) is given by
$$\mathrm{tr}\{A^*[I - C(C^*C)^{-1}C^*]A\} \qquad (A.4.19)$$
Next we minimize (A.4.19) with respect to $C$. Let $S \in \mathbb{C}^{m \times p}$ denote an orthogonal basis of $R(C)$; that is, $S^*S = I$ and $S = C\Gamma$ for some nonsingular $p \times p$ matrix $\Gamma$. It is then straightforward to verify that
$$I - C(C^*C)^{-1}C^* = I - SS^* \qquad (A.4.20)$$
By combining (A.4.19) and (A.4.20) we can restate the problem of minimizing (A.4.19) with respect to $C$ as:
$$\max_{S;\,S^*S=I} \mathrm{tr}[S^*(AA^*)S] \qquad (A.4.21)$$
The solution to (A.4.21) follows from R14: the maximizing $S$ is given by $S_0 = U_1$, which yields
$$C_0 = U_1\Gamma^{-1} \qquad (A.4.22)$$
It follows that:
$$B_0 = C_0D_0^* = C_0(C_0^*C_0)^{-1}C_0^*A = S_0S_0^*A = U_1U_1^*(U_1\Sigma_1V_1^* + U_2\Sigma_2V_2^*) = U_1\Sigma_1V_1^*$$
Furthermore, we observe that the minimum value of the Frobenius distance in (A.4.13) is given by
$$\|A - B_0\|^2 = \|U_2\Sigma_2V_2^*\|^2 = \sum_{k=p+1}^{\min(m,n)} \sigma_k^2$$
If $\sigma_p > \sigma_{p+1}$ then the best rank-$p$ approximant $B_0$ derived above is unique. Otherwise it is not unique. Indeed, whenever $\sigma_p = \sigma_{p+1}$ we can obtain $B_0$ by using either the singular vectors associated with $\sigma_p$ or those corresponding to $\sigma_{p+1}$, which will generally lead to different solutions.

A.5 POSITIVE (SEMI)DEFINITE MATRICES

Let $A = A^* \in \mathbb{C}^{m \times m}$ be a Hermitian matrix, and let $\{\lambda_k\}_{k=1}^{m}$ denote its eigenvalues.

Definition D11: We say that $A$ is positive semidefinite (psd) or positive definite (pd) if any of the following equivalent conditions holds true.
(i) $\lambda_k \ge 0$ ($\lambda_k > 0$ for pd) for $k = 1, \ldots, m$.
(ii) $\alpha^*A\alpha \ge 0$ ($\alpha^*A\alpha > 0$ for pd) for any nonzero vector $\alpha \in \mathbb{C}^{m \times 1}$.
(iii) There exists a matrix $C$ such that
$$A = CC^* \qquad (A.5.1)$$
(with $\mathrm{rank}(C) = m$ for pd).
(iv) $|A(i_1, \ldots, i_k)| \ge 0$ ($> 0$ for pd) for all $k = 1, \ldots, m$ and all indices $i_1, \ldots, i_k \in [1, m]$, where $A(i_1, \ldots, i_k)$ is the submatrix formed from $A$ by eliminating the $i_1, \ldots, i_k$ rows and columns of $A$. ($A(i_1, \ldots, i_k)$ is called a principal submatrix of $A$.) The condition for $A$ to be positive definite can be simplified to requiring that $|A(k+1, \ldots, m)| > 0$ (for $k = 1, \ldots, m-1$) and $|A| > 0$. ($A(k+1, \ldots, m)$ is called a leading submatrix of $A$.)

The notation $A > 0$ ($A \ge 0$) is commonly used to denote that $A$ is pd (psd).

Of the previous defining conditions, (iv) is apparently more involved. The necessity of (iv) can be proven as follows. Let $\alpha$ be a vector in $\mathbb{C}^m$ with zeroes at the positions $\{i_1, \ldots, i_k\}$ and arbitrary elements elsewhere. Then, by using (ii) we readily see that $A \ge 0$ ($> 0$) implies $A(i_1, \ldots, i_k) \ge 0$ ($> 0$) which, in turn, implies (iv) by making use of (i) and the fact that the determinant of a matrix equals the product of its eigenvalues. The sufficiency of (iv) is shown in [Strang 1988].

The equivalence of the remaining conditions, (i), (ii), and (iii), is easily proven by making use of the EVD of $A$: $A = U\Lambda U^*$. To show that (i) ⇔ (ii), assume first that (i) holds and let $\beta = U^*\alpha$. Then:
$$\alpha^*A\alpha = \beta^*\Lambda\beta = \sum_{k=1}^{m} \lambda_k|\beta_k|^2 \ge 0 \qquad (A.5.2)$$
and hence (ii) holds as well. Conversely, since $U$ is invertible, it follows from (A.5.2) that (ii) can hold only if (i) holds; indeed, if (i) does not hold one can choose $\beta$ to make (A.5.2) negative; thus there exists an $\alpha = U\beta$ such that $\alpha^*A\alpha < 0$, which contradicts the assumption that (ii) holds. Hence (i) and (ii) are equivalent. To show that (iii) ⇒ (ii), note that
$$\alpha^*A\alpha = \alpha^*CC^*\alpha = \|C^*\alpha\|^2 \ge 0$$
and hence (ii) holds as well. Since (iii) ⇒ (ii) and (ii) ⇒ (i), we have (iii) ⇒ (i).
To show that (i) ⇒ (iii), we assume (i) and write
$$A = U\Lambda U^* = (U\Lambda^{1/2}\Lambda^{1/2}U^*) = (U\Lambda^{1/2}U^*)(U\Lambda^{1/2}U^*) \triangleq CC^* \qquad (A.5.3)$$
and hence (iii) is also satisfied. In (A.5.3), $\Lambda^{1/2}$ is a diagonal matrix whose diagonal elements are equal to $\{\lambda_k^{1/2}\}$. In other words, $\Lambda^{1/2}$ is the "square root" of $\Lambda$. In a general context, the square root of a positive semidefinite matrix is defined as follows.

Definition D12: Let $A = A^*$ be a positive semidefinite matrix. Then any matrix $C$ that satisfies
$$A = CC^* \qquad (A.5.4)$$
is called a square root of $A$. Sometimes such a $C$ is denoted by $A^{1/2}$.

If $C$ is a square root of $A$, then so is $CB$ for any unitary matrix $B$, and hence there are an infinite number of square roots of a given positive semidefinite matrix. Two often-used particular choices for square roots are:

(i) Hermitian square root: $C = C^*$. In this case we can simply write (A.5.4) as $A = C^2$. Note that we have already obtained such a square root of $A$ in (A.5.3):
$$C = U\Lambda^{1/2}U^* \qquad (A.5.5)$$
If $C$ is also constrained to be positive semidefinite ($C \ge 0$) then the Hermitian square root is unique.

(ii) Cholesky factor: If $C$ is lower triangular with nonnegative diagonal elements, then $C$ is called the Cholesky factor of $A$. In computational exercises, the triangular form of the square-root matrix is often preferred to other forms. If $A$ is positive definite, the Cholesky factor is unique.

We also note that equation (A.5.4) implies that $A$ and $C$ have the same rank as well as the same range space. This follows easily, for example, by inserting the SVD of $C$ into (A.5.4).

Next we prove three specialized results on positive semidefinite matrices required in Section 2.5 and in Appendix B.

Result R19: Let $A \in \mathbb{C}^{m \times m}$ and $B \in \mathbb{C}^{m \times m}$ be positive semidefinite matrices. Then the matrix $A \odot B$ is also positive semidefinite, where $\odot$ denotes the Hadamard matrix product (also called elementwise multiplication: $[A \odot B]_{ij} = A_{ij}B_{ij}$).
Proof: Because $B$ is positive semidefinite it can be written as $B = CC^*$ for some matrix $C \in \mathbb{C}^{m \times m}$. Let $c_k^*$ denote the $k$th row of $C$. Then,
$$[A \odot B]_{ij} = A_{ij}B_{ij} = A_{ij}\,c_i^*c_j$$
and hence, for any $\alpha \in \mathbb{C}^{m \times 1}$,
$$\alpha^*(A \odot B)\alpha = \sum_{i=1}^{m}\sum_{j=1}^{m} \alpha_i^* A_{ij} c_i^* c_j \alpha_j \qquad (A.5.6)$$
By letting $\{c_{jk}\}_{k=1}^{m}$ denote the elements of the vector $c_j$, we can rewrite (A.5.6) as:
$$\alpha^*(A \odot B)\alpha = \sum_{k=1}^{m}\sum_{i=1}^{m}\sum_{j=1}^{m} \alpha_i^* c_{ik}^* A_{ij} \alpha_j c_{jk} = \sum_{k=1}^{m} \beta_k^* A \beta_k \qquad (A.5.7)$$
where
$$\beta_k \triangleq [\alpha_1 c_{1k} \; \cdots \; \alpha_m c_{mk}]^T$$
As $A$ is positive semidefinite by assumption, $\beta_k^*A\beta_k \ge 0$ for each $k$, and it follows from (A.5.7) that $A \odot B$ must be positive semidefinite as well.

Result R20: Let $A \in \mathbb{C}^{m \times m}$ and $B \in \mathbb{C}^{m \times m}$ be Hermitian matrices. Assume that $B$ is nonsingular and that the partitioned matrix
$$\begin{bmatrix} A & I \\ I & B \end{bmatrix}$$
is positive semidefinite. Then the matrix $(A - B^{-1})$ is also positive semidefinite,
$$A \ge B^{-1}$$

Proof: By Definition D11, part (ii),
$$[\alpha_1^* \; \alpha_2^*]\begin{bmatrix} A & I \\ I & B \end{bmatrix}\begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix} \ge 0 \qquad (A.5.8)$$
for any vectors $\alpha_1, \alpha_2 \in \mathbb{C}^{m \times 1}$. Let
$$\alpha_2 = -B^{-1}\alpha_1$$
Then (A.5.8) becomes:
$$\alpha_1^*(A - B^{-1})\alpha_1 \ge 0$$
As the above inequality must hold for any $\alpha_1 \in \mathbb{C}^{m \times 1}$, the proof is concluded.

Result R21: Let $C \in \mathbb{C}^{m \times m}$ be a (Hermitian) positive definite matrix depending on a real-valued parameter $\alpha$. Assume that $C$ is a differentiable function of $\alpha$. Then
$$\frac{\partial}{\partial\alpha}[\ln|C|] = \mathrm{tr}\left[C^{-1}\frac{\partial C}{\partial\alpha}\right]$$

Proof: Let $\{\lambda_i\} \in \mathbb{R}$ ($i = 1, \ldots, m$) denote the eigenvalues of $C$. Then
$$\frac{\partial}{\partial\alpha}[\ln|C|] = \frac{\partial}{\partial\alpha}\left[\ln\prod_{k=1}^{m}\lambda_k\right] = \sum_{k=1}^{m}\frac{\partial}{\partial\alpha}(\ln\lambda_k) = \sum_{k=1}^{m}\frac{1}{\lambda_k}\frac{\partial\lambda_k}{\partial\alpha} = \mathrm{tr}\left[\Lambda^{-1}\frac{\partial\Lambda}{\partial\alpha}\right]$$
where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_m)$. Let $Q$ be a unitary matrix such that $Q^*\Lambda Q = C$ (which is the EVD of $C$). Since $Q$ is unitary, $Q^*Q = I$, we obtain
$$\frac{\partial Q^*}{\partial\alpha}Q + Q^*\frac{\partial Q}{\partial\alpha} = 0$$
Thus, we get
$$\mathrm{tr}\left[\Lambda^{-1}\frac{\partial\Lambda}{\partial\alpha}\right] = \mathrm{tr}\left[(Q^*\Lambda^{-1}Q)\left(Q^*\frac{\partial\Lambda}{\partial\alpha}Q\right)\right] = \mathrm{tr}\left[C^{-1}\left(\frac{\partial}{\partial\alpha}(Q^*\Lambda Q) - \frac{\partial Q^*}{\partial\alpha}\Lambda Q - Q^*\Lambda\frac{\partial Q}{\partial\alpha}\right)\right]$$
$$= \mathrm{tr}\left[C^{-1}\frac{\partial C}{\partial\alpha}\right] - \mathrm{tr}\left[Q^*\Lambda^{-1}Q\left(\frac{\partial Q^*}{\partial\alpha}\Lambda Q + Q^*\Lambda\frac{\partial Q}{\partial\alpha}\right)\right] = \mathrm{tr}\left[C^{-1}\frac{\partial C}{\partial\alpha}\right] - \mathrm{tr}\left[\frac{\partial Q^*}{\partial\alpha}Q + Q^*\frac{\partial Q}{\partial\alpha}\right] = \mathrm{tr}\left[C^{-1}\frac{\partial C}{\partial\alpha}\right]$$
which is the result stated.
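Result R21 lends itself to a quick finite-difference check. The sketch below is an addition (not from the book); the one-parameter family $C(\alpha) = C_0 + \alpha C_1$ is just a convenient assumed example of a differentiable positive definite matrix function.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4
M = rng.standard_normal((m, m))
C0 = M @ M.T + m * np.eye(m)     # positive definite base matrix
C1 = rng.standard_normal((m, m))
C1 = C1 + C1.T                   # Hermitian (here: real symmetric) direction

C = lambda a: C0 + a * C1        # differentiable pd family near alpha = 0

# Central-difference approximation of d/dalpha ln|C(alpha)| at alpha = 0
eps = 1e-6
numeric = (np.log(np.linalg.det(C(eps))) -
           np.log(np.linalg.det(C(-eps)))) / (2.0 * eps)
# Analytic value from R21: tr(C^{-1} dC/dalpha)
analytic = np.trace(np.linalg.solve(C(0.0), C1))
assert np.isclose(numeric, analytic, rtol=1e-4)
```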
Finally we make use of a simple property of positive semidefinite matrices to prove the Cauchy–Schwartz inequality for vectors and for functions.

Result R22: (Cauchy–Schwartz inequality for vectors) Let $x, y \in \mathbb{C}^{m \times 1}$. Then:
$$|x^*y|^2 \le \|x\|^2\,\|y\|^2 \qquad (A.5.9)$$
where $|\cdot|$ denotes the modulus of a possibly complex-valued number, and $\|\cdot\|$ denotes the Euclidean vector norm ($\|x\|^2 = x^*x$). Equality in (A.5.9) is achieved if and only if $x$ is proportional to $y$.

Proof: The $(2 \times 2)$ matrix
$$\begin{bmatrix} \|x\|^2 & x^*y \\ y^*x & \|y\|^2 \end{bmatrix} = \begin{bmatrix} x^* \\ y^* \end{bmatrix} [x \; y] \qquad (A.5.10)$$
is clearly positive semidefinite (observe that condition (iii) in D11 is satisfied). It follows from condition (iv) in D11 that the determinant of the above matrix must be nonnegative:
$$\|x\|^2\,\|y\|^2 - |x^*y|^2 \ge 0$$
which gives (A.5.9). Equality in (A.5.9) holds if and only if the determinant of (A.5.10) is equal to zero. The latter condition is equivalent to requiring that $x$ is proportional to $y$ (cf. D3: the columns of the matrix $[x \; y]$ will then be linearly dependent).

Result R23: (Cauchy–Schwartz inequality for functions) Let $f(x)$ and $g(x)$ be two complex-valued functions defined for real-valued argument $x$. Then, assuming that the integrals below exist,
$$\left|\int_I f(x)g^*(x)\,dx\right|^2 \le \left[\int_I |f(x)|^2\,dx\right]\left[\int_I |g(x)|^2\,dx\right]$$
where $I \subset \mathbb{R}$ is an integration interval. The inequality above becomes an equality if and only if $f(x)$ is proportional to $g(x)$ on $I$.

Proof: The matrix
$$\int_I \begin{bmatrix} f(x) \\ g(x) \end{bmatrix} [f^*(x) \; g^*(x)]\,dx$$
is seen to be positive semidefinite (since the integrand is a positive semidefinite matrix for every $x \in I$). Hence the stated result follows from the type of argument used in the proof of Result R22.

A.6 MATRICES WITH SPECIAL STRUCTURE

In this section we consider several types of matrices with a special structure, for which we prove some basic properties used in the text.

Definition D13: A matrix $A \in \mathbb{C}^{m \times n}$ is called Vandermonde if it has the following structure:
$$A = \begin{bmatrix} 1 & \cdots & 1 \\ z_1 & & z_n \\ \vdots & & \vdots \\ z_1^{m-1} & \cdots & z_n^{m-1} \end{bmatrix} \qquad (A.6.1)$$
where $z_k \in \mathbb{C}$ are usually assumed to be distinct.

Result R24: Consider the matrix $A$ in (A.6.1) with $z_k \ne z_p$ for $k, p = 1, \ldots, n$ and $k \ne p$. Also let $m \ge n$ and assume that $z_k \ne 0$ for all $k$. Then any $n$ consecutive rows of $A$ are linearly independent.

Proof: To prove the assertion, it is sufficient to show that the following $n \times n$ Vandermonde matrix is nonsingular:
$$\bar{A} = \begin{bmatrix} 1 & \cdots & 1 \\ z_1 & & z_n \\ \vdots & & \vdots \\ z_1^{n-1} & \cdots & z_n^{n-1} \end{bmatrix}$$
Let $\beta = [\beta_0 \; \cdots \; \beta_{n-1}]^* \ne 0$. The equation $\beta^*\bar{A} = 0$ is equivalent to
$$\beta_0 + \beta_1 z + \cdots + \beta_{n-1}z^{n-1} = 0 \quad \text{at } z = z_k \;\; (k = 1, \ldots, n) \qquad (A.6.2)$$
However, (A.6.2) is impossible, as an $(n-1)$-degree polynomial cannot have $n$ zeroes. Hence, $\bar{A}$ has full rank.

Definition D14: A matrix $A \in \mathbb{C}^{m \times n}$ is called:
• Toeplitz when $A_{ij} = A_{i-j}$
• Hankel when $A_{ij} = A_{i+j}$
Observe that a Toeplitz matrix has the same element along each diagonal, whereas a Hankel matrix has identical elements on each of the antidiagonals.

Result R25: The eigenvectors of a symmetric Toeplitz matrix $A \in \mathbb{R}^{m \times m}$ are either symmetric or skew-symmetric. More precisely, if $J$ denotes the exchange (or reversal) matrix
$$J = \begin{bmatrix} 0 & & 1 \\ & ⋰ & \\ 1 & & 0 \end{bmatrix}$$
and if $x$ is an eigenvector of $A$, then either $x = Jx$ or $x = -Jx$.

Proof: By the property (3.5.3) proven in Section 3.5, $A$ satisfies $AJx = JAx$, or equivalently $(JAJ)x = Ax$, for any $x \in \mathbb{C}^{m \times 1}$. Hence, we must have:
$$JAJ = A \qquad (A.6.3)$$
Let $(\lambda, x)$ denote an eigenpair of $A$:
$$Ax = \lambda x \qquad (A.6.4)$$
Combining (A.6.3) and (A.6.4) yields:
$$\lambda Jx = JAx = J(JAJ)x = A(Jx) \qquad (A.6.5)$$
Because the eigenvectors of a symmetric matrix are unique modulo multiplication by a scalar, it follows from (A.6.5) that:
$$x = \alpha Jx \quad \text{for some } \alpha \in \mathbb{R}$$
As $x$, and hence $Jx$, must have unit norm, $\alpha$ must satisfy $\alpha^2 = 1 \Rightarrow \alpha = \pm 1$; thus, either $x = Jx$ ($x$ is symmetric) or $x = -Jx$ ($x$ is skew-symmetric).
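Result R25 can be observed numerically. The sketch below is an addition (not from the book) using a small symmetric Toeplitz matrix chosen so that, presumably, its eigenvalues are distinct; it confirms that each computed eigenvector is symmetric or skew-symmetric.

```python
import numpy as np

c = np.array([3.0, 1.0, 0.5, 0.2])  # defining sequence of a symmetric Toeplitz matrix
idx = np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
A = c[idx]                          # A_ij = c_|i-j|
J = np.eye(4)[::-1]                 # exchange (reversal) matrix

_, X = np.linalg.eigh(A)
for k in range(4):
    x = X[:, k]
    # each eigenvector satisfies x = Jx or x = -Jx
    assert np.allclose(J @ x, x) or np.allclose(J @ x, -x)
```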
One can show that for $m$ even, the number of symmetric eigenvectors is $m/2$, as is the number of skew-symmetric eigenvectors; for odd $m$ the number of symmetric eigenvectors is $(m+1)/2$ and the number of skew-symmetric eigenvectors is $(m-1)/2$ (see [Cantoni and Butler 1976]). For many additional results on Toeplitz matrices, the reader can consult [Iohvidov 1982; Böttcher and Silbermann 1983].

A.7 MATRIX INVERSION LEMMAS

The following formulas for the inverse of a partitioned matrix are used in the text.

Result R26: Let $A \in \mathbb{C}^{m \times m}$, $B \in \mathbb{C}^{n \times n}$, $C \in \mathbb{C}^{m \times n}$ and $D \in \mathbb{C}^{n \times m}$. Then, provided that the matrix inverses appearing below exist,
$$\begin{bmatrix} A & C \\ D & B \end{bmatrix}^{-1} = \begin{bmatrix} I \\ 0 \end{bmatrix} A^{-1} [I \;\; 0] + \begin{bmatrix} -A^{-1}C \\ I \end{bmatrix} (B - DA^{-1}C)^{-1} [-DA^{-1} \;\; I]$$
$$= \begin{bmatrix} 0 \\ I \end{bmatrix} B^{-1} [0 \;\; I] + \begin{bmatrix} I \\ -B^{-1}D \end{bmatrix} (A - CB^{-1}D)^{-1} [I \;\; -CB^{-1}]$$

Proof: By direct verification.

By equating the top-left blocks in the above two equations we obtain the so-called Matrix Inversion Lemma.

Result R27: (Matrix Inversion Lemma) Let $A$, $B$, $C$ and $D$ be as in R26. Then, assuming that the matrix inverses appearing below exist,
$$(A - CB^{-1}D)^{-1} = A^{-1} + A^{-1}C(B - DA^{-1}C)^{-1}DA^{-1}$$

A.8 SYSTEMS OF LINEAR EQUATIONS

Let $A \in \mathbb{C}^{m \times n}$, $B \in \mathbb{C}^{m \times p}$, and $X \in \mathbb{C}^{n \times p}$. A general system of linear equations in $X$ can be written as:
$$AX = B \qquad (A.8.1)$$
where $A$ and $B$ are given and $X$ is the unknown matrix. The special case of (A.8.1) corresponding to $p = 1$ (for which $X$ and $B$ are vectors) is perhaps the most common one in applications. For the sake of generality, we consider the system (A.8.1) with $p \ge 1$. (The ESPRIT system of equations encountered in Section 4.7 is of the form of (A.8.1) with $p > 1$.) We say that (A.8.1) is exactly determined whenever $m = n$, overdetermined if $m > n$, and underdetermined if $m < n$. In the following discussion, we first address the case where (A.8.1) has an exact solution and then the case where (A.8.1) cannot be exactly satisfied.
A.8.1 Consistent Systems

Result R28: The linear system (A.8.1) is consistent, that is, it admits an exact solution $X$, if and only if $R(B) \subset R(A)$, or equivalently
$$\mathrm{rank}([A \; B]) = \mathrm{rank}(A) \qquad (A.8.2)$$

Proof: The result is readily shown by using simple rank and range properties.

Result R29: Let $X_0$ be a particular solution to (A.8.1). Then the set of all solutions to (A.8.1) is given by:
$$X = X_0 + \Delta \qquad (A.8.3)$$
where $\Delta \in \mathbb{C}^{n \times p}$ is any matrix whose columns are in $N(A)$.

Proof: Obviously (A.8.3) satisfies (A.8.1). To show that no solution outside the set (A.8.3) exists, let $\Omega \in \mathbb{C}^{n \times p}$ be a matrix whose columns do not all belong to $N(A)$. Then $A\Omega \ne 0$ and
$$A(X_0 + \Delta + \Omega) = A\Omega + B \ne B$$
and hence $X_0 + \Delta + \Omega$ is not a solution to $AX = B$.

Result R30: The system of linear equations (A.8.1) has a unique solution if and only if (A.8.2) holds and $A$ has full column rank:
$$\mathrm{rank}(A) = n \le m \qquad (A.8.4)$$

Proof: The assertion follows from R28 and R29.

Next let us assume that (A.8.1) is consistent but $A$ does not satisfy (A.8.4) (hence $\dim N(A) \ge 1$). Then, according to R29, there is an infinite set of solutions. In what follows we obtain the unique solution $X_0$ which has minimum norm.

Result R31: Consider a linear system that satisfies the consistency condition in (A.8.2). Let $A$ have rank $r \le \min(m, n)$, and let
$$A = [\underbrace{U_1}_{r} \;\; \underbrace{U_2}_{m-r}] \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^* \\ V_2^* \end{bmatrix} \begin{matrix} \scriptstyle r \\ \scriptstyle n-r \end{matrix} = U_1\Sigma_1V_1^*$$
denote the SVD of $A$. (Here $\Sigma_1$ is nonsingular; cf. the discussion in Section A.4.) Then:
$$X_0 = V_1\Sigma_1^{-1}U_1^*B \qquad (A.8.5)$$
is the minimum Frobenius norm solution of (A.8.1) in the sense that
$$\|X_0\|^2 < \|X\|^2 \qquad (A.8.6)$$
for any other solution $X \ne X_0$.

Proof: First we verify that $X_0$ satisfies (A.8.1). We have
$$AX_0 = U_1U_1^*B \qquad (A.8.7)$$
In (A.8.7), $U_1U_1^*$ is the orthogonal projector onto $R(A)$ (cf. R17). Because $B$ must belong to $R(A)$ (see R28), we conclude that $U_1U_1^*B = B$ and hence that $X_0$ is indeed a solution.
Next note that, according to R15,
$$N(A) = R(V_2)$$
Consequently, the general solution (A.8.3) can be written as (cf. R29)
$$X = X_0 + V_2Q; \quad Q \in \mathbb{C}^{(n-r) \times p}$$
from which we obtain:
$$\|X\|^2 = \mathrm{tr}[(X_0^* + Q^*V_2^*)(X_0 + V_2Q)] = \|X_0\|^2 + \|V_2Q\|^2 > \|X_0\|^2 \quad \text{for } X \ne X_0$$

Definition D15: The matrix
$$A^\dagger \triangleq V_1\Sigma_1^{-1}U_1^* \qquad (A.8.8)$$
in (A.8.5) is the so-called Moore–Penrose pseudoinverse (or generalized inverse) of $A$. It can be shown that $A^\dagger$ is the unique solution to the following set of equations:
$$AA^\dagger A = A$$
$$A^\dagger AA^\dagger = A^\dagger$$
$$A^\dagger A \text{ and } AA^\dagger \text{ are Hermitian}$$
Evidently, whenever $A$ is square and nonsingular we have $A^\dagger = A^{-1}$, which partly motivates the name of "generalized inverse" (or "pseudoinverse") given to $A^\dagger$ in the general case.

The computation of a solution to (A.8.1), whenever one exists, is an important issue which we address briefly in the following. We begin by noting that in the general case there is of course no computer algorithm which can compute a solution to (A.8.1) exactly (i.e., without any numerical errors). In effect, the best we can hope for is to compute the exact solution to a slightly perturbed (fictitious) system of linear equations:
$$(A + \Delta A)(X + \Delta X) = B + \Delta B \qquad (A.8.9)$$
where $\Delta A$ and $\Delta B$ are small perturbation terms, the magnitude of which depends on the algorithm and the length of the computer word, and where $\Delta X$ is the solution perturbation induced. An algorithm which, when applied to (A.8.1), provides a solution to (A.8.9) corresponding to perturbation terms $(\Delta A, \Delta B)$ whose magnitude is of the order afforded by the "machine epsilon" is said to be numerically stable.
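Definition D15 and Result R31 can be illustrated with NumPy's `pinv` (a sketch added here, using an underdetermined full-row-rank system as an assumed example):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))   # full row rank (almost surely): consistent for any b
b = rng.standard_normal(3)

x0 = np.linalg.pinv(A) @ b        # X0 = V1 Sigma1^{-1} U1^* b, cf. (A.8.5)
assert np.allclose(A @ x0, b)     # exact solution of the consistent system

# Any other solution x0 + V2 q (q != 0) has strictly larger norm, cf. (A.8.6).
_, _, Vh = np.linalg.svd(A)
V2 = Vh[3:].T                     # basis of N(A), cf. R15
x = x0 + V2 @ rng.standard_normal(2)
assert np.allclose(A @ x, b)
assert np.linalg.norm(x0) < np.linalg.norm(x)
```

The strict inequality in the last assertion reflects $\|X\|^2 = \|X_0\|^2 + \|V_2Q\|^2$: the minimum-norm solution has no component in $N(A)$.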
Now, assuming that (A.8.1) has a unique solution (and hence that $A$ satisfies (A.8.4)), one can show that the perturbations in $A$ and $B$ in (A.8.9) are retrieved in $\Delta X$ multiplied by a proportionality factor given by
$$\mathrm{cond}(A) = \sigma_1/\sigma_n \qquad (A.8.10)$$
where $\sigma_1$ and $\sigma_n$ are the largest and smallest singular values of $A$, respectively, and where "cond" is short for "condition". The system (A.8.1) is said to be well-conditioned if the corresponding ratio (A.8.10) is "small" (that is, not much larger than one). The ratio in (A.8.10) is called the condition number of the matrix $A$ and is an important parameter of a given system of linear equations. Note from the previous discussion that even a numerically stable algorithm (i.e., one that induces quite small $\Delta A$ and $\Delta B$) can yield an inaccurate solution $X$ when applied to an ill-conditioned system of linear equations (i.e., a system with a very large $\mathrm{cond}(A)$). For more details on the topic of this paragraph, including specific algorithms for solving linear systems, we refer the reader to [Stewart 1973; Golub and Van Loan 1989].

A.8.2 Inconsistent Systems

The systems of linear equations that appear in applications (such as those in the text) are quite often perturbed versions of a "nominal system" and usually they do not admit any exact solution. Such systems are said to be inconsistent, and frequently they are overdetermined and have a matrix $A$ with full column rank:
$$\mathrm{rank}(A) = n \le m \qquad (A.8.11)$$
In what follows, we present two approaches to obtaining an approximate solution to an inconsistent system of linear equations
$$AX \simeq B \qquad (A.8.12)$$
under the condition (A.8.11).

Definition D16: The least squares (LS) approximate solution to (A.8.12) is given by the minimizer $X_{LS}$ of the following criterion:
$$\|AX - B\|^2$$
Equivalently, $X_{LS}$ can be defined as follows.
Obtain the minimal perturbation $\Delta B$ that makes the system (A.8.12) consistent:
$$\min \|\Delta B\|^2 \quad \text{subject to } AX = B + \Delta B \qquad (A.8.13)$$
Then derive $X_{LS}$ by solving the system in (A.8.13) corresponding to the optimal perturbation $\Delta B$.

The LS solution introduced above can be obtained in several ways. A simple way is as follows.

Result R32: The LS solution to (A.8.12) is given by:
$$X_{LS} = (A^*A)^{-1}A^*B \qquad (A.8.14)$$
The inverse matrix in the above equation exists in view of (A.8.11).

Proof: The matrix $B_0$ that makes the system consistent and which is of minimal distance (in the Frobenius norm metric) from $B$ is given by the orthogonal projection of (the columns of) $B$ onto $R(A)$:
$$B_0 = A(A^*A)^{-1}A^*B \qquad (A.8.15)$$
To motivate (A.8.15) by using only the results proven so far in this appendix, we digress from the main proof and let $U_1$ denote an orthogonal basis of $R(A)$. Then R17 implies that $B_0 = U_1U_1^*B$. However, $U_1$ and $A$ span the same subspace and hence they must be related to one another by a nonsingular linear transformation: $U_1 = AQ$ ($|Q| \ne 0$). It follows from this observation that $U_1U_1^* = AQQ^*A^*$ and also that $Q^*A^*AQ = I$, which lead to the following projector formula: $U_1U_1^* = A(A^*A)^{-1}A^*$ (as used in (A.8.15)).

Next, we return to the proof of (A.8.14). The unique solution to
$$AX - B_0 = A[X - (A^*A)^{-1}A^*B] = 0$$
is obviously (A.8.14), since $\dim N(A) = 0$ by assumption.

The LS solution $X_{LS}$ can be computed by means of the SVD of the $m \times n$ matrix $A$. $X_{LS}$ can, however, be obtained in a computationally more efficient way, as briefly described below. Note that $X_{LS}$ should not be computed by directly evaluating the formula in (A.8.14) as it stands. Briefly stated, the reason is as follows. Recall from (A.8.10) that the condition number of $A$ is given by:
$$\mathrm{cond}(A) = \sigma_1/\sigma_n \qquad (A.8.16)$$
(note that $\sigma_n \ne 0$ under (A.8.11)). When working directly on $A$, the numerical errors made in the computation of $X_{LS}$ can be shown to be proportional to (A.8.16).
However, in (A.8.14) one would need to invert the matrix $A^*A$, whose condition number is:
$$\mathrm{cond}(A^*A) = \sigma_1^2/\sigma_n^2 = [\mathrm{cond}(A)]^2 \qquad (A.8.17)$$
Working with $(A^*A)$ may hence induce much larger numerical errors during the computation of $X_{LS}$ and is therefore not advisable. The algorithm sketched in what follows derives $X_{LS}$ by operating on $A$ directly.

For any matrix $A$ satisfying (A.8.11) there exist a unitary matrix $Q \in \mathbb{C}^{m \times m}$ and a nonsingular upper-triangular matrix $R \in \mathbb{C}^{n \times n}$ such that
$$A = Q\begin{bmatrix} R \\ 0 \end{bmatrix} \triangleq [\underbrace{Q_1}_{n} \;\; \underbrace{Q_2}_{m-n}]\begin{bmatrix} R \\ 0 \end{bmatrix} \qquad (A.8.18)$$
The previous factorization of $A$ is called the QR decomposition (QRD). Inserting (A.8.18) into (A.8.14) we obtain
$$X_{LS} = R^{-1}Q_1^*B$$
Hence, once the QRD of $A$ has been performed, $X_{LS}$ can be conveniently obtained as the solution of a triangular system of linear equations:
$$RX_{LS} = Q_1^*B \qquad (A.8.19)$$
We note that the computation of the QRD is faster than that of the SVD (see, e.g., [Stewart 1973; Golub and Van Loan 1989]).

The previous definition and derivation of $X_{LS}$ make it clear that the LS approach derives an approximate solution to (A.8.12) by implicitly assuming that only the right-hand side matrix $B$ is perturbed. In applications, quite frequently both $A$ and $B$ can be considered to be perturbed versions of some nominal (and unknown) matrices. In such cases we may think of determining an approximate solution to (A.8.12) by explicitly recognizing the fact that neither $A$ nor $B$ is perturbation free. An approach based on this idea is described next (see, e.g., [Van Huffel and Vandewalle 1991]).

Definition D17: The total least squares (TLS) approximate solution to (A.8.12) is defined as follows. First derive the minimal perturbations $\Delta A$ and $\Delta B$ that make the system consistent:
$$\min \|[\Delta A \;\; \Delta B]\|^2 \quad \text{subject to } (A + \Delta A)X = B + \Delta B \qquad (A.8.20)$$
Then obtain $X_{TLS}$ by solving the system in (A.8.20) corresponding to the optimal perturbations $(\Delta A, \Delta B)$.
A simple way to derive a more explicit formula for calculating X_TLS runs as follows.

Result R33: Let

[A B] = [Ũ1 Ũ2] [Σ̃1 0; 0 Σ̃2] [Ṽ1^*; Ṽ2^*]    (A.8.21)

denote the SVD of the matrix [A B], where Ũ1 has n columns, Ũ2 has m − n columns, and Ṽ1^* and Ṽ2^* have n and p rows, respectively. Furthermore, partition Ṽ2^* as

Ṽ2^* = [Ṽ21^* Ṽ22^*]    (A.8.22)

where Ṽ21^* has n columns and Ṽ22^* has p columns. Then

X_TLS = −Ṽ21 Ṽ22^{-1}    (A.8.23)

if Ṽ22^{-1} exists.

Proof: The optimization problem with constraints in (A.8.20) can be restated in the following way: find the minimal perturbation [ΔA ΔB] and the corresponding matrix X such that

{[A B] + [ΔA ΔB]} [−X; I] = 0    (A.8.24)

Since rank([−X; I]) = p, it follows that [ΔA ΔB] should be such that dim N([A B] + [ΔA ΔB]) ≥ p or, equivalently,

rank([A B] + [ΔA ΔB]) ≤ n    (A.8.25)

According to R18, the minimal perturbation matrix [ΔA ΔB] that achieves (A.8.25) is given by

[ΔA ΔB] = −Ũ2 Σ̃2 Ṽ2^*    (A.8.26)

Inserting (A.8.26) along with (A.8.21) into (A.8.24), we obtain the following matrix equation in X:

Ũ1 Σ̃1 Ṽ1^* [−X; I] = 0,  or, equivalently,  Ṽ1^* [−X; I] = 0    (A.8.27)

Equation (A.8.27) implies that X must satisfy

[−X; I] = Ṽ2 Q = [Ṽ21; Ṽ22] Q    (A.8.28)

for some nonsingular normalizing matrix Q. The expression (A.8.23) for X_TLS is readily obtained from (A.8.28).

The TLS solution in (A.8.23) is unique if and only if the singular values {σ̃k} of the matrix [A B] are such that σ̃n > σ̃(n+1) (this follows from R18). When Ṽ22 is singular, the TLS solution does not exist; see [Van Huffel and Vandewalle 1991]. The computation of X_TLS requires the SVD of the m × (n + p) matrix [A B].

The solution X_TLS can be rewritten in a slightly different form.
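A minimal NumPy sketch (ours) of (A.8.21)–(A.8.23) for p = 1; it also checks that the optimal perturbation (A.8.26) makes the perturbed system exactly consistent, which is the defining property in (A.8.20):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 20, 3, 1
X_true = rng.standard_normal((n, p))
A = rng.standard_normal((m, n))
B = A @ X_true + 0.01 * rng.standard_normal((m, p))   # in general both sides are noisy

# SVD of the compound matrix [A B], cf. (A.8.21)
U, s, Vh = np.linalg.svd(np.hstack([A, B]), full_matrices=False)
V2 = Vh[n:, :].conj().T            # last p right singular vectors, (n+p) x p
V21, V22 = V2[:n, :], V2[n:, :]
X_tls = -V21 @ np.linalg.inv(V22)  # (A.8.23)

# Optimal perturbation (A.8.26); the perturbed system has rank <= n and is consistent
dAB = -(U[:, n:] * s[n:]) @ Vh[n:, :]
AB_pert = np.hstack([A, B]) + dAB
A_p, B_p = AB_pert[:, :n], AB_pert[:, n:]
assert np.allclose(A_p @ X_tls, B_p)
```

For p = 1, `np.linalg.inv(V22)` is a 1 × 1 inversion, matching the remark that (A.8.23) needs no real matrix inversion in this common case.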
Let Ṽ11, Ṽ12 be defined via the following partition of Ṽ1^*:

Ṽ1^* = [Ṽ11 Ṽ12]

where Ṽ11 has n columns and Ṽ12 has p columns. The orthogonality condition Ṽ1^*Ṽ2 = 0 can be rewritten as

Ṽ11Ṽ21 + Ṽ12Ṽ22 = 0

which yields

X_TLS = −Ṽ21Ṽ22^{-1} = Ṽ11^{-1}Ṽ12    (A.8.29)

Since usually p is (much) smaller than n, the formula (A.8.23) for X_TLS may often be computationally more convenient than (A.8.29) (for example, in the common case of p = 1, (A.8.23) does not require any matrix inversion, whereas (A.8.29) requires the calculation of an n × n matrix inverse).

A.9 QUADRATIC MINIMIZATION

Several problems in this text require the solution to quadratic minimization problems. In this section, we make use of matrix analysis techniques to derive two results: one on unconstrained minimization, and the other on constrained minimization.

Result R34: Let A be an (n × n) Hermitian positive definite matrix, let X and B be (n × m) matrices, and let C be an m × m Hermitian matrix. Then the unique solution to the minimization problem

min_X F(X),  F(X) = X^*AX + X^*B + B^*X + C    (A.9.1)

is given by

X0 = −A^{-1}B,  F(X0) = C − B^*A^{-1}B    (A.9.2)

Here, the matrix minimization means F(X0) ≤ F(X) for every X ≠ X0; that is, F(X) − F(X0) is a positive semidefinite matrix.

Proof: Let X = X0 + Δ, where Δ is an arbitrary (n × m) complex matrix. Then

F(X) = (−A^{-1}B + Δ)^*A(−A^{-1}B + Δ) + (−A^{-1}B + Δ)^*B + B^*(−A^{-1}B + Δ) + C = Δ^*AΔ + F(X0)    (A.9.3)

Since A is positive definite, Δ^*AΔ ≥ 0, with equality only for Δ = 0; thus, the minimum value of F(X) is F(X0), and the result is proven.

We next present a result on linearly constrained quadratic minimization.

Result R35: Let A be an (n × n) Hermitian positive definite matrix, and let X ∈ C^{n×m}, B ∈ C^{n×k}, and C ∈ C^{m×k}. Assume that B has full column rank equal to k (hence n ≥ k).
Then the unique solution to the minimization problem

min_X X^*AX  subject to  X^*B = C    (A.9.4)

is given by

X0 = A^{-1}B(B^*A^{-1}B)^{-1}C^*    (A.9.5)

Proof: First note that (B^*A^{-1}B)^{-1} exists and that X0^*B = C. Let X = X0 + Δ, where Δ ∈ C^{n×m} satisfies Δ^*B = 0 (so that X also satisfies the constraint X^*B = C). Then

X^*AX = X0^*AX0 + X0^*AΔ + Δ^*AX0 + Δ^*AΔ    (A.9.6)

where the two middle terms are equal to zero:

Δ^*AX0 = Δ^*B(B^*A^{-1}B)^{-1}C^* = 0

Hence,

X^*AX − X0^*AX0 = Δ^*AΔ ≥ 0    (A.9.7)

as A is positive definite. It follows from (A.9.7) that the minimizing X matrix is given by X0.

A common special case of Result R35 is m = k = 1 (so X and B are both vectors) and C = 1. Then

X0 = A^{-1}B / (B^*A^{-1}B)

APPENDIX B: Cramér–Rao Bound Tools

B.1 INTRODUCTION

In the text we have kept the discussion of statistical aspects at a minimum for conciseness reasons. However, we have presented certain statistical tools and analyses that we have found useful to the understanding of the spectral analysis material discussed. In this appendix we introduce some basic facts on an important statistical tool: the Cramér–Rao bound (abbreviated as CRB). We begin our discussion by explaining the importance of the CRB for parametric spectral analysis.

Let φ(ω, θ) denote a parametric spectral model, depending on a real-valued vector θ, and let φ(ω, θ̂) denote the spectral density estimated from N data samples. Assume that the estimate θ̂ of θ is consistent, such that the estimation error is small for large values of N.
Then, by making use of a Taylor series expansion technique, we can approximately write the estimation error [φ(ω, θ̂) − φ(ω, θ)] as a linear function of θ̂ − θ:

[φ(ω, θ̂) − φ(ω, θ)] ≃ ψ^T(ω, θ)(θ̂ − θ)    (B.1.1)

where the symbol ≃ denotes an asymptotically (in N) valid approximation, and ψ(ω, θ) is the gradient of φ(ω, θ) with respect to θ (evaluated at the true parameter values):

ψ(ω, θ) = ∂φ(ω, θ)/∂θ    (B.1.2)

It follows from (B.1.1) that the mean squared error (MSE) of φ(ω, θ̂) is approximately given by

MSE[φ(ω, θ̂)] ≃ ψ^T(ω, θ) P ψ(ω, θ)  (for N ≫ 1)    (B.1.3)

where

P = MSE[θ̂] = E{(θ̂ − θ)(θ̂ − θ)^T}    (B.1.4)

We see from (B.1.3) that the variance (or MSE) of the estimation errors in the spectral domain is linearly related to the variance (or MSE) of the parameter vector estimate θ̂, so that we can get an accurate spectral estimate only if we use an accurate parameter estimator. We start from this simple observation, which reduces the statistical analysis of φ(ω, θ̂) to the analysis of θ̂, to explain the importance of the CRB for the performance study of spectral analysis. Toward that end, we discuss several facts in the paragraphs that follow.

Assume that θ̂ is some unbiased estimate of θ (that is, E{θ̂} = θ), and let P denote the covariance matrix of θ̂ (cf. (B.1.4)):

P = E{(θ̂ − θ)(θ̂ − θ)^T}    (B.1.5)

(Note that here we do not require that N be large.) Then, under quite general conditions, there is a matrix (which we denote by Pcr) such that

P ≥ Pcr    (B.1.6)

in the sense that the difference (P − Pcr) is a positive semidefinite matrix. This is basically the celebrated Cramér–Rao bound result [Cramér 1946; Rao 1945]. We will derive the inequality (B.1.6) along with an expression for the CRB in the next section.

In view of (B.1.6) we may think of assessing the performance of a given estimation method by comparing its covariance matrix P with the CRB.
Such a comparison would make perfect sense whenever the CRB is achievable; that is, whenever there exists an estimation method such that its P equals the CRB. Unfortunately, this is rarely the case for finite N. Additionally, biased estimators may exist whose MSEs are smaller than the CRB under discussion (see, for example, [Stoica and Moses 1990; Stoica and Ottersten 1996]). Hence, in the finite sample case (particularly for small samples), comparing with the CRB does not really make much sense, because:

(i) there might be no unbiased estimator that attains the CRB and, consequently, a large difference (P − Pcr) may not necessarily mean bad accuracy; and

(ii) the equality P = Pcr does not necessarily mean that we have achieved the ultimate possible performance, as there might be biased estimators with lower MSE than the CRB.

In the large sample case, on the other hand, the utility of the CRB result for the type of parameter estimation problems addressed in the text is significant, as explained next.

Let y ∈ R^{N×1} denote the sample of available observations. Any estimate θ̂ of θ will be a function of y. We assume that both θ and y are real-valued. Working with real θ and y vectors appears to be the most convenient way when discussing the CRB theory, even when the original parameters and measurements are complex-valued. (If the parameters and measurements are complex-valued, θ and y are obtained by concatenating the real and imaginary parts of the complex parameter and data vectors, respectively.) We also assume that the probability density of y, which we denote by p(y, θ), is a differentiable function of θ.

An important general method for parameter estimation consists of maximizing p(y, θ) with respect to θ:

θ̂ = arg max_θ p(y, θ)    (B.1.7)

The function p(y, θ) in (B.1.7), with y fixed and θ variable, is called the likelihood function, and θ̂ is called the maximum likelihood (ML) estimate of θ.
Under regularity conditions the ML estimate (MLE) is consistent (i.e., lim_{N→∞} θ̂ = θ stochastically) and its covariance matrix approaches the CRB as N increases:

P ≃ Pcr  for the MLE with N ≫ 1    (B.1.8)

The aforementioned regularity conditions basically amount to requiring that the number of free parameters not increase with N, which is true for all but one of the parametric spectral estimation problems discussed in the text. The array processing problem of Chapter 6 does not satisfy the previous requirement when the signal snapshots are assumed to be unknown deterministic variables; in such a case the number of unknown parameters grows without bound as N increases, and the equality in (B.1.8) does not hold; see [Stoica and Nehorai 1989a; Stoica and Nehorai 1990] and also Section B.6.

In summary, then, in large samples the ML method attains the ultimate performance corresponding to the CRB, under rather general conditions. Furthermore, there are no other known practical methods that can provide consistent estimates of θ with lower variance than the CRB¹. Hence, the ML method can be said to be asymptotically a statistically efficient practical estimation approach. The accuracy achieved by any other estimation method can therefore be assessed by comparing the (large sample) covariance matrix of that method with the CRB, which approximately equals the covariance matrix of the MLE in large samples (cf. (B.1.8)). This performance comparison ability is one of the most important uses of the CRB.
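As a minimal illustration of (B.1.6) and (B.1.8) (our own example, not from the text): for N i.i.d. samples from N(µ, σ²) with σ² known, the CRB for the mean is σ²/N, and the sample mean, which is the MLE here, attains it:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, trials = 10, 1.0, 200_000

# MLE of the mean of N(mu, sigma^2) from N samples is the sample mean;
# its Monte Carlo variance should match the CRB, sigma^2 / N.
y = rng.normal(loc=2.0, scale=sigma, size=(trials, N))
mu_hat = y.mean(axis=1)

crb = sigma**2 / N
print(mu_hat.var(), crb)   # the two numbers should be close
assert abs(mu_hat.var() - crb) < 0.05 * crb
```

Here the bound is attained exactly for every N; for most of the estimators discussed in the text, equality holds only asymptotically, as in (B.1.8).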
With reference to the spectral estimation problem, it follows from (B.1.3) and the previous observation that we can assess the performance of a given spectral estimator by comparing its large sample MSE values with

ψ^T(ω, θ) Pcr ψ(ω, θ)    (B.1.9)

The MSE values can be obtained by Monte Carlo simulation of a typical scenario representative of the problem of interest, or by using analytical MSE formulas whenever they are available. In the text we have emphasized the former, more pragmatic way of determining the MSE of a given spectral estimator.

Remark: The CRB formula (B.1.9) for parametric (or model-based) spectral analysis holds in the case where the model order (i.e., the dimension of θ) is equal to the "true order". Of course, in any practical spectral analysis exercise using the parametric approach we will have to estimate n, the model order, in addition to θ, the (real-valued) model parameters. The need for order estimation is a distinctive feature, and an additional complication, of parametric spectral analysis, as compared with nonparametric spectral analysis. There are several available rules for order selection (see Appendix C). For most of these rules, the probability of underestimating the true order approaches zero as N increases (if that is not the case, then the estimated spectrum may be heavily biased). The probability of overestimating the true order, on the other hand, may be nonzero even when N → ∞. Let n̂ denote the estimated order, n0 the true order, and pn = Pr(n̂ = n) for N → ∞. Assume that pn = 0 for n < n0 and that the CRB formula (B.1.9) holds for any n ≥ n0 (which is a relatively mild restriction). Then it can be shown (see [Sando, Mitra, and Stoica 2002] and the references therein) that whenever n is estimated along with θ, the formula (B.1.9) should be replaced with its average over the distribution of order estimates:

Σ_{n=n0}^{nMAX} pn ψn^T(ω, θn) Pcr,n ψn(ω, θn)    (B.1.10)

where we have emphasized by notation the dependence of ψ, θ, and Pcr on the model order n, and where nMAX denotes the maximum order value considered in the order selection rule. The sets of probabilities {pn} associated with various order estimation rules are tabulated, e.g., in [McQuarrie and Tsai 1998]. As expected, it can be proven that the spectral CRB in (B.1.10) increases (for each ω) with increasing nMAX (see [Sando, Mitra, and Stoica 2002]). This increase of the spectral estimation error is the price paid for not knowing the true model order. ■

¹Consistent estimation methods whose asymptotic variance is lower than the CRB, at certain points in the parameter set, do exist! However, such methods (which are called "asymptotically statistically super-efficient") have little practical relevance (they are mainly of theoretical interest); see, e.g., [Stoica and Ottersten 1996].

B.2 THE CRB FOR GENERAL DISTRIBUTIONS

Result R36: (Cramér–Rao Bound) Consider the likelihood function p(y, θ), introduced in the previous section, and define

Pcr = ( E{ [∂ln p(y, θ)/∂θ] [∂ln p(y, θ)/∂θ]^T } )^{-1}    (B.2.1)

where the inverse is assumed to exist. Then

P ≥ Pcr    (B.2.2)

holds for any unbiased estimate of θ. Furthermore, the CRB matrix can alternatively be expressed as:

Pcr = −( E{ ∂²ln p(y, θ)/∂θ∂θ^T } )^{-1}    (B.2.3)

Proof: As p(y, θ) is a probability density function,

∫ p(y, θ) dy = 1    (B.2.4)

where the integration is over R^N.
The assumption that θ̂ is an unbiased estimate implies

∫ θ̂ p(y, θ) dy = θ    (B.2.5)

Differentiation of (B.2.4) and (B.2.5) with respect to θ yields, under regularity conditions,

∫ [∂p(y, θ)/∂θ] dy = ∫ [∂ln p(y, θ)/∂θ] p(y, θ) dy = E{∂ln p(y, θ)/∂θ} = 0    (B.2.6)

and

∫ θ̂ [∂p(y, θ)/∂θ]^T dy = ∫ θ̂ [∂ln p(y, θ)/∂θ]^T p(y, θ) dy = E{θ̂ [∂ln p(y, θ)/∂θ]^T} = I    (B.2.7)

It follows from (B.2.6) and (B.2.7) that

E{(θ̂ − θ) [∂ln p(y, θ)/∂θ]^T} = I    (B.2.8)

Next note that the matrix

E{ [θ̂ − θ; ∂ln p(y, θ)/∂θ] [(θ̂ − θ)^T (∂ln p(y, θ)/∂θ)^T] } = [P I; I Pcr^{-1}]    (B.2.9)

is, by construction, positive semidefinite. (To obtain the equality in (B.2.9) we used (B.2.8).) This observation implies (B.2.2) (see Result R20 in Appendix A).

Next we prove the equality in (B.2.3). Differentiation of (B.2.6) gives:

∫ [∂²ln p(y, θ)/∂θ∂θ^T] p(y, θ) dy + ∫ [∂ln p(y, θ)/∂θ][∂ln p(y, θ)/∂θ]^T p(y, θ) dy = 0

or, equivalently,

E{ [∂ln p(y, θ)/∂θ][∂ln p(y, θ)/∂θ]^T } = −E{ ∂²ln p(y, θ)/∂θ∂θ^T }

which is precisely what we had to prove.

The matrix

J = E{ [∂ln p(y, θ)/∂θ][∂ln p(y, θ)/∂θ]^T } = −E{ ∂²ln p(y, θ)/∂θ∂θ^T }    (B.2.10)

the inverse of which appears in the CRB formula (B.2.1) (or (B.2.3)), is called the (Fisher) information matrix [Fisher 1922].

B.3 THE CRB FOR GAUSSIAN DISTRIBUTIONS

The CRB matrix in (B.2.1) depends implicitly on the data properties via the probability density function p(y, θ). To obtain a more explicit expression for the CRB we should specify the data distribution. A particularly convenient CRB formula is obtained if the data vector is assumed to be Gaussian distributed:

p(y, θ) = (2π)^{-N/2} |C|^{-1/2} exp[−(y − µ)^T C^{-1} (y − µ)/2]    (B.3.1)

where µ and C are, respectively, the mean and the covariance matrix of y (C is assumed to be invertible).
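A quick Monte Carlo sanity check (ours) of the two equivalent expressions for J in (B.2.10): for N i.i.d. samples from N(θ, 1), the score is Σ_t (y_t − θ), the Hessian of the log-likelihood is the constant −N, and hence both forms give J = N:

```python
import numpy as np

rng = np.random.default_rng(0)
N, theta, trials = 5, 1.3, 400_000

y = rng.normal(loc=theta, scale=1.0, size=(trials, N))

# First form of (B.2.10): the score is d/dtheta ln p(y, theta) = sum_t (y_t - theta)
score = (y - theta).sum(axis=1)
J_outer = (score**2).mean()    # Monte Carlo estimate of E{(d ln p / d theta)^2}

# Second form: -E{d^2 ln p / d theta^2} = N exactly, since the Hessian is -N
J_hessian = float(N)

print(J_outer, J_hessian)      # the two estimates of J should agree
assert abs(J_outer - J_hessian) < 0.05 * J_hessian
```

The outer-product form needs only first derivatives, while the Hessian form is often easier to evaluate analytically; (B.2.10) guarantees they coincide in expectation.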
In the case of (B.3.1), the log-likelihood function that appears in (B.2.1) is given by:

ln p(y, θ) = −(N/2) ln 2π − (1/2) ln|C| − (1/2)(y − µ)^T C^{-1}(y − µ)    (B.3.2)

Result R37: The CRB matrix corresponding to the Gaussian data distribution in (B.3.1) is given (elementwise) by:

[Pcr^{-1}]_ij = (1/2) tr[C^{-1}C'_i C^{-1}C'_j] + µ'^T_i C^{-1} µ'_j    (B.3.3)

where C'_i denotes the derivative of C with respect to the ith element of θ (and similarly for µ'_i).

Proof: By using Result R21 and the notational convention for the first-order and second-order derivatives, we obtain:

2[ln p(y, θ)]''_ij = ∂/∂θ_i { −tr[C^{-1}C'_j] + 2µ'^T_j C^{-1}(y − µ) + (y − µ)^T C^{-1}C'_j C^{-1}(y − µ) }
= tr[C^{-1}C'_i C^{-1}C'_j] − tr[C^{-1}C''_ij]
+ 2{ µ'^T_j (C^{-1})'_i (y − µ) − µ'^T_j C^{-1} µ'_i } − 2µ'^T_i C^{-1}C'_j C^{-1}(y − µ)
+ tr{ (y − µ)(y − µ)^T [ −C^{-1}C'_i C^{-1}C'_j C^{-1} + C^{-1}C''_ij C^{-1} − C^{-1}C'_j C^{-1}C'_i C^{-1} ] }

Taking the expectation of both sides (using E{y − µ} = 0 and E{(y − µ)(y − µ)^T} = C, and recalling from (B.2.3) that [Pcr^{-1}]_ij = −E{[ln p(y, θ)]''_ij}) yields:

2[Pcr^{-1}]_ij = −tr[C^{-1}C'_i C^{-1}C'_j] + tr[C^{-1}C''_ij] + 2µ'^T_i C^{-1}µ'_j + tr[C^{-1}C'_i C^{-1}C'_j] − tr[C^{-1}C''_ij] + tr[C^{-1}C'_i C^{-1}C'_j]
= tr[C^{-1}C'_i C^{-1}C'_j] + 2µ'^T_i C^{-1}µ'_j

which concludes the proof.

The CRB expression in (B.3.3) is sometimes referred to as the Slepian–Bangs formula. (The second term in (B.3.3) is due to Slepian [Slepian 1954] and the first to Bangs [Bangs 1971].)

Next we specialize the CRB formula (B.3.3) to a particular type of Gaussian distribution. Let N = 2N̄ (hence, N is assumed to be even). Partition the vector y as

y = [y1; y2],  y1, y2 ∈ R^{N̄}    (B.3.4)

Accordingly, partition µ and C as

µ = [µ1; µ2]    (B.3.5)

and

C = [C11 C12; C12^T C22]    (B.3.6)

The vector y is said to have a circular (or circularly symmetric) Gaussian distribution if

C11 = C22    (B.3.7)
C12^T = −C12    (B.3.8)

Let

y_c ≜ y1 + iy2    (B.3.9)

and

µ_c ≜ µ1 + iµ2    (B.3.10)

(The original uses distinct typefaces for y, µ and their complex counterparts; here we write y_c, µ_c for the complex-valued vectors.) We also say that the complex-valued random vector y_c has a circular Gaussian distribution whenever the conditions (B.3.7) and (B.3.8) are satisfied.
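A minimal numerical check (ours) of the Slepian–Bangs formula (B.3.3): for y ~ N(m·1, s·I) with θ = (m, s) (s being the per-sample variance), the formula reproduces the familiar Fisher information diag(N/s, N/(2s²)):

```python
import numpy as np

N, m, s = 6, 0.7, 2.0                   # s is the variance of each sample
C = s * np.eye(N)
Ci = np.linalg.inv(C)
mu_d = [np.ones(N), np.zeros(N)]        # dmu/dm, dmu/ds
C_d = [np.zeros((N, N)), np.eye(N)]     # dC/dm,  dC/ds

# Slepian-Bangs formula (B.3.3), evaluated elementwise
J = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        J[i, j] = 0.5 * np.trace(Ci @ C_d[i] @ Ci @ C_d[j]) + mu_d[i] @ Ci @ mu_d[j]

assert np.allclose(J, np.diag([N / s, N / (2 * s**2)]))
```

The mean parameter contributes only through the second (Slepian) term and the variance parameter only through the first (Bangs) term, so the information matrix is diagonal here.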
It is a straightforward exercise to verify that the aforementioned conditions can be more compactly written as:

E{(y_c − µ_c)(y_c − µ_c)^T} = 0    (B.3.11)

The Fourier transform, as well as the complex demodulation operation (see Chapter 6), often leads to signals satisfying (B.3.11) (see, e.g., [Brillinger 1981]). Hence, circularity is a relatively frequent property of the Gaussian random signals encountered in the spectral analysis problems discussed in this text.

Remark: If a random vector y_c satisfies the "circularity condition" (B.3.11), then it is readily verified that y_c and y_c e^{iz} have the same second-order properties for every constant z in [−π, π]. Hence, the second-order properties of y_c do not change if its generic element y_k is replaced by any other value, y_k e^{iz}, on the circle with radius |y_k| (recall that z is nonrandom and does not depend on k). This observation provides a motivation for the name "circularly symmetric" given to such a random vector. ■

Let

Γ = E{(y_c − µ_c)(y_c − µ_c)^*}    (B.3.12)

For circular Gaussian random vectors y (or y_c), the CRB formula (B.3.3) can be rewritten in a compact form as a function of Γ and µ_c. (Note that the dimensions of Γ and µ_c are half the dimensions of C and µ appearing in (B.3.3).) In order to show how this can be done, we need some preparations. Let

C̄ = C11 = C22    (B.3.13)
C̃ = C12^T = −C12    (B.3.14)

Hence,

C = [C̄ −C̃; C̃ C̄]    (B.3.15)

and

Γ = 2(C̄ + iC̃)    (B.3.16)

To any complex-valued matrix C̄ + iC̃ we associate a real-valued matrix C as defined in (B.3.15), and vice versa.
It is a simple exercise to verify that if the complex-valued matrices satisfy

A_c = B_c C_c,  i.e.,  Ā + iÃ = (B̄ + iB̃)(C̄ + iC̃)    (B.3.17)

then the associated real-valued matrices satisfy

A = BC,  i.e.,  [Ā −Ã; Ã Ā] = [B̄ −B̃; B̃ B̄][C̄ −C̃; C̃ C̄]    (B.3.18)

In particular, it follows from (B.3.17) and (B.3.18) with A_c = I (and hence A = I) that the matrices (C̄ + iC̃)^{-1} and C^{-1} form a real-complex pair as defined above. We deduce from the results previously derived that the matrix in the first term of (B.3.3),

D = C^{-1}C'_i C^{-1}C'_j    (B.3.19)

is associated with the complex-valued matrix

D_c = (C̄ + iC̃)^{-1}(C̄ + iC̃)'_i (C̄ + iC̃)^{-1}(C̄ + iC̃)'_j = Γ^{-1}Γ'_i Γ^{-1}Γ'_j    (B.3.20)

(the factors of 2 in (B.3.16) cancel). Furthermore, we have

(1/2) tr(D) = tr(D̄) = tr(D_c)    (B.3.21)

where the first equality holds because the trace of the real pair in (B.3.15) equals twice the trace of the real part. The second equality follows from the fact that Γ is Hermitian, and hence

tr(D_c^*) = tr(Γ'_j Γ^{-1} Γ'_i Γ^{-1}) = tr(Γ^{-1}Γ'_i Γ^{-1}Γ'_j) = tr(D_c)

which in turn implies that tr(D̃) = 0 and therefore that tr(D_c) = tr(D̄). Combining (B.3.20) and (B.3.21) shows that the first term in (B.3.3) can be rewritten as:

tr(Γ^{-1}Γ'_i Γ^{-1}Γ'_j)    (B.3.22)

Next we consider the second term in (B.3.3). Let

x = [x1; x2]  and  z = [z1; z2]

be two arbitrary vectors partitioned similarly to µ, and let x_c = x1 + ix2 and z_c = z1 + iz2. A straightforward calculation shows that:

x^T A z = x1^T Ā z1 + x2^T Ā z2 + x2^T Ã z1 − x1^T Ã z2 = Re{x_c^* A_c z_c}    (B.3.23)

Hence,

µ'^T_i C^{-1} µ'_j = Re{(µ'_{c,i})^* (C̄ + iC̃)^{-1} µ'_{c,j}} = 2 Re{(µ'_{c,i})^* Γ^{-1} µ'_{c,j}}    (B.3.24)

Insertion of (B.3.22) and (B.3.24) into (B.3.3) yields the following CRB formula, which holds in the case of circularly Gaussian distributed data vectors y (or y_c):

[Pcr^{-1}]_ij = tr[Γ^{-1}Γ'_i Γ^{-1}Γ'_j] + 2 Re{(µ'_{c,i})^* Γ^{-1} µ'_{c,j}}    (B.3.25)

The importance of the Gaussian CRB formulas lies not only in the fact that Gaussian data are rather frequently encountered in applications, but also in a more subtle aspect explained in what follows.
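The equivalence of the real-valued formula (B.3.3) and the complex form (B.3.25) for circular data can be confirmed numerically; a small sketch of ours, for a one-parameter example Γ(θ) = θ·I, µ_c(θ) = θ·c with a fixed complex vector c (names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
Nb, theta = 4, 1.7
c = rng.standard_normal(Nb) + 1j * rng.standard_normal(Nb)

# Complex form (B.3.25): Gamma(theta) = theta*I, mu_c(theta) = theta*c
Gam, dGam, dmu_c = theta * np.eye(Nb), np.eye(Nb), c
Gi = np.linalg.inv(Gam)
J_complex = np.trace(Gi @ dGam @ Gi @ dGam).real \
            + 2 * (dmu_c.conj() @ Gi @ dmu_c).real

# Real form (B.3.3): by (B.3.15)-(B.3.16), C = [[Cb, -Ct], [Ct, Cb]]
# with Cb = Re(Gamma)/2 and Ct = Im(Gamma)/2
def real_pair(G):
    Cb, Ct = G.real / 2, G.imag / 2
    return np.block([[Cb, -Ct], [Ct, Cb]])

C, dC = real_pair(Gam), real_pair(dGam)
dmu = np.concatenate([dmu_c.real, dmu_c.imag])
Ci = np.linalg.inv(C)
J_real = 0.5 * np.trace(Ci @ dC @ Ci @ dC) + dmu @ Ci @ dmu

assert np.isclose(J_real, J_complex)
```

Both routes give the same Fisher information entry, while the complex form works with matrices of half the dimension.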
Briefly stated, the second reason for the importance of the CRB formulas derived in this section is that:

Under rather general conditions and (at least) in large samples, the Gaussian CRB is the largest of all CRB matrices corresponding to different congruous distributions of the data sample².    (B.3.26)

To motivate the previous assertion, consider the ML estimate of θ derived under the Gaussian data hypothesis, which we denote by θ̂_G. According to the discussion around equation (B.1.8), the large sample covariance matrix of θ̂_G equals Pcr^G (similarly to θ̂_G, we use an index G to denote the CRB matrix in the Gaussian hypothesis case). Now, under rather general conditions, the large sample properties of the Gaussian ML estimator are independent of the data distribution (see, e.g., [Söderström and Stoica 1989]). In other words, the large sample covariance matrix of θ̂_G is equal to Pcr^G for many other data distributions besides the Gaussian one. This observation, along with the general CRB inequality, implies that:

Pcr^G ≥ Pcr    (B.3.27)

where the right-hand side is the CRB matrix corresponding to the data distribution at hand.

The inequality (B.3.27) (or, equivalently, the assertion (B.3.26)) shows that a method whose covariance matrix is much larger than Pcr^G cannot be a good estimation method. As a matter of fact, the "asymptotic properties" of most existing parameter estimation methods do not depend on the data distribution. This means that Pcr^G is a lower bound for the covariance matrices of a large class of estimation methods, regardless of the data distribution.

²A meaningful comparison of the CRBs under two different data distributions requires that the hypothesized distributional models not contain conflicting assumptions. In particular, when one of the two distributions is the Gaussian, the mean and covariance matrix should be the same for both distributions.
On the other hand, the inequality (B.3.27) also shows that for non-Gaussian data it should be possible to beat the Gaussian CRB (for instance, by exploiting higher-order moments of the data, beyond the first- and second-order moments used in the Gaussian ML method). However, general estimation methods with covariance matrices uniformly smaller than Pcr^G are yet to be discovered. In summary, comparing against Pcr^G makes sense in most parameter estimation exercises.

In what follows, we briefly consider the application of the general Gaussian CRB formulas derived above to the three main parameter estimation problems treated in the text.

B.4 THE CRB FOR LINE SPECTRA

As explained in Chapter 4, the estimation of line spectra is basically a parameter estimation problem. The corresponding parameter vector is

θ = [α1 … αn, φ1 … φn, ω1 … ωn, σ²]^T    (B.4.1)

and the data vector is

y_c = [y(1) ⋯ y(N)]^T    (B.4.2)

or, in real-valued form,

y = [Re y(1) ⋯ Re y(N), Im y(1) ⋯ Im y(N)]^T    (B.4.3)

When the {φk} are assumed to be random variables uniformly distributed on [0, 2π] (whereas {αk} and {ωk} are deterministic constants), the distribution of y is not Gaussian, and hence neither of the CRB formulas of the previous section is usable. To overcome this difficulty, it is customary to consider the distribution of y conditioned on {φk} (i.e., for {φk} fixed). This distribution is circular Gaussian, under the assumption that the (white) noise is circularly Gaussian distributed, with the following mean and covariance matrix:

µ_c = E{y_c} = [1 ⋯ 1; e^{iω1} ⋯ e^{iωn}; ⋮; e^{i(N−1)ω1} ⋯ e^{i(N−1)ωn}] · [α1 e^{iφ1}; ⋮; αn e^{iφn}]    (B.4.4)

Γ = E{(y_c − µ_c)(y_c − µ_c)^*} = σ²I    (B.4.5)

The differentiation of (B.4.4) and (B.4.5) with respect to the elements of the parameter vector θ can be easily done (we leave the details of this differentiation operation as an exercise to the reader).
Hence, we can readily obtain all the ingredients required to evaluate the CRB matrix in equation (B.3.25). If the distribution of y (or y_c) is Gaussian but not circular, we need additional parameters, besides σ², to characterize the matrix E{(y_c − µ_c)(y_c − µ_c)^T}. Once these parameters are introduced, the use of formula (B.3.3) to obtain the CRB is straightforward.

In Section 4.3 we have given a simple formula for the block of the CRB matrix corresponding to the frequency estimates {ω̂k}. That formula holds asymptotically, as N increases. For finite values of N, it is a good approximation of the exact CRB whenever the minimum frequency separation is larger than 1/N [Stoica, Moses, Friedlander, and Söderström 1989]. In any case, the approximate (large sample) CRB formula given in Section 4.3 is computationally much simpler to implement than the exact CRB.

The computation and properties of the CRB for line spectral models are discussed in great detail in [Ghogho and Swami 1999]. In particular, a modified lower bound on the variance of any unbiased estimates of {αk} and {ωk} is derived there for the case in which {φk} are independent random variables uniformly distributed on [0, 2π]. That bound, which was obtained using the so-called posterior CRB introduced in [Van Trees 1968] (as indicated above, the standard CRB does not apply to such a case), has an expression quite similar to the large-sample CRB given in [Stoica, Moses, Friedlander, and Söderström 1989] (see Section 4.3 for the large-sample CRB for {ω̂k}). The paper [Ghogho and Swami 1999] also discusses the derivation of the CRB in the case of non-Gaussian noise distributions. The extension of the asymptotic CRB formula in Section 4.3 to the case of colored noise can be found in [Stoica, Jakobsson, and Li 1997].
B.5 THE CRB FOR RATIONAL SPECTRA

For rational (or ARMA) spectra, the Cramér–Rao lower bound on the variance of any consistently estimated spectrum is asymptotically (for N ≫ 1) given by (B.1.9). The CRB matrix for the parameter vector estimate, which appears in (B.1.9), can be evaluated as outlined in what follows. In the case of ARMA spectral models, the parameter vector consists of the white noise power σ² and the polynomial coefficients {ak, bk}. We arrange the ARMA coefficients in the following real-valued vector:

θ = [Re(a1) ⋯ Re(an) Re(b1) ⋯ Re(bm) Im(a1) ⋯ Im(an) Im(b1) ⋯ Im(bm)]^T

The data vector is defined as in equation (B.4.2) or (B.4.3) and has zero mean (µ = 0). The calculation of the covariance matrix of the data vector reduces to the calculation of the ARMA covariances:

r(k) = σ² E{ [B(z)/A(z) · w(t)] [B(z)/A(z) · w(t − k)]^* }

where the white noise sequence {w(t)} is normalized such that its variance is one. Methods for computation of {r(k)} (for given values of σ² and θ) were outlined in Exercises C1.12 and 3.2. The method in Exercise C1.12 should perform reasonably well as long as the zeroes of A(z) are not too close to the unit circle. If the zeroes of A(z) are close to the unit circle, it is advisable to use the method in Exercise 3.2 or in [Kinkel, Perl, Scharf, and Stubberud 1979; Demeure and Mullis 1989].

The calculation of the derivatives of {r(k)} with respect to σ² and the elements of θ, which appear in the CRB formulas (B.3.3) or (B.3.25), can also be reduced to ARMA (cross)covariance computation. To see this, let α and γ be the real parts of a_p and b_p, respectively. Then

∂r(k)/∂α = −σ² E{ [B(z)/A²(z) · w(t − p)] [B(z)/A(z) · w(t − k)]^* + [B(z)/A(z) · w(t)] [B(z)/A²(z) · w(t − k − p)]^* }

and

∂r(k)/∂γ = σ² E{ [1/A(z) · w(t − p)] [B(z)/A(z) · w(t − k)]^* + [B(z)/A(z) · w(t)] [1/A(z) · w(t − k − p)]^* }

The derivatives of r(k) with respect to the imaginary parts of a_p and b_p can be similarly obtained.
The differentiation of r(k) with respect to σ² is immediate. Hence, by making use of an algorithm for ARMA cross-covariance calculation (similar to the ones for autocovariance calculation in Exercises C1.12 and 3.2), we can readily obtain all the ingredients needed to evaluate the CRB matrix in equation (B.3.3) or (B.3.25).

Similarly to the case of line spectra, for relatively large values of N (e.g., on the order of hundreds) the use of the exact CRB formula for rational spectra may be computationally burdensome (owing to the need to multiply and invert matrices of large dimensions). In such large-sample cases, we may want to use an asymptotically valid approximation of the exact CRB, such as the one developed in [Söderström and Stoica 1989]. Below we present such an approximate (large sample) CRB formula for ARMA parameter estimates.

Let

Λ = E{ [Re e(t); Im e(t)] [Re e(t); Im e(t)]^T }    (B.5.1)

Typically the real and imaginary parts of the complex-valued white noise sequence {e(t)} are assumed to be mutually uncorrelated and to have the same variance σ²/2. In such a case, we have Λ = (σ²/2)I. However, this assumption is not necessary for the result discussed below to hold, and hence we do not impose it (in other words, Λ in (B.5.1) is only constrained to be a positive definite matrix). We should also remark that, for the sake of simplicity, we have assumed the ARMA signal under discussion to be scalar. Nevertheless, the extension of the discussion that follows to multivariate ARMA signals is immediate. Finally, note that for real-valued signals the imaginary parts in (B.5.1) (and in equation (B.5.2)) should be omitted.
The real-valued white noise vector in (B.5.1) satisfies the following equation:

ε(t) ≜ [Re e(t); Im e(t)] = H(z) v(t)    (B.5.2)

where

H(z) ≜ [Re(A(z)/B(z)) −Im(A(z)/B(z)); Im(A(z)/B(z)) Re(A(z)/B(z))],  v(t) ≜ [Re y(t); Im y(t)]

and z^{-1} is to be treated as the unit delay operator (not as a complex variable). As the coefficients of the polynomials A(z) and B(z) in H(z) above are the unknowns in our estimation problem, we can rewrite (B.5.2) in the following form to stress the dependence of ε(t) on θ:

ε(t, θ) = H(z, θ) v(t)    (B.5.3)

Because the polynomials of the ARMA model are monic by assumption, we have:

H(z, θ)|_{z^{-1}=0} = I  (for any θ)    (B.5.4)

This observation, along with the fact that ε(t) is white and the "whitening filter" H(z) is stable and causal (which follows from the fact that the complex-valued (equivalent) counterpart of (B.5.2), e(t) = [A(z)/B(z)] y(t), is stable and causal), implies that (B.5.3) is a standard prediction error model to which the CRB result of [Söderström and Stoica 1989] applies. Let

Δ(t) = ∂ε^T(t, θ)/∂θ    (B.5.5)

(ε(t, θ) depends on θ via H(z, θ) only; see (B.5.2)). Then an asymptotically valid expression for the CRB block corresponding to the parameters in θ is given by:

Pcr,θ = [E{Δ(t) Λ^{-1} Δ^T(t)}]^{-1}    (B.5.6)

The calculation of the derivative matrix in (B.5.5) is straightforward. The evaluation of the statistical expectation in (B.5.6) can be reduced to ARMA cross-covariance calculations. Since equation (B.5.6) does not require handling matrices of large dimensions (on the order of N), its implementation is much simpler than that of the exact CRB formula. For some recent results on the CRB for rational spectral analysis, see [Ninness 2003].
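The ARMA covariance computations that both CRB routes above rely on can be illustrated in the simplest case, a real AR(1) process y(t) + a·y(t − 1) = w(t), whose covariances have the closed form r(k) = σ²(−a)^{|k|}/(1 − a²). A sketch of ours checks this against a truncated impulse-response sum r(k) = σ² Σ_j h_j h_{j+k}:

```python
import numpy as np

a, sigma2, K, J = 0.6, 2.0, 5, 2000   # truncate the impulse response after J terms

# Impulse response of 1/A(z) with A(z) = 1 + a z^{-1}:  h_j = (-a)^j
h = (-a) ** np.arange(J)

for k in range(K):
    r_sum = sigma2 * np.dot(h[: J - k], h[k:])      # truncated sum_j h_j h_{j+k}
    r_closed = sigma2 * (-a) ** k / (1 - a**2)
    assert np.isclose(r_sum, r_closed, rtol=1e-8)
```

For poles well inside the unit circle the truncation error decays geometrically; as noted above, poles close to the unit circle call for the exact recursive methods instead of such truncated sums.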
B.6 THE CRB FOR SPATIAL SPECTRA

Consider the model (6.2.21) for the output sequence $\{y(t)\}_{t=1}^N$ of an array that receives the signals emitted by $n$ narrowband point sources:

$$y(t) = As(t) + e(t), \qquad A = [a(\theta_1), \ldots, a(\theta_n)] \tag{B.6.1}$$

The noise term, $e(t)$, in (B.6.1) is assumed to be circularly Gaussian distributed with mean zero and the following covariances:

$$E\{e(t)e^*(\tau)\} = \sigma^2 I \delta_{t,\tau} \tag{B.6.2}$$

Regarding the signal vector, $s(t)$, in equation (B.6.1), we can assume either:

Det: $\{s(t)\}$ is a deterministic, unknown sequence

or

Sto: $\{s(t)\}$ is a random sequence which is circularly Gaussian distributed with mean zero and covariances

$$E\{s(t)s^*(\tau)\} = P\delta_{t,\tau} \tag{B.6.3}$$

Hereafter, the acronyms Det and Sto are used to designate the case of deterministic or stochastic signals, respectively. Note that making one of these two assumptions on $\{s(t)\}$ is similar to assuming in the line spectral analysis problem that the initial phases $\{\varphi_k\}$ are deterministic or random (see Section B.4). As we will see shortly, both the CRB analysis and the resulting CRB formulas depend heavily on which of the two assumptions we make on $\{s(t)\}$. The reader may already wonder which assumption should then be used in a given application. This is not a simple question, and we will be better prepared to answer it after deriving the corresponding CRB formulas.

In Chapter 6 we used the symbol $\theta$ to denote the DOA vector. To conform with the notation used in this appendix (and by a slight abuse of notation), we will here let $\theta$ denote the entire parameter vector. As explained in Chapter 6, the use of array processing for spatial spectral analysis leads essentially to a parameter estimation problem. Under the Det assumption the parameter vector to be estimated is given by:

$$\theta = \left[ \theta_1, \ldots, \theta_n \; ; \; \bar{s}^T(1), \ldots, \bar{s}^T(N) \; ; \; \tilde{s}^T(1), \ldots, \tilde{s}^T(N) \; ; \; \sigma^2 \right]^T \tag{B.6.4}$$

whereas under the Sto assumption

$$\theta = \left[ \theta_1, \ldots, \theta_n \; ; \; P_{11}, \bar{P}_{12}, \tilde{P}_{12}, \ldots, \bar{P}_{1n}, \tilde{P}_{1n}, P_{22}, \bar{P}_{23}, \tilde{P}_{23}, \ldots, P_{nn} \; ; \; \sigma^2 \right]^T \tag{B.6.5}$$

Hereafter, $\bar{s}(t)$ and $\tilde{s}(t)$ denote the real and imaginary parts of $s(t)$, and $P_{ij}$ denotes the $(i,j)$th element of the matrix $P$. Furthermore, under both Det and Sto assumptions the observed array output sample,

$$y = \left[ y^T(1), \ldots, y^T(N) \right]^T \tag{B.6.6}$$

is circularly Gaussian distributed with the following mean $\mu$ and covariance $\Gamma$:

Under Det:

$$\mu = \begin{bmatrix} As(1) \\ \vdots \\ As(N) \end{bmatrix}, \qquad \Gamma = \begin{bmatrix} \sigma^2 I & & 0 \\ & \ddots & \\ 0 & & \sigma^2 I \end{bmatrix} \tag{B.6.7}$$

Under Sto:

$$\mu = 0, \qquad \Gamma = \begin{bmatrix} R & & 0 \\ & \ddots & \\ 0 & & R \end{bmatrix} \tag{B.6.8}$$

where $R$ is given by (see (6.4.3))

$$R = APA^* + \sigma^2 I \tag{B.6.9}$$

The differentiation of either (B.6.7) or (B.6.8) with respect to the elements of the parameter vector $\theta$ is straightforward. Using the so-obtained derivatives of $\mu$ and $\Gamma$ in the general CRB formula in (B.3.25) provides a simple means of computing $\text{CRB}_{\text{Det}}$ and $\text{CRB}_{\text{Sto}}$ for the entire parameter vector $\theta$ as defined in (B.6.4) or (B.6.5).

Computing the CRB as described above may be sufficient for many applications. However, sometimes we may need more than just that. For example, we may be interested in using the CRB for the design of array geometry or for getting insights into the various features of a specific spatial spectral analysis scenario. In such cases we may want to have a closed-form (or analytical) expression for the CRB. More precisely, as the DOAs are usually the parameters of major interest, we may often want a closed-form expression for CRB(DOA) (i.e., the block of the CRB matrix that corresponds to the DOA parameters). Below we consider the problem of obtaining such a closed-form CRB expression under both the Det and Sto assumptions made above.

First, consider the Det assumption. Let us write the corresponding $\mu$ vector in (B.6.7) as

$$\mu = Gs \tag{B.6.10}$$

where

$$G = \begin{bmatrix} A & & 0 \\ & \ddots & \\ 0 & & A \end{bmatrix}, \qquad s = \begin{bmatrix} s(1) \\ \vdots \\ s(N) \end{bmatrix} \tag{B.6.11}$$

Then, a straightforward calculation yields:

$$\frac{\partial\mu}{\partial\bar{s}^T} = G, \qquad \frac{\partial\mu}{\partial\tilde{s}^T} = iG \tag{B.6.12}$$

and

$$\frac{\partial\mu}{\partial\theta_k} = \begin{bmatrix} \dfrac{\partial A}{\partial\theta_k}s(1) \\ \vdots \\ \dfrac{\partial A}{\partial\theta_k}s(N) \end{bmatrix} = \begin{bmatrix} d_k s_k(1) \\ \vdots \\ d_k s_k(N) \end{bmatrix}, \qquad k = 1,\ldots,n \tag{B.6.13}$$

where $s_k(t)$ is the $k$th element of $s(t)$ and

$$d_k = \frac{\partial a(\theta)}{\partial\theta}\bigg|_{\theta=\theta_k} \tag{B.6.14}$$

Using the notation

$$\Delta = \begin{bmatrix} d_1 s_1(1) & \cdots & d_n s_n(1) \\ \vdots & & \vdots \\ d_1 s_1(N) & \cdots & d_n s_n(N) \end{bmatrix} \qquad (mN \times n) \tag{B.6.15}$$

we can then write:

$$\frac{d\mu}{d\theta^T} = \left[ \Delta, \; G, \; iG, \; 0 \right] \tag{B.6.16}$$

which gives the following expression for the second term in the general CRB formula in (B.3.25):

$$2\operatorname{Re}\left[ \frac{d\mu^*}{d\theta}\Gamma^{-1}\frac{d\mu}{d\theta^T} \right] = \begin{bmatrix} J & 0 \\ 0 & 0 \end{bmatrix} \tag{B.6.17}$$

where

$$J \triangleq \frac{2}{\sigma^2}\operatorname{Re}\left\{ \begin{bmatrix} \Delta^* \\ G^* \\ -iG^* \end{bmatrix} \left[ \Delta \;\; G \;\; iG \right] \right\} \tag{B.6.18}$$

Furthermore, as $\Gamma$ depends only on $\sigma^2$ and as

$$\frac{d\Gamma}{d\sigma^2} = \begin{bmatrix} I & & 0 \\ & \ddots & \\ 0 & & I \end{bmatrix}$$

we can easily verify that the matrix corresponding to the first term in the general CRB formula, (B.3.25), is given by

$$\left\{ \operatorname{tr}\left[ \Gamma^{-1}\Gamma'_i\Gamma^{-1}\Gamma'_j \right] \right\}_{i,j} = \begin{bmatrix} 0 & 0 \\ 0 & \dfrac{mN}{\sigma^4} \end{bmatrix}, \qquad i,j = 1,2,\ldots \tag{B.6.19}$$

Combining (B.6.17) and (B.6.19) yields the following CRB formula for the parameter vector $\theta$ in (B.6.4), under the Det assumption:

$$\text{CRB}_{\text{Det}} = \begin{bmatrix} J^{-1} & 0 \\ 0 & \dfrac{\sigma^4}{mN} \end{bmatrix} \tag{B.6.20}$$

Hence, to obtain the CRB for the DOA subvector of $\theta$ we need to extract the corresponding block of $J^{-1}$. One convenient way of doing this is by suitably block-diagonalizing the matrix $J$. To this end, let us introduce the matrix

$$B = (G^*G)^{-1}G^*\Delta \tag{B.6.21}$$

Note that the inverse in (B.6.21) exists because $A^*A$ is nonsingular by assumption. Also, let

$$F = \begin{bmatrix} I & 0 & 0 \\ -\bar{B} & I & 0 \\ -\tilde{B} & 0 & I \end{bmatrix} \tag{B.6.22}$$

where $\bar{B} = \operatorname{Re}\{B\}$ and $\tilde{B} = \operatorname{Im}\{B\}$. It can be verified that

$$\left[ \Delta \;\; G \;\; iG \right] F = \left[ (\Delta - GB) \;\; G \;\; iG \right] = \left[ \Pi_G^\perp\Delta \;\; G \;\; iG \right] \tag{B.6.23}$$

where

$$\Pi_G^\perp = I - G(G^*G)^{-1}G^*$$

is the orthogonal projector onto the null space of $G^*$ (see Result R17 in Appendix A); in particular, observe that $G^*\Pi_G^\perp = 0$.
It follows from (B.6.18) and (B.6.23) that

$$F^T J F = \frac{2}{\sigma^2}\operatorname{Re}\left\{ F^*\begin{bmatrix} \Delta^* \\ G^* \\ -iG^* \end{bmatrix}\left[ \Delta \; G \; iG \right]F \right\} = \frac{2}{\sigma^2}\operatorname{Re}\left\{ \begin{bmatrix} \Delta^*\Pi_G^\perp \\ G^* \\ -iG^* \end{bmatrix}\left[ \Pi_G^\perp\Delta \; G \; iG \right] \right\} = \frac{2}{\sigma^2}\operatorname{Re}\left\{ \begin{bmatrix} \Delta^*\Pi_G^\perp\Delta & 0 & 0 \\ 0 & G^*G & iG^*G \\ 0 & -iG^*G & G^*G \end{bmatrix} \right\} \tag{B.6.24}$$

and hence that the CRB matrix for the DOAs and the signal sequence is given by

$$J^{-1} = F\left(F^TJF\right)^{-1}F^T = \frac{\sigma^2}{2}\begin{bmatrix} I & 0 & 0 \\ -\bar{B} & I & 0 \\ -\tilde{B} & 0 & I \end{bmatrix}\begin{bmatrix} \left[\operatorname{Re}(\Delta^*\Pi_G^\perp\Delta)\right]^{-1} & 0 & 0 \\ 0 & \times & \times \\ 0 & \times & \times \end{bmatrix}\begin{bmatrix} I & -\bar{B}^T & -\tilde{B}^T \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix} = \begin{bmatrix} \dfrac{\sigma^2}{2}\left[\operatorname{Re}(\Delta^*\Pi_G^\perp\Delta)\right]^{-1} & \times & \times \\ \times & \times & \times \\ \times & \times & \times \end{bmatrix} \tag{B.6.25}$$

where we used the symbol $\times$ to denote a block of no interest in the derivation. From (B.6.4) and (B.6.25) we can immediately see that the CRB matrix for the DOAs is given by:

$$\text{CRB}_{\text{Det}}(\text{DOA}) = \frac{\sigma^2}{2}\left[ \operatorname{Re}\left( \Delta^*\Pi_G^\perp\Delta \right) \right]^{-1} \tag{B.6.26}$$

It is possible to rewrite (B.6.26) in a more convenient form. To do so, we note that

$$\Pi_G^\perp = \begin{bmatrix} I & & \\ & \ddots & \\ & & I \end{bmatrix} - \begin{bmatrix} \Pi_A & & \\ & \ddots & \\ & & \Pi_A \end{bmatrix} = \begin{bmatrix} \Pi_A^\perp & & \\ & \ddots & \\ & & \Pi_A^\perp \end{bmatrix} \tag{B.6.27}$$

and hence

$$\left[ \Delta^*\Pi_G^\perp\Delta \right]_{kp} = \sum_{t=1}^N d_k^* s_k^*(t)\Pi_A^\perp d_p s_p(t) = N\, d_k^*\Pi_A^\perp d_p \left[ \frac{1}{N}\sum_{t=1}^N s_p(t)s_k^*(t) \right] = N\left[ D^*\Pi_A^\perp D \right]_{kp}\left[ \hat{P}^T \right]_{kp} \tag{B.6.28}$$

where

$$D = \left[ d_1 \;\cdots\; d_n \right] \tag{B.6.29}$$

$$\hat{P} = \frac{1}{N}\sum_{t=1}^N s(t)s^*(t) \tag{B.6.30}$$

It follows from (B.6.28) that

$$\Delta^*\Pi_G^\perp\Delta = N\left( D^*\Pi_A^\perp D \right)\odot\hat{P}^T \tag{B.6.31}$$

where $\odot$ denotes the Hadamard (or elementwise) matrix product (see the definition in Result R19 in Appendix A). Inserting (B.6.31) in (B.6.26) yields the following analytical expression for the CRB matrix associated with the DOA vector under the Det assumption:

$$\text{CRB}_{\text{Det}}(\text{DOA}) = \frac{\sigma^2}{2N}\left\{ \operatorname{Re}\left[ \left( D^*\Pi_A^\perp D \right)\odot\hat{P}^T \right] \right\}^{-1} \tag{B.6.32}$$

We refer the reader to [Stoica and Nehorai 1989a] for more details about (B.6.32) and its possible uses in array processing. The presented derivation of (B.6.32) has been adapted from [Stoica and Larsson 2001].
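The closed-form expression (B.6.32) is easy to evaluate numerically. The following is a minimal sketch (the function name `crb_det_doa` is ours, not from the text), assuming a uniform linear array with half-wavelength element spacing, i.e., steering vector $a(\theta) = [1,\, e^{i\pi\sin\theta},\, \ldots,\, e^{i\pi(m-1)\sin\theta}]^T$:

```python
import numpy as np

def crb_det_doa(thetas, P_hat, sigma2, N, m):
    """CRB_Det(DOA), eq. (B.6.32):
    sigma^2/(2N) * { Re[ (D* Pi_A_perp D) (Hadamard) P_hat^T ] }^{-1},
    for a half-wavelength ULA with m sensors."""
    k = np.arange(m).reshape(-1, 1)                 # sensor indices 0..m-1
    A = np.exp(1j * np.pi * k * np.sin(thetas))     # steering matrix A
    D = 1j * np.pi * k * np.cos(thetas) * A         # columns d_k = da(theta)/dtheta
    Pi_perp = np.eye(m) - A @ np.linalg.pinv(A)     # orthogonal projector Pi_A_perp
    M = np.real((D.conj().T @ Pi_perp @ D) * P_hat.T)   # '*' = Hadamard product
    return sigma2 / (2 * N) * np.linalg.inv(M)

# Example: two sources at 0 and 0.3 rad, uncorrelated unit-power signals
C = crb_det_doa(np.array([0.0, 0.3]), np.eye(2), sigma2=0.1, N=100, m=8)
```

The returned matrix is the lower bound on the covariance of any unbiased DOA estimator under the Det assumption, for the given scenario.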
Note that (B.6.32) can be directly applied to the temporal line spectral model in Section B.4 (see equations (B.4.4) and (B.4.5) there) to obtain an analytical CRB formula for the sinusoidal frequencies.

The derivation of an analytical expression for the CRB matrix associated with the DOAs under the Sto assumption is more intricate, and we give only the final formula here (see [Stoica, Larsson, and Gershman 2001] and its references for a derivation):

$$\text{CRB}_{\text{Sto}}(\text{DOA}) = \frac{\sigma^2}{2N}\left\{ \operatorname{Re}\left[ \left( D^*\Pi_A^\perp D \right)\odot\left( PA^*R^{-1}AP \right)^T \right] \right\}^{-1} \tag{B.6.33}$$

At this point we should emphasize the fact that the two CRBs discussed above, $\text{CRB}_{\text{Det}}$ and $\text{CRB}_{\text{Sto}}$, correspond to two different models of the data vector $y$ (see (B.6.7) and (B.6.8)), and hence they are not directly comparable. On the other hand, the CRBs for the DOA parameters can be compared with one another. To make this comparison possible, let us introduce the assumption that the sample covariance matrix $\hat{P}$ in (B.6.30) converges to the $P$ matrix in (B.6.3) as $N \to \infty$. Let $\text{CRB}_{\text{Det}}(\text{DOA})$ denote the CRB matrix in (B.6.32) with $\hat{P}$ replaced by $P$. Then, the following interesting order relation holds true:

$$\text{CRB}_{\text{Sto}}(\text{DOA}) \geq \text{CRB}_{\text{Det}}(\text{DOA}) \tag{B.6.34}$$

To prove (B.6.34) we need to show that (see (B.6.32) and (B.6.33)):

$$\left\{ \operatorname{Re}\left[ \left( D^*\Pi_A^\perp D \right)\odot\left( PA^*R^{-1}AP \right)^T \right] \right\}^{-1} \geq \left\{ \operatorname{Re}\left[ \left( D^*\Pi_A^\perp D \right)\odot P^T \right] \right\}^{-1}$$

or, equivalently, that

$$\operatorname{Re}\left[ \left( D^*\Pi_A^\perp D \right)\odot\left( P - PA^*R^{-1}AP \right)^T \right] \geq 0 \tag{B.6.35}$$

The real part of a positive semidefinite matrix is positive semidefinite itself, i.e.,

$$H \geq 0 \implies \operatorname{Re}[H] \geq 0 \tag{B.6.36}$$

(indeed, for any real-valued vector $h$ we have $h^T\operatorname{Re}[H]h = \operatorname{Re}[h^*Hh] \geq 0$ for $H \geq 0$). Combining this observation with Result R19 in Appendix A shows that to prove (B.6.35) it is sufficient to verify that:

$$P \geq PA^*R^{-1}AP \tag{B.6.37}$$

or, equivalently,

$$I \geq P^{1/2}A^*R^{-1}AP^{1/2} \tag{B.6.38}$$

where $P^{1/2}$ denotes the Hermitian square root of $P$ (see Definition D12 in Appendix A).
Let $Z = AP^{1/2}$. Then (B.6.38) can be rewritten as

$$I - Z^*\left( ZZ^* + \sigma^2 I \right)^{-1}Z \geq 0 \tag{B.6.39}$$

To prove (B.6.39) we use the fact that the following matrix is evidently positive semidefinite:

$$\begin{bmatrix} I & Z^* \\ Z & ZZ^* + \sigma^2 I \end{bmatrix} = \begin{bmatrix} I \\ Z \end{bmatrix}\begin{bmatrix} I & Z^* \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & \sigma^2 I \end{bmatrix} \geq 0 \tag{B.6.40}$$

and therefore

$$\begin{bmatrix} I & -Z^*(ZZ^*+\sigma^2 I)^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} I & Z^* \\ Z & ZZ^*+\sigma^2 I \end{bmatrix}\begin{bmatrix} I & 0 \\ -(ZZ^*+\sigma^2 I)^{-1}Z & I \end{bmatrix} = \begin{bmatrix} I - Z^*(ZZ^*+\sigma^2 I)^{-1}Z & 0 \\ 0 & ZZ^*+\sigma^2 I \end{bmatrix} \geq 0 \tag{B.6.41}$$

The inequality in (B.6.39) is a simple consequence of (B.6.41), and hence the proof of (B.6.34) is concluded.

To understand (B.6.34) at an intuitive level we note that the ML method for DOA estimation under the Sto assumption, $\text{ML}_{\text{Sto}}$, can be shown to achieve $\text{CRB}_{\text{Sto}}(\text{DOA})$ (for sufficiently large values of $N$); see, e.g., [Stoica and Nehorai 1990] and [Ottersten, Viberg, Stoica, and Nehorai 1993]. This result should in fact be no surprise because the general ML method of parameter estimation is known to be asymptotically statistically efficient (i.e., it achieves the CRB as $N \to \infty$) under some regularity conditions which are satisfied in the Sto assumption case. Specifically, the regularity conditions require that the number of unknown parameters does not increase as $N$ increases, which is indeed true for the Sto model (see (B.6.5)). Let $\text{CML}_{\text{Sto}}(\text{DOA})$ denote the asymptotic covariance matrix of the $\text{ML}_{\text{Sto}}$ estimate of the DOA parameter vector. According to the above discussion, we have that

$$\text{CML}_{\text{Sto}}(\text{DOA}) = \text{CRB}_{\text{Sto}}(\text{DOA}) \tag{B.6.42}$$

At the same time, under the Det assumption $\text{ML}_{\text{Sto}}$ can be viewed as some method for DOA estimation, and hence its asymptotic covariance matrix must satisfy the CRB inequality (corresponding to the Det assumption):

$$\text{CML}_{\text{Sto}}(\text{DOA}) \geq \text{CRB}_{\text{Det}}(\text{DOA}) \tag{B.6.43}$$

(Note that the asymptotic covariance matrix of $\text{ML}_{\text{Sto}}$ can be shown to be the same under either the Sto or Det assumption.) The above equation along with (B.6.42) provide a heuristic motivation for the relationship between $\text{CRB}_{\text{Sto}}(\text{DOA})$ and $\text{CRB}_{\text{Det}}(\text{DOA})$ in (B.6.34).
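The order relation (B.6.34) can also be checked numerically. The sketch below (the half-wavelength ULA and the specific scenario parameters are our illustrative assumptions) evaluates both closed-form bounds, (B.6.32) with $\hat{P}$ replaced by $P$ and (B.6.33), and forms their difference, which should be positive semidefinite:

```python
import numpy as np

m, N, sigma2 = 8, 100, 0.5
thetas = np.array([0.0, 0.25])                      # two source DOAs (rad)
k = np.arange(m).reshape(-1, 1)
A = np.exp(1j * np.pi * k * np.sin(thetas))         # ULA steering matrix
D = 1j * np.pi * k * np.cos(thetas) * A             # columns d_k = da(theta)/dtheta
P = np.array([[1.0, 0.3], [0.3, 1.0]])              # source covariance P
R = A @ P @ A.conj().T + sigma2 * np.eye(m)         # array covariance (B.6.9)
Pi_perp = np.eye(m) - A @ np.linalg.pinv(A)         # projector onto null space of A*

H = D.conj().T @ Pi_perp @ D
crb_det = sigma2 / (2 * N) * np.linalg.inv(np.real(H * P.T))    # (B.6.32), P_hat -> P
Q = P @ A.conj().T @ np.linalg.inv(R) @ A @ P
crb_sto = sigma2 / (2 * N) * np.linalg.inv(np.real(H * Q.T))    # (B.6.33)
diff = crb_sto - crb_det                            # >= 0 by (B.6.34)
```

The smallest eigenvalue of `diff` is nonnegative (up to rounding), in agreement with the proof above.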
Note that the inequality in (B.6.34) is in general strict, but the relative difference between $\text{CRB}_{\text{Sto}}(\text{DOA})$ and $\text{CRB}_{\text{Det}}(\text{DOA})$ is usually fairly small (see, e.g., [Ottersten, Viberg, Stoica, and Nehorai 1993]).

A similar remark to the one in the previous paragraph can be made on the ML method for DOA estimation under the Det assumption, which we abbreviate as $\text{ML}_{\text{Det}}$. Note that $\text{ML}_{\text{Det}}$ can be readily seen to coincide with the NLS method discussed in Section 6.4.1. Under the Sto assumption, $\text{ML}_{\text{Det}}$ (i.e., the NLS method) can be viewed as just some method for DOA estimation. Hence, its (asymptotic) covariance matrix must be bounded below by the CRB corresponding to the Sto assumption:

$$\text{CML}_{\text{Det}}(\text{DOA}) \geq \text{CRB}_{\text{Sto}}(\text{DOA}) \tag{B.6.44}$$

Similarly to $\text{ML}_{\text{Sto}}$, the asymptotic covariance matrix of $\text{ML}_{\text{Det}}$ can also be shown to be the same under either the Sto or Det assumption. Hence, we can infer from (B.6.34) and (B.6.44) that $\text{ML}_{\text{Det}}$ may not attain $\text{CRB}_{\text{Det}}(\text{DOA})$, which is indeed the case (as is shown in, e.g., [Stoica and Nehorai 1989a]). To understand why this happens, note that the Det model contains $(2N+1)n+1$ real-valued parameters (see (B.6.4)) which must be estimated from $2mN$ data samples. Hence, for large $N$, the ratio between the number of unknown parameters and the available data samples approaches a constant (equal to $n/m$), which violates one of the aforementioned regularity conditions for the statistical efficiency of the ML method.

Remark: $\text{CRB}_{\text{Det}}(\text{DOA})$ depends on the signal sequence $\{s(t)\}_{t=1}^N$. However, neither $\text{CRB}_{\text{Sto}}(\text{DOA})$ nor the asymptotic covariance matrices of $\text{ML}_{\text{Sto}}$, $\text{ML}_{\text{Det}}$, or in fact many other DOA estimation methods depend on this sequence. We will use the symbol $C$ to denote the (asymptotic) covariance matrix of such a DOA estimation method for which $C$ is independent of the signal sequence.
From $\text{CRB}_{\text{Det}}(\text{DOA})$ we can obtain a matrix which is independent of the signal sequence, in the following manner:

$$\text{ACRB}_{\text{Det}}(\text{DOA}) = \tilde{E}\left\{ \text{CRB}_{\text{Det}}(\text{DOA}) \right\} \tag{B.6.45}$$

where $\tilde{E}$ is an averaging operator, and $\text{ACRB}_{\text{Det}}$ stands for Averaged $\text{CRB}_{\text{Det}}$. For example, $\tilde{E}\{\cdot\}$ in (B.6.45) can be a simple arithmetic averaging of $\text{CRB}_{\text{Det}}(\text{DOA})$ over a set of signal sequences. Using the fact that $\tilde{E}\{C\} = C$ (since $C$ does not depend on the sequence $\{s(t)\}_{t=1}^N$), we can apply the operator $\tilde{E}\{\cdot\}$ to both sides of the following CRB inequality:

$$C \geq \text{CRB}_{\text{Det}}(\text{DOA}) \tag{B.6.46}$$

to obtain

$$C \geq \text{ACRB}_{\text{Det}}(\text{DOA}) \tag{B.6.47}$$

(Note that the inequality in (B.6.46), and hence that in (B.6.47), holds at least for sufficiently large values of $N$.) It follows from (B.6.47) that $\text{ACRB}_{\text{Det}}(\text{DOA})$ can also be used as a lower bound on the DOA estimation error covariance. Furthermore, it can be shown that $\text{ACRB}_{\text{Det}}(\text{DOA})$ is tighter than $\text{CRB}_{\text{Det}}(\text{DOA})$:

$$\text{ACRB}_{\text{Det}}(\text{DOA}) \geq \text{CRB}_{\text{Det}}(\text{DOA}) \tag{B.6.48}$$

To prove (B.6.48), we introduce the matrix:

$$X = \frac{2N}{\sigma^2}\operatorname{Re}\left[ \left( D^*\Pi_A^\perp D \right)\odot\hat{P}^T \right] \tag{B.6.49}$$

Using this notation along with the fact that $\tilde{E}\{\hat{P}\} = P$ (which holds under mild conditions), we can rewrite (B.6.48) as follows:

$$\tilde{E}\left\{ X^{-1} \right\} \geq \left[ \tilde{E}\{X\} \right]^{-1} \tag{B.6.50}$$

To prove (B.6.50), we note that the matrix

$$\begin{bmatrix} \tilde{E}\left\{X^{-1}\right\} & I \\ I & \tilde{E}\{X\} \end{bmatrix} = \tilde{E}\left\{ \begin{bmatrix} X^{-1/2} \\ X^{1/2} \end{bmatrix}\begin{bmatrix} X^{-1/2} & X^{1/2} \end{bmatrix} \right\}$$

(where $X^{1/2}$ and $X^{-1/2}$ denote the Hermitian square roots of $X$ and $X^{-1}$, respectively) is clearly positive semidefinite, and therefore so must be the following matrix:

$$\begin{bmatrix} I & -\left[\tilde{E}\{X\}\right]^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} \tilde{E}\left\{X^{-1}\right\} & I \\ I & \tilde{E}\{X\} \end{bmatrix}\begin{bmatrix} I & 0 \\ -\left[\tilde{E}\{X\}\right]^{-1} & I \end{bmatrix} = \begin{bmatrix} \tilde{E}\left\{X^{-1}\right\} - \left[\tilde{E}\{X\}\right]^{-1} & 0 \\ 0 & \tilde{E}\{X\} \end{bmatrix} \geq 0 \tag{B.6.51}$$

The matrix inequality in (B.6.50), which is somewhat similar to the scalar Jensen inequality (see, e.g., Complement 4.9.5), readily follows from (B.6.51).

The inequality (B.6.48) looks appealing. On the other hand, $\text{ACRB}_{\text{Det}}(\text{DOA})$ should be less tight than $\text{CRB}_{\text{Sto}}(\text{DOA})$, in view of the results in (B.6.42) and (B.6.47).
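The matrix Jensen-type inequality (B.6.50) holds in particular when $\tilde{E}$ is an arithmetic average over a finite set of positive definite matrices, which makes it easy to illustrate numerically (a sketch; the random-matrix setup is our own illustration, not from the text):

```python
import numpy as np

# Illustration of (B.6.50): for an arithmetic averaging operator,
# avg(X^-1) - [avg(X)]^-1 is positive semidefinite over any set of
# positive definite matrices X.
rng = np.random.default_rng(1)
k, trials = 3, 2000
inv_sum = np.zeros((k, k))
x_sum = np.zeros((k, k))
for _ in range(trials):
    G = rng.standard_normal((k, k + 2))
    X = G @ G.T                           # a random positive definite matrix
    inv_sum += np.linalg.inv(X)
    x_sum += X
E_inv = inv_sum / trials                  # "E-tilde"{X^-1}
E_x_inv = np.linalg.inv(x_sum / trials)   # ["E-tilde"{X}]^-1
```

Since the sampled matrices are not all equal, the difference `E_inv - E_x_inv` comes out strictly positive definite, consistent with the proof via (B.6.51).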
Also, $\text{CRB}_{\text{Sto}}(\text{DOA})$ has a simpler analytical form. Hence, we may have little reason to use $\text{ACRB}_{\text{Det}}(\text{DOA})$ in lieu of $\text{CRB}_{\text{Sto}}(\text{DOA})$. Despite these drawbacks of $\text{ACRB}_{\text{Det}}(\text{DOA})$, we have included this discussion for the potential usefulness of the inequality in (B.6.50) and of the basic idea behind the introduction of $\text{ACRB}_{\text{Det}}(\text{DOA})$. ■

In the remainder of this section we rely on the previous results to compare the Det and Sto model assumptions, to discuss the consequences of making these assumptions, and to draw some conclusions.

First, consider the array output model in equation (B.6.1). To derive the ML estimates of the unknown parameters in (B.6.1) we must make some assumptions on the signal sequence $\{s(t)\}$. The $\text{ML}_{\text{Sto}}$ method for DOA estimation (derived under the Sto assumption) turns out to be more accurate than the $\text{ML}_{\text{Det}}$ method (obtained under the Det assumption), under quite general conditions on $\{s(t)\}$. However, the $\text{ML}_{\text{Sto}}$ method is somewhat more complicated computationally than the $\text{ML}_{\text{Det}}$ method; see, e.g., [Ottersten, Viberg, Stoica, and Nehorai 1993].

The previous discussion implies that the question as to which assumption should be used (because "it is more likely to be true") is in fact irrelevant in this case. Indeed, we should see the two assumptions only as instruments for deriving the two corresponding ML methods. Once we have completed the derivations, the assumption issue is no longer important and we can simply choose the ML method that we prefer, regardless of the nature of $\{s(t)\}$. The choice should be based on the facts that (a) $\text{ML}_{\text{Det}}$ is computationally simpler than $\text{ML}_{\text{Sto}}$, and (b) $\text{ML}_{\text{Sto}}$ is statistically more accurate than $\text{ML}_{\text{Det}}$ under quite general conditions on $\{s(t)\}$.

Second, regarding the two CRB matrices that correspond to the Det and Sto assumptions, respectively, we can argue as follows.
Under the Sto assumption, $\text{CRB}_{\text{Sto}}(\text{DOA})$ is the Cramér–Rao bound and hence the lower bound to use. Under the Det assumption, while $\text{CRB}_{\text{Sto}}(\text{DOA})$ is no longer the true CRB, it is still a tight lower bound on the asymptotic covariance matrix of any known DOA estimation method. $\text{CRB}_{\text{Det}}(\text{DOA})$ is also a lower bound, but it is not tight. Hence $\text{CRB}_{\text{Sto}}(\text{DOA})$ should be the normal choice for a lower bound, regardless of the assumption (Det or Sto) that the signal sequence is likely to satisfy. Note that, under the Det assumption, $\text{ML}_{\text{Sto}}$ can be seen as some DOA estimation method. Therefore, in principle, a better DOA estimation method than $\text{ML}_{\text{Sto}}$ may exist (where by "better" we mean that the covariance matrix of such an estimation method would be smaller than $\text{CRB}_{\text{Sto}}(\text{DOA})$). However, no such DOA estimation method appears to be available, in spite of a significant literature on the so-called problem of "estimation in the presence of many nuisance parameters," of which the DOA estimation problem under the Det assumption is a special case.

APPENDIX C

Model Order Selection Tools

C.1 INTRODUCTION

The parametric methods of spectral analysis (discussed in Chapters 3, 4, and 6) require not only the estimation of a vector of real-valued parameters but also the selection of one or several integer-valued parameters that are equally important for the specification of the data model. Specifically, these integer-valued parameters of the model are the ARMA model orders in Chapter 3, the number of sinusoidal components in Chapter 4, and the number of source signals impinging on the array in Chapter 6. In each of these cases, the integer-valued parameters determine the dimension of the real-valued parameter vector of the data model.
In what follows we will use the following symbols:

y = the vector of available data (of size N)
θ = the (real-valued) parameter vector
n = the dimension of θ

For short, we will refer to $n$ as the model order, even though sometimes $n$ is not really an order (see, e.g., the above examples). We assume that both $y$ and $\theta$ are real-valued:

$$y \in \mathbb{R}^N, \qquad \theta \in \mathbb{R}^n$$

Whenever we need to emphasize that the number of elements in $\theta$ is $n$, we will use the notation $\theta_n$. A method that estimates $n$ from the data vector $y$ will be called an order selection rule.

Note that the need for estimating a model order is typical of the parametric approaches to spectral analysis. The nonparametric methods of spectral analysis do not have such a requirement. The discussion in the text on the parametric spectral methods has focused on estimating the model parameter vector $\theta$ for a specific order $n$. In this general appendix¹ we explain how to estimate $n$ as well.

The literature on order selection is as considerable as that on (real-valued) parameter estimation (see, e.g., [Choi 1992; Söderström and Stoica 1989; McQuarrie and Tsai 1998; Linhart and Zucchini 1986; Burnham and Anderson 2002; Sakamoto, Ishiguro, and Kitagawa 1986; Stoica, Eykhoff, Jannsen, and Söderström 1986] and the many references therein). However, many order selection rules are tied to specific parameter estimation methods and hence their applicability is rather limited. Here we will concentrate on order selection rules that are associated with the maximum likelihood method (MLM) of parameter estimation. As explained briefly in Appendix B (see also below), the MLM is likely the most commonly used parameter estimation method.

¹Based on "Model order selection: A review of the AIC, GIC, and BIC rules," by P. Stoica and Y. Selén, IEEE Signal Processing Magazine, 21(2), March 2004.
Consequently, the order estimation rules that can be used with the MLM are of quite general interest. In the next section we review briefly the ML method of parameter estimation and some of its main properties.

C.2 MAXIMUM LIKELIHOOD PARAMETER ESTIMATION

Let

p(y, θ) = the probability density function (pdf) of the data vector y, which depends on the parameter vector θ; also called the likelihood function.

The ML estimate of $\theta$, which we denote by $\hat{\theta}$, is given by the maximizer of $p(y,\theta)$ (see, e.g., [Anderson 1971; Brockwell and Davis 1991; Hannan and Deistler 1988; Papoulis 1977; Porat 1994; Priestley 1981; Scharf 1991; Therrien 1992; Söderström and Stoica 1989] and also Appendix B). Alternatively, as $\ln(\cdot)$ is a monotonically increasing function,

$$\hat{\theta} = \arg\max_\theta \ln p(y,\theta) \tag{C.2.1}$$

Under the Gaussian data assumption, the MLM typically reduces to the nonlinear least-squares (NLS) method of parameter estimation (particular forms of which are discussed briefly in Chapter 3 and in more detail in Chapters 4 and 6). To illustrate this fact, let us assume that the observation vector $y$ can be written as:

$$y = \mu(\gamma) + e \tag{C.2.2}$$

where $e$ is a (real-valued) Gaussian white noise vector with mean zero and covariance matrix given by $E\{ee^T\} = \sigma^2 I$, $\gamma$ is an unknown parameter vector, and $\mu(\gamma)$ is a deterministic function of $\gamma$. It follows readily from (C.2.2) that

$$p(y,\theta) = \frac{1}{(2\pi)^{N/2}(\sigma^2)^{N/2}}\, e^{-\frac{\|y-\mu(\gamma)\|^2}{2\sigma^2}} \tag{C.2.3}$$

where

$$\theta = \begin{bmatrix} \gamma \\ \sigma^2 \end{bmatrix} \tag{C.2.4}$$

Remark: Note that in this appendix we use the symbol $\theta$ for the whole parameter vector, unlike in some previous discussions where we used $\theta$ to denote the signal parameter vector (which is denoted by $\gamma$ here). ■
We deduce from (C.2.3) that

$$-2\ln p(y,\theta) = N\ln(2\pi) + N\ln\sigma^2 + \frac{\|y-\mu(\gamma)\|^2}{\sigma^2} \tag{C.2.5}$$

A simple calculation based on (C.2.5) shows that the ML estimates of $\gamma$ and $\sigma^2$ are given by:

$$\hat{\gamma} = \arg\min_\gamma \|y-\mu(\gamma)\|^2 \tag{C.2.6}$$

$$\hat{\sigma}^2 = \frac{1}{N}\|y-\mu(\hat{\gamma})\|^2 \tag{C.2.7}$$

The corresponding value of the likelihood function is given by

$$-2\ln p(y,\hat{\theta}) = \text{constant} + N\ln\hat{\sigma}^2 \tag{C.2.8}$$

As can be seen from (C.2.6), in the present case the MLM indeed reduces to the NLS. In particular, note that the NLS method for sinusoidal parameter estimation discussed in Chapter 4 is precisely of the form of (C.2.6). If we let $N_s$ denote the number of observed complex-valued samples of the noisy sinusoidal signal, and $n_c$ denote the number of sinusoidal components present in the signal, then:

$$N = 2N_s \tag{C.2.9}$$

$$n = 3n_c + 1 \tag{C.2.10}$$

We will use the sinusoidal signal model of Chapter 4 as a vehicle for illustrating how the various general order selection rules presented in what follows should be used in a specific situation. These rules can also be used with the parametric spectral analysis methods of Chapters 3 and 6. The task of deriving explicit forms of these order selection rules for the aforementioned methods is left as an interesting exercise to the reader (see, e.g., [McQuarrie and Tsai 1998; Brockwell and Davis 1991; Porat 1994]).

Next, we note that under regularity conditions the pdf of the ML estimate $\hat{\theta}$ converges, as $N \to \infty$, to a Gaussian pdf with mean $\theta$ and covariance matrix equal to the Cramér–Rao Bound (CRB) matrix (see Section B.2 for a discussion about the CRB).
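The NLS estimates (C.2.6)-(C.2.7) can be illustrated for a single real sinusoid in white noise: for each candidate frequency the amplitude and phase enter linearly, so we solve a linear least-squares problem on a frequency grid and pick the minimizer of $\|y-\mu(\gamma)\|^2$. The following is a minimal sketch (the scenario parameters are our own illustration):

```python
import numpy as np

# One real sinusoid in white Gaussian noise; NLS = ML under (C.2.2)-(C.2.3).
rng = np.random.default_rng(2)
N = 200
t = np.arange(N)
f0 = 0.12                                   # true frequency (cycles/sample)
y = 2.0 * np.cos(2 * np.pi * f0 * t + 0.7) + 0.3 * rng.standard_normal(N)

freqs = np.linspace(0.01, 0.49, 2000)       # candidate frequency grid
best = None
for f in freqs:
    # amplitude/phase enter linearly via cos/sin regressors
    M = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    sse = np.sum((y - M @ coef) ** 2)       # ||y - mu(gamma)||^2 for this f
    if best is None or sse < best[1]:
        best = (f, sse)
f_hat, sse = best
sigma2_hat = sse / N                        # ML noise-variance estimate (C.2.7)
```

`f_hat` recovers the true frequency up to the grid resolution, and `sigma2_hat` is close to the true noise variance $0.3^2 = 0.09$.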
Consequently, asymptotically in $N$, the pdf of $\hat{\theta}$ is given by:

$$p(\hat{\theta}) = \frac{1}{(2\pi)^{n/2}|J^{-1}|^{1/2}}\, e^{-\frac{1}{2}(\hat{\theta}-\theta)^T J(\hat{\theta}-\theta)} \tag{C.2.11}$$

where (see (B.2.10))

$$J = -E\left\{ \frac{\partial^2\ln p(y,\theta)}{\partial\theta\,\partial\theta^T} \right\} \tag{C.2.12}$$

Remark: To simplify the notation, we use the symbol $\theta$ for both the true parameter vector and the parameter vector viewed as an unknown variable (as we also did in Appendix B). The exact meaning of $\theta$ should be clear from the context. ■

The "regularity conditions" referred to above require that $n$ is not a function of $N$, and hence that the ratio between the number of unknown parameters and the number of observations tends to zero as $N \to \infty$. This is true for the parametric spectral analysis problems discussed in Chapters 3 and 4. However, the previous condition does not hold for the parametric spectral analysis problem addressed in Chapter 6. Indeed, in the latter case the number of parameters to be estimated from the data is proportional to $N$, owing to the assumption that the signal sequence is completely unknown. To overcome this difficulty we can assume that the signal vector is temporally white and Gaussian distributed, which leads to a ML problem that satisfies the previously stated regularity condition (we refer the interested reader to [Ottersten, Viberg, Stoica, and Nehorai 1993; Stoica and Nehorai 1990; Van Trees 2002] for details on this ML approach to the spatial spectral analysis problem of Chapter 6).

To close this section, we note that under mild conditions:

$$\left[ -\frac{1}{N}\frac{\partial^2\ln p(y,\theta)}{\partial\theta\,\partial\theta^T} - \frac{1}{N}J \right] \to 0 \quad \text{as } N \to \infty \tag{C.2.13}$$

To motivate (C.2.13) for the fairly general data model in (C.2.2) we can argue as follows. Let us rewrite the negative log-likelihood function associated with (C.2.2) as (see (C.2.5)):

$$-\ln p(y,\theta) = \text{constant} + \frac{N}{2}\ln(\sigma^2) + \frac{1}{2\sigma^2}\sum_{t=1}^N\left[ y_t - \mu_t(\gamma) \right]^2 \tag{C.2.14}$$

where the subindex $t$ denotes the $t$-th component.
From (C.2.14) we obtain by a simple calculation:

$$-\frac{\partial\ln p(y,\theta)}{\partial\theta} = \begin{bmatrix} -\dfrac{1}{\sigma^2}\displaystyle\sum_{t=1}^N\left[y_t-\mu_t(\gamma)\right]\mu'_t(\gamma) \\[3mm] \dfrac{N}{2\sigma^2} - \dfrac{1}{2\sigma^4}\displaystyle\sum_{t=1}^N\left[y_t-\mu_t(\gamma)\right]^2 \end{bmatrix} \tag{C.2.15}$$

where

$$\mu'_t(\gamma) = \frac{\partial\mu_t(\gamma)}{\partial\gamma} \tag{C.2.16}$$

Differentiating (C.2.15) once again gives:

$$-\frac{\partial^2\ln p(y,\theta)}{\partial\theta\,\partial\theta^T} = \begin{bmatrix} -\dfrac{1}{\sigma^2}\displaystyle\sum_{t=1}^N e_t\mu''_t(\gamma) + \dfrac{1}{\sigma^2}\displaystyle\sum_{t=1}^N \mu'_t(\gamma)\mu'^T_t(\gamma) & \dfrac{1}{\sigma^4}\displaystyle\sum_{t=1}^N e_t\mu'_t(\gamma) \\[3mm] \dfrac{1}{\sigma^4}\displaystyle\sum_{t=1}^N e_t\mu'^T_t(\gamma) & -\dfrac{N}{2\sigma^4} + \dfrac{1}{\sigma^6}\displaystyle\sum_{t=1}^N e_t^2 \end{bmatrix} \tag{C.2.17}$$

where $e_t = y_t - \mu_t(\gamma)$ and

$$\mu''_t(\gamma) = \frac{\partial^2\mu_t(\gamma)}{\partial\gamma\,\partial\gamma^T} \tag{C.2.18}$$

Taking the expectation of (C.2.17) and dividing by $N$, we get:

$$\frac{1}{N}J = \begin{bmatrix} \dfrac{1}{\sigma^2}\left( \dfrac{1}{N}\displaystyle\sum_{t=1}^N \mu'_t(\gamma)\mu'^T_t(\gamma) \right) & 0 \\ 0 & \dfrac{1}{2\sigma^4} \end{bmatrix} \tag{C.2.19}$$

We assume that $\mu(\gamma)$ is such that the above matrix has a finite limit as $N \to \infty$. Under this assumption, and the previously made assumption on $e$, we can also show from (C.2.17) that $-\frac{1}{N}\frac{\partial^2\ln p(y,\theta)}{\partial\theta\,\partial\theta^T}$ converges (as $N \to \infty$) to the right side of (C.2.19), which concludes the motivation of (C.2.13). Letting

$$\hat{J} = -\frac{\partial^2\ln p(y,\theta)}{\partial\theta\,\partial\theta^T}\bigg|_{\theta=\hat{\theta}} \tag{C.2.20}$$

we deduce from (C.2.13), (C.2.19), and the consistency of $\hat{\theta}$ that, for sufficiently large values of $N$,

$$\frac{1}{N}\hat{J} \simeq \frac{1}{N}J = O(1) \tag{C.2.21}$$

Hereafter, $\simeq$ denotes an asymptotic (approximate) equality, in which the higher-order terms have been neglected, and $O(1)$ denotes a term that tends to a constant as $N \to \infty$.

Interestingly enough, the assumption that the right side of (C.2.19) has a finite limit, as $N \to \infty$, holds for many problems, but not for the sinusoidal parameter estimation problem of Chapter 4.
In the latter case, (C.2.21) needs to be modified as follows (see, e.g., Appendix B):

$$K_N\hat{J}K_N \simeq K_N J K_N = O(1) \tag{C.2.22}$$

where

$$K_N = \begin{bmatrix} \dfrac{1}{N_s^{3/2}}I_{n_c} & 0 \\ 0 & \dfrac{1}{N_s^{1/2}}I_{2n_c+1} \end{bmatrix} \tag{C.2.23}$$

and where $I_k$ denotes the $k \times k$ identity matrix; to write (C.2.23), we assumed that the upper-left $n_c \times n_c$ block of $J$ corresponds to the sinusoidal frequencies, but this fact is not really important for the analysis in this appendix, as we will see below.

C.3 USEFUL MATHEMATICAL PRELIMINARIES AND OUTLOOK

In this section we discuss a number of mathematical tools that will be used in the next sections to derive several important order selection rules. We will keep the discussion at an informal level to make the material as accessible as possible. In Section C.3.1 we will formulate model order selection as a hypothesis testing problem, with the main goal of showing that the maximum a posteriori (MAP) approach leads to the optimal order selection rule (in a sense specified there). In Section C.3.2 we discuss the Kullback-Leibler information criterion, which lies at the basis of another approach that can be used to derive model order selection rules.

C.3.1 Maximum A Posteriori (MAP) Selection Rule

Let $H_n$ denote the hypothesis that the model order is $n$, and let $\bar{n}$ denote a known upper bound on $n$:

$$n \in [1, \bar{n}] \tag{C.3.1}$$

We assume that the hypotheses $\{H_n\}_{n=1}^{\bar{n}}$ are mutually exclusive (i.e., only one of them can hold true at a time). As an example, for a real-valued AR signal with coefficients $\{a_k\}$ we can define $H_n$ as follows:

$$H_n : a_n \neq 0 \ \text{and} \ a_{n+1} = \cdots = a_{\bar{n}} = 0 \tag{C.3.2}$$

For a sinusoidal signal we can proceed similarly, after observing that for such a signal the number of components $n_c$ is related to $n$ as in (C.2.10), viz.

$$n = 3n_c + 1 \tag{C.3.3}$$

Hence, for a sinusoidal signal with amplitudes $\{\alpha_k\}$ we can consider the following hypotheses:

$$H_{n_c} : \alpha_k \neq 0 \ \text{for} \ k = 1,\ldots,n_c, \ \text{and} \ \alpha_k = 0 \ \text{for} \ k = n_c+1,\ldots,\bar{n}_c \tag{C.3.4}$$

for $n_c \in [1, \bar{n}_c]$ (with the corresponding "model order" $n$ being given by (C.3.3)).

Remark: The hypotheses $\{H_n\}$ can be either nested or non-nested. We say that $H_1$ and $H_2$ are nested whenever the model corresponding to $H_1$ can be obtained as a special case of that associated with $H_2$. To give an example, the following hypotheses

$H_1$: the signal is a first-order AR process
$H_2$: the signal is a second-order AR process

are nested, whereas the above $H_1$ and

$H_3$: the signal consists of one sinusoid in noise

are non-nested. ■

Let

$$p_n(y|H_n) = \text{the pdf of } y \text{ under } H_n \tag{C.3.5}$$

Whenever we want to emphasize the possible dependence of the pdf in (C.3.5) on the parameter vector of the model corresponding to $H_n$, we write:

$$p_n(y,\theta_n) \triangleq p_n(y|H_n) \tag{C.3.6}$$

Assuming that (C.3.5) is available, along with the a priori probability of $H_n$, $p_n(H_n)$, we can write the conditional probability of $H_n$, given $y$, as:

$$p_n(H_n|y) = \frac{p_n(y|H_n)p_n(H_n)}{p(y)} \tag{C.3.7}$$

The maximum a posteriori probability (MAP) rule selects the order $n$ (or the hypothesis $H_n$) that maximizes (C.3.7). As the denominator in (C.3.7) does not depend on $n$, the MAP rule is given by:

$$\max_{n\in[1,\bar{n}]} p_n(y|H_n)p_n(H_n) \tag{C.3.8}$$

Most typically, the hypotheses $\{H_n\}$ are a priori equiprobable, i.e.,

$$p_n(H_n) = \frac{1}{\bar{n}}, \qquad n = 1,\ldots,\bar{n} \tag{C.3.9}$$

in which case the MAP rule reduces to:

$$\max_{n\in[1,\bar{n}]} p_n(y|H_n) \tag{C.3.10}$$

Next, we define the average (or total) probability of correct detection as

$$P_{cd} = \Pr\left\{ \left[ (\text{decide } H_1)\cap(H_1 \text{ true}) \right]\cup\cdots\cup\left[ (\text{decide } H_{\bar{n}})\cap(H_{\bar{n}} \text{ true}) \right] \right\} \tag{C.3.11}$$

The attribute "average" that has been attached to $P_{cd}$ is motivated by the fact that (C.3.11) gives the probability of correct detection "averaged" over all possible hypotheses (as opposed, for example, to only considering the probability of correctly detecting that the model order is 2 (let us say), which is $\Pr\{\text{decide } H_2|H_2\}$).
Remark: Regarding the terminology, note that the determination of a real-valued parameter from the available data is called "estimation," whereas it is usually called "detection" for an integer-valued parameter, such as a model order. ■

In the following we prove that the MAP rule is optimal in the sense of maximizing $P_{cd}$. To do so, consider a generic rule for selecting $n$ or, equivalently, for testing the hypotheses $\{H_n\}$ against each other. Such a rule will implicitly or explicitly partition the observation space, $\mathbb{R}^N$, into $\bar{n}$ sets $\{S_n\}_{n=1}^{\bar{n}}$, which are such that:

$$\text{We decide } H_n \text{ if and only if } y \in S_n \tag{C.3.12}$$

Making use of (C.3.12) along with the fact that the hypotheses $\{H_n\}$ are mutually exclusive, we can write $P_{cd}$ in (C.3.11) as:

$$P_{cd} = \sum_{n=1}^{\bar{n}}\Pr\{(\text{decide } H_n)\cap(H_n \text{ true})\} = \sum_{n=1}^{\bar{n}}\Pr\{(\text{decide } H_n)|H_n\}\Pr\{H_n\} = \sum_{n=1}^{\bar{n}}\int_{S_n} p_n(y|H_n)p_n(H_n)\,dy = \int_{\mathbb{R}^N}\left[ \sum_{n=1}^{\bar{n}} I_n(y)p_n(y|H_n)p_n(H_n) \right] dy \tag{C.3.13}$$

where $I_n(y)$ is the so-called indicator function given by:

$$I_n(y) = \begin{cases} 1, & \text{if } y \in S_n \\ 0, & \text{otherwise} \end{cases} \tag{C.3.14}$$

Next, observe that for any given data vector, $y$, one and only one indicator function can be equal to one (as the sets $S_n$ do not overlap and their union is $\mathbb{R}^N$). This observation along with the expression (C.3.13) for $P_{cd}$ imply that the MAP rule in (C.3.8) maximizes $P_{cd}$, as stated. Note that the sets $\{S_n\}$ corresponding to the MAP rule are implicitly defined via (C.3.8); however, $\{S_n\}$ are of no real interest in the proof, as both they and the indicator functions are introduced only to simplify the above proof. For more details on the topic of this subsection, we refer the reader to [Scharf 1991; Van Trees 2002].

C.3.2 Kullback-Leibler Information

Let $p_0(y)$ denote the true pdf of the observed data vector $y$, and let $\hat{p}(y)$ denote the pdf of a generic model of the data.
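The optimality of the MAP rule can be illustrated with a small Monte-Carlo experiment (a sketch with an invented two-hypothesis scalar example, not from the text): for two equiprobable Gaussian hypotheses with equal variances, (C.3.10) reduces to comparing the two likelihoods, i.e., thresholding $y$ at the midpoint of the two means, and any other threshold yields a lower empirical $P_{cd}$.

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 200_000
truth = rng.integers(1, 3, size=trials)            # equiprobable true hypothesis: 1 or 2
# H1: y ~ N(0, 1);  H2: y ~ N(1, 1)
y = rng.standard_normal(trials) + (truth == 2)

def pcd(threshold):
    """Empirical probability of correct detection for the rule
    'decide H2 iff y > threshold' (MAP corresponds to threshold = 0.5)."""
    decide = np.where(y > threshold, 2, 1)
    return np.mean(decide == truth)
```

Here `pcd(0.5)` (the MAP rule) comes out near the theoretical value $\Phi(0.5) \approx 0.69$, and it exceeds the probability of correct detection of any mismatched threshold.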
The “discrepancy” between p0(y) and ˆ p(y) can be measured using the Kullback-Leibler (KL) information or discrepancy function (see [Kullback and Leibler 1951]): D(p0, ˆ p) = Z p0(y) ln p0(y) ˆ p(y)  dy (C.3.15) To simplify the notation, we omit the region of integration when it is the entire space. Letting E0{·} denote the expectation with respect to the true pdf, p0(y), we can rewrite (C.3.15) as: D(p0, ˆ p) = E0  ln p0(y) ˆ p(y)  = E0{ln p0(y)} −E0{ln ˆ p(y)} (C.3.16) Next, we prove that (C.3.15) possesses some properties of a suitable discrepancy function, viz. D(p0, ˆ p) ≥0 D(p0, ˆ p) = 0 if and only if p0(y) = ˆ p(y) (C.3.17) To verify (C.3.17) we use the fact shown in Complement 6.5.8 that −ln λ ≥1 −λ for any λ > 0 (C.3.18) and −ln λ = 1 −λ if and only if λ = 1 (C.3.19) Hence, letting λ(y) = ˆ p(y)/p0(y), we have that: D(p0, ˆ p) = Z p0(y) [−ln λ(y)] dy ≥ Z p0(y) [1 −λ(y)] dy = Z p0(y)  1 −ˆ p(y) p0(y)  dy = 0 “sm2” 2004/2/ page 38 i i i i i i i i Section C.3 Useful Mathematical Preliminaries and Outlook 385 where the equality holds if and only if λ(y) ≡1, i.e. ˆ p(y) ≡p0(y). Remark: The inequality in (C.3.17) also follows from Jensen’s inequality (see equa-tion (4.9.36) in Complement 4.9.5) and the concavity of the function ln(·): D(p0, ˆ p) = −E0  ln  ˆ p(y) p0(y)  ≥−ln  E0  ˆ p(y) p0(y)  = −ln Z ˆ p(y) p0(y)p0(y) dy  = −ln(1) = 0 ■ The KL discrepancy function can be viewed as quantifying the “loss of in-formation” induced by the use of ˆ p(y) in lieu of p0(y). For this reason, D(p0, ˆ p) is sometimes called an information function, and the order selection rules derived from it are called information criteria (see Sections C.4–C.6). C.3.3 Outlook: Theoretical and Practical Perspectives Neither the MAP rule nor the KL information can be directly used for order se-lection because neither the pdfs of the data vector under the various hypotheses nor the true data pdf are available in any of the parametric spectral analysis prob-lems discussed in the text. 
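Before continuing, the KL discrepancy (C.3.15) and its properties (C.3.17) can be checked directly in a discrete setting, where the integral becomes a sum over probability masses. The two distributions below are hypothetical.

```python
import numpy as np

def kl_divergence(p0, p_hat):
    """Discrete analogue of D(p0, p_hat) in (C.3.15):
    sum_i p0[i] * ln(p0[i] / p_hat[i])."""
    p0, p_hat = np.asarray(p0, dtype=float), np.asarray(p_hat, dtype=float)
    mask = p0 > 0                      # terms with p0[i] = 0 contribute nothing
    return float(np.sum(p0[mask] * np.log(p0[mask] / p_hat[mask])))

p_true  = [0.5, 0.3, 0.2]
p_model = [0.4, 0.4, 0.2]
print(kl_divergence(p_true, p_model))   # > 0 whenever the pdfs differ
print(kl_divergence(p_true, p_true))    # = 0, in agreement with (C.3.17)
```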
A possible way of using the MAP approach for order estimation consists of assuming an a priori pdf for the unknown parameter vector, θn, and integrating θn out of pn(y, θn) to obtain pn(y|Hn). This Bayesian-type approach will be discussed in Section C.7. Regarding the KL approach, a natural way of using it for order selection consists of using an estimate, ˆ D(p0, ˆ p), in lieu of the unavailable D(p0, ˆ p) (for a suitably chosen model pdf, ˆ p(y)), and determining the model order by minimizing ˆ D(p0, ˆ p). This KL-based approach will be discussed in Sections C.4–C.6. The derivations of all model order selection rules in the sections that follow rely on the assumption that one of the hypotheses {Hn} is true. As this assumption is unlikely to hold in applications with real-life data, the reader may justifiably wonder whether an order selection rule derived under such an assumption has any practical value. To address this concern, we remark that good parameter estimation methods (such as the MLM), derived under rather strict modeling assumptions, perform quite well in applications where the assumptions made are rarely satisfied exactly. Similarly, order selection rules based on sound theoretical principles (such as the ML, KL, and MAP principles used in this text) are likely to perform well in applications despite the fact that some of the assumptions made when deriving them do not hold exactly. While the precise behavior of order selection rules (such as those presented in the sections to follow) in various mismodeling scenarios is not well understood, extensive simulation results (see, e.g., [McQuarrie and Tsai 1998; Linhart and Zucchini 1986; Burnham and Anderson 2002]) lend support to the above claim. 
“sm2” 2004/2/ page 386 i i i i i i i i 386 Appendix C Model Order Selection Tools C.4 DIRECT KULLBACK-LEIBLER (KL) APPROACH: NO-NAME RULE The model-dependent part of the Kullback-Leibler (KL) information, (C.3.16), is given by −E0{ln ˆ p(y)} (C.4.1) where ˆ p(y) is the pdf or likelihood of the model (to simplify the notation, we omit the index n of ˆ p(y); we will reinstate the index n later on, when needed). Minimization of (C.4.1) with respect to the model order is equivalent to maximization of the function: I(p0, ˆ p) ≜E0{ln ˆ p(y)} (C.4.2) which is sometimes called the relative KL information. The ideal choice for ˆ p(y) in (C.4.2) would be the model likelihood, pn(y|Hn) = pn(y, θn). However, the model likelihood function is not available, and hence this choice is not possible. Instead, we might think of using ˆ p(y) = p(y, ˆ θ) (C.4.3) in (C.4.2), which would give I  p0, p(y, ˆ θ)  = E0 n ln p(y, ˆ θ) o (C.4.4) Because the true pdf of the data vector is unknown, we cannot evaluate the ex-pectation in (C.4.4). Apparently, what we could easily do is to use the following unbiased estimate of I  p0, p(y, ˆ θ)  , instead of (C.4.4) itself, ˆ I = ln p(y, ˆ θ) (C.4.5) However, the order selection rule that maximizes (C.4.5) does not have satisfactory properties. This is especially true for nested models, in the case of which the order selection rule based on the maximization of (C.4.5) fails completely: indeed, for nested models this rule will always choose the maximum possible order, ¯ n, owing to the fact that ln pn(y, ˆ θn) monotonically increases with increasing n. 
A better idea consists of approximating the unavailable log-pdf of the model, ln pn(y, θn), by a second-order Taylor series expansion around ˆ θn, and using the so-obtained approximation to define ln ˆ p(y) in (C.4.2): ln pn(y, θn) ≃ln pn(y, ˆ θn) + (θn −ˆ θn)T  ∂ln pn(y, θn) ∂θn θn=ˆ θn  + 1 2(θn −ˆ θn)T  ∂2 ln pn(y, θn) (∂θn) (∂θn)T θn=ˆ θn  (θn −ˆ θn) ≜ln ˆ pn(y) (C.4.6) Because ˆ θn is the maximizer of ln pn(y, θn), the second term in (C.4.6) is equal to zero. Hence, we can write (see also (C.2.21)): ln ˆ pn(y) ≃ln pn(y, ˆ θn) −1 2(θn −ˆ θn)T J(θn −ˆ θn) (C.4.7) According to (C.2.11), E0 n (θn −ˆ θn)T J(θn −ˆ θn) o = tr h JE0 n (θn −ˆ θn)(θn −ˆ θn)T oi = tr[In] = n (C.4.8) “sm2” 2004/2/ page 38 i i i i i i i i Section C.5 Cross-Validatory KL Approach: The AIC Rule 387 which means that, for the choice of ˆ pn(y) in (C.4.7), we have I = E0 n ln pn(y, ˆ θn) −n 2 o (C.4.9) An unbiased estimate of the above relative KL information is given by ln pn(y, ˆ θn) −n 2 (C.4.10) The corresponding order selection rule maximizes (C.4.10), or, equivalently, mini-mizes NN(n) = −2 ln pn(y, ˆ θn) + n (C.4.11) with respect to model order n. This no-name (NN) rule can be shown to perform better than that based on (C.4.5), but worse than the rules presented in the next sections. Essentially, the problem with (C.4.11) is that it tends to overfit (i.e., to select model orders larger than the “true” order). To understand intuitively how this happens, note that the first term in (C.4.11) decreases with increasing n (for nested models), whereas the second term increases. Hence, the second term in (C.4.11) penalizes overfitting; however, it turns out that it does not penalize quite enough. The rules presented in the following sections have a form similar to (C.4.11), but with a larger penalty term, and they do have better properties than (C.4.11). 
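A small numerical sketch makes the failure of plain ML, and the effect of the NN penalty, concrete. It uses nested least-squares polynomial fits and evaluates −2 ln pn(y, θ̂n) up to a constant via (C.4.12); the quadratic-in-noise data are hypothetical. Because the fits are nested, the residual variance never increases with the order, so maximizing the likelihood alone always returns the largest order tried.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = np.linspace(0.0, 1.0, N)
# Hypothetical data: quadratic trend plus Gaussian noise (true order is 2)
y = 0.5 + 2.0 * x - 3.0 * x**2 + 0.3 * rng.standard_normal(N)

def sigma2(n):
    """Residual variance of an LS polynomial fit of order n (nested models)."""
    c = np.polyfit(x, y, n)
    return float(np.mean((y - np.polyval(c, x)) ** 2))

orders = list(range(1, 8))
# Gaussian NLS model: -2 ln p_n(y, theta_hat_n) = N ln sigma2_n + const, see (C.4.12)
ml = {n: N * np.log(sigma2(n)) for n in orders}
nn = {n: N * np.log(sigma2(n)) + n for n in orders}   # the NN rule (C.4.11)

print(min(orders, key=ml.get))   # plain ML always picks the largest order tried
print(min(orders, key=nn.get))   # the NN penalty counteracts this, if only weakly
```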
Despite this fact, we have chosen to present (C.4.11) briefly in this section for two reasons: (i) the discussion here has revealed the failure of using maxn ln pn(y, ˆ θn) as an order selection rule, and has shown that it is in effect quite easy to obtain rules with better properties; and (ii) this section has laid groundwork for the derivation of better order selection rules based on the KL approach in the next two sections. To close this section, we motivate the multiplication by -2 in going from (C.4.10) to (C.4.11). The reason for preferring (C.4.11) to (C.4.10) is that for the fairly common NLS model in (C.2.2) and the associated Gaussian likelihood in (C.2.3), −2 ln pn(y, ˆ θn) takes on the following convenient form: −2 ln pn(y, ˆ θn) = N ln ˆ σ2 n + constant (C.4.12) (see (C.2.5)–(C.2.7)). Hence, in such a case we can replace −2 ln pn(y, ˆ θn) in (C.4.11) by the scaled logarithm of the residual variance, N ln ˆ σ2 n. This remark also applies to the order selection rules presented in the following sections, which are written in a form similar to (C.4.11). C.5 CROSS-VALIDATORY KL APPROACH: THE AIC RULE As explained in the previous section, a possible approach to model order selection consists of minimizing the KL discrepancy between the “true” pdf of the data and the pdf (or likelihood) of the model, or equivalently maximizing the relative KL information (see (C.4.2)): I(p0, ˆ p) = E0{ln ˆ p(y)} (C.5.1) When using this approach, the first (and, likely the main) hurdle that we have to overcome is the choice of the model likelihood, ˆ p(y). As discussed in the previous “sm2” 2004/2/ page 388 i i i i i i i i 388 Appendix C Model Order Selection Tools section, we would ideally like to use the true pdf of the model as ˆ p(y) in (C.5.1), i.e. ˆ p(y) = pn(y, θn), but this is not possible since pn(y, θn) is unknown. Hence, we have to choose ˆ p(y) in a different way. 
This choice is important, as it eventually determines the model order selection rule that we will obtain. The other issue we should consider when using the approach based on (C.5.1) is that the expectation in (C.5.1) cannot be evaluated because the true pdf of the data is unknown. Con-sequently, we will have to use an estimate, ˆ I, in lieu of the unavailable I(p0, ˆ p) in (C.5.1). Let x denote a fictitious data vector with the same size, N, and the same pdf as y, but which is independent of y. Also, let ˆ θx denote the ML estimate of the model parameter vector that would be obtained from x if x were available (we omit the superindex n of ˆ θx as often as possible, to simplify notation). In this section we will consider the following choice of the model’s pdf: ln ˆ p(y) = Ex n ln p(y, ˆ θx) o (C.5.2) which, when inserted in (C.5.1), yields: I = Ey n Ex n ln p(y, ˆ θx) oo (C.5.3) Hereafter, Ex{·} and Ey{·} denote the expectation with respect to the pdf of x and y, respectively. The above choice of ˆ p(y), which was introduced in [Akaike 1974; Akaike 1978], has an interesting cross-validation interpretation: we use the sample x for estimation and the independent sample y for validation of the so-obtained model’s pdf. Note that the dependence of (C.5.3) on the fictitious sample x is eliminated (as it should be, since x is unavailable) via the expectation operation Ex{·}; see below for details. An asymptotic second-order Taylor series expansion of ln p(y, ˆ θx) around ˆ θy, similar to (C.4.6)–(C.4.7), yields: ln p(y, ˆ θx) ≃ln p(y, ˆ θy) + (ˆ θx −ˆ θy)T " ∂ln p(y, θ) ∂θ θ=ˆ θy # + 1 2(ˆ θx −ˆ θy)T " ∂2 ln p(y, θ) ∂θ ∂θT θ=ˆ θy # (ˆ θx −ˆ θy) ≃ln p(y, ˆ θy) −1 2(ˆ θx −ˆ θy)T Jy(ˆ θx −ˆ θy) (C.5.4) where Jy is the J matrix, as defined in (C.2.20), associated with the data vector y. 
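Before completing the derivation, the cross-validation idea behind (C.5.2)–(C.5.3) can be illustrated numerically: estimate θ̂x from one sample, then score the fitted likelihood on an independent sample with the same distribution. The line-in-noise data, the model family, and the noise level below are all hypothetical; the Monte Carlo loop plays the role of the expectations Ex{·} and Ey{·}.

```python
import numpy as np

rng = np.random.default_rng(4)
N, trials = 200, 50
t = np.linspace(0.0, 1.0, N)

def gauss_loglik(resid, s2):
    """ln of a Gaussian likelihood with noise variance s2."""
    return float(-0.5 * len(resid) * np.log(2 * np.pi * s2)
                 - 0.5 * np.sum(resid ** 2) / s2)

def draw():
    # Straight line in noise; successive draws are independent with the same pdf
    return 1.0 + 2.0 * t + 0.5 * rng.standard_normal(N)

order = 8                  # deliberately overparameterized (the true trend is linear)
gap = []
for _ in range(trials):
    x, y = draw(), draw()              # x: estimation sample, y: validation sample
    c = np.polyfit(t, x, order)        # theta_hat_x, computed from x only
    rx = x - np.polyval(c, t)
    s2 = float(np.mean(rx ** 2))       # ML noise variance, also from x
    ll_train = gauss_loglik(rx, s2)                    # ln p(x, theta_hat_x)
    ll_valid = gauss_loglik(y - np.polyval(c, t), s2)  # ln p(y, theta_hat_x)
    gap.append(ll_train - ll_valid)

print(np.mean(gap))   # positive on average: the fitting sample overstates the fit
```

The systematically positive gap is precisely what the penalty term of AIC accounts for, as the derivation that follows shows.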
Using the fact that x and y have the same pdf (which implies that Jy = Jx) along with the fact that they are independent of each other, we can show that: Ey n Ex n (ˆ θx −ˆ θy)T Jy(ˆ θx −ˆ θy) oo = Ey  Ex  tr  Jy h (ˆ θx −θ) −(ˆ θy −θ) i h (ˆ θx −θ) −(ˆ θy −θ) iT  = tr Jy J−1 x + J−1 y  = 2n (C.5.5) “sm2” 2004/2/ page 389 i i i i i i i i Section C.5 Cross-Validatory KL Approach: The AIC Rule 389 Inserting (C.5.5) in (C.5.4) yields the following asymptotic approximation of the relative KL information in (C.5.3): I ≃Ey n ln pn(y, ˆ θn) −n o (C.5.6) (where we have omitted the subindex y of ˆ θ but reinstated the superindex n). Evidently, (C.5.6) can be estimated in an unbiased manner by ln pn(y, ˆ θn) −n (C.5.7) Maximizing (C.5.7) with respect to n is equivalent to minimizing the following function of n: AIC = −2 ln pn(y, ˆ θn) + 2n (C.5.8) where the acronym AIC stands for Akaike Information Criterion (the reasons for multiplying (C.5.7) by -2 to get (C.5.8), and for the use of the word “information” in the name given to (C.5.8) have been explained before, see the previous two sections). As an example, for the sinusoidal signal model with nc components (see Sec-tion C.2), AIC takes on the following form (see (C.2.6)–(C.2.10)): AIC = 2Ns ln ˆ σ2 nc + 2(3nc + 1) (C.5.9) where Ns denotes the number of available complex-valued samples, {yc(t)}Ns t=1, and ˆ σ2 nc = 1 Ns Ns X t=1 yc(t) − nc X k=1 ˆ αkei(ˆ ωkt+ ˆ ϕk) 2 (C.5.10) Remark: AIC can also be obtained by using the following relative KL information function, in lieu of (C.5.3), I = Ey n Ex n ln p(x, ˆ θy) oo (C.5.11) Note that, in (C.5.11), x is used for validation and y for estimation. However, the derivation of AIC from (C.5.11) is more complicated; such a derivation, which is left as an exercise to the reader, will make use of two Taylor series expansions, and the fact that Ex{ln p(x, θ)} = Ey{ln p(y, θ)}. 
■ The performance of AIC has been found to be satisfactory in many case studies and applications to real-life data reported in the literature (see, e.g., [McQuarrie and Tsai 1998; Linhart and Zucchini 1986; Burnham and Anderson 2002; Sakamoto, Ishiguro, and Kitagawa 1986]). The performance of a model order selection rule, such as AIC, can be measured in different ways, as explained in the next two paragraphs. “sm2” 2004/2/ page 390 i i i i i i i i 390 Appendix C Model Order Selection Tools As a first possibility, we can consider a scenario in which the data generating mechanism belongs to the class of models under test, and thus there is a true order. In such a case, analytical or numerical studies can be used to determine the probability with which the rule selects the true order. For AIC, it can be shown that, under quite general conditions, the probability of underfitting →0 (C.5.12) the probability of overfitting →constant > 0 (C.5.13) as N →∞(see, e.g., [McQuarrie and Tsai 1998; Kashyap 1980]). We can see from (C.5.13) that the behavior of AIC with respect to the probability of correct detection is not entirely satisfactory. Interestingly, it is precisely this kind of be-havior that appears to make AIC perform satisfactorily with respect to the other possible type of performance measure, as explained below. An alternative way of measuring the performance is to consider a more prac-tical scenario in which the data generating mechanism is more complex than any of the models under test, which is usually the case in practical applications. In such a case we can use analytical or numerical studies to determine the performance of the model picked by the rule as an approximation of the data generating mechanism: for instance, we can consider the average distance between the estimated and true spectral densities, or the average prediction error of the model. 
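As a concrete illustration of the AIC rule (C.5.8), the sketch below selects the order of an AR model fitted by least squares, again using (C.4.12) to evaluate −2 ln pn(y, θ̂n) up to a constant. The AR(2) example data and coefficients are hypothetical, and the least-squares fit is used here as a simple stand-in for the exact ML estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400
# Hypothetical AR(2) data: y(t) = 1.5 y(t-1) - 0.7 y(t-2) + e(t)
e = rng.standard_normal(N + 100)
y = np.zeros(N + 100)
for t in range(2, N + 100):
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + e[t]
y = y[100:]                      # drop the transient

def ar_residual_var(n):
    """LS fit of an AR(n) model; returns sigma2_n as in (C.4.12)."""
    Y = np.column_stack([y[n - k - 1 : N - k - 1] for k in range(n)])
    b = y[n:]
    a, *_ = np.linalg.lstsq(Y, b, rcond=None)
    return float(np.mean((b - Y @ a) ** 2))

orders = range(1, 11)
aic = {n: N * np.log(ar_residual_var(n)) + 2 * n for n in orders}   # (C.5.8) via (C.4.12)
print(min(orders, key=aic.get))
```

With a strongly second-order process as here, AIC essentially never underfits, but across noise realizations it retains a nonzero probability of picking an order above 2, consistent with (C.5.13).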
With respect to such a performance measure, AIC performs well, partly because of its tendency to select models with relatively large orders, which may be a good thing to do in a case in which the data generating mechanism is more complex than the models used to fit it. The nonzero overfitting probability of AIC is due to the fact that the term 2n in (C.5.8) (that penalizes high-order models), while larger than the term n that appears in the NN rule, is still too small. Extensive simulation studies (see, e.g., [Bhansali and Downham 1977]) have empirically found that the following Generalized Information Criterion (GIC): GIC = −2 ln pn(y, ˆ θn) + νn (C.5.14) may outperform AIC with respect to various performance measures if ν > 2. Specif-ically, depending on the considered scenario as well as the value of N and the per-formance measure, values of ν in the interval ν ∈[2, 6] have been found to give the best performance. In the next section we show that GIC can be obtained as a natural theoretical extension of AIC. Hence, the use of (C.5.14) with ν > 2 can be motivated on formal grounds. However, the choice of a particular ν in GIC is a more difficult problem that cannot be solved in the current KL framework (see the next section for details). The different framework of Section C.7 appears to be necessary to arrive at a rule having the form of (C.5.14) with a specific expression for ν. We close this section with a brief discussion on another modification of the AIC rule suggested in the literature (see, e.g., [Hurvich and Tsai 1993]). As explained before, AIC is derived by maximizing an asymptotically unbiased estimate of the relative KL information I in (C.5.3). 
Interestingly, for linear regression models “sm2” 2004/2/ page 39 i i i i i i i i Section C.6 Generalized Cross-Validatory KL Approach: the GIC Rule 391 (given by (C.2.2) where µ(γ) is a linear function of γ), the following corrected AIC rule, AICc, can be shown to be an exactly unbiased estimate of I: AICc = −2 ln pn(y, ˆ θn) + 2N N −n −1n (C.5.15) (see, e.g., [Hurvich and Tsai 1993; Cavanaugh 1997]). As N →∞, AICc → AIC (as expected). However, for finite values of N the penalty term of AICc is larger than that of AIC. Consequently, in finite samples AICc has a smaller risk of overfitting than AIC, and therefore we can say that AICc trades offa decrease of the risk of overfitting (which is rather large for AIC) for an increase in the risk of underfitting (which is quite small for AIC, and hence it can be slightly increased without a significant deterioration of performance). With this fact in mind, AICc can be used as an order selection rule for more general models than just linear regressions, even though its motivation in the general case is pragmatic rather than theoretical. For other finite-sample corrections of AIC we refer the reader to [de Waele and Broersen 2003; Broersen 2000; Broersen 2002; Seghouane, Bekara, and Fleury 2003]. C.6 GENERALIZED CROSS-VALIDATORY KL APPROACH: THE GIC RULE In the cross-validatory approach of the previous section, the estimation sample x has the same length as the validation sample y. In that approach, ˆ θx (obtained from x) is used to approximate the likelihood of the model via Ex{p(y, ˆ θx)}. The AIC rule so obtained has a nonzero probability of overfitting (even asymptotically). Intuitively, the risk of overfitting will decrease if we let the length of the validation sample be (much) larger than that of the estimation sample, i.e. 
Interestingly, for linear regression models (given by (C.2.2) where µ(γ) is a linear function of γ), the following corrected AIC rule, AICc, can be shown to be an exactly unbiased estimate of I: AICc = −2 ln pn(y, ˆ θn) + [2N/(N − n − 1)] n (C.5.15) (see, e.g., [Hurvich and Tsai 1993; Cavanaugh 1997]). As N → ∞, AICc → AIC (as expected). However, for finite values of N the penalty term of AICc is larger than that of AIC. Consequently, in finite samples AICc has a smaller risk of overfitting than AIC, and therefore we can say that AICc trades off a decrease in the risk of overfitting (which is rather large for AIC) for an increase in the risk of underfitting (which is quite small for AIC, and hence can be slightly increased without a significant deterioration of performance). With this fact in mind, AICc can be used as an order selection rule for more general models than just linear regressions, even though its motivation in the general case is pragmatic rather than theoretical. For other finite-sample corrections of AIC we refer the reader to [de Waele and Broersen 2003; Broersen 2000; Broersen 2002; Seghouane, Bekara, and Fleury 2003]. C.6 GENERALIZED CROSS-VALIDATORY KL APPROACH: THE GIC RULE In the cross-validatory approach of the previous section, the estimation sample x has the same length as the validation sample y. In that approach, ˆ θx (obtained from x) is used to approximate the likelihood of the model via Ex{p(y, ˆ θx)}. The AIC rule so obtained has a nonzero probability of overfitting (even asymptotically). Intuitively, the risk of overfitting will decrease if we let the length of the validation sample be (much) larger than that of the estimation sample, i.e.
N ≜length(y) = ρ · length(x), ρ ≥1 (C.6.1) Indeed, overfitting occurs when the model corresponding to ˆ θx also fits the “noise” in the sample x so that p(x, ˆ θx) has a “much” larger value than the true pdf, p(x, θ). Such a model may behave reasonably well on a short validation sample y, but not on a long validation sample (in the latter case, p(y, ˆ θx) will take on very small values). The simple idea in (C.6.1) of letting the lengths of the validation and estimation samples be different leads to a natural extension of AIC, as shown below. A straightforward calculation shows that under (C.6.1) we have Jy = ρJx (C.6.2) (see, e.g., (C.2.19)). With this small difference, the calculations in the previous section carry over to the present case and we obtain (see (C.5.4)–(C.5.5)): I ≃Ey n ln pn(y, ˆ θy) o −1 2Ey  Ex  tr  Jy h (ˆ θx −θ) −(ˆ θy −θ) i h (ˆ θx −θ) −(ˆ θy −θ) iT  = Ey  ln pn(y, ˆ θy) −1 2 tr Jy ρJ−1 y + J−1 y  = Ey  ln pn(y, ˆ θy) −1 + ρ 2 n  (C.6.3) “sm2” 2004/2/ page 39 i i i i i i i i 392 Appendix C Model Order Selection Tools An unbiased estimate of the right side in (C.6.3) is given by: ln p(y, ˆ θy) −1 + ρ 2 n (C.6.4) The generalized information criterion (GIC) rule maximizes (C.6.4) or, equiva-lently, minimizes GIC = −2 ln pn(y, ˆ θn) + (1 + ρ)n (C.6.5) As expected, (C.6.5) reduces to AIC for ρ = 1. Note also that, for a given y, the order selected by (C.6.5) with ρ > 1 is always smaller than the order selected by AIC (because the penalty term in (C.6.5) is larger than that in (C.5.8)); hence, as predicted by the previous intuitive discussion, the risk of overfitting associated with GIC is smaller than for AIC when ρ > 1. On the negative side, there is no clear guideline for choosing ρ in (C.6.5). The “optimal” value of ρ in the GIC rule has been empirically shown to depend on the performance measure, the number of data samples, and the data generating mechanism itself [McQuarrie and Tsai 1998; Bhansali and Downham 1977]. 
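The effect of ρ on the GIC rule (C.6.5) can be seen in a short experiment: on pure-noise data (for which any fitted trend is overfitting), increasing ρ, and hence the penalty (1 + ρ)n, can only push the selected order downward. The data and the nested polynomial model family below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
N, nmax = 128, 8
t = np.linspace(0.0, 1.0, N)
# Pure white noise: there is no trend, so any selected structure is overfitting
y = rng.standard_normal(N)

def sig2(n):
    c = np.polyfit(t, y, n)
    return float(np.mean((y - np.polyval(c, t)) ** 2))

def gic_order(rho):
    """Minimize GIC(n) = N ln sigma2_n + (1 + rho) n over n, as in (C.6.5)."""
    scores = {n: N * np.log(sig2(n)) + (1 + rho) * n for n in range(1, nmax + 1)}
    return min(scores, key=scores.get)

picks = [gic_order(rho) for rho in (1, 3, 7)]   # rho = 1 reproduces AIC
print(picks)   # a heavier penalty never selects a larger order
```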
Consequently, ρ should be chosen as a function of all these factors, but there is no clear rule as to how that should be done. The approach of the next section appears to be more successful than the present approach in suggesting a specific choice for ρ in (C.6.5). Indeed, as we will see, that approach leads to an order selection rule of the GIC type but with a concrete expression for ρ as a function of N. C.7 BAYESIAN APPROACH: THE BIC RULE The order selection rule to be presented in this section can be obtained in two ways. First, let us consider the KL framework of the previous sections. Therefore, our goal is to maximize the relative KL information (see (C.5.1)): I(p0, ˆ p) = E0{ln ˆ p(y)} (C.7.1) The ideal choice of ˆ p(y) would be ˆ p(y) = pn(y, θn). However, this choice is not possible since the likelihood of the model, pn(y, θn), is not available. Hence we have to use a “surrogate likelihood” in lieu of pn(y, θn). Let us assume, as before, that a fictitious sample x is used to make inferences about θ. The pdf of the estimate, ˆ θx, obtained from x can alternatively be viewed as an a priori pdf of θ, and hence it will be denoted by p(θ) in what follows (once again, we omit the superindex n of θ, ˆ θ, etc. to simplify the notation, whenever there is no risk for confusion). Note that we do not constrain p(θ) to be Gaussian. We only assume that: p(θ) is flat around ˆ θ (C.7.2) where, as before, ˆ θ denotes the ML estimate of the parameter vector obtained from the available data sample, y. Furthermore, now we assume that the length of the fictitious sample is a constant that does not depend on N, which implies that: p(θ) is independent of N (C.7.3) “sm2” 2004/2/ page 393 i i i i i i i i Section C.7 Bayesian Approach: The BIC Rule 393 As a consequence of assumption (C.7.3), the ratio between the lengths of the val-idation sample and the (fictitious) estimation sample grows without bound as N increases. 
According to the discussion in the previous section, this fact should lead to an order selection rule with an asymptotically much larger penalty term than that of AIC or GIC (with ρ =constant), and hence with a reduced risk of overfitting. The scenario introduced above leads naturally to the following choice of sur-rogate likelihood: ˆ p(y) = Eθ {p(y, θ)} = Z p(y, θ)p(θ) dθ (C.7.4) Remark: In the previous sections we used a surrogate likelihood given by (see (C.5.2)): ln ˆ p(y) = Ex n ln p(y, ˆ θx) o (C.7.5) However, we could have instead used a ˆ p(y) given by ˆ p(y) = Eˆ θx n p(y, ˆ θx) o (C.7.6) The rule that would be obtained by using (C.7.6) can be shown to have the same form as AIC and GIC, but with a (slightly) different penalty term. Note that the choice of ˆ p(y) in (C.7.6) is similar to the choice in (C.7.4), with the difference that for (C.7.6) the “a priori” pdf, p(ˆ θx), depends on N. ■ To obtain a simple asymptotic approximation of the integral in (C.7.4) we make use of the asymptotic approximation of p(y, θ) given by (C.4.6)–(C.4.7): p(y, θ) ≃p(y, ˆ θ)e−1 2 (ˆ θ−θ)T ˆ J(ˆ θ−θ) (C.7.7) which holds for θ in the vicinity of ˆ θ. Inserting (C.7.7) in (C.7.4) and using the assumption in (C.7.2) along with the fact that p(y, θ) is asymptotically much larger at θ = ˆ θ than at any θ ̸= ˆ θ, we obtain: ˆ p(y) ≃p(y, ˆ θ)p(ˆ θ) Z e−1 2 (ˆ θ−θ)T ˆ J(ˆ θ−θ) dθ = p(y, ˆ θ)p(ˆ θ)(2π)n/2 | ˆ J|1/2 Z 1 (2π)n/2| ˆ J−1|1/2 e−1 2 (ˆ θ−θ)T ˆ J(ˆ θ−θ) dθ | {z } =1 = p(y, ˆ θ)p(ˆ θ)(2π)n/2 | ˆ J|1/2 (C.7.8) (see [Djuri´ c 1998] and references therein for the exact conditions under which the above approximation holds true). It follows from (C.7.1) and (C.7.8) that ˆ I = ln p(y, ˆ θ) + ln p(ˆ θ) + n 2 ln 2π −1 2 ln | ˆ J| (C.7.9) is an asymptotically unbiased estimate of the relative KL information. Note, how-ever, that (C.7.9) depends on the a priori pdf of θ, which has not been specified. 
“sm2” 2004/2/ page 394 i i i i i i i i 394 Appendix C Model Order Selection Tools To eliminate this dependence, we use the fact that | ˆ J| increases without bound as N increases. Specifically, in most cases (but not in all; see below) we have that (cf. (C.2.21)): ln | ˆ J| = ln N · 1 N ˆ J = n ln N + ln 1 N ˆ J = n ln N + O(1) (C.7.10) where we used the fact that |cJ| = cn|J| for a scalar c and an n × n matrix J. Using (C.7.10) and the fact that p(θ) is independent of N (see (C.7.3)) yields the following asymptotic approximation of the right side in (C.7.9): ˆ I ≃ln pn(y, ˆ θn) −n 2 ln N (C.7.11) The Bayesian information criterion (BIC) rule selects the order that maximizes (C.7.11), or, equivalently, minimizes: BIC = −2 ln pn(y, ˆ θn) + n ln N (C.7.12) We remind the reader that (C.7.12) has been derived under the assumption that (C.2.21) holds, which is not always true. As an example (see [Djuri´ c 1998] for more examples), consider once again the sinusoidal signal model with nc com-ponents (as also considered in Section C.5), in the case of which we have that (cf. (C.2.22)–(C.2.23)): ln | ˆ J| = ln K−2 N + ln KN ˆ JKN = (2nc + 1) ln Ns + 3nc ln Ns + O(1) = (5nc + 1) ln Ns + O(1) (C.7.13) Hence, in the case of sinusoidal signals, BIC takes on the form: BIC = −2 ln pnc(y, ˆ θnc) + (5nc + 1) ln Ns = 2Ns ln ˆ σ2 nc + (5nc + 1) ln Ns (C.7.14) where ˆ σ2 nc is as defined in (C.5.10), and Ns denotes the number of complex-valued data samples. The attribute Bayesian in the name of the rule in (C.7.12) or (C.7.14) is motivated by the use of the a priori pdf, p(θ), in the rule derivation, which is typical of a Bayesian approach. In fact, the BIC rule can be obtained using a full Bayesian approach, as explained next. To obtain the BIC rule in a Bayesian framework we assume that the parameter vector θ is a random variable with a given a priori pdf denoted by p(θ). 
Owing to this assumption on θ, we need to modify the previously used notation as follows: p(y, θ) will now denote the joint pdf of y and θ, and p(y|θ) will denote the conditional pdf of y given θ. Using this notation and Bayes’ rule we can write: p(y|Hn) = Z pn(y, θn) dθn = Z pn(y|θn)pn(θn) dθn (C.7.15) “sm2” 2004/2/ page 39 i i i i i i i i Section C.8 Summary and the Multimodel Approach 395 The right side of (C.7.15) is identical to that of (C.7.4). It follows from this observation and the analysis conducted in the first part of this section that, under the assumptions (C.7.2) and (C.7.3) and asymptotically in N, ln p(y|Hn) ≃ln pn(y, ˆ θn) −n 2 ln N = −1 2BIC (C.7.16) (see (C.7.12)). Hence, maximizing p(y|Hn) is asymptotically equivalent with min-imizing BIC, independently of the prior p(θ) (as long as it satisfies (C.7.2) and (C.7.3)). The rediscovery of BIC in the above Bayesian framework is important, as it reveals the interesting fact that the BIC rule is asymptotically equivalent to the optimal MAP rule (see Section C.3.1), and hence that the BIC rule can be expected to maximize the total probability of correct detection, at least for sufficiently large values of N. The BIC rule has been proposed in [Schwarz 1978a; Kashyap 1982] among others. In [Rissanen 1978; Rissanen 1982] the same type of rule has been ob-tained by a different approach based on coding arguments and the minimum descrip-tion length (MDL) principle. The fact that the BIC rule can be derived in several different ways suggests that it may have a fundamental character. In particular, it can be shown that, under the assumption that the data generating mechanism belongs to the model class considered, the BIC rule is consistent; that is, For BIC: the probability of correct detection →1 as N →∞ (C.7.17) (see, e.g., [S¨ oderstr¨ om and Stoica 1989; McQuarrie and Tsai 1998]). This should be contrasted with the nonzero overfitting probability of AIC and GIC (with ρ=constant), see (C.5.12)–(C.5.13). 
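The contrast between the consistency of BIC, (C.7.17), and the nonzero overfitting probability of AIC can be illustrated by a small Monte Carlo experiment. Both rules are written in the common form −2 ln pn(y, θ̂n) + η · n, evaluated via (C.4.12), with η = 2 for AIC and η = ln N for BIC; the AR(1) data-generating mechanism and the least-squares AR fits are hypothetical illustration choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials, nmax = 300, 50, 6

def select(y, penalty):
    """Minimize N ln sigma2_n + penalty * n over AR orders n (cf. (C.4.12))."""
    best, best_score = None, np.inf
    for n in range(1, nmax + 1):
        Y = np.column_stack([y[n - k - 1 : len(y) - k - 1] for k in range(n)])
        b = y[n:]
        a, *_ = np.linalg.lstsq(Y, b, rcond=None)
        s2 = float(np.mean((b - Y @ a) ** 2))
        score = len(y) * np.log(s2) + penalty * n
        if score < best_score:
            best, best_score = n, score
    return best

overfit = {"AIC": 0, "BIC": 0}
for _ in range(trials):
    e = rng.standard_normal(N + 50)
    y = np.zeros(N + 50)
    for t in range(1, N + 50):
        y[t] = 0.8 * y[t - 1] + e[t]        # true order is 1
    y = y[50:]                              # drop the transient
    if select(y, 2.0) > 1:                  # eta = 2: AIC
        overfit["AIC"] += 1
    if select(y, np.log(N)) > 1:            # eta = ln N: BIC, (C.7.12)
        overfit["BIC"] += 1
print(overfit)
```

Since ln N > 2 here, the BIC choice can never exceed the AIC choice on the same realization, so the BIC overfit count is necessarily the smaller of the two; as N grows it tends to zero while the AIC count does not.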
Note that the result in (C.7.17) is not surprising in view of the asymptotic equivalence between the BIC rule and the optimal MAP rule. Finally, we note in passing that if we remove the condition in (C.7.3) that p(θ) is independent of N, then the term ln p(ˆ θ) may no longer be eliminated from (C.7.9) by letting N →∞. Consequently, (C.7.9) would lead to a prior-dependent rule which could be used to obtain any other rule described in this appendix by suitably choosing the prior. While this line of argument can serve the theoretical purpose of interpreting various order selection rules in a common Bayesian framework, it appears to have little practical value, as it can hardly be used to derive new sound order selection rules. C.8 SUMMARY AND THE MULTIMODEL APPROACH In the first part of this section we summarize the model order selection rules pre-sented in the previous sections. Then we briefly discuss and motivate the multi-model approach which, as the name suggests, is based on the idea of using more than just one model for making inferences about the signal under study. C.8.1 Summary We begin with the observation that all the order selection rules discussed in this appendix have a common form, i.e.: −2 ln pn(y, ˆ θn) + η(n, N)n (C.8.1) “sm2” 2004/2/ page 396 i i i i i i i i 396 Appendix C Model Order Selection Tools but with different penalty coefficients η(n, N): AIC : η(n, N) = 2 AICc : η(n, N) = 2 N N −n −1 GIC : η(n, N) = ν = ρ + 1 BIC : η(n, N) = ln N (C.8.2) Before using any of these rules for order selection in a specific problem, we need to carry out the following steps: (i) Obtain an explicit expression for the term −2 ln pn(y, ˆ θn) in (C.8.1). This requires the specification of the model structures to be tested as well as their postulated likelihoods. An aspect that should receive some attention here is the fact that the derivation of all previous rules assumed real-valued data and parameters. 
Consequently, complex-valued data and parameters must be converted to real-valued quantities in order to apply the results in this appendix. (ii) Count the number of unknown (real-valued) parameters in each model struc-ture under consideration. This is easily done in the parametric spectral anal-ysis problems in which we are interested. (iii) Verify that the assumptions which have been made to derive the rules hold true. Fortunately, most of the assumptions made are quite weak and hence they will usually hold. Indeed, the models under test may be either nested or non-nested, and they may even be only approximate descriptions of the data generating mechanism. However, there are two particular assumptions, made on the information matrix J, that do not always hold and hence they must be checked. First, we assumed in all derivations that the inverse matrix, J−1, exists, which is not always the case. Second, we made the assumption that J is such that J/N = O(1). For some models this is not true; when it is not true, a different normalization of J is required to make it tend to a constant matrix as N →∞(this aspect is important for the BIC rule only). We have used the sinusoidal signal model as an example throughout this appendix to illustrate the steps above and the involved aspects. Once the above aspects have been carefully considered, we can go on to use one of the four rules in (C.8.1)–(C.8.2) for selecting the order in our estimation problem. The question as to which rule should be used is not an easy one. In general we can prefer AICc over AIC: indeed, there is empirical evidence that AICc outperforms AIC in small samples (whereas in medium or large samples the two rules are almost equivalent). We also tend to prefer BIC over AIC or AICc on the grounds that BIC is an asymptotic approximation of the optimal MAP rule. Regarding GIC, as mentioned in Sections C.5 and C.6, GIC with ν ∈[2, 6] (depending on the scenario under study) can outperform AIC and AICc. 
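The penalty coefficients in (C.8.2) are simple enough to tabulate directly; doing so for a few data lengths reproduces the ordering shown in Figure C.1 (which uses n = 5 for AICc). The choice ν = 4 for GIC follows the mid-interval suggestion above.

```python
import numpy as np

def eta(rule, n, N):
    """Penalty coefficients eta(n, N) from (C.8.2)."""
    if rule == "AIC":
        return 2.0
    if rule == "AICc":
        return 2.0 * N / (N - n - 1)
    if rule == "GIC":
        return 4.0                    # nu = 4, the mid-interval choice
    if rule == "BIC":
        return np.log(N)
    raise ValueError(rule)

for N in (30, 100, 1000):
    vals = {r: eta(r, 5, N) for r in ("AIC", "AICc", "GIC", "BIC")}
    print(N, {r: round(v, 2) for r, v in vals.items()})
```

Note in particular that the BIC penalty ln N crosses the GIC penalty ν = 4 near N = e⁴ ≈ 55, while AICc decays toward the AIC value 2 as N grows.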
Hence, for lack of a more precise guideline, we can think of using GIC with ν = 4, the value in the middle of the above interval. In summary, then, a possible ranking of the four rules discussed in this appendix is as follows (the first being considered the best):

• BIC
• GIC with ν = 4 (ρ = 3)
• AICc
• AIC

In Figure C.1 we show the penalty coefficients of the above rules, as functions of N, to further illustrate the relationship between them.

[Figure C.1. Penalty coefficients of AIC, GIC with ν = 4 (ρ = 3), AICc (for n = 5), and BIC, as functions of data length N.]

C.8.2 The Multimodel Approach

We close this section with a brief discussion on a multimodel approach. Assume that we have used our favorite information criterion, let us say XIC, and have computed its values for the model orders under test: XIC(n); n = 1, . . . , ¯n (C.8.3) We can then pick the order that minimizes XIC(n) and hence end up using a single model; this is the single model approach. Alternatively, we can consider a multimodel approach. Specifically, let us pick an M ∈ [1, ¯n] (such as M = 3) and consider the model orders that give the M smallest values of XIC(n), let us say n1, . . . , nM. From the derivations presented in the previous sections of this appendix, we can see that all information criteria attempt to estimate twice the negative log-likelihood of the model: −2 ln pn(y, θn) = −2 ln p(y|Hn) (C.8.4) Hence, we can use e^{−XIC(n)/2} (C.8.5) as an estimate of the likelihood of the model with order equal to n (to within a multiplicative constant). Consequently, instead of using just one model corresponding to the order that minimizes XIC(n), we can think of considering a combined use of the selected models (with orders n1, . . .
, nM) in which the contribution of each model is proportional to its likelihood value, viz.:

$$\frac{e^{-\frac{1}{2}\mathrm{XIC}(n_k)}}{\sum_{j=1}^{M} e^{-\frac{1}{2}\mathrm{XIC}(n_j)}}, \qquad k = 1, \ldots, M \qquad (C.8.6)$$

For more details on the multimodel approach, including guidelines for choosing M, we refer the interested reader to [Burnham and Anderson 2002; Stoica, Selén, and Li 2004].

APPENDIX D
Answers to Selected Exercises

1.3(a): $Z\{h_{-k}\} = H(1/z)$; $Z\{g_k\} = H(z)H^*(1/z^*)$

1.4(a):
$$\phi(\omega) = \sigma^2\,\frac{1 + |b_1|^2 + b_1 e^{-i\omega} + b_1^* e^{i\omega}}{(1 + a_1 e^{-i\omega})(1 + a_1^* e^{i\omega})}$$
$$r(0) = \frac{\sigma^2}{1 - |a_1|^2}\left[\,|1 - b_1 a_1^*|^2 + |b_1|^2(1 - |a_1|^2)\right]$$
$$r(k) = \frac{\sigma^2}{1 - |a_1|^2}\left(1 - \frac{b_1}{a_1}\right)(1 - b_1^* a_1)(-a_1)^k, \qquad k \ge 1$$

1.9(a): $\phi_y(\omega) = \sigma_1^2 |H_1(\omega)|^2 + \rho\sigma_1\sigma_2\,[H_1(\omega)H_2^*(\omega) + H_2(\omega)H_1^*(\omega)] + \sigma_2^2 |H_2(\omega)|^2$

2.3: An example is y(t) = {1, 1.1, 1}, whose unbiased ACS estimate is $\hat{r}(k)$ = {1.07, 1.1, 1}, giving $\hat{\phi}(\omega) = 1.07 + 2.2\cos(\omega) + 2\cos(2\omega)$.

2.4(b): $\mathrm{var}\{\hat{r}(k)\} = \dfrac{\sigma^4 \alpha^2(k)}{N - k}\,[1 + \delta_{k,0}]$

2.9:
(a) $E\{Y(\omega_k)Y^*(\omega_r)\} = \dfrac{\sigma^2}{N}\displaystyle\sum_{t=0}^{N-1} e^{i(\omega_r - \omega_k)t} = \begin{cases} \sigma^2 & k = r \\ 0 & k \ne r \end{cases}$
(c) $E\{\hat{\phi}(\omega)\} = \sigma^2 = \phi(\omega)$, so $\hat{\phi}(\omega)$ is an unbiased estimate.

3.2: Decompose the ARMA system as $x(t) = \frac{1}{A(z)}\,e(t)$ and $y(t) = B(z)x(t)$. Then {x(t)} is an AR(n) process. To find $\{r_x(k)\}$ from $\{\sigma^2, a_1, \ldots, a_n\}$, write the Yule–Walker equations as:
$$\begin{bmatrix} 1 & & & 0 \\ a_1 & 1 & & \\ \vdots & \ddots & \ddots & \\ a_n & \cdots & a_1 & 1 \end{bmatrix} \begin{bmatrix} r_x(0) \\ r_x(1) \\ \vdots \\ r_x(n) \end{bmatrix} + \begin{bmatrix} 0 & a_1 & a_2 & \cdots & a_n \\ 0 & a_2 & \cdots & a_n & 0 \\ \vdots & \vdots & & & \vdots \\ 0 & a_n & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix} \begin{bmatrix} r_x^*(0) \\ r_x^*(1) \\ \vdots \\ r_x^*(n) \end{bmatrix} = \begin{bmatrix} \sigma^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$
or, compactly, $A_1 r_x + A_2 r_x^c = [\sigma^2 \; 0 \; \cdots \; 0]^T$, which can be solved for $\{r_x(m)\}_{m=0}^{n}$. Then find $r_x(k)$ for $k > n$ from equation (3.3.4) and $r_x(k)$ for $k < 0$ from $r_x^*(-k)$. Finally,
$$r_y(k) = \sum_{j=0}^{m} \sum_{p=0}^{m} r_x(k + p - j)\, b_j b_p^*$$

3.4: $\sigma_b^2 = E\{|e_b(t)|^2\} = \begin{bmatrix}1 & \theta_b^T\end{bmatrix} R_{n+1} \begin{bmatrix}1 \\ \theta_b^c\end{bmatrix} = \begin{bmatrix}1 & \theta_b^*\end{bmatrix} R_{n+1}^c \begin{bmatrix}1 \\ \theta_b\end{bmatrix}$, giving $\theta_b = \theta_f$ and $\sigma_b^2 = \sigma_f^2$.

3.5(a):
$$R_{2m+1}^T \begin{bmatrix} c_m \\ \vdots \\ c_1 \\ 1 \\ d_1 \\ \vdots \\ d_m \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ \sigma_s^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

3.14: $c_\ell = \sum_{i=0}^{n} a_i \tilde{r}(\ell - i)$ for $0 \le \ell \le p$, where $\tilde{r}(k) = r(k)$ for $k \ge 1$, $\tilde{r}(0) = r(0)/2$, and $\tilde{r}(k) = 0$ for $k < 0$.

3.15(b): First solve for $b_1, \ldots, b_m$ from
$$\begin{bmatrix} c_n & c_{n-1} & \cdots & c_{n-m+1} \\ c_{n+1} & c_n & \cdots & c_{n-m+2} \\ \vdots & \vdots & \ddots & \vdots \\ c_{n+m-1} & c_{n+m-2} & \cdots & c_n \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix} = -\begin{bmatrix} c_{n+1} \\ c_{n+2} \\ \vdots \\ c_{n+m} \end{bmatrix}$$
Then $a_1, \ldots, a_n$ can be obtained from $a_k = c_k + \sum_{i=1}^{m} b_i c_{k-i}$.

4.2:
(a) $E\{x(t)\} = 0$; $r_x(k) = (\alpha^2 + \sigma_\alpha^2)\, e^{i\omega_0 k}$
(b) Let $p(\varphi) = \sum_{k=-\infty}^{\infty} c_k e^{-ik\varphi}$ be the Fourier series of $p(\varphi)$ for $\varphi \in [-\pi, \pi]$. Then $E\{x(t)\} = \alpha e^{i\omega_0 t}\, 2\pi c_1$. Thus, $E\{x(t)\} = 0$ if and only if either $\alpha = 0$ or $c_1 = 0$. In this case, $r_x(k)$ is the same as in part (a).

5.8: The height of the peak of the (unnormalized) Capon spectrum is
$$\left.\frac{1}{a^*(\omega) R^{-1} a(\omega)}\right|_{\omega = \omega_0} = \frac{m\alpha^2 + \sigma^2}{m}$$

Bibliography

Abrahamsson, R., A. Jakobsson, and P. Stoica (2004). Spatial Amplitude and Phase Estimation Method for Arbitrary Array Geometries. Technical report, IT Department, Uppsala University, Sweden.
Akaike, H. (1974). “A new look at the statistical model identification,” IEEE Transactions on Automatic Control 19, 716–723.
Akaike, H. (1978). “On the likelihood of a time series model,” The Statistician 27, 217–235.
Anderson, T. W. (1971). The Statistical Analysis of Time Series. New York: Wiley.
Aoki, M. (1987). State Space Modeling of Time Series. Berlin: Springer-Verlag.
Bangs, W. J. (1971). Array Processing with Generalized Beamformers. Ph.D. thesis, Yale University, New Haven, CT.
Barabell, A. J. (1983). “Improving the resolution performance of eigenstructure-based direction-finding algorithms,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Boston, MA, pp. 336–339.
Bartlett, M. S. (1948). “Smoothing periodograms for time series with continuous spectra,” Nature 161, 686–687. (reprinted in [Kesler 1986]).
Bartlett, M. S. (1950). “Periodogram analysis and continuous spectra,” Biometrika 37, 1–16.
Baysal, Ü. and R. Moses (2003). “On the geometry of isotropic arrays,” IEEE Transactions on Signal Processing 51(6), 1469–1478.
Beex, A. A. and L. L. Scharf (1981). “Covariance sequence approximation for parametric spectrum modeling,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-29(5), 1042–1052.
Besson, O. and P. Stoica (1999). “Nonlinear least-squares approach to frequency estimation and detection of sinusoidal signals with arbitrary envelope,” Digital Signal Processing: A Review Journal 9, 45–56.
Bhansali, R. J. (1980). “Autoregressive and window estimates of the inverse correlation function,” Biometrika 67, 551–566.
Bhansali, R. J. and D. Y. Downham (1977). “Some properties of the order of an autoregressive model selected by a generalization of Akaike’s FPE criterion,” Biometrika 64, 547–551.
Bienvenu, G. (1979). “Influence of the spatial coherence of the background noise on high resolution passive methods,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Washington, DC, pp. 306–309.
Blackman, R. B. and J. W. Tukey (1959). The Measurement of Power Spectra from the Point of View of Communication Engineering. New York: Dover.
Bloomfield, P. (1976). Fourier Analysis of Time Series — An Introduction. New York: Wiley.
Böhme, J. F. (1991). “Array processing,” in S. Haykin (Ed.), Advances in Spectrum Analysis and Array Processing, Volume 2, pp. 1–63. Englewood Cliffs, NJ: Prentice Hall.
Böttcher, A. and B. Silbermann (1983). Invertibility and Asymptotics of Toeplitz Matrices. Berlin: Akademie-Verlag.
Bracewell, R. N. (1986). The Fourier Transform and its Applications, 2nd Edition. New York: McGraw-Hill.
Bresler, Y. and A. Macovski (1986).
“Exact maximum likelihood parameter estimation of superimposed exponential signals in noise,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-34(5), 1081–1089.
Brillinger, D. R. (1981). Time Series — Data Analysis and Theory. New York: Holt, Rinehart, and Winston.
Brockwell, R. J. and R. A. Davis (1991). Time Series — Theory and Methods, 2nd Edition. New York: Springer-Verlag.
Broersen, P. M. T. (2000). “Finite sample criteria for autoregressive order selection,” IEEE Transactions on Signal Processing 48(12), 3550–3558.
Broersen, P. M. T. (2002). “Automatic spectral analysis with time series models,” IEEE Transactions on Instrumentation and Measurement 51(2), 211–216.
Bronez, T. P. (1992). “On the performance advantage of multitaper spectral analysis,” IEEE Transactions on Signal Processing 40(12), 2941–2946.
Burg, J. P. (1972). “The relationship between maximum entropy spectra and maximum likelihood spectra,” Geophysics 37, 375–376. (reprinted in [Childers 1978]).
Burg, J. P. (1975). Maximum Entropy Spectral Analysis. Ph.D. thesis, Stanford University.
Burnham, K. P. and D. R. Anderson (2002). Model Selection and Multi-Model Inference. New York: Springer.
Byrnes, C. L., T. T. Georgiou, and A. Lindquist (2000). “A new approach to spectral estimation: a tunable high-resolution spectral estimator,” IEEE Transactions on Signal Processing 48(11), 3189–3205.
Byrnes, C. L., T. T. Georgiou, and A. Lindquist (2001). “A generalized entropy criterion for Nevanlinna–Pick interpolation with degree constraint,” IEEE Transactions on Automatic Control 46(6), 822–839.
Cadzow, J. A. (1982). “Spectrum estimation: An overdetermined rational model equation approach,” Proceedings of the IEEE 70(9), 907–939.
Calvez, L. C. and P. Vilbé (1992). “On the uncertainty principle in discrete signals,” IEEE Transactions on Circuits and Systems 39(6), 394–395.
Cantoni, A. and P. Butler (1976).
“Eigenvalues and eigenvectors of symmetric centrosymmetric matrices,” Linear Algebra and its Applications 13(3), 275–288.
Capon, J. (1969). “High-resolution frequency-wavenumber spectrum analysis,” Proceedings of the IEEE 57(8), 1408–1418. (reprinted in [Childers 1978]).
Cavanaugh, J. E. (1997). “Unifying the derivations for the Akaike and corrected Akaike information criteria,” Statistics and Probability Letters 23, 201–208.
Childers, D. G. (Ed.) (1978). Modern Spectrum Analysis. New York: IEEE Press.
Choi, B. (1992). ARMA Model Identification. New York: Springer-Verlag.
Clark, M. P., L. Eldén, and P. Stoica (1997). “A computationally efficient implementation of 2D IQML,” in Proceedings of the 31st Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, pp. 1730–1734.
Clark, M. P. and L. L. Scharf (1994). “Two-dimensional modal analysis based on maximum likelihood,” IEEE Transactions on Signal Processing 42(6), 1443–1452.
Cleveland, W. S. (1972). “The inverse autocorrelations of a time series and their applications,” Technometrics 14, 277–298.
Cohen, L. (1995). Time-Frequency Analysis. Englewood Cliffs, NJ: Prentice Hall.
Cooley, J. W. and J. W. Tukey (1965). “An algorithm for the machine calculation of complex Fourier series,” Math. Computation 19, 297–301.
Cornwell, T. and A. Bridle (1996). Deconvolution Tutorial. Technical report, National Radio Astronomy Observatory.
Cox, H. (1973). “Resolving power and sensitivity to mismatch of optimum array processors,” Journal of the Acoustical Society of America 54, 771–785.
Cramér, H. (1946). Mathematical Methods of Statistics. Princeton, NJ: Princeton University Press.
Daniell, P. J. (1946). “Discussion of ‘On the theoretical specification and sampling properties of autocorrelated time-series’,” Journal of the Royal Statistical Society 8, 88–90.
de Waele, S. and P. M. T. Broersen (2003).
“Order selection for vector autoregressive models,” IEEE Transactions on Signal Processing 51(2), 427–433.
DeGraaf, S. R. (1994). “Sidelobe reduction via adaptive FIR filtering in SAR imagery,” IEEE Transactions on Image Processing 3(3), 292–301.
Delsarte, P. and Y. Genin (1986). “The split Levinson algorithm,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-34(3), 470–478.
Demeure, C. J. and C. T. Mullis (1989). “The Euclid algorithm and the fast computation of cross-covariance and autocovariance sequences,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-37(4), 545–552.
Dempster, A., N. Laird, and D. Rubin (1977). “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society 39, 1–38.
Djurić, P. (1998). “Asymptotic MAP criteria for model selection,” IEEE Transactions on Signal Processing 46(10), 2726–2735.
Doron, M., E. Doron, and A. Weiss (1993). “Coherent wide-band processing for arbitrary array geometry,” IEEE Transactions on Signal Processing 41(1), 414–417.
Doroslovacki, M. I. (1998). “Product of second moments in time and frequency for discrete-time signals and the uncertainty limit,” Signal Processing 67(1), 59–76.
Dumitrescu, B., I. Tabus, and P. Stoica (2001). “On the parameterization of positive real sequences and MA parameter estimation,” IEEE Transactions on Signal Processing 49(11), 2630–2639.
Durbin, J. (1959). “Efficient estimation of parameters in moving-average models,” Biometrika 46, 306–316.
Durbin, J. (1960). “The fitting of time series models,” Review of the International Institute of Statistics 28, 233–244.
Faurre, P. (1976). “Stochastic realization algorithms,” in R. K. Mehra and D. G. Lainiotis (Eds.), System Identification: Advances and Case Studies. London, England: Academic Press.
Feldman, D. D. and L. J. Griffiths (1994).
“A projection approach for robust adaptive beamforming,” IEEE Transactions on Signal Processing 42(4), 867–876.
Fisher, R. A. (1922). “On the mathematical foundations of theoretical statistics,” Philosophical Transactions of the Royal Society of London 222, 309–368.
Friedlander, B., M. Morf, T. Kailath, and L. Ljung (1979). “New inversion formulas for matrices classified in terms of their distance from Toeplitz matrices,” Linear Algebra and its Applications 27, 31–60.
Fuchs, J. J. (1987). “ARMA order estimation via matrix perturbation theory,” IEEE Transactions on Automatic Control AC-32(4), 358–361.
Fuchs, J. J. (1988). “Estimating the number of sinusoids in additive white noise,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-36(12), 1846–1854.
Fuchs, J. J. (1992). “Estimation of the number of signals in the presence of unknown correlated sensor noise,” IEEE Transactions on Signal Processing 40(5), 1053–1061.
Fuchs, J. J. (1996). “Rectangular Pisarenko method applied to source localization,” IEEE Transactions on Signal Processing 44(10), 2377–2383.
Georgiou, T. T. (1987). “Realization of power spectra from partial covariance sequences,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-35(4), 438–449.
Gersh, W. (1970). “Estimation of the autoregressive parameters of mixed autoregressive moving-average time series,” IEEE Transactions on Automatic Control AC-15(5), 583–588.
Ghogho, M. and A. Swami (1999). “Fast computation of the exact FIM for deterministic signals in colored noise,” IEEE Transactions on Signal Processing 47(1), 52–61.
Gini, F. and F. Lombardini (2002). “Multilook APES for multibaseline SAR interferometry,” IEEE Transactions on Signal Processing 50(7), 1800–1803.
Golub, G. H. and C. F. Van Loan (1989). Matrix Computations, 2nd Edition. Baltimore: The Johns Hopkins University Press.
Gray, R. M. (1972).
“On the asymptotic eigenvalue distribution of Toeplitz matrices,” IEEE Transactions on Information Theory IT-18, 725–730.
Hannan, E. and B. Wahlberg (1989). “Convergence rates for inverse Toeplitz matrix forms,” Journal of Multivariate Analysis 31, 127–135.
Hannan, E. J. and M. Deistler (1988). The Statistical Theory of Linear Systems. New York: Wiley.
Harris, F. J. (1978). “On the use of windows for harmonic analysis with the discrete Fourier transform,” Proceedings of the IEEE 66(1), 51–83. (reprinted in [Kesler 1986]).
Hayes III, M. H. (1996). Statistical Digital Signal Processing and Modeling. New York: Wiley.
Haykin, S. (Ed.) (1991). Advances in Spectrum Analysis and Array Processing, Volumes 1 and 2. Englewood Cliffs, NJ: Prentice Hall.
Haykin, S. (Ed.) (1995). Advances in Spectrum Analysis and Array Processing, Volume 3. Englewood Cliffs, NJ: Prentice Hall.
Heiser, W. J. (1995). “Convergent computation by iterative majorization: theory and applications in multidimensional data analysis,” in W. J. Krzanowski (Ed.), Recent Advances in Descriptive Multivariate Analysis, pp. 157–189. Oxford: Oxford University Press.
Högbom, J. (1974). “Aperture synthesis with a non-regular distribution of interferometer baselines,” Astronomy and Astrophysics, Supplement 15, 417–426.
Horn, R. A. and C. A. Johnson (1985). Matrix Analysis. Cambridge, England: Cambridge University Press.
Horn, R. A. and C. A. Johnson (1989). Topics in Matrix Analysis. Cambridge, England: Cambridge University Press.
Hua, Y. and T. Sarkar (1990). “Matrix pencil method for estimating parameters of exponentially damped/undamped sinusoids in noise,” IEEE Transactions on Acoustics, Speech, and Signal Processing 38(5), 814–824.
Hudson, J. E. (1981). Adaptive Array Principles. London: Peter Peregrinus.
Hurvich, C. and C. Tsai (1993).
“A corrected Akaike information criterion for vector autoregressive model selection,” Journal of Time Series Analysis 14, 271–279.
Hwang, J.-K. and Y.-C. Chen (1993). “A combined detection-estimation algorithm for the harmonic-retrieval problem,” Signal Processing 30(2), 177–197.
Iohvidov, I. S. (1982). Hankel and Toeplitz Matrices and Forms. Boston, MA: Birkhäuser.
Ishii, R. and K. Furukawa (1986). “The uncertainty principle in discrete signals,” IEEE Transactions on Circuits and Systems 33(10), 1032–1034.
Jakobsson, A., L. Marple, and P. Stoica (2000). “Computationally efficient two-dimensional Capon spectrum analysis,” IEEE Transactions on Signal Processing 48(9), 2651–2661.
Jakobsson, A. and P. Stoica (2000). “Combining Capon and APES for estimation of spectral lines,” Circuits, Systems, and Signal Processing 19, 159–169.
Janssen, P. and P. Stoica (1988). “On the expectation of the product of four matrix-valued Gaussian random variables,” IEEE Transactions on Automatic Control AC-33(9), 867–870.
Jansson, M. and P. Stoica (1999). “Forward-only and forward-backward sample covariances — a comparative study,” Signal Processing 77(3), 235–245.
Jenkins, G. M. and D. G. Watts (1968). Spectral Analysis and its Applications. San Francisco, CA: Holden-Day.
Johnson, D. H. and D. E. Dudgeon (1992). Array Signal Processing — Concepts and Methods. Englewood Cliffs, NJ: Prentice Hall.
Kailath, T. (1980). Linear Systems. Englewood Cliffs, NJ: Prentice Hall.
Kashyap, R. L. (1980). “Inconsistency of the AIC rule for estimating the order of autoregressive models,” IEEE Transactions on Automatic Control 25(5), 996–998.
Kashyap, R. L. (1982). “Optimal choice of AR and MA parts in autoregressive moving average models,” IEEE Transactions on Pattern Analysis and Machine Intelligence 4(2), 99–104.
Kay, S. M. (1988). Modern Spectral Estimation, Theory and Application. Englewood Cliffs, NJ: Prentice Hall.
Kesler, S. B. (Ed.)
(1986). Modern Spectrum Analysis II. New York: IEEE Press.
Kinkel, J. F., J. Perl, L. Scharf, and A. Stubberud (1979). “A note on covariance-invariant digital filter design and autoregressive-moving average spectral estimation,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-27(2), 200–202.
Koopmans, L. H. (1974). The Spectral Analysis of Time Series. New York: Academic Press.
Kullback, S. and R. A. Leibler (1951). “On information and sufficiency,” Annals of Mathematical Statistics 22, 79–86.
Kumaresan, R. (1983). “On the zeroes of the linear prediction-error filter for deterministic signals,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-31(1), 217–220.
Kumaresan, R., L. L. Scharf, and A. K. Shaw (1986). “An algorithm for pole-zero modeling and spectral analysis,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-34(6), 637–640.
Kumaresan, R. and D. W. Tufts (1983). “Estimating the angles of arrival of multiple plane waves,” IEEE Transactions on Aerospace and Electronic Systems AES-19, 134–139.
Kung, S. Y., K. S. Arun, and D. V. B. Rao (1983). “State-space and singular-value decomposition-based approximation methods for the harmonic retrieval problem,” J. Optical Soc. Amer. 73, 1799–1811.
Lacoss, R. T. (1971). “Data adaptive spectral analysis methods,” Geophysics 36, 134–148. (reprinted in [Childers 1978]).
Lagunas, M., M. Santamaria, A. Gasull, and A. Moreno (1986). “Maximum likelihood filters in spectral estimation problems,” Signal Processing 10(1), 7–18.
Larsson, E., J. Li, and P. Stoica (2003). “High-resolution nonparametric spectral analysis: Theory and applications,” in Y. Hua, A. Gershman, and Q. Cheng (Eds.), High-Resolution and Robust Signal Processing. New York: Marcel Dekker.
Lee, J. and D. C. Munson Jr. (1995). “Effectiveness of spatially-variant apodization,” in Proceedings of the International Conference on Image Processing, Volume 1, pp. 147–150.
Levinson, N. (1947).
“The Wiener RMS (root mean square) criterion in filter design and prediction,” Journal of Math. and Physics 25, 261–278.
Li, J. and P. Stoica (1996a). “An adaptive filtering approach to spectral estimation and SAR imaging,” IEEE Transactions on Signal Processing 44(6), 1469–1484.
Li, J. and P. Stoica (1996b). “Efficient mixed-spectrum estimation with applications to target feature extraction,” IEEE Transactions on Signal Processing 44(2), 281–295.
Li, J., P. Stoica, and Z. Wang (2003). “On robust Capon beamforming and diagonal loading,” IEEE Transactions on Signal Processing 51(7), 1702–1715.
Li, J., P. Stoica, and Z. Wang (2004). “Doubly constrained robust Capon beamformer,” IEEE Transactions on Signal Processing 52.
Linhart, H. and W. Zucchini (1986). Model Selection. New York: Wiley.
Ljung, L. (1987). System Identification — Theory for the User. Englewood Cliffs, NJ: Prentice Hall.
Markel, J. D. (1971). “FFT pruning,” IEEE Transactions on Audio and Electroacoustics AU-19(4), 305–311.
Marple, L. (1987). Digital Spectral Analysis with Applications. Englewood Cliffs, NJ: Prentice Hall.
Marzetta, T. L. (1983). “A new interpretation for Capon’s maximum likelihood method of frequency-wavenumber spectrum estimation,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-31(2), 445–449.
Mayne, D. Q. and F. Firoozan (1982). “Linear identification of ARMA processes,” Automatica 18, 461–466.
McCloud, M., L. Scharf, and C. Mullis (1999). “Lag-windowing and multiple-data-windowing are roughly equivalent for smooth spectrum estimation,” IEEE Transactions on Signal Processing 47(3), 839–843.
McKelvey, T. and M. Viberg (2001). “A robust frequency domain subspace algorithm for multi-component harmonic retrieval,” in Proceedings of the 35th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, pp. 68–72.
McLachlan, G. J. and T. Krishnan (1997). The EM Algorithm and Extensions.
New York: Wiley.
McQuarrie, A. D. R. and C.-L. Tsai (1998). Regression and Time Series Model Selection. Singapore: World Scientific Publishing.
Moon, T. K. (1996). “The expectation-maximization algorithm,” IEEE Signal Processing Magazine 13, 47–60.
Moses, R. and A. A. Beex (1986). “A comparison of numerator estimators for ARMA spectra,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-34(6), 1668–1671.
Moses, R., V. Šimonytė, P. Stoica, and T. Söderström (1994). “An efficient linear method for ARMA spectral estimation,” International Journal of Control 59(2), 337–356.
Mullis, C. T. and L. L. Scharf (1991). “Quadratic estimators of the power spectrum,” in S. Haykin (Ed.), Advances in Spectrum Analysis and Array Processing. Englewood Cliffs, NJ: Prentice Hall.
Musicus, B. (1985). “Fast MLM power spectrum estimation from uniformly spaced correlations,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-33(6), 1333–1335.
Naidu, P. S. (1996). Modern Spectrum Analysis of Time Series. Boca Raton, FL: CRC Press.
Ninness, B. (2003). “The asymptotic CRLB for the spectrum of ARMA processes,” IEEE Transactions on Signal Processing 51(6), 1520–1531.
Onn, R. and A. O. Steinhardt (1993). “Multi-window spectrum estimation — a linear algebraic approach,” International Journal on Adaptive Control and Signal Processing 7, 103–116.
Oppenheim, A. V. and R. W. Schafer (1989). Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice Hall.
Ottersten, B., P. Stoica, and R. Roy (1998). “Covariance matching estimation techniques for array signal processing applications,” Digital Signal Processing 8, 185–210.
Ottersten, B., M. Viberg, P. Stoica, and A. Nehorai (1993). “Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing,” in S. Haykin, J. Litva, and T. J. Shephard (Eds.), Radar Array Processing, pp. 99–151.
New York: Springer-Verlag.
Papoulis, A. (1977). Signal Analysis. New York: McGraw-Hill.
Paulraj, A., R. Roy, and T. Kailath (1986). “A subspace rotation approach to signal parameter estimation,” Proceedings of the IEEE 74(7), 1044–1046.
Percival, D. B. and A. T. Walden (1993). Spectral Analysis for Physical Applications — Multitaper and Conventional Univariate Techniques. Cambridge, England: Cambridge University Press.
Pillai, S. U. (1989). Array Signal Processing. New York: Springer-Verlag.
Pisarenko, V. F. (1973). “The retrieval of harmonics from a covariance function,” Geophysical Journal of the Royal Astronomical Society 33, 347–366. (reprinted in [Kesler 1986]).
Porat, B. (1994). Digital Processing of Random Signals — Theory and Methods. Englewood Cliffs, NJ: Prentice Hall.
Porat, B. (1997). A Course in Digital Signal Processing. New York: Wiley.
Priestley, M. B. (1981). Spectral Analysis and Time Series. London, England: Academic Press.
Priestley, M. B. (1997). “Detection of periodicities,” in T. S. Rao, M. B. Priestley, and O. Lessi (Eds.), Applications of Time Series Analysis in Astronomy and Meteorology, pp. 65–88. London, England: Chapman and Hall.
Proakis, J. G., C. M. Rader, F. Ling, and C. L. Nikias (1992). Advanced Digital Signal Processing. New York: Macmillan.
Rao, B. D. and K. S. Arun (1992). “Model based processing of signals: A state space approach,” Proceedings of the IEEE 80(2), 283–309.
Rao, B. D. and K. V. S. Hari (1993). “Weighted subspace methods and spatial smoothing: Analysis and comparison,” IEEE Transactions on Signal Processing 41(2), 788–803.
Rao, C. R. (1945). “Information and accuracy attainable in the estimation of statistical parameters,” Bulletin of the Calcutta Mathematical Society 37, 81–91.
Riedel, K. and A. Sidorenko (1995). “Minimum bias multiple taper spectral estimation,” IEEE Transactions on Signal Processing 43(1), 188–195.
Rissanen, J. (1978). “Modeling by the shortest data description,” Automatica 14(5), 465–471.
Rissanen, J. (1982). “Estimation of structure by minimum description length,” Circuits, Systems, and Signal Processing 1(3–4), 395–406.
Roy, R. and T. Kailath (1989). “ESPRIT—Estimation of signal parameters via rotational invariance techniques,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-37(7), 984–995.
Sakamoto, Y., M. Ishiguro, and G. Kitagawa (1986). Akaike Information Criterion Statistics. Tokyo: KTK Scientific Publishers.
Sando, S., A. Mitra, and P. Stoica (2002). “On the Cramér-Rao bound for model-based spectral analysis,” IEEE Signal Processing Letters 9(2), 68–71.
Scharf, L. L. (1991). Statistical Signal Processing — Detection, Estimation, and Time Series Analysis. Reading, MA: Addison-Wesley.
Schmidt, R. O. (1979). “Multiple emitter location and signal parameter estimation,” in Proc. RADC Spectral Estimation Workshop, Rome, NY, pp. 243–258. (reprinted in [Kesler 1986]).
Schuster, A. (1898). “On the investigation of hidden periodicities with application to a supposed twenty-six-day period of meteorological phenomena,” Terr. Magn. 3(1), 13–41.
Schuster, A. (1900). “The periodogram of magnetic declination as obtained from the records of the Greenwich Observatory during the years 1871–1895,” Trans. Cambridge Philos. Soc. 18, 107–135.
Schwarz, G. (1978a). “Estimating the dimension of a model,” Annals of Statistics 6, 461–464.
Schwarz, U. J. (1978b). “Mathematical-statistical description of the iterative beam removing technique (method CLEAN),” Astronomy and Astrophysics 65, 345–356.
Seghouane, A.-K., M. Bekara, and G. Fleury (2003). “A small sample model selection criterion based on Kullback’s symmetric divergence,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Volume 6, Hong Kong, pp. 145–148.
Slepian, D. (1954).
“Estimation of signal parameters in the presence of noise,” Transactions of the IRE Professional Group on Information Theory 3, 68–89.
Slepian, D. (1964). “Prolate spheroidal wave functions, Fourier analysis and uncertainty — IV,” Bell System Technical Journal 43, 3009–3057. (see also Bell System Technical Journal, vol. 40, pp. 43–64, 1961; vol. 44, pp. 1745–1759, 1965; and vol. 57, pp. 1371–1429, 1978).
Söderström, T. and P. Stoica (1989). System Identification. London, England: Prentice Hall International.
Stankwitz, H. C., R. J. Dallaire, and J. R. Fienup (1994). “Spatially variant apodization for sidelobe control in SAR imagery,” in Record of the 1994 IEEE National Radar Conference, pp. 132–137.
Stewart, G. W. (1973). Introduction to Matrix Computations. New York: Academic Press.
Stoica, P. and O. Besson (2000). “Maximum likelihood DOA estimation for constant-modulus signal,” Electronics Letters 36(9), 849–851.
Stoica, P., O. Besson, and A. Gershman (2001). “Direction-of-arrival estimation of an amplitude-distorted wavefront,” IEEE Transactions on Signal Processing 49(2), 269–276.
Stoica, P., P. Eykhoff, P. Jannsen, and T. Söderström (1986). “Model structure selection by cross-validation,” International Journal of Control 43(11), 1841–1878.
Stoica, P., B. Friedlander, and T. Söderström (1987a). “Approximate maximum-likelihood approach to ARMA spectral estimation,” International Journal of Control 45(4), 1281–1310.
Stoica, P., B. Friedlander, and T. Söderström (1987b). “Instrumental variable methods for ARMA models,” in C. T. Leondes (Ed.), Control and Dynamic Systems — Advances in Theory and Applications, Volume 25, pp. 79–150. New York: Academic Press.
Stoica, P., A. Jakobsson, and J. Li (1997).
“Cisoid parameter estimation in the colored noise case — asymptotic Cramér-Rao bound, maximum likelihood, and nonlinear least squares,” IEEE Transactions on Signal Processing 45(8), 2048–2059.
Stoica, P. and E. G. Larsson (2001). “Comments on ‘Linearization method for finding Cramér-Rao bounds in signal processing’,” IEEE Transactions on Signal Processing 49(12), 3168–3169.
Stoica, P., E. G. Larsson, and A. B. Gershman (2001). “The stochastic CRB for array processing: a textbook derivation,” IEEE Signal Processing Letters 8(5), 148–150.
Stoica, P., E. G. Larsson, and J. Li (2000). “Adaptive filter-bank approach to restoration and spectral analysis of gapped data,” The Astronomical Journal 120(4), 2163–2173.
Stoica, P., H. Li, and J. Li (1999). “A new derivation of the APES filter,” IEEE Signal Processing Letters 6(8), 205–206.
Stoica, P., T. McKelvey, and J. Mari (2000). “MA estimation in polynomial time,” IEEE Transactions on Signal Processing 48(7), 1999–2012.
Stoica, P. and R. Moses (1990). “On biased estimators and the unbiased Cramér-Rao lower bound,” Signal Processing 21, 349–350.
Stoica, P., R. Moses, B. Friedlander, and T. Söderström (1989). “Maximum likelihood estimation of the parameters of multiple sinusoids from noisy measurements,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-37(3), 378–392.
Stoica, P., R. Moses, T. Söderström, and J. Li (1991). “Optimal high-order Yule-Walker estimation of sinusoidal frequencies,” IEEE Transactions on Signal Processing 39(6), 1360–1368.
Stoica, P. and A. Nehorai (1986). “An asymptotically efficient ARMA estimator based on sample covariances,” IEEE Transactions on Automatic Control AC-31(11), 1068–1071.
Stoica, P. and A. Nehorai (1987). “On stability and root location of linear prediction models,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-35, 582–584.
Stoica, P. and A. Nehorai (1989a).
“MUSIC, maximum likelihood, and Cramér-Rao bound,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-37(5), 720–741.
Stoica, P. and A. Nehorai (1989b). “Statistical analysis of two nonlinear least-squares estimators of sine-wave parameters in the colored-noise case,” Circuits, Systems, and Signal Processing 8(1), 3–15.
Stoica, P. and A. Nehorai (1990). “Performance study of conditional and unconditional direction-of-arrival estimation,” IEEE Transactions on Signal Processing SP-38(10), 1783–1795.
Stoica, P. and A. Nehorai (1991). “Performance comparison of subspace rotation and MUSIC methods for direction estimation,” IEEE Transactions on Signal Processing 39(2), 446–453.
Stoica, P. and B. Ottersten (1996). “The evil of superefficiency,” Signal Processing 55(1), 133–136.
Stoica, P., N. Sandgren, Y. Selén, L. Vanhamme, and S. Van Huffel (2003). “Frequency-domain method based on the singular value decomposition for frequency-selective NMR spectroscopy,” Journal of Magnetic Resonance 165(1), 80–88.
Stoica, P. and Y. Selén (2004a). “Cyclic minimizers, majorization techniques, and the expectation-maximization algorithm: A refresher,” IEEE Signal Processing Magazine 21(1), 112–114.
Stoica, P. and Y. Selén (2004b). “Model order selection: A review of the AIC, GIC, and BIC rules,” IEEE Signal Processing Magazine 21(2).
Stoica, P., Y. Selén, and J. Li (2004). Multi-Model Approach to Model Selection. Technical report, IT Department, Uppsala University, Sweden.
Stoica, P. and K. C. Sharman (1990). “Maximum likelihood methods for direction-of-arrival estimation,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-38(7), 1132–1143.
Stoica, P., T. Söderström, and F. Ti (1989). “Asymptotic properties of the high-order Yule-Walker estimates of sinusoidal frequencies,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-37(11), 1721–1734.
Stoica, P. and T.
Söderström (1991). “Statistical analysis of MUSIC and subspace rotation estimates of sinusoidal frequencies,” IEEE Transactions on Signal Processing SP-39(8), 1836–1847.
Stoica, P. and T. Sundin (2001). “Nonparametric NMR spectroscopy,” Journal of Magnetic Resonance 152(1), 57–69.
Stoica, P., Z. Wang, and J. Li (2003). “Robust Capon beamforming,” IEEE Signal Processing Letters 10(6), 172–175.
Strang, G. (1988). Linear Algebra and its Applications. Orlando, FL: Harcourt Brace Jovanovich.
Sturm, J. F. (1999). “Using SeDuMi, a Matlab toolbox for optimization over symmetric cones,” Optimization Methods and Software 11–12, 625–653. Software available on-line at
Therrien, C. W. (1992). Discrete Random Signals and Statistical Signal Processing. Englewood Cliffs, NJ: Prentice Hall.
Thomson, D. J. (1982). “Spectrum estimation and harmonic analysis,” Proceedings of the IEEE 72(9), 1055–1096.
Tufts, D. W. and R. Kumaresan (1982). “Estimation of frequencies of multiple sinusoids: Making linear prediction perform like maximum likelihood,” Proceedings of the IEEE 70(9), 975–989.
Umesh, S. and D. W. Tufts (1996). “Estimation of parameters of exponentially damped sinusoids using fast maximum likelihood estimation with application to NMR spectroscopy data,” IEEE Transactions on Signal Processing 44(9), 2245–2259.
Van Huffel, S. and J. Vandewalle (1991). The Total Least Squares Problem: Computational Aspects and Analysis. Philadelphia, PA: SIAM.
van Overschee, P. and B. de Moor (1996). Subspace Identification for Linear Systems: Theory - Implementation - Methods. Boston, MA: Kluwer Academic.
Van Trees, H. L. (1968). Detection, Estimation, and Modulation Theory, Part I. New York: Wiley.
Van Trees, H. L. (2002). Optimum Array Processing (Part IV of Detection, Estimation, and Modulation Theory). New York: Wiley.
Van Veen, B. D. and K. M. Buckley (1988).
“Beamforming: A versatile approach to spatial filtering,” IEEE ASSP Magazine 5(2), 4–24.
Viberg, M. (1995). “Subspace-based methods for the identification of linear time-invariant systems,” Automatica 31(12), 1835–1851.
Viberg, M. and B. Ottersten (1991). “Sensor array processing based on subspace fitting,” IEEE Transactions on Signal Processing 39(5), 1110–1121.
Viberg, M., B. Ottersten, and T. Kailath (1991). “Detection and estimation in sensor arrays using weighted subspace fitting,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP–34(11), 2436–2449.
Viberg, M., P. Stoica, and B. Ottersten (1995). “Array processing in correlated noise fields based on instrumental variables and subspace fitting,” IEEE Transactions on Signal Processing 43(5), 1187–1195.
Vostry, Z. (1975). “New algorithm for polynomial spectral factorization with quadratic convergence, Part I,” Kybernetika 11, 415–422.
Vostry, Z. (1976). “New algorithm for polynomial spectral factorization with quadratic convergence, Part II,” Kybernetika 12, 248–259.
Walker, G. (1931). “On periodicity in series of related terms,” Proceedings of the Royal Society of London 131, 518–532.
Wax, M. and T. Kailath (1985). “Detection of signals by information theoretic criteria,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-33(2), 387–392.
Wei, W. (1990). Time Series Analysis. New York: Addison-Wesley.
Welch, P. D. (1967). “The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms,” IEEE Transactions on Audio and Electroacoustics AU–15(2), 70–76. (reprinted in [Kesler 1986]).
Wilson, G. (1969). “Factorization of the covariance generating function of a pure moving average process,” SIAM Journal on Numerical Analysis 6(1), 1–7.
Ying, C. J., L. C. Potter, and R. Moses (1994).
“On model order determination for complex exponential signals: Performance of an FFT-initialized ML algorithm,” in Proceedings of IEEE Seventh SP workshop on SSAP, Quebec City, Quebec, pp. 43–46.
Yule, G. U. (1927). “On a method of investigating periodicities in disturbed series, with special reference to Wolfer’s sunspot numbers,” Philos. Trans. R. Soc. London 226, 267–298. (reprinted in [Kesler 1986]).
Ziskind, I. and M. Wax (1988). “Maximum likelihood localization of multiple sources by alternating projection,” IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP–36(10), 1553–1560.

References Grouped by Subject

Books on Spectral Analysis
[Bloomfield 1976] [Bracewell 1986] [Childers 1978] [Cohen 1995] [Hayes III 1996] [Haykin 1991] [Haykin 1995] [Kay 1988] [Kesler 1986] [Koopmans 1974] [Marple 1987] [Naidu 1996] [Percival and Walden 1993] [Priestley 1981]

Books about Spectral Analysis and Allied Topics
[Aoki 1987] [Porat 1994] [Proakis, Rader, Ling, and Nikias 1992] [Scharf 1991] [Söderström and Stoica 1989] [Therrien 1992] [van Overschee and de Moor 1996]

Books on Linear Systems and Signals
[Hannan and Deistler 1988] [Kailath 1980] [Oppenheim and Schafer 1989] [Porat 1997]

Books on Array Signal Processing
[Haykin 1991] [Haykin 1995] [Hudson 1981] [Pillai 1989] [Van Trees 2002]

Works on Time Series, Estimation Theory, and Statistics
[Anderson 1971] [Bhansali 1980] [Brillinger 1981] [Brockwell and Davis 1991] [Cleveland 1972] [Cramér 1946] [Dempster, Laird, and Rubin 1977] [Fisher 1922] [Heiser 1995] [Janssen and Stoica 1988] [McLachlan and Krishnan 1997] [Moon 1996] [Rao 1945] [Slepian 1954] [Stoica and Moses 1990] [Stoica and Ottersten 1996] [Stoica and Selén 2004a] [Viberg 1995] [Wei 1990]

Works on Matrix Analysis and Linear Algebra
[Böttcher and Silbermann 1983] [Cantoni and Butler 1976] [Golub and Van Loan 1989] [Gray 1972] [Horn and
Johnson 1985] [Horn and Johnson 1989] [Iohvidov 1982] [Stewart 1973] [Strang 1988] [Van Huffel and Vandewalle 1991]

Works on Nonparametric Temporal Spectral Analysis
(a) Historical [Bartlett 1948] [Bartlett 1950] [Daniell 1946] [Schuster 1898] [Schuster 1900]
(b) Classical [Blackman and Tukey 1959] [Burg 1972] [Cooley and Tukey 1965] [Harris 1978] [Jenkins and Watts 1968] [Lacoss 1971] [Slepian 1964] [Thomson 1982] [Welch 1967]
(c) More Recent [Bronez 1992] [Calvez and Vilbé 1992] [DeGraaf 1994] [Doroslovacki 1998] [Ishii and Furukawa 1986] [Jakobsson, Marple, and Stoica 2000] [Lagunas, Santamaria, Gasull, and Moreno 1986] [Larsson, Li, and Stoica 2003] [Lee and Munson Jr. 1995] [Li and Stoica 1996a] [McCloud, Scharf, and Mullis 1999] [Mullis and Scharf 1991] [Musicus 1985] [Onn and Steinhardt 1993] [Riedel and Sidorenko 1995] [Stankwitz, Dallaire, and Fienup 1994] [Stoica, Larsson, and Li 2000] [Stoica, Li, and Li 1999]

Works on Parametric Temporal Rational Spectral Analysis
(a) Historical [Yule 1927] [Walker 1931]
(b) Classical [Burg 1975] [Cadzow 1982] [Durbin 1959] [Durbin 1960] [Gersh 1970] [Levinson 1947]
(c) More Recent [Byrnes, Georgiou, and Lindquist 2000] [Byrnes, Georgiou, and Lindquist 2001] [Choi 1992] [Delsarte and Genin 1986] [Dumitrescu, Tabus, and Stoica 2001] [Fuchs 1987] [Kinkel, Perl, Scharf, and Stubberud 1979] [Mayne and Firoozan 1982] [Moses and Beex 1986] [Moses, Šimonytė, Stoica, and Söderström 1994] [Stoica, Friedlander, and Söderström 1987a] [Stoica, Friedlander, and Söderström 1987b] [Stoica, McKelvey, and Mari 2000] [Stoica and Nehorai 1986] [Stoica and Nehorai 1987]

Works on Parametric Temporal Line Spectral Analysis
(a) Classical [Bangs 1971] [Högbom 1974] [Kumaresan 1983] [Kung, Arun, and Rao 1983] [Paulraj, Roy, and Kailath 1986] [Pisarenko 1973]
[Tufts and Kumaresan 1982]
(b) More Recent [Besson and Stoica 1999] [Bresler and Macovski 1986] [Clark, Eldén, and Stoica 1997] [Clark and Scharf 1994] [Cornwell and Bridle 1996] [Fuchs 1988] [Hua and Sarkar 1990] [Jakobsson, Marple, and Stoica 2000] [Kumaresan, Scharf, and Shaw 1986] [Li and Stoica 1996b] [McKelvey and Viberg 2001] [Schwarz 1978a] [Stoica, Besson, and Gershman 2001] [Stoica, Jakobsson, and Li 1997] [Stoica, Moses, Friedlander, and Söderström 1989] [Stoica, Moses, Söderström, and Li 1991] [Stoica and Nehorai 1989b] [Stoica and Söderström 1991] [Stoica, Söderström, and Ti 1989] [Umesh and Tufts 1996]

Works on Nonparametric Spatial Spectral Analysis
(a) Classical [Capon 1969]
(b) More Recent [Abrahamsson, Jakobsson, and Stoica 2004] [Bangs 1971] [Feldman and Griffiths 1994] [Gini and Lombardini 2002] [Johnson and Dudgeon 1992] [Li, Stoica, and Wang 2003] [Li, Stoica, and Wang 2004] [Marzetta 1983] [Stoica, Wang, and Li 2003] [Van Veen and Buckley 1988]

Works on Parametric Spatial Spectral Analysis
(a) Classical [Barabell 1983] [Bienvenu 1979] [Kumaresan and Tufts 1983] [Roy and Kailath 1989] [Schmidt 1979] [Wax and Kailath 1985]
(b) More Recent [Böhme 1991] [Doron, Doron, and Weiss 1993] [Fuchs 1992] [Fuchs 1996] [Ottersten, Viberg, Stoica, and Nehorai 1993] [Pillai 1989] [Rao and Hari 1993] [Stoica and Besson 2000] [Stoica, Besson, and Gershman 2001] [Stoica and Nehorai 1989a] [Stoica and Nehorai 1990] [Stoica and Nehorai 1991] [Stoica and Sharman 1990] [Viberg and Ottersten 1991] [Viberg, Ottersten, and Kailath 1991] [Viberg, Stoica, and Ottersten 1995] [Ziskind and Wax 1988]

Works on Model Order Selection
(a) Classical [Akaike 1974] [Akaike 1978] [Bhansali and Downham 1977] [Kashyap 1980] [Kashyap 1982] [Rissanen 1978] [Rissanen 1982] [Schwarz 1978b]
(b) More Recent [Broersen 2000] [Broersen 2002] [Burnham and
Anderson 2002] [Cavanaugh 1997] [Choi 1992] [de Waele and Broersen 2003] [Djurić 1998] [Hurvich and Tsai 1993] [Linhart and Zucchini 1986] [McQuarrie and Tsai 1998] [Sakamoto, Ishiguro, and Kitagawa 1986] [Seghouane, Bekara, and Fleury 2003] [Stoica, Eykhoff, Jannsen, and Söderström 1986] [Stoica and Selén 2004b]

Index

Akaike information criterion, 387–391 corrected, 391 generalized, 390–392 all-pole signals, 90 amplitude and phase estimation (APES) method, 244–247, 291 for gapped data, 247–250 for spatial spectra, 305–312 for two–dimensional signals, 254–256 amplitude spectrum, 241, 246 Capon estimates of, 242–244 angle of arrival, 264 aperture, 263 APES method, see amplitude and phase estimation method apodization, 59–64 AR process, see autoregressive process AR spectral estimation, see autoregressive spectral estimation ARMA process, see autoregressive moving average process array aperture of, 263 beamforming resolution, 320 beamspace processing, 323 beamwidth, 278, 321 broadband signals in, 269 coherent signals in, 281, 325 isotropic, 322 L–shaped, 321 narrowband, 269, 271 planar, 263 uniform linear, 271–273 array model, 265–273 autocorrelation function, 117 autocorrelation method, 93 autocovariance sequence computation using FFT, 55–56 computer generation of, 18 definition of, 5 estimates, 23 estimation variance, 72 extensions, 118–119, 174 for signals with unknown means, 71 for sinusoidal signals, 145, 146 generation from ARMA parameters, 130 mean square convergence of, 170 of ARMA processes, 88–89 properties, 5–6 autoregressive (AR) process covariance structure, 88 definition of, 88 stability of, 133 autoregressive moving average (ARMA) process covariance structure, 88 definition of, 88 multivariate, 109–117 state–space equations, 109 autoregressive moving average spectral estimation, 103–117 least squares method, 106–108 modified Yule–Walker method, 103–106 multivariate, 113–117 autoregressive spectral
estimation, 90–94 autocorrelation method, 93 Burg method, 119–122 covariance method, 90, 93 least squares method, 91–94 postwindow method, 93 prewindow method, 93 Yule–Walker method, 90 backward prediction, 117, 131 bandpass signal, 266 bandwidth approximate formula, 77 definition of, 67 equivalent, 40, 54, 69, 224 Bartlett method, 49–50 Bartlett window, 29, 42 baseband signal, 266 basis linearly parameterized, 193–198 null space, 193–198 Bayesian information criterion, 392–395 beamforming, 276–279, 288–290 beamforming method, 294 and CLEAN, 312–317 beamspace processing, 323 beamwidth, 278, 321, 322 BIC rule, 392–395 Blackman window, 42 Blackman–Tukey method, 37–39 computation using FFT, 57–59 nonnegativeness property, 39 block–Hankel matrix, 113 broadband signal, 269 Burg method, 119–122 CAPES method, 247 Capon method, 222–231, 290–294 as a matched filter, 258 comparison with APES, 246–247 constrained, 298–305 derivation of, 222–227, 258 for damped sinusoids, 241–244 for DOA estimation, 279–280 for two–dimensional signals, 254–256 relationship to AR methods, 228–231, 235–238 robust, 294–305 spectrum of, 258 stochastic signal, 290–291 Carathéodory parameterization, 299 carrier frequency, 265 Cauchy–Schwartz inequality, 258, 279, 301, 304, 316, 344–345 for functions, 345 for vectors, 344 centrosymmetric matrix, 169, 318 Chebyshev inequality, 201 Chebyshev window, 41 chi-squared distribution, 176 Cholesky factor, 128, 342 circular Gaussian distribution, 76, 317, 361, 367, 368 circular white noise, 32, 36 CLEAN algorithm, 312–317 coherency spectrum, 64–66 coherent signals, 281, 325 column space, 328 complex demodulation, 268 complex envelope, 268 complex modulation, 267 complex white noise, 32 concave function, 183 condition number, 202 and AR parameter estimation, 105 and forward–backward approach, 201 definition of, 349 confidence interval, 75 consistent estimator, 355 consistent linear equations, 347–350
constant-modulus signal, 288–289 constrained Capon method, 298–305 continuous spectra, 86 convergence in probability, 201 mean square, 170, 172, 201 uniform, 259 corrected Akaike information criterion, 391 correlation coefficient, 13 correlogram method, 23–25 covariance definition of, 5 matrix, 5 covariance fitting, 291–294, 315 using CLEAN, 315–317 covariance fitting criterion, 126 covariance function, see autocovariance sequence covariance matrix diagonalization of, 133 eigenvalue decomposition of, 297, 302 persymmetric, 169, 318 properties of, 5 covariance method, 93 covariance sequence, see autocovariance sequence Cramér–Rao bound, 355–376, 379 for Gaussian distributions, 359–364 for general distributions, 358–359 for line spectra, 364–365 for rational spectra, 365–367 for spatial spectra, 367–376 for unknown model order, 357 cross covariance sequence, 18 cross–spectrum, 12, 18, 64 cyclic minimization, 180–181 cyclic minimizer, 249 damped sinusoidal signals, 193–198, 241–244 Daniell method, 52–54 delay operator, 10 Delsarte–Genin Algorithm, 97–101 demodulation, 268 diagonal loading, 299–305 Dirac impulse, 146 direction of arrival, 264 direction of arrival estimation, 263–286 beamforming, 276–279 Capon method, 279–280 ESPRIT method, 285–286 Min–Norm method, 285 MUSIC method, 284 nonlinear least squares method, 281 nonparametric methods, 273–280 parametric methods, 281–286 Pisarenko method, 284 Yule–Walker method, 283 direction vector uncertainty, 294–305 Dirichlet kernel, 30 discrete Fourier transform (DFT), 25 linear transformation interpretation, 73 discrete signals, 2 discrete spectrum, 146 discrete–time Fourier transform (DTFT), 3 discrete–time system, 10 finite impulse response (FIR), 17 frequency response, 210 minimum phase, 88, 129 transfer function, 210 displacement operator, 123 displacement rank, 125 Doppler frequency, 320 Durbin’s method, 102, 108 efficiency, statistical, 357 eigenvalue, 331 of a
matrix product, 333 eigenvalue decomposition, 297, 302, 330–335 eigenvector, 331 EM algorithm, 179–185 energy spectral density, 3 Capon estimates of, 242–244 of damped sinusoids, 241 ergodic, 170 ESPRIT method and min–norm, 202 combined with HOYW, 200 for DOA estimation, 285–286 for frequency estimation, 166–167 frequency selective, 185–193 statistical accuracy of, 167 estimate consistent, 86, 135, 147, 152, 176, 260, 279, 355 statistically efficient, 357 unbiased, 355 Euclidean vector norm, 338 exchange matrix, 346 Expectation-Maximization algorithm, 179–185 expected value, 5 exponentially damped sinusoids, 241–244 extended Rayleigh quotient, 335 far field, 263 fast Fourier transform (FFT), 26–27 for two–sided sequences, 19 pruning in, 28 radix–two, 26–27 two–dimensional, 252, 256 zero padding and, 27 Fejer kernel, 29 filter bank methods, 207–222 and periodogram, 210–211, 231–235 APES, 244–247, 291 for gapped data, 247–250 for two–dimensional signals, 253–254 refined, 212–222 spatial APES, 305–312 Fisher information matrix, 359 flatness, spectral, 132 forward prediction, 117, 130 forward–backward approach, 168–170 frequency, 2, 3, 8 angular, 3 conversion, 3 resolution, 31 scaling, 14 spatial, 272 frequency band, 185 frequency estimation, 146–170 ESPRIT method, 166–167 forward–backward approach, 168–170 frequency-selective ESPRIT, 185–193 FRES-ESPRIT, 185–193 high–order Yule–Walker method, 155–159 Min–Norm method, 164–166 modified MUSIC method, 163 MUSIC method, 159–162 nonlinear least squares, 151–155 Pisarenko method, 162 spurious estimates, 163 two–dimensional, 193–198 frequency-selective method, 185–193 Frobenius norm, 339, 348, 350 GAPES method, 247–250 gapped data, 247–250 Gaussian distribution circular, 361, 367, 368 Gaussian random variable circular, 76 Cramér–Rao bound for, 359–364 moment property, 33 generalized Akaike information criterion, 390 generalized inverse, 349 Gohberg–Semencul formula,
122–125 grating lobes, 322 Hadamard matrix product, 342, 372 Hamming window, 42 Hankel matrix, 346 block, 113 Hanning window, 42 Heisenberg uncertainty principle, 67 Hermitian matrix, 330, 333–335 Hermitian square root, 342 hypothesis testing, 175 idempotent, 282, 339 impulse response, 19, 68, 210, 213, 214, 216, 265 in–phase component, 268 inconsistent linear equations, 350–353 information matrix, 359 interior point methods, 129 inverse covariances, 238 Jensen’s inequality, 183, 375 Kaiser window, 41, 42 kernel, 328 Kronecker delta, 4 Kronecker product, 253, 254 Kullback-Leibler information metric, 384–385 Lagrange multiplier, 296–297, 302 leading submatrix, 341 least squares, 18, 104, 164, 290, 291, 307, 315 spectral approximation, 17 with quadratic constraints, 296 least squares method, 90–94, 228, 243, 245, 248, 251, 254, 256 least squares solution, 350 Levinson–Durbin algorithm, 96 split, 97–101 likelihood function, 182, 356, 358, 360, 378 line spectrum, 146 linear equations consistent, 347–350 inconsistent, 350–353 least squares solution, 350 minimum norm solution, 348 systems of, 347–353 linear prediction, 91, 117, 119, 130–132 linear predictive modeling, 92 linearly parameterized basis, 193–198 lowpass signal, 266 MA covariance parameterization, 127 MA parameter estimation, 125–129 MA process, see moving average process majorization, 181–182 majorizing function, 181 matrix centrosymmetric, 169, 318 Cholesky factor, 342 condition, 202, 349 eigenvalue decomposition, 330–335 exchange, 346 fraction, 137 Frobenius norm, 339, 348, 350 Hankel, 346 idempotent, 282, 339 inversion lemma, 347 Moore–Penrose pseudoinverse, 349 orthogonal, 330 partition, 343, 347 persymmetric, 169, 318 positive (semi)definite, 341–345 QR decomposition, 351 rank, 328 rank deficient, 329 semiunitary, 330, 334 singular value decomposition, 113, 157, 336–340 square root, 318, 342 Toeplitz, 346 trace, 331, 332
unitary, 157, 166, 202, 330, 333, 336, 344, 351 Vandermonde, 345 matrix fraction description, 137 matrix inversion lemma, 246, 347 maximum a posteriori detection, 381–384 maximum likelihood estimate, 75, 151, 356, 363, 373, 377 of covariance matrix, 317–319 maximum likelihood estimation, 378–381 regularity conditions, 379 maximum likelihood method, 182 MDL principle, 395 mean square convergence, 170 mean squared error, 28 Min–norm and ESPRIT, 202 Min–Norm method and ESPRIT, 202 for DOA estimation, 285 for frequency estimation, 164–166 root, 164 spectral, 164 minimization cyclic, 180–181 majorization, 181–182 quadratic, 353–354 relaxation algorithms, 181 minimum description length, 395 minimum norm constraint, 286–288 minimum norm solution, 348 minimum phase, 88, 129 missing data, 247–250 model order selection, 357–358, 377–398 AIC rule, 387–391 BIC rule, 392–395 corrected AIC rule, 391 generalized AIC rule, 391–392 generalized information criterion, 390 Kullback-Leibler metric, 384–385 maximum a posteriori, 381–384 MDL rule, 395 multimodel, 397 modified MUSIC method, 163, 193–198 modified Yule–Walker method, 103–106 modulation, 267 Moore–Penrose pseudoinverse, 291, 349 moving average noise, 200 moving average parameter estimation, 125–129 moving average process covariance structure, 88 definition of, 88 parameter estimation, 125–129 reflection coefficients of, 134 moving average spectral estimation, 101–103, 125–129 multimodel order selection, 397 multiple signal classification, see MUSIC method multivariate systems, 109–117 MUSIC method for DOA estimation, 284 modified, 163, 325 root, 161 spectral, 161 subspace fitting interpretation, 324 narrowband, 271 nilpotent matrix, 124 NLS method, see nonlinear least squares method noise complex white, 32 noise gain, 299–301 nonlinear least squares method for direction estimation, 281–282 for frequency estimation, 151–155 nonsingular, 329 normal equations, 90 null space, 160, 328 null space basis, 193–198 order
selection, see model order selection orthogonal complement, 338 orthogonal matrix, 330 orthogonal projection, 161, 188, 189, 338 overdetermined linear equations, 104, 347, 350–353 Padé approximation, 136 parameter estimation maximum likelihood, 378–381 PARCOR coefficient, 96 Parseval’s theorem, 4, 126 partial autocorrelation sequence, 117 partial correlation coefficients, 96 partitioned matrix, 343, 347 periodogram and frequency estimation, 153 bias analysis of, 28–32 definition of, 22 FFT computation of, 25–27 for two–dimensional signals, 251–252 properties of, 28–36 variance analysis of, 32–36 windowed, 47 periodogram method, 22 periodogram–based methods Bartlett, 49–50 Daniell, 52–54 refined, 48–54 Welch, 50–52 persymmetric matrix, 169, 318 Pisarenko method ARMA model derivation of, 200 for DOA estimation, 284 for frequency estimation, 159, 162 relation to MUSIC, 162 planar wave, 264, 271 positive (semi)definite matrices, 341–345 postwindow method, 93 power spectral density and linear systems, 11 continuous, 86 definition of, 6, 7 properties of, 8 rational, 87 prediction backward, 117 forward, 130 linear, 91, 117, 119, 130–132, 367 prediction error, 91, 367 prewindow method, 93 principal submatrix, 341 probability density function, 68, 75, 356, 359 projection matrix, 188, 189 projection operator, 338 QR decomposition, 351 quadratic minimization, 353–354 quadratic program, 129 quadrature component, 268 random signals, 2 range space, 160, 328 rank, 189 rank deficient, 329 rank of a matrix, 328 rank of a matrix product, 329 rational spectra, 87 Rayleigh quotient, 334 extended, 335 rectangular window, 42 reflection coefficient, 96 properties of, 134 region of convergence, 87 RELAX algorithm, 181 relaxation algorithms, 181 resolution and time–bandwidth product, 68 and window design, 40–41 and zero padding, 27 for filter bank methods, 208 for parametric methods, 155, 204 frequency, 31 limit, 31 of beamforming method,
278, 320 of Blackman–Tukey method, 38 of Capon method, 225, 230, 238 of common windows, 42 of Daniell method, 53 of periodogram, 22, 31 of periodogram–based methods, 83 spatial, 278, 320–323 super–resolution, 139, 140, 147 Riccati equation, 111 Rihaczek distribution, 15 robust Capon method, 299–305 root MUSIC for DOA estimation, 284 for frequency estimation, 161 row space, 328 sample covariance, 23, 49, 55, 71–73, 75, 78 sample covariance matrix ML estimates of, 317–319 sampling Shannon sampling theorem, 8 spatial, 263, 272, 273 temporal, 3 semi-parametric estimation, 312 semidefinite quadratic program, 129 semiunitary matrix, 330, 334 Shannon sampling theorem spatial, 273 temporal, 8 sidelobe, 31, 41, 42 signal modeling, 88 similarity transformation, 111, 167, 331 singular value decomposition (SVD), 113, 157, 292, 336–340 sinusoidal signals amplitude estimation, 146 ARMA model, 149 covariance matrix model, 149 damped, 193–198, 241–244 frequency estimation, 146–170 models of, 144, 148–150 nonlinear regression model, 148 phase estimation, 146 two–dimensional, 251–256 skew–symmetric vector, 346 Slepian sequences, 215–216 two–dimensional, 253–254 smoothed periodogram, 221 smoothing filter, 131 spatial filter, 275, 278 spatial frequency, 272 spatial sampling, 273 spatial spectral estimation problem, 263 spectral analysis high–resolution, 147 nonparametric, 2 parametric, 2 semi-parametric, 312 super–resolution, 147 spectral density energy, 3 power, 4 spectral estimation definition of, 1, 12 spectral factorization, 87, 126, 163 spectral flatness, 132 spectral line analysis, 146 spectral LS criterion, 126 spectral MUSIC for DOA estimation, 284 for frequency estimation, 161 spectrum coherency, 12–14 continuous, 86 cross, 18 discrete, 146 rational, 87 split Levinson algorithm, 97 square root of a matrix, 318, 342 stability for AR models, 133 of AR estimates, 90 of Padé approximation, 136 of Yule–Walker estimates, 94, 133
state–space equations for ARMA process, 109 minimality, 112 nonuniqueness of, 111 statistically efficient estimator, 357 steering vector uncertainty, 294–305 structure indices, 109 subarrays, 285 submatrix leading, 341 principal, 341 subspace and state–space representations, 109, 112–117 noise, 161 signal, 161 super–resolution, 139, 147 symmetric matrix, 330 symmetric vector, 346 synthetic aperture, 319 systems of linear equations, 347–353 taper, 47 Taylor series expansion, 355 time width definition of, 67 equivalent, 40, 50, 54, 69 time–bandwidth product, 40–41, 66–71 time–frequency distributions, 15 Toeplitz matrix, 346 total least squares, 104, 158, 164, 167, 352 trace of a matrix, 331 trace of a matrix product, 332 transfer function, 10 two–dimensional sinusoidal signals, 193–198 two–dimensional spectral analysis APES method, 254–256 Capon method, 254–256 periodogram, 251–252 refined filter bank method, 253–254 two–sided sequences, 19 unbiased estimate, 355 uncertainty principle, 67 uniform linear array, 271–273 beamforming resolution, 320 spatial APES, 305–312 unitary matrix, 157, 166, 202, 330, 333, 336, 344, 351 Vandermonde matrix, 345 vector skew–symmetric, 346 symmetric, 346 vectorization, 253, 255 wave field, 263 planar, 264 Welch method, 50–52 white noise complex, 32 real, 36 whitening filter, 144 Wiener–Hopf equation, 18 window function Bartlett, 42 Chebyshev, 41 common, 41–42 data and frequency dependent, 59 design of, 39 Hamming, 42 Hanning, 42 Kaiser, 41 leakage, 31 main lobe, 30 rectangular, 42 resolution, 31 resolution–variance tradeoffs, 40–41 sidelobes, 31 Yule–Walker equations, 90 Yule–Walker method for AR processes, 90 for DOA estimation, 283 for frequency estimation, 155–159 modified, 103–106 overdetermined, 104 stability property, 94 zero padding, 26–27 zeroes extraneous, 286, 288 in ARMA model, 87, 108
761
https://www.khanacademy.org/python-program/fibonacci-sequence/5708583045873664
762
https://www.mathopenref.com/coordincenter.html
Math Open Reference — Incenter of a triangle (Coordinate Geometry)

Given the coordinates of the three vertices of a triangle ABC, the coordinates of the incenter O are

O_x = (a·A_x + b·B_x + c·C_x) / p
O_y = (a·A_y + b·B_y + c·C_y) / p

where:
- A_x and A_y are the x and y coordinates of the point A (and similarly for B and C)
- a, b and c are the side lengths opposite vertices A, B and C
- p is the perimeter of the triangle (a + b + c)

Recall that the incenter of a triangle is the point where the triangle's three angle bisectors intersect. It is also the center of the triangle's incircle. The coordinates of the incenter are the weighted average of the coordinates of the vertices, where the weights are the lengths of the corresponding sides. The formula first requires you to calculate the three side lengths of the triangle. To do this use the method described in Distance between two points. Once you know the three side lengths, you calculate the perimeter as the sum of these three lengths.
Limitations: In the interest of clarity in the applet above, the coordinates are rounded off to integers. This can cause calculations to be slightly off.
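The procedure described above (side lengths via the distance formula, perimeter as their sum, then a side-length-weighted average of the vertices) can be sketched in a few lines. This is an illustrative Python version, not code from the page; the function name is ours:

```python
import math

def incenter(A, B, C):
    """Incenter of triangle ABC: the side-length-weighted average of the vertices."""
    # Side lengths opposite each vertex: a = |BC|, b = |CA|, c = |AB|.
    a = math.dist(B, C)
    b = math.dist(C, A)
    c = math.dist(A, B)
    p = a + b + c  # perimeter
    ox = (a * A[0] + b * B[0] + c * C[0]) / p
    oy = (a * A[1] + b * B[1] + c * C[1]) / p
    return ox, oy
```

For the 3-4-5 right triangle with vertices (0, 0), (4, 0) and (0, 3), this returns (1.0, 1.0), which agrees with the known inradius r = (3 + 4 − 5)/2 = 1 of a right triangle (incenter at (r, r)).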
https://bioinformatics.stackexchange.com/questions/5281/how-to-deal-with-duplicate-genes-having-different-expression-values
r - How to deal with duplicate genes having different expression values? - Bioinformatics Stack Exchange
How to deal with duplicate genes having different expression values?

Asked Oct 25, 2018, viewed 5k times, score 2.

I have RNA-Seq data in FPKM units. In the dataframe df, the first column is gene_name and the other 100 columns are samples. Usually, if it is counts data, I do the following:

df2 <- aggregate(. ~ gene_name, data = df, max)

I'm not sure what to do with the FPKM data when there are duplicate genes with different FPKM values for the same sample. Let's say it looks like this:

gene_name sample1 sample2 sample3
5S_rRNA 0.3206147 0.3327312 0.377578
5S_rRNA 0.3342000 0.0000000 0.1305166

Any suggestions please.

Tags: r, rna-seq, gene-expression, fpkm — asked Oct 25, 2018 at 7:48 by beginner

Comments:
How were the reads mapped? Do you have access to the raw counts data? – llrs, Oct 25, 2018
I have downloaded TCGA FPKM data. I would like to use the data for co-expression analysis with cemitool. – beginner, Oct 25, 2018

2 Answers

Answer (score 5):

I assume you're familiar with the various issues surrounding FPKMs, so I'll not expound upon them. As a general rule, you should be using gene IDs rather than gene names, since the former are unique while the latter are not.
If you only have access to data quantified on gene names, then the appropriate way to merge the FPKM/RPKM values is with a length-weighted average:

FPKM_gene = (FPKM_copy1 · Length_copy1 + FPKM_copy2 · Length_copy2) / (Length_copy1 + Length_copy2)

As an aside, rRNA expression levels will tend to be wrong, since rRNAs are normally excluded by either poly-A enrichment or the use of ribo-zero.

answered Oct 25, 2018 at 9:57 by Devon Ryan

Comments:
Sure, I will use gene IDs. But let's say I have counts data; before doing differential analysis, do you think this is the right way to filter duplicates: aggregate(. ~ gene_name, data = df, max)? – beginner
If you have counts then sum them. – Devon Ryan
So, if the first column is Ensembl IDs and all other columns are samples in a dataframe df, is this the right way to sum the duplicate Ensembl IDs: aggregate(. ~ Ensembl_ids, data = df, FUN = sum)? – beginner
There should never be duplicate Ensembl IDs; they're unique. You only have issues like this with gene names or UCSC IDs. – Devon Ryan
Oh yes, so with gene names, is this the way to sum the counts of duplicates: aggregate(. ~ gene_names, data = df, FUN = sum)? – beginner
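The length-weighted merge described above can be sketched in plain Python (the thread itself works in R; the helper name and the numbers here are made up for illustration):

```python
def merge_fpkm(copies):
    """Length-weighted average of FPKM values for duplicate rows of one gene.

    copies: list of (fpkm, length) pairs, one per duplicate copy.
    """
    total_length = sum(length for _, length in copies)
    weighted = sum(fpkm * length for fpkm, length in copies)
    return weighted / total_length

# Two hypothetical copies with FPKMs 0.32 and 0.33 and lengths 119 and 121:
merged = merge_fpkm([(0.32, 119), (0.33, 121)])  # ~0.325, pulled toward the longer copy
```

When both copies have the same length this reduces to a plain mean, and when the FPKMs are equal the merged value is unchanged.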
Answer (score 3):

The best way to deal with this is to use unique gene IDs, for example Ensembl accession numbers. So use the Ensembl GTF annotation when quantifying the read counts, not the gene symbols. Just to illustrate: when I look for "5S_rRNA" in Ensembl's annotation, I see 18 different "genes" with that gene symbol, so which 2 you have is unclear.

grep "5S_rRNA" ensembl_symbol.txt
"ENSG00000252830" "5S_rRNA" "ENSG00000276442" "5S_rRNA" "ENSG00000274408" "5S_rRNA" "ENSG00000274059" "5S_rRNA" "ENSG00000276861" "5S_rRNA" "ENSG00000274759" "5S_rRNA" "ENSG00000280646" "5S_rRNA" "ENSG00000277411" "5S_rRNA" "ENSG00000201285" "5S_rRNA" "ENSG00000212595" "5S_rRNA" "ENSG00000277418" "5S_rRNA" "ENSG00000277049" "5S_rRNA" "ENSG00000274097" "5S_rRNA" "ENSG00000277488" "5S_rRNA" "ENSG00000274663" "5S_rRNA" "ENSG00000283433" "5S_rRNA" "ENSG00000275305" "5S_rRNA" "ENSG00000278457" "5S_rRNA"

edited Oct 25, 2018 at 11:13, answered Oct 25, 2018 at 9:44 by benn

Comments:
Yes, you are right. But can I take the median of those duplicates? Something like aggregate(. ~ gene_name, data = df, median)? – beginner
Expanding this answer: it seems the data were mapped to Ensembl IDs, so I would recommend keeping them as they are, and changing to gene symbols only when looking for biological meaning (or when talking to others). – llrs
I would strongly advise not to use FPKM at all; it is pretty clear now that they are not the right values to use for any analysis. If you still want to use them, keep your two genes apart as they were. – benn
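For the raw-counts case discussed in the comments (sum the rows that share a gene name, which is what aggregate(. ~ gene_name, data = df, FUN = sum) does in R), here is a plain-Python sketch with made-up numbers:

```python
def sum_duplicates(rows):
    """Sum per-sample counts across rows sharing a gene name."""
    totals = {}
    for gene, counts in rows:
        if gene in totals:
            totals[gene] = [a + b for a, b in zip(totals[gene], counts)]
        else:
            totals[gene] = list(counts)
    return totals

rows = [
    ("5S_rRNA", [12, 7, 30]),   # duplicate gene name, three samples
    ("5S_rRNA", [3, 0, 5]),
    ("ACTB", [900, 850, 1000]),
]
collapsed = sum_duplicates(rows)  # {"5S_rRNA": [15, 7, 35], "ACTB": [900, 850, 1000]}
```

This is appropriate for raw counts only; for FPKMs, the answers above recommend a length-weighted average instead.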
https://math.stackexchange.com/questions/162515/integral-using-euler-substitution
calculus - Integral - using Euler Substitution - Mathematics Stack Exchange
Integral - using Euler Substitution

Asked Jun 24, 2012, viewed 2k times, score 6.

I've been trying to solve one simple integral with Euler substitution several times, but I can't find where I'm going wrong. The integral, together with the answer given, is:

$$\int \frac{1}{x\sqrt{x^2+x+1}}\,dx = \log(x) - \log\left(2\sqrt{x^2+x+1}+x+2\right) + \text{constant}$$

The problem is, I cannot get this result. Below is my solution. I've checked it many times; it must be something very obvious that I'm missing.

Let $\sqrt{x^2+x+1} = t - x$. Then

$$x^2+x+1 = t^2 - 2xt + x^2 \implies x(1+2t) = t^2-1 \implies x = \frac{t^2-1}{1+2t}$$

$$dx = \left(\frac{t^2-1}{1+2t}\right)'dt = \frac{2t(1+2t) - 2(t^2-1)}{(1+2t)^2}\,dt = \frac{2t+4t^2-2t^2+2}{(1+2t)^2}\,dt = \frac{2(t^2+t+1)}{(1+2t)^2}\,dt$$

$$\sqrt{x^2+x+1} = t - x = t - \frac{t^2-1}{1+2t} = \frac{t^2+t+1}{1+2t}$$

so

$$\int \frac{dx}{x\sqrt{x^2+x+1}} = 2\int \frac{t^2+t+1}{(1+2t)^2}\cdot\frac{1+2t}{t^2-1}\cdot\frac{1+2t}{t^2+t+1}\,dt = 2\int \frac{dt}{t^2-1}$$

Partial fractions:

$$\frac{1}{t^2-1} = \frac{1}{(t+1)(t-1)} = \frac{A}{t+1} + \frac{B}{t-1},\qquad A+B=0,\ B-A=1 \implies B=\tfrac12,\ A=-\tfrac12$$

hence

$$2\int \frac{dt}{t^2-1} = \int \frac{dt}{t-1} - \int \frac{dt}{t+1} = \ln|t-1| - \ln|t+1| = \ln\left|\frac{t-1}{t+1}\right|$$

Substituting back $t = \sqrt{x^2+x+1} + x$:

$$\ln\left|\frac{t-1}{t+1}\right| = \ln\left|\frac{\sqrt{x^2+x+1}+x-1}{\sqrt{x^2+x+1}+x+1}\right|$$

I'll appreciate any help. Thanks in advance!
calculus integration

edited Jun 12, 2020 at 10:38 by CommunityBot; asked Jun 24, 2012 at 18:22 by cypressx

Comments:
I've typed your scanned document in LaTeX to ensure readability, but I may have inadvertently introduced changes from the original. If so, I apologize, and feel free to change any errors I made (or anything else for that matter). – Zev Chonoles, Jun 24, 2012
See here and here for how to format your mathematics expressions with LaTeX, and see here for how to use Markdown formatting. If you need to format more advanced math, there are many excellent LaTeX references on the internet, including Stack Exchange's own TeX.SE site. If you see a piece of LaTeX you want to know the code for on the site, you can right click on it, go to "Show Math As", then choose "TeX Commands". – Zev Chonoles, Jun 24, 2012
Hi Zev, thanks for the tip. I'll definitely use LaTeX next time. – cypressx, Jun 24, 2012

3 Answers

Answer (score 3):

What you have done is correct. All you need to do is to rewrite it in a different form.
$$\ln\left(\sqrt{x^2+x+1}+x-1\right) = \ln\!\left(\frac{\left(\sqrt{x^2+x+1}+x-1\right)\left(\sqrt{x^2+x+1}-x+1\right)}{\sqrt{x^2+x+1}-x+1}\right) = \ln\!\left(x^2+x+1-(x^2-2x+1)\right) - \ln\!\left(\sqrt{x^2+x+1}-x+1\right) = \ln(3x) - \ln\!\left(\sqrt{x^2+x+1}-x+1\right)$$

(the extra $\ln 3$ is a constant and gets absorbed into the constant of integration, so up to a constant this is $\ln(x) - \ln\left(\sqrt{x^2+x+1}-x+1\right)$). Hence,

$$\begin{aligned}
\ln\!\left(\frac{\sqrt{x^2+x+1}+x-1}{\sqrt{x^2+x+1}+x+1}\right)
&= \ln\!\left(\sqrt{x^2+x+1}+x-1\right) - \ln\!\left(\sqrt{x^2+x+1}+x+1\right)\\
&= \ln(x) - \ln\!\left(\sqrt{x^2+x+1}-x+1\right) - \ln\!\left(\sqrt{x^2+x+1}+x+1\right) + \text{constant}\\
&= \ln(x) - \ln\!\left(\left(\sqrt{x^2+x+1}+1\right)^2 - x^2\right) + \text{constant}\\
&= \ln(x) - \ln\!\left(x^2+x+1+2\sqrt{x^2+x+1}+1-x^2\right) + \text{constant}\\
&= \ln(x) - \ln\!\left(2\sqrt{x^2+x+1}+x+2\right) + \text{constant}
\end{aligned}$$

edited Jun 24, 2012 at 18:37, answered Jun 24, 2012 at 18:31 by user17762

Comments:
Thanks! I'm wondering, when I have to rewrite answers, how do I know the rewriting is enough and the answer is in its best form? – cypressx
@cypressx There is no best form. Your answer is just as "best" as Wolfram Alpha's. – user17762

Answer (score 3):

Looks about right. In your expression, multiply top and bottom of the thing inside the log by $\sqrt{x^2+x+1}-(x-1)$.
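The algebra above can be sanity-checked numerically with nothing but the standard library: differentiate the quoted antiderivative and compare it with the integrand, and confirm that the asker's form differs from it only by the constant $\ln 3$. This check is ours, not part of the thread:

```python
import math

def integrand(x):
    return 1.0 / (x * math.sqrt(x * x + x + 1))

def F_book(x):
    # Antiderivative quoted in the question: ln(x) - ln(2*sqrt(x^2+x+1) + x + 2)
    return math.log(x) - math.log(2 * math.sqrt(x * x + x + 1) + x + 2)

def F_asker(x):
    # The asker's result: ln|(t - 1)/(t + 1)| with t = sqrt(x^2+x+1) + x
    t = math.sqrt(x * x + x + 1) + x
    return math.log(abs((t - 1) / (t + 1)))

# Central-difference derivative of F_book matches the integrand at x = 1.7,
# and F_asker(x) - F_book(x) equals the constant ln 3 for every x > 0.
x, h = 1.7, 1e-6
dF = (F_book(x + h) - F_book(x - h)) / (2 * h)
```

Both facts hold numerically, confirming that the two antiderivatives are equally valid.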
edited Jun 24, 2012 at 18:48, answered Jun 24, 2012 at 18:29 by André Nicolas

Answer (score 0):

You may use the substitution $x = 1/t$, or the Euler substitution $\sqrt{x^2+x+1} = tx - 1$.

answered Aug 3, 2014 at 20:25 by user167788
https://www.advancedconverter.com/other-converters/fuel-consumption/liters-per-100-km-to-kilometers-per-liter
Convert Liters per 100 km to Kilometers per liter

Conversion: liters per 100 km to kilometers per liter (kpl)

To convert liters per 100 km to kilometers per liter, divide 100 by the liters-per-100-km value:

km/L = 100 / (L/100 km)

Liters per 100 km is a fuel consumption indicator: it states how many liters a vehicle consumes every 100 km. This tool converts liters per 100 km to kilometers per liter (l/100km to kpl) and vice versa.
These quantities (lt/100km and km/lt) are inversely proportional: when one increases, the other decreases. For example, 1 liter per 100 km = 100 kilometers per liter. Fill in one of the two fields and the conversion happens automatically.

Formula for liters per 100 km in kilometers per liter (l/100km in kpl): kpl = 100 / (lt per 100km)

How to convert liters per 100 km to kilometers per liter

To convert liters per 100 kilometers (L/100 km) to kilometers per liter (km/L), follow these steps:

Understand the relationship: liters per 100 kilometers (L/100 km) measures how many liters of fuel are used to travel 100 kilometers, while kilometers per liter (km/L) measures how many kilometers can be traveled using one liter of fuel.

Apply the conversion formula: km/L = 100 / (L/100 km)

Step-by-step guide: identify the value in L/100 km (this is your fuel consumption rate; call it X), then divide 100 by X: km/L = 100 / X.

Example conversion: given a fuel consumption of 8 L/100 km, applying the formula gives km/L = 100 / 8 = 12.5. So, if a car consumes 8 liters of fuel per 100 kilometers, it has a fuel efficiency of 12.5 kilometers per liter.

Summary: Formula: km/L = 100 / (L/100 km). Example: given 8 L/100 km, km/L = 100 / 8 = 12.5 km/L.

How many kilometers per liter are 7 liters per 100 km? 7 liters per 100 kilometers is approximately 14.29 kilometers per liter.

8 liters per 100km to kilometers per liter: to convert 8 liters per 100 kilometers (L/100 km) to kilometers per liter (km/L), use the conversion formula km/L = 100 / (L/100 km). Here, the L/100 km value is 8.
So, applying the formula: km/L = 100 / 8 = 12.5. Therefore, 8 liters per 100 kilometers is equivalent to 12.5 kilometers per liter. How many kilometers per liter are 10 liters per 100 km? 10 liters per 100 kilometers is equivalent to 10 kilometers per liter. Common conversions from liters per 100 km to kilometers per liter How many kilometers per liter are 2 liters per 100 km? 2 liters per 100 kilometers is equivalent to 50 kilometers per liter. How many kilometers per liter are 3 liters per 100 km? 3 liters per 100 kilometers is equivalent to 33.33 kilometers per liter. How many kilometers per liter are 4 liters per 100 km? 4 liters per 100 kilometers is equivalent to 25 kilometers per liter. How many kilometers per liter are 5 liters per 100 km? 5 liters per 100 kilometers is equivalent to 20 kilometers per liter. How many kilometers per liter are 6 liters per 100 km? 6 liters per 100 kilometers is equivalent to 16.67 kilometers per liter. How many kilometers per liter are 9 liters per 100 km? 9 liters per 100 kilometers is equivalent to 11.11 kilometers per liter. How many kilometers per liter are 11 liters per 100 km? 11 liters per 100 kilometers is equivalent to 9.09 kilometers per liter. How many kilometers per liter are 12 liters per 100 km? 12 liters per 100 kilometers is equivalent to 8.33 kilometers per liter. How many kilometers per liter are 13 liters per 100 km? 13 liters per 100 kilometers is equivalent to 7.69 kilometers per liter. How many kilometers per liter are 14 liters per 100 km? 14 liters per 100 kilometers is equivalent to 7.14 kilometers per liter. How many kilometers per liter are 15 liters per 100 km? 15 liters per 100 kilometers is equivalent to 6.67 kilometers per liter. How many kilometers per liter are 16 liters per 100 km? 16 liters per 100 kilometers is equivalent to 6.25 kilometers per liter. How many kilometers per liter are 17 liters per 100 km?
17 liters per 100 kilometers is equivalent to 5.88 kilometers per liter. How many kilometers per liter are 18 liters per 100 km? 18 liters per 100 kilometers is equivalent to 5.56 kilometers per liter. How many kilometers per liter are 19 liters per 100 km? 19 liters per 100 kilometers is equivalent to 5.26 kilometers per liter. How many kilometers per liter are 20 liters per 100 km? 20 liters per 100 kilometers is equivalent to 5 kilometers per liter. How many kilometers per liter are 21 liters per 100 km? 21 liters per 100 kilometers is equivalent to 4.76 kilometers per liter. How many kilometers per liter are 22 liters per 100 km? 22 liters per 100 kilometers is equivalent to 4.55 kilometers per liter. How many kilometers per liter are 23 liters per 100 km? 23 liters per 100 kilometers is equivalent to 4.35 kilometers per liter. How many kilometers per liter are 24 liters per 100 km? 24 liters per 100 kilometers is equivalent to 4.17 kilometers per liter. How many kilometers per liter are 25 liters per 100 km? 25 liters per 100 kilometers is equivalent to 4 kilometers per liter. How many kilometers per liter are 26 liters per 100 km? 26 liters per 100 kilometers is equivalent to 3.85 kilometers per liter. How many kilometers per liter are 27 liters per 100 km? 27 liters per 100 kilometers is equivalent to 3.70 kilometers per liter. How many kilometers per liter are 28 liters per 100 km? 28 liters per 100 kilometers is equivalent to 3.57 kilometers per liter. How many kilometers per liter are 29 liters per 100 km? 29 liters per 100 kilometers is equivalent to 3.45 kilometers per liter. How many kilometers per liter are 30 liters per 100 km? 30 liters per 100 kilometers is equivalent to 3.33 kilometers per liter. How many kilometers per liter are 31 liters per 100 km? 31 liters per 100 kilometers is equivalent to 3.23 kilometers per liter. How many kilometers per liter are 32 liters per 100 km? 
32 liters per 100 kilometers is equivalent to 3.13 kilometers per liter. How many kilometers per liter are 33 liters per 100 km? 33 liters per 100 kilometers is equivalent to 3.03 kilometers per liter. How many kilometers per liter are 34 liters per 100 km? 34 liters per 100 kilometers is equivalent to 2.94 kilometers per liter. How many kilometers per liter are 35 liters per 100 km? 35 liters per 100 kilometers is equivalent to 2.86 kilometers per liter. How many kilometers per liter are 36 liters per 100 km? 36 liters per 100 kilometers is equivalent to 2.78 kilometers per liter. How many kilometers per liter are 37 liters per 100 km? 37 liters per 100 kilometers is equivalent to 2.70 kilometers per liter. How many kilometers per liter are 38 liters per 100 km? 38 liters per 100 kilometers is equivalent to 2.63 kilometers per liter. How many kilometers per liter are 39 liters per 100 km? 39 liters per 100 kilometers is equivalent to 2.56 kilometers per liter. How many kilometers per liter are 40 liters per 100 km? 40 liters per 100 kilometers is equivalent to 2.5 kilometers per liter. How many kilometers per liter are 41 liters per 100 km? 41 liters per 100 kilometers is equivalent to 2.44 kilometers per liter. How many kilometers per liter are 42 liters per 100 km? 42 liters per 100 kilometers is equivalent to 2.38 kilometers per liter. How many kilometers per liter are 43 liters per 100 km? 43 liters per 100 kilometers is equivalent to 2.33 kilometers per liter. How many kilometers per liter are 44 liters per 100 km? 44 liters per 100 kilometers is equivalent to 2.27 kilometers per liter. How many kilometers per liter are 45 liters per 100 km? 45 liters per 100 kilometers is equivalent to 2.22 kilometers per liter. How many kilometers per liter are 46 liters per 100 km? 46 liters per 100 kilometers is equivalent to 2.17 kilometers per liter. How many kilometers per liter are 47 liters per 100 km? 
47 liters per 100 kilometers is equivalent to 2.13 kilometers per liter. How many kilometers per liter are 48 liters per 100 km? 48 liters per 100 kilometers is equivalent to 2.08 kilometers per liter. How many kilometers per liter are 49 liters per 100 km? 49 liters per 100 kilometers is equivalent to 2.04 kilometers per liter. How many kilometers per liter are 50 liters per 100 km? 50 liters per 100 kilometers is equivalent to 2 kilometers per liter. Frequently asked questions about converting from liters per 100 km to kilometers per liter 51 liters per 100 km to kilometers per liter: 51 liters per 100 kilometers is equivalent to 1.96 kilometers per liter. 52 liters per 100 km to kilometers per liter: 52 liters per 100 kilometers is equivalent to 1.92 kilometers per liter. 53 liters per 100 km to kilometers per liter: 53 liters per 100 kilometers is equivalent to 1.89 kilometers per liter. 54 liters per 100 km to kilometers per liter: 54 liters per 100 kilometers is equivalent to 1.85 kilometers per liter. 55 liters per 100 km to kilometers per liter: 55 liters per 100 kilometers is equivalent to 1.82 kilometers per liter. 56 liters per 100 km to kilometers per liter: 56 liters per 100 kilometers is equivalent to 1.79 kilometers per liter. 57 liters per 100 km to kilometers per liter: 57 liters per 100 kilometers is equivalent to 1.75 kilometers per liter. 58 liters per 100 km to kilometers per liter: 58 liters per 100 kilometers is equivalent to 1.72 kilometers per liter. 59 liters per 100 km to kilometers per liter: 59 liters per 100 kilometers is equivalent to 1.69 kilometers per liter. 60 liters per 100 km to kilometers per liter: 60 liters per 100 kilometers is equivalent to 1.67 kilometers per liter. 61 liters per 100 km to kilometers per liter: 61 liters per 100 kilometers is equivalent to 1.64 kilometers per liter. 62 liters per 100 km to kilometers per liter: 62 liters per 100 kilometers is equivalent to 1.61 kilometers per liter. 
63 liters per 100 km to kilometers per liter: 63 liters per 100 kilometers is equivalent to 1.59 kilometers per liter. 64 liters per 100 km to kilometers per liter: 64 liters per 100 kilometers is equivalent to 1.56 kilometers per liter. 65 liters per 100 km to kilometers per liter: 65 liters per 100 kilometers is equivalent to 1.54 kilometers per liter. 66 liters per 100 km to kilometers per liter: 66 liters per 100 kilometers is equivalent to 1.52 kilometers per liter. 67 liters per 100 km to kilometers per liter: 67 liters per 100 kilometers is equivalent to 1.49 kilometers per liter. 68 liters per 100 km to kilometers per liter: 68 liters per 100 kilometers is equivalent to 1.47 kilometers per liter. 69 liters per 100 km to kilometers per liter: 69 liters per 100 kilometers is equivalent to 1.45 kilometers per liter. 70 liters per 100 km to kilometers per liter: 70 liters per 100 kilometers is equivalent to 1.43 kilometers per liter. 71 liters per 100 km to kilometers per liter: 71 liters per 100 kilometers is equivalent to 1.41 kilometers per liter. 72 liters per 100 km to kilometers per liter: 72 liters per 100 kilometers is equivalent to 1.39 kilometers per liter. 73 liters per 100 km to kilometers per liter: 73 liters per 100 kilometers is equivalent to 1.37 kilometers per liter. 74 liters per 100 km to kilometers per liter: 74 liters per 100 kilometers is equivalent to 1.35 kilometers per liter. 75 liters per 100 km to kilometers per liter: 75 liters per 100 kilometers is equivalent to 1.33 kilometers per liter. 76 liters per 100 km to kilometers per liter: 76 liters per 100 kilometers is equivalent to 1.32 kilometers per liter. 77 liters per 100 km to kilometers per liter: 77 liters per 100 kilometers is equivalent to 1.30 kilometers per liter. 78 liters per 100 km to kilometers per liter: 78 liters per 100 kilometers is equivalent to 1.28 kilometers per liter. 
79 liters per 100 km to kilometers per liter: 79 liters per 100 kilometers is equivalent to 1.27 kilometers per liter. 80 liters per 100 km to kilometers per liter: 80 liters per 100 kilometers is equivalent to 1.25 kilometers per liter. 81 liters per 100 km to kilometers per liter: 81 liters per 100 kilometers is equivalent to 1.23 kilometers per liter. 82 liters per 100 km to kilometers per liter: 82 liters per 100 kilometers is equivalent to 1.22 kilometers per liter. 83 liters per 100 km to kilometers per liter: 83 liters per 100 kilometers is equivalent to 1.20 kilometers per liter. 84 liters per 100 km to kilometers per liter: 84 liters per 100 kilometers is equivalent to 1.19 kilometers per liter. 85 liters per 100 km to kilometers per liter: 85 liters per 100 kilometers is equivalent to 1.18 kilometers per liter. 86 liters per 100 km to kilometers per liter: 86 liters per 100 kilometers is equivalent to 1.16 kilometers per liter. 87 liters per 100 km to kilometers per liter: 87 liters per 100 kilometers is equivalent to 1.15 kilometers per liter. 88 liters per 100 km to kilometers per liter: 88 liters per 100 kilometers is equivalent to 1.14 kilometers per liter. 89 liters per 100 km to kilometers per liter: 89 liters per 100 kilometers is equivalent to 1.12 kilometers per liter. 90 liters per 100 km to kilometers per liter: 90 liters per 100 kilometers is equivalent to 1.11 kilometers per liter. 91 liters per 100 km to kilometers per liter: 91 liters per 100 kilometers is equivalent to 1.10 kilometers per liter. 92 liters per 100 km to kilometers per liter: 92 liters per 100 kilometers is equivalent to 1.09 kilometers per liter. 93 liters per 100 km to kilometers per liter: 93 liters per 100 kilometers is equivalent to 1.08 kilometers per liter. 94 liters per 100 km to kilometers per liter: 94 liters per 100 kilometers is equivalent to 1.06 kilometers per liter. 
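All of the conversions listed here follow from the single reciprocal formula km/L = 100 / (L/100km). A minimal sketch in Python (the function names are illustrative, not part of this site's tool):

```python
def l_per_100km_to_km_per_l(l_per_100km: float) -> float:
    """Convert fuel consumption (L/100 km) to fuel efficiency (km/L)."""
    return 100.0 / l_per_100km

def km_per_l_to_l_per_100km(km_per_l: float) -> float:
    """Inverse conversion: fuel efficiency (km/L) back to consumption (L/100 km)."""
    return 100.0 / km_per_l

print(l_per_100km_to_km_per_l(8))            # 12.5, matching the worked example
print(round(l_per_100km_to_km_per_l(7), 2))  # 14.29
```

Because the two quantities are reciprocals (up to the factor of 100), the same division implements both directions of the conversion.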
95 liters per 100 km to kilometers per liter: 95 liters per 100 kilometers is equivalent to 1.05 kilometers per liter. 96 liters per 100 km to kilometers per liter: 96 liters per 100 kilometers is equivalent to 1.04 kilometers per liter. 97 liters per 100 km to kilometers per liter: 97 liters per 100 kilometers is equivalent to 1.03 kilometers per liter. 98 liters per 100 km to kilometers per liter: 98 liters per 100 kilometers is equivalent to 1.02 kilometers per liter. 99 liters per 100 km to kilometers per liter: 99 liters per 100 kilometers is equivalent to 1.01 kilometers per liter. 100 liters per 100 km to kilometers per liter: 100 liters per 100 kilometers is equivalent to 1 kilometer per liter.

Table: liters per 100km to km per lt

| 1 lt/100km = 100 km/lt | 31 lt/100km = 3.2258 km/lt | 61 lt/100km = 1.6393 km/lt |
| 2 lt/100km = 50 km/lt | 32 lt/100km = 3.125 km/lt | 62 lt/100km = 1.6129 km/lt |
| 3 lt/100km = 33.3333 km/lt | 33 lt/100km = 3.0303 km/lt | 63 lt/100km = 1.5873 km/lt |
| 4 lt/100km = 25 km/lt | 34 lt/100km = 2.9412 km/lt | 64 lt/100km = 1.5625 km/lt |
| 5 lt/100km = 20 km/lt | 35 lt/100km = 2.8571 km/lt | 65 lt/100km = 1.5385 km/lt |
| 6 lt/100km = 16.6667 km/lt | 36 lt/100km = 2.7778 km/lt | 66 lt/100km = 1.5152 km/lt |
| 7 lt/100km = 14.2857 km/lt | 37 lt/100km = 2.7027 km/lt | 67 lt/100km = 1.4925 km/lt |
| 8 lt/100km = 12.5 km/lt | 38 lt/100km = 2.6316 km/lt | 68 lt/100km = 1.4706 km/lt |
| 9 lt/100km = 11.1111 km/lt | 39 lt/100km = 2.5641 km/lt | 69 lt/100km = 1.4493 km/lt |
| 10 lt/100km = 10 km/lt | 40 lt/100km = 2.5 km/lt | 70 lt/100km = 1.4286 km/lt |
| 11 lt/100km = 9.0909 km/lt | 41 lt/100km = 2.439 km/lt | 71 lt/100km = 1.4085 km/lt |
| 12 lt/100km = 8.3333 km/lt | 42 lt/100km = 2.381 km/lt | 72 lt/100km = 1.3889 km/lt |
| 13 lt/100km = 7.6923 km/lt | 43 lt/100km = 2.3256 km/lt | 73 lt/100km = 1.3699 km/lt |
| 14 lt/100km = 7.1429 km/lt | 44 lt/100km = 2.2727 km/lt | 74 lt/100km = 1.3514 km/lt |
| 15 lt/100km = 6.6667 km/lt | 45 lt/100km = 2.2222 km/lt | 75 lt/100km = 1.3333 km/lt |
| 16 lt/100km = 6.25 km/lt | 46 lt/100km = 2.1739 km/lt | 76 lt/100km = 1.3158 km/lt |
| 17 lt/100km = 5.8824 km/lt | 47 lt/100km = 2.1277 km/lt | 77 lt/100km = 1.2987 km/lt |
| 18 lt/100km = 5.5556 km/lt | 48 lt/100km = 2.0833 km/lt | 78 lt/100km = 1.2821 km/lt |
| 19 lt/100km = 5.2632 km/lt | 49 lt/100km = 2.0408 km/lt | 79 lt/100km = 1.2658 km/lt |
| 20 lt/100km = 5 km/lt | 50 lt/100km = 2 km/lt | 80 lt/100km = 1.25 km/lt |
| 21 lt/100km = 4.7619 km/lt | 51 lt/100km = 1.9608 km/lt | 81 lt/100km = 1.2346 km/lt |
| 22 lt/100km = 4.5455 km/lt | 52 lt/100km = 1.9231 km/lt | 82 lt/100km = 1.2195 km/lt |
| 23 lt/100km = 4.3478 km/lt | 53 lt/100km = 1.8868 km/lt | 83 lt/100km = 1.2048 km/lt |
| 24 lt/100km = 4.1667 km/lt | 54 lt/100km = 1.8519 km/lt | 84 lt/100km = 1.1905 km/lt |
| 25 lt/100km = 4 km/lt | 55 lt/100km = 1.8182 km/lt | 85 lt/100km = 1.1765 km/lt |
| 26 lt/100km = 3.8462 km/lt | 56 lt/100km = 1.7857 km/lt | 86 lt/100km = 1.1628 km/lt |
| 27 lt/100km = 3.7037 km/lt | 57 lt/100km = 1.7544 km/lt | 87 lt/100km = 1.1494 km/lt |
| 28 lt/100km = 3.5714 km/lt | 58 lt/100km = 1.7241 km/lt | 88 lt/100km = 1.1364 km/lt |
| 29 lt/100km = 3.4483 km/lt | 59 lt/100km = 1.6949 km/lt | 89 lt/100km = 1.1236 km/lt |
| 30 lt/100km = 3.3333 km/lt | 60 lt/100km = 1.6667 km/lt | 90 lt/100km = 1.1111 km/lt |
766
http://www.mathaddict.net/palindrome.htm
Categories of Palindromic Numbers

| No. of digits | Prime | Triangular | Perfect square | Sum of consecutive squares (2 or more) | Cube | Product of successive numbers |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 2, 3, 5, 7 | 1, 3, 6 | 1, 4, 9 | 5 | 1 | |
| 2 | 11 | 55, 66 | | 55, 77 | | |
| 3 | 101, 131, 151, 181, 191, 313, 353, 373, 383, 727, 757, 787, 797, 919, 929 | 171, 595, 666 | 121, 484, 676 | 121, 181, 313, 434, 484, 505, 545, 595, 636, 676, 818 | 343 | 272 = 16 × 17 |
| 4 | Nil | 3003, 5995, 8778 | | 1001, 1111, 1441, 1771, 4334, 6446 | 1331 | 6006 = 77 × 78 |
| 5 | 93 primes | 15051, 66066 | 10201, 12321, 14641, 40804, 44944, 69696, 94249 | 10201, 12321, 14641, 17371, 17871, 19691, 21712, 40804, 41214, 42924, 44444, 44944, 46564, 51015, 65756, 69696, 81818, 94249, 97679, 99199 | | |
| 6 | Nil | 617716, 828828 | 698896 | | | 289982 = 538 × 539 |
| 7 | 668 primes | 1269621, 1680861, 3544453, 5073705, 5676765, 6295926 | 1002001, 1234321, 4008004, 5221225, 6948496 | | 1030301 (101^3) | 6039306 = 2457 × 2458 |

Properties of Palindromic Numbers

Palindromes take their name from the Greek word palindromos, meaning "running back again".

Take any number and add it to its reverse. If the result is not a palindrome, add the new number to its reverse again, and repeat until a palindrome appears. About 80% of the numbers under 10000 yield palindromes in 4 or fewer steps, and about 10% take more than 4 but up to 7 steps. The rare numbers 89 and 98 each take 24 iterations to become a palindrome. Palindrome words: eye, noon, radar, mom, dad, deed, civic, level.

Some numbers never seem to form a palindrome in the above process. These are called Lychrel numbers. The first Lychrel number is 196; its reverse, 691, is also a Lychrel number. If one keeps adding these numbers as above, all the numbers generated are Lychrels. For example, the chain (196, 691) --> (788, 887), (1675, 5761), and so on consists entirely of Lychrels.

All palindromes with an even number of digits are divisible by 11. Therefore, barring 11 itself (a 2-digit palindrome), there are no palindromic primes with 4, 6, 8, ... digits.
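The reverse-and-add procedure described above is easy to check by brute force. A minimal sketch in Python (the function name and the iteration cap are illustrative):

```python
def reverse_and_add_steps(n, max_iter=1000):
    """Repeatedly add n to its digit reversal until a palindrome appears.

    Returns (steps, palindrome), or (None, None) if no palindrome shows up
    within max_iter additions (a suspected Lychrel number, such as 196).
    """
    for steps in range(max_iter + 1):
        s = str(n)
        if s == s[::-1]:
            return steps, n
        n += int(s[::-1])  # add the digit reversal and try again
    return None, None

print(reverse_and_add_steps(89)[0])     # 24 iterations, as noted above
print(reverse_and_add_steps(196, 500))  # (None, None): suspected Lychrel
```

The cap is needed precisely because of Lychrel candidates: for 196 the sequence has been followed for millions of iterations without ever producing a palindrome, so any brute-force search must give up at some point.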
All even-digit palindromes can be expressed as 100a(10^1 + 1) + 10b(10^3 + 1) + c(10^5 + 1) (this is the format for a 6-digit palindrome; one can construct similar expressions for more or fewer even digits), and each coefficient reduces to the equation 10^(2k+1) + 1 = 11x, where k and x have integer values.

All palindromic numbers which are triangular must have one of 0, 1, 3, 5, 6, 8 as their last digit, and the lower of the two product factors of the triangular number has to be even. In other words, for n(n+1)/2 to be palindromic, n has to be an even number.

In molecular biology, DNA and RNA sequences that read the same from both ends (the 5' and 3' ends) are called palindromes. The recognition sites of many restriction enzymes (restriction endonucleases) that cut DNA are palindromes. Long DNA palindromes pose a threat to genome stability. This instability is primarily mediated by slippage on the lagging strand of the replication fork between short directly repeated sequences close to the ends of the palindrome. The role of the palindrome is likely the juxtaposition of the directly repeated sequences by intra-strand base-pairing. This intra-strand base-pairing, if present on both strands, results in a cruciform structure. In bacteria, cruciform structures have proved difficult to detect in vivo, suggesting that if they form, they are either not replicated or are destroyed. SbcCD, a recently discovered exonuclease of Escherichia coli, is responsible for preventing the replication of long palindromes. These observations lead to the proposal that cells may have evolved a post-replicative mechanism for the elimination and/or repair of large DNA secondary structures. It has also been found that direct and inverted repeats elicit genetic instability by both exploiting and eluding DNA double-strand break repair systems in mycobacteria.
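Two of the number-theoretic properties stated above (divisibility of even-digit palindromes by 11, and the restricted last digit of triangular numbers) can be confirmed by a short brute-force check:

```python
# 1. Every palindrome with an even number of digits is divisible by 11:
#    a 4-digit palindrome abba = 1001a + 110b = 11 * (91a + 10b).
pals4 = [n for n in range(1000, 10000) if str(n) == str(n)[::-1]]
assert pals4 and all(p % 11 == 0 for p in pals4)

# 2. Triangular numbers n(n+1)/2 can only end in 0, 1, 3, 5, 6 or 8, so any
#    palindromic triangular number obeys the same last-digit constraint.
triangulars = [k * (k + 1) // 2 for k in range(1, 10**4)]
assert all(str(t)[-1] in "013568" for t in triangulars)

print(len(pals4))  # 90 four-digit palindromes, all multiples of 11
```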
It has also been found that the widespread and non-random distribution of DNA palindromes in cancer cells provides a structural platform for subsequent gene amplification.

Frequency of palindrome distribution: (n − 1)^(k/2) where k is even, and (n − 1)^((k+1)/2) where k is odd; here n represents the base system and k is the total number of digits in the number.

In general, for a number in any base n with k digits, the number of unique reverses of k digits is given by (n − 1)·10^(k−1) − [(n − 1)^(k/2) + (10^(k−1) − 1)] where k is even, and (n − 1)·10^(k−1) − [(n − 1)^((k+1)/2) + (10^(k−1) − 1)] where k is odd.

General formulae giving the frequency of palindromes for all numbers in any base n with at most k digits: if k is even, the formula is 2(n − 1)·[(n − 1)^(k/2) − 1] / (n − 2); if k is odd, it is 2(n − 1)·[(n − 1)^((k+1)/2) − 1] / (n − 2) − (n − 1)^((k+1)/2), for all n > 2.
767
https://math.stackexchange.com/questions/355318/prove-fracd-lnyd-lnx-fracdydx-fracxy-using-limits
Prove d ln(y) / d ln(x) = (dy/dx)·(x/y) using limits

Asked Modified 12 years, 4 months ago Viewed 8k times 2

I see this equation used again and again in economics but I really can't find a rigorous limit-based proof. I have that

d ln(y) / d ln(x) = lim_{h→0} [ln(f(ln(x) + h)) − ln f(ln(x))] / h

Not sure how to finish. Thanks.

real-analysis

edited Apr 8, 2013 at 22:38 by Mhenni Benghorbal; asked Apr 8, 2013 at 22:28 by Arnold

Comments:
I'm interested in why you need this in economics. – Cocopuffs, Apr 8, 2013 at 23:07
Elasticity. en.wikipedia.org/wiki/Elasticity_(economics) – Arnold, Apr 9, 2013 at 0:00
For example, in a regression ln(y) = a + b·ln(x) + e, d ln(y)/d ln(x) = b, so a 1% increase in x causes a b% increase in y. – Arnold, Apr 9, 2013 at 0:01
I understand that I don't and likely won't understand. Thanks, though. – Cocopuffs, Apr 9, 2013 at 0:09
Because it is easier to deal with relative changes than absolute, especially when comparing changes that have different units. – user132346, Mar 1, 2014

3 Answers

Answer (4 votes): Let z = log x, i.e. x = exp(z), and let y = y(x) be a smooth and nonzero enough function of x; then

d log y(z) / dz = (1/y(z))·y′(z) = (1/y(z))·y′(x(z))·x′(z) = (dy/dx)·(x/y),

where we use the chain rule twice.
answered Apr 8, 2013 at 22:41 by Cocopuffs, edited Apr 8, 2013 at 23:06

Comments:
How do you go from the 2nd to the 3rd equality? Maybe I'm not getting your prime notation. When I see y′(z) I think dy/dx evaluated at the point z, so y′(z) = y′(e^x). – Arnold
The chain rule says that y′(x(z))·x′(z) = (y(x))′(z). I understand the latter as y′(z), where y is a function of z. – Cocopuffs
Given that y = y(x), the only way I see to understand y as a function of z is y = y(exp(z)) = y(x(z)). This lends itself to the chain rule. – Cocopuffs
Correction on my comment: y′(z) = y′(ln(x)) ... still thinking on your solution. Thanks! – Arnold
Here's where my confusion lies: by the chain rule, (d/dz) y(x(z)) = y′(x(z))·x′(z). But in the argument we have y′(z) = y′(x(z))·x′(z), and at least from the way I am interpreting the notation, y′(z) ≠ (d/dz) y(x(z)), since y′(z) is dy/dz evaluated at z, not x. – Arnold

Answer (2 votes): Here is the limit solution as I thought it would work. It is true for g differentiable with a nonzero derivative that

df(x) / dg(x) = (df(x)/dx) / (dg(x)/dx).

Proof: the RHS equals

lim_{t→t0} [f(t) − f(t0)] / [g(t) − g(t0)] = lim_{t→t0} [f(g⁻¹(g(t))) − f(g⁻¹(g(t0)))] / [g(t) − g(t0)],

which is the definition of the derivative of f with respect to g(t). To see this, note that if u = g(t), then

df/du = lim_{u→u0} [f(g⁻¹(u)) − f(g⁻¹(u0))] / (u − u0) = lim_{t→t0} [f(g⁻¹(g(t))) − f(g⁻¹(g(t0)))] / [g(t) − g(t0)].

The desired result follows from this formula.
Share CC BY-SA 3.0 Follow this answer to receive notifications edited Apr 17, 2013 at 2:40 answered Apr 17, 2013 at 2:28 ArnoldArnold 39944 silver badges1212 bronze badges Add a comment | This answer is useful 1 Save this answer. Show activity on this post. Implicit assumptions in your question: y=f(x), x>0, y>0. With these, the following calculation is valid: dlnydlnx=dlnydx⋅(dlnxdx)−1=1ydydx⋅(1x)−1=xydydx. Share CC BY-SA 3.0 Follow this answer to receive notifications answered Apr 8, 2013 at 23:05 Sammy BlackSammy Black 28.8k33 gold badges3939 silver badges6565 bronze badges 1 How come you can manipulate the dx's this way? Whats the rigorous argument behind that Arnold – Arnold 2013-04-09 00:02:26 +00:00 Commented Apr 9, 2013 at 0:02 Add a comment | You must log in to answer this question. Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions real-analysis See similar questions with these tags. Featured on Meta stackoverflow.ai - rebuilt for attribution Community Asks Sprint Announcement - September 2025 Related 2 How to prove the limx→11logx does not exist. 2 Using definition of limits to prove 1 How to prove that a limit exists for limn→∞n+2n=1 5 limσA,σB→∞e−σA−σB∑∞k=0σAkk!⋅σBkk! 0 Show f(x):=cos(|x|−−√) is not differentiable at x=0. 2 Rigorously proving limx→(2n+1)+tan(πx2)=−∞ 2 Proving limits of functions using first principles 0 Supremum and infimum of a given set without using limits or differentiation 0 Determine one-sided limits of a function given its graph (no analytical formula) Hot Network Questions Why does \\ not show the multiplication sign "×" when using unicode-math with a math font? Is it better to add conditional logic to a heavily used stored procedure or create a separate one for a new use case? Did Isaac Newton write with a quill pen? Some don't like me, some people do How does one resolve competing intuitions in philosophy? 
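All three answers assert the same identity, and it can also be sanity-checked numerically by comparing the limit definition of d ln y / d ln x against (x/y)·dy/dx for a concrete smooth positive function. A minimal sketch (the choice f(x) = x^3 + 2x is arbitrary):

```python
import math

def f(x):
    return x**3 + 2*x  # arbitrary smooth, positive function for x > 0

def dlny_dlnx(x, h=1e-6):
    # central-difference derivative of ln f with respect to ln x
    z = math.log(x)
    return (math.log(f(math.exp(z + h))) - math.log(f(math.exp(z - h)))) / (2 * h)

x = 2.0
fprime = 3 * x**2 + 2          # exact f'(x)
lhs = dlny_dlnx(x)             # elasticity via the limit definition
rhs = x / f(x) * fprime        # (x/y) * dy/dx = (2/12) * 14 = 7/3
assert abs(lhs - rhs) < 1e-6
print(round(lhs, 4))           # ~2.3333, agreeing with (x/y) * dy/dx
```

This is exactly why the quantity is called an elasticity in economics: it measures the relative (percentage) response of y to a relative change in x.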
https://www.doubtnut.com/pcmb-questions/203018
$TV^{\gamma-1}$ is constant for an ideal gas undergoing an adiabatic process, where $\gamma$ is the ratio of heat capacities $C_p/C_v$. This is a direct consequence of:

A. the zeroth law of thermodynamics alone
B. the zeroth law and the ideal gas equation of state
C. the first law of thermodynamics alone
D. the ideal gas equation of state alone
E. the first law and the equation of state

Answer: E. the first law and the equation of state.
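A quick numerical check of answer E: integrating the first law ($n C_v\,dT = -p\,dV$) together with the ideal gas equation of state ($p = nRT/V$) along a reversible adiabat does keep $TV^{\gamma-1}$ constant. The numbers below (1 mol of a monatomic gas, $T_0 = 300\,\mathrm{K}$) are illustrative, not from the source.

```python
import math

# Integrate the first law for a reversible adiabatic expansion of an ideal gas:
# n*Cv*dT = -p*dV with p = n*R*T/V  =>  dT/dV = -(R/Cv) * T/V.
R = 8.314                 # J/(mol K)
Cv = 1.5 * R              # monatomic ideal gas
gamma = (Cv + R) / Cv     # Cp/Cv = 5/3

T, V = 300.0, 1.0         # K, m^3 (1 mol, illustrative)
steps, V_end = 200000, 2.0
dV = (V_end - V) / steps

invariant0 = T * V ** (gamma - 1)
for _ in range(steps):
    T -= (R / Cv) * T / V * dV    # forward-Euler step of the first law
    V += dV

# T * V^(gamma-1) is (numerically) conserved along the adiabat
assert abs(T * V ** (gamma - 1) / invariant0 - 1) < 1e-3
```

Neither the zeroth law alone nor the equation of state alone fixes how $T$ changes with $V$; the energy-balance step in the loop is exactly where the first law enters.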
https://www.scribd.com/document/7089249/Earth-Fact-Sheet-NASA
Earth Fact Sheet - NASA (PDF, 3 pages; uploaded to Scribd by api-3704956)

Earth Fact Sheet

Bulk parameters
Mass (10^24 kg): 5.9736
Volume (10^10 km^3): 108.321
Equatorial radius (km): 6378.1
Polar radius (km): 6356.8
Volumetric mean radius (km): 6371.0
Core radius (km): 3485
Ellipticity (flattening): 0.00335
Mean density (kg/m^3): 5515
Surface gravity (m/s^2): 9.798
Surface acceleration (m/s^2): 9.780
Escape velocity (km/s): 11.186
GM (10^6 km^3/s^2): 0.3986
Bond albedo: 0.306
Visual geometric albedo: 0.367
Visual magnitude V(1,0): -3.86
Solar irradiance (W/m^2): 1367.6
Black-body temperature (K): 254.3
Topographic range (km): 20
Moment of inertia (I/MR^2): 0.3308
J2 (10^-6): 1082.63
Number of natural satellites: 1
Planetary ring system: No

Orbital parameters
Semimajor axis (10^6 km): 149.60
Sidereal orbit period (days): 365.256
Tropical orbit period (days): 365.242
Perihelion (10^6 km): 147.09
Aphelion (10^6 km): 152.10
Mean orbital velocity (km/s): 29.78
Max. orbital velocity (km/s): 30.29
Min. orbital velocity (km/s): 29.29
Orbit inclination (deg): 0.000
Orbit eccentricity: 0.0167
Sidereal rotation period (hrs): 23.9345
Length of day (hrs): 24.0000
Obliquity to orbit (deg): 23.45

Earth Mean Orbital Elements (J2000)
Semimajor axis (AU): 1.00000011
Orbital eccentricity: 0.01671022
Orbital inclination (deg): 0.00005
Longitude of ascending node (deg): -11.26064
Longitude of perihelion (deg): 102.94719
Mean longitude (deg): 100.46435

North Pole of Rotation
Right ascension: 0.00 - 0.641T
Declination: 90.00 - 0.557T
Reference date: 12:00 UT 1 Jan 2000 (JD 2451545.0); T = Julian centuries from reference date

Terrestrial Magnetosphere
Dipole field strength: 0.3076 gauss-Re^3
Latitude/longitude of dipole: 78.6 deg N / 70.1 deg W
Dipole offset (planet center to dipole center) distance: 0.0725 Re
Latitude/longitude of offset vector: 18.3 deg N / 147.8 deg E
Note: Re denotes Earth radii, 6,378 km

Terrestrial Atmosphere
Surface pressure: 1014 mb
Surface density: 1.217 kg/m^3
Scale height: 8.5 km
Total mass of atmosphere: 5.1 x 10^18 kg
Total mass of hydrosphere: 1.4 x 10^21 kg
Average temperature: 288 K (15 C)
Diurnal temperature range: 283 K to 293 K (10 to 20 C)
Wind speeds: 0 to 100 m/s
Mean molecular weight: 28.97 g/mole
Atmospheric composition (by volume, dry air):
Major: 78.084% Nitrogen (N2), 20.946% Oxygen (O2)
Minor (ppm): Argon (Ar) 9340; Carbon Dioxide (CO2) 380; Neon (Ne) 18.18; Helium (He) 5.24; Methane (CH4) 1.7; Krypton (Kr) 1.14; Hydrogen (H2) 0.55
Water vapor is highly variable, typically about 1%.

The Moon: for information on the Moon, see the Moon Fact Sheet.

Author/Curator: Dr. David R. Williams, dave.williams@nasa.gov
NSSDC, Mail Code 690.1, NASA Goddard Space Flight Center, Greenbelt, MD 20771, +1-301-286-1258
NASA Official: Ed Grayzeck, edwin.j.grayzeck@nasa.gov
Last Updated: 19 April 2007, DRW
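The fact sheet's values are internally consistent: for example, the listed escape velocity follows from the listed GM and volumetric mean radius via $v_{esc} = \sqrt{2\,GM/R}$. A minimal check, using only numbers from the fact sheet:

```python
import math

# Cross-check the fact sheet: escape velocity v = sqrt(2*GM/R)
GM = 0.3986e6   # km^3/s^2 (fact sheet: GM = 0.3986 x 10^6 km^3/s^2)
R = 6371.0      # km, volumetric mean radius

v_esc = math.sqrt(2 * GM / R)   # km/s

# Fact sheet lists 11.186 km/s
assert abs(v_esc - 11.186) < 0.01
```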
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry
Published Time: 2013-10-02T00:41:43Z
Electrochemistry - Chemistry LibreTexts
(Supplemental Modules, Analytical Chemistry; Page ID 250; last updated Aug 29, 2023)

Electrochemistry is the study of electricity and how it relates to chemical reactions. In electrochemistry, electricity can be generated by the movement of electrons from one element to another in a reaction known as a redox (oxidation-reduction) reaction.
Electrochemistry Basics
Electrochemistry is the study of chemical processes that cause electrons to move. This movement of electrons is called electricity, which can be generated by movements of electrons from one element to another in an oxidation-reduction ("redox") reaction.
Subtopics: Electrochemistry Cell EMF; Electrochemistry Review; Electrolysis; Galvanic Cells; Half-Cell Reaction; Nernst Equation.

Connection between Cell Potential, ∆G, and K
Cell potential, Gibbs energy, and the equilibrium constant are directly related through the multi-part equation
$$\Delta G^\circ = -RT \ln K_{eq} = -nFE^\circ_{cell}.$$

Electrodes
Standard Electrodes.

Electrolytic Cells
Voltaic cells are driven by a spontaneous chemical reaction that produces an electric current through an outside circuit. These cells are important because they are the basis for the batteries that fuel modern society. But they are not the only kind of electrochemical cell. The reverse reaction in each case is non-spontaneous and requires electrical energy to occur.
Subtopics: Electrolysis; Electrolysis I; Electroplating.

Exemplars
Batteries: Electricity through chemical reactions; Case Study: Battery Types; Case Study: Fuel Cells; Case Study: Industrial Electrolysis; Commercial Galvanic Cells; Corrosion; Corrosion Basics; Galvanization; Sacrificial Anode; Membrane Potentials; Rechargeable Batteries.

Faraday's Law
Faraday's law of electrolysis might be stated this way: the amount of substance produced at each electrode is directly proportional to the quantity of charge flowing through the cell. This is somewhat of a simplification: substances with different oxidation/reduction changes in electrons per atom or ion will not be produced in the same molar amounts, but when those additional ratios are factored in, the law is correct in all cases.

Nernst Equation
The Nernst equation enables the determination of cell potential under non-standard conditions.
It relates the measured cell potential to the reaction quotient and allows the accurate determination of equilibrium constants (including solubility constants).

Nonstandard Conditions: The Nernst Equation
The standard cell potentials refer to cells in which all dissolved substances are at unit activity, which essentially means an "effective concentration" of 1 M. Similarly, any gases that take part in an electrode reaction are at an effective pressure of 1 atm. If these concentrations or pressures have other values, the cell potential will change in a manner that can be predicted from the principles you already know.

Redox Chemistry
Balancing Redox Reactions; Balancing Redox Reactions - Examples; Comparing Strengths of Oxidants and Reductants; Definitions of Oxidation and Reduction; Half-Reactions; Oxidation-Reduction Reactions; Oxidation State; Oxidation States II; Oxidation States (Oxidation Numbers); Oxidizing and Reducing Agents; Standard Reduction Potential; The Fall of the Electron; Writing Equations for Redox Reactions.

Redox Potentials
Factors that Influence Reduction Potential; Reduction Potential Intuition; Standard Potentials; Using Redox Potentials to Predict the Feasibility of Reactions; Electricity and the Waterfall Analogy.

Voltaic Cells
In redox reactions, electrons are transferred from one species to another. If the reaction is spontaneous, energy is released, which can then be used to do useful work. To harness this energy, the reaction must be split into two separate half-reactions: the oxidation and the reduction. The reactions are put into two different containers and a wire is used to drive the electrons from one side to the other. In doing so, a voltaic (galvanic) cell is created.
Subtopics: Cell Diagrams; Discharging Batteries; Electrochemical Cells under Nonstandard Conditions; Concentration Cell; Electrochemical Cell Conventions; The Cell Potential.

Electrochemistry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
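The two central relations on this page, $\Delta G^\circ = -nFE^\circ_{cell} = -RT\ln K_{eq}$ and the Nernst equation $E = E^\circ - \frac{RT}{nF}\ln Q$, can be sketched numerically. The Daniell-cell numbers below ($E^\circ \approx 1.10$ V for Zn + Cu²⁺ → Zn²⁺ + Cu) are standard textbook values, not taken from this page.

```python
import math

# Illustrative Daniell-cell numbers (not from this page)
R, T, F = 8.314, 298.15, 96485.0   # J/(mol K), K, C/mol
n = 2                               # electrons transferred
E_std = 1.10                        # V, standard cell potential

# Delta G° = -n*F*E°cell = -R*T*ln(Keq)
dG = -n * F * E_std                 # about -212 kJ/mol: spontaneous
K_eq = math.exp(-dG / (R * T))      # enormous: reaction goes essentially to completion
assert dG < 0 and K_eq > 1e30

# Nernst equation: E = E° - (R*T)/(n*F) * ln(Q) under non-standard conditions
Q = 10.0                            # [Zn2+]/[Cu2+], product-enriched mixture
E = E_std - (R * T) / (n * F) * math.log(Q)
assert E < E_std                    # excess product lowers the potential (~30 mV here)
```

At 298 K the prefactor $RT/nF$ is about 12.8 mV for $n = 2$, which is why a tenfold excess of product only shifts the potential by roughly 30 mV.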
771
https://www.splashlearn.com/s/math-games/identify-the-related-fact
Free Identifying the Related Fact Game | SplashLearn

Identifying the Related Fact Game

In this fun math game, kids dive into addition strategies by exploring the associative property of addition. Through interactive tasks, young learners practice addition and subtraction within 20, making these concepts less confusing and more enjoyable. This game helps children become more comfortable with math by applying smart strategies. Perfect for budding mathematicians!

Grade & Standard: Grade 1, Grade 2
Subjects & Topics: Math Games, Addition Games, Addition Properties Games

Know more about Identifying the Related Fact Game

What will your child learn through this game? Your child will practice addition strategies in this fun game. Concepts like addition can be confusing for kids, but with the practice of addition strategies, they can gradually get more comfortable.
This game consists of smartly designed tasks to help your young mathematician learn to apply the associative property of addition.
772
https://pmc.ncbi.nlm.nih.gov/articles/PMC8594275/
Development of NIST Atomic Databases and Online Tools

Atoms. Author manuscript; available in PMC: 2021 Nov 16. Published in final edited form as: Atoms. 2020;8(3). doi: 10.3390/atoms8030056

Yuri Ralchenko and Alexander Kramida
National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
Correspondence: yuri.ralchenko@nist.gov (Y.R.); alexander.kramida@nist.gov (A.K.)
Author Contributions: Both authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

PMCID: PMC8594275; NIHMSID: NIHMS1753137; PMID: 34790564. The publisher's version of this article is available at Atoms.

Abstract

Over the last 25 years, the atomic standard reference databases and online tools developed at the National Institute of Standards and Technology (NIST) have provided users around the world with the highest-quality data on various atomic parameters (e.g., level energies, transition wavelengths, and oscillator strengths) and online capabilities for fast and reliable collisional-radiative modeling of diverse plasmas. Here we present an overview of the recent developments regarding NIST numerical and bibliographic atomic databases and outline the prospects and vision of their evolution.

Keywords: atomic databases, standard reference databases, atomic spectroscopy, collisional-radiative modeling, laser-induced breakdown spectroscopy (LIBS), bibliographic databases

1. Introduction

The development of atomic databases at the National Institute of Standards and Technology (NIST) is an integral part of the NIST Standard Reference Data (SRD) Program. As follows from its clearly defined responsibilities under 15 U.S. Code §290 to collect data, evaluate data, and publish high-quality SRD, NIST creates SRD products, of which atomic databases containing evaluated and recommended atomic data are an important component. The creation and development of atomic databases is closely connected to the atomic physics program, and this synergy is extremely beneficial for both efforts.

The atomic spectroscopy research at NIST goes back more than 100 years. To the best of our knowledge, the first results were published in 1913.
Since then, NIST has become the leading research institution in the United States for the analysis of such atomic structure parameters as energy levels, oscillator strengths, and transition probabilities. This research is very active nowadays, and our experimental program is supported by an unparalleled combination of light sources (e.g., sliding sparks or an electron beam ion trap) and unique spectrometers, ranging from infrared Fourier Transform Spectrometers (FTS) to a 10.5-m vacuum ultraviolet spectrometer and an X-ray cryogenic transition-edge-sensor spectrometer, which generate high-precision spectroscopic data from the hard X-ray to the infrared parts of the electromagnetic spectrum. Moreover, involvement of the NIST scientists in world-class experiments allows them to gain experience in the analysis and uncertainty evaluation of accurate spectroscopic data for atoms and ions.

The evaluation of atomic data at NIST started with the classical work of Charlotte Moore-Sitterly in the mid-1940s, which culminated in her famous compilation of atomic energy levels [3-5]. Thereafter, this activity has become one of the most visible and important components of research in the Atomic Spectroscopy Group. Although the actual number of physicists involved in data evaluation has varied significantly over the decades, it has never been neglected, and new data compilations are released and published on a regular basis. After the invention of the World Wide Web (WWW), the natural step for atomic databases was to utilize the numerous advantages of the WWW and develop online versions of the NIST atomic data collections. This work was initiated in the mid-1990s under the guidance of W.C. Martin and rapidly resulted in the creation of a number of atomic databases, from the most general ones, such as the Atomic Spectra Database (ASD), to the more specialized (i.e., the Chandra database for only four elements of interest for the NASA Chandra X-ray telescope).
With time, it was found that to provide users with more convenient access to the whole set of the available data, it is beneficial to join all sets of the evaluated atomic data into ASD. Therefore, over the last 10+ years, our efforts on evaluated numerical atomic data have been solely focused on ASD. Since the mid-2000s, much effort was put into the development of other atomic-physics-related databases and tools. To this end, in collaboration with the Lawrence Livermore National Laboratory we developed an online version of the highly popular and versatile collisional-radiative code FLYCHK, which allows extremely fast calculation of light emission from practically any plasma. Additionally, a new generation of NIST atomic bibliographic databases was established. Below we provide an overview of the NIST atomic databases and online tools. We pay special attention to the data selection process and describe numerous features of our databases. Some examples of online calculations are provided as well. 2. Data Selection The main principle of the NIST atomic databases is internal consistency. The second important requirement follows from the "standard" nature of these databases: all the data must have critically evaluated uncertainties. There are many other atomic databases providing useful data, but none of them satisfy these requirements. It should be noted that, although a significant portion of the ASD data taken from old legacy sources does not have explicitly shown uncertainties, they were evaluated in the original compilations and can be retrieved from the references quoted in ASD. For such data, a rough estimate of uncertainty is implied by the number of significant figures displayed in the database. Providing critically evaluated, internally consistent data imposes severe limitations on the responsiveness of NIST databases to new publications. For the last several decades, about 500 new papers containing atomic data have been published every year.
The NIST Atomic Spectroscopy Group has a set of specially designed bibliographic databases [8-10], in which these new publications are recorded with a delay of only a few weeks or months after a paper is published. However, getting the published data into our main numerical information system, ASD, is much more time-consuming. Most of the published articles contain fragmentary studies covering small subsets of data on certain atomic spectra. For example, one or a few atomic transitions could be precisely measured using laser spectroscopy in a cold atomic trap or by other high-precision techniques. Within its scope, such a study provides valuable data on energy levels and transition frequencies. However, different transitions studied by other authors may (and usually do) provide alternative data on the same parameters, which often disagree with the new measurements. To provide a self-consistent set of data for each spectrum, it is necessary to analyze the new measurements in the broad context of all other available data. Resolving contradictions between different sources of data is very difficult and sometimes requires extensive analytical work. Sometimes it requires making new measurements and/or theoretical calculations. A typical example is the recent critical compilation of Cu II spectral data, which took several years to complete. As a result, most of the data sets in ASD lack the latest determinations. We are constantly working on updating and extending the database, but on average, only a few spectra are added or updated in each new version of ASD released every year. The methods used in our critical compilations have been explained in a number of publications, notably by Wiese, Reader, and Kramida. To ensure traceability, we normally do not average data available from multiple sources. Instead, we select the "best" data, meaning those that are consistent with other data and have the smallest uncertainties.
This selection is usually non-trivial, since many data are published without strictly defined uncertainties for each determined value. This is especially true for theoretical data, which constitute about half of all data on transition probabilities in ASD. Until now, only about 2% of all theoretical papers have provided estimated uncertainties for the calculated quantities. Only in recent years did we start to see an increasing number of theoretical studies using the NIST methodology to estimate the uncertainties of their results. If this positive trend continues, it may make future data compilations faster and easier to produce. It should be noted that atomic theory continues to lag far behind the precision of experimental measurements in most spectra. Hydrogen- and helium-like ions are rare exceptions, where modern theoretical calculations can compete with experimental precision. Calculations of radiative transition rates and collisional cross-sections are especially difficult. In most theoretical studies, only a tiny fraction of the strongest transition rates involving low or moderately excited energy levels are calculated with a precision of a few percent or smaller. Quasi-random errors increase very quickly with decreasing transition strength. For atoms and ions with a medium or large number of electrons (20 or more), a common case is that the vast majority of the calculated results have large uncertainties (greater than a factor of two and up to several orders of magnitude). Rapid development of theoretical methods has notably improved the accuracy of calculations for several types of atomic systems (e.g., atoms/ions with one electron outside a closed shell or one hole in a closed shell), but even for these systems modern theory cannot handle highly excited and autoionizing states with adequate precision. Thus, analysis of observed spectra and accurate prediction of various atomic properties remain daunting tasks integral to our data evaluation process.
As of mid-2020, the data in ASD have been taken from about 3000 papers or other sources. Of these references, about 2000 are for energy levels and spectral lines (wavelengths, intensities, classifications), while the rest are for transition probabilities. At the same time, the NIST atomic bibliographic databases [8-10] contain about 34,000 references to published articles. About 26,000 of these articles contain numerical data on energy levels, spectral lines, and transition probabilities of atoms and atomic ions (the rest are either papers on line broadening, review articles, or descriptions of relevant experimental or theoretical methods or codes). From these numbers, one can see that only about 10% of all published data are incorporated in ASD. Thus, the bibliographic databases are a very important resource for researchers looking for data. The users seeking the latest and greatest data should not completely rely on the data sets displayed in ASD. They should use the links to “current literature” on each spectrum displayed at the top of ASD output and browse through the papers published after the date of the primary data source indicated in the ASD output. These links are carefully designed so that they retrieve the most complete lists of references either on energy levels, spectral lines, or transition probabilities. With each new release, some atomic data sets in ASD are being replaced or corrected. When this happens, the previous version of the data becomes inaccessible to online users. However, starting with ASD version 5, we have kept copies of each released version of ASD stored on our internal server and can provide the old data if requested by users. 3. Atomic Spectra Database The NIST Atomic Spectra Database is the world’s largest source of evaluated and recommended data for atomic parameters such as energy levels, spectral lines, radiative transition probabilities (and oscillator strengths), and ionization potentials. 
As of mid-2020, ASD contains 112,142 energy levels, 280,135 spectral lines, 118,203 transition probabilities, and 6019 ionization potentials. In the past few years, ASD received about 60,000 distinct data requests per day on average. The spectroscopic data are available for elements from H to Ds (Z = 110), although not for all ions of the heavy species. It is not surprising that the neutral and low-charge ions are best represented in ASD, since very high temperatures and/or energies are required to produce highly charged ions. Figure 1 presents the current contents of ASD for spectral lines. Different colors correspond to different numbers of lines for a particular element. As one might expect, it is the low-Z elements that are well covered within ASD. Indeed, such important elements for astrophysical and terrestrial spectroscopy as O, Si, and Fe have 5913, 5696, and 29,609 spectral lines, respectively, in the database. As for the heavy elements, W (14,703 lines), Pt (9136 lines), and Th (20,143 lines) are represented very well too: the first due to its importance for the magnetic fusion program, and the latter two due to their crucial role in the calibration of spectroscopic instruments on astronomical telescopes. As for the energy levels, the iron-group elements have the largest number of data available, varying from 2229 for V to 4429 for Fe. Figure 1. Contents of ASD version 5.7.1 (released in October 2019) for spectral lines. ASD offers a number of options for the search and selection of data. In addition to rich graphical tools to explore the output (see below), a user can perform a search for specific ions or an isoelectronic sequence in different spectral ranges, look for observed or Ritz wavelengths, ask for data on transition probabilities including forbidden lines, set limits on level energies or transition strengths, modify the output units, and so on.
Importantly, in most cases the output contains complete information on level quantum numbers, including configurations, terms, and total angular momentum. It is essential that data uncertainties for wavelengths are displayed by default. The output can be produced in either HTML or various ASCII formats (fixed column width, tab-delimited, or CSV) for easy downloads, and wavelength-ordered or LS-multiplet-ordered tables are generated on demand. Many of the output options for spectral lines are also available for the level output. Since the first online version of ASD, significant efforts were put into the development of advanced graphics and visualization options that would enhance users’ experience and facilitate analysis of the ASD data. Here we briefly discuss the main graphical options and tools available for ASD users. 3.1. Grotrian Diagrams The Grotrian diagrams were introduced in the early 1930s and have been available in ASD since 2005. They depict the energy structure, ionization energies, and radiative transitions of an ion using proper energy scales. The interactive diagrams of ASD provide immediate access to a wealth of information on levels and lines. Clicking on a particular level not only highlights the levels and all possible radiative transitions from and into it, but also displays almost all important physical data including its energy, configuration, atomic term, J-value, and so on. With the “isolate” button, a user can remove all other levels and lines not related to the selected level and thus better explore possible connections of the level. Then, by clicking on a line, a user can easily see the upper and lower levels of this radiative transition with all related data, and the corresponding wavelengths (both observed and Ritz), transition probability, oscillator strengths, and so on. Again, the isolation option is available for lines, too. 
There is a multitude of other options, including selection of particular configurations, subconfigurations, J-values, and multiplicities, setting a range of transition probabilities for the output, zooming in and out, and others. An example of a Grotrian diagram for the C+ ion is given in Figure 2. The atomic levels are grouped into series according to the nl values of the outermost electron. The ionization limits for different core configurations are shown by horizontal magenta lines. Additionally, the autoionizing levels can be easily identified. Finally, the box on the right shows detailed information on the selected spectral line, which is highlighted in red. Figure 2. Grotrian diagram for C II. The box represents the available data for the selected spectral line (in red). 3.2. Line Identification Plot This simple option was added on a direct request from ASD users who were interested not in relative line intensities but rather in positions of spectral lines. Therefore, selecting this option would produce a bar-like graph, where all spectral lines of each ion have the same height, which is different from other ions. This allows one to easily recognize the contributions from different ions to the measured spectra, although a more detailed analysis should utilize more sophisticated approaches (see below). 3.3. Saha-LTE Spectrum Saha-LTE (i.e., local thermodynamic equilibrium) spectra can be generated directly from the ASD input of the ion range for a specific element, spectral range, electron temperature and density, and ion temperature for Doppler broadening. A more advanced tool for the same kind of online calculations was developed more recently (see Section 3.4), but it was decided to keep the Saha-LTE service for legacy purposes. Both tools use the same physical model (i.e., Saha-LTE) but differ in flexibility of interactive options, as explained in the next subsection. 3.4.
LIBS Database In some cases, the fundamental atomic data stored in ASD can also be used to derive spectral characteristics of relatively simple equilibrium plasmas. As is well known, in the most general case of arbitrary plasma parameters, one has to implement some kind of collisional-radiative model in order to determine the atomic state populations and the ensuing spectra [15,16]. This requires utilization of large sets of collisional parameters (e.g., cross-sections or rate coefficients) that are not available in ASD. However, the ionization distributions, level populations, and spectral emissivities in optically thin low-temperature high-density equilibrium plasmas can be determined from analytical Saha–Boltzmann (or LTE) formulas that only depend on ionization energies, level energies, their statistical weights, and radiative transition probabilities. Fortunately, these are precisely the physical parameters that ASD contains, and thus Saha–Boltzmann spectra can be generated online easily and quickly. Such Saha-LTE distributions are typical for low-temperature, high-density plasmas. One of the best-known examples of such plasmas is produced in laser-induced breakdown experiments. Here, a short laser pulse evaporates a tiny amount of a sample material, heating it to about 1 eV. A relatively new analytical technique, laser-induced breakdown spectroscopy (LIBS), is used primarily to determine the compositions of elements in various materials, for instance, rocks and minerals on Mars. Currently, there are literally hundreds of LIBS-related papers published annually, exemplifying the importance of LIBS research in contemporary science and technology. The LIBS database at NIST allows a user to calculate the Saha-LTE spectrum for an arbitrary combination of chemical elements that can be found in ASD.
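The Boltzmann part of these LTE formulas is simple enough to sketch directly. The snippet below is a minimal illustration, not the actual code behind the LIBS database; the level energies, statistical weights, and A-value used in the usage example are made-up numbers, not ASD data.

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_populations(levels, T_K):
    """Relative LTE level populations: N_i/N ~ g_i * exp(-E_i / kT).

    levels: list of (E_eV, g) pairs (level energy in eV, statistical weight);
    T_K: temperature in kelvin.
    """
    w = [g * math.exp(-E / (KB_EV * T_K)) for E, g in levels]
    Z = sum(w)  # partition function over the listed levels
    return [x / Z for x in w]

def line_emissivity(n_upper, a_ul, e_photon_ev):
    # Optically thin bound-bound emissivity per ion:
    # (upper-level population) * (transition probability) * (photon energy)
    return n_upper * a_ul * e_photon_ev

# Toy two-level atom at kT of roughly 5 eV (illustrative numbers only)
pops = boltzmann_populations([(0.0, 2), (10.0, 4)], T_K=58020)
```

Only statistical weights, level energies, and A-values enter these formulas, which is exactly why ASD's contents suffice for Saha-LTE modeling without any collisional data.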
Of course, such a Saha-LTE calculation requires, as mentioned above, energy levels and transition probabilities, which, unfortunately, are not available for all spectral lines in our database. Nonetheless, since LIBS is typically used for low-temperature plasmas with low-charge ions, this situation is rectified by the fact that such ions are quite well represented in ASD. The actual list of physical parameters to be used on the LIBS input page includes the following: list of chemical elements with their percentages in the mix (the total sum must be 100%); range of wavelengths; choice of vacuum or vacuum+air output for the wavelengths; wavelength units; spectral resolution; electron temperature; and electron density. The advanced options also include the maximum ion charge and the minimum relative line intensity (with regard to the strongest line) to be included in the calculations. The output page presents a detailed graph of the total spectrum along with the original data table in a number of formats. This output is fully dynamic and thus allows on-the-fly modifications and recalculations of the spectrum by varying the input parameters. A user can not only download the calculated spectrum but also upload a file with a measured spectrum for a convenient comparison of the LIBS-calculated and experimental data. As mentioned above, the old Saha-LTE graphical tool accessible via the line section of ASD remains functional and provides essentially the same kind of modeling. The LIBS database interface was tailored to the typical needs of LIBS researchers. It is more flexible than the Saha-LTE tool in its ability to model arbitrary mixtures of elements, to provide online comparisons with experimental spectra, and in the variety of interactive options in plots. However, it has no provisions for the modeling of spectra of specific isotopes, which is available in the Saha-LTE tool.
Another difference is specific to the hydrogen spectrum, where the LIBS database uses only the unresolved lines corresponding to transitions between centers of gravity of fine-structure levels with certain principal quantum numbers, while the Saha-LTE tool allows for a choice between displaying spectra for unresolved or resolved fine structure. 4. Online Plasma Emission Modeling Spectroscopic diagnostics of plasmas utilize a variety of techniques and approximations. It is customary to separate plasma emission into free-free (bremsstrahlung), free-bound (radiative recombination), and bound-bound (line emission) parts corresponding to different paths of electron movement in the energy space (Figure 3). The bremsstrahlung intensity is largely determined by the mean charge of the ions in the plasma and the electron temperature. As for the bound-free and bound-bound transitions, in addition to the purely atomic parameters (i.e., spontaneous transition probability for the latter and radiative recombination cross-section for the former), the intensity depends on the population of the initial state, which in turn is affected by all possible important physical processes that can bring an electron to that particular atomic state. In order to determine the corresponding populations, it is customary to make use of advanced collisional-radiative models that use (time-dependent) rate equations for this task. Figure 3. Energy scheme for free-free, free-bound, and bound-bound electron transitions. FLYCHK is a time-dependent collisional-radiative code for calculation of level populations, plasma emissivities, intensities, radiative power losses, and opacities for elements from H to Au (Z = 79). Its original goal was to provide experimentalists with an extremely fast (a typical calculation takes only a few seconds) and reliable tool that can assist in diagnostics of various plasmas.
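The rate-equation idea behind such collisional-radiative models can be illustrated with a toy steady-state balance for a two-level ion. This is a sketch with invented rate coefficients, not FLYCHK's actual multi-level, multi-charge-state model:

```python
# Toy steady-state collisional-radiative balance for a two-level ion.
# Upward flow:   electron-impact excitation, rate N1 * c12 * n_e.
# Downward flow: spontaneous emission a21 plus collisional
#                de-excitation c21 * n_e.
# All rate coefficients here are invented for illustration.

def two_level_steady_state(n_e, c12, c21, a21):
    """Return (N1, N2), normalized so N1 + N2 = 1, from dN2/dt = 0:
       N1 * c12 * n_e = N2 * (a21 + c21 * n_e)."""
    up = c12 * n_e
    down = a21 + c21 * n_e
    n2 = up / (up + down)
    return 1.0 - n2, n2
```

Even this toy balance reproduces a qualitative feature of full models: at low electron density spontaneous decay dominates (the coronal limit), while at high density collisions dominate and the level populations approach their Boltzmann values. This density sensitivity is part of what makes collisional-radiative spectra useful for diagnostics.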
While the methods and techniques used in FLYCHK simulations are best applicable to complex ions in high-temperature mid- to high-density plasmas, it has been successfully used for relatively cold and dilute plasmas as well. Currently the FLYCHK code at NIST has more than 1200 registered users from dozens of countries and laboratories, and it is used many times every day. FLYCHK can calculate the ionization balance and emission/absorption spectra in steady-state and transient plasmas, including a variety of plasma effects. For instance, both electron temperature and electron, ion, or mass densities can have arbitrary dependencies on time. Moreover, the code allows for a mixture of elements, opacity effects, an arbitrary electron energy distribution function and radiation field, an ion temperature different from that for electrons (to better describe Doppler broadening), ionization potential lowering in different approximations (e.g., Stewart–Pyatt and Ecker–Kroll), and other effects. A typical set of the output parameters includes plasma mean ion charge and ionization distributions; radiative losses (bound-bound, free-bound, free-free, and total); and intensities, emissivities, opacities, and optical depths. An example of the graphical output for FLYCHK calculations is given in Figure 4. Here, the task was to determine level populations and other parameters for a steady-state Xe plasma at electron temperatures of 400, 1200, and 2000 eV, and electron densities of 10¹⁸ and 10²⁴ cm⁻³. This calculation took only 7 s on a very modest PC. In this figure, the ionization curves for higher densities are shifted to the right due to the well-known effect of enhanced ionization via excited states. Figure 4. FLYCHK ionization distributions for a steady-state Xe plasma at electron temperatures 400, 1200, and 2000 eV, and electron densities 10¹⁸ and 10²⁴ cm⁻³.
The ionization curves for higher densities are shifted to the right due to enhanced ionization via excited states. One of the most important features of FLYCHK is its capability to generate spectra. Figure 5 presents the bound-bound emissivity calculated for a steady-state Xe plasma at 2000 eV and 10¹⁸ cm⁻³ in the spectral range of 500 to 2000 eV. Such data allow direct comparison with the measured spectra and thus provide an important tool for diagnostics of plasmas. Figure 5. Bound-bound emissivity for a steady-state Xe plasma at 2000 eV and 10¹⁸ cm⁻³. 5. Collaborations The development of modern online atomic and molecular databases is increasingly becoming an international collaborative effort rather than a local project. There are many atomic and molecular databases varying in data coverage, data quality, and underlying database management systems. Nonetheless, programmatic and, in particular, data-exchange issues are rather common across the field, and thus interactions between database developers are highly beneficial. The NIST ASD team tries to develop and utilize such interactions to their fullest extent, and the benefits of collaborations (e.g., FLYCHK development) are numerous and visible. One example of such fruitful collaboration is provided by the Virtual Atomic and Molecular Data Centre (VAMDC). The current status of VAMDC is described in detail in a separate paper of this Special Issue. NIST was a part of the VAMDC project from its very beginning, and ASD is one of the many databases that can be directly queried from the VAMDC portal. Additionally, NIST scientists were actively involved in the development of the XML Schema for Atoms, Molecules and Solids (XSAMS), which has become the standard for data exchange within VAMDC. This collaboration has significantly affected our approach to database development and it continues to positively influence our present and future activities. 6.
Conclusions The development of atomic databases and online tools represents a very important activity at NIST. This work relies on rich experience gained over decades of production of evaluated and recommended atomic data. Modern research in atomic physics is clearly shifting away from “classical” tasks on identification of atomic spectra; nonetheless, it is hard to overestimate the value of precise atomic data for different types of spectroscopic research, from cold atoms and atomic clocks to industrial plasmas to astrophysics and extremely hot fusion plasmas. The Atomic Spectroscopy Group at NIST is fully committed to continuation of this important work of high value for the atomic physics community. The content of ASD is gradually expanding to ensure more complete coverage of atomic spectra. We will continue to update ASD in the form of new yearly releases. The current work includes a compilation of satellite spectra of all Li-like ions, which are important for plasma diagnostics in astrophysical and laboratory high-temperature plasmas. Spectra of atoms and ions of the iron group elements are also in our current plans. Priorities for our work on ASD are greatly influenced by user requests that we receive via contact links provided in the output pages of ASD. In parallel with data expansion and correction, we will continue improving the supporting software, which also relies to a large extent on interactions with users. The bibliographic databases on atomic spectra [8-10] will continue to be updated at least every week, ensuring complete coverage of current and past literature. Other online resources described in the article are also continuously being developed and improved. In particular, the database of collisional and radiative data used in the FLYCHK modeling is being upgraded with new data, and we will continue to improve its spectral modeling features. 
Acknowledgments: The databases and tools described above would not have been possible without the multi-decade efforts regarding the production of standard reference atomic data from the previous and current generations of NIST spectroscopists. We extend our gratitude to all of them. We are also grateful to H.-K. Chung and R.W. Lee for their enthusiastic willingness to deploy FLYCHK at NIST and their continuing support of its development. Funding: This work was funded in part by the National Aeronautics and Space Administration, grant number 80HQTR19T0051. Abbreviations The following abbreviations are used in this manuscript: ASD Atomic Spectra Database EBIT Electron Beam Ion Trap FTS Fourier Transform Spectroscopy LIBS Laser-Induced Breakdown Spectroscopy LTE Local Thermodynamic Equilibrium NIST National Institute of Standards and Technology SRD Standard Reference Data Footnotes Conflicts of Interest: The authors declare no conflict of interest. References 1.United States Code. Title 15: Commerce and Trade, Chapter 7A: Standard Reference Data Program, Edition 2018. Available online: (accessed on 3 September 2020). 2.Moore CE A Multiplet Table of Astrophysical Interest. Revised Edition. Part I—Table of Multiplets. Contrib. Princet. Univ. Obs. 1945, 20, 1–110. [Google Scholar] 3.Moore CE Atomic Energy Levels as Derived from the Analysis of Optical Spectra; Nat. Stand. Ref. Data Ser., NSRDS-NBS 35, Vol. I (Reprint of NBS Circ. 467, Vol. I, 1949); National Bureau of Standards: Washington, DC, USA, 1971; 359p. [Google Scholar] 4.Moore CE Atomic Energy Levels as Derived from the Analysis of Optical Spectra—Chromium through Niobium; Nat. Stand. Ref. Data Ser., NSRDS-NBS 35, Vol. II (Reprint of NBS Circ. 467, Vol. II, 1952); National Bureau of Standards: Washington, DC, USA, 1971; 230p. [Google Scholar] 5.Moore CE Atomic Energy Levels as Derived from the Analysis of Optical Spectra; Nat. Stand. Ref. Data Ser., NSRDS-NBS 35, Vol.
III (Reprint of NBS Circ. 467, Vol. III, 1958); National Bureau of Standards: Washington, DC, USA, 1971; 245p. [Google Scholar] 6.Kramida A; Ralchenko Y; Reader J; NIST ASD Team. NIST Atomic Spectra Database (Version 5.7.1); National Institute of Standards and Technology: Gaithersburg, MD, USA, 2019. Available online: (accessed on 31 July 2020). [Google Scholar] 7.Chung H-K; Chen MH; Morgan WL; Ralchenko Y; Lee RW FLYCHK: Generalized Population Kinetics and Spectral Model for Rapid Spectroscopic Analysis for All Elements. High Energy Density Phys. 2005, 1, 3–12. Available online: (accessed on 5 August 2020). [Google Scholar] 8.Kramida A Atomic Energy Levels and Spectra Bibliographic Database (Version 2.0); National Institute of Standards and Technology: Gaithersburg, MD, USA, 2010. Available online: (accessed on 31 July 2020). [Google Scholar] 9.Kramida A; Fuhr JR NIST Atomic Transition Probability Bibliographic Database (Version 9.0); National Institute of Standards and Technology: Gaithersburg, MD, USA, 2010. Available online: (accessed on 31 July 2020). [Google Scholar] 10.Kramida A; Fuhr JR Atomic Line Broadening Bibliographic Database (Version 3.0); National Institute of Standards and Technology: Gaithersburg, MD, USA, 2010. Available online: (accessed on 31 July 2020). [Google Scholar] 11.Kramida A; Nave G; Reader J The Cu II Spectrum. Atoms 2017, 5, 9. [Google Scholar] 12.Wiese WL The Critical Assessment of Atomic Oscillator Strengths. Phys. Scr. 1996, T65, 188–191. [Google Scholar] 13.Reader J Critical Compilation of Wavelengths and Energy Levels for Atoms and Atomic Ions. J. Plasma Fusion Res. Ser. 2006, 7, 327–330. Available online: (accessed on 3 September 2020). [Google Scholar] 14.Kramida A Critical Evaluation of Data on Atomic Energy Levels, Wavelengths, and Transition Probabilities. Fusion Sci. Technol. 2013, 63, 313–323. [Google Scholar] 15.Ralchenko YV; Maron Y Accelerated Recombination Due to Resonant Deexcitation of Metastable States. J. Quant.
Spectrosc. Radiat. Transf. 2001, 71, 609–621. [Google Scholar] 16.Ralchenko Y (Ed.) Modern Methods in Collisional-Radiative Modeling of Plasmas; Springer: Cham, Switzerland, 2016; doi: 10.1007/978-3-319-27514-7. [DOI] [Google Scholar] 17.Griem HR Principles of Plasma Spectroscopy; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar] 18.Kramida A; Olsen K; Ralchenko Y NIST LIBS Database (Version 1.0); National Institute of Standards and Technology: Gaithersburg, MD, USA, 2019. Available online: (accessed on 31 July 2020). [Google Scholar] 19.Dubernet M-L; Boudon V; Culhane JL; Dimitrijević MS; Fazliev AZ; Joblin C; Kupka F; Leto G; Le Sidaner P; Loboda PA; et al. Virtual Atomic and Molecular Data Centre. J. Quant. Spectrosc. Radiat. Transf. 2010, 111, 2151–2159. [Google Scholar] 20.Albert D; Antony BK; Ba YA; Babikov YL; Bollard P; Boudon V; Delahaye F; Del Zanna G; Dimitrijević MS; Drouin BJ; et al. A Decade with VAMDC: Results and Ambitions. Atoms 2020, submitted. [Google Scholar] 21.VAMDC Portal. 2020. Available online: (accessed on 3 September 2020).
https://dummit.cos.northeastern.edu/teaching_sp20_3527/numthy_4_unique_factorization_and_applications_v2.10.pdf
Number Theory (part 4): Unique Factorization and Applications (by Evan Dummit, 2020, v. 2.10)

Contents

4 Unique Factorization and Applications
4.1 Integral Domains, Euclidean Domains, and Unique Factorization
4.1.1 Integral Domains
4.1.2 Euclidean Domains and Division Algorithms
4.1.3 Irreducible and Prime Elements
4.1.4 Unique Factorization Domains
4.2 Modular Arithmetic in Euclidean Domains
4.2.1 Modular Congruences and Residue Classes
4.2.2 Arithmetic in R/rR
4.2.3 Units and Zero Divisors in R/rR
4.2.4 The Chinese Remainder Theorem
4.2.5 Orders, Euler's Theorem, Fermat's Little Theorem
4.3 Arithmetic in F[x]
4.3.1 Polynomial Functions, Roots of Polynomials
4.3.2 Finite Fields
4.3.3 Primitive Roots
4.4 Arithmetic in Z[i]
4.4.1 Residue Classes in Z[i]
. . . . . . . 26 4.4.2 Prime Factorization in Z[i] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 4 Unique Factorization and Applications In this chapter, we extend the notion of a division algorithm to more general rings and then formalize the idea of when a ring possesses unique factorization. Our ultimate goal is to extend number-theoretic properties of Z to other number systems, so we then generalize the notion of modular arithmetic and establish the analogues of the Chinese remainder theorem, Fermat's little theorem, and Euler's theorem in the general setting. We then apply our results in two rings relevant to number theory: the polynomial ring F[x] and the Gaussian integer ring Z[i]. In particular, we study the structure of the polynomials with coecients in Z/pZ and use the results to study nite elds and to characterize those m for which a primitive root exists modulo m. We also make in-depth study of the structure of the Gaussian integers, including giving a description of the modular arithmetic and prime factorization in Z[i]. Along the way, we will also study nite elds from several perspectives, and establish Fermat's characterization of the integers that can be written as the sum of two squares. 4.1 Integral Domains, Euclidean Domains, and Unique Factorization • Our goal in this section is to describe the class of rings that possess a division algorithm similar to that in Z, and then establish that such rings have unique factorization. 1 4.1.1 Integral Domains • Recall that in a general commutative ring R, we say that a|b if there exists k ∈R such that b = ak. • (Reminder) Denition: If R is a commutative ring, we say that x ∈R is a zero divisor if x ̸= 0 and there exists a nonzero y ∈R such that xy = 0. (Note in particular that 0 is not a zero divisor!) ◦We originally dened zero divisors when discussing the ring structure of Z/mZ. 
◦ In Z/6Z, since 2 · 3 = 4 · 3 = 0, the residue classes represented by 2, 3, and 4 are zero divisors.

◦ As a general philosophy, zero divisors can be a bit troublesome (at least, to a novice ring theorist), since they behave counter to one's natural intuition that products of nonzero elements should be nonzero.

• We recall a few important properties of zero divisors:

◦ An integer a is a zero divisor modulo m if and only if 1 < gcd(a, m) < m. In particular, Z/mZ contains zero divisors if and only if m is composite.

◦ For p prime, the ring Z/pZ is a field (which, in particular, contains no zero divisors).

◦ In a commutative ring with 1, a unit can never be a zero divisor.

• Definition: If R is a commutative ring with 1 that contains no zero divisors, R is called an integral domain (or often, just a domain).

◦ Example: Any field is an integral domain. More generally, any ring that is a subset of a field is an integral domain: hence, the integers Z and the ring Z[√D] for any D are integral domains (since they are all subsets of the field of complex numbers C).

◦ Example: The ring of polynomials F[x], where F is a field, is also an integral domain.

• Integral domains generally behave more nicely than arbitrary rings, because they obey more of the laws of arithmetic that are familiar from Z:

• Proposition (Properties of Integral Domains): If R is an integral domain, the following hold in R:

1. Multiplication in R has a cancellation law: specifically, if a ≠ 0 and ab = ac, then b = c.

◦ Proof: Suppose that ab = ac: then by rearranging we see that a(b − c) = 0.

◦ Then since R is an integral domain, we must either have a = 0 or b − c = 0. Hence, if a ≠ 0, we must have b − c = 0 and so b = c.

2. If a|b and b|a and a, b are nonzero, then a = bu for some unit u.

◦ Proof: Since b|a, there is some u with a = bu. Since a|b, there is some w with b = aw.

◦ Multiplying the two equations gives ab = abuw, so ab(1 − uw) = 0.
Since a and b are nonzero and R is a domain, we can cancel to see that 1 − uw = 0, so that u is a unit.

3. For any m ≠ 0, a|b is equivalent to (ma)|(mb).

◦ Proof: Follows directly from the cancellation property (1).

• The situation in property (2) of the proposition above is important enough that we give it a name:

• Definition: If R is a commutative ring with 1 and a′ = ua for some unit u, we say that a and a′ are associates.

◦ Notice that if a and a′ are associates, then a|a′ and a′|a. For this reason, associates have very similar divisibility properties to one another.

◦ Example: In Z, the elements 2 and −2 are associates. Indeed, n and −n are associates for any n ∈ Z.

◦ Example: In Z[i], the elements 1 + 2i and 2 − i are associates, because 2 − i = −i(1 + 2i).

◦ Example: In F_3[x], the elements x^2 + 2 and 2x^2 + 1 are associates, because 2x^2 + 1 = 2(x^2 + 2).

◦ We will remark that being associate is an equivalence relation on R.

• Definition: Let a, b ∈ R where R is an integral domain. We say d is a common divisor if d|a and d|b, and we say that a common divisor d ∈ R is a greatest common divisor of a and b if d ≠ 0 and for any other common divisor d′, it is true that d′|d.

◦ Example: 2 is a greatest common divisor of 14 and 20 in Z, and −2 is also a greatest common divisor of 14 and 20. In particular, note that the greatest common divisor is no longer unique.

◦ Example: In the polynomial ring C[x], a greatest common divisor of x^2 − 4 and x^2 − 5x + 6 is x − 2: we see x − 2 divides both x^2 − 4 = (x − 2)(x + 2) and x^2 − 5x + 6 = (x − 2)(x − 3), and there cannot be any common divisor of greater degree.

◦ Note that we have not given a complete proof that these are actually greatest common divisors, since we would need to find all other common divisors and verify that they do divide the claimed gcd. (We will establish the correctness of these calculations shortly.)
• As an important warning, we will observe that in arbitrary integral domains, there can exist pairs (a, b) that do not possess a greatest common divisor.

• Example: Show that 2(1 + √−5) and 6 do not possess a greatest common divisor in the ring Z[√−5].

◦ First observe that 2 and 1 + √−5 are both common divisors of 2(1 + √−5) and 6.

◦ Now we show using norms that there is no element d that divides 2(1 + √−5) and 6 that is also itself divisible by 2 and 1 + √−5.

◦ So suppose d does divide 2(1 + √−5) and 6. Then necessarily N(d) would divide N(2 + 2√−5) = 24 and N(6) = 36, so N(d) divides 12.

◦ Also, N(d) would also necessarily be a multiple of N(2) = 4 and N(1 + √−5) = 6, hence be a multiple of 12.

◦ The only possibility is N(d) = 12, but there are no elements of norm 12, since there are no integer solutions to a^2 + 5b^2 = 12. Thus, there cannot be any such element d, meaning that 2(1 + √−5) and 6 do not possess a greatest common divisor.

• If two elements do have a greatest common divisor, then it is unique up to taking associates:

• Proposition (GCDs and Associates): Let R be a commutative ring with 1 and a, b ∈ R. If d_1 and d_2 are both greatest common divisors of a and b, then d_1 and d_2 are associates. Conversely, if d is a greatest common divisor of a and b, then so is any associate of d.

◦ Proof: Since d_1 is a gcd, and d_2 is a common divisor, we see d_2|d_1, and similarly d_1|d_2. By property (2) of divisibility in integral domains above, this implies d_1 = ud_2 for a unit u, so d_1 and d_2 are associates.

◦ For the other statement, suppose that d is a greatest common divisor of a and b and ud is any associate of d (where by assumption u is a unit).

◦ Then since d|a, there exists c ∈ R with a = dc, and so a = (cu^(−1))(ud). This means (ud)|a. Likewise, (ud)|b, so ud is also a common divisor of a and b.

◦ Also, if d′ is any other common divisor, then d′|d by assumption. Since d|(ud) this means d′|(ud), so every common divisor divides ud. Hence ud is also a greatest common divisor of a and b.
• In particular we can also define the analogue of relatively prime elements in an arbitrary domain:

• Definition: If R is a commutative ring with 1 and 1 is a greatest common divisor of r and s, we say r and s are relatively prime.

◦ By the proposition above, r and s are relatively prime if and only if they have a greatest common divisor that is a unit.

◦ Example: 2 and 5 are relatively prime in Z since they have a gcd of 1.

4.1.2 Euclidean Domains and Division Algorithms

• We now discuss what it means for an integral domain to possess a division algorithm.

• Definition: If R is a domain, any function N : R → N ∪ {0} such that N(0) = 0 is called a norm on R.

◦ Observe that this is a rather weak property, and that any given domain may possess many different norms.

◦ We will mention that the norm we have defined on the rings Z[√D] is not technically a norm under this definition (it is, however, if we take the absolute value). We will leave the exact choice of whether the absolute value is included up to context.

• Definition: A Euclidean domain (or domain with a division algorithm) is an integral domain R that possesses a norm N with the property that, for every a and b in R with b ≠ 0, there exist some q and r in R such that a = qb + r and either r = 0 or N(r) < N(b).

◦ The purpose of the norm function is to allow us to compare the size of the remainder to the size of the original element.

◦ Example: Any field is a Euclidean domain, because any norm will satisfy the defining condition. This follows because for every a and b with b ≠ 0, we can write a = qb + 0 with q = a · b^(−1).

◦ Example: The integers Z are a Euclidean domain, because if we set N(n) = |n|, then, as we have already proven, the standard division algorithm allows us to write a = qb + r with either r = 0 or |r| < |b|.
• Before we give additional examples, we will remark that the reason Euclidean domains have that name is that we can perform the Euclidean algorithm in such a ring (in precisely the same manner as in Z):

• Definition: If R is a Euclidean domain, then for any a, b ∈ R with b ≠ 0, the Euclidean algorithm in R consists of repeatedly applying the division algorithm to a and b as follows, until a remainder of zero is obtained:

a = q_1 b + r_1
b = q_2 r_1 + r_2
r_1 = q_3 r_2 + r_3
...
r_{k−1} = q_k r_k + r_{k+1}
r_k = q_{k+1} r_{k+1}.

◦ By the construction of the division algorithm, we know that N(r_1) > N(r_2) > · · · , and since N(r_i) is a nonnegative integer for each i, this sequence must eventually terminate with the last remainder equalling zero (else we would have an infinite decreasing sequence of nonnegative integers).

• In precisely the same way as in Z, we can use the Euclidean algorithm to establish the existence of greatest common divisors and give a procedure for calculating them:

• Theorem (Bézout): If R is a Euclidean domain and a and b are arbitrary elements with b ≠ 0, then the last nonzero remainder d arising from the Euclidean algorithm applied to a and b is a greatest common divisor of a and b. Furthermore, there exist elements x, y ∈ R such that d = ax + by.

◦ The ideas in the proof are the same as for the proof over Z.

◦ Proof: By an easy induction (starting with r_k = q_{k+1} r_{k+1}), d = r_{k+1} divides r_i for each 1 ≤ i ≤ k. Thus we see d|a and d|b, so the last nonzero remainder is a common divisor.

◦ Suppose d′ is some other common divisor of a and b. By another easy induction (starting with d′|(a − q_1 b) = r_1), it is easy to see that d′ divides r_i for each 1 ≤ i ≤ k + 1, and therefore d′|d. Hence d is a greatest common divisor.

◦ For the existence of x and y with d = ax + by, we simply observe (by yet another easy induction starting with r_1 = a − q_1 b) that each remainder can be written in the form r_i = x_i a + y_i b for some x_i, y_i ∈ R.
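In Z, the Bézout coefficients x and y can be computed alongside the remainders by back-substitution. A minimal Python sketch (not from the notes; the function name is our own) of this extended Euclidean algorithm:

```python
def ext_gcd(a, b):
    """Return (d, x, y) with d = gcd(a, b) and d = a*x + b*y,
    by running the Euclidean algorithm and back-substituting."""
    if b == 0:
        return (a, 1, 0)
    d, x, y = ext_gcd(b, a % b)
    # if d = b*x + (a mod b)*y and a mod b = a - (a//b)*b,
    # then d = a*y + b*(x - (a//b)*y)
    return (d, y, x - (a // b) * y)

d, x, y = ext_gcd(14, 20)
print(d, 14 * x + 20 * y)   # both equal gcd(14, 20) = 2
```

The same recursion works verbatim in any Euclidean domain once `%` and `//` are replaced by that ring's division algorithm.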
• Corollary: Any two elements in a Euclidean domain always possess a greatest common divisor.

• With the motivation for our choice of definition in hand, we can now give our two fundamental examples of Euclidean domains. First, we describe the division algorithm in Z[i]:

• Proposition (Z[i] is Euclidean): The Gaussian integers Z[i] are a Euclidean domain, under the norm N(a + bi) = a^2 + b^2.

◦ Explicitly, given a + bi and c + di in Z[i], we will describe how to produce¹ q, r ∈ Z[i] such that a + bi = q(c + di) + r, and N(r) ≤ (1/2)N(c + di). This is even stronger than is needed (once we note that the only element of norm 0 is 0).

◦ Proof: We need to describe the algorithm for producing q and r when dividing an element a + bi by an element c + di.

◦ If c + di ≠ 0, then we can write (a + bi)/(c + di) = x + iy, where x = (ac + bd)/(c^2 + d^2) and y = (bc − ad)/(c^2 + d^2) are real numbers.

◦ Now we define q = s + ti, where s is the integer closest to x and t is the integer closest to y, and set r = (a + bi) − q(c + di). Clearly, (a + bi) = q(c + di) + r.

◦ All we need to do now is show N(r) ≤ (1/2)N(c + di): first observe that r/(c + di) = (a + bi)/(c + di) − q = (x − s) + (y − t)i.

◦ Then because |x − s| ≤ 1/2 and |y − t| ≤ 1/2 by construction, we see that |r/(c + di)|^2 = |(x − s) + (y − t)i|^2 = (x − s)^2 + (y − t)^2 ≤ 1/4 + 1/4 = 1/2.

◦ Clearing the denominator yields N(r) = |r|^2 ≤ (1/2)|c + di|^2 = (1/2)N(c + di), as desired.

• By using the Euclidean algorithm (by computing the quotient and remainder as described in the proof above) we may now compute greatest common divisors in Z[i] and write them as explicit linear combinations, just as we did in Z:

• Example: Find a greatest common divisor of 50 − 50i and 43 − i in Z[i], and write it as an explicit linear combination of 50 − 50i and 43 − i.

◦ We use the Euclidean algorithm. Dividing 43 − i into 50 − 50i yields (50 − 50i)/(43 − i) = 44/37 − (42/37)i, so rounding to the nearest Gaussian integer yields the quotient q = 1 − i. The remainder is then 50 − 50i − (1 − i)(43 − i) = 8 − 6i.
◦ Next, dividing 8 − 6i into 43 − i yields (43 − i)/(8 − 6i) = 7/2 + (5/2)i, so rounding to the nearest Gaussian integer (there are four possibilities, so we just choose one) yields the quotient q = 3 + 2i. The remainder is then 43 − i − (3 + 2i)(8 − 6i) = 7 + i.

◦ Finally, dividing 7 + i into 8 − 6i yields (8 − 6i)/(7 + i) = 1 − i, so the quotient is 1 − i and the remainder is 0.

◦ The last nonzero remainder is 7 + i, so it is a gcd. To express the gcd as a linear combination, we solve for the remainders:

8 − 6i = 1 · (50 − 50i) − (1 − i) · (43 − i)
7 + i = (43 − i) − (3 + 2i)(8 − 6i)
      = (43 − i) − (3 + 2i) · (50 − 50i) + (3 + 2i)(1 − i) · (43 − i)
      = (−3 − 2i) · (50 − 50i) + (6 − i) · (43 − i)

and so we have 7 + i = (−3 − 2i) · (50 − 50i) + (6 − i) · (43 − i).

¹ For the rings Z[√D] in general, the function N(a + b√D) = a^2 − Db^2 is a norm, but it does not in general give a division algorithm. Only for certain small values of |D|, like D = −1, will this function allow us to construct quotients and remainders where the remainder is smaller (in norm) than the element being divided by.

• Example: Find a greatest common divisor of 11 + 18i and 8 − 3i in Z[i], and write it as an explicit linear combination of 11 + 18i and 8 − 3i.

◦ We use the Euclidean algorithm:

11 + 18i = 2i · (8 − 3i) + (5 + 2i)
8 − 3i = (1 − i) · (5 + 2i) + 1
5 + 2i = (5 + 2i) · 1

◦ The last nonzero remainder is 1, so it is a gcd. To express the gcd as a linear combination, we solve for the remainders:

5 + 2i = (11 + 18i) − 2i · (8 − 3i)
1 = (8 − 3i) − (1 − i) · (5 + 2i)
  = (8 − 3i) − (1 − i)(11 + 18i) + 2i(1 − i)(8 − 3i)
  = (−1 + i)(11 + 18i) + (3 + 2i)(8 − 3i)

and so we have 1 = (−1 + i)(11 + 18i) + (3 + 2i)(8 − 3i).

• We now show that F[x] is Euclidean:

• Proposition (F[x] is Euclidean): If F is any field, the ring of polynomials F[x] in the variable x with coefficients in F is a Euclidean domain, under the norm given by N(p(x)) = deg(p).

◦ The idea is simply to show the validity of polynomial long division.
◦ The reason we require F to be a field is that we need to be able to divide by arbitrary nonzero coefficients to be able to perform the divisions. (Over Z, for instance, we cannot divide x^2 by 2x and get a remainder that is a constant polynomial.)

◦ Explicitly, we will show that if a(x) and b(x) are polynomials with b(x) ≠ 0, then there exist q(x) and r(x) such that a(x) = q(x)b(x) + r(x), and either r(x) = 0 or deg(r) < deg(b).

◦ Proof: We prove this by induction on the degree n of a(x). The base case is trivial, as we may take q = r = 0 if a = 0.

◦ Now suppose the result holds for all polynomials a(x) of degree ≤ n − 1. If deg(b) > deg(a) then we can simply take q = 0 and r = a, so now also assume deg(b) ≤ deg(a).

◦ Write a(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_0 and b(x) = b_m x^m + · · · + b_0, where b_m ≠ 0 since b(x) ≠ 0.

◦ Observe that the polynomial a†(x) = a(x) − (a_n/b_m) x^{n−m} b(x) has degree less than n, since we have cancelled the leading term of a(x). (Here we are using the fact that F is a field, so that a_n/b_m also lies in F.)

◦ By the induction hypothesis, a†(x) = q†(x)b(x) + r†(x) for some q†(x) and r†(x) with r† = 0 or deg(r†) < deg(b).

◦ Then a(x) = (q†(x) + (a_n/b_m) x^{n−m}) b(x) + r†(x), so q(x) = q†(x) + (a_n/b_m) x^{n−m} and r(x) = r†(x) satisfy all of the requirements.

◦ Remark: It is also straightforward to see that the quotient and remainder are unique under the requirement that deg(r) < deg(b), by observing that if a = qb + r = q′b + r′, then r − r′ has degree less than deg(b) but is also divisible by b(x), hence must be zero.

• Example: Find a greatest common divisor d(x) of the polynomials p = x^6 + 2 and q = x^8 + 2 in F_3[x], and then write the gcd as a linear combination of p and q.

◦ We apply the Euclidean algorithm: we have

x^8 + 2 = x^2 (x^6 + 2) + (x^2 + 2)
x^6 + 2 = (x^4 + x^2 + 1)(x^2 + 2)

and so the last nonzero remainder is x^2 + 2.

◦ By back-solving, we see that x^2 + 2 = 1 · (x^8 + 2) − x^2 (x^6 + 2).
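The F_3[x] computation above can be checked mechanically. Here is a minimal Python sketch (the helper names are our own) of polynomial long division and the Euclidean algorithm over F_p, with a polynomial stored as its list of coefficients [c_0, c_1, ...] from lowest degree up:

```python
def trim(f):
    # drop zero coefficients at the high-degree end
    while f and f[-1] == 0:
        f.pop()
    return f

def poly_divmod(f, g, p):
    """Long division of f by g in F_p[x]; g must be nonzero and trimmed."""
    f = trim([c % p for c in f])
    q = [0] * max(len(f) - len(g) + 1, 0)
    inv = pow(g[-1], -1, p)            # inverse of the leading coefficient of g
    while len(f) >= len(g):
        shift = len(f) - len(g)
        c = (f[-1] * inv) % p          # cancel the leading term of f
        q[shift] = c
        for i, gi in enumerate(g):
            f[i + shift] = (f[i + shift] - c * gi) % p
        trim(f)
    return q, f

def poly_gcd(f, g, p):
    """Last nonzero remainder of the Euclidean algorithm in F_p[x]."""
    f, g = trim([c % p for c in f]), trim([c % p for c in g])
    while g:
        _, r = poly_divmod(f, g, p)
        f, g = g, r
    return f

# gcd(x^8 + 2, x^6 + 2) in F_3[x]:
print(poly_gcd([2,0,0,0,0,0,0,0,1], [2,0,0,0,0,0,1], 3))   # [2, 0, 1], i.e. x^2 + 2
```

The three-argument `pow` computes the modular inverse of the leading coefficient, which is exactly the step that requires the coefficients to lie in a field.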
• When performing the Euclidean algorithm in F[x], the coefficients can often become quite large or complicated:

• Example: Find a greatest common divisor d(x) of the polynomials p = x^3 + 7x^2 + 9x − 2 and q = x^2 + 4x in ℝ[x], and then write the gcd as a linear combination of p and q.

◦ We apply the Euclidean algorithm: we have

x^3 + 7x^2 + 9x − 2 = (x + 3)(x^2 + 4x) + (−3x − 2)
x^2 + 4x = (−10/9 − (1/3)x)(−3x − 2) + (−20/9)
−3x − 2 = ((27x + 18)/20) · (−20/9)

and so the last nonzero remainder is −20/9. Thus, by rescaling, we see that the gcd is 1.

◦ By back-solving, we see that

−3x − 2 = 1 · (x^3 + 7x^2 + 9x − 2) − (x + 3) · (x^2 + 4x)
−20/9 = x^2 + 4x + (10/9 + (1/3)x)(−3x − 2)
      = (10/9 + (1/3)x) · (x^3 + 7x^2 + 9x − 2) − (7/3 + (19/9)x + (1/3)x^2) · (x^2 + 4x)

and thus by rescaling (multiplying through by −9/20), we obtain

1 = (−1/2 − (3/20)x) · (x^3 + 7x^2 + 9x − 2) + (21/20 + (19/20)x + (3/20)x^2) · (x^2 + 4x).

4.1.3 Irreducible and Prime Elements

• Now that we have given the definition of a general ring having a division algorithm, we would like to discuss when a ring has unique factorization.

◦ In order to do this, we first need a notion generalizing the idea of a prime number in Z: namely, an element that does not have any nontrivial divisors.

• Definition: If R is an integral domain, a nonzero element a ∈ R is irreducible if it is not a unit and, for any factorization a = bc with b, c ∈ R, one of b and c must be a unit.

◦ Example: The irreducible elements of Z are precisely the prime numbers (and their negatives).

◦ Example: The element 5 is reducible in Z[i], since we can write 5 = (2 + i)(2 − i) and neither 2 + i nor 2 − i is a unit in Z[i].

◦ In Z[√D], we can often test for reducibility using norms.

◦ Example: The element 2 + i is irreducible in Z[i]: if 2 + i = bc for some b, c ∈ Z[i], then taking norms yields 5 = N(2 + i) = N(b)N(c), and since 5 is a prime number, one of N(b) and N(c) would necessarily be ±1, and then b or c would be a unit. Likewise, 2 − i is also irreducible.
◦ Example: The element 2 is irreducible in Z[√−5]: if 2 = bc, then taking norms yields 4 = N(2) = N(b)N(c), and since there are no elements of norm 2 in Z[√−5], one of N(b) and N(c) would necessarily be ±1, and then b or c would be a unit.

◦ Important Warning: Whether a given element is irreducible depends on the ring R of which it is an element. For example, 5 is irreducible in the ring Z, but 5 is not irreducible in the ring Z[i], because in this ring we can write 5 = (2 + i)(2 − i) and neither of these elements is a unit.

• The irreducible elements of F[x] are the irreducible polynomials of positive degree: namely, the polynomials that cannot be factored into a product of polynomials of smaller positive degree.

◦ In polynomial rings, we can often use degrees to see immediately that a given polynomial is irreducible, since if f = gh is a nontrivial factorization, then deg(f) = deg(g) + deg(h), where deg(g) and deg(h) must both be positive.

◦ Example: Any polynomial of degree 1 is irreducible, since if f = gh with deg(g), deg(h) positive, then deg(f) = deg(g) + deg(h) ≥ 2.

◦ Example: The polynomial x^2 + x + 1 is irreducible in F_2[x], since the only possible factorizations would be x · x, x · (x + 1), or (x + 1) · (x + 1), and none of these is equal to x^2 + x + 1.

◦ Example: The polynomial x^4 + 4 is reducible in Q[x], since we can write x^4 + 4 = (x^2 + 2x + 2)(x^2 − 2x + 2).

◦ Example: The polynomial x^2 + 1 is irreducible in ℝ[x], since there is no way to write it as the product of two linear polynomials with real coefficients.

◦ Important Warning: Whether a given polynomial is irreducible depends on the ring F[x] of which it is an element. For example, x^2 + 1 is irreducible in ℝ[x] but not in C[x], since we can write x^2 + 1 = (x + i)(x − i) in C[x].

• An irreducible element behaves much like a prime number in Z.
However, there is a separate notion of a prime element in a general domain:

• Definition: If R is an integral domain, a nonzero element p ∈ R is prime if it is not a unit and, for any a, b ∈ R such that p|ab, it must be the case that p|a or p|b.

◦ Example: The prime elements of Z are precisely the prime numbers (and their negatives).

◦ Example: The prime elements of F[x] are the irreducible polynomials of positive degree.

◦ Based on these two examples, it may seem that irreducible and prime elements are always the same. They are indeed closely related, but they do not always coincide:

◦ Non-Example: Although the element 2 is irreducible in Z[√−5], it is not prime: note that 6 = (1 + √−5)(1 − √−5) is divisible by 2, but neither 1 + √−5 nor 1 − √−5 is divisible by 2.

• In fact, prime elements are always irreducible:

• Proposition (Primes are Irreducible): If R is an integral domain and p ∈ R is a prime element, then p is irreducible.

◦ Proof: Suppose p is prime and has a factorization p = bc.

◦ Then p|bc, so by definition it must be the case that p|b or p|c; by relabeling, assume p|b, with b = pu for some u.

◦ Then p = puc, so p(1 − uc) = 0. Cancelling p yields uc = 1, so c is a unit.

◦ Thus, in any factorization p = bc, at least one term must be a unit, and this means p is irreducible.

4.1.4 Unique Factorization Domains

• We would like to say a ring possesses unique factorization if every nonzero element can be written uniquely as the product of irreducibles. However, there are some issues with this attempted definition.

◦ To illustrate, observe that in the Gaussian integers, we can write 5 = (2 − i)(2 + i) = (1 + 2i)(1 − 2i).

◦ It would seem that these are two different factorizations, but we should really consider them the same, because all we have done is moved some units around: (2 − i) · i = 1 + 2i and (2 + i) · (−i) = 1 − 2i.

◦ We should declare that two factorizations are equivalent if the only differences between them are by moving units around, which is equivalent to replacing elements with associates.
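The earlier non-example (2 is irreducible but not prime in Z[√−5]) is easy to verify numerically. A short Python sketch (not from the notes; the pair encoding is our own), storing a + b√−5 as the pair (a, b):

```python
# Elements of Z[√-5] as pairs (a, b) representing a + b√-5.
def mul(z, w):
    a, b = z
    c, d = w
    # (a + b√-5)(c + d√-5) = (ac - 5bd) + (ad + bc)√-5
    return (a * c - 5 * b * d, a * d + b * c)

def divisible_by_two(z):
    # 2 divides a + b√-5 exactly when both a and b are even
    return z[0] % 2 == 0 and z[1] % 2 == 0

prod = mul((1, 1), (1, -1))          # (1 + √-5)(1 - √-5)
print(prod)                          # (6, 0): the product is 6
assert divisible_by_two(prod)        # 2 divides the product...
assert not divisible_by_two((1, 1))  # ...but divides neither factor,
assert not divisible_by_two((1, -1)) # so 2 is not prime in Z[√-5]
```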
• Definition: An integral domain R is a unique factorization domain (UFD) if (i) every nonzero nonunit r ∈ R can be written as a finite product of irreducibles r = p_1 p_2 · · · p_k, and (ii) such a factorization is unique up to associates: if r = q_1 q_2 · · · q_d is some other factorization, then d = k and there is some reordering of the factors such that p_i is associate to q_i for each 1 ≤ i ≤ k.

• We often prefer to speak of a prime factorization, rather than a factorization into irreducibles. In a Euclidean domain, these are equivalent:

• Proposition (Primes and Irreducibles in Euclidean Domains): If R is a Euclidean domain, then p ∈ R is prime if and only if it is irreducible.

◦ Proof: We showed that primes are irreducible earlier. Now suppose p is irreducible and that p|ab: we wish to show that p|a or p|b.

◦ If p|a, we are done, so suppose p ∤ a, and let d be a gcd of p and a, which exists since R is a Euclidean domain.

◦ By hypothesis, d divides p, so (since p is irreducible) either d is a unit, or d = up for some unit u: however, the latter cannot happen, because then up (hence p) would divide a. Hence d is a unit, say with inverse e.

◦ By the Euclidean algorithm, we see that there exist x and y such that xp + ya = d. Multiplying by be and regrouping the terms yields (bxe)p + ey(ab) = (de)b = b. Since p divides both terms on the left-hand side, we conclude p|b.

• Our main result is the following:

• Theorem (Euclidean Domains are UFDs): If R is a Euclidean domain, then R is a unique factorization domain.

◦ The main ideas of the proof are the same as those over Z, but they are a bit obfuscated by some of the technical difficulties. The existence portion of the proof contains precisely the same ideas as in our characterization of the gcd of a collection of integers as the minimal positive linear combination of those integers. The uniqueness portion is likewise essentially the same as for Z, namely, an induction argument on the number of irreducible terms in a factorization.
◦ Proof (Existence): Let R be a Euclidean domain and r a nonzero nonunit.

∗ If r is irreducible, we are done. Otherwise, by definition we can write r = r_1 r_2 where neither r_1 nor r_2 is a unit.

∗ If both r_1 and r_2 are irreducible, we are done: otherwise, we can continue factoring (say) r_1 = r_{1,1} r_{1,2} with neither term a unit. If r_{1,1} and r_{1,2} are both irreducible, we are done: otherwise, we factor again.

∗ We claim that this process must terminate eventually: otherwise, there necessarily exists an infinite chain of elements x_1, x_2, x_3, ..., such that x_1|r, x_2|x_1, x_3|x_2, and so forth, where no two elements are associates.

∗ Consider the set I of all (finite) R-linear combinations I = {r_1 x_1 + r_2 x_2 + · · · + r_k x_k : k ≥ 1, r_i ∈ R}, and let y ∈ I be a nonzero element of I of minimal norm.

∗ We claim that every element in I is divisible by y: otherwise, if there were some element s ∈ I with y not dividing s, applying the division algorithm to write s = qy + r† would yield the element r† = (s − qy) ∈ I of smaller norm than y, a contradiction.

∗ But now since y ∈ I, we can write y = r_1 x_1 + · · · + r_d x_d for some d. Since x_d|x_{d−1}| · · · |x_1, the right-hand side is a multiple of x_d, meaning x_d|y.

∗ But now since x_d|y and y|x_{d+1}, we conclude x_d|x_{d+1}. But by assumption, x_{d+1}|x_d, meaning that they are associates; this is a contradiction.

∗ Hence the factoring process must terminate, as claimed.

◦ Proof (Uniqueness): Let R be a Euclidean domain and r be a nonzero nonunit. We prove the factorization of r is unique by induction on the number of irreducible factors of r = p_1 p_2 · · · p_d.

∗ If d = 0, then r is a unit. If r had some other factorization r = qc with q irreducible, then q would divide a unit, hence be a unit (impossible).

∗ Now suppose d ≥ 1 and that r = p_1 p_2 · · · p_d = q_1 q_2 · · · q_k has two factorizations into irreducibles.

∗ Since p_1|(q_1 · · · q_k) and p_1 is irreducible (hence prime, since R is Euclidean), repeatedly applying the fact that p prime and p|ab implies p|a or p|b shows that p_1 must divide q_i for some i.
∗ Then q_i = p_1 u for some u: then since q_i is irreducible (and p_1 is not a unit), u must be a unit, so p_1 and q_i are associates.

∗ Cancelling then yields the equation p_2 · · · p_d = (u q_2) · · · q_k, which is a product of fewer irreducibles. By the induction hypothesis, such a factorization is unique up to associates. This immediately yields the desired uniqueness result for r as well.

◦ Remark (for those who like ring theory): The set I from the existence proof is an ideal. The underlying idea of the proof given above is to show that any ideal in a Euclidean domain is actually principal (generated by a single element): in other words, that Euclidean domains are principal ideal domains. In fact, it is actually true that a Euclidean domain is a principal ideal domain, and that any principal ideal domain is a unique factorization domain.

• Corollary: The Gaussian integers Z[i] are a unique factorization domain, as is the polynomial ring F[x] for any field F.

• We will note that, although most of the unique factorization domains we will discuss are also Euclidean domains, there exist UFDs that are not Euclidean domains².

4.2 Modular Arithmetic in Euclidean Domains

• We have previously described the division algorithm over Z and used it to study modular arithmetic in Z. The goal of this section is to show that there is a meaningful extension of the notion of modular arithmetic modulo a general element r in a Euclidean domain R, and then to establish the analogues of the major results from Z: the Chinese remainder theorem, and the theorems of Fermat and Euler.

◦ Our primary interest is when R is Z[i] or F[x], for F a field.

◦ However, many of the notions will hold in general, and so we will work in the general setting whenever possible. We will see that almost all of the proofs are exactly the same as over Z.

◦ We will also, when possible, remark when the results we prove hold (or fail to hold) for more general classes of rings.
◦ Remark (for those who like ring theory): All of what we do here is subsumed by the theory of ideals in a general ring R, and our construction of modular arithmetic is a special case of the quotient of a ring by an ideal. (Specifically, we are studying the quotient rings of the form R/I where I is a principal ideal. Every ideal is principal in a Euclidean domain, so we do not lose anything here by studying quotients without speaking of ideals explicitly.)

4.2.1 Modular Congruences and Residue Classes

• Our underlying definitions of modular congruences and residue classes are exactly the same as over Z:

• Definition: Let R be a commutative ring with 1. If a, b, r ∈ R, we say that a is congruent to b modulo r, written a ≡ b (mod r), if r|(b − a). The residue class of a modulo r, denoted ā, is the set S = {a + dr : d ∈ R} of all elements in R congruent to a modulo r.

◦ Example: In Z[i], it is true that 13 − 3i ≡ 2 − i modulo 3 + 4i, because (13 − 3i) − (2 − i) = (1 − 2i)(3 + 4i).

◦ Example: In F_2[x], it is true that x^3 + x ≡ x + 1 modulo x^2 + x + 1, because (x^3 + x) − (x + 1) = (x + 1) · (x^2 + x + 1).

• All of the properties of residue classes and congruences from Z extend to R:

• Proposition (Congruences and Residue Classes): Let R be an integral domain. For any r, a, b, c, d ∈ R, the following are true:

1. We have a ≡ a (mod r); a ≡ b (mod r) if and only if b ≡ a (mod r); and if a ≡ b (mod r) and b ≡ c (mod r), then a ≡ c (mod r).

2. If a ≡ b (mod r) and c ≡ d (mod r), then a + c ≡ b + d (mod r) and ac ≡ bd (mod r).

3. We have a ≡ b (mod r) if and only if ā = b̄.

4. Two residue classes modulo r are either disjoint or identical.

² One example is the ring C[x, y] of polynomials in the two variables x and y, with coefficients in C. There is no Euclidean division algorithm in this ring (the degree map is not a Euclidean norm since, e.g., there is no way to divide y^2 by x and obtain a remainder of degree zero). However, polynomials in C[x, y] can still be factored uniquely into a product of irreducible terms.
◦ Proof: The proofs of all of these statements are the same as over Z.

• Proposition (Modular Arithmetic): The set R/rR consisting of all residue classes in R modulo r forms a ring under the operations induced from R: the sum of the classes of a and b is the class of a + b, and their product is the class of ab.

◦ Proof: The most difficult part is showing that the addition and multiplication operations are well-defined: that if we choose different elements a′ ∈ ā and b′ ∈ b̄, the residue class of a′ + b′ is the same as that of a + b, and similarly for the product.

◦ Explicitly, suppose a′ ∈ ā and b′ ∈ b̄. Then there exists k_1 ∈ R such that a′ = a + k_1 r, and also k_2 ∈ R such that b′ = b + k_2 r.

◦ Then a′ + b′ = (a + b) + r(k_1 + k_2), and since these differ by a multiple of r, we see that the class of a′ + b′ equals the class of a + b, so addition is well-defined.

◦ Similarly, a′b′ = (a + k_1 r)(b + k_2 r) = ab + r(k_1 b + k_2 a + k_1 k_2 r), so the class of a′b′ equals the class of ab, and multiplication is also well-defined.

◦ Then the ring axioms [R1]-[R8] all follow directly from their counterparts in R. The additive identity in R/rR is 0̄, the additive inverse of ā is the class of −a, and the multiplicative identity is 1̄.

• Over Z, we usually work with a specific collection of representatives for the residue classes modulo m, generally the integers 0 through m − 1.

◦ In a general ring, there is not usually a natural choice for residue class representatives.

◦ Or, if there does happen to be a good choice, it is not always obvious what that choice is. (For example, try coming up with a natural choice of residue class representatives for Z[i] modulo 3 + 4i.)

◦ Unfortunately, this lack of an obvious choice makes it somewhat difficult to give concrete examples in situations that require a complete list of representatives. We will later describe ways to find a set of representatives for Z[i] modulo a prime p, and for F[x] modulo an arbitrary polynomial.

4.2.2 Arithmetic in R/rR

• In the ring F[x], where F is a field, we do get a natural collection of residue class representatives arising from the division algorithm.
• Proposition (Residue Classes for F[x]): If R = F[x] and q(x) ∈ R is a polynomial of degree d, then the polynomials in F[x] of degree ≤ d − 1 form a full set of residue class representatives for R/qR.
◦Proof: By the division algorithm, every polynomial is congruent modulo q(x) to some polynomial of degree less than d, namely, to the remainder after dividing by q(x).
◦Conversely, each of these residue classes is distinct, because two distinct polynomials in F[x] of degree less than d cannot be congruent modulo q(x): if they were, their difference would be a multiple of q(x) of degree less than d, but the only such multiple is 0.

• Example: Describe the addition and multiplication in the ring R/qR, where R = R[x] and q(x) = x^2 + 1.
◦From the proposition above, since q has degree 2, the elements of R/qR are of the form a + bx where a, b ∈ R.
◦The addition is simply addition of polynomials: (a + bx) + (c + dx) = (a + c) + (b + d)x.
◦The multiplication is also simply multiplication of polynomials, subject to the relation x^2 + 1 = 0.
◦Thus, in general we can write (a + bx) · (c + dx) = ac + (bc + ad)x + bd x^2 = (ac − bd) + (bc + ad)x.
◦This multiplication should look very familiar: in fact, it is exactly the same as the multiplication of complex numbers (a + bi) · (c + di) = (ac − bd) + (bc + ad)i.
◦There is an obvious reason for this: the ring R/qR is really just the complex numbers C, where instead of using i^2 = −1, we say x^2 = −1. (In the language of algebra, we would say that R/qR and C are isomorphic as rings, meaning that their ring structures are exactly the same: we have just labeled the elements differently.)
◦In particular, since C is a field, we see that R/qR is also a field.

• When F is an infinite field, there will be infinitely many residue classes in R/pR, so except in nice cases it is difficult to write out the multiplication explicitly. However, we can easily construct addition and multiplication tables when F is finite.
• Example: With R = F2[x], here are the addition and multiplication tables for R/pR with p = x^2:

  +    | 0    1    x    x+1        ·    | 0    1    x    x+1
  0    | 0    1    x    x+1        0    | 0    0    0    0
  1    | 1    0    x+1  x          1    | 0    1    x    x+1
  x    | x    x+1  0    1          x    | 0    x    0    x
  x+1  | x+1  x    1    0          x+1  | 0    x+1  x    1

◦Notice that this ring has a zero divisor (namely x), and that the elements 1 and x + 1 are units. Notice also that p(x) = x^2 is reducible in R, since it has the factorization x^2 = x · x.

• Example: With R = F2[x], here are the addition and multiplication tables for R/pR with p = x^2 + x + 1:

  +    | 0    1    x    x+1        ·    | 0    1    x    x+1
  0    | 0    1    x    x+1        0    | 0    0    0    0
  1    | 1    0    x+1  x          1    | 0    1    x    x+1
  x    | x    x+1  0    1          x    | 0    x    x+1  1
  x+1  | x+1  x    1    0          x+1  | 0    x+1  1    x

◦Notice that this ring is a field, since every nonzero residue class is a unit. Observe also that the polynomial p(x) = x^2 + x + 1 is irreducible in R, since it has no roots.

• Example: With R = Z[i], here are the addition and multiplication tables for R/rR with r = 2 + i:

  +   | 0    i    −1   −i   1       ·   | 0    i    −1   −i   1
  0   | 0    i    −1   −i   1       0   | 0    0    0    0    0
  i   | i    1    −i   0    −1      i   | 0    −1   −i   1    i
  −1  | −1   −i   i    1    0       −1  | 0    −i   1    i    −1
  −i  | −i   0    1    −1   i       −i  | 0    1    i    −1   −i
  1   | 1    −1   0    i    −i      1   | 0    i    −1   −i   1

◦It is less obvious that these elements do give representatives of all of the residue classes: this follows because any possible remainder x upon dividing by 2 + i must have N(x) ≤ (1/2)N(2 + i) = 5/2, so the only possible remainders are the elements of norm 0, 1, and 2: namely, 0, ±1, ±i, and ±1 ± i. The last four are all equivalent to the first five, since, for example, 1 + i ≡ −1 (mod 2 + i) and 1 − i ≡ i (mod 2 + i), and the first five all yield distinct residue classes since no two of them are congruent modulo 2 + i.
◦Thus, there are 5 residue classes modulo 2 + i: [0], [i], [−1], [−i], and [1].
◦Notice that this ring is a field, since every nonzero residue class is a unit. As we have also shown previously, 2 + i is irreducible in R.
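Tables like the F2[x] examples above are easy to generate by machine. Below is a sketch in Python; the bitmask encoding (a polynomial over F2 stored as an integer whose bits are its coefficients, so x^2 + x + 1 is 0b111) is our own choice, not from the text:

```python
# Generate the multiplication table of F2[x] modulo p (a sketch).

def mulmod_f2(a, b, p):
    """Multiply a and b as polynomials over F2 (carry-less product),
    then reduce modulo p.  All polynomials are coefficient bitmasks."""
    prod = 0
    while b:
        if b & 1:
            prod ^= a          # add a * (current power of x)
        a <<= 1                # multiply a by x
        b >>= 1
    # reduce: cancel the leading bit while deg(prod) >= deg(p)
    while prod.bit_length() >= p.bit_length():
        prod ^= p << (prod.bit_length() - p.bit_length())
    return prod

p = 0b111                      # x^2 + x + 1
table = [[mulmod_f2(a, b, p) for b in range(4)] for a in range(4)]
# row for x (encoded 0b10): x*x = x+1 and x*(x+1) = 1, as in the table above
print(table[2])                # [0, 2, 3, 1]
```

Replacing `p` by `0b100` (that is, x^2) reproduces the first table, zero divisor and all.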
4.2.3 Units and Zero Divisors in R/rR

• As suggested by the examples above, and also by the analogies between Z/mZ and R/rR, we can characterize the units and zero divisors in R/rR:

• Proposition (Units in R/rR): If R is a Euclidean domain, an element s ∈ R is a unit in R/rR if and only if r and s are relatively prime, and an element s ∈ R is a zero divisor in R/rR whenever s ̸≡ 0 (mod r) and r and s are not relatively prime.
◦Proof: If r and s are relatively prime, then since R is a Euclidean domain, there exist a, b ∈ R such that ar + bs = 1. Then, modulo r, we have [b] · [s] = [1], meaning that s is a unit in R/rR.
◦Conversely, suppose that s is a unit in R/rR: then there exists some b such that [b] · [s] = [1] in R/rR.
◦This means there exists some a ∈ R with bs = 1 − ar, which is to say, with ar + bs = 1. Then since any gcd of r and s must divide ar + bs, we conclude that any gcd must be a unit. Since all gcds are associates, we conclude that 1 is a gcd of r and s.
◦For the second statement, if s is a zero divisor then it cannot be a unit, so by what we just showed, r and s cannot be relatively prime. Conversely, suppose that d is a greatest common divisor of r and s and d is not a unit: then s · (r/d) = (s/d) · r ≡ 0 (mod r), and since r/d is not zero modulo r (because d is not a unit), this shows that s is a zero divisor.

• Just as in Z, the proof of the result above gives us a procedure for computing the inverse of a unit u in R/rR (namely, by using the Euclidean algorithm to write 1 as a linear combination of u and r).
◦There is an additional minor wrinkle: the result of the Euclidean algorithm may yield a gcd that is not 1 but rather some other unit of R. In such a case we need only scale both sides of the resulting linear combination by the inverse of that unit to obtain a linear combination equal to 1.

• Example: In Z[i], show that 7 − 2i is a unit modulo 11 + 8i and find its multiplicative inverse.
◦We apply the Euclidean algorithm:
11 + 8i = (1 + i)(7 − 2i) + (2 + 3i)
7 − 2i = (1 − 2i)(2 + 3i) + (−1 − i)
2 + 3i = −2(−1 − i) + i
−1 − i = (−1 + i)(i)
The last nonzero remainder, i, is a greatest common divisor. Since i is associate to 1, we see that 7 − 2i is a unit modulo 11 + 8i.
◦To compute the inverse we solve for the remainders:
2 + 3i = 1 · (11 + 8i) + (−1 − i)(7 − 2i)
−1 − i = (7 − 2i) − (1 − 2i)(2 + 3i) = (−1 + 2i)(11 + 8i) + (4 − i)(7 − 2i)
i = (2 + 3i) + 2(−1 − i) = (−1 + 4i)(11 + 8i) + (7 − 3i)(7 − 2i)
and so i = (−1 + 4i)(11 + 8i) + (7 − 3i)(7 − 2i). Multiplying by −i yields 1 = (4 + i)(11 + 8i) + (−3 − 7i)(7 − 2i), and then reducing modulo 11 + 8i yields (−3 − 7i) · (7 − 2i) ≡ 1 (mod 11 + 8i).
◦Hence the inverse of 7 − 2i modulo 11 + 8i is −3 − 7i.

• Example: For R = F5[x], find the multiplicative inverse of x^2 + 2 modulo x^3 + 1.
◦First we apply the Euclidean algorithm in R:
x^3 + 1 = x · (x^2 + 2) + (3x + 1)
x^2 + 2 = (2x + 1) · (3x + 1) + 1
3x + 1 = (3x + 1) · 1
and so the gcd of x^2 + 2 and x^3 + 1 is 1. Hence x^2 + 2 is indeed a unit modulo x^3 + 1.
◦To compute the inverse we solve for the remainders:
3x + 1 = (x^3 + 1) − x · (x^2 + 2)
1 = (x^2 + 2) − (2x + 1)(3x + 1) = (2x^2 + x + 1)(x^2 + 2) − (2x + 1)(x^3 + 1)
and thus, reducing both sides modulo x^3 + 1, we see that the multiplicative inverse of x^2 + 2 modulo x^3 + 1 is 2x^2 + x + 1.

• One of the other nice properties of Z/mZ is that if p is prime, then Z/pZ is actually a field. This remains true if we replace Z with an arbitrary Euclidean domain:

• Proposition (R/pR and Fields): If R is a Euclidean domain, the element p ∈ R is a prime element (equivalently, irreducible) if and only if R/pR is a field.
◦Proof: Suppose p is a prime element. If p | a, then a ≡ 0 (mod p), so [a] = [0]. Now suppose that p does not divide a.
◦Then because p is prime (hence irreducible), the only possible common divisors of a and p are units. This means a and p are relatively prime, so a is a unit modulo p. Thus, every nonzero element in R/pR is a unit, so R/pR is a field.
◦Conversely, suppose R/pR is a field. If a, b ∈ R are such that p | ab, then ab ≡ 0 (mod p).
◦Since R/pR is a field, it has no zero divisors, meaning that a ≡ 0 (mod p) or b ≡ 0 (mod p), which is to say, p | a or p | b. Thus, p is a prime element of R.
◦Remark: This proposition is not true if R is only assumed to be a general commutative ring with 1, or even a unique factorization domain. (The unique factorization domain R = C[x, y] is a counterexample: the element x is prime, but the ring R/xR is not a field, since y has no multiplicative inverse there.) The correct equivalence in that case is: the element p ∈ R is a prime element if and only if R/pR is an integral domain.

• The above proposition gives us a very easy way to construct new fields, which we will explore shortly.

4.2.4 The Chinese Remainder Theorem

• Another foundational result of arithmetic in Z was the Chinese Remainder Theorem. This result generalizes to arbitrary Euclidean domains, with essentially the same statement.

• We first start with the analogous proposition on solving a single linear congruence.

• Proposition (Linear Congruences): Let R be a Euclidean domain, with a, b, r ∈ R, and let d be any gcd of a and r. Then the equation ax ≡ b (mod r) has a solution for x ∈ R if and only if d | b. In this case, if a = a′d, b = b′d, and r = r′d, then ax ≡ b (mod r) is equivalent to a′x ≡ b′ (mod r′), and the solution is x ≡ (a′)^(−1) b′ (mod r′).
◦The proof is the same as over Z.
◦Proof: If x is a solution to the congruence ax ≡ b (mod r), then there exists an s ∈ R with ax − rs = b. Then since d divides the left-hand side, it must divide b.
◦Now if we set a′ = a/d, b′ = b/d, and r′ = r/d, our original equation becomes a′dx ≡ b′d (mod r′d).
◦Solving this equation is equivalent to solving a′x ≡ b′ (mod r′), by one of our properties of congruences.
◦But since a′ and r′ are relatively prime, a′ is a unit modulo r′, so we can simply multiply by its inverse to obtain x ≡ b′ · (a′)^(−1) (mod r′).
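The inverse computations above (write a unit of R as the last nonzero remainder, then back-substitute) can be sketched in code. Here is a minimal extended Euclidean algorithm for Z[i], again using Python complex numbers with integer parts; the helper names and the nearest-integer division are our own choices:

```python
# Extended Euclidean algorithm in Z[i] (a sketch).

def gdiv(a, b):
    """Nearest-Gaussian-integer quotient of a/b."""
    q = a * b.conjugate() / (b.real**2 + b.imag**2)
    return complex(round(q.real), round(q.imag))

def gxgcd(a, b):
    """Return (g, u, v) with u*a + v*b = g, where g is a gcd in Z[i]."""
    r0, r1 = a, b
    u0, u1 = 1+0j, 0j
    v0, v1 = 0j, 1+0j
    while r1 != 0:
        q = gdiv(r0, r1)
        r0, r1 = r1, r0 - q*r1
        u0, u1 = u1, u0 - q*u1
        v0, v1 = v1, v0 - q*v1
    return r0, u0, v0

g, u, v = gxgcd(7-2j, 11+8j)
# g has norm 1 (a unit), so 7-2i is invertible modulo 11+8i; divide out
# the unit (for a unit g, its inverse is its conjugate):
inv = u * g.conjugate()
# check: inv*(7-2i) - 1 is divisible by 11+8i, whose norm is 185
z = inv * (7-2j) - 1
q = z * (11-8j) / 185
print(abs(g) == 1 and q.real.is_integer() and q.imag.is_integer())  # True
```

The representative `inv` may differ from the hand computation by a multiple of 11 + 8i, so the divisibility check is the right test rather than comparing values directly.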
• Example: Solve the congruence (7 + i)x ≡ 3 − i modulo 8 − 9i in Z[i].
◦Using the Euclidean algorithm we can verify that 7 + i and 8 − 9i are relatively prime:
8 − 9i = (1 − i)(7 + i) + (−3i)
7 + i = (2i)(−3i) + (1 + i)
−3i = (−2 − 2i)(1 + i) + i
1 + i = (1 − i)(i)
and so i, and hence 1, is a gcd.
◦By solving for the remainders we can write 1 as a linear combination explicitly as 1 = (11 − i)(7 + i) + (−4 − 5i)(8 − 9i). Hence the inverse of 7 + i modulo 8 − 9i is 11 − i.
◦Multiplying both sides of the original congruence by 11 − i yields x ≡ (11 − i)(7 + i)x ≡ (11 − i)(3 − i) = 32 − 14i ≡ −1 + 5i (mod 8 − 9i), since 32 − 14i = (3 + i)(8 − 9i) + (−1 + 5i). So the solution is x ≡ −1 + 5i (mod 8 − 9i).

• As over Z, the above proposition converts the problem of solving a general system of congruences in the variable x to a system of the form x ≡ ai (mod ri).

• Theorem (Chinese Remainder Theorem): Let R be a Euclidean domain, let r1, r2, ..., rk be pairwise relatively prime elements of R, and let a1, a2, ..., ak be arbitrary elements of R. Then the system
x ≡ a1 (mod r1)
x ≡ a2 (mod r2)
...
x ≡ ak (mod rk)
has a solution x0 ∈ R. Furthermore, x0 is unique modulo r1r2···rk, and the general solution is precisely the residue class of x0 modulo r1r2···rk.
◦The proof is the same as over Z.
◦Proof: Since we may repeatedly convert two congruences into a single one until we are done, by induction it suffices to prove the result for two congruences x ≡ a1 (mod r1) and x ≡ a2 (mod r2).
◦For existence, the first congruence implies x = a1 + kr1 for some k ∈ R; plugging into the second equation then yields a1 + kr1 ≡ a2 (mod r2). Rearranging yields kr1 ≡ a2 − a1 (mod r2). Since by hypothesis r1 and r2 are relatively prime, by our proposition above this congruence has a unique solution for k modulo r2, and hence a solution for x.
◦For uniqueness, suppose x and y are both solutions. Then x − y is 0 modulo r1 and 0 modulo r2, meaning that r1 | (x − y) and r2 | (x − y).
But since r1 and r2 are relatively prime, their product must therefore divide x − y, meaning that x is unique modulo r1r2. Finally, it is obvious that any other element of R congruent to x modulo r1r2 also satisfies the system.

• Example: In R = C[x], solve the system q(x) ≡ 1 (mod x − 1), q(x) ≡ 3 (mod x).
◦Since x − 1 and x are relatively prime polynomials, by the Chinese Remainder Theorem all we have to do is find one polynomial satisfying the system.
◦If we take the solution q(x) = 3 + ax to the second congruence and plug it into the first, we must solve 3 + ax ≡ 1 (mod x − 1).
◦Since 3 + ax ≡ 3 + a (mod x − 1), we can take a = −2.
◦Hence the polynomial q(x) = 3 − 2x is a solution to the system. The general solution is therefore 3 − 2x + x(x − 1) · s(x) for an arbitrary polynomial s(x) ∈ R. Equivalently, the solution is q(x) ≡ 3 − 2x (mod x^2 − x).

4.2.5 Orders, Euler's Theorem, Fermat's Little Theorem

• We can also study powers in R/rR in the same way as in Z/mZ, with the only caveat being that some elements may not have a finite order:

• Definition: If R is a commutative ring with 1 and u is a unit of R, then the smallest k > 0 such that u^k = 1 is called the order of u. (If there exists no such k, then we say u has infinite order.)
◦Example: The element −1 has order 2 in Z (and also in Q, R, and C), and the element i has order 4 in Z[i] and in C.
◦Example: The element 2 does not have finite order in R, since no positive power of 2 is equal to 1.

• All of our properties of order hold in general commutative rings with 1:

• Proposition (Properties of Orders): Suppose R is a commutative ring with 1 and u is a unit in R.
1. If u^n = 1 for some n > 0, then the order of u is finite and divides n.
2. If u has order k, then u^n has order k/gcd(n, k). In particular, if n and k are relatively prime, then u^n also has order k.
3. If u^n = 1 and u^(n/p) ≠ 1 for every prime divisor p of n, then u has order n.
4.
If u has order k and w has order l, where k and l are relatively prime, then uw has order kl.
◦Proof: The proofs are the same as in Z/mZ.

• One of our foundational results in Z/mZ was Euler's theorem. There is a natural generalization of the Euler ϕ-function and of Euler's theorem that holds in the case where there are finitely many units in R/rR.

• Theorem (Generalization of Euler's Theorem): If R is a commutative ring with 1 and r ∈ R, let ϕ(r) denote the number of units in the ring R/rR, assuming this number is finite. Then if a is any unit in R/rR, we have a^ϕ(r) ≡ 1 (mod r).
◦The proof is the same as over Z/mZ: the point is that if a is a unit and u1, ..., uk are the units in R/rR, then the elements au1, ..., auk are the same as u1, ..., uk, just in a different order.
◦Proof: Let the set of all units in R/rR be u1, u2, ..., u_ϕ(r), and consider the elements a·u1, a·u2, ..., a·u_ϕ(r) in R/rR: we claim that they are simply the elements u1, u2, ..., u_ϕ(r) again (possibly in a different order).
◦Since there are ϕ(r) elements listed and they are all still units, it is enough to verify that they are all distinct.
◦So suppose a·ui ≡ a·uj (mod r). Since a is a unit, we may multiply by a^(−1): this gives ui ≡ uj (mod r), which forces i = j.
◦Hence, modulo r, the elements a·u1, a·u2, ..., a·u_ϕ(r) are simply u1, u2, ..., u_ϕ(r) in some order.
◦Therefore we have (a·u1)(a·u2)···(a·u_ϕ(r)) ≡ u1·u2···u_ϕ(r) (mod r), and cancelling the unit u1·u2···u_ϕ(r) from both sides yields a^ϕ(r) ≡ 1 (mod r), as desired.

• Although this is the reverse of our approach over Z, we can obtain Fermat's little theorem quite easily using Euler's theorem.

• Corollary (Generalization of Fermat's Little Theorem): If R is a Euclidean domain, p ∈ R is a prime element, and the number of elements in R/pR is n, then a^n ≡ a (mod p) for every a ∈ R.
◦Proof: Since R/pR is a field, the only nonunit is zero, so ϕ(p) = n − 1.
◦By the generalization of Euler's theorem, we know that a^ϕ(p) ≡ 1 (mod p) for every a that is a unit modulo p, so a^n = a^(ϕ(p)+1) ≡ a (mod p) for such a.
◦Since a^n ≡ a (mod p) is also true when p | a, we see that it holds for every a ∈ R.

• Example: Verify the result of Euler's theorem for the element x in R/pR, where R = F3[x] and p = x^2 + x + 2.
◦It is straightforward to see that p = x^2 + x + 2 is irreducible in F3[x], so R/pR is a field.
◦We also know that the residue classes have the form a + bx for a, b ∈ F3. Thus, R/pR has 9 elements, 8 of which are units.
◦To verify Euler's theorem we need to evaluate x^8, which we can do using successive squaring: x^2 = 2x + 1, x^4 = (2x + 1)^2 = 2, and then x^8 = 2^2 = 1.
◦Thus x^8 ≡ 1 (mod p), as dictated by Euler's theorem.

4.3 Arithmetic in F[x]

• In this section, we use all of the ring-theoretic machinery we have developed to study the arithmetic of the polynomial ring F[x].
◦We will first discuss polynomials as functions and use the results to give ways to determine when polynomials of small degree are irreducible.
◦Then we will discuss some of the applications of modular arithmetic in this ring to the construction of finite fields, and (in particular) establish the analogue of the Prime Number Theorem in Fp[x].
◦We will also use the arithmetic of F[x] to establish the existence of primitive roots in finite fields, and also to characterize the moduli m for which there exists a primitive root.

4.3.1 Polynomial Functions, Roots of Polynomials

• In elementary algebra, polynomials are examples of functions. We would like to extend this idea of plugging values in to a general polynomial in F[x], because this allows us to glean some information about potential factorizations.

• Definition: If F is a field and p = a0 + a1x + ··· + an x^n is an element of F[x], for any r ∈ F we define the value p(r) to be the element a0 + a1r + ··· + an r^n ∈ F.
◦It is straightforward to see from the definition that if p and q are any polynomials in F[x] and r is any element of F, then (p + q)(r) = p(r) + q(r) and (pq)(r) = p(r)q(r). Thus, evaluation at an element of F respects the addition and multiplication structure of the polynomial ring.
◦Example: If p = 1 + x^2 in C[x], then p(1) = 1 + 1^2 = 2, and p(i) = 1 + i^2 = 0.
◦Example: If p = 1 + x^2 in F5[x], then p(0) = 1, p(1) = 2, p(2) = 0, p(3) = 0, and p(4) = 2.
◦In this way, we can view a polynomial p ∈ F[x] as a function p : F → F, where p(r) = a0 + a1r + ··· + an r^n.
◦Warning: The traditional polynomial notation p(x) is somewhat ambiguous: we may be considering p(x) as a ring element in F[x] (in which case x represents an indeterminate), or we may be viewing it as a function from F to F (in which case x represents the variable of the function).

• Example: If p = x^2 + x in F2[x], observe that p(0) = p(1) = 0.
◦Thus, although p is not the zero polynomial in F2[x] (since it has degree 2), as a function from F2 to F2 it is the identically zero function!
◦More generally, if F is any finite field with elements r1, r2, ..., rn, then the polynomial p(x) = (x − r1)(x − r2)···(x − rn) is the identically zero function from F to F.
◦Thus, in general, we cannot always uniquely specify a polynomial p ∈ F[x] by describing its behavior as a function p : F → F.

• To begin our study of polynomial functions, we start with a pair of observations that are likely familiar from elementary algebra:

• Proposition (Remainder/Factor Theorem): Let F be a field. If p ∈ F[x] is a polynomial and r ∈ F, then the remainder upon dividing p(x) by x − r is p(r). In particular, x − r divides p(x) if and only if p(r) = 0. (In this case we say r is a zero or a root of p(x).)
◦Proof: Suppose p(x) = a0 + a1x + ··· + an x^n. Observe first that x^k − r^k = (x − r)(x^(k−1) + x^(k−2) r + ··· + x r^(k−2) + r^(k−1)), so in particular, x − r divides x^k − r^k for all k.
◦Now we simply write p(x) − p(r) = Σ_{k=0}^{n} a_k (x^k − r^k), and since x − r divides each term in the sum, it divides p(x) − p(r).
◦Since p(r) is a constant, it is therefore the remainder after dividing p(x) by x − r. The other statement is immediate from the uniqueness of the remainder in the division algorithm.

• We can also bound the number of roots that a polynomial can have:

• Proposition (Number of Roots): Let F be a field. If p ∈ F[x] is a polynomial of degree d, then p has at most d distinct roots in F.
◦Proof: We induct on the degree d. For d = 1, the polynomial is of the form a0 + a1x with a1 ≠ 0, which has exactly one root, namely −a0/a1.
◦Now suppose the result holds for all polynomials of degree ≤ d, and let p be a polynomial of degree d + 1.
◦If p has no roots we are obviously done, so suppose otherwise and let p(r) = 0. We can then factor to write p(x) = (x − r)q(x) for some polynomial q(x) of degree d.
◦By the induction hypothesis, q(x) has at most d roots: then p(x) has at most d + 1 roots, because (a − r)q(a) = 0 only when a = r or q(a) = 0 (since F is a field).

• The above results, while seemingly obvious, can fail spectacularly if the coefficient ring is not a field. Here are some especially distressing examples:
◦The quadratic polynomial q(x) = x^2 − 1 visibly has four roots modulo 8, namely x = 1, 3, 5, 7. Furthermore, q(x) can be factored in two different ways: as (x − 1)(x − 7) and as (x − 3)(x − 5).
◦The linear polynomial q(x) = x, despite having degree 1, is not irreducible modulo 6: it can be written as the product (2x + 3)(3x + 2). Furthermore, q(x) = x has one zero (namely x = 0), even though its two factors 2x + 3 and 3x + 2 each have no zeroes modulo 6.

• In general, it is not easy to determine when an arbitrary polynomial is irreducible. In low degree, this task can be done by examining all possible factorizations.
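The "distressing examples" above are easy to confirm by brute force. A quick sketch:

```python
# Over Z/8Z, the quadratic x^2 - 1 has four roots:
roots_mod8 = [a for a in range(8) if (a*a - 1) % 8 == 0]
print(roots_mod8)                     # [1, 3, 5, 7]

# Over Z/6Z, (2x+3)(3x+2) = 6x^2 + 13x + 6 reduces coefficient-by-coefficient
# to 0*x^2 + 1*x + 0, i.e. to the polynomial x:
print([6 % 6, 13 % 6, 6 % 6])         # [0, 1, 0]
```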
The following result is frequently useful:

• Proposition (Polynomials of Small Degree): If F is a field and q(x) ∈ F[x] has degree 2 or 3 and has no roots in F, then q(x) is irreducible.
◦Proof: If q(x) = a(x)b(x) with a and b of positive degree, taking degrees shows that deg(q) = deg(a) + deg(b); since deg(q) is 2 or 3, one of a and b must have degree 1. Then its root is also a root of q(x). Taking the contrapositive gives the desired statement.
◦Example: Over R, the polynomial x^2 + x + 11 has no roots (since it is always positive), so it is irreducible.
◦Example: Over F5, the polynomial q(x) = x^3 + x + 1 has no roots, since q(0) = 1, q(1) = 3, q(2) = 1, q(3) = 1, and q(4) = 4. Thus, q(x) is irreducible in F5[x].

• For polynomials of larger degree, determining irreducibility can be a much more difficult task. For certain particular fields, we can say more about the structure of the irreducible polynomials.

• Theorem (Fundamental Theorem of Algebra): Every polynomial of positive degree in C[x] has at least one root. Therefore, the irreducible polynomials in C[x] are precisely the polynomials of degree 1, and so every polynomial in C[x] factors into a product of degree-1 polynomials.
◦The first statement of this theorem is a standard result from analysis over the complex numbers, and we take it for granted.
◦To deduce the second statement from the first, observe that if p(x) is any complex polynomial of degree larger than 1, then by assumption it has at least one root r in C, so we can write p(x) = (x − r)q(x) for some other polynomial q(x): then p is reducible.
◦Therefore, the irreducible polynomials in C[x] are precisely the polynomials of degree 1. The final statement follows from the characterization of irreducible polynomials, because every polynomial is a product of irreducibles.
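The root test from the small-degree proposition is easy to mechanize over Fp. A sketch (the helper name is ours):

```python
# Root test for polynomials over Fp (valid as an irreducibility test
# only in degree 2 or 3, per the proposition above).

def has_root_mod_p(coeffs, p):
    """coeffs[i] is the coefficient of x^i; try every a in Fp."""
    return any(sum(c * a**i for i, c in enumerate(coeffs)) % p == 0
               for a in range(p))

# q(x) = x^3 + x + 1 over F5: no roots, hence irreducible (degree 3)
print(has_root_mod_p([1, 1, 0, 1], 5))   # False

# contrast: x^3 + x over F5 has the root 0
print(has_root_mod_p([0, 1, 0, 1], 5))   # True
```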
• Another property that we can fruitfully study over a general field is the presence of repeated factors:
◦Example: Over C, the polynomial x^3 + x^2 − x − 1 factors into irreducibles as (x − 1)(x + 1)^2, which has the repeated factor x + 1.
◦Example: Over F2, the polynomial x^4 + x^2 + 1 factors into irreducibles as (x^2 + x + 1)^2, which has the repeated factor x^2 + x + 1.

• As a first goal, we can give a necessary condition for when a polynomial has repeated roots.
◦Recall from calculus that if a polynomial q(x) has a double root at r, then q(r) and q′(r) are both zero. By the factor theorem, this is equivalent to saying that q and q′ are both divisible by x − r.
◦We can formulate a similar test over an arbitrary field using a purely algebraic definition of the derivative:

• Definition: If q(x) = Σ_{k=0}^{n} a_k x^k is a polynomial in F[x], its derivative is the polynomial q′(x) = Σ_{k=1}^{n} k a_k x^(k−1).
◦Example: In C[x], the derivative of x^6 − 4x^2 + x is 6x^5 − 8x + 1.
◦Example: In Fp[x], the derivative of x^(p^2) − x is p^2 x^(p^2 − 1) − 1 = −1. Notice here that although the degree of the original polynomial is p^2, the degree of its derivative is 0.
◦It is a straightforward calculation to verify that the standard differentiation rules apply: (f + g)′(x) = f′(x) + g′(x) and (fg)′(x) = f′(x)g(x) + f(x)g′(x). (For the product rule, the easiest method is to check it for products of monomials and then apply the distributive law, since both sides are additive.)

• Proposition (Repeated Factors): Let F be a field and q ∈ F[x]. Then r is a repeated root of q if and only if q(r) = q′(r) = 0. More generally, q has a repeated factor if and only if q and q′ are not relatively prime.
◦Proof: First suppose that q(x) has a repeated root r: then q(x) = (x − r)^2 s(x) for some s(x) ∈ F[x].
◦Taking the derivative yields q′(x) = 2(x − r)s(x) + (x − r)^2 s′(x) = (x − r) · [2s(x) + (x − r)s′(x)]. Thus, q′ is also divisible by x − r in F[x], so by the factor theorem we conclude that q(r) = q′(r) = 0.
◦Conversely, if q(r) = q′(r) = 0, then by the factor theorem x − r divides q(x), so we may write q(x) = (x − r)a(x). Then by the product rule we see that q′(x) = a(x) + (x − r)a′(x), so q′(r) = a(r). Thus a(r) = 0, and so x − r divides a(x): then q(x) is divisible by (x − r)^2, so r is a repeated root.
◦For the second statement, any root³ of a common factor of q and q′ is a repeated root (by the above), and conversely any repeated root of q will yield a nontrivial common factor of q and q′ in F[x].

• Since we can efficiently compute the gcd of q(x) and q′(x) using the Euclidean algorithm in F[x], we can quickly determine whether a given polynomial has a repeated factor.

• Example: Determine whether q(x) = x^4 + 3x^3 + 3x^2 + 3x + 1 has a repeated factor in F5[x].
◦We have q′(x) = 4x^3 + 4x^2 + x + 3.
◦Now we perform the Euclidean algorithm:
x^4 + 3x^3 + 3x^2 + 3x + 1 = (4x + 3)(4x^3 + 4x^2 + x + 3) + (2x^2 + 3x + 2)
4x^3 + 4x^2 + x + 3 = (2x + 4)(2x^2 + 3x + 2)
Since 2x^2 + 3x + 2 is a greatest common divisor (it is associate to the monic polynomial x^2 + 4x + 1), we see that q(x) has a repeated factor.
◦Indeed, if we divide q(x) by x^2 + 4x + 1, we will see that q(x) = (x^2 + 4x + 1)^2.

4.3.2 Finite Fields

• We can fruitfully apply our results to the case where F = Fp = Z/pZ is a finite field with p elements:

• Theorem (Finite Fields): If q(x) ∈ Fp[x] is an irreducible polynomial of degree d, then the ring R/qR, where R = Fp[x], is a finite field with p^d elements.
◦Proof: We simply invoke our previous results: the residue classes in the ring R/qR are given by the polynomials in Fp[x] of degree ≤ d − 1.

³We note here that the common factor may not have any roots in F, in which case one must (in general) instead apply this argument in a larger field K (containing F) in which this polynomial does have a root. Such an extension always exists, and in fact it can be constructed using polynomial modular arithmetic.
◦Such a polynomial has the form a0 + a1x + ··· + a_(d−1) x^(d−1), where the coefficients ai are arbitrary elements of Fp. There are clearly p^d such polynomials.
◦Furthermore, since q(x) is irreducible, R/qR is a field. Hence R/qR is a finite field with p^d elements, as claimed.

• Example: Show that the ring R/qR, where R = F2[x] and q(x) = x^2 + x + 1, is a field with 4 elements.
◦This follows because x^2 + x + 1 is irreducible modulo 2: if it had a nontrivial factorization, then since it is a polynomial of degree 2, it would necessarily have a root (which it does not).
◦We showed this fact explicitly earlier when we wrote out the addition and multiplication tables for this field.

• Example: Show that the ring R/qR, where R = F3[x] and q(x) = x^2 + 1, is a field with 9 elements.
◦This follows because x^2 + 1 is irreducible modulo 3: if it had a nontrivial factorization, then since it is a polynomial of degree 2, it would necessarily have a root (which it does not).
◦Explicitly, the nine elements of this field are 0, 1, 2, x, x + 1, x + 2, 2x, 2x + 1, and 2x + 2. Addition is taken with coefficients modulo 3, and multiplication is performed under the convention that x^2 + 1 = 0 (i.e., x^2 = 2, since coefficients are taken modulo 3).
◦Here is the multiplication table for this field:

  ·     | 0  1     2     x     x+1   x+2   2x    2x+1  2x+2
  0     | 0  0     0     0     0     0     0     0     0
  1     | 0  1     2     x     x+1   x+2   2x    2x+1  2x+2
  2     | 0  2     1     2x    2x+2  2x+1  x     x+2   x+1
  x     | 0  x     2x    2     x+2   2x+2  1     x+1   2x+1
  x+1   | 0  x+1   2x+2  x+2   2x    1     2x+1  2     x
  x+2   | 0  x+2   2x+1  2x+2  1     x     x+1   2x    2
  2x    | 0  2x    x     1     2x+1  x+1   2     2x+2  x+2
  2x+1  | 0  2x+1  x+2   x+1   2     2x    2x+2  x     1
  2x+2  | 0  2x+2  x+1   2x+1  x     2     x+2   1     2x

• Example: Construct a finite field with 8 elements.
◦From our discussion, since 8 = 2^3, such a field can be obtained as R/qR where R = F2[x] and q is an irreducible polynomial in R of degree 3.
◦It is easy to see that q(x) = x^3 + x + 1 is irreducible in F2[x] since it has no roots, so R/qR is a finite field with 8 elements.
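The 8-element construction above can be checked with the same bitmask trick used earlier for F2[x] (a sketch; the encoding of polynomials as coefficient bitmasks is our own choice):

```python
# Verify that F2[x] modulo x^3 + x + 1 is a field: every nonzero residue
# must have a multiplicative inverse.

def mulmod_f2(a, b, p):
    """Carry-less product of a and b over F2, reduced modulo p."""
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    while prod.bit_length() >= p.bit_length():
        prod ^= p << (prod.bit_length() - p.bit_length())
    return prod

p = 0b1011                                   # x^3 + x + 1
is_field = all(any(mulmod_f2(a, b, p) == 1 for b in range(1, 8))
               for a in range(1, 8))
print(is_field)                              # True
```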
• We can now use Fermat's little theorem in these finite fields to extract interesting and useful information.
◦To start, observe that by (the original) Fermat's little theorem, a^p ≡ a (mod p). Thus, if q(x) = x^p − x, then q(a) = 0 for every a ∈ Fp.
◦In other words, this polynomial x^p − x has the rather strange property that its value is always zero, yet it is not the zero polynomial.

• Proposition (Factorization of x^p − x): The factorization of x^p − x in Fp[x] is x^p − x = ∏_{a∈Fp} (x − a).
◦Proof: As noted above, q(x) = x^p − x is such that q(a) = 0 for every a ∈ Fp.
◦Hence, x − a is a divisor of q(x) for every a ∈ Fp.
◦However, because this polynomial has at most p roots, and we have exhibited p roots, the factorization of q(x) must be q(x) = ∏_{a∈Fp} (x − a), since the leading terms agree.

• Another immediate application of this factorization is an easy proof of Wilson's theorem.
◦By dividing through by x, we see that x^(p−1) − 1 = ∏_{a∈Fp, a≠0} (x − a).
◦Now examine the constant term of the product: it is (−1)^(p−1) ∏_{a∈Fp, a≠0} a = (−1)^(p−1) · (p − 1)!.
◦But (modulo p) the constant term is also equal to −1, so we deduce (p − 1)! ≡ (−1)^(p−2) ≡ −1 (mod p).

• As we observed above, the polynomial x^p − x has a nice factorization in Fp[x]. Let us now consider the factorization of the polynomial x^(p^n) − x in Fp[x].
◦Example: For n = 2 and p = 2, we find the irreducible factorization x^4 − x = x(x + 1)(x^2 + x + 1).
◦Example: For n = 3 and p = 2, we find the irreducible factorization x^8 − x = x(x + 1)(x^3 + x^2 + 1)(x^3 + x + 1).
◦Example: For n = 4 and p = 2, we find the irreducible factorization x^16 − x = x(x + 1)(x^2 + x + 1)(x^4 + x^3 + 1)(x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1).
◦Example: For n = 2 and p = 3, we find the irreducible factorization x^9 − x = x(x + 1)(x + 2)(x^2 + 1)(x^2 + x + 2)(x^2 + 2x + 2).
◦Example: For n = 2 and p = 5, the list of irreducible factors of x^25 − x is x, x + 1, x + 2, x + 3, x + 4, x^2 + 2, x^2 + 3, x^2 + x + 1, x^2 + x + 2, x^2 + 2x + 3, x^2 + 2x + 4, x^2 + 3x + 3, x^2 + 3x + 4, x^2 + 4x + 1, and x^2 + 4x + 2.
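The n = 2, p = 2 factorization above can be checked by multiplying the factors back out with a carry-less product (bitmask encoding again; a sketch):

```python
# x * (x+1) * (x^2+x+1) should equal x^4 - x = x^4 + x over F2.

def clmul(a, b):
    """Carry-less (XOR) product: polynomial multiplication over F2."""
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    return prod

product = clmul(clmul(0b10, 0b11), 0b111)    # x * (x+1) * (x^2+x+1)
print(product == 0b10010)                    # True: x^4 + x
```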
• We notice (especially in the p = 5 example) that the irreducible factors all appear to be of small degree, and that there are no repeated factors.
◦In fact, it seems that the factorization of x^(p^n) − x over Fp contains all of the irreducible polynomials of degree dividing n. To prove this we first require a lemma:

• Lemma: If p is a prime number, then the greatest common divisor of p^n − 1 and p^d − 1 is p^gcd(n,d) − 1.
◦Proof: Use the division algorithm to write n = qd + r, and let a = p^r (p^((q−1)d) + p^((q−2)d) + ··· + p^d + 1).
◦Then it is not hard to see by expanding the products that p^n − 1 = (p^d − 1)a + (p^r − 1).
◦Therefore, by properties of gcds, we see that gcd(p^n − 1, p^d − 1) = gcd(p^d − 1, p^r − 1): but this means we can perform the Euclidean algorithm on the exponents without changing the gcd. The end result is p^gcd(n,d) − 1, so this is the desired gcd.

• Theorem (Factorization of x^(p^n) − x): The polynomial x^(p^n) − x factors in Fp[x] as the product of all monic irreducible polynomials over Fp of degree dividing n.
◦We prove the result in the following way: first, we show that there are no repeated factors. Second, we show that every irreducible polynomial of degree dividing n divides q(x). Finally, we show that no other irreducible polynomial can divide q(x).
◦Proof: Let q(x) = x^(p^n) − x and R = Fp[x].
◦For the first part, we have q′(x) = p^n x^(p^n − 1) − 1 = −1, so q(x) and q′(x) are relatively prime. Thus, by our earlier results, we know that q(x) has no repeated irreducible factors.
◦For the second part, suppose that s(x) ∈ Fp[x] is an irreducible polynomial of degree d, where n = ad.
◦We know that R/sR is a finite field F having p^d elements, so by Euler's theorem in F, we see that x^(p^d − 1) ≡ 1 (mod s).
◦But, by the lemma above, p^d − 1 divides p^n − 1, so raising to the appropriate power modulo s shows x^(p^n − 1) ≡ 1 (mod s). We conclude that s divides x^(p^n) − x, as desired.
◦For the final part, suppose s(x) ∈ Fp[x] is an irreducible polynomial that divides x^(p^n) − x and has degree d not dividing n.
Since s(x) ≠ x, we can assume s divides x^(p^n − 1) − 1. ◦ As above, R/sR is a finite field F having p^d elements, so by Euler's theorem in F, we see that a^(p^d − 1) ≡ 1 (mod s) for every nonzero a ∈ F. ◦ Since a^(p^n − 1) ≡ 1 (mod s) holds for every nonzero a ∈ F by the above assumptions, writing g = gcd(d, n) we conclude that a^(p^g − 1) ≡ 1 (mod s). ◦ But this is impossible, because q(t) = t^(p^g − 1) − 1 is then a polynomial of degree p^g − 1 < p^d − 1 which has p^d − 1 roots over the field F. ◦ We have shown all three parts, so we are done. • As a corollary, the above theorem allows us to count the number of monic irreducible polynomials in F_p[x] of any particular degree n. ◦ Let f_p(n) be the number of monic irreducible polynomials of exact degree n in F_p[x]. ◦ The theorem above says that p^n = Σ_{d|n} d · f_p(d), since both sides count the total degree of the product of all irreducible polynomials of degree dividing n. ◦ Using this recursion, we can compute the first few values:
n      : 1 | 2           | 3           | 4             | 5           | 6                       | 7           | 8
f_p(n) : p | (p^2 − p)/2 | (p^3 − p)/3 | (p^4 − p^2)/4 | (p^5 − p)/5 | (p^6 − p^3 − p^2 + p)/6 | (p^7 − p)/7 | (p^8 − p^4)/8
◦ For example, the formula says that there are 2 irreducible polynomials of degree 3 over F_2, which there are: x^3 + x^2 + 1 and x^3 + x + 1. ◦ In fact, we can essentially write down a general formula. • Definition: The Möbius function is defined as μ(n) = 0 if n is divisible by the square of any prime, and μ(n) = (−1)^k if n is the product of k distinct primes. In particular, μ(1) = 1. • Proposition (Möbius Inversion): If f(n) is any sequence satisfying a recursive relation of the form g(n) = Σ_{d|n} f(d), for some function g(n), then f(n) = Σ_{d|n} μ(d) g(n/d). ◦ Proof: First, consider the sum Σ_{d|n} μ(d): we claim it is equal to 1 if n = 1 and 0 if n ≠ 1. ∗ To see this, if n = p_1^{a_1} · · · p_k^{a_k}, the only terms that contribute to the sum Σ_{d|n} μ(d) are those values of d = p_1^{b_1} · · · p_k^{b_k} where each b_i is 0 or 1. ∗ If k > 0, then half of these 2^k terms will have μ(d) = 1 and the other half will have μ(d) = −1, so the sum is zero.
∗ Otherwise, k = 0 means that n = 1, in which case the sum is clearly 1. ◦ Now we prove the desired result by (strong) induction. It clearly holds for n = 1, so now suppose the result holds for all k < n. ◦ By hypothesis and induction, we have Σ_{d|n} μ(d) g(n/d) = Σ_{d|n} μ(d) Σ_{d′|(n/d)} f(d′) = Σ_{dd′|n} μ(d) f(d′) = Σ_{d′|n} f(d′) Σ_{d|(n/d′)} μ(d), but this last sum is simply f(n), because Σ_{d|(n/d′)} μ(d) is zero unless n/d′ is equal to 1. • By applying Möbius inversion to our particular function f_p(n), we immediately obtain the following: • Corollary: The number of monic irreducible polynomials of degree n in F_p[x] is f_p(n) = (1/n) Σ_{d|n} p^(n/d) μ(d). ◦ From this corollary, we see that f_p(n) = (1/n) p^n + O(p^(n/2)), where the big-O notation means that the error is of size bounded above by a constant times p^(n/2). • This has the following interesting reinterpretation: let X be the number of polynomials in F_p[x] of degree less than n. Clearly, X = p^n. ◦ Now we ask: of all these X polynomials, how many of them are prime (i.e., irreducible)? ◦ This is simply f_p(n) = (1/n) p^n + O(p^(n/2)) = X / log_p(X) + O(√X). ◦ In other words: the number of primes less than X is equal to X / log_p(X), up to a bounded error term. ◦ Notice how very similar this statement is to the statement of the Prime Number Theorem for the integers Z! This is not a coincidence: in fact, it is the analogue of the Prime Number Theorem for the ring F_p[x]. • It is also fairly easy to show using the formula that f_p(n) > 0 for every prime p and every integer n ≥ 1. As we showed earlier, if q(x) is an irreducible polynomial of degree n in R = F_p[x], then R/qR is a finite field of size p^n. Thus, we also obtain the following: • Corollary: For any prime p and any n, there exists a finite field having p^n elements. ◦ Remark: It can be shown that the number of elements in a finite field must be a prime power (see footnote 4 below), so this result completely characterizes the number of elements that a finite field can have.
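The corollary's formula is straightforward to implement. Here is a short Python sketch (my own code, not from the handout; the function names mobius and f are mine) that computes f_p(n) by Möbius inversion and double-checks the recursion p^n = Σ_{d|n} d · f_p(d):

```python
def mobius(n):
    # Mobius function: 0 if n has a square factor, else (-1)^(number of prime factors)
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # square factor found
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def f(p, n):
    # number of monic irreducible polynomials of degree n over F_p
    total = sum(mobius(d) * p ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return total // n

# the counting identity p^n = sum_{d | n} d * f_p(d) should hold
for p in (2, 3, 5):
    for n in (1, 2, 3, 4, 6):
        assert p ** n == sum(d * f(p, d) for d in range(1, n + 1) if n % d == 0)

assert f(2, 3) == 2  # the two cubics x^3 + x^2 + 1 and x^3 + x + 1 over F_2
```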
4.3.3 Primitive Roots
• We discussed primitive roots previously, but did not categorize when they do or do not exist modulo m. We will now extend our viewpoint slightly and treat primitive roots in arbitrary rings: • Definition: If R is a commutative ring with 1 having finitely many units, an element u ∈ R is a primitive root if every unit of R is some power of u. ◦ More explicitly, if there are n units in R, then an element is a primitive root precisely when its order is n. ◦ Example: If R is F_2[x] modulo x^2 + x + 1, which we have previously established is a field, then the elements x and x + 1 are primitive roots in R, since R has 3 units and each element has order 3 (their orders divide 3 by Euler's theorem, and neither element has order 1). ◦ Example: If R is F_3[x] modulo x^2 + 1, which is also a field, then the element x + 1 is a primitive root in R, since R has 8 units and x + 1 has order 8 (its order divides 8 by Euler's theorem, and (x + 1)^4 = 2 so its order does not divide 4). • Our first goal is to prove that every finite field has a primitive root. To do so we require the following preliminary fact: • Proposition: Let R be a commutative ring with 1 having finitely many units. If M is the maximal order among all units in R, then the order of every unit divides M. ◦ Proof: Suppose u has order M, and let w be any other unit, of order k. ◦ Suppose k does not divide M. Then there is some prime q which occurs to a higher power q^f in the factorization of k than the corresponding power q^e dividing M. ◦ Observe that the element u^(q^e) has order M/q^e, and the element w^(k/q^f) has order q^f. ◦ Since these two orders are relatively prime, the element u^(q^e) · w^(k/q^f) has order M · q^(f−e), which is a contradiction because this is larger than M. ◦ Remark (for those who like group theory): This result actually holds in any abelian group, with the same proof: if M is the maximal order among all elements of a finite abelian group, then the order of every element divides M.
• Theorem (Primitive Roots in Finite Fields): If F is a finite field, then F has a primitive root. ◦ Our proof is nonconstructive: we will show the existence of a primitive root without explicitly finding one. ◦ Proof: Suppose M is the maximal order among all units in F. Then by the finite-field version of Euler's theorem, we know that M ≤ |F| − 1, since a^(|F|−1) = 1 in F for every unit a ∈ F. ◦ By the above proposition, all units in F then have order dividing M, so the polynomial x^M − 1 has |F| − 1 roots in F. ◦ But this is impossible unless M ≥ |F| − 1, since a polynomial of degree M can have at most M roots in F. ◦ Hence we conclude M = |F| − 1, meaning that some element has order |F| − 1: this element is a primitive root.
Footnote 4: To summarize: if K is a finite field, and we let K′ be the subfield of K generated by the element 1 (in other words, the subfield whose elements are 0, 1, 1 + 1, 1 + 1 + 1, ...), it can be shown that K′ has a prime number of elements p, and that K is a vector space over K′. Then, because every vector space has a basis, if we select a basis with d elements for K as a vector space over K′, then by counting the possible linear combinations of the basis elements we see that the number of elements in K is p^d, which is a prime power.
• By setting F = Z/pZ, we obtain the existence of a primitive root modulo p, which we now use to construct a primitive root modulo p^2: • Proposition (Primitive Roots Modulo p^2): If a is a primitive root modulo p for p an odd prime, then a is a primitive root modulo p^2 if a^(p−1) ≢ 1 (mod p^2). In the event that a^(p−1) ≡ 1 (mod p^2), then a + p is a primitive root modulo p^2. ◦ Proof: Since a is a primitive root modulo p, if the order of a mod p^2 is r, then since a^r ≡ 1 (mod p^2) certainly implies a^r ≡ 1 (mod p), we see that p − 1 divides r. ◦ Since φ(p^2) = p(p − 1), there are two possibilities: the order of a modulo p^2 is p − 1 or it is p(p − 1). ◦ The order of a modulo p^2 will be p − 1 if and only if a^(p−1) ≡ 1 (mod p^2). This gives the first statement.
◦ For the second statement, suppose that a^(p−1) ≡ 1 (mod p^2). ◦ The binomial theorem implies (a + p)^(p−1) = a^(p−1) + (p − 1)p · a^(p−2) + p^2 · [other terms], which is simply a^(p−1) − p·a^(p−2) (mod p^2). ◦ Since a^(p−1) ≡ 1 (mod p^2), we see that a^(p−1) − p·a^(p−2) cannot be equivalent to 1 mod p^2, because p·a^(p−2) is not divisible by p^2. So (a + p)^(p−1) ≢ 1 (mod p^2), so by the earlier argument a + p is a primitive root modulo p^2. • Example: Find a primitive root modulo 11^2. ◦ First we show that 2 is a primitive root modulo 11: since the order of 2 must divide φ(11) = 10, and 2^2 ≢ 1 (mod 11) and 2^5 ≢ 1 (mod 11), the order divides neither 2 nor 5, hence must be 10. ◦ We can also compute 2^10 = 1024 ≡ 56 (mod 11^2), so the proposition above dictates that 2 is also a primitive root modulo 11^2. • Primitive roots modulo p^d for d > 2 turn out to be essentially the same as primitive roots modulo p^2: • Proposition (Primitive Roots Modulo p^d): If a is a primitive root modulo p^2 for p an odd prime, then a is a primitive root modulo p^d for all d ≥ 2. ◦ Proof: We show this by induction on d: the base case d = 2 is vacuous. ◦ Now suppose that a is a primitive root modulo p^d and that it has order r modulo p^(d+1): thus, a^r ≡ 1 (mod p^(d+1)). Note that Euler's theorem implies that r divides φ(p^(d+1)) = p^d(p − 1). ◦ Reducing modulo p^d shows a^r ≡ 1 (mod p^d), so since a is a primitive root modulo p^d we see that r is divisible by φ(p^d) = p^(d−1)(p − 1). ◦ Thus, the only possibilities are r = p^(d−1)(p − 1) and r = p^d(p − 1): we just need to eliminate the first possibility. ◦ By Euler's theorem, a^(p−1) ≡ 1 (mod p), so we can write a^(p−1) = 1 + kp for some integer k. ◦ Then, since a is a primitive root modulo p^2, we also know that k is not divisible by p (as otherwise a would have order p − 1 modulo p^2). ◦ Expanding with the binomial theorem yields (a^(p−1))^(p^(d−1)) = (1 + kp)^(p^(d−1)) = 1 + p^(d−1)·kp + p^(d+1)·[other terms]. But this is ≢ 1 modulo p^(d+1), since k is not divisible by p.
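The computations in the example can be verified directly. This small Python sketch (mine, not the handout's; order is a naive multiplicative-order helper) checks that 2 is a primitive root modulo 11 and modulo 11^2:

```python
def order(a, m):
    # multiplicative order of a modulo m (assumes gcd(a, m) = 1)
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

p = 11
assert order(2, p) == p - 1              # 2 is a primitive root modulo 11
assert pow(2, p - 1, p * p) == 56        # 2^10 = 1024 = 56 (mod 121), not 1
assert order(2, p * p) == p * (p - 1)    # hence 2 is a primitive root modulo 121
```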
◦ Hence a^(p^(d−1)(p−1)) ≢ 1 (mod p^(d+1)), so a must have order p^d(p − 1) = φ(p^(d+1)), meaning a is in fact a primitive root. • Example: Find a primitive root modulo 11^100. ◦ We saw in the previous example that 2 was a primitive root modulo 11^2. Hence by the proposition above, 2 is a primitive root modulo 11^d for any d ≥ 2, hence (in particular) for d = 100. • Given a primitive root modulo p^d, it is easy to construct a primitive root modulo 2p^d: • Proposition: If a is a primitive root modulo p^d for p an odd prime, then a is a primitive root modulo 2p^d if a is odd, and a + p^d is a primitive root modulo 2p^d if a is even. ◦ Proof: If a is odd, then a, a^2, ..., a^φ(p^d) are all odd and distinct modulo p^d. Hence they all remain invertible modulo 2p^d, and are clearly still distinct. ◦ But since φ(2p^d) = φ(p^d), the elements a, a^2, ..., a^φ(p^d) exhaust all of the distinct invertible residue classes modulo 2p^d, meaning that a is a primitive root. ◦ If a is even, then a + p^d is odd, and we can apply the argument above to see that a + p^d is a primitive root modulo 2p^d. • Example: Find a primitive root modulo 2 · 11^100. ◦ From before, we know that 2 is a primitive root modulo 11^100. Since 2 is even, the proposition above implies that 2 + 11^100 is a primitive root modulo 2 · 11^100. • With all of the above results, we can now finish the characterization of the moduli that have primitive roots: • Theorem (Primitive Roots Modulo m): There exists a primitive root modulo m if and only if m = 1, 2, 4, or m = p^k or 2p^k for an odd prime p and some k ≥ 1. ◦ Proof: We have already shown the existence of primitive roots in all of these cases except m = 1, 2, 4, but the existence of a primitive root for those moduli is trivial. All we have left to do is show that a primitive root cannot exist for other m. ◦ We begin with the observation that if there exists a primitive root r modulo m, then necessarily the congruence x^2 ≡ 1 (mod m) has only two solutions modulo m.
∗ Suppose u = r^d for some 0 ≤ d < φ(m) is a solution to u^2 ≡ 1 (mod m). ∗ Then r^(2d) ≡ 1 (mod m), so since r has order φ(m) there are only two possibilities for d, namely d = 0 and d = φ(m)/2. ∗ Hence, there are only two possible u (which are, indeed, u = 1 and u = −1). ◦ We then see that there cannot exist a primitive root modulo 4p for any prime p (including p = 2). ∗ The congruence x^2 ≡ 1 (mod 4p) has the four distinct solutions x ≡ ±1 and x ≡ ±(2p − 1), so by the above there cannot be a primitive root. ◦ Similarly, there cannot exist a primitive root modulo pq for any distinct odd primes p and q. ∗ By the Chinese Remainder Theorem, there are four solutions to x^2 ≡ 1 (mod pq), obtained by solving the congruences x ≡ ±1 (mod p) and x ≡ ±1 (mod q) simultaneously. ◦ We also note that if r is a primitive root modulo m and d divides m, then r is a primitive root modulo d. ∗ If the powers of r yield all the invertible residue classes modulo m, then they certainly yield all the invertible residue classes modulo d. ◦ Therefore: if m is divisible by 4p for any prime p, or is divisible by two distinct odd primes, there is no primitive root modulo m. These two cases together encompass everything we needed to show, so we are done. • For completeness, we restate a result we showed in a previous chapter about the number of primitive roots modulo m: • Proposition (Number of Primitive Roots): If there exists a primitive root modulo m, then there are precisely φ(φ(m)) primitive roots modulo m. ◦ Proof: Suppose that there exists a primitive root u modulo m, whose order is therefore φ(m). ◦ We know that the invertible residue classes modulo m are represented by u^1, ..., u^φ(m), so it suffices to determine how many of these have order φ(m). ◦ Since the order of u^k is φ(m)/gcd(k, φ(m)), we see that u^k is a primitive root if and only if k is relatively prime to φ(m). ◦ There are φ(φ(m)) such k, so there are φ(φ(m)) primitive roots modulo m.
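The theorem can be spot-checked by brute force for small moduli. The following Python sketch (my own, with helper names of my choosing) finds which moduli in the range 3 ≤ m < 30 possess a primitive root and compares against the list 4, p^k, 2p^k (the trivial cases m = 1, 2 are skipped):

```python
from math import gcd

def order(a, m):
    # multiplicative order of a modulo m (assumes gcd(a, m) = 1)
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

def has_primitive_root(m):
    # a primitive root exists iff some unit has order phi(m)
    units = [a for a in range(1, m) if gcd(a, m) == 1]
    return any(order(u, m) == len(units) for u in units)

# the theorem predicts: 3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 17, 18, 19, 22, 23, 25, 26, 27, 29
expected = {3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 17, 18, 19, 22, 23, 25, 26, 27, 29}
assert {m for m in range(3, 30) if has_primitive_root(m)} == expected
```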
• Example: Find a primitive root modulo 23^2020 and the number of primitive roots modulo 23^2020. ◦ First, we find a primitive root modulo 23. The order of any element will divide φ(23) = 22, so to see that a given element is a primitive root we need only check that its order does not divide 11 or 2. ◦ It is not hard to check that both 2 and 3 have order 11, but 5^2 ≡ 2 (mod 23) and 5^11 ≡ −1 (mod 23), so 5 has order 22, hence is a primitive root modulo 23. ◦ Then we can also compute 5^22 ≡ 323 (mod 23^2) using successive squaring. Hence by our results above, we see that 5 is a primitive root modulo 23^2, and hence modulo 23^d for any value of d ≥ 2. This means that 5 is a primitive root modulo 23^2020. ◦ The total number of primitive roots is φ(φ(23^2020)) = φ(22 · 23^2019) = φ(22)·φ(23^2019) = 10 · 22 · 23^2018.
4.4 Arithmetic in Z[i]
• In this section, we use all of the ring-theoretic machinery we have developed to study the arithmetic of the Gaussian integer ring Z[i]. ◦ Our first goal is to study modular arithmetic in this ring. ◦ Then we turn our attention to characterizing the irreducible elements in this ring. Since Z[i] is a Euclidean domain, we know that prime elements are the same as irreducible elements, but we will generally use the term irreducible element when referring to Z[i], so as not to cause too much confusion with the term prime number when we refer to rational integers in Z. ◦ We will reserve the letter p for a prime integer (in Z), and we will use π to denote an irreducible element in Z[i]. (The use of the letter π is traditional, and should not cause confusion with the real number π.) • Recall that in Z[i], we have the norm map N(a + bi) = a^2 + b^2 = |a + bi|^2, taking values in the nonnegative integers, and that this map is multiplicative: N(zw) = N(z)N(w). • We collect a few basic facts about norms that hold in Z[√D] for a general D: • Proposition: If α ∈ Z[√D], then α is a unit if and only if N(α) = ±1. Also, if N(α) = ±p for a prime p, then α is irreducible.
◦ Proof: If α is a unit, say with αβ = 1, then 1 = N(1) = N(αβ) = N(α)N(β), so N(α) = ±1. ◦ Conversely, if N(α) = ±1, then writing α = a + b√D, we have (a − b√D)(a + b√D) = N(α) = ±1, so α times ±(a − b√D) is equal to 1, meaning α is a unit. ◦ Finally, if α = βγ and N(α) = ±p, then N(β)N(γ) = ±p. If p is prime, then one of N(β), N(γ) must be ±1, so by the above one of β, γ is a unit.
4.4.1 Residue Classes in Z[i]
• A natural question is: if β ∈ Z[i] is some arbitrary element, how many residue classes are there modulo β, and is there an easy way to write them down? ◦ It might seem as though the division algorithm would give them to us: we proved that for any α ∈ Z[i], there exist q, r ∈ Z[i] such that α = qβ + r, where N(r) ≤ (1/2)N(β). ◦ Thus, the collection of possible remainders r with N(r) ≤ (1/2)N(β) certainly gives all the residue classes. ◦ However, the quotient and remainder arising in the division algorithm are not guaranteed to be unique: there can be more than one possible r such that α ≡ r (mod β) and N(r) < (1/2)N(β). • It turns out that it is much easier to understand the modular arithmetic in Z[i] from a geometric point of view. ◦ In the complex plane, the Gaussian integers form the set of lattice points, the points whose coordinates are both integers. We can also view Gaussian integers as vectors in this lattice, since the additive structure of Z[i] agrees with the additive structure of vectors in the plane.
Figure 1: The Gaussian integers as a lattice, and the two vectors β = 2 + i and iβ = −1 + 2i.
◦ Now consider the multiples of a given Gaussian integer β: every multiple is of the form (x + iy)β = xβ + y(iβ), so it is an integer linear combination of β and iβ. ◦ Thus, drawing all of the Z[i]-multiples of β is the same as drawing all of the vectors that can be obtained by an integer number of steps each in the direction of β or iβ, which produces a square tiling of the plane.
Figure 2: The Z[i]-multiples of β = 2 + i, with marked vectors β = 2 + i and iβ = −1 + 2i.
◦ Geometrically, two Gaussian integers will be congruent modulo β if and only if they are located in the same position within two different squares. • Thus, if we take the collection of lattice points inside any one of these squares, it will yield a fundamental region for the Gaussian integers modulo β: the elements in the fundamental region will be unique representatives for the residue classes modulo β.
Figure 3: A fundamental region for Z[i] modulo β = 2 + i, with a marked set of representatives.
• As shown in the figures, there is a fundamental region for Z[i] modulo 2 + i containing the 5 points 0, i, 2i, 1 + i, and 2 + i. ◦ Hence, every element of Z[i] is congruent modulo 2 + i to 0, i, 2i, 1 + i, or 2 + i. ◦ We conclude that there are 5 residue classes modulo 2 + i. (Recall that we showed this earlier using a different approach.) • Notice that N(2 + i) = 5, and there are 5 residue classes modulo 2 + i. In general, it turns out that there are exactly N(β) residue classes modulo β for any nonzero β. We can prove this using (of all things) a theorem from elementary geometry! • Theorem (Pick's Theorem): If R is a polygon in the plane whose vertices are all lattice points, then the area of R is given by the formula A = I + (1/2)B − 1, where I is the number of lattice points in the interior of R and B is the number of lattice points on the boundary of R. ◦ Remark: We say a point of R is a boundary point if it lies on one of the sides of R, and an interior point if it does not lie on any side of R. ◦ The result is easiest to see with an example: by drawing a rectangle around the given polygon and subtracting small triangles, one can see that the polygon in Figure 4 has area 17/2 = 5 + 9/2 − 1.
Figure 4: A lattice polygon with 9 boundary points (in blue) and 5 interior points (in red).
◦ We will omit the full proof (see footnote 5 below), since it is not really relevant to our goals.
Footnote 5: To summarize: first establish Pick's theorem for rectangles (an easy counting argument), and that it is consistent with taking unions of regions along an edge and also with removing a portion of a region along an edge. Then deduce that it holds for right triangles, then for all triangles, and finally that any polygonal region can be constructed by adding or subtracting triangular regions from rectangles.
• We can use Pick's theorem to give an easy computation of the number of residue classes in Z[i] modulo β: • Theorem (Number of Residue Classes in Z[i]/β): If β is a nonzero Gaussian integer, the number of distinct residue classes in Z[i] modulo β is equal to N(β). ◦ Proof: Consider a fundamental region for Z[i] modulo β. ◦ By our geometric arguments above, every Gaussian integer has a unique representative modulo β that lies in the fundamental region, which we can take to be the square whose vertices are 0, β, iβ, and β + iβ in the complex plane. ◦ Each interior point of this square yields one residue class. ◦ The boundary points of the square come in pairs (on opposite edges of the square), each pair yielding one residue class, except for the four vertices (0, β, iβ, β + iβ), which all lie in the same residue class. ◦ Thus, the total number of residue classes is I + (B − 4)/2 + 1 = I + (1/2)B − 1. ◦ But by Pick's Theorem, this is precisely the area of the fundamental region. Since this region is a square with side length |β|, the area is simply |β|^2 = N(β). • Thus, to list all of the residue classes modulo β ∈ Z[i], we need only give a list of N(β) inequivalent residue classes, which must therefore be exhaustive. (To generate this list, we can draw a fundamental region for Z[i] modulo β.) • Example: Find representatives for the residue classes modulo 2 + 2i in Z[i]. ◦ We have N(2 + 2i) = 8, so there are 8 residue classes. ◦ It is then not hard to verify that the 8 values 0, 1, 2, 3, i, 1 + i, 2 + i, and 3 + i are all pairwise distinct modulo 2 + 2i.
Thus, these are representatives of all of the residue classes.
4.4.2 Prime Factorization in Z[i]
• We now turn our attention to factorization in Z[i]. ◦ If π ∈ Z[i], π certainly divides N(π). So if π is irreducible in Z[i], then since irreducibles are prime elements in a Euclidean domain, we conclude that π must divide one of the (integer) prime factors of the integer N(π). ◦ Therefore, to identify the irreducible elements of Z[i], we need to study how primes p ∈ Z factor in Z[i]. • Proposition (Reducibility and Sums of Squares): If p is a prime integer, then p is irreducible in Z[i] if and only if p is not the sum of two squares (of integers). In particular, 2 is reducible in Z[i], while any prime congruent to 3 modulo 4 is irreducible in Z[i]. ◦ Proof: Suppose that p = (a + bi)(c + di) for some nonunits a + bi and c + di, where p is a prime in Z. ◦ Taking norms yields p^2 = N(p) = (a^2 + b^2)(c^2 + d^2), and since a + bi and c + di are not units, both a^2 + b^2 and c^2 + d^2 must be larger than 1. ◦ The only possibility is a^2 + b^2 = c^2 + d^2 = p, so we see that p = a^2 + b^2 for some integers a and b. ◦ Conversely, if p = a^2 + b^2 for some integers a and b, we immediately have the factorization p = (a + bi)(a − bi). ◦ For the last statement, clearly 2 = 1^2 + 1^2. Also, any square is 0 or 1 modulo 4, so the sum of two squares cannot be congruent to 3 modulo 4. • We are now left to analyze primes congruent to 1 modulo 4. ◦ By testing a few small cases like 5 = (2 − i)(2 + i) and 13 = (3 + 2i)(3 − 2i), it would appear that such primes always factor into a product of two complex-conjugate irreducible factors in Z[i]. This turns out to be the case. • Proposition (Factorization of 1 Mod 4 Primes): If p is a prime integer and p ≡ 1 (mod 4), then p is a reducible element of the ring Z[i], and its factorization into irreducibles is p = (a + bi)(a − bi) for some a and b with a^2 + b^2 = p. ◦ First, we show that there exists some integer n such that p divides n^2 + 1.
Then we use that result to show that p is reducible in Z[i]. ◦ Proof: For the first part, let p = 4k + 1 and let u be a primitive root modulo p (which we have shown necessarily exists). ◦ Then u^(4k) ≡ 1 (mod p), so u^(2k) ≡ −1 (mod p), since its square is 1 but it cannot equal 1 (as otherwise u would have order ≤ 2k and thus not be a primitive root). ◦ Then n = u^k is an element whose square is −1 modulo p, so p divides the integer n^2 + 1. ◦ For the second part, we see that p divides n^2 + 1 = (n + i)(n − i) in Z[i]. ◦ Then, since p is a real number, if p divided one of n ± i, then taking complex conjugates would show that p also divides the other. But this is not possible, since then p would divide (n + i) − (n − i) = 2i, which it clearly does not. ◦ Therefore, p is not a prime element in Z[i], so it must be reducible. By the previous proposition, this means there exist integers a and b with p = a^2 + b^2. ◦ Then N(a + bi) = N(a − bi) = p, so these two elements are both irreducible, meaning that the factorization of p in Z[i] is p = (a + bi)(a − bi), as claimed. • This completes our characterization of the irreducible elements in Z[i]. Explicitly: • Theorem (Irreducibles in Z[i]): Up to associates, the irreducible elements in Z[i] are as follows: 1. The element 1 + i (of norm 2). 2. The primes p ∈ Z congruent to 3 modulo 4 (of norm p^2). 3. The distinct irreducible factors a + bi and a − bi (each of norm p) of p = a^2 + b^2, where p ∈ Z is congruent to 1 modulo 4. ◦ Proof: The above propositions show that each of these is irreducible; we need only show there are no others. So suppose π = a + bi is an irreducible element of Z[i]. ◦ Then N(π) = p_1 p_2 · · · p_k for some (integer) primes p_i ∈ Z; since π is a prime element, we conclude that it must divide one of the p_i. But we have characterized how p_i factors into irreducibles in Z[i], so π must be associate to one of the elements on our list above.
• Using this characterization of irreducible elements, we can describe a method for factoring an arbitrary Gaussian integer into irreducibles. (This is the prime factorization in Z[i].) ◦ First, find the prime factorization of N(a + bi) = a^2 + b^2 over the integers Z, and write down a list of all (rational) primes p ∈ Z dividing N(a + bi). ◦ Second, for each p on the list, find the factorization of p over the Gaussian integers Z[i]. ◦ Finally, use trial division to determine which of these irreducible elements divide a + bi in Z[i], and to which powers. (The factorization of N(a + bi) can be used to determine the expected number of powers.) • Example: Find the factorization of 4 + 22i into irreducibles in Z[i]. ◦ We compute N(4 + 22i) = 4^2 + 22^2 = 2^2 · 5^3. The primes dividing N(4 + 22i) are 2 and 5. ◦ Over Z[i], we find the factorizations 2 = −i(1 + i)^2 and 5 = (2 + i)(2 − i). ◦ Now we just do trial division to find the correct powers of each of these elements dividing 4 + 22i. ◦ Since N(4 + 22i) = 2^2 · 5^3, we should get two copies of (1 + i) and three elements from {2 + i, 2 − i}. ◦ Doing the trial division yields the factorization 4 + 22i = −i · (1 + i)^2 · (2 + i)^3. (Note that in order to have powers of the same irreducible element, we left the unit −i in front of the factorization.) • The primes appearing in the example above were small enough to factor over Z[i] by inspection, but if p is large then it is not so obvious how to factor p in Z[i]. We briefly explain how to find this expression algorithmically. ◦ Per the proof given above, we first want to find n such that p divides n^2 + 1, which is equivalent to finding a square root of −1 modulo p. ◦ One way to search for such values is to choose a (random) unit u modulo p: then since u^(p−1) ≡ 1 (mod p), we know that the square of u^((p−1)/2) will be ≡ 1 (mod p). We will show later that half of the units modulo p have u^((p−1)/2) ≡ −1 (mod p), in which case the value u^((p−1)/4) will be a square root of −1 modulo p.
By trying various choices for u, we can eventually find the desired n. (Note of course that we can compute u^((p−1)/4) very efficiently using successive squaring.) ◦ Now suppose we have computed such an n: if we factor p = π·π̄ in Z[i], then since π divides n^2 + 1 = (n + i)(n − i) and π is a prime element, either π divides n + i or π divides n − i. Equivalently, either π divides n + i or π̄ divides n + i. ◦ Furthermore, since p clearly does not divide n + i, we see that exactly one of π and π̄ divides n + i. Therefore, either π or π̄ is a greatest common divisor of p and n + i in Z[i]. ◦ Thus, to compute the solution to p = a^2 + b^2, we can use the Euclidean algorithm in Z[i] to find a greatest common divisor of p and n + i in Z[i]: the result will be an element π = a + bi with a^2 + b^2 = p. • Example: Express the prime p = 3329 as the sum of two squares. ◦ Using modular exponentiation, we can verify that 3^((p−1)/4) ≡ 1729 (mod p). Thus, our discussion above tells us that 1729 is a square root of −1 modulo p, and indeed, 1729^2 + 1 = 898 · 3329. ◦ Now we compute the gcd of 1729 + i and 3329 in Z[i] using the Euclidean algorithm:
3329 = 2(1729 + i) + (−129 − 2i)
1729 + i = −13(−129 − 2i) + (52 − 25i)
−129 − 2i = (−2 − i)(52 − 25i)
◦ The last nonzero remainder is 52 − 25i, and indeed we can see that 3329 = 52^2 + 25^2. • As a corollary to our characterization of the irreducible elements in Z[i], we can deduce the following theorem of Fermat on when an integer is the sum of two squares: • Theorem (Fermat): Let n be a positive integer, and write n = 2^k · p_1^{n_1} · · · p_k^{n_k} · q_1^{m_1} · · · q_d^{m_d}, where p_1, ..., p_k are distinct primes congruent to 1 modulo 4 and q_1, ..., q_d are distinct primes congruent to 3 modulo 4. Then n can be written as a sum of two squares in Z if and only if all the m_i are even. Furthermore, in this case, the number of ordered pairs of integers (A, B) such that n = A^2 + B^2 is equal to 4(n_1 + 1)(n_2 + 1) · · · (n_k + 1).
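The two-step algorithm just described (find a square root n of −1 modulo p, then take a Gaussian gcd of p and n + i) can be sketched in Python as follows. This is my own illustration, not code from the handout; it uses Python's complex floats for the Euclidean step with nearest-integer rounding, which is exact for Gaussian integers of this size.

```python
def two_squares(p):
    # express a prime p = 1 (mod 4) as a^2 + b^2, following the method above
    # step 1: find n with n^2 = -1 (mod p) by testing values u^((p-1)/4)
    for u in range(2, p):
        n = pow(u, (p - 1) // 4, p)
        if n * n % p == p - 1:
            break
    # step 2: Euclidean algorithm in Z[i] applied to p and n + i
    a, b = complex(p, 0), complex(n, 1)
    while abs(b) > 0.5:  # loop until the remainder b is zero
        q = a / b
        q = complex(round(q.real), round(q.imag))  # nearest Gaussian integer
        a, b = b, a - q * b
    return int(round(abs(a.real))), int(round(abs(a.imag)))

a, b = two_squares(3329)
assert {a, b} == {52, 25} and a * a + b * b == 3329  # matches the example
```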
◦ Proof: Observe that the question of whether n can be written as the sum of two squares n = A^2 + B^2 is equivalent to the question of whether n is the norm of a Gaussian integer A + Bi. ◦ Write A + Bi = ρ_1 ρ_2 · · · ρ_r as a product of irreducibles (unique up to units), and take norms to obtain n = N(ρ_1) · N(ρ_2) · · · · · N(ρ_r). ◦ By the classification above, if ρ is irreducible in Z[i], then N(ρ) is either 2, a prime congruent to 1 modulo 4, or the square of a prime congruent to 3 modulo 4. Hence there exists such a choice of ρ_i with n = ∏ N(ρ_i) if and only if all the m_i are even. ◦ Furthermore, since the factorization of A + Bi is unique, to find the number of possible pairs (A, B), we need only count the number of ways to select terms for A + Bi and A − Bi from the factorization of n over Z[i], which is n = (1 + i)^(2k) (π_1 π̄_1)^{n_1} · · · (π_k π̄_k)^{n_k} q_1^{m_1} · · · q_d^{m_d}. ◦ Up to associates, we must choose A + Bi = (1 + i)^k (π_1^{a_1} π̄_1^{b_1}) · · · (π_k^{a_k} π̄_k^{b_k}) q_1^{m_1/2} · · · q_d^{m_d/2}, where a_i + b_i = n_i for each 1 ≤ i ≤ k. ◦ Since there are n_i + 1 ways to choose the pair (a_i, b_i), and 4 ways to multiply A + Bi by a unit, the total number of ways is 4(n_1 + 1) · · · (n_k + 1), as claimed. • Example: Find all ways of writing n = 6649 as the sum of two squares. ◦ We factor 6649 = 61 · 109. This is the product of two primes each congruent to 1 modulo 4, so it can be written as the sum of two squares in 16 different ways. ◦ We compute 61 = 5^2 + 6^2 and 109 = 10^2 + 3^2 (either by the algorithm above or by inspection), so the 16 ways can be found from the different ways of choosing one of 5 ± 6i and multiplying it by 10 ± 3i. ◦ Explicitly: (5 + 6i)(10 + 3i) = 32 + 75i, and (5 + 6i)(10 − 3i) = 68 + 45i, so we obtain the sixteen ways of writing 6649 as the sum of two squares as (±32)^2 + (±75)^2, (±68)^2 + (±45)^2, and the eight other decompositions with the terms interchanged.
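Fermat's count can be confirmed by brute force for small n. The sketch below (mine, not from the handout) enumerates all ordered integer pairs (A, B) with A^2 + B^2 = n and compares the totals against the formula 4(n_1 + 1) · · · (n_k + 1):

```python
def rep_count(n):
    # count ordered pairs (A, B) of integers with A^2 + B^2 = n
    total = 0
    A = 0
    while A * A <= n:
        r = n - A * A
        B = int(r ** 0.5)
        while B * B > r:          # guard against floating-point error
            B -= 1
        while (B + 1) ** 2 <= r:
            B += 1
        if B * B == r:
            # each nonzero coordinate contributes two sign choices
            total += (1 if A == 0 else 2) * (1 if B == 0 else 2)
        A += 1
    return total

assert rep_count(6649) == 16   # 6649 = 61 * 109, both 1 mod 4: 4 * 2 * 2
assert rep_count(9) == 4       # 9 = 3^2, an even power of a 3-mod-4 prime
assert rep_count(21) == 0      # 3 and 7 appear to odd powers
```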
• As another application of our results, we can prove a classical characterization of the Pythagorean triples of integers (a, b, c) with a^2 + b^2 = c^2 (so named because these represent the side lengths of a right triangle). ◦ If a^2 + b^2 = c^2 for integers a, b, c, note that if two of a, b, c are divisible by a prime p, then so is the third. We can then reduce the triple (a, b, c) by dividing each term by p to obtain a new triple (a′, b′, c′) with (a′)^2 + (b′)^2 = (c′)^2. ◦ For this reason it suffices to characterize the primitive Pythagorean triples, those with gcd(a, b, c) = 1. For such triples, since a and b cannot both be odd (for then a^2 + b^2 ≡ 2 (mod 4) could not be a perfect square), we see that exactly one of a, b is even. • Theorem (Pythagorean Triples): Every triple of positive integers (a, b, c) with a^2 + b^2 = c^2, gcd(a, b, c) = 1, and a even is of the form (a, b, c) = (2st, s^2 − t^2, s^2 + t^2) for some relatively prime integers s > t of opposite parity, and (conversely) any such triple is Pythagorean and primitive. ◦ Proof: It is easy to see that (2st)^2 + (s^2 − t^2)^2 = (s^2 + t^2)^2 simply by multiplying out, and it is likewise not difficult to see that if s and t are relatively prime and of opposite parity, then gcd(s^2 − t^2, s^2 + t^2) = 1, so this triple is primitive. ◦ To show that (a, b, c) must be of the desired form, suppose that a^2 + b^2 = c^2, and factor the equation in Z[i] as (a + bi)(a − bi) = c^2. ◦ We claim that a + bi and a − bi are relatively prime in Z[i]: any common divisor must divide both 2a and 2b, hence divide 2. However, a + bi is not divisible by the prime 1 + i, since a and b are of opposite parity. ◦ Hence, since a + bi and a − bi are relatively prime and have product equal to a square, by the uniqueness of prime factorization in Z[i], there exists some s + ti ∈ Z[i] and some unit u ∈ {1, i, −1, −i} such that a + bi = u(s + ti)^2. ◦ Multiplying out yields a + bi = u[(s^2 − t^2) + (2st)i].
Since a is even, b is odd, and both are positive, we must have u = −i and s > t: then we see a = 2st, b = s^2 − t^2, and c = s^2 + t^2 as claimed. • As a third corollary of our classification, we obtain another way to construct finite fields: if p ∈ Z is a prime congruent to 3 modulo 4, then, for R = Z[i], we know that R/pR is a field of size N(p) = p^2. ◦By drawing the fundamental region for R/pR, we can see that a set of residue class representatives is given by the elements of the form a + bi for 0 ≤ a, b ≤ p − 1. ◦With p = 3, we obtain a field with 9 elements whose elements are the residue classes of 0, 1, 2, i, 1 + i, 2 + i, 2i, 1 + 2i, and 2 + 2i. In this field, for example, we have (1 + i) · (2 + i) = 1 + 3i ≡ 1 (mod 3). ◦Notice that we constructed another field of order 9 earlier: F3[x] modulo x^2 + 1. In this field, for example, we have (1 + x) · (2 + x) = 2 + 3x + x^2 = 1. ◦As can be verified by trying out a few more examples, the arithmetic in these two fields turns out to be identical! (Simply replace i by x.) ◦Here is the reason: notice that F3[x] modulo x^2 + 1 is obtained from Z by first declaring that 3 is equal to 0 (thus forming F3 = Z/3Z), and then introducing a new element x whose square is −1. ◦On the other hand, Z[i] modulo 3 is obtained from Z by first introducing a new element i whose square is −1, and then declaring that 3 is equal to 0. ◦These two fields, therefore, are related because these two operations can be performed in either order. Well, you're at the end of my handout. Hope it was helpful. Copyright notice: This material is copyright Evan Dummit, 2014-2020. You may not reproduce or distribute this material without my express permission.
https://www.statisticshowto.com/hyperbolic-functions/
Hyperbolic Functions
Types of Functions > Hyperbolic functions are a special class of transcendental functions, similar to trigonometric functions or the natural exponential function, e^x. Although not as common as their trig counterparts, the hyperbolics are useful for some applications, like modeling the shape of a power line hanging between two poles. Each trigonometric function has a corresponding hyperbolic function, with an extra letter “h”: for example, the hyperbolic sine sinh(x), the hyperbolic cosine cosh(x), and the hyperbolic tangent tanh(x). While the “ordinary” trig functions parameterize (model) a circle, the hyperbolics model a hyperbola, hence the name.
Defining Using e^x
Unlike their trigonometric counterparts, hyperbolic functions are defined in terms of the exponential function e^x. For example, f(x) = cosh(x) is defined by:
cosh(x) = (e^x + e^−x) / 2
And sinh(x) is defined as:
sinh(x) = (e^x − e^−x) / 2
All of the remaining hyperbolic functions (see list below) can be defined in terms of these two definitions; for example, tanh(x) = sinh(x) / cosh(x).
Properties of Hyperbolic Functions
Hyperbolic functions can be even or odd functions. Even functions (symmetric about the y-axis): cosh(x) and sech(x). Odd functions (symmetric about the origin): all other hyperbolic functions. Some of these functions are defined for all reals: sinh(x), cosh(x), tanh(x) and sech(x). Two others, coth(x) and csch(x), are undefined at x = 0 because of a vertical asymptote there.
Derivatives of Hyperbolic Functions
The derivatives of hyperbolic functions are almost identical to their trigonometric counterparts, apart from some signs:
sinh(x)′ = cosh(x)
cosh(x)′ = sinh(x)
tanh(x)′ = sech²(x)
coth(x)′ = −csch²(x)
csch(x)′ = −csch(x) coth(x)
sech(x)′ = −sech(x) tanh(x)
Limits
As x → ±∞, the limits of the hyperbolic functions are:
lim x→±∞ sinh(x) = ±∞
lim x→±∞ cosh(x) = ∞
lim x→±∞ tanh(x) = ±1
lim x→±∞ coth(x) = ±1
lim x→±∞ csch(x) = 0
lim x→±∞ sech(x) = 0
References
Graph of cosh(x): Desmos calculator. Stewart. Math 133. Hyperbolic Functions.
Retrieved November 24, 2019 from:
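As a quick numeric companion to the article (an editorial sketch, not from the original page), the exponential definitions above can be checked against Python's built-in math.sinh and math.cosh, along with the identity cosh²(x) − sinh²(x) = 1 that makes (cosh t, sinh t) trace out a hyperbola.

```python
import math

def sinh(x):
    # Exponential definition: (e^x - e^-x) / 2
    return (math.exp(x) - math.exp(-x)) / 2

def cosh(x):
    # Exponential definition: (e^x + e^-x) / 2
    return (math.exp(x) + math.exp(-x)) / 2

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    # The exponential definitions agree with the library functions...
    assert math.isclose(sinh(x), math.sinh(x), rel_tol=1e-12, abs_tol=1e-12)
    assert math.isclose(cosh(x), math.cosh(x), rel_tol=1e-12)
    # ...and the hyperbola identity cosh^2 - sinh^2 = 1 holds.
    assert math.isclose(cosh(x)**2 - sinh(x)**2, 1.0, rel_tol=1e-9)

print("all checks passed")
```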
https://byjus.com/maths/distance-between-two-points-3d/
In Mathematics, we mostly prefer the distance formula to find the distance between two points in the coordinate plane. This distance formula is used when we know the coordinates of the two points in the plane: by substituting those points in the formula, we can easily get the distance between the two points. In order to locate the position of a point in a plane or two dimensions, we require a pair of coordinate axes. The distance of the point along the x-axis from the origin is called the x-coordinate (or abscissa), and the distance of the point along the y-axis from the origin is called the y-coordinate (or ordinate). The ordered pair (x, y) represents the coordinates of the point. In this article, you will learn the distance between two points in the 2D plane (two-dimensional plane) and 3D plane (three-dimensional plane), formulas and examples in detail.
Also, read: Distance Between Two Lines | Distance Formula | Perpendicular Distance of a Point From a Plane
Distance Between 2 Points Formula
Consider two points A(x1, y1) and B(x2, y2) on the given coordinate axes. The distance between these points is given as:
d = √[(x2 − x1)² + (y2 − y1)²]
Also try: Distance Between Two Points Calculator
How to Find the Distance Between Two Points?
To find the distance between two points in the coordinate plane: subtract the corresponding coordinates, square each difference, add the squares, and take the square root of the sum.
Distance Between Two Points in 3D
Distance Between 2 Points Formula in 3D
The distance between two points P(x1, y1, z1) and Q(x2, y2, z2) is
PQ = √[(x2 − x1)² + (y2 − y1)² + (z2 − z1)²]
Distance Between 2 Points Formula Derivation in 3D
Let the points P(x1, y1, z1) and Q(x2, y2, z2) be referred to a system of rectangular axes OX, OY and OZ as shown in the figure. Through the points P and Q, we draw planes parallel to the rectangular coordinate planes such that we get a rectangular parallelepiped with PQ as the diagonal.
∠PAQ is a right angle and therefore, using the Pythagoras theorem in triangle PAQ,
PQ² = PA² + AQ² …(1)
Also, in triangle ANQ, ∠ANQ is a right angle. Similarly, applying the Pythagoras theorem in ΔANQ we get
AQ² = AN² + NQ² …(2)
From equations (1) and (2) we have
PQ² = PA² + AN² + NQ²
As the coordinates of the points P and Q are known, therefore
PQ² = (x2 − x1)² + (y2 − y1)² + (z2 − z1)²
Thus, the formula to find the distance between two points in three dimensions is given by:
PQ = √[(x2 − x1)² + (y2 − y1)² + (z2 − z1)²]
This formula gives us the distance between two points P(x1, y1, z1) and Q(x2, y2, z2) in three dimensions. The distance of any point Q(x, y, z) in space from the origin O(0, 0, 0) is given by
OQ = √(x² + y² + z²)
Distance Between Two Points Examples
Let us go through some examples to understand the distance formula in three dimensions.
Example 1: Find the distance between the two points given by P(6, 4, −3) and Q(2, −8, 3).
Solution: Let the given points be P(6, 4, −3) = (x1, y1, z1) and Q(2, −8, 3) = (x2, y2, z2). Using the distance formula to find the distance between the points P and Q:
PQ = √[(x2 − x1)² + (y2 − y1)² + (z2 − z1)²]
PQ = √[(2 − 6)² + (−8 − 4)² + (3 − (−3))²]
PQ = √(16 + 144 + 36)
PQ = √196 = 14
Example 2: A, B, C are three points lying on the axes x, y and z respectively, and their distances from the origin are a, b and c respectively; find the coordinates of the point which is equidistant from A, B, C and O.
Solution: Let the required point be P(x, y, z). The coordinates of the points A, B, C and O are (a, 0, 0), (0, b, 0), (0, 0, c) and (0, 0, 0). As we know, the point P is equidistant from the given points.
Hence, PA = PB = PC = PO. Now, applying the distance formula for PO = PA, we get
√(x² + y² + z²) = √[(a − x)² + y² + z²]
x² + y² + z² = (a − x)² + y² + z²
x² = (a − x)²
x = a/2
Similarly, applying the distance formula for PO = PB and PO = PC, we get y = b/2 and z = c/2. Therefore, the coordinates of the point which is equidistant from the points A, B, C and O are (a/2, b/2, c/2).
Example 3: Find the distance between two points A(7, 13) and B(10, 9).
Solution: Given two points A(7, 13) = (x1, y1) and B(10, 9) = (x2, y2). We know that the formula to calculate the distance between two points is:
AB = √[(x2 − x1)² + (y2 − y1)²]
Now, substitute the values in the formula:
AB = √[(10 − 7)² + (9 − 13)²]
AB = √[(3)² + (−4)²]
AB = √(9 + 16) = √25
AB = 5
Hence, the distance between two points A(7, 13) and B(10, 9) is 5.
Example 4: The distance between two points (a, 2) and (3, 4) is 8. Find the value of a.
Solution: Let the points be P(a, 2) = (x1, y1) and Q(3, 4) = (x2, y2). We know that the distance between two points formula is:
PQ = √[(x2 − x1)² + (y2 − y1)²]
Now, substitute the values:
8 = √[(3 − a)² + (4 − 2)²]
Squaring both sides:
64 = (3 − a)² + 2²
64 = (3 − a)² + 4
(3 − a)² = 60
Taking the square root of both sides:
3 − a = ±√60 = ±2√15
Hence, a = 3 ± 2√15. Therefore, the value of a is 3 ± 2√15.
Example 5: Determine the distance between two points (7, 5) and (3, 2).
Solution: (7, 5) = (x1, y1) and (3, 2) = (x2, y2). We know that the distance between two points formula is:
Distance = √[(x2 − x1)² + (y2 − y1)²]
Distance = √[(3 − 7)² + (2 − 5)²]
Distance = √[(−4)² + (−3)²] = √(16 + 9) = √25
Distance = 5 units.
Therefore, the distance between the two points is 5 units.
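The worked examples above all follow the same recipe, so they can be packaged into one small function. This Python sketch (an editorial addition; the helper name `distance` is our own choice) re-checks Examples 1 and 3, and works for 2D and 3D points alike.

```python
import math

def distance(p, q):
    """Euclidean distance between points p and q of equal dimension."""
    # Subtract corresponding coordinates, square, sum, take the square root.
    return math.sqrt(sum((qc - pc) ** 2 for pc, qc in zip(p, q)))

print(distance((6, 4, -3), (2, -8, 3)))   # 14.0  (Example 1, 3D)
print(distance((7, 13), (10, 9)))         # 5.0   (Example 3, 2D)
```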
https://askfilo.com/user-question-answers-smart-solutions/find-the-range-of-the-function-y-a-sin-x-b-cos-x-3335363431303033
Question (CBSE Smart Solutions, asked by a Filo student): Find the range of the function y = a sin x + b cos x. Views: 5,785. Updated on: Jun 23, 2025.
Text solution (Verified)
Concepts: Trigonometric identities, amplitude of trigonometric functions, Pythagorean identity, transformation of trigonometric expressions.
Explanation: The function y = a sin x + b cos x is a linear combination of sine and cosine functions. The range of such a function depends on the amplitude of the resulting sinusoidal wave. We can rewrite this function as a single sine function using trigonometric identities; the combined amplitude is √(a² + b²).
Step 1: Recall the identity a sin x + b cos x = R sin(x + α), where R = √(a² + b²) and α is an angle such that cos α = a/R and sin α = b/R.
Step 2: The function becomes y = R sin(x + α). Since sin(θ) ranges between −1 and 1, the range of y is scaled by R.
Step 3: Therefore −R ≤ y ≤ R, which explicitly is −√(a² + b²) ≤ y ≤ √(a² + b²).
Final Answer: The range of y = a sin x + b cos x is
\[ \boxed{\left[-\sqrt{a^{2} + b^{2}}, \ \sqrt{a^{2} + b^{2}}\right]} \]
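A quick numeric check of this result (an editor's addition, not part of the tutor's solution): sampling y = a sin x + b cos x densely over one period should give observed extremes close to ±√(a² + b²). The coefficients a = 3, b = 4 below are arbitrary example values, chosen so that R = 5.

```python
import math

a, b = 3.0, 4.0                      # example coefficients; R should be 5
R = math.sqrt(a * a + b * b)

# Sample one full period densely and record the extremes actually attained.
xs = [2 * math.pi * k / 100000 for k in range(100000)]
ys = [a * math.sin(x) + b * math.cos(x) for x in xs]

print(round(max(ys), 3), round(min(ys), 3), R)   # 5.0 -5.0 5.0
```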
Topic: Smart Solutions. Class: Class 11. Answer type: Text solution.
https://www.sciencedirect.com/science/article/abs/pii/S027869152030274X
Detoxification of paralytic shellfish poisoning toxins in naturally contaminated mussels, clams and scallops by an industrial procedure
Food and Chemical Toxicology, Volume 141, July 2020, 111386
Ana G. Cabado a, Jorge Lago a, Virginia González a, Lucía Blanco a, Beatriz Paz a, Jorge Diogène b, Laura Ferreres b, Maria Rambla-Alegre b
Highlights
•An industrial protocol aimed at reducing PSP toxin levels was developed and optimized in mussels, clams and scallops.
•The procedure was applied to some batches of PSP-contaminated molluscs, obtaining around 85% detoxification and a safe product.
•However, one sample with an exceptionally high toxicity, 9000 μg STX diHCl equiv/kg, did not fall below the European limit.
•An economically feasible bivalve canning process was implemented, guaranteeing the manufacture of a safe product.
Abstract
Paralytic shellfish poisoning (PSP) episodes cause important economic impacts due to closure of shellfish production areas in order to protect human health. These closures, if frequent and persistent, can seriously affect shellfish producers and the seafood industry, among others. In this study, we have developed an alternative processing method for bivalves with PSP content above the legal limit, which allows reducing toxicity to acceptable levels. A modification of the PSP detoxifying procedure established by Decision 96/77/EC of the European Union in Acanthocardia tuberculata was developed and implemented for PSP elimination in other bivalve species.
The procedure was applied to 6 batches of mussels, 2 batches of clams and 2 batches of scallops, achieving detoxification rates of around 85%. A viable industrial protocol which allows the transformation of a product at risk into a safe product was developed. Although a significant reduction was obtained, in a sample of circa 9000 μg STX diHCl equiv/kg the final toxin level in these highly toxic mussels did not fall below the European limit. The processing protocol described may be applied efficiently to mussels, clams and scallops, and it may be a major solution to counteract the closure of shellfish harvesting areas, especially if persistent.
Introduction
Paralytic shellfish poisoning (PSP) is caused by consumption of shellfish containing PSP toxins of the family of saxitoxins (STX) (“Marine biotoxins in shellfish – saxitoxin group,” 2009). These toxins are produced by microalgae, mainly toxic marine dinoflagellates such as species of the genera Alexandrium and Gymnodinium, and also by certain freshwater cyanobacteria (Gracia Villalobos et al., 2019; Pitois et al., 2018; Fabre et al., 2017). These toxins are accumulated, and sometimes metabolized into toxin derivatives, in many species of filter-feeding bivalves, such as mussels, clams and scallops, making them potentially toxic to humans. Harmful algal blooms (HABs) can also induce other ecological damage and adverse effects to living marine resources. In fact, some bivalves can be impaired during intense toxic episodes. For instance, a population of the surf clam Mesodesma donacium with high PSP toxin levels died due to the desiccation caused by the incapability of the clams to burrow (Álvarez et al., 2019).
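A back-of-the-envelope calculation (an editorial illustration, using only figures quoted in this article: the 800 μg STX diHCl equiv/kg legal limit, the roughly 85% detoxification rate, the sample at circa 9000 μg/kg, and the 5300 μg/kg raw-material threshold proposed in the article's conclusions) shows why the procedure is sufficient for most, but not all, batches:

```python
LEGAL_LIMIT = 800   # ug STX diHCl equiv/kg (EC, 2004)
REDUCTION = 0.85    # approximate detoxification rate reported in the study

for initial in (9000, 5300):
    residual = initial * (1 - REDUCTION)
    verdict = "below limit" if residual <= LEGAL_LIMIT else "still above limit"
    print(f"{initial} ug/kg -> {round(residual, 1)} ug/kg: {verdict}")
```

At ~85% reduction, 9000 μg/kg leaves roughly 1350 μg/kg (above the limit), while 5300 μg/kg leaves roughly 795 μg/kg (just below it), consistent with the threshold proposed in the conclusions.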
To protect public health and ensure the quality of seafood, monitoring programs are implemented worldwide in order to detect and quantify these toxins, eventually forbidding shellfish harvesting when toxin levels exceed the legal limit laid down in current regulations. In Europe, for example, harvesting and commercialization of bivalves are prohibited above the threshold of 800 μg STX diHCl equiv/kg of shellfish tissue (EC, 2004). Closure of shellfish production areas has an important economic impact for producers and other associated industries. No solutions have been found to prevent these important episodes, which are seldom predictable, and despite the influence of PSP events on human health and fisheries, studies on shellfish detoxification to mitigate this problem are still very scarce. Natural detoxification occurs very slowly and is conditioned by the presence of toxin-producing microalgae in the water column. Lipophilic toxins are retained longer than hydrophilic toxins, such as PSP toxins, although the detoxification rate depends on the species, concentration of toxins and environmental conditions (Lee et al., 2008). Several studies have described that the concentration of some PSP toxin analogues in bivalves, but not all of them, can be reduced by exposing contaminated shellfish to a non-toxic diet (Reis Costa et al., 2018). Nevertheless, mitigating or modulating the presence of microalgae in the field is currently not possible, so this eventual solution would have to be applied by maintaining large stocks of shellfish in a closed space for several days, and the feasibility of this would be dubious. Once harvested, toxin reduction or elimination from shellfish is mainly affected by the chemical properties of the toxins.
In the particular case of PSP toxins, a regulation was published after scientific studies proved that a suitable heat treatment decreased the levels of PSP toxins and guaranteed the safety of the cockle Acanthocardia tuberculata (Berenguer et al., 1993; EC, 1996). A detoxification procedure would result in an economically feasible solution for a shellfish canning industry in locations where PSP toxic episodes occur very often or are persistent, and large amounts of shellfish are affected. Besides, in view of the changing environmental conditions related to climate change, a rise in the incidence of these episodes could take place in the near future (Barbosa et al., 2019). Changes in the profiling and behavior of PSP toxic episodes, leading to lower toxicity values but longer toxic episodes, have been proposed (Braga et al., 2018). It is important to mention that it would not be necessary to perform important modifications in factory installations to accomplish the PSP detoxification protocol. The required equipment is the same usually employed by the canning industry, and factories applying this protocol do actually exist in the case of the giant cockle. If a regulation for this detoxification protocol was finally approved, the importance of such modifications would depend on each individual factory, and the decision to implement it or not would be due more to economic than technical reasons. Only the duration of the whole thermal process would be slightly increased. In this paper, naturally PSP contaminated mussels, clams and scallops were specifically harvested in order to implement the thermal procedure described in the EU decision. Slight modifications were applied in order to obtain a better efficiency of detoxification and yield of mussels, clams and scallops.
Section snippets
Sampling of contaminated mussels and scallops
Samples were obtained from different sampling points along the Spanish and Portuguese coasts from July 2018 to March 2019. Mussels (Mytilus galloprovincialis) were acquired from several mussel raft cultures in: a) Galicia (samples coming from two different floating rafts in the Ría of Vigo, Pontevedra); b) Andalucía (one batch of mussels from Benalmádena, Málaga); and c) Portugal (one batch of mussels from Portinho da Costa, near Lisbon). In addition, other mussel batches were obtained in
Results
The different batches of cultivated mussels (M. galloprovincialis), clams (R. philippinarum) and scallops (P. maximus), origin and sampling place, toxic phytoplankton involved, date of harvesting and analytical results initially obtained in the raw mollusks are summarized in Table 1. In this table, mean values ± standard error of the mean (SEM) obtained for each sample analyzed by both laboratories are included. The different batches were split and samples were processed by the different
Discussion
Some studies have been conducted to reduce or eliminate PSP toxins in mollusks and other invertebrates. The influence of thermal processing in naturally contaminated bivalves has already been studied by our group and by other authors, finding PSP detoxification in shellfish after application of high temperatures (Berenguer et al., 1993; Lawrence et al., 1994; Reboreda et al., 2010; Vieites et al., 1999). In this study, an approved thermal procedure to decrease PSP toxins in the giant cockle,
Conclusions
In conclusion, an efficient and inexpensive “detoxification procedure” can be applied in PSP contaminated mussels, clams and scallops to decrease PSP toxins below the legal limit (800 μg STX diHCl equiv/kg). However, a maximum threshold level in raw material should be previously established to define if the processing will efficiently reduce PSP toxins below the legal limit.
Based on our data, 5300 μg STX diHCl equiv/kg would be the highest level. Although it is still necessary that the
CRediT authorship contribution statement
Ana G. Cabado: Conceptualization, Investigation, Resources, Writing - original draft, Writing - review & editing, Funding acquisition. Jorge Lago: Conceptualization, Investigation, Resources, Writing - review & editing. Virginia González: Methodology, Validation, Investigation. Lucía Blanco: Methodology, Investigation, Resources, Writing - original draft. Beatriz Paz: Methodology, Validation, Investigation. Jorge Diogène: Investigation, Resources, Funding acquisition. Laura Ferreres:
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement no. 773400 (SEAFOOD TOMORROW). This output reflects the views only of the author(s), and the European Union cannot be held responsible for any use which may be made of the information contained therein. The authors acknowledge the Departament d’Agricultura, Ramaderia, Pesca i Alimentació (DARP) of the Generalitat de Catalunya in the Shellfish Harvesting Areas.
References (24)
A.C. Braga et al. Combined effects of warming and acidification on accumulation and elimination dynamics of paralytic shellfish toxins in mussels Mytilus galloprovincialis. Environ. Res. (2018).
L. Gracia Villalobos et al. Spatiotemporal distribution of paralytic shellfish poisoning (PSP) toxins in shellfish from Argentine Patagonian coast. Heliyon (2019).
J.F. Lawrence et al. Effect of cooking on the concentration of toxins associated with paralytic shellfish poison in lobster hepatopancreas. Toxicon (1994).
A. Reboreda et al. Decrease of marine toxin content in bivalves by industrial processes. Toxicon (2010).
J.M. Vieites et al. Canning process that diminishes paralytic shellfish poison in naturally contaminated mussels (Mytilus galloprovincialis). J. Food Protect. (1999).
G. Álvarez et al. Paralytic shellfish toxins in surf clams Mesodesma donacium during a large bloom of Alexandrium catenella dinoflagellates associated to an intense shellfish mass mortality. Toxins (Basel) (2019).
AOAC. Official method 2005.06. Paralytic shellfish poisoning toxins in shellfish. Prechromatographic oxidation and liquid chromatography with fluorescence detection. First action 2005. Official Methods of Analysis of AOAC (2005).
V. Barbosa et al. Paralytic shellfish toxins and ocean warming: bioaccumulation and ecotoxicological responses in juvenile gilthead seabream (Sparus aurata). Toxins (Basel) (2019).
B. Ben-Gigirey et al. Extension of the validation of AOAC Official Method 2005.06 for dc-GTX2,3: interlaboratory study. J. AOAC Int. (2012).
J.A. Berenguer et al. The effect of commercial processing on the paralytic shellfish poison (PSP) content of naturally-contaminated Acanthocardia tuberculatum L. Food Additives & Contaminants (1993).
EC. Commission Decision of 18 January 1996 establishing the conditions for the harvesting and processing of certain bivalve molluscs coming from areas where the paralytic shellfish poison level exceeds the limit laid down by Council Directive 91/492/EEC (96/77/EC). Off. J. Eur. Commun. L (1996).
EC. Regulation (EC) No 853/2004 of the European Parliament and of the Council of 29 April 2004 laying down specific hygiene rules for food of animal origin. Off. J. Eur. Commun. L (2004).
Cited by (14)
Roles of company directors and the implications for governing for the emerging impacts of climate risks in the fresh food sector: A review (2022, Food Control)
Citation excerpt: Cyanobacterial blooms and the subsequent release of their toxic compounds is an ongoing problem all over the globe and is intensified by climate change. This water quality problem has been widely acknowledged to cause adverse effects on crop production when used for irrigation and pose human health risks (Cabado et al., 2020; Pingali et al., 2019, pp. 241–275). Previous research highlights the importance of monitoring the significant cyanotoxins content in irrigation water.
Abstract: The causal link between climate change and food safety is now well established. The role of directors is to govern their organizations. Good governance includes contributing to and overseeing strategy, risk management, legal compliance, and financial management. It also means keeping fresh food safe in their supply chains. Corporate responses to addressing new food safety risks will be critical for ensuring the secure supply of safe food globally. The linkage of climate change to fresh food safety was reviewed and analyzed. Implications are discussed based on the changing governance and regulatory landscape for individual directors, their boards, and organizations in the sector. The review identifies technical impacts as well as emerging governance requirements (including transition risks) for directors. This paper takes the perspective of directors of organizations in the fresh food supply chain (also referred to as the food safety chain) and reviews available and credible knowledge concerning climate risks.
Academic and selected grey literature, regulatory position papers, investor and expert opinions, and company annual reports were reviewed to gain insights into the direct impacts on fresh foods and financial risks from climate change and to identify emerging regulatory trends in governance. While there is an identifiable link between physical threats from climate change in this area of agriculture and food production and the management of businesses in the fresh food value chain, the governance of the overall food safety chain is diverse and robust. There is extensive research underway to assess emerging risks, with international laws and standards that enable the mitigation of emerging threats. While manifesting for directors as transitional risks, each of these aspects of governance offers an opportunity for organizations (in the food value or food safety chain) to be proactive about fresh food safety. Understanding the changing expectations of directors concerning the impact of climate change, which is diverse as this review shows, has not been reviewed previously. The current study will help ensure that organizations can prepare for the inevitable climate-related impacts that will challenge the global supply of fresh food and need to be governed at a company level to anticipate these emerging risks successfully.
Marine invertebrate interactions with Harmful Algal Blooms – Implications for One Health (2021, Journal of Invertebrate Pathology)
Citation excerpt: Attempts to detoxify shellfish prior to commercial sale have included most commonly the movement of products to aquaculture zones free from HABs. Other interventions have been tried to varying levels of success, including the use of ozonation, chlorination and salinity/temperature stress (Shumway, 1990), as well as the use of detoxification agents using activated carbon (Qiu et al., 2018), magnetic nanostructured particles, and industrial heat processing treatment (Cabado et al., 2020).
In many parts of the world, such as Europe, North America and parts of Australasia and South America, regulatory controls incorporating official control testing have been working effectively for years, although other regions remain unmanaged and unprotected.

Abstract: Harmful Algal Blooms (HABs) are natural atypical proliferations of micro- or macroalgae in either marine or freshwater environments which have significant impacts on human, animal and ecosystem health. The causative HAB organisms are primarily dinoflagellates and diatoms in marine ecosystems and cyanobacteria within freshwater ecosystems. Several hundred species of HABs, most commonly marine dinoflagellates, affect animal and ecosystem health either directly through physical, chemical or biological impacts on surrounding organisms or indirectly through production of algal toxins which transfer through lower-trophic-level organisms to higher-level predators. Traditionally, a major focus of HAB research has concerned the natural production of toxins which bioaccumulate in filter-feeding invertebrates and which, with subsequent trophic transfer and biomagnification, cause issues throughout the food web, including for the human health of seafood consumers. Whilst in many regions of the world regulations, monitoring and risk management strategies help mitigate the impacts of HAB/invertebrate toxins upon human health, there is ever-expanding evidence describing enormous impacts upon invertebrate health, as well as the health of higher-trophic-level organisms and marine ecosystems. This paper provides an overview of HABs and their relationships with aquatic invertebrates, together with a review of their combined impacts on animal, human and ecosystem health.
With HAB/invertebrate outbreaks expected in some regions at higher frequency and intensity in the coming decades, we discuss the need for new science, multi-disciplinary assessment and communication, which will be essential for ensuring a continued increasing supply of aquaculture foodstuffs for future generations.

### A competitive colorimetric aptasensor transduced by hybridization chain reaction-facilitated catalysis of AuNPs nanozyme for highly sensitive detection of saxitoxin 2021, Analytica Chimica Acta

Abstract: Saxitoxin (STX) is a small-molecule toxin (Mw ca. 299 g/mol) with high acute toxicity, and there is an urgent need for facile analytical methods to detect it. Herein, a competitive colorimetric aptasensor was developed for highly sensitive detection of STX. An anti-STX aptamer was hybridized with a complementary strand on magnetic beads and was competitively bound by STX. The supernatant containing the aptamer bound to STX was obtained by magnetic separation, which could trigger a hybridization chain reaction (HCR) to generate rigid double-stranded DNAs (dsDNAs) with sticky ends and variable length. These HCR-dsDNAs were found to significantly enhance the peroxidase-like catalytic capability of the AuNPs nanozyme towards 3,3′,5,5′-tetramethylbenzidine (TMB). The concentration of STX was transduced in a “turn on” mode, based on this amplified colorimetric signal. The aptasensor achieved high sensitivity, with a limit of detection (LOD) as low as 42.46 pM. Moreover, a wide linear detection range of 78.13–2500 pM, good selectivity, as well as good recovery rates of 106.2–113.5% when analyzing STX in real shellfish samples were obtained. This strategy could be used as a reference to develop robust aptasensors for simple and highly sensitive detection of other small molecules and toxins.
### The wide spectrum of methods available to study marine neurotoxins 2021, Advances in Neurotoxicology

Citation Excerpt: Several reports described that the concentration of some PSP analogs in bivalves can be reduced (Reis Costa et al., 2018). However, studies on shellfish detoxification to mitigate this problem are still very scarce (Cabado et al., 2020; García et al., 2010). The development of detection methods for marine neurotoxins is a field in continuous growth because of the impact these toxins have on human and animal health.

Abstract: Marine neurotoxins are extremely interesting molecules present in nature. Due to their toxicity, they shape relations among organisms. They are also responsible for severe seafood poisoning events affecting marine birds, marine mammals and humans, among others. The selection of appropriate methods to detect and quantify these neurotoxins is crucial to meet major scientific and technical challenges such as understanding their mechanism of action. This chapter is intended to provide an overview of the available methodological strategies to assess neurotoxins in the marine environment. It addresses animal bioassays, cell-based assays (CBAs), receptor-binding assays (RBAs), immunoassays, enzyme-based assays, aptamer-based assays, different types of biosensors, and instrumental analysis techniques. Their advantages and limitations are brought into focus, and selected examples are provided to illustrate the evolution of each marine neurotoxin detection method and how these have contributed to the advancement of science in this field.
### Short Depuration of Oysters Intended for Human Consumption Is Effective at Reducing Exposure to Nanoplastics 2022, Environmental Science and Technology

### Harmful algal blooms and shellfish in the marine environment: an overview of the main molluscan responses, toxin dynamics, and risks for human health 2021, Environmental Science and Pollution Research

© 2020 Elsevier Ltd. All rights reserved.
778
https://mein-lernen.at/deutsch/wortgrammatik/verb/verb-tempus-test/alle-6-zeitformen-verb-sehen/
German verb "sehen" | Conjugation in all 6 tenses

Formation: The verb "sehen" (to see) is conjugated irregularly. It forms the preterite by changing the stem vowel to "a" (e.g. "sah") and the past participle with the prefix "ge-" (e.g. "gesehen").

Usage: "Sehen" is used to describe the process of visual perception, in which objects, people or events are perceived with the eyes. Example: "Ich sehe den Vogel im Baum."

Special feature: The verb "sehen" can be used both for the conscious perception of visual impressions (e.g. "Er sieht die Sonne aufgehen.") and in a figurative sense, to express understanding or recognizing a situation (e.g. "Ich sehe, was du meinst.").

| 3 principal parts (Stammformen) | Imperative |
| --- | --- |
| 1st: sehen; 2nd: sah; 3rd: gesehen | Singular: Sieh (her)! Plural: Seht (her)! Polite form: Sehen Sie! |

| Infinitive | Participle |
| --- | --- |
| Present infinitive: sehen; Perfect infinitive: gesehen haben | Present participle: sehend; Past participle: gesehen |

I. Conjugation of the verb "sehen" in the indicative: The indicative is the most frequently used conjugation form; it presents a real state of affairs or an action in reality.
| Präsens | Perfekt | Futur I |
| --- | --- | --- |
| ich sehe, du siehst, er/sie/es sieht, wir sehen, ihr seht, sie sehen | ich habe gesehen, du hast gesehen, er/sie/es hat gesehen, wir haben gesehen, ihr habt gesehen, sie haben gesehen | ich werde sehen, du wirst sehen, er/sie/es wird sehen, wir werden sehen, ihr werdet sehen, sie werden sehen |

| Präteritum | Plusquamperfekt | Futur II |
| --- | --- | --- |
| ich sah, du sahst, er/sie/es sah, wir sahen, ihr saht, sie sahen | ich hatte gesehen, du hattest gesehen, er/sie/es hatte gesehen, wir hatten gesehen, ihr hattet gesehen, sie hatten gesehen | ich werde gesehen haben, du wirst gesehen haben, er/sie/es wird gesehen haben, wir werden gesehen haben, ihr werdet gesehen haben, sie werden gesehen haben |

II. Conjugation of the verb "sehen" in Konjunktiv I: Konjunktiv I is formed by attaching the subjunctive endings (-e, -est, -e, -en, -et, -en) to the stem of the infinitive (1st principal part). Its main use is indirect speech.

| Konjunktiv I Präsens | Konjunktiv I Futur I |
| --- | --- |
| ich sehe, du sehest, er/sie/es sehe, wir sehen, ihr sehet, sie sehen | ich werde sehen, du werdest sehen, er/sie/es werde sehen, wir werden sehen, ihr werdet sehen, sie werden sehen |

| Konjunktiv I Perfekt | Konjunktiv I Futur II |
| --- | --- |
| ich habe gesehen, du habest gesehen, er/sie/es habe gesehen, wir haben gesehen, ihr habet gesehen, sie haben gesehen | ich werde gesehen haben, du werdest gesehen haben, er/sie/es werde gesehen haben, wir werden gesehen haben, ihr werdet gesehen haben, sie werden gesehen haben |

III. Conjugation of the verb "sehen" in Konjunktiv II: Konjunktiv II is used less often, to express a hypothesis, a wish, or improbability. It is formed by attaching the subjunctive endings (-e, -est, -e, -en, -et, -en) to the stem of the preterite (2nd principal part).
| Konjunktiv II Präsens | Konjunktiv II Futur I |
| --- | --- |
| ich sähe, du sähest, er/sie/es sähe, wir sähen, ihr sähet, sie sähen | ich würde sehen, du würdest sehen, er/sie/es würde sehen, wir würden sehen, ihr würdet sehen, sie würden sehen |

| Konjunktiv II Perfekt | Konjunktiv II Futur II |
| --- | --- |
| ich hätte gesehen, du hättest gesehen, er/sie/es hätte gesehen, wir hätten gesehen, ihr hättet gesehen, sie hätten gesehen | ich würde gesehen haben, du würdest gesehen haben, er/sie/es würde gesehen haben, wir würden gesehen haben, ihr würdet gesehen haben, sie würden gesehen haben |

IV. Formation, usage and special features:

1. Formation: "sehen" is conjugated irregularly. Its preterite form is "sah" and its past participle takes the prefix "ge-" ("gesehen"). It belongs to the strong conjugation, in which the stem vowel changes across the tenses (sehen → sah → gesehen).

Examples of formation:
Present: ich sehe, du siehst, er/sie/es sieht
Preterite: ich sah, du sahst, er/sie/es sah
Perfect: ich habe gesehen, du hast gesehen, er/sie/es hat gesehen

2. Usage: "Sehen" describes the sensory process of visual perception: perceiving or observing something with the eyes.

Examples of usage:
Literal: "Ich sehe den Sonnenaufgang jeden Morgen." (I watch the sunrise every morning.)
Literal: "Er sieht den Film heute Abend." (He is watching the film tonight.)

3. Special features: "Sehen" is also used metaphorically and idiomatically. It can describe not only physical perception but also understanding, foreseeing or recognizing situations in a figurative sense.

Examples of special features:
Metaphorical: "Ich sehe, wohin das führt." (I see where this is leading, i.e. I understand the consequences.)
Idiomatic: "Das sehe ich anders." (I see that differently, i.e. I have a different opinion.)

V. Example sentences with "sehen" in various tenses and moods:

Ich sehe die Vögel im Garten. (Präsens, Indikativ)
Siehst du die Wolken am Himmel? (Präsens, Indikativ)
Er sieht sich den Sonnenuntergang an. (Präsens, Indikativ)
Wir sahen den Film gestern Abend. (Präteritum, Indikativ)
Sahst du das Fußballspiel im Fernsehen? (Präteritum, Indikativ)
Sie hat die Nachrichten gesehen. (Perfekt, Indikativ)
Ich hatte schon das Auto vor dem Haus gesehen, bevor ich klingelte. (Plusquamperfekt, Indikativ)
Morgen werde ich endlich die Ausstellung sehen. (Futur I, Indikativ)
Wirst du die neuen Folgen sehen? (Futur I, Indikativ)
Wenn wir ankommen, werde ich die Stadt gesehen haben. (Futur II, Indikativ)
Sieh nach, ob die Tür geschlossen ist! (Imperativ)
Seht ihr, wie schön der Regenbogen ist? (Präsens, Indikativ)
Er sah mich nicht einmal an, als ich hierherkam. (Präteritum, Indikativ)
Der Nebel war so dicht, dass man kaum etwas sehen konnte. (Präteritum, Indikativ)
Sie hat sich in den letzten Jahren nicht oft sehen lassen. (Perfekt, Indikativ)
Ich werde bald sehen, was er vorhatte. (Futur I, Indikativ)
Sieh dir das Bild genauer an! (Imperativ)
Würdest du es anders sehen, wenn du alle Fakten hättest? (Konjunktiv II)
Sieh zu, dass du pünktlich bist! (Imperativ)
Wir werden sehen, wie sich die Lage entwickelt. (Futur I, Indikativ)
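The formation rules above can be encoded as a small lookup table, shown here purely as an illustration (the function names are ad hoc, not from the lesson; the forms are taken from the tables above): the present tense shows the e → ie stem change in the 2nd/3rd person singular, and the Perfekt combines the conjugated auxiliary "haben" with the past participle "gesehen".

```python
# Illustrative sketch of the "sehen" paradigm described above.
PRESENT = {
    "ich": "sehe", "du": "siehst", "er/sie/es": "sieht",
    "wir": "sehen", "ihr": "seht", "sie": "sehen",
}
HABEN = {
    "ich": "habe", "du": "hast", "er/sie/es": "hat",
    "wir": "haben", "ihr": "habt", "sie": "haben",
}

def praesens(pronoun: str) -> str:
    """Present tense: note the e -> ie stem change in 2nd/3rd person singular."""
    return f"{pronoun} {PRESENT[pronoun]}"

def perfekt(pronoun: str) -> str:
    """Perfekt: conjugated auxiliary 'haben' + past participle 'gesehen'."""
    return f"{pronoun} {HABEN[pronoun]} gesehen"
```

For example, `praesens("du")` yields "du siehst" and `perfekt("wir")` yields "wir haben gesehen", matching the indicative tables above.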
779
https://ajronline.org/doi/10.2214/AJR.19.22022
Imaging-Based Approach to Axillary Lymph Node Staging and Sentinel Lymph Node Biopsy in Patients With Breast Cancer

Authors: Susie X. Sun, Tanya W. Moseley, Henry M. Kuerer, and Wei T. Yang

Volume 214, Issue 2

Abstract

OBJECTIVE. This review provides historical and current data to support the role of imaging-based axillary lymph node staging and sentinel lymph node biopsy as the standard of care for axillary management in women with a diagnosis of breast cancer, before and after neoadjuvant systemic therapy.

CONCLUSION. The implications of surgical trials (American College of Surgeons Oncology Group [ACOSOG] Z0011 and ACOSOG Z1071) for imaging protocols for the axilla are reviewed, in conjunction with the American Joint Committee on Cancer nodal staging guidelines.

Historical and current data support the role of imaging-based axillary lymph node staging and sentinel lymph node biopsy (SLNB) as the standard of care for axillary management in women with a diagnosis of breast cancer, before and after neoadjuvant systemic therapy (NST).

Regional Lymph Node Staging and Imaging Considerations

Regional Lymph Node Anatomy

Regional lymph nodes for the breast include the intramammary, axillary, supraclavicular, and internal mammary nodal chains. Intramammary lymph nodes reside within the breast and are considered axillary lymph nodes for N categorization according to the AJCC Cancer Staging Manual. Axillary lymph nodes are divided into levels I, II, and III. Level I lymph nodes are located lateral to the lateral border of the pectoralis minor muscle. Level II lymph nodes lie under the pectoralis minor muscle between the muscle's lateral and medial borders. Rotter nodes are level II axillary lymph nodes.
Level III lymph nodes carry a worse prognosis and are located medial to the medial margin of the pectoralis minor muscle and under the clavicle. Supraclavicular lymph nodes are found in a triangle delineated by the omohyoid muscle and tendon superolaterally, the internal jugular vein medially, and the clavicle and subclavian vein inferiorly. The internal mammary nodal chain extends from the first through the sixth intercostal space (Fig. 1). The ability of imaging, including ultrasound, CT, and MRI, to evaluate and guide biopsy of lymph nodes allows customized, individualized treatment of patients with breast cancer. Radiologic staging of regional lymph nodes may be performed during metastatic staging workup or surveillance imaging using CT (Fig. 2), during staging of known breast cancer using MRI, or with gray-scale ultrasound at the time of breast cancer diagnosis. On MRI, normal axillary nodes have a reniform shape, consistent with ultrasound findings. All nodes (normal and abnormal) enhance homogeneously on dynamic contrast-enhanced imaging with washout kinetics (Figs. 3A and 3B). Abnormal nodes typically show marked enlargement, cortical thickening, hilar effacement, a round shape or decreased longitudinal-transverse ratio, and a heterogeneous enhancement pattern [2–4] (Figs. 3C and 3D). Studies report increased negative predictive value with a smooth cortex, symmetric appearance, and cortical thickness less than 3 mm [2–4].

Ultrasound Anatomy of Regional Lymph Nodes

Ultrasound is simple, inexpensive, and widely available, and it allows real-time evaluation of nodal morphology and image-directed needle biopsies. Morphologically normal lymph nodes are oval or reniform in shape, with a thin, even, smooth, C-shaped hypoechoic cortex and a hyperechoic central fatty hilum (Fig. 4A). When the fatty hilum predominates, the lymph node may be difficult to distinguish from the surrounding axillary fat and is almost always benign [2, 5] (Fig. 4B).
A cortical thickness up to 3 mm is within the range of normal (Fig. 4C).

Ultrasound-Guided Biopsy of Regional Lymph Nodes

Lymph nodes contain afferent and efferent vessels, with the afferent pathway draining toward the center of the lymph node and the efferent pathway draining away from it. Metastases enter lymph nodes via afferent pathways through the cortex, which results in cortical thickening [2, 5]. Cortical thickening should be regarded as a suspicious finding, and differing patterns of cortical thickening can be observed (Fig. 5A). Focal or eccentric thickening of the cortex is a more specific indicator of metastases (Fig. 5B). Additionally, the fatty hilum becomes indented (Fig. 5B) or effaced (Fig. 5C) until the lymph node is completely hypoechoic (Fig. 5D). With the progression of metastatic involvement, lymph nodes lose their oval or reniform shape and become round (Fig. 5D). Bedi et al. found that in lymphoma and with reactive nodes, lymph nodes can appear completely hypoechoic. Nodes with diffusely thickened cortexes may be reactive or metastatic. Needle biopsy of suspicious lymph nodes should follow established guidelines. In a patient with breast cancer, a lymph node with a diffusely thickened cortex should undergo needle biopsy. Ultrasound-guided (fine-needle or core) biopsy should target the specific focal bulge or eccentric cortical thickening [2, 5]. Ultrasound-guided biopsy is well tolerated by patients and can be performed quickly. Patients are placed in a supine or supine oblique position with the arm raised comfortably above the head. The most suspicious lymph node is identified after scanning in two orthogonal planes using a high-frequency 13–18 MHz transducer. Using sterile technique, a 20- or 21-gauge hypodermic needle attached to a 10-mL syringe is inserted into the thickest part of the cortex (Fig. 6). Gentle suction is applied with continuous aspiration.
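The gray-scale criteria above (cortical thickness over 3 mm, focal or eccentric cortical bulging, hilar effacement, and a round shape, i.e. a low longitudinal-transverse ratio) can be summarized as a simple rule. The sketch below is purely illustrative: the function name is ad hoc, and the longitudinal-transverse cutoff of 2 is an assumed value, since the review does not state a numeric ratio threshold.

```python
def node_suspicious(cortex_mm: float, focal_bulge: bool,
                    hilum_effaced: bool, lt_ratio: float) -> bool:
    """Flag an axillary node as suspicious per the gray-scale criteria above.

    cortex_mm: maximal cortical thickness (up to 3 mm is within normal range)
    focal_bulge: focal or eccentric cortical thickening
    hilum_effaced: loss of the hyperechoic fatty hilum
    lt_ratio: longitudinal-transverse ratio; a low ratio means a round node
              (the cutoff of 2 here is an assumed illustrative value)
    """
    return (cortex_mm > 3.0
            or focal_bulge
            or hilum_effaced
            or lt_ratio < 2.0)
```

A node with a 2.5 mm smooth cortex, preserved hilum, and an elongated shape would not be flagged, whereas any single suspicious feature triggers the flag, mirroring the text's guidance that such nodes warrant needle biopsy.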
Alternatively, a standard spring-loaded 14-gauge core biopsy device can be used to biopsy abnormal axillary nodes [7, 8]. This procedure has a high tissue recovery rate and minimal risk.

Regional Nodal Staging per the American Joint Committee on Cancer

The expert panel of the American Joint Committee on Cancer did not make substantial changes to the N categorization in the 8th edition of the staging manual [1, 9]. The clinical characterization of lymph nodes includes lymph nodes detected with imaging other than lymphoscintigraphy. N1 metastases include mobile ipsilateral lymph nodes in axillary levels I and II. N2a metastases include matted ipsilateral lymph nodes in axillary levels I and II. N2b metastases include ipsilateral internal mammary lymph nodes but no ipsilateral lymph nodes in axillary levels I and II. N3a metastases include ipsilateral axillary level III nodes with or without involvement of axillary level I or II (or both). N3b metastases include ipsilateral internal mammary lymph nodes and axillary lymph nodes. N3c metastases include ipsilateral supraclavicular lymph nodes with or without involvement of axillary or internal mammary lymph nodes (or both). At centers that use ultrasound for staging, the imaging evaluation should include at least the ipsilateral axillary level I and II nodal chains. At institutions with strong breast cancer multidisciplinary collaboration involving pathology, surgical oncology, radiation oncology, and medical oncology, evaluation of axillary nodal levels I, II, and III and the internal mammary nodal chain using ultrasound may provide value in nodal staging for therapeutic (radiation) planning and eligibility for clinical trial participation [10, 11]. The implications of nodal metastases in axillary levels I and II are the same for staging. Long-term survival is reduced in patients with higher nodal staging.
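The clinical N categories listed above form a natural decision cascade. The following is a simplified sketch of that cascade (the function name and boolean flags are ad hoc for illustration); it covers only the patterns enumerated in the text and omits finer AJCC distinctions such as micrometastases and the clinical-versus-pathologic staging rules.

```python
def clinical_n(level_I_II: bool, matted: bool, internal_mammary: bool,
               level_III: bool, supraclavicular: bool) -> str:
    """Map ipsilateral nodal involvement to the clinical N categories above."""
    if supraclavicular:
        return "N3c"  # supraclavicular +/- axillary or internal mammary
    if internal_mammary and (level_I_II or level_III):
        return "N3b"  # internal mammary plus axillary nodes
    if level_III:
        return "N3a"  # axillary level III +/- level I or II
    if internal_mammary:
        return "N2b"  # internal mammary without axillary level I/II
    if level_I_II:
        return "N2a" if matted else "N1"  # matted vs mobile level I/II
    return "N0"
```

For example, matted level I/II nodes alone map to N2a, while internal mammary involvement combined with any axillary involvement maps to N3b.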
Sentinel Lymph Node Biopsy

Background

Surgical management of the axilla in breast cancer has evolved significantly in the past several decades. Before the mid-1990s, axillary lymph node dissection (ALND) was the main staging modality for locoregional lymph node disease, including in patients with early breast cancer. Although the concept of the sentinel lymph node (SLN) was first introduced in the early 1900s, it was not until the 1990s that it was tested in animal models and subsequently validated in patients with melanoma by Morton and colleagues at the John Wayne Cancer Institute [13, 14]. In 1994, Giuliano [15–20] studied its utility in axillary staging for patients with breast cancer. Since then, SLNB has revolutionized breast cancer staging and redefined the role of ALND. The principle behind the SLN concept is that lymphatic channels draining a tumor, and thus metastatic tumor cells, will track first to sentinel nodes within the regional nodal basins. Because axillary nodal status is crucial in the staging and prognosis of breast cancer, information gathered from ALND is essential even though a significant portion of the removed nodes are tumor free, serving no therapeutic benefit while placing the patient at high risk for lymphedema. Since its development, SLNB has proven to be a highly reliable modality for axillary staging with a lower risk of morbidity than ALND.

Data

One of the largest randomized controlled trials to date studying the safety and efficacy of SLNB was the National Surgical Adjuvant Breast and Bowel Project (NSABP) B-32 trial. In that trial, 2807 women with clinically node-negative disease were randomized to SLNB followed by ALND, and 2804 women to SLNB alone if the SLN was negative for carcinoma or SLNB followed by ALND if metastatic disease was found in the SLN. Follow-up for the original study was 8 years.
The trial found no statistically significant differences in overall survival, disease-free survival, or regional control for SLNB alone versus SLNB and ALND. Additionally, the SLNB cohort showed lower rates of postoperative morbidity, including shoulder abduction deficits, lymphedema (14% with SLNB and ALND vs 8% with SLNB alone), and paresthesia.

Technique

A radiocolloid such as 99mTc sulfur colloid, or the magnetic tracer Magtrace (Endomag), may be injected up to 24 hours before surgery; if radiocolloid is used, lymphoscintigraphy may be performed at that time to evaluate lymphatic drainage and identify the locations of SLNs. SLNs are located in the ipsilateral axilla in 92–97% of patients, in both the internal mammary and axillary chains in 14–20%, and in the internal mammary chain alone in 2–8% (Fig. 7). Approximately 1–5 mL of radiocolloid or blue dye is injected in the peritumoral, intratumoral, or subareolar area. The success rate of SLN identification is similar for the different injection sites [23, 24]. Risk factors for failure to localize the SLNs include significant axillary disease burden, extranodal extension of disease, and large bulky breast tumors; however, the most important predictor of successful SLN identification is the surgeon's experience. If radiocolloid is used, a gamma probe is used to identify the nodes with the highest counts. If lymphatic mapping shows drainage to an internal mammary node only, this node is excised with the assistance of a multidisciplinary team. If there is drainage to both the axillary and internal mammary nodes, the axillary node or nodes are excised. If they show carcinoma, radiation is administered to the axilla and the internal mammary chain. All nodes with a radioactive count greater than 10% of the count of the most radioactive node should be removed as sentinel nodes (Fig. 8). If blue dye is used, blue lymphatic channels can be traced to identify the blue sentinel nodes (Fig. 9).
After removal of blue or radioactive nodes, the axilla should be palpated to identify any suspicious lymph nodes. If no sentinel nodes are identified, ALND should be performed, so it is important to discuss ALND with the patient in the preoperative setting. Intraoperative assessment of SLNs via frozen section or touch preparation is recommended for patients undergoing mastectomy, patients who undergo NST, and patients with suspicious nodes found intraoperatively. Although the sensitivity of frozen section for detecting nodal metastasis is relatively low, ranging from 56% to 75%, its specificity is high at 100% [26, 27]. A mean of two SLNs are removed.

American College of Surgeons Oncology Group Z0011 Trial and Imaging

In 2016, the American Society of Clinical Oncology published updated guidelines for the use of SLNB in patients with early breast cancer, which are summarized in Table 1. On the basis of the findings of the ACOSOG Z0011 trial, ALND is not recommended for women with early-stage breast cancer undergoing breast-conservation therapy who have one or two positive SLNs. Additionally, ALND is recommended for women undergoing mastectomy who have evidence of nodal metastasis on SLNB. Finally, SLNB is not recommended for patients with locally advanced inflammatory breast cancer. As part of the Choosing Wisely campaign, routine SLNB is not recommended for women 70 years old or older with clinically node-negative, early-stage, hormone receptor–positive breast cancer.
TABLE 1: American Society of Clinical Oncology Clinical Practice Guidelines for Sentinel Lymph Node Biopsy

| Clinical Characteristic | Sentinel Lymph Node Biopsy Recommended | Type | Evidence Quality | Recommendation Strength |
| --- | --- | --- | --- | --- |
| Clinical T1 or T2, N0 disease | Yes | Evidence based; benefits outweigh harms | High | Strong |
| Multicentric tumors | Yes | Evidence based; benefits outweigh harms | Intermediate | Moderate |
| DCIS when mastectomy is performed | Yes | Informal consensus; benefits outweigh harms | Insufficient | Weak |
| Prior breast or axillary surgery (or both) | Yes | Evidence based; benefits outweigh harms | Intermediate | Strong |
| Preoperative or neoadjuvant systemic therapy | Yes | Evidence based; benefits outweigh harms | Intermediate | Moderate |
| Large or locally advanced invasive breast cancer (T3 or T4) | No | Informal consensus | Insufficient | Weak |
| Inflammatory breast cancer | No | Informal consensus | Insufficient | Weak |
| DCIS when breast-conserving surgery is planned | No | Informal consensus | Insufficient | Strong |
| Pregnancy | No | Informal consensus | Insufficient | Weak |

Note—DCIS = ductal carcinoma in situ.

The results of the ACOSOG Z0011 trial raised questions about the necessity of preoperative axillary imaging evaluation in patients with breast cancer [28, 30]. After the Z0011 trial, the preoperative diagnosis of axillary metastases has been postulated to consign patients to ALND, including patients with a low tumor burden. Routine use of ultrasound for preoperative axillary staging varies. Although some centers perform radiologic evaluation of the axilla only for patients with palpable axillary lymphadenopathy, others use axillary ultrasound as part of the routine evaluation of all patients with newly diagnosed breast cancer. However, determining where newly diagnosed patients will fall along the Z0011 spectrum is difficult.
In the Z0011 era, the utility of preoperative axillary staging lies in the ability to differentiate patients with two or fewer positive axillary lymph nodes from those with more than two positive axillary lymph nodes. Although multiple studies have reported that axillary ultrasound with or without biopsy of abnormal-appearing lymph nodes is inadequate to distinguish between patients with high and low lymph node disease burden, others have found that patients with abnormal-appearing lymph nodes on imaging or with biopsy-proven lymph node disease are more likely to have a higher axillary disease burden requiring ALND. A meta-analysis by Houssami et al. showed that the sensitivity and specificity of axillary ultrasound with fine-needle aspiration biopsy for differentiating malignant from benign disease were 79.6% and 98.3%, respectively. However, 41% of patients with biopsy-proven nodal disease had only one or two positive nodes and thus could have avoided ALND. Another study by Pilewskie et al. showed that among clinically node-negative women with abnormal preoperative axillary imaging, 68–73% did not require ALND according to Z0011 criteria. The same group reported separately that patients with more than one abnormal lymph node on imaging were more likely to have three or more positive lymph nodes at final pathology. Caudle et al. compared patients with positive axillary nodes found on SLNB after negative preoperative axillary ultrasound with patients with positive nodes on ultrasound. They found that patients with disease found on ultrasound preoperatively had higher numbers of positive lymph nodes at final pathology, larger metastatic lymph node deposits, and a higher incidence of extranodal extension. Additionally, even if fewer than three abnormal lymph nodes were found on preoperative axillary ultrasound, these patients were still more likely than those with negative preoperative ultrasound to have more than three positive nodes at surgery.
In multivariate analysis, metastasis found on preoperative ultrasound was independently predictive of having more than three positive nodes. In a similar study by Verheuvel et al., patients with axillary disease found on ultrasound were more likely to have higher numbers of positive lymph nodes, macrometastatic disease, extranodal extension, and involvement of level III nodes, with an associated decrease in overall and disease-free survival. These studies suggest that preoperative axillary ultrasound accurately identifies patients who fall outside the Z0011 criteria and will require upfront ALND, thus avoiding the two sequential procedures of SLNB followed by ALND. Although routine use of preoperative axillary imaging remains controversial and varies across centers, the extent of axillary nodal metastases impacts diagnosis, prognosis, and treatment [35, 36]. Figure 10 outlines a proposed imaging-based approach to axillary nodal staging and SLNB for patients with primary breast cancer who undergo upfront surgery and for those who present for surgery after NST.

Sentinel Lymph Node Biopsy After Neoadjuvant Systemic Therapy

Background

In recent decades, the use of NST for patients with operable breast cancer has increased to facilitate breast-conserving surgery, monitor tumor response, and provide prognostic value. However, the use of NST has raised controversy regarding the optimal surgical management of the axilla. Initially, SLNB was not recommended for patients who received NST because of concerns about high false-negative rates (FNRs) related to fibrosis of the lymphatic channels as tumor emboli respond to systemic treatment. More recent studies have shown that SLNB after NST is safe in select patient cohorts.

Data

For clinically node-negative patients receiving NST, several studies have shown that SLNB has identification rates, FNRs, and long-term outcomes similar to those in patients undergoing upfront surgery.
A study comparing 575 patients who underwent SLNB after NST and 3171 patients who underwent surgery before adjuvant therapy found that SLN identification rates were similar between the two groups, at 97.4% and 98.7%, respectively. FNRs were also similar, at 5.9% in the NST group and 4.1% in the group that underwent surgery first. Consistent with previously published data, fewer positive SLNs were seen in the NST cohort [38, 39]. Finally, no differences in locoregional recurrence, disease-free survival, or overall survival were seen between the two groups. Similar results were reported from the European multicenter Ganglion Sentinelle et Chimiotherapie Neoadjuvante (GANEA) study, with a detection rate of 94.6% and an FNR of 9.4% in the clinical N0 cohort. In 2015, the National Comprehensive Cancer Network updated its treatment guidelines to state that SLNB is the preferred surgical staging modality for this patient cohort; in most cases it is performed at the time of surgery for the primary breast tumor. In patients who undergo NST, the axilla should be clinically staged before and after treatment. If the axilla is negative before treatment, SLNB is the preferred surgical staging procedure. If the axilla is initially clinically positive but shows a clinical complete response after treatment, SLNB may be performed; otherwise, ALND or targeted axillary dissection (TAD) should be performed. The ACOSOG Z1071 study was a multicenter trial evaluating the effectiveness of SLNB after NST for patients initially presenting with clinically node-positive disease. The authors found that SLNs were identified in 92.7% of patients and that the FNR was 12.6%, higher than the predetermined acceptability threshold of 10%. However, in subanalysis, the authors found that the FNR decreased to 10.8% when mapping was performed with both blue dye and radionuclide (dual tracer) and dropped to 9.1% when three or more SLNs were removed.
The Sentinel Node Biopsy Following Neoadjuvant Chemotherapy (SN FNAC) trial had results similar to those of ACOSOG Z1071, with an FNR of 8% when pathologic evaluation of the SLNs using immunohistochemistry was required and any size of SLN metastasis, even isolated tumor cells, was considered positive. The use of dual tracer was also found to lower FNRs. The Sentinel Neoadjuvant (SENTINA) trial was a four-armed trial, one arm of which evaluated initially clinically node-positive patients who converted to ycN0 after NST. The overall FNR in this patient cohort was 14.2%. Similar to ACOSOG Z1071, the FNR decreased to 8.6% if dual tracer was used and to 7.3% if three or more SLNs were removed. Because of the initial high FNR of 12.6% reported in the ACOSOG Z1071 trial, the authors evaluated clip placement in the biopsy-proven positive node at the time of initial diagnosis, with removal of this clipped node during axillary surgery, as a method to decrease the FNR. The clipped node was found to be one of the sentinel nodes in 75.9% of cases, with an associated FNR of 6.8%. More recently, Caudle and colleagues evaluated the safety and efficacy of TAD, during which the initially clipped node is localized and removed in addition to removal of the SLNs. In their cohort of initially node-positive patients who underwent NST, the FNR when SLNB was performed alone was 10.1%; when TAD was performed, the FNR was 2.0%. Similar to the ACOSOG data, the clipped node was retrieved as a sentinel node in 77% of cases. It is recommended that patients who have positive SLNs or residual disease in the clipped node undergo completion axillary dissection. These studies show that limited axillary surgery is reliable and safe in patients who initially present with clinically lymph node–positive disease, avoiding the morbidity associated with ALND.
However, no results from large studies evaluating long-term recurrence outcomes in patients who undergo limited axillary surgery after NST are available.

Technique for Targeted Axillary Dissection

Indications and contraindications for TAD are presented in Table 2. Before treatment with NST is initiated, evaluation of the regional lymph node basins should be completed with physical examination and ultrasound. If suspicious nodes are found, ultrasound-guided biopsy should be performed, followed by clip placement. According to the eligibility criteria in Table 2, patients do not have to convert to node negativity after NST to be candidates for TAD (Fig. 10). After completion of NST and before surgery, a localizing seed, reflector, or hookwire is placed in the previously clipped positive lymph node (Fig. 11A). Radiocolloid is injected either preoperatively, with elective lymphoscintigraphy, or intraoperatively. Use of a dual tracer including both radiocolloid and blue dye is recommended to decrease FNRs, as previously discussed. The localized clipped node is removed and sent for specimen radiography to confirm removal of the clip and localization device (Fig. 11B). All lymph nodes that are blue, radioactive, or both are removed as sentinel nodes; the clipped node will also be a sentinel node in approximately 75% of cases. The axilla should then be palpated and all suspicious lymph nodes removed. The clipped and sentinel nodes should be sent for intraoperative pathologic evaluation via frozen section or touch preparation. If metastatic disease is found, axillary dissection is recommended unless the procedure is part of a clinical trial.
TABLE 2: Indications and Contraindications for Targeted Axillary Dissection

| Indication or Contraindication | TNM Category | Suspicious Lymph Nodes | Other |
| --- | --- | --- | --- |
| Indication | T1 or T2 | Three or fewer level I or level II axillary lymph nodes on pretreatment ultrasound | — |
| Contraindication | Clinical N2b or N3 disease; metastatic disease to ipsilateral internal mammary, infraclavicular, or supraclavicular lymph nodes | Four or more suspicious level I or level II axillary lymph nodes | Allergy to blue dye (dual tracer use required); prior axillary surgery^a |

^a Relative contraindication.

Conclusion

Lymph node staging is a crucial step in the workup of patients with newly diagnosed invasive breast cancer, and preoperative imaging of the axilla plays an important role in clinical staging. Lymph node imaging using ultrasound is predicated on the morphology rather than the size of the node. An additional advantage of ultrasound is the opportunity to perform real-time evaluation and immediate image-directed biopsy. As the practice of SLNB has evolved for patients who undergo upfront surgery and for those who undergo NST, the preoperative diagnosis of abnormal axillary nodes helps to avoid unnecessary SLNB, which is especially important for patients with a large tumor burden.

Acknowledgment

We thank Kelly Kage for her assistance with the medical illustration in Figure 1.

References

1. Hortobagyi GN, Connolly JL, D'Orsi CJ, et al. Breast. In: Amin MB, Edge SB, Greene FL, et al., eds. AJCC Cancer Staging Manual, 8th ed. New York, NY: Springer International, 2017:589–636
2. Ecanow JS, Abe H, Newstead GM, Ecanow DB, Jeske JM. Axillary staging of breast cancer: what the radiologist should know. RadioGraphics 2013; 33:1589–1612
3. Murray AD, Staff RT, Redpath TW, et al. Dynamic contrast enhanced MRI of the axilla in women with breast cancer: comparison with pathology of excised nodes. Br J Radiol 2002; 75:220–228
4. Korteweg MA, Zwanenburg JJ, Hoogduin JM, et al. Dissected sentinel lymph nodes of breast cancer patients: characterization with high-spatial-resolution 7-T MR imaging. Radiology 2011; 261:127–135
5. Bedi DG, Krishnamurthy R, Krishnamurthy S, et al. Cortical morphologic features of axillary lymph nodes as a predictor of metastasis in breast cancer: in vitro sonographic study. AJR 2008; 191:646–652
6. Gradishar WJ, Anderson BO, Balassanian R, et al. Breast cancer version 2.2015. J Natl Compr Canc Netw 2015; 13:448–475
7. Abe H, Schmidt RA, Sennett CA, Shimauchi A, Newstead GM. US-guided core needle biopsy of axillary lymph nodes in patients with breast cancer: why and how to do it. RadioGraphics 2007; 27(suppl 1):S91–S99
8. Abe H, Schmidt RA, Kulkarni K, Sennett CA, Mueller JS, Newstead GM. Axillary lymph nodes suspicious for breast cancer metastasis: sampling with US-guided 14-gauge core-needle biopsy—clinical experience in 100 patients. Radiology 2009; 250:41–49
9. Giuliano AE, Connolly JL, Edge SB, et al. Breast cancer: major changes in the American Joint Committee on Cancer eighth edition cancer staging manual. CA Cancer J Clin 2017; 67:290–303
10. Budach W, Bölke E, Kammers K, Gerber PA, Nestle-Krämling C, Matuschek C. Adjuvant radiation therapy of regional lymph nodes in breast cancer: a meta-analysis of randomized trials—an update. Radiat Oncol 2015; 10:258
11. Iyengar P, Strom EA, Zhang YJ, et al. The value of ultrasound in detecting extra-axillary regional node involvement in patients with advanced breast cancer. Oncologist 2012; 17:1402–1408
12. Kuerer HM, Newman LA, Buzdar AU, et al. Residual metastatic axillary lymph nodes following neoadjuvant chemotherapy predict disease-free survival in patients with locally advanced breast cancer. Am J Surg 1998; 176:502–509
13. Morton DL, Thompson JF, Cochran AJ, et al.; MSLT Group. Sentinel-node biopsy or nodal observation in melanoma. N Engl J Med 2006; 355:1307–1317
14. Morton DL, Wen DR, Wong JH, et al. Technical details of intraoperative lymphatic mapping for early stage melanoma. Arch Surg 1992; 127:392–399
15. Giuliano AE. Lymphatic mapping and sentinel node biopsy in breast cancer. JAMA 1997; 277:791–792
16. Giuliano AE. Intradermal blue dye to identify sentinel lymph node in breast cancer. Lancet 1997; 350:958
17. Giuliano AE, Haigh PI, Brennan MB, et al. Prospective observational study of sentinel lymphadenectomy without further axillary dissection in patients with sentinel node-negative breast cancer. J Clin Oncol 2000; 18:2553–2559
18. Giuliano AE, Kirgan DM, Guenther JM, Morton DL. Lymphatic mapping and sentinel lymphadenectomy for breast cancer. Ann Surg 1994; 220:391–398; discussion, 398–401
19. Giuliano AE, Jones RC, Brennan M, Statman R. Sentinel lymphadenectomy in breast cancer. J Clin Oncol 1997; 15:2345–2350
20. Lyman GH, Somerfield MR, Bosserman LD, Perkins CL, Weaver DL, Giuliano AE. Sentinel lymph node biopsy for patients with early-stage breast cancer: American Society of Clinical Oncology Clinical Practice Guideline Update. J Clin Oncol 2017; 35:561–564
21. Krag DN, Anderson SJ, Julian TB, et al. Sentinel-lymph-node resection compared with conventional axillary-lymph-node dissection in clinically node-negative patients with breast cancer: overall survival findings from the NSABP B-32 randomised phase 3 trial. Lancet Oncol 2010; 11:927–933
22. Ashikaga T, Krag DN, Land SR, et al.; National Surgical Adjuvant Breast and Bowel Project. Morbidity results from the NSABP B-32 trial comparing sentinel lymph node dissection versus axillary dissection. J Surg Oncol 2010; 102:111–118
23. Rodier JF, Velten M, Wilt M, et al. Prospective multicentric randomized study comparing periareolar and peritumoral injection of radiotracer and blue dye for the detection of sentinel lymph node in breast sparing procedures: FRANSENODE trial. J Clin Oncol 2007; 25:3664–3669
24. Povoski SP, Olsen JO, Young DC, et al. Prospective randomized clinical trial comparing intradermal, intraparenchymal, and subareolar injection routes for sentinel lymph node mapping and biopsy in breast cancer. Ann Surg Oncol 2006; 13:1412–1421
25. Guenther JM. Axillary dissection after unsuccessful sentinel lymphadenectomy for breast cancer. Am Surg 1999; 65:991–994
26. Krishnamurthy S, Meric-Bernstam F, Lucci A, et al. A prospective study comparing touch imprint cytology, frozen section analysis, and rapid cytokeratin immunostain for intraoperative evaluation of axillary sentinel lymph nodes in breast cancer. Cancer 2009; 115:1555–1562
27. Motomura K, Inaji H, Komoike Y, et al. Intraoperative sentinel lymph node examination by imprint cytology and frozen sectioning during breast surgery. Br J Surg 2000; 87:597–601
28. Giuliano AE, Ballman KV, McCall L, et al. Effect of axillary dissection vs no axillary dissection on 10-year overall survival among women with invasive breast cancer and sentinel node metastasis: the ACOSOG Z0011 (Alliance) randomized clinical trial. JAMA 2017; 318:918–926
29. Society of Surgical Oncology. Don't routinely use sentinel node biopsy in clinically node negative women ≥70 years of age with hormone receptor positive invasive breast cancer. Choosing Wisely website. www.choosingwisely.org/clinician-lists/sso-sentinel-node-biopsy-in-node-negative-women-70-and-over/. Published July 12, 2016. Updated June 20, 2019. Accessed June 28, 2019
30. Pilewskie M, Jochelson M, Gooch JC, Patil S, Stempel M, Morrow M. Is preoperative axillary imaging beneficial in identifying clinically node-negative patients requiring axillary lymph node dissection? J Am Coll Surg 2016; 222:138–145
31. Houssami N, Ciatto S, Turner RM, Cody HS 3rd, Macaskill P. Preoperative ultrasound-guided needle biopsy of axillary nodes in invasive breast cancer: meta-analysis of its accuracy and utility in staging the axilla. Ann Surg 2011; 254:243–251
32. Pilewskie M, Mautner SK, Stempel M, Eaton A, Morrow M. Does a positive axillary lymph node needle biopsy result predict the need for an axillary lymph node dissection in clinically node-negative breast cancer patients in the ACOSOG Z0011 era? Ann Surg Oncol 2016; 23:1123–1128
33. Caudle AS, Kuerer HM, Le-Petross HT, et al. Predicting the extent of nodal disease in early-stage breast cancer. Ann Surg Oncol 2014; 21:3440–3447
34. Verheuvel NC, van den Hoven I, Ooms HWA, Voogd AC, Roumen RM. The role of ultrasound-guided lymph node biopsy in axillary staging of invasive breast cancer in the post-ACOSOG Z0011 trial era. Ann Surg Oncol 2015; 22:409–415
35. Fisher B, Bauer M, Wickerham DL, et al. Relation of number of positive axillary nodes to the prognosis of patients with primary breast cancer: an NSABP update. Cancer 1983; 52:1551–1557
36. Diepstraten SCE, Sever AR, Buckens CFM, et al. Value of preoperative ultrasound-guided axillary lymph node biopsy for preventing completion axillary lymph node dissection in breast cancer: a systematic review and meta-analysis. Ann Surg Oncol 2014; 21:51–59
37. Mieog JS, van der Hage JA, van de Velde CJ. Preoperative chemotherapy for women with operable breast cancer. Cochrane Database Syst Rev 2007; 2:CD005002
38. Bear HD, Anderson S, Brown A, et al.; National Surgical Adjuvant Breast and Bowel Project Protocol B-27. The effect on tumor response of adding sequential preoperative docetaxel to preoperative doxorubicin and cyclophosphamide: preliminary results from National Surgical Adjuvant Breast and Bowel Project Protocol B-27. J Clin Oncol 2003; 21:4165–4174
39. Fisher B, Brown A, Mamounas E, et al. Effect of preoperative chemotherapy on local-regional disease in women with operable breast cancer: findings from National Surgical Adjuvant Breast and Bowel Project B-18. J Clin Oncol 1997; 15:2483–2493
40. Hunt KK, Yi M, Mittendorf EA, et al. Sentinel lymph node surgery after neoadjuvant chemotherapy is accurate and reduces the need for axillary dissection in breast cancer patients. Ann Surg 2009; 250:558–566
41. Classe JM, Bordes V, Campion L, et al. Sentinel lymph node biopsy after neoadjuvant chemotherapy for advanced breast cancer: results of Ganglion Sentinelle et Chimiotherapie Neoadjuvante, a French prospective multicentric study. J Clin Oncol 2009; 27:726–732
42. National Comprehensive Cancer Network website. Breast cancer version 1.2019. www2.tri-kobe.org/nccn/guideline/breast/english/breast.pdf. Published March 14, 2019. Accessed June 28, 2019
43. Boughey JC, Suman VJ, Mittendorf EA, et al.; Alliance for Clinical Trials in Oncology. Sentinel lymph node surgery after neoadjuvant chemotherapy in patients with node-positive breast cancer: the ACOSOG Z1071 (Alliance) clinical trial. JAMA 2013; 310:1455–1461
44. Boileau JF, Poirier B, Basik M, et al. Sentinel node biopsy after neoadjuvant chemotherapy in biopsy-proven node-positive breast cancer: the SN FNAC study. J Clin Oncol 2015; 33:258–264
45. Kuehn T, Bauerfeind I, Fehm T, et al. Sentinel-lymph-node biopsy in patients with breast cancer before and after neoadjuvant chemotherapy (SENTINA): a prospective, multicentre cohort study. Lancet Oncol 2013; 14:609–618
46. Boughey JC, Ballman KV, Le-Petross HT, et al. Identification and resection of clipped node decreases the false-negative rate of sentinel lymph node surgery in patients presenting with node-positive breast cancer (T0–T4, N1–N2) who receive neoadjuvant chemotherapy: results from ACOSOG Z1071 (Alliance). Ann Surg 2016; 263:802–807
47. Caudle AS, Yang WT, Krishnamurthy S, et al. Improved axillary evaluation following neoadjuvant therapy for patients with node-positive breast cancer using selective evaluation of clipped nodes: implementation of targeted axillary dissection. J Clin Oncol 2016; 34:1072–1078

Information & Authors

Published in: American Journal of Roentgenology, Volume 214, Issue 2, February 2020, pages 249–258. PubMed: 31714846. Copyright © American Roentgen Ray Society.

History: Submitted July 16, 2019; accepted August 24, 2019; first published November 12, 2019.

Keywords: axillary lymph node, axillary lymph node dissection, sentinel lymph node biopsy, staging, ultrasound, ultrasound-guided biopsy

Authors:
Susie X. Sun — Department of Breast Surgical Oncology, Division of Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX.
Tanya W. Moseley — Department of Diagnostic Radiology, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 1459, Houston, TX 77030.
Henry M. Kuerer — Department of Breast Surgical Oncology, Division of Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX.
Wei T. Yang — Department of Diagnostic Radiology, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 1459, Houston, TX 77030.

Notes: Address correspondence to W. T. Yang (wyang@mdanderson.org). T. W. Moseley is a consultant for Hologic, Inc. and Merit Medical, Inc. H. M. Kuerer is an advisor for Cardinal Health and Targeted Medical Education, Inc., is a member of the PER Speaker's Bureau, receives research funding from Genomic Health, and is an editor with NEJM Group and McGraw-Hill Publishing. W. T. Yang receives royalties from Elsevier.

Funding: Supported by the National Cancer Institute of the National Institutes of Health under award number P30 CA016672 and the Robert D. Moreton Distinguished Chair Award.
Figures

Fig. 1 —Illustration shows axillary lymph nodes used to determine N category according to AJCC Cancer Staging Manual, 8th ed. (© 2019 The University of Texas MD Anderson Cancer Center)
Fig. 2 —40-year-old woman with multifocal left breast invasive ductal carcinoma who presented for staging workup. Axial CT image shows abnormal left axillary level I lymph node (thick arrow) and normal right axillary level I lymph node (thin arrow).
Figs. 3A–3D —60-year-old woman with bilateral invasive ductal carcinoma (estrogen receptor–negative, progesterone receptor–negative, human epidermal growth factor receptor 2–positive).
Figs. 4A–4C —Normal lymph nodes.
Figs. 5A–5D —Cortical and hilar abnormalities.
Fig. 6 —60-year-old woman with multicentric invasive ductal carcinoma in left breast who underwent fine-needle aspiration biopsy procedure. Transverse gray-scale ultrasound image shows needle tip (arrow) in hypoechoic cortex of axillary node. Cytology yielded carcinoma.
Figs. 7A and 7B —Sentinel lymph node location.
Fig. 8 —63-year-old woman with invasive ductal carcinoma who underwent partial mastectomy and sentinel lymph node biopsy. Photograph shows gamma probe evaluation of nodal uptake of radiocolloid during sentinel lymph node biopsy.
Fig. 9 —41-year-old woman with extensive multicentric high-grade ductal carcinoma in situ in left breast with comedonecrosis who underwent mastectomy and sentinel lymph node biopsy (SLNB). Photograph shows intraoperative identification of blue sentinel lymph node (arrow) during SLNB.
Fig. 10 —Proposed imaging-based algorithm for axillary lymph node staging and sentinel lymph node biopsy (SLNB) in patients with primary breast cancer before and after neoadjuvant systemic therapy. Dashed line indicates possibilities that should be considered. US = ultrasound, FNAB = fine-needle aspiration biopsy, CNB = core needle biopsy, ALND = axillary lymph node dissection, TAD = targeted axillary dissection.
Figs. 11A and 11B —59-year-old woman with invasive ductal carcinoma in right breast and biopsy-proven metastatic axillary lymph node who underwent mastectomy and targeted axillary dissection after neoadjuvant systemic therapy.
Axillary staging of breast cancer: what the radiologist should know. RadioGraphics 2013; 33:1589–1612 Crossref PubMed Google Scholar a [...] and heterogeneous enhancement pattern b [...] and cortical thickness less than 3 mm c [...] axillary fat and is almost always benign d [...] nodes, which results in cortical thickening e [...] bulge or eccentric cortical thickening 3. Murray AD, Staff RT, Redpath TW, et al. Dynamic contrast enhanced MRI of the axilla in women with breast cancer: comparison with pathology of excised nodes. Br J Radiol 2002; 75:220–228 Crossref PubMed Google Scholar 4. Korteweg MA, Zwanenburg JJ, Hoogduin JM, et al. Dissected sentinel lymph nodes of breast cancer patients: characterization with high-spatial-resolution 7-T MR imaging. Radiology 2011; 261:127–135 Crossref PubMed Google Scholar a [...] and heterogeneous enhancement pattern b [...] and cortical thickness less than 3 mm 5. Bedi DG, Krishnamurthy R, Krishnamurthy S, et al. Cortical morphologic features of axillary lymph nodes as a predictor of metastasis in breast cancer: in vitro sonographic study. AJR 2008; 191:646–652 Crossref PubMed Google Scholar a [...] axillary fat and is almost always benign b [...] nodes, which results in cortical thickening c [...] is a more specific indicator of metastases d [...] ). Bedi et al. e [...] bulge or eccentric cortical thickening 6. Gradishar WJ, Anderson BO, Balassanian R, et al. Breast cancer version 2.2015. J Natl Compr Canc Netw 2015; 13:448–475 Go to Citation Crossref PubMed Google Scholar 7. Abe H, Schmidt RA, Sennett CA, Shimauchi A, Newstead GM. US-guided core needle biopsy of axillary lymph nodes in patients with breast cancer: why and how to do it. RadioGraphics 2007; 27(Suppl 1):S91–S99 Go to Citation Google Scholar 8. Abe H, Schmidt RA, Kulkarni K, Sennett CA, Mueller JS, Newstead GM. Axillary lymph nodes suspicious for breast cancer metastasis: sampling with US-guided 14-gauge core-needle biopsy—clinical experience in 100 patients. 
Radiology 2009; 250:41–49 Go to Citation Crossref PubMed Google Scholar 9. Giuliano AE, Connolly JL, Edge SB, et al. Breast cancer: major changes in the American Joint Committee on Cancer eighth edition cancer staging manual. CA Cancer J Clin 2017; 67:290–303 Go to Citation Crossref PubMed Google Scholar 10. Budach W, Bölke E, Kammers K, Gerber PA, Nestle-Krämling C, Matuschek C. Adjuvant radiation therapy of regional lymph nodes in breast cancer: a meta-analysis of randomized trials—an update. Radiat Oncol 2015; 10:258 Go to Citation Crossref PubMed Google Scholar 11. Iyengar P, Strom EA, Zhang YJ, et al. The value of ultrasound in detecting extra-axillary regional node involvement in patients with advanced breast cancer. Oncologist 2012; 17:1402–1408 Go to Citation Crossref PubMed Google Scholar 12. Kuerer HM, Newman LA, Buzdar AU, et al. Residual metastatic axillary lymph nodes following neoadjuvant chemotherapy predict disease-free survival in patients with locally advanced breast cancer. Am J Surg 1998; 176:502–509 Go to Citation Crossref PubMed Google Scholar 13. Morton DL, Thompson JF, Cochran AJ, et al.; MSLT Group. Sentinel-node biopsy or nodal observation in melanoma. N Engl J Med 2006; 355:1307–1317 Go to Citation Crossref PubMed Google Scholar 14. Morton DL, Wen DR, Wong JH, et al. Technical details of intraoperative lymphatic mapping for early stage melanoma. Arch Surg 1992; 127:392–399 Go to Citation Crossref PubMed Google Scholar 15. Giuliano AE. Lymphatic mapping and sentinel node biopsy in breast cancer. JAMA 1997; 277:791–792 Go to Citation Crossref PubMed Google Scholar 16. Giuliano AE. Intradermal blue dye to identify sentinel lymph node in breast cancer. Lancet 1997; 350:958 Crossref PubMed Google Scholar 17. Giuliano AE, Haigh PI, Brennan MB, et al. Prospective observational study of sentinel lymphadenectomy without further axillary dissection in patients with sentinel node-negative breast cancer. 
J Clin Oncol 2000; 18:2553–2559 Crossref PubMed Google Scholar 18. Giuliano AE, Kirgan DM, Guenther JM, Morton DL. Lymphatic mapping and sentinel lymphadenectomy for breast cancer. Ann Surg 1994; 220:391–398; discussion, 398–401 Crossref PubMed Google Scholar 19. Giuliano AE, Jones RC, Brennan M, Statman R. Sentinel lymphadenectomy in breast cancer. J Clin Oncol 1997; 15:2345–2350 Crossref PubMed Google Scholar 20. Lyman GH, Somerfield MR, Bosserman LD, Perkins CL, Weaver DL, Giuliano AE. Sentinel lymph node biopsy for patients with early-stage breast cancer: American Society of Clinical Oncology Clinical Practice Guideline Update. J Clin Oncol 2017; 35:561–564 Crossref PubMed Google Scholar [a [...] ]. In 1994, Giuliano](#core-R20-1) b [...] breast cancer, which are summarized in c [...] Guidelines for Sentinel Lymph Node Biopsy 21. Krag DN, Anderson SJ, Julian TB, et al. Sentinel-lymph-node resection compared with conventional axillary-lymph-node dissection in clinically node-negative patients with breast cancer: overall survival findings from the NSABP B-32 randomised phase 3 trial. Lancet Oncol 2010; 11:927–933 Go to Citation Crossref PubMed Google Scholar 22. Ashikaga T, Krag DN, Land SR, et al. National Surgical Adjuvant Breast, Bowel Project: morbidity results from the NSABP B-32 trial comparing sentinel lymph node dissection versus axillary dissection. J Surg Oncol 2010; 102:111–118 Go to Citation Crossref PubMed Google Scholar 23. Rodier JF, Velten M, Wilt M, et al. Prospective multicentric randomized study comparing periareolar and peritumoral injection of radiotracer and blue dye for the detection of sentinel lymph node in breast sparing procedures: FRANSENODE trial. J Clin Oncol 2007; 25:3664–3669 Go to Citation Crossref PubMed Google Scholar 24. Povoski SP, Olsen JO, Young DC, et al. Prospective randomized clinical trial comparing intradermal, intraparenchymal, and subareolar injection routes for sentinel lymph node mapping and biopsy in breast cancer. 
Ann Surg Oncol 2006; 13:1412–1421 Go to Citation Crossref PubMed Google Scholar 25. Guenther JM. Axillary dissection after unsuccessful sentinel lymphadenectomy for breast cancer. Am Surg 1999; 65:991–994 Go to Citation Crossref PubMed Google Scholar 26. Krishnamurthy S, Meric-Bernstam F, Lucci A, et al. A prospective study comparing touch imprint cytology, frozen section analysis, and rapid cytokeratin immunostain for intraoperative evaluation of axillary sentinel lymph nodes in breast cancer. Cancer 2009; 115:1555–1562 Go to Citation Crossref PubMed Google Scholar 27. Motomura K, Inaji H, Komoike Y, et al. Intraoperative sentinel lymph node examination by imprint cytology and frozen sectioning during breast surgery. Br J Surg 2000; 87:597–601 Go to Citation Crossref PubMed Google Scholar 28. Giuliano AE, Ballman KV, McCall L, et al. Effect of axillary dissection vs no axillary dissection on 10-year overall survival among women with invasive breast cancer and sentinel node metastasis: the ACOSOG Z0011 (Alliance) Randomized Clinical Trial. JAMA 2017; 318:918–926 Crossref PubMed Google Scholar a [...] who have three or fewer positive SLNs b [...] evaluation in patients with breast cancer 29. Society of Surgical Oncology. Don't routinely use sentinel node biopsy in clinically node negative women ≥70 years of age with hormone receptor positive invasive breast cancer. Choosing Wisely website. www.choosingwisely.org/clinician-lists/sso-sentinel-node-biopsy-in-node-negative-women-70-and-over/. Published July 12, 2016. Updated June 20, 2019. Accessed June 28, 2019 Go to Citation Google Scholar 30. Pilewskie M, Jochelson M, Gooch JC, Patil S, Stempel M, Morrow M. Is preoperative axillary imaging beneficial in identifying clinically node-negative patients requiring axillary lymph node dissection? J Am Coll Surg 2016; 222:138–145 Crossref PubMed Google Scholar a [...] evaluation in patients with breast cancer b [...] ALND. Another study by Pilewskie et al. 31. 
Houssami N, Ciatto S, Turner RM, Cody HS 3rd, Macaskill P. Preoperative ultrasound-guided needle biopsy of axillary nodes in invasive breast cancer: meta-analysis of its accuracy and utility in staging the axilla. Ann Surg 2011; 254:243–251 Go to Citation Crossref PubMed Google Scholar 32. Pilewskie M, Mautner SK, Stempel M, Eaton A, Morrow M. Does a positive axillary lymph node needle biopsy result predict the need for an axillary lymph node dissection in clinically node-negative breast cancer patients in the ACOSOG Z0011 era? Ann Surg Oncol 2016; 23:1123–1128 Go to Citation Crossref PubMed Google Scholar 33. Caudle AS, Kuerer HM, Le-Petross HT, et al. Predicting the extent of nodal disease in early-stage breast cancer. Ann Surg Oncol 2014; 21:3440–3447 Crossref PubMed Google Scholar [a [...] ]. Caudle et al.](#core-R33-1) b [...] of having more than three positive nodes 34. Verheuvel NC, van den Hoven I, Ooms HWA, Voogd AC, Roumen RM. The role of ultrasound-guided lymph node biopsy in axillary staging of invasive breast cancer in the post-ACOSOG Z0011 trial era. Ann Surg Oncol 2015; 22:409–415 Go to Citation Crossref PubMed Google Scholar 35. Fisher B, Bauer M, Wickerham DL, et al. Relation of number of positive axillary nodes to the prognosis of patients with primary breast cancer: an NSABP update. Cancer 1983; 52:1551–1557 Go to Citation Crossref PubMed Google Scholar 36. Diepstraten SCE, Sever AR, Buckens CFM, et al. Value of preoperative ultrasound-guided axillary lymph node biopsy for preventing completion axillary lymph node dissection in breast cancer: a systematic review and meta-analysis. Ann Surg Oncol 2014; 21:51–59 Go to Citation Crossref PubMed Google Scholar 37. Mieog JS, van der Hage JA, van de Velde CJ. Pre-operative chemotherapy for women with operable breast cancer. Cochrane Database Syst Rev 2007; 2:CD005002 Go to Citation Google Scholar 38. Bear HD, Anderson S, Brown A, et al.; National Surgical Adjuvant Breast and Bowel Project Protocol B-27. 
The effect on tumor response of adding sequential preoperative docetaxel to preoperative doxorubicin and cyclophosphamide: preliminary results from National Surgical Adjuvant Breast and Bowel Project Protocol B-27. J Clin Oncol 2003; 21:4165–4174 Go to Citation Crossref PubMed Google Scholar 39. Fisher B, Brown A, Mamounas E, et al. Effect of preoperative chemotherapy on local-regional disease in women with operable breast cancer: findings from National Surgical Adjuvant Breast and Bowel Project B-18. J Clin Oncol 1997; 15:2483–2493 Go to Citation Crossref PubMed Google Scholar 40. Hunt KK, Yi M, Mittendorf EA, et al. Sentinel lymph node surgery after neoadjuvant chemotherapy is accurate and reduces the need for axillary dissection in breast cancer patients. Ann Surg 2009; 250:558–566 Go to Citation Crossref PubMed Google Scholar 41. Classe JM, Bordes V, Campion L, et al. Sentinel lymph node biopsy after neoadjuvant chemotherapy for advanced breast cancer: results of Ganglion Sentinelle et Chimiotherapie Neoadjuvante, a French prospective multicentric study. J Clin Oncol 2009; 27:726–732 Go to Citation Crossref PubMed Google Scholar 42. National Comprehensive Cancer Network website. Breast cancer version 1.2019. www2.tri-kobe.org/nccn/guideline/breast/english/breast.pdf. Published March 14, 2019. Accessed June 28, 2019 Google Scholar a [...] dissection (TAD) should be performed b [...] node undergo completion axillary dissection 43. Boughey JC, Suman VJ, Mittendorf EA, et al.; Alliance for Clinical Trials in Oncology. Sentinel lymph node surgery after neoadjuvant chemotherapy in patients with node-positive breast cancer: the ACOSOG Z1071 (Alliance) clinical trial. JAMA 2013; 310:1455–1461 Go to Citation Crossref PubMed Google Scholar 44. Boileau JF, Poirier B, Basik M, et al. Sentinel node biopsy after neoadjuvant chemotherapy in biopsy-proven node-positive breast cancer: the SN FNAC study. 
J Clin Oncol 2015; 33:258–264 Go to Citation Crossref PubMed Google Scholar 45. Kuehn T, Bauerfeind I, Fehm T, et al. Sentinellymph-node biopsy in patients with breast cancer before and after neoadjuvant chemotherapy (SEN-TINA): a prospective, multicentre cohort study. Lancet Oncol 2013; 14:609–618 Go to Citation Crossref PubMed Google Scholar 46. Boughey JC, Ballman KV, Le-Petross HT, et al. Identification and resection of clipped node decreases the false-negative rate of sentinel lymph node surgery in patients presenting with node-positive breast cancer (T0-T4, N1-N2) who receive neoadjuvant chemotherapy: results from ACOSOG Z1071 (Alliance). Ann Surg 2016; 263:802–807 Go to Citation Crossref PubMed Google Scholar 47. Caudle AS, Yang WT, Krishnamurthy S, et al. Improved axillary evaluation following neoadjuvant therapy for patients with node-positive breast cancer using selective evaluation of clipped nodes: implementation of targeted axillary dissection. J Clin Oncol 2016; 34:1072–1078 Go to Citation Crossref PubMed Google Scholar Recommended Articles Free Access ReviewFOCUS ON: Gastrointestinal Imaging ### Liver Calcifications and Calcified Liver Masses: Pattern Recognition Approach on CT Madhavi Patnana, Christine O. Menias, Perry J. Pickhardt, Mohamed Elshikh, Sanaz Javadi, Ayman Gaballah, Akram M. Shaaban, Brinda Rao Korivi, Naveen Garg, and Khaled M. Elsayes ### Asymmetric Ductal Ectasia: An Often Overlooked Sign of Malignancy Su-Ju Lee, Lawrence D. Sobel, Michael Shamis, and Mary C. Mahoney Free Access ReviewWomen's Imaging ### Nonmass Enhancement on Breast MRI: Review of Patterns With Radiologic-Pathologic Correlation and Discussion of Management Tamuna Chadashvili, Erica Ghosh, Valerie Fein-Zachary, Tejas S. Mehta, Shambhavi Venkataraman, Vandana Dialani, and Priscilla J. 
Slanetz ### The Augmented Breast: A Pictorial Review of the Abnormal and Unusual Natalie Yang and Derek Muradali ### Evaluation of Cervical Lymph Nodes in Head and Neck Cancer With CT and MRI: Tips, Traps, and a Systematic Approach Jenny K. Hoang, Jyotsna Vanka, Benjamin J. Ludwig, and Christine M. Glastonbury Figure title goes here Go to figure location within the article Download figure Share on social media xrefBack.goTo Request permissions Authors Info & Affiliations Congrats! Your Phone has been verified
780
https://vizgenie.com/thermal-conductivity
Thermal Conductivity Conversion - Convert W/(m⋅K), BTU/(hr⋅ft⋅°F), and More

Understanding Thermal Conductivity Measurements

Thermal conductivity measurement is crucial in materials science, engineering, and energy efficiency. The watt per meter-kelvin (W/(m·K)) is the base unit in the International System of Units (SI), while BTU/(hr·ft·°F) and other units are used in different regions and industries. Understanding thermal conductivity conversions is essential for insulation, electronics cooling, and heat exchanger design.

Common Thermal Conductivity Units

SI Units
- Watt per meter-kelvin (W/(m·K)) - base unit
- Kilowatt per meter-kelvin (kW/(m·K)) - 1000 W/(m·K)
- Milliwatt per meter-kelvin (mW/(m·K)) - 0.001 W/(m·K)

Other Units
- BTU/(hr·ft·°F) - common in US industry
- Calorie/(s·cm·°C) - CGS system
- BTU·in/(hr·ft²·°F) - building insulation
- Watt per centimeter-°C (W/(cm·°C)) - laboratory use

Common Applications

| Field | Common Units | Typical Uses |
| --- | --- | --- |
| Building Insulation | W/(m·K), BTU/(hr·ft·°F) | Wall, roof, window insulation |
| Electronics Cooling | W/(m·K) | Heat sinks, thermal interface materials |
| Materials Science | W/(m·K), cal/(s·cm·°C) | Material selection, research |
| HVAC | BTU/(hr·ft·°F) | Heating, ventilation, air conditioning |

Conversion Tips
- 1 W/(m·K) ≈ 0.5778 BTU/(hr·ft·°F)
- 1 BTU/(hr·ft·°F) ≈ 1.7307 W/(m·K)
- 1 cal/(s·cm·°C) ≈ 418.68 W/(m·K)
- Higher thermal conductivity means better heat transfer

Thermal Conductivity Conversions

| Conversion | Formula | Quick Answer |
| --- | --- | --- |
| W/(m·K) to BTU/(hr·ft·°F) | 1 W/(m·K) = 0.5778 BTU/(hr·ft·°F) | Multiply by 0.5778 |
| BTU/(hr·ft·°F) to W/(m·K) | 1 BTU/(hr·ft·°F) = 1.7307 W/(m·K) | Multiply by 1.7307 |
| W/(m·K) to cal/(s·cm·°C) | 1 W/(m·K) = 0.00239 cal/(s·cm·°C) | Multiply by 0.00239 |
| cal/(s·cm·°C) to W/(m·K) | 1 cal/(s·cm·°C) = 418.68 W/(m·K) | Multiply by 418.68 |
| W/(m·K) to kW/(m·K) | 1 W/(m·K) = 0.001 kW/(m·K) | Divide by 1000 |
| kW/(m·K) to W/(m·K) | 1 kW/(m·K) = 1000 W/(m·K) | Multiply by 1000 |
| W/(m·K) to mW/(m·K) | 1 W/(m·K) = 1000 mW/(m·K) | Multiply by 1000 |
| mW/(m·K) to W/(m·K) | 1 mW/(m·K) = 0.001 W/(m·K) | Divide by 1000 |
| BTU·in/(hr·ft²·°F) to W/(m·K) | 1 BTU·in/(hr·ft²·°F) = 0.1442 W/(m·K) | Multiply by 0.1442 |
| W/(m·K) to BTU·in/(hr·ft²·°F) | 1 W/(m·K) = 6.933 BTU·in/(hr·ft²·°F) | Multiply by 6.933 |

Our thermal conductivity conversion tool provides accurate conversions between all common units of thermal conductivity, making it easy to compare materials, design insulation, and optimize heat transfer. Whether you're working in construction, electronics, or research, our converter ensures precise thermal conductivity conversions for any application.

Thermal Conductivity Questions
- What is thermal conductivity and why is it important?
- What are typical thermal conductivity values for common materials?
- How do I convert between W/(m⋅K) and BTU/(hr⋅ft⋅°F)?
- What factors affect thermal conductivity?
- How is thermal conductivity measured?
- What's the difference between thermal conductivity and thermal resistance?
- Why do different engineering fields use different units?
- How does thermal conductivity relate to energy efficiency?
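The conversion table above can be sketched as a small routine that normalizes every value through the SI base unit, W/(m·K). This is a minimal illustration using the factors quoted on the page; the dictionary, unit-name strings, and the `convert` helper are my own, not part of the site's tool.

```python
# Thermal conductivity conversion via the SI base unit W/(m·K).
# Each entry: factor that turns 1 of that unit into W/(m·K)
# (values taken from the conversion table above).
TO_W_PER_M_K = {
    "W/(m·K)": 1.0,
    "kW/(m·K)": 1000.0,
    "mW/(m·K)": 0.001,
    "BTU/(hr·ft·°F)": 1.7307,
    "cal/(s·cm·°C)": 418.68,
    "BTU·in/(hr·ft²·°F)": 0.1442,
}

def convert(value, from_unit, to_unit):
    """Convert a thermal conductivity value between two supported units."""
    watts = value * TO_W_PER_M_K[from_unit]  # normalize to W/(m·K)
    return watts / TO_W_PER_M_K[to_unit]     # then scale to the target unit

# Example: copper is roughly 400 W/(m·K)
print(round(convert(400, "W/(m·K)", "BTU/(hr·ft·°F)"), 1))  # 231.1
```

Going through a single base unit keeps the table to one factor per unit instead of one per unit pair, which is why most converters are built this way.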
781
https://chem-guide.blogspot.com/2010/03/empirical-molecular-formula-and_31.html
CHEM-GUIDE: Empirical, Molecular Formula and Limiting Reagents

Every chemical substance is known by a specific name, but many of these names are cumbersome, confusing, and do not convey the substance's chemical composition. To overcome this, each chemical compound is represented by a chemical formula that gives its composition (the constituent elements) and the number of atoms of each element present. There are two types of chemical formula:

Molecular Formula

The formula that gives the symbolic representation of the actual number of atoms of the various elements present in one molecule of the compound is called the molecular formula. Discrete molecules can be described by this formula. Because it represents one molecule of the substance, giving the names and numbers of atoms of the various elements present, it also denotes the molecular mass of the substance. For example, the molecular formula of water is H2O, which means that one molecule of water contains two atoms of hydrogen and one atom of oxygen. This also represents the molar mass, which is the sum of the gram atomic masses of all the atoms: gram atomic mass of 2 hydrogen atoms = 2 x 1.008 g = 2.016 g; gram atomic mass of one oxygen atom = 16 g; total molecular mass = 18.016 g ≈ 18 g.

Empirical Formula

The empirical formula is defined as the simplest formula of a substance, which gives the relative number of atoms of each element present in the molecule of that substance. Substances that have no discrete molecules, such as ionic and network covalent compounds, are described by this empirical formula. It is also called the stoichiometric formula. It gives the simplest whole-number ratio between the numbers of atoms of all the elements present in the compound. For example, in the compound benzene, C6H6, there are six carbon atoms and six hydrogen atoms.
The lowest whole-number ratio between them is 1:1 (6:6 simplifies to 1:1). Therefore, the empirical formula of benzene, whose molecular formula is C6H6, is CH. The empirical formula mass (or formula mass) is equal to the sum of the atomic masses of all the atoms present in the empirical formula. The empirical formula of benzene is CH, so the formula mass of CH is (12 + 1) = 13 amu, or 13 g/mol.

Relationship between empirical and molecular formulae

The two formulas are related as: molecular formula = n x empirical formula, where 'n' is a whole number 1, 2, 3, ... The value of 'n' is obtained from the relationship n = molecular mass / empirical formula mass. For example, the molecular mass of benzene is 78 and its empirical formula is CH, so its empirical formula mass is 13 and n = 78 / 13 = 6. Therefore, the molecular formula of benzene is 6 x (CH) = C6H6.

Determination of the empirical formula of a compound

The empirical formula of a compound is determined from the percentage composition of the different elements and the atomic masses of those elements. The steps involved are:

1. Divide the percentage of each element by its atomic mass. This gives the relative number of atoms of each element in the molecule of the compound.
2. Divide the results by the smallest value obtained to get the simplest ratio of the various elements.
3. Round the values to the nearest whole-number ratio (multiplying by a suitable integer if necessary to make all values whole numbers).
4. Write the symbols of the elements side by side, with the corresponding number as a subscript at the lower right of each symbol.

Example: How can we differentiate a molecular formula from an empirical formula? If the subscripts in the formula have a common divisor, it is usually a molecular formula; the empirical formula multiplied by this common divisor gives the molecular formula. Example: the empirical formula of acetic acid is CH2O.
Its molecular formula is CH3COOH = C2H4O2: (CH2O) x 2 = C2H4O2 [molecular formula].

Numericals based on empirical formula

Example 1: An oxide of iron contains 72.41% iron. Calculate the empirical formula of the oxide of iron [Fe = 56; O = 16].

Solution: Oxygen makes up 100 - 72.41 = 27.59%. Relative numbers of atoms: Fe = 72.41 / 56 = 1.293; O = 27.59 / 16 = 1.724. Dividing by the smallest value (1.293): Fe = 1, O = 1.33. Multiplying by 3 to obtain whole numbers: Fe = 3, O = 4. Empirical formula = Fe3O4.

Steps for calculation: Calculate the percentage by weight of each element. Find the relative number of atoms by dividing each percentage by the atomic weight. Divide all the ratios by the smallest one. If whole numbers are not obtained, multiply by the smallest integer that makes them whole.

Example 2: The percentage composition of a compound is 71.8% antimony (Sb) and 28.2% sulphur. What is the empirical formula of this compound? [Sb = 122; S = 32]

Solution: In 100 g of the compound, 71.8 g is antimony and 28.2 g is sulphur. Relative numbers of atoms: Sb = 71.8 / 122 = 0.589; S = 28.2 / 32 = 0.881. Dividing by the smallest value: Sb = 1, S = 1.5. Multiplying by 2: Sb = 2, S = 3. Empirical formula = Sb2S3.

Determination of molecular formula from empirical formula

The molecular formula is the chemical formula that represents the actual numbers of atoms of each element present in a compound.

Example 1: Calculate the molecular formula of a compound with vapor density 30 containing 40% carbon, 6.67% hydrogen, and the rest oxygen.

Empirical formula = CH2O. Empirical formula weight = 12 x 1 + 1 x 2 + 16 x 1 = 30 g. Molecular weight = 2 x vapor density = 2 x 30 = 60. Since molecular weight = n x empirical formula weight, 60 = n x 30, so n = 2. Molecular formula = n x empirical formula = 2 x CH2O = C2H4O2.

Steps: Calculate the empirical formula. Use the vapor density, if given, to find the molecular weight (molecular weight = 2 x vapor density). Calculate 'n' as molecular weight divided by empirical formula weight, then apply molecular formula = n x empirical formula.

Example 2: A compound has molecular formula C5H10. What is its empirical formula?

Solution: The ratio of C atoms to H atoms is 5:10 = 1:2. Empirical formula is CH2.
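The calculation steps above can be sketched in code. This is a minimal illustration, not part of the original post: the `ATOMIC_MASS` table (rounded values as used in the examples), the function names, and the rounding tolerance are my own assumptions.

```python
# Empirical formula from percentage composition, following the steps above:
# percent / atomic mass -> divide by smallest -> scale to whole numbers.
ATOMIC_MASS = {"H": 1.0, "C": 12.0, "O": 16.0, "S": 32.0, "Fe": 56.0, "Sb": 122.0}

def empirical_formula(percent_by_mass):
    # Step 1: relative number of atoms = percent / atomic mass
    moles = {el: pct / ATOMIC_MASS[el] for el, pct in percent_by_mass.items()}
    # Step 2: divide by the smallest value to get the simplest ratio
    smallest = min(moles.values())
    ratios = {el: m / smallest for el, m in moles.items()}
    # Step 3: multiply by a small integer until every ratio is nearly whole
    for factor in range(1, 10):
        scaled = {el: r * factor for el, r in ratios.items()}
        if all(abs(v - round(v)) < 0.05 for v in scaled.values()):
            return {el: round(v) for el, v in scaled.items()}
    raise ValueError("no small whole-number ratio found")

def molecular_formula(empirical, molecular_mass):
    # n = molecular mass / empirical formula mass
    empirical_mass = sum(ATOMIC_MASS[el] * n for el, n in empirical.items())
    n = round(molecular_mass / empirical_mass)
    return {el: count * n for el, count in empirical.items()}

print(empirical_formula({"Fe": 72.41, "O": 27.59}))       # {'Fe': 3, 'O': 4}
print(molecular_formula({"C": 1, "H": 2, "O": 1}, 2 * 30))  # {'C': 2, 'H': 4, 'O': 2}
```

The 0.05 tolerance in step 3 absorbs the small errors introduced by rounded atomic masses, which is exactly why the hand calculation above rounds 1.33 to 4/3 rather than demanding an exact value.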
Limiting reagents: In a chemical reaction, some reactants may be present in smaller or greater proportions than the stoichiometry indicated by the balanced chemical equation. The reactant that is completely used up first, as per the stoichiometry, limits the amount of product that can be formed and does not allow the reaction to go further. This is the limiting reagent. The excess reactants are left behind partly unconsumed, being limited by the limiting reagent. In such cases the desired reaction does not go to 100% completion with respect to the excess reactants.
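The limiting-reagent idea can be expressed as a one-line comparison: the reactant that supports the fewest "turns" of the balanced equation runs out first. This sketch and its H2/O2 example are my own illustration, not from the post.

```python
def limiting_reagent(stoich, moles):
    """Return the reactant consumed first.

    stoich: balanced-equation coefficient for each reactant.
    moles:  amount of each reactant actually on hand.
    The limiting reagent has the smallest moles/coefficient quotient,
    i.e. it supports the fewest repetitions of the balanced equation.
    """
    return min(stoich, key=lambda r: moles[r] / stoich[r])

# Example: 2 H2 + O2 -> 2 H2O, with 3 mol H2 and 2 mol O2 on hand.
# H2 supports 3/2 = 1.5 turns of the equation, O2 supports 2/1 = 2,
# so H2 is exhausted first even though O2 is the smaller coefficient.
print(limiting_reagent({"H2": 2, "O2": 1}, {"H2": 3.0, "O2": 2.0}))  # H2
```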
782
https://journal.interpreterfoundation.org/pitfalls-of-the-ngram-viewer/
The Interpreter Foundation
Pitfalls of the Ngram Viewer
Stanford Carmack
Interpreter: A Journal of Latter-day Saint Faith and Scholarship 36 (2020): 187-210

[Page 187] Abstract: Google's Ngram Viewer often gives a distorted view of the popularity of cultural/religious phrases during the early 19th century and before. Other larger textual sources can provide a truer picture of relevant usage patterns of various content-rich phrases that occur in the Book of Mormon. Such an approach suggests that almost all of its phraseology fits comfortably within its syntactic framework, which is mostly early modern in character. During the past decade, with the advent of Google's Ngram Viewer (books.google.com/ngrams), many have become interested in noting the historical (textual) popularity rates of various cultural, content-rich Book of Mormon phrases such as "demands of justice." Some have concluded from what they have seen in Ngram Viewer charts that the evidence suggests the Book of Mormon is 19th-century in character and that Joseph Smith was the author or the partial author of the text (from revealed ideas).1 My purpose here is to show that this recently developed interpretive tool is quite often misleading in relation to the Book of Mormon and that it's important to reserve judgment on historical usage patterns until multiple textual sources have been consulted. It's also important to recognize that the type of language can tell us something definitive about Book of Mormon authorship and the fundamental nature of its language. A database such as Google Books, which contains a large number of religious writings, is potentially an appropriate corpus to use in comparing Book of Mormon English.
That is because, though dictated, the Book of Mormon text presents itself as a written translation of authors and editors who also wrote out their compositions (though [Page 188] some chapters are said to be transcripts of oral discourse). The narrative complexity, matching internal references, exact phrasal repetition (sometimes at a distance), intricate structuring (both large- and small-scale), and even instances of syntactic complexity suggest a primarily written work rather than a primarily oral production. Because the text is full of biblical blending and religious language set in a framework of mostly early modern syntax, the Early English Books Online database2 provides the largest amount of matching language — religious, lexical, and syntactic. EEBO contains many religious writings, including sermons as well as the early biblical texts [1530–1610]. After EEBO, the next most relevant database for comparison is Eighteenth Century Collections Online.3 After EEBO and ECCO, the most relevant corpora are probably Google Books4 and the early American databases, Evans and Shaw-Shoemaker (these also contain many British writings republished in America, overlapping with content found in ECCO and even EEBO).5 On Content-Rich and Content-Poor Language Before considering the data, some general comments are in order about the implications of two types of textual evidence: cultural, religious phrases (content-rich) and syntax (content-poor). It's helpful to bear in mind that cultural, religious language occurs within a syntactic framework. These are separable objects of study: it is a straightforward matter to abstract away from either one in order to carry out linguistic and literary analysis. Content-rich phrases like "demands of justice" involve a high degree of conscious thought in their production, while content-poor phraseology like "the more part" is chiefly the result of nonconscious production.
Because authors do not consciously control what they nonconsciously [Page 189]produce, they reveal their native-speaker preferences in their (content-poor) syntax. Consciously produced content varies greatly in frequency according to context and subject matter and genre. In contrast, the frequency of syntactic usage is less influenced by these things (although some aspects of syntactic usage are affected by context, subject matter, and genre, such as which tenses are predominantly used). There are many generalizable usage patterns that can be analyzed and compared. Because a large amount of syntax is visible in the verbal system, studying the verbal system is of paramount importance. A late-modern view of the Book of Mormon’s cultural, religious phrases tends to be popular in the literature. Such phrases, however, are unable to establish either the fundamental character of the language or that Joseph Smith was the author of the Book of Mormon. The suggestion that content-rich phrases are dispositive evidence for determining these things stems from inadequate reflection on details and implications of natural language production. It is the syntactic building blocks of language that indicate the fundamental character of textual language. When it comes to determining Book of Mormon authorship, content-rich phrases are overruled by the syntax. The latter indicates that most of its language is early modern in character and that Joseph wasn’t the author or partial author.6 A phrase examined below, “demands of justice,” is a cultural and religious phrase that has been used in a relatively limited set of writings and contexts. It provides a substantial amount of meaning independently. Another phrase considered below, “the more part,” is a content-poor phrase that had the potential to be used in a relatively large number of writings and contexts. 
There is a significant difference between these two types of language in terms of their diagnostic value in relation to determining Book of Mormon authorship. Specifically, the phrase “demands of justice” is a persistent phrase that arose in the early [Page 190]modern era, while more part phraseology (the non-adverbial type) did not persist robustly past the late 1600s, although we do see some related, vestigial use in the late modern era (some of this is discussed toward the end of this article). Consider also the phrase “plan of destruction” (3 Nephi 1:16). This is a late-appearing phrase, textually speaking — it is currently first attested in 1768.7 But “plan of destruction” was conceptually part of English a century earlier, since the structurally and semantically similar phrases “plan of peace,” “plan of religion,” “plan of doctrine,” and “plan of (our) redemption” did occur in the late 1600s. As a content-rich phrase, “plan of destruction” cannot overrule the diagnostic value of content-poor phraseology such as “the more part of X” (where X is a noun phrase) or “of which hath been spoken”. These are less contextually dependent and were in obsolescence at the beginning of the late modern period. This makes the presence in the Book of Mormon of the comparative phraseology “the more part of X” and the referential phraseology “of which/whom «be»8 spoken” diagnostically important. (Ten of eleven instances of the referential phraseology are archaic in formation; all instances of more part phraseology are nonbiblical in formation.) It also means that the presence of language like “plan of destruction” is mostly diagnostically unremarkable.
Cultural, religious phrases:
- high degree of contextual dependence
- low usage rates (on balance)
- provide little information about nonconscious native-speaker tendencies

Content-poor syntax:
- low degree of contextual dependence
- potential for much higher usage rates
- reveals nonconscious native-speaker tendencies

The Google Books Database

The very creators of the Ngram Viewer have pointed out the risk that their charts can mislead analysts vis-à-vis earlier cultural trends. According [Page 191]to them, the popularity trends of 18th-century cultural phrases are particularly susceptible to being misstated in the charts.9 Others have mentioned that this is the case even for early 19th-century trends,10 once again citing the published papers of the Ngram Viewer creators. This is because of the limitations of the underlying Google Books database. It’s important to note that the Viewer can be less misleading in relation to syntactic studies involving content-poor phrases. Such phrases have the potential to be more heavily represented in the underlying data. As a specific example, we are more likely to get an accurate picture of popularity in comparing usage rates of the infinitive construction “caused …”.

As mentioned, the Viewer is based on the Google Books database. This has only a fraction of the 18th-century coverage of the largest database, ECCO. The 18th-century Google Books portion is currently about 12 percent of the size of ECCO, and the first half of the 18th century is underrepresented compared to the second half of the 18th century. The underrepresentation of English usage in Google Books is even greater as we go back further in time to the early modern period (details shown below). This means that the Viewer is highly unreliable for the 16th and 17th centuries. Unfortunately, the inevitable result of this underrepresentation is that the Ngram Viewer often generates charts that do not accurately represent prior usage patterns.
This is shown here by a comparison of Viewer charts with the charts provided by the ECCO database and with charts generated from a 740-million-word corpus that [Page 192]covers the years 1473 to 1700 (made from Phase 1 texts of the EEBO database).

Language Examined for this Study

I will briefly discuss the following six phrases and phrase types:

- “demands of justice” [first EEBO example is 1647]
- “first parents” [first EEBO example is 1483]
- “infinite goodness” [first EEBO example is 1479]
- “forbidden fruit” [first EEBO example is 1550]
- “plan of X” [first EEBO example is 1689; X = divinity]
- “the more part of X” [first OED example is 1398; X = the heritage]

Corpora Used in this Study

Here are the three corpora that generated the charts shown in this study, along with some relevant details:

Google Books (sparse coverage up to the 18th century):
- 4.4 million 16th-century words
- 63.9 million 17th-century words
- 1.8 billion 18th-century words11
- 49.5 billion 19th-century words
- 299.5 billion 20th-century words

ECCO: 180,000 18th-century titles (as currently noted on the initial search page). From this number of titles and the number of 18th-century words in Google Books, we find that ECCO could have approximately 15 billion 18th-century words, with a large amount of duplication.

EEBO (Phase 1 texts): approximately 740 million words in 25,367 texts, from the late 15th century through the 17th century. EEBO1 has almost 11 times the coverage of Google Books for the same time period, with high-quality transcriptions that are much more reliable.

[Page 193]Popularity Profiles of Six Nonbiblical Book of Mormon Phrases

“Demands of justice” [1647 (earliest attestation)]

We begin our investigation of Book of Mormon phrases with the cultural, religious phrase “demands of justice,” a phrase that arose, textually speaking, in the middle of the 17th century.
Because the Ngram Viewer is based on relatively sparse coverage of the first half of the 18th century, a misleading chart (Figure 1) is currently generated by the underlying data (the vertical axis gives word-occurrence rates; the values [very small] are irrelevant in the context of this paper). Figure 1 leads us to believe that there was hardly any usage of the phrase “demands of justice” in the early 18th century. (In this study, I have mostly restricted Viewer charts to the 18th century and beyond, since the data coverage of the 16th and 17th centuries is relatively minimal, frequently generating charts with discontinuous spikes.)12 Because ECCO is based on more than eight times the number of titles, its term frequency chart is more reliable than the Viewer’s, though not entirely: the later in the 18th century one goes, the more books are encountered with repeated language (which is also a problem with the Viewer). ECCO’s popularity chart helps in this regard, to some degree, since it can give users the percentage of documents per year that have a given word or phrase.

Figure 1. Ngram Viewer chart of “demands of justice.”

[Page 194]Figure 2 is an ECCO popularity chart of “demands of justice.”13 It clearly shows usage of the phrase in the first half of the 18th century and that there was only a slight upward trend during the entire century. Against what the Viewer indicates, there was no sharp upward trend from zero that began near the middle of the century. Moreover, if we look at an earlier corpus, EEBO, we find that in the publicly available Phase 1 portion of the database (EEBO1), 0.23 percent of the documents in the 1670s have the phrase “demands of justice” (6 of 2,608 documents) and that 0.33 percent of the documents from the 1690s have the phrase (10 of 3,006 documents). Figure 3 is a composite chart of the earlier usage rates, combining EEBO1 and ECCO data (from 1473 to 1800).
It shows no clear increase in the popularity of the phrase “demands of justice” from the 1670s to the 1790s.

Figure 2. ECCO chart of “demands of justice.”

Figure 3. Combined EEBO1 and ECCO chart of “demands of justice.”

Consider too that popularity rates of uncommon content-rich phrases like “demands of justice” can vary greatly depending on the composition of the corpus — that is, the weighting of the genres in the corpus. In this case, if the corpus has a large percentage of religious texts or legal texts, then the popularity rate of “demands of justice” has the potential to be higher. If not, popularity rates will be lower. In contrast, content-poor syntactic phrases have a greater potential to give a truer [Page 195]picture of past usage rates and popularity. The genres represented in the corpus are less important in the case of such phrases, though not always of no consequence. The first appearance of the phrase “demands of justice” in EEBO occurs in 1647 (A57963, page 66). The earliest occurrences of phrases are among the most interesting to consider. Beyond showing authorial creativity, in the case of potentially inspired religious language, they are more likely to be the result of divine influence than later instances, which are more likely to be influenced by earlier usage. In this case, the 1647 author of “demands of justice,” Samuel Rutherford, a delegate to the Westminster Assembly (a multi-year Church of England reform council), provides not only this content-rich coincidence with Book of Mormon usage, but also examples of extrabiblical syntactic usage and variation found in the earliest text, such as archaic “because that S1 and that S2” usage (1648, EEBO A57980; 1 Nephi 2:11, Jacob 5:60) and nearby ye was ~ ye are variation (1664, A57970; Alma 7:18–19; also we was ~ we are: 1652, A57982).
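The per-document rates quoted above (6 of 2,608 documents in the 1670s, 10 of 3,006 in the 1690s) are simple percentages. Given a list of dated documents flagged for whether they contain a phrase, the per-decade rates behind a composite chart like Figure 3 can be computed with a short script; this is only a sketch with toy data taken from the two EEBO1 figures just cited, not a real corpus query:

```python
from collections import defaultdict

def decade_rates(docs):
    """docs: iterable of (year, has_phrase) pairs.
    Returns {decade_start: percent of documents containing the phrase}."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for year, has_phrase in docs:
        decade = (year // 10) * 10
        totals[decade] += 1
        if has_phrase:
            hits[decade] += 1
    return {d: round(100 * hits[d] / totals[d], 2) for d in sorted(totals)}

# Toy check against the EEBO1 figures quoted in the text:
# 6 of 2,608 documents in the 1670s, 10 of 3,006 in the 1690s.
docs = [(1675, True)] * 6 + [(1675, False)] * (2608 - 6) \
     + [(1695, True)] * 10 + [(1695, False)] * (3006 - 10)
print(decade_rates(docs))  # {1670: 0.23, 1690: 0.33}
```

The same normalization (hits per document, not raw hit counts) is what makes EEBO1 and ECCO figures comparable across decades of very different corpus size.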
Of the four instances of “demands of justice” found in the Book of Mormon, the last one occurs closely with two instances of the phrase “plan of mercy” (Alma 42:15). This language is currently first attested in 1746, but it would not have clashed with late 1600s language, since a few different “plan of X” phrases are attested beginning in the late 1680s. The adjective phrase “perfect just” occurs right after “demands of justice,” [Page 196]meaning ‘perfectly just’; it provides a good example of characteristically early modern syntactic usage in which the adverb lacked the {-ly} suffix. In EEBO1, “perfect just” (without intervening punctuation) occurs 16 times, at a higher rate in the 16th century than in the 17th century (five times the rate; see Figure 4). Another syntactic item in this verse involves a subordinate clause headed by except with the conditional auxiliary verb should, usage that was also more characteristic of the 16th century than the 17th century (peaking textually in the 1550s; see Figure 5).14 Overall, the language in this passage doesn’t clash, and there are stronger reasons to classify it as early modern in character than late modern.15

Figure 4. EEBO1 chart of “perfect just.”

Figure 5. EEBO1 chart of “except … should” syntax.

“First parents”

The next phrase we’ll consider is another nonbiblical one, “first parents.” The phrase occurs 13 times in the Book of Mormon, first at 1 Nephi 5:11. It is used there with some archaic syntax: “Adam and Eve, which was our first parents.” This syntax corresponds precisely with the usage of Thomas Becon in 1566: “Adam and Eve, which was made of the ground.” Becon also used “first parents” in 1542 (A06719). We encounter many such coincidences in the Book of Mormon, as in this case and the case of the writings of Samuel Rutherford. EEBO1 has thousands of examples of the phrase “first parents,” including four from the 1480s alone.
[Page 197]According to an ECCO popularity chart, the usage rate of “first parents” didn’t change that much over the course of the 18th century, ranging between three and six percent, as shown in Figure 6.

Figure 6. ECCO chart of “first parents.”

But according to the Viewer, the usage rate of “first parents” rose significantly during the 18th century, and at the beginning of the 19th century, the usage rate appears to have surged to its highest levels (see Figure 7). EEBO Phase 1 texts, however, indicate an absolute peak popularity in the 1610s (eleven percent of texts; see Figure 8). This is [Page 198]a figure significantly above the four percent of the 1790s that ECCO indicates.

Figure 7. Ngram Viewer chart of “first parents.”

Figure 8. EEBO1 chart of “first parents.”

Some of the rise we see between 1801 and 1830 in the Viewer is a skewing brought about by later editions and the republishing of earlier texts, as previously mentioned. In any event, a doubling in the usage rate of “first parents” during the first three decades of the 1800s could have raised its per document rate to a maximum level of seven or eight percent. Based on current information, the 1610s is a stronger candidate for peak popularity of “first parents” than the early 1800s.

[Page 199]“Infinite goodness”

In a review of a text-critical publication on grammatical editing in the Book of Mormon, Grant Hardy lists 16 nonbiblical phrases that he says were commonly used in the 19th century, stating that “these do occur as early as the seventeenth century.”16 The phrase “as early as” most likely conveys ‘no earlier than,’ leaving readers with the sense that these phrases were most popular after the 17th century. One of the phrases in his list is “infinite goodness,” occurring at 2 Nephi 1:10, Mosiah 5:3, Helaman 12:1, and Moroni 8:3.
Hardy might not have consulted EEBO and ECCO, something that is necessary to do in order to determine when these phrases arose and to have any chance at accurately determining when they might have been most popular. It’s possible that he entered them into the Ngram Viewer and was misled by what he saw in the charts. Consider, for instance, a Viewer chart of “infinite goodness” between 1500 and 1830 (Figure 9). In this chart we see two early spikes based on seven results total. Then there is a continuous jagged rise, suggesting that the year 1830 was the height of popularity. This might have been as far as Hardy went in gauging the trajectory of this phrase’s textual popularity.

Figure 9. Ngram Viewer chart of “infinite goodness.”

An important issue when dealing with a phrase that might have arisen during the first half of the early modern period is spelling variation. In this case, there are six obvious variants of the word goodness to consider [Page 200]and more than that for the word infinite. This means, of course, that there are at least 40 possible spelling variants of the phrase, although the large majority of the potential spelling variants of the phrase probably never co-occurred in the textual record. There is no easy way to enter so many variants in the Viewer, and there are large gaps in Google Books’ coverage for the earlier period, especially the 1500s (see above). So, we must go to EEBO, using spelling variants, in order to approach a sense of early modern popularity. This can only be easily done using a third-party EEBO corpus. It cannot be done using the EEBO website search page, since the search engine has difficulty with complicated wildcard searches. From a WordCruncher EEBO corpus17 we obtain the chart in Figure 10, showing usage rate per document. To complete the comparison, we consult an ECCO popularity chart of “infinite goodness” (Figure 11).
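The combinatorial explosion of spelling variants described above is easy to see: with six variants of goodness and seven of infinite, a search would have to cover 42 two-word combinations. Here is a sketch; the variant spellings listed are illustrative assumptions, not an attested inventory, and a real search would use a corpus tool’s wildcards rather than plain substring matching:

```python
import re
from itertools import product

# Illustrative spelling variants (assumed for the example, not attested lists):
infinite_variants = ["infinite", "infinit", "infynite", "infinyte",
                     "infynyte", "ynfynyte", "infenyte"]
goodness_variants = ["goodness", "goodnes", "goodnesse", "godnes",
                     "godnesse", "goodnys"]

# Every two-word combination a search would need to cover:
phrase_variants = [f"{a} {b}" for a, b in
                   product(infinite_variants, goodness_variants)]
print(len(phrase_variants))  # 42

def count_hits(text, variants):
    """Count whole-phrase occurrences of any variant in a text."""
    pattern = r"\b(?:" + "|".join(map(re.escape, variants)) + r")\b"
    return len(re.findall(pattern, text.lower()))

sample = "He praised the infynyte goodnes and infinite goodnesse of God."
print(count_hits(sample, phrase_variants))  # 2
```

A search for the single modern spelling would find neither hit in the sample sentence, which is the point the article makes about pre-1700 popularity estimates.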
Taken together, these charts indicate that the height of popularity of “infinite goodness,” textually speaking, was the 1530s or the 1570s.

Figure 10. EEBO1 chart of “infinite goodness.”

Figure 11. ECCO chart of “infinite goodness.”

The impression that Hardy gives his readers is that the 16 nonbiblical Book of Mormon phrases reached their height of popularity in the late modern period rather than the early modern period. We see that this is questionable for “infinite goodness” and “first parents” (another of his 16 phrases), and as it turns out, it’s questionable for more than half of the phrases. Hardy’s statement that these phrases occur as early as the 17th century (taken to mean ‘no earlier than the 17th century’) might be inaccurate for 69 percent of the phrases. Here is his list, ordered according to date of first attestation in EEBO (mean date = 1565; median date = 1578):

| First attested | Phrase |
| --- | --- |
| [Page 201]1473 | God of nature |
| 1479 | infinite goodness |
| 1479 | fall of man |
| 1483 | first parents |
| 1532 | sacrifice for sin |
| 1538 | Great Mediator |
| 1552 | temporally and spiritually (as temporally, spiritually & eternally) |
| [Page 202]1563 | land of liberty |
| 1574 | final state |
| 1582 | workings of the Spirit |
| 1583 | instrument(s) in the hands of God |
| 1606 | watery grave |
| 1637 | miserable forever (as forever miserable) |
| 1641 | condescension of God |
| 1652 | cold and silent grave (as cold silent grave) (cold grave: 1542; silent grave: 1590) |
| 1660 | day(s) of probation |

Only five of the 16 are first attested as late as the 17th century, and both cold grave and silent grave are first attested in the 16th century. So, it is accurate to state that only one-quarter of the phrases are first attested as late as the 17th century; the rest are attested earlier. I ran numbers on all 16 of these phrases in EEBO1 and ECCO and obtained usage rate profiles and peaks.
Here is a list of these same phrases with the decade of peak popularity shown (in the case of the two phrases with highest popularity in the late 1400s, I have also given the next highest decade). These phrases are ordered according to greatest early modern popularity when measured against their peak in late modern popularity:

| Phrase | Peak popularity (textual) |
| --- | --- |
| temporally, spiritually | 1580s |
| God of nature | 1480s, 1630s |
| condescension(s) of God | 1690s |
| sacrifice for sin | 1580s |
| workings of the Spirit | 1670s |
| first parents | 1610s |
| infinite goodness | 1530s |
| final state | 1650s |
| fall of man | 1470s, 1610s |
| Great Mediator | 1750s |
| miserable forever / forever miserable | 1760s |
| instrument(s) in the hands of God | 1790s |
| cold grave & silent grave | 1790s |
| watery grave | 1790s |
| day(s) of probation | 1760s |
| land of liberty | 1790s |

[Page 203]The immediate co-occurrence of temporally and spiritually was most characteristic of the earlier period. The phrase “land of liberty” was most characteristic of the later period and especially the end of the 1700s. Nine of the 16 phrases turned out to be more popular during at least one decade of the early modern era than they were during any decade of the 18th century. In addition, “Great Mediator” and “miserable forever” ~ “forever miserable” weren’t strongly characteristic of the late modern period over the early modern period. In summary, most of these phrases aren’t obviously characteristic of the early 19th century, and all of them fit comfortably within a framework of mostly early modern syntax.

“Forbidden fruit”

The nonbiblical term “forbidden fruit” occurs six times in the Book of Mormon (three times in close succession in 2 Nephi 2 [verses 15, 18, 19]; also in Mosiah 3:26, Alma 12:22, and Helaman 6:26).
Here is one of the earliest dated examples of this phrase found in EEBO1:

1550, Thomas Becon, The flower of godly prayers [A06743]
If through the subtle enticements of Satan, they had not transgressed thy commandment by eating the forbidden fruit, . . .

Figures 12 and 13 suggest that the height of popularity of the phrase “forbidden fruit” might have been during the first 40 years of the 17th century, not during the 18th century. The Viewer, however, when [Page 204]restricted to 1700 and later, leads us to believe that the popularity of the phrase “forbidden fruit” was greatest around the year 1810 (Figure 14).

Figure 12. EEBO1 chart of “forbidden fruit.”

Figure 13. ECCO chart of “forbidden fruit.”

Figure 14. Ngram Viewer chart of “forbidden fruit.”

“Plan of X” phrases

Textually speaking, some Book of Mormon phrases were more popular or appear to have been more popular in the 18th century than in the 17th century. One set of phrases that occurred more frequently in the 18th century than in the 17th century is “plan of X” phrases. Most of these, though conceptually in the language by the late 17th century, are [Page 205]not attested until the early 18th century.18 So the Book of Mormon’s six types of “plan of X” phrases could not have been more frequent in the 17th century than in the 18th century, since there is hardly any textual usage in the 17th century. The most common of the Book of Mormon’s “plan of X” phrases, “plan of redemption,” was the one that occurred earliest. It appears first in the 1690s (as “plan of our redemption,” in 1697). This phrase appears in nearly 500 ECCO documents (this database primarily covers the years 1701–1800). Figure 15 is an ECCO popularity chart of the simple phrase “plan of redemption.” It shows a rise in the usage rate (per document) from zero percent to half a percent (on average).
Nevertheless, because the few exclusively 18th-century phrases of the Book of Mormon are enveloped in early modern syntax, they do not change the conclusion that one could reasonably reach about the fundamental character of its language and whether Joseph Smith could have authored it.

Figure 15. ECCO chart of “plan of redemption.”

“The more part of X”

The Book of Mormon has almost two dozen instances of the phraseology “the more part of X.” It also has two instances of the adverbial constituent “for the more part” and two textually rare, exclusively [Page 206]early modern variants: “a more part of X” and “the more parts of X” (three instances total). The King James Bible only uses the unmodified phrase “the more part” twice (Acts 19:32; 27:12). The Book of Mormon doesn’t have this biblical usage.19 Setting aside the three minor variants of the phraseology, the 21 instances of “the more part of X” in the Book of Mormon are quite possibly the most that had appeared in a single text in 253 years, since Holinshed’s Chronicles (1577), which has 90 instances of the form “the more part of X” (in almost 2.5 million words). “The more part of X” is a good example of content-poor phraseology that had the potential to be used in many different contexts at relatively high rates. When we abstract away from the content-rich noun phrase X, we are able to investigate a content-poor phrase type that could have been used in a large number of contexts. It thus provides valuable information for classifying the nature of Book of Mormon language. When we consider usage rates of this phrase at the beginning of the late modern period, we find that the Ngram Viewer indicates that there was mostly persistent usage throughout the 18th century, with a slight upward trend (Figure 16). ECCO’s popularity chart also shows a low level of use throughout the 18th century, without any discernible trend (Figure 17).

Figure 16. Ngram Viewer chart of “the more part of X.”

Figure 17.
ECCO chart of “the more part of X.” The reality, however, is that almost every 18th-century document contains examples of “the more part of X” only in passages with earlier, reprinted legal language, often from the 16th century and earlier. For example, the 14 documents published in 1725 (out of 1,310) with examples of “the more part of X” (the highest data point in Figure 17) contain instances found in earlier legal language. Nevertheless, there is some original use of “the more part of X” in the 1700s. But there is very little, and it is hard to know how much there actually is. We would have to wade through more than 600 instances, using the difficult ECCO interface, in order to find perhaps two or three originals. (ECCO currently gives 624 results, with many duplicates.) One noteworthy case — a 1768 poetic example found in the online, third edition of the OED — does not reveal itself in ECCO searches, since “the more part of mankind” was transcribed by the optical character recognition (OCR) software as “the tnore part of mankind.” The entire poetic line is in italics, and as a result, the OCR software didn’t get the [Page 207]correct letters in the case of the word more. This means, of course, that these databases currently have some fundamental limitations. In the future, better databases will yield more reliable and useful results. (The EEBO database has a very low rate of transcription error, significantly lower than either ECCO or Google Books. This is because most of EEBO was not transcribed using OCR software.) An ECCO popularity chart comparing “the more part of them” with “most of them” makes it clear that the latter was the operative phrase in the 18th century, not “the more part of them” (Figure 18). (The usage rate of “the majority of them” was also quite low during this century.) What [Page 208]looks like low-level modern usage of the archaic phrase is, in very large part, just noise emanating from reprinted language. Figure 18. 
ECCO chart of “most of them” and “the more part of them.”

Figure 19 shows the usage rates of “the more part of X” during the early modern era. This indicates that it was primarily a phrase of the first half of the early modern period. By the 1590s, popularity of the phrase had dipped to such a degree that less than three percent of texts employed it during that decade (1591–1600, aligning the years with the century). Even this EEBO1 chart has some contamination in the late 1600s from reprinted language, but despite this it shows that usage of the phrase was close to zero in the 1690s. Only one EEBO1 text in the 1690s (the last decade of the early modern period) has an original instance of “the more part,” which is equivalent to a meager per document usage rate for that decade of just 0.03 percent.20 By that decade, “more part” phraseology was moribund. (Seven other potential examples from the 1690s were quotations of Acts 19:32 [2×], of earlier statutes [4×], and of a 16th-century author [1×].)21

Figure 19. EEBO1 chart of “the more part.”

[Page 209]The high levels of “more part” phraseology found in the Book of Mormon, its two rare variants, and Figure 19 indicate that the Book of Mormon’s usage of the phraseology is best characterized as early modern, not rare late modern.

Conclusion

Besides the importance of being aware of the potential pitfalls we can encounter in interpreting Ngram Viewer charts (and even sometimes ECCO’s term frequency charts), the conclusion to be drawn vis-à-vis Book [Page 210]of Mormon usage is that these charts, used in isolation, very often give us the wrong idea about earlier usage patterns and rates. As it turns out, the time depth of many content-rich phrases is often greater than first appears.
Here is the list of the phrases treated in this study, along with an indication of the relative popularity of these phrases (as currently indicated by raw, unfiltered textual data):

- “the more part of X” [popularity peaked in the 1530s]
- “infinite goodness” [popularity peaked in the 1530s or the 1570s]
- “first parents” [popularity peaked in the 1610s]
- “forbidden fruit” [popularity peaked in the 1630s]
- “demands of justice” [popularity peaked in the 1690s]
- “plan of X” [exclusively late modern, except for “plan of our redemption”]

Most content-rich phrases of the Book of Mormon fit well with its early modern syntax. There are some phrases that are properly classified, according to the general textual record, as characteristically late modern, but most phrases were found during the early modern period, and many of these might have seen peak popularity, or close to peak popularity, during that earlier time. It’s possible that the easily accessible but unreliable information provided by Ngram Viewer charts has influenced the views of some Book of Mormon scholars. This information, colored by only a superficial consideration of its syntax, has led many to conclude that the original text is a mix of biblical language and 19th-century vernacular. Some have written or implied that this is the case, leaving many readers with the wrong impression of its English. Of course, such statements shouldn’t be made without undertaking a large amount of research in order to support them. Consequently, it would be wise to treat cautiously any comments made about the nature of Book of Mormon English until verifying that the maker of the comments has undertaken linguistic study of the original language, including its lexis and syntax.

1. An example of this is found at “19th Century Protestant Phrases in Book of Mormon,” LDS Church is True (blog), March 7, 2017, www.churchistrue.com/blog/19th-century-protestant-phrases-in-book-of-mormon/.

2.
Early English Books Online, accessed March 9, 2020,

3. Eighteenth Century Collections Online, accessed March 9, 2020, www.gale.com/primary-sources/eighteenth-century-collections-online.

4. “Advanced Book Search,” Google Books, accessed March 9, 2020,

5. “Early American Imprints, Series I: Evans, 1639–1800,” Readex: A Division of Newsbank, accessed March 9, 2020, www.readex.com/content/early-american-imprints-series-i-evans-1639-1800; “Early American Imprints, Series II: Shaw-Shoemaker, 1801–1819,” Readex: A Division of Newsbank, accessed March 9, 2020, www.readex.com/content/early-american-imprints-series-ii-shaw-shoemaker-1801-1819; and Evans Early American Imprint Collection, accessed March 9, 2020 (5,000 Evans texts, freely available in WordCruncher [wordcruncher.com]).

6. The descriptive reality that the original Book of Mormon text is full of extrabiblical Early Modern English doesn’t mean it’s an early modern text, in a narrow sense. While it’s accurate to characterize the vast majority of the Book of Mormon’s verbal system (the syntactic core of the language) as early modern in character — namely, verb complementation, verb agreement, various aspects of tense, inflections, auxiliary usage, grammatical mood, negation and inversion patterns, etc. — this reality doesn’t mean that all content-rich phrases that appear within the mostly archaic framework must be or are early modern phrases. However, rather than characterizing persistent phrases (early modern through late modern) as 19th-century phrases, since they’re enveloped in mostly early modern syntax, it’s sensible to view them as early modern.

7. “Plan of destruction” can currently be found in the Evans database under the text id N08651, and in the Google Books database under the book id 8Y0BAAAAQAAJ (the phrase occurs in several books; this one may be the earliest one with the language).

8.
By «be» is meant various forms of the verb be, including the perfect forms “hath been,” “has been,” and “have been.”

9. Roger Finke and Jennifer M. McClure, “Reviewing Millions of Books: Charting Cultural and Religious Trends with Google’s Ngram Viewer,” in Faithful Measures: New Methods in the Measurement of Religion, eds. Roger Finke and Christopher D. Bader (New York: NYU Press, 2017), 290; Jean-Baptiste Michel et al., “Quantitative Analysis of Culture Using Millions of Digitized Books,” Science 331 (2011): 176–82, DOI: 10.1126/science.1199644; Jean-Baptiste Michel et al., “Supporting Online Material for ‘Quantitative Analysis of Culture Using Millions of Digitized Books’,” (2011): 16–17,

10. See, for example, Finke and McClure, “Reviewing Millions of Books,” 290.

11. According to the Google Books total_counts file (version 20120701: Google Books Ngram Viewer, accessed March 9, 2020), the database has 21,495 18th-century titles (1701 to 1800). Just over three-quarters of the words are from the second half of the century (1751 to 1800).

12. Another current problem with the Viewer is that some links at the foot of charts don’t yield any book results, even though the chart and the link suggest that there are textual results to be verified. Links that yield no results indicate an algorithmic limitation of some kind. In many cases, however, when there is no data, the Viewer indicates this explicitly by stating that there are no valid ngrams to plot.

13. Charts were made from the general English (2012) corpus, case-sensitive, with 5-year smoothing.

14. The WordCruncher search string used was “((excepte + except) #.2,0 ?S) /subj /should”, with one additional complication not shown. (The phrase list terms /subj and /should represent many different subject pronouns and forms of the auxiliary verb should, including spelling variants.)
This search permitted only pronominal subjects, excluded intervening punctuation, excluded biblical language (Matthew 24:22, Luke 9:13, Acts 8:31), and included variants of the auxiliary verb should. For EEBO1, the search returned results from 245 texts [1517–1700]. 15. Some promote the idea that the original language of the Book of Mormon is a hybrid of (1) clashing archaic language, (2) early modern usage clashing with late modern usage, (3) ungrammatical variation, and/or (4) content-rich language clashing with archaic syntax. Some of these are subjective views. Proper investigation of these matters requires a large amount of research and analysis. Because there were no large digital corpora to check these unstudied claims, scholars felt free to make them. However, now that the syntax can be seriously studied, we find that there is very little clashing language — much less than previously thought. As two specific examples, there isn’t a blatant misuse of second person pronouns in the original Book of Mormon text; it matches some earlier usage. There isn’t improper mixing of {-th} and {-s} inflection; it matches some earlier usage. More generally, a host of variational usage matches verifiable early modern tendencies, and cultural, religious, content-rich phrases don’t clash with the framing language. 16. Grant Hardy, “Approaching Completion: The Book of Mormon Critical Text Project,” BYU Studies 57, no.1 (2018): 176n20. 17. The WordCruncher program is freely available online at wordcruncher.com; the EEBO1 corpus is available in the WordCruncher bookstore. 18. See Royal Skousen, The Nature of the Original Language (Provo, UT: FARMS, 2018), 202–4. 19. Though the King James Bible has two instances of “the more part,” the Book of Mormon’s usage is demonstrably independent of the rare biblical usage. It is also not found in 25 pseudobiblical texts that were checked for this study. 
Thus, this phraseology is properly included in a section discussing some of the Book of Mormon’s nonbiblical phrases. 20. One original instance of “the more part of them” is found in a sermon preached by Henry Wharton [1664–1695] on July 13, 1690 at Lambeth Chapel: “while the Members of it shall all, or the more part of them, perform their Duty.” (1698, EEBO A65594, page 530.) 21. The phraseology “the more part of X” originated before the early modern era, in late Middle English. Currently, the OED’s earliest example of “the more part of X” is dated 1398: “the more parte of therytage [the heritage].” There is also an example without the, dated a1425 [that is, before 1425], most likely 1384: “But more part of þis world erreþ here.” The earliest example in EEBO is dated 1473/1474: “the more part of his sons were dead” (from the first printed book in English). A manageable ECCO search is “the more part of all … ” The Book of Mormon has three of these. If there had been any real increase in original use of “more part” syntax in the early 1700s, we would expect to see some examples of this specific phraseology with all. In ECCO, the nine results from a search performed in June 2018 turned out to yield only three actual hits; but the language dated from much earlier: 1426, 1491, and 1568. So, the 18th-century titles contained 15th- and 16th-century language. This is an important reminder that, in this endeavor, just looking at raw result totals and dates of publication can be completely misleading. This same wording — “the more part of all … ” — turns up 33 times in the 16th century in EEBO1, but not once in the 17th century. This search clearly indicates that “the more part of X” was a phrase characteristic of the 16th century (and earlier). In June 2018, I also performed a Google Books search of “the more part of X” limited to before the year 1830. 
A little more than 20 results were returned, but of those that I could read, all of them, besides two false positives, were examples of earlier language, many from legal documents.
783
https://www.laboratoriovirtual.fisica.ufc.br/maquina-de-atwood?lang=en
Atwood's Machine | Virtual Laboratory of Physics | UFC

Virtual Laboratory of Physics of Federal University of Ceara — Interactive Simulations for Teaching Physics

Authors: Ms. Giselle dos Santos Castro - Federal University of Ceara - UFC; Dr. Nildo Loiola Dias - Federal University of Ceara - UFC

CONTROLS:
- The red cursor lets you choose the mass of the red block.
- The blue cursor lets you choose the mass of the blue block. The simulation places the block with the greatest mass in the top position.
- Choose the local gravity from three possibilities (Earth, Moon and Mars).
- The RELEASE button allows the blocks to move and at the same time starts the stopwatch.
- Press the STOP TIME button to stop the stopwatch.
- The RESET button puts the blocks in the initial position and resets the timer.

DESCRIPTION OF THE SIMULATION:
This simulation allows the study of Newton's second law through the Atwood's machine: two bodies, connected by a rope of negligible mass passing over an ideal pulley. Their masses can be chosen independently with the individual cursors. A stopwatch makes it possible to measure the movement time and thus calculate the acceleration of the system in order to relate it to the resulting force. It is also possible to simulate the Atwood's machine at different gravities and determine the local gravitational acceleration. For an analysis of the data, consult one of the proposed ACTIVITY GUIDES.
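The quantities the simulation asks students to measure can be checked against the ideal model it describes (massless rope, ideal pulley): the net force is the weight difference of the blocks and the whole system shares one acceleration. A minimal sketch, with function names of my own choosing and `g` defaulting to Earth gravity:

```python
def atwood_acceleration(m1, m2, g=9.81):
    """Acceleration magnitude of an ideal Atwood machine (kg, kg, m/s^2).

    Net force = |m1 - m2| * g; total inertia = m1 + m2.
    """
    return abs(m1 - m2) * g / (m1 + m2)

def descent_time(h, a):
    """Time for the heavier block, released from rest, to fall height h (m).

    From h = a * t**2 / 2.
    """
    return (2 * h / a) ** 0.5

# Example: 600 g and 400 g blocks on Earth
a = atwood_acceleration(0.6, 0.4)   # about 1.96 m/s^2
```

Timing the descent over a known height with the simulated stopwatch and inverting `descent_time` is one way to recover the local gravitational acceleration, as the activity suggests.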
784
https://www.sciencedirect.com/science/article/abs/pii/B9780323555128000569
Donovanosis (Granuloma Inguinale) - ScienceDirect

Hunter's Tropical Medicine and Emerging Infectious Diseases (Tenth Edition), 2020, Pages 531-533
56 - Donovanosis (Granuloma Inguinale)
Nigel O'Farrell

Abstract
Donovanosis (granuloma inguinale) is a rare cause of genital ulceration that is close to global eradication. There is debate about the nomenclature of the causative organism: phylogenetic analysis confirms close similarities between Calymmatobacterium granulomatis and the genus Klebsiella, and a proposal has been made that the infective agent be reclassified as Klebsiella granulomatis comb. nov. The condition has an unusual geographic distribution, with most reported cases coming from India, Papua New Guinea, South Africa, and Brazil. Ulcers may be quite large and sometimes affect the inguinal region. Systemic spread can occur, but is uncommon. Diagnosis is usually made by the detection of the characteristic Donovan bodies in monocytes on tissue smears or biopsy specimens stained by the Giemsa method. Azithromycin has emerged as the drug of choice and should be used if the diagnosis is confirmed or suspected. Doxycycline, ciprofloxacin, and ceftriaxone are also effective.

Copyright © 2020 Elsevier Inc.
785
https://www.youtube.com/watch?v=BbB4bj3wveA
IXL Practice: One variable equations (S4,S5)
David Reeves · 559 subscribers · 96 views · Posted: 4 Feb 2021
Mr. Reeves works through 7th grade IXL math skills S4 and S5

Transcript: well hey everybody mr reeves back with you as we continue to work through seventh grade math skills on ixl and once again we are all the way down at one variable equations in my previous video i went through skills s1 and s2 and now i'm going to look at s4 and s5 all right so let's go straight over to s4 s4 says write an equation so that says the length of the green line is equal to the length of the black line in other words the top part is one side of an equation and the bottom part is the other side of the equation now remember an equation is an expression set equal to a constant or an expression set equal to another expression in this case we have an expression j and if we want the entire length we're going to do j plus 12. and then what does it equal it equals 21. but if you by the way if you're wondering why didn't you put 21 equals j plus 12. we absolutely certainly could have because you can flip that equation around if a is equal to b then b is equal to a all right let's go ahead and submit this one and go to the next one same idea right our expression is r plus 14 and that expression is equal to 17 those segments are the same length so that expression is equal to that constant all right how about we jump up to the next level looks like pretty much the same thing j plus 11 is equal to 18 all right how about the next level okay now we're a little bit more tricky here okay we've got 2f plus 4f equals 36.
let me read the instructions it says write an equation that says the length of the green line is equal to the length of the black line combine like terms that's what i wanted to see did they want me to combine my like terms before i wrote the equation they do want me to so two f's plus four f's that's going to make six f's six f's equals 36 and now they want us to say what f is equal to well let's see 6 times 6 is 36 or 36 divided by 6 is 6. so there we go there's our answer all right let's jump up to the next level here we go three j plus j very similar to what we just did four j equals 36 right four times what is 36 or what's 36 divided by four it's nine all right what about this one we've got n plus two n plus eight well n plus two n whoops didn't mean for that to happen n plus two n is equal to three n right so i've got three n plus eight that's what this expression is equal to once i've combined my like terms three n plus eight is equal to forty four all right now this is actually a two step equation we haven't gone through solving two-step equations so we're gonna have to do a little thinking about it if you know how to solve two-step equations you can do it otherwise maybe some guess and check let's see here i know that 3 times 10 would be 30 right 30 plus 8 would be 38 that's too small what about if i did 3 times 11 that'd be 33 33 plus 8 would be 41 what about 3 times 12 3 times 12 is 36 36 plus 8 is 44. so because we have not officially talked about solving two step equations yet i did it by guess and check of course those of you who know what to do you would subtract that 8 and then divide by 3 using inverse operations all right let's go ahead and skip up to the next level all right so here we go we've got two f's right two f's plus eight is equal to 14. 
so again again if you did not know how to solve two step equations we could try it by guess and check let's see well 2 times 5 would be 10 10 plus 8 is too big 2 times 4 would be 8 8 plus 8 that's also too big 2 times 3 6 right so if f were 3 it would work again for those of you who know how to solve two step equations we would subtract that 8 and then divide by two 14 minus eight is six six divided by two is three all right what about the challenge zone what do they have for us here y y y y three y's yeah that's right three y's plus seven three y's plus seven is equal to thirty four all right so again it is a two step equation if you need to do guess and check well three times ten is thirty thirty plus seven that's too big how about three times nine three times nine is twenty seven and guess what twenty seven plus seven is it's thirty four so guess and check if you don't know how to solve two step equations if you do you would do 34-7 which would be 27 divided by 3 which would be 9. all right let's skip to a higher level even within the challenge zone oh oh oh what does this mean you know what this means this means we're going to have to take it away because the whole thing take away that 5 equals 27 oh i really like this four j's right four j's not plus five excuse me four j is a minus five is equal to twenty-seven four times something minus five is equal to 27 so again if we're doing guess and check well let's see 4 times 10 too big 4 times 8 32 32 minus 5 does that equal 27 i think it does 8 right but again if we were using our method for solving two step equations we would add that five on we would do 27 plus five and then we would divide by four all right i'm going to go ahead and stop there and skip up to our next and final skill on this video which is solving one-step equations all right so in order to solve a one-step equation all right we can ask ourselves this one particular here what divided by 3 is 2 well if you want to know what divided by 3 is 2 
can't you just multiply 2 by 3 that's right you can use what we call inverse or opposite operations the inverse or opposite operation of division is multiplication so if you want to say what divided by three is two simply do two times three and you get six six divided by three is two right what plus four is seven well if you wanna know what plus four is seven take that four away from the seven and what do you get you get three the opposite of adding four is subtracting four seven take away four is three all right what about this one what plus seven is ten if you wanna know what plus seven is ten simply do ten take away that seven right ten take away that 7 that was added to get that 10 gives you 3. 3 plus 7 is 10. so we're using inverse operations if something is added to the variable we're going to subtract if something is subtracted we're going to add if something is multiplied we're going to divide or in this case right here if something is divided we're going to multiply what divided by 3 is 4 well let's do 4 times 3 is 12 right we're simply using inverse operations something times 3 is 12. well again what's the opposite of multiplying dividing 12 divided by 3 is 4. inverse or opposite operations all right i'm going to go ahead and skip up two levels here 11 times something is 264. all right so they made a little more difficult by making the numbers bigger so i need to do 264 divided by 11. so maybe having a calculator sometimes might prove helpful all right i could do it without a calculator of course but time is of the essence 264 divided by 11 is 24 24 times 11 is 264. all right this one i have negative 451 and i'm going to do the same thing i'm going to divide by 11 right negative 451 divided by 11 is going to be negative 41. 
all right some of you probably have learned those tricks with 11's they're going to see mr reeves there's the negative 41 right there you should have known that you're right as a math teacher i should know that i actually do know that but i don't always use everything i know because you know i can't expect you guys to do the same thing here all right what about negative 21b times something is negative 861. looks like they're just going to use some more some bigger numbers to try to challenge us but if we have a calculator it's not really that much more challenging what times negative 21 is negative 861 we're simply gonna divide 861 well the negative version by 21 i should have done the negative version by that a negative divided by a negative is going to be a positive even though i forgot it so this answer is going to be 41. all right let's see if they've got anything any harder towards the end of the challenge zone they got decimals right but what are we going to do instead of adding 15.4 to get 8.4 we're going to subtract right we're going to do 8.4 minus 15.4 i'm pretty sure that's negative seven point zero right isn't it all righty there we go oh last one this one ah what does this say it says 12.32 is the opposite of something or a negative 1 times something is 12.32 so how about if we just put in negative 12.32 right a negative times a negative would be the positive or the opposite remember we can refer to this as the opposite of the opposite of a negative would be a positive all right so we're going to end there that's seventh grade math we are working with one variable equations just getting started on our equation solving all right i hope this video was helpful until next time see ya
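The inverse-operation method used throughout the lesson — undo the addition or subtraction first, then undo the multiplication or division — can be sketched in a few lines. The function names and the operator-string encoding below are mine, not part of the lesson:

```python
def solve_one_step(op, k, result):
    """Solve  x <op> k = result  by applying the inverse operation to the result."""
    inverse = {
        "+": lambda r: r - k,   # what plus k ...? take k away
        "-": lambda r: r + k,   # what minus k ...? add k back
        "*": lambda r: r / k,   # what times k ...? divide by k
        "/": lambda r: r * k,   # what divided by k ...? multiply by k
    }
    return inverse[op](result)

def solve_two_step(a, b, c):
    """Solve  a*x + b = c : undo the addition first, then the multiplication."""
    return (c - b) / a
```

For example, the problems worked in the video check out: `solve_two_step(3, 7, 34)` gives 9 (3y + 7 = 34) and `solve_two_step(4, -5, 27)` gives 8 (4j − 5 = 27).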
786
https://lo.unisa.edu.au/mod/book/view.php?id=466227
Introduction to Water Engineering
Practical 3: Friction and Minor Losses in Pipes

Introduction

The energy required to push water through a pipeline is dissipated as a friction head loss, expressed in m. "Major" losses occur due to friction within a pipe, and "minor" losses occur at a change of section, valve, bend or other interruption. In this practical you will investigate the impact of major and minor losses on water flow in pipes.

Major losses: h_f = f (L/D) (V² / 2g)

Minor losses: h_m = k (V² / 2g)

where
f = friction factor
k = minor loss coefficient
L = Length (m)
D = Diameter (m)
V = Velocity (m/s)
g = gravitational acceleration (m/s²)

Supporting Information

Major Losses
Pressure loss is proportional to the L/D ratio and the velocity head. For low velocities, where the flow is laminar, friction loss is caused by viscous shearing between streamlines near the wall of the pipe and the friction factor (f) is well defined. For high velocities where the flow is fully turbulent, friction loss is caused by water particles coming into contact with irregularities in the surface of the pipe and the friction factor itself is a function of surface roughness. In most engineering applications, the velocity is less than that required for fully turbulent flow and f is a function of both the viscosity of a boundary layer and the roughness of the pipe surface. Values of f can be determined experimentally and plotted in dimensionless form against Reynolds Number Re to form a Moody Diagram.

Minor Losses
Minor losses behave similarly to major losses, where a device with a large k value leads to a high pressure loss. In general, a very sudden change to the flow path contributes to significant pressure loss.
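The two loss formulas are straightforward to evaluate for the data collected in the practical. The sketch below assumes the Darcy–Weisbach form of the major-loss equation given above; the function names and the example numbers (a guessed friction factor of 0.02, a 100 m pipe of 0.1 m diameter at 2 m/s) are mine:

```python
def major_loss(f, L, D, V, g=9.81):
    """Darcy-Weisbach friction head loss h_f = f (L/D) V^2 / (2g), in metres."""
    return f * (L / D) * V**2 / (2 * g)

def minor_loss(k, V, g=9.81):
    """Fitting/valve/bend head loss h_m = k V^2 / (2g), in metres."""
    return k * V**2 / (2 * g)

# Illustrative values only: f = 0.02, L = 100 m, D = 0.1 m, V = 2 m/s
h_f = major_loss(0.02, 100.0, 0.1, 2.0)   # about 4.08 m of head
h_m = minor_loss(0.9, 2.0)                # e.g. k = 0.9 gives about 0.18 m
```

Note that both losses scale with the velocity head V²/2g, which is why a measured head loss versus flow rate curve can be used to back out f or k.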
787
https://www.webqc.org/compound-ALCl3-ALCl3.html
AlCl3 properties

Properties of AlCl3 (Aluminium chloride):

| Compound Name | Aluminium chloride |
| Chemical Formula | AlCl3 |
| Molar Mass | 133.3405386 g/mol |
| Appearance | Colourless crystals, hygroscopic |
| Solubility (water) | 439.0 g/L |
| Density | 2.4800 g/cm³ |
| Heat Capacity | 91.10 J/(mol·K) |
| Enthalpy of Formation | -704.20 kJ/mol |
| Standard Entropy | 109.30 J/(mol·K) |

Alternative Names: Aluminium(III) chloride, Aluminium trichloride, Trichloroaluminum

Elemental composition of AlCl3:

| Element | Symbol | Atomic weight | Atoms | Mass percent | Atomic percent |
| --- | --- | --- | --- | --- | --- |
| Aluminum | Al | 26.9815386 | 1 | 20.2351 | 25.00 |
| Chlorine | Cl | 35.453 | 3 | 79.7649 | 75.00 |

Identifiers:

| CAS Number | 7446-70-0 |
| SMILES | Cl[Al](Cl)Cl |
| Hill formula | AlCl3 |

Related compounds:

| AlCl | Aluminium monochloride |

Sample reactions for AlCl3:

| Equation | Reaction type |
| --- | --- |
| Ca + AlCl3 = CaCl2 + Al | single replacement |
| Li + AlCl3 = LiCl + Al | single replacement |
| AgNO3 + AlCl3 = AgCl + Al(NO3)3 | double replacement |
| AlCl3 + NH4OH = NH4Cl + Al(OH)3 | double replacement |
| AlCl3 + NaOH = Al(OH)3 + NaCl | double replacement |
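The molar mass and percent-composition figures in the tables above follow directly from the listed atomic weights. A short sketch (function names mine) that reproduces them:

```python
# Atomic weights in g/mol, as used in the property table above
ATOMIC_WEIGHTS = {"Al": 26.9815386, "Cl": 35.453}

def molar_mass(composition):
    """composition maps element symbol -> number of atoms, e.g. {"Al": 1, "Cl": 3}."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())

def mass_percent(composition):
    """Mass percent of each element in the compound."""
    total = molar_mass(composition)
    return {el: 100.0 * ATOMIC_WEIGHTS[el] * n / total
            for el, n in composition.items()}

alcl3 = {"Al": 1, "Cl": 3}
# molar_mass(alcl3) -> 133.3405386 g/mol; Al is about 20.24% by mass
```

The atomic-percent column is simpler still: 1 Al atom out of 4 total is 25.00%, and 3 Cl out of 4 is 75.00%.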
Aluminium Chloride (AlCl₃): Chemical Compound
By the Chemistry Department, WebQC.Org
Scientific Review Article | Chemistry Reference Series

Abstract

Aluminium chloride (AlCl₃) represents an industrially significant inorganic compound with the molecular formula AlCl₃. This hygroscopic material exists in both anhydrous and hexahydrate ([Al(H₂O)₆]Cl₃) forms, exhibiting distinct structural characteristics in different phases. The anhydrous compound demonstrates a layered crystal structure with octahedral coordination, while the vapor phase consists primarily of Al₂Cl₆ dimers that dissociate to trigonal planar monomers at elevated temperatures. Aluminium chloride serves as a prototypical Lewis acid catalyst, particularly in Friedel-Crafts alkylation and acylation reactions, with annual production exceeding 21,000 tons in the United States alone. The compound sublimes at about 180°C and demonstrates considerable aqueous acidity due to hydrolysis. Its chemical behavior encompasses complex coordination chemistry, making it fundamental to both industrial processes and synthetic organic chemistry methodologies.

Introduction

Aluminium chloride stands as one of the most commercially important aluminium compounds, classified as an inorganic chloride salt. First studied systematically in the 1830s, this compound was historically known as muriate of alumina or marine alum during the 18th century. The anhydrous form possesses particular significance in industrial chemistry, primarily serving aluminum production and functioning as a catalyst in organic transformations. Its Lewis acidic character arises from the electron-deficient aluminium center, which readily accepts electron pairs from various Lewis bases.
The compound exhibits reversible structural transitions between polymeric and monomeric states at moderate temperatures, a property that underpins its diverse chemical applications. Both anhydrous and hydrated forms appear as colourless crystals, though industrial samples frequently display yellow coloration due to iron(III) chloride contamination.

Molecular Structure and Bonding

Molecular Geometry and Electronic Structure

Aluminium chloride demonstrates remarkable structural polymorphism dependent on physical state and temperature. In the solid phase, anhydrous AlCl₃ crystallizes in a monoclinic system (space group C12/m1, No. 12) with lattice parameters a = 0.591 nm, b = 0.591 nm, and c = 1.752 nm. The unit cell volume measures 0.52996 nm³ containing six formula units. This structure features cubic close-packed chloride ions with aluminium centers in octahedral coordination geometry, isostructural with yttrium(III) chloride.

The vapour phase predominantly contains Al₂Cl₆ dimers (point group D₂h) at moderate temperatures, with aluminium atoms adopting tetrahedral coordination. These dimers dissociate into trigonal planar AlCl₃ monomers (point group D₃h) above approximately 180°C, structurally analogous to boron trifluoride. The aluminium center in the monomer exhibits sp² hybridization with bond angles of 120° between chlorine atoms. The electronic configuration of aluminium ([Ne]3s²3p¹) permits the formation of three covalent bonds, leaving the central atom electron-deficient and highly electrophilic.

Chemical Bonding and Intermolecular Forces

The Al-Cl bonds in aluminium chloride demonstrate predominantly covalent character with partial ionic contribution. Experimental bond lengths measure 206 pm in the dimeric form, shorter than typical ionic aluminium-chlorine distances. The dimerization occurs through donor-acceptor interactions where chlorine atoms bridge between aluminium centers, forming three-center four-electron bonds.
This bonding arrangement reduces the electron deficiency at aluminium centers while maintaining strong Lewis acidity. Intermolecular forces in solid AlCl₃ include ionic interactions between layers and van der Waals forces between chloride ions. The compound exhibits limited hydrogen bonding capability in its anhydrous form but forms extensive hydrogen-bonding networks in the hexahydrate.

The hexahydrate [Al(H₂O)₆]Cl₃ features octahedral aquo complexes with aluminium-oxygen bond distances of approximately 191 pm. Chloride ions serve as counterions and participate in hydrogen bonding with coordinated water molecules. The molecular dipole moment of monomeric AlCl₃ measures 0 Debye due to its symmetric trigonal planar geometry; the centrosymmetric D₂h dimer is likewise nonpolar.

Physical Properties

Phase Behavior and Thermodynamic Properties

Anhydrous aluminium chloride appears as colourless, hygroscopic crystals with a density of 2.48 g/cm³ at 25°C. The compound sublimes at 180°C under atmospheric pressure, bypassing the liquid phase under normal conditions. The liquid phase, obtainable under pressure, demonstrates a lower density of 1.78 g/cm³ at the melting point, consistent with the structural change to dimeric form. The hexahydrate exhibits a density of 2.398 g/cm³ and decomposes rather than melting cleanly, undergoing hydrolysis at approximately 100°C.

Thermodynamic parameters include a standard enthalpy of formation of -704.2 kJ/mol and Gibbs free energy of formation of -628.8 kJ/mol for the anhydrous compound. The standard entropy measures 109.3 J/(mol·K) with a heat capacity of 91.1 J/(mol·K). Vapor pressure data indicate 133.3 Pa at 99°C rising to 13.3 kPa at 151°C. Viscosity measurements yield 0.35 cP at 197°C and 0.26 cP at 237°C for the molten phase. Solubility in water ranges from 439 g/L at 0°C to 490 g/L at 100°C, demonstrating moderate temperature dependence.
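The two vapor-pressure points just quoted (133.3 Pa at 99°C, 13.3 kPa at 151°C) permit a rough two-point Clausius–Clapeyron estimate of the sublimation enthalpy, assuming it is constant over the interval. The function name is mine and the result is only an order-of-magnitude check, not a tabulated value:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def two_point_enthalpy(p1, T1, p2, T2):
    """Clausius-Clapeyron: ln(p2/p1) = -(dH/R)(1/T2 - 1/T1), dH assumed constant."""
    return R * math.log(p2 / p1) / (1.0 / T1 - 1.0 / T2)

# Points quoted in the text: 133.3 Pa at 99 C and 13.3 kPa at 151 C
dH_sub = two_point_enthalpy(133.3, 99 + 273.15, 13.3e3, 151 + 273.15)
# dH_sub comes out near 116 kJ/mol
```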
The compound dissolves readily in hydrogen chloride, ethanol, chloroform, and carbon tetrachloride, while exhibiting only slight solubility in benzene.

Spectroscopic Characteristics

Infrared spectroscopy of anhydrous AlCl₃ reveals characteristic Al-Cl stretching vibrations at 620 cm⁻¹ and 485 cm⁻¹ in the solid phase. The dimeric vapour phase shows additional bridging chloride vibrations at 350 cm⁻¹. Raman spectroscopy provides complementary data with strong bands at 580 cm⁻¹ and 380 cm⁻¹ corresponding to symmetric and asymmetric stretching modes. Nuclear magnetic resonance spectroscopy of aluminium-27 in AlCl₃ solutions shows a characteristic chemical shift at approximately 100 ppm relative to Al(H₂O)₆³⁺, consistent with tetrahedral coordination in Lewis acid-base adducts. The hexahydrate exhibits proton NMR signals at 3.5 ppm for coordinated water molecules. Mass spectrometric analysis of vapour phase AlCl₃ shows predominant peaks corresponding to Al₂Cl₆⁺ and AlCl₃⁺ ions with characteristic isotopic patterns reflecting chlorine natural abundance.

Chemical Properties and Reactivity

Reaction Mechanisms and Kinetics

Aluminium chloride functions as a potent Lewis acid, forming adducts with a wide range of Lewis bases through donor-acceptor interactions. The reaction with chloride ions produces the tetrachloroaluminate anion [AlCl₄]⁻, which exhibits tetrahedral geometry. This complex formation represents a fundamental aspect of the compound's catalytic behavior in Friedel-Crafts reactions.

In Friedel-Crafts alkylation, aluminium chloride activates alkyl halides through formation of carbocation intermediates or polarized complexes. The reaction follows second-order kinetics with rate constants dependent on the arene substrate and alkylating agent. Activation energies typically range from 50-80 kJ/mol for common alkylation reactions.
For acylations, the catalyst forms a highly electrophilic acylium ion complex [RCO]⁺[AlCl₄]⁻ that attacks aromatic rings, with electrophilic substitution as the rate-determining step. The compound catalyzes ene reactions through Lewis acid activation of enophile carbonyl groups, lowering the LUMO energy and facilitating cycloaddition. Reaction rates show first-order dependence on both catalyst and substrate concentrations, with turnover frequencies reaching 100 h⁻¹ under optimized conditions.

Acid-Base and Redox Properties

Aqueous solutions of aluminium chloride are acidic owing to hydrolysis of the hydrated aluminium ion. The first hydrolysis constant is pKₐ = 4.95 for [Al(H₂O)₆]³⁺ ⇌ [Al(OH)(H₂O)₅]²⁺ + H⁺, with subsequent hydrolysis steps occurring at higher pH. Solutions exhibit buffer capacity in the pH range 3.5-5.0 and gradually form aluminium hydroxide precipitates above pH 5. Redox properties include limited oxidizing power, the standard reduction potential of the Al³⁺/Al couple being -1.66 V versus the standard hydrogen electrode. The compound does not function as a strong oxidizing agent but can participate in disproportionation reactions under certain conditions. Stability in reducing environments is moderate, while strong oxidizing conditions may lead to chlorine evolution.

Synthesis and Preparation Methods

Laboratory Synthesis Routes

Laboratory preparation of anhydrous aluminium chloride typically employs the reaction of aluminium metal with chlorine gas or hydrogen chloride. Direct chlorination proceeds exothermically at 650-750°C according to the equation 2Al + 3Cl₂ → 2AlCl₃. This method requires careful temperature control to prevent excessive sublimation and product loss. The hydrogen chloride route follows 2Al + 6HCl → 2AlCl₃ + 3H₂, generating hydrogen gas as a byproduct. Alternative laboratory routes include single-displacement reactions using copper(II) chloride: 2Al + 3CuCl₂ → 2AlCl₃ + 3Cu.
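The first hydrolysis constant quoted above lets us estimate the pH of a dilute AlCl₃ solution. This sketch treats [Al(H₂O)₆]³⁺ as a simple monoprotic weak acid and ignores the later hydrolysis steps; the 0.10 M concentration is an illustrative choice.

```python
import math

# Treat [Al(H2O)6]3+ as a weak monoprotic acid with pKa = 4.95 (from the text).
# For a weak acid with Ka << C, the usual approximation is [H+] ~ sqrt(Ka * C).
pKa = 4.95
Ka = 10 ** (-pKa)
C = 0.10                  # mol/L, illustrative concentration

h = math.sqrt(Ka * C)     # approximate hydronium concentration, mol/L
pH = -math.log10(h)
print(f"Estimated pH of 0.10 M AlCl3: {pH:.1f}")
```

The estimate of pH near 3 is consistent with the buffering range of 3.5-5.0 cited in the text once hydrolysis products accumulate.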
This method provides moderate yields but requires subsequent purification to remove copper contaminants. Hydrated aluminium chloride is readily prepared by dissolving aluminium oxide or aluminium metal in hydrochloric acid, followed by crystallization from aqueous solution.

Industrial Production Methods

Industrial production predominantly uses direct chlorination of aluminium metal, conducted in batch or continuous reactors at temperatures between 650°C and 750°C. The process employs recycled aluminium from various sources, including scrap metal and industrial waste. Large-scale reactors handle several tons per day, with energy requirements of approximately 2.5 kWh per kilogram of product. Process optimization focuses on chlorine utilization efficiency and heat management, as the reaction releases 705 kJ per mole of product. Environmental considerations include chlorine containment and byproduct recovery systems. Global production capacity exceeds 100,000 tons annually, with major manufacturing facilities located in industrial regions with access to aluminium and chlorine sources. Economic factors involve aluminium and chlorine market prices, with production costs typically ranging from $1.50 to $2.50 per kilogram.

Analytical Methods and Characterization

Identification and Quantification

Qualitative identification of aluminium chloride employs precipitation tests with sodium hydroxide, producing gelatinous aluminium hydroxide that dissolves in excess reagent. Quantitative analysis typically uses complexometric titration with EDTA at pH 4-5 with xylenol orange or eriochrome black T indicators. Spectrophotometric methods measure aluminium content after complexation with reagents such as aluminon or 8-hydroxyquinoline, achieving detection limits of 0.1 mg/L. Instrumental techniques include atomic absorption spectroscopy, with detection limits of 0.01 mg/L for aluminium, and ion chromatography for chloride determination.
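The chlorination stoichiometry (2Al + 3Cl₂ → 2AlCl₃) fixes the theoretical raw-material demand per unit of product, a useful companion to the energy figure above. The sketch assumes 100% chlorine utilization, which real reactors only approach.

```python
# Theoretical raw-material demand for 2Al + 3Cl2 -> 2AlCl3,
# per kilogram of AlCl3 product (100% chlorine utilization assumed).
M_AL = 26.98                      # g/mol
M_CL = 35.45                      # g/mol
M_ALCL3 = M_AL + 3 * M_CL         # ~133.33 g/mol

mol_product = 1000.0 / M_ALCL3          # mol AlCl3 in 1 kg
m_al = mol_product * M_AL               # g aluminium required
m_cl2 = mol_product * 1.5 * (2 * M_CL)  # 1.5 mol Cl2 per mol AlCl3

print(f"Per kg AlCl3: {m_al:.0f} g Al and {m_cl2:.0f} g Cl2")
```

Roughly four-fifths of the product mass comes from chlorine, which is why chlorine utilization efficiency dominates process optimization.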
X-ray diffraction provides definitive identification of crystalline forms through comparison with reference patterns (JCPDS 01-072-0782 for anhydrous AlCl₃). Thermal analysis techniques differentiate between anhydrous and hydrated forms through characteristic decomposition patterns.

Purity Assessment and Quality Control

Industrial specifications for anhydrous aluminium chloride require a minimum purity of 98.5%, with iron content below 0.01% and heavy metals below 0.005%. Common impurities include iron(III) chloride, aluminium oxychloride, and moisture. Moisture determination employs Karl Fischer titration, with acceptance criteria typically below 0.5% water content. Quality control protocols include measurement of catalytic activity in standardized Friedel-Crafts test reactions. Storage stability requires airtight containers with desiccants to prevent hydrolysis. Shelf life under proper storage conditions exceeds two years for anhydrous material, while the hexahydrate is more stable but of limited catalytic utility.

Applications and Uses

Industrial and Commercial Applications

The primary industrial application is catalysis of Friedel-Crafts reactions for production of dyes, pharmaceuticals, and specialty chemicals. Anthraquinone production from benzene and phthalic anhydride represents a significant industrial process consuming substantial quantities of aluminium chloride. The compound catalyzes alkylation reactions in petroleum refining and the production of ethylbenzene for styrene manufacture. Additional applications include the manufacture of aluminium alkyl compounds through reaction with Grignard reagents or alkyl aluminium compounds. The compound serves as an electrolyte component in aluminium production and refining processes. Other uses encompass water treatment as a coagulant precursor, though this application primarily employs polyaluminium chloride derivatives.
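The complexometric EDTA method described in the analytical section is a 1:1 titration, so the aluminium content follows directly from the titrant volume. All sample and titrant figures in this sketch are hypothetical.

```python
# EDTA forms a 1:1 complex with Al3+, so moles of EDTA at the endpoint equal
# moles of aluminium in the aliquot. All numbers below are hypothetical.
M_AL = 26.98            # g/mol
c_edta = 0.0100         # mol/L EDTA titrant (hypothetical)
v_edta = 18.42e-3       # L delivered at the endpoint (hypothetical)
v_sample = 25.00e-3     # L of AlCl3 solution titrated (hypothetical)

mol_al = c_edta * v_edta       # 1:1 stoichiometry
c_al = mol_al / v_sample       # mol/L aluminium in the sample
mass_al = mol_al * M_AL        # g aluminium in the aliquot
print(f"[Al3+] = {c_al * 1000:.2f} mmol/L ({mass_al * 1000:.2f} mg Al per aliquot)")
```

In practice the titration is run at pH 4-5 (as the text notes) because at higher pH aluminium hydroxide precipitation competes with complexation.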
Research Applications and Emerging Uses

Research applications focus on Lewis acid catalysis in novel organic transformations, including asymmetric synthesis using chiral aluminium complexes. Emerging uses include preparation of ionic liquids and deep eutectic solvents with aluminium chloride components. Materials science applications involve synthesis of aluminium-containing ceramics and nanomaterials through sol-gel processes. Electrochemical applications explore aluminium chloride-based electrolytes for battery systems, particularly aluminium-ion batteries. Catalytic research investigates supported aluminium chloride systems for heterogeneous catalysis, addressing limitations of homogeneous systems. Environmental applications examine aluminium chloride derivatives for phosphate removal in wastewater treatment.

Historical Development and Discovery

Aluminium chloride preparations were known in the 18th century as muriate of alumina or marine alum, obtained by treating clay with hydrochloric acid. Systematic chemical investigation began in the 1830s with characterization of its composition and properties. The compound's catalytic properties in organic reactions gained recognition in the late 19th century following Charles Friedel and James Crafts' pioneering work on aromatic substitutions. Structural understanding evolved throughout the 20th century, with X-ray crystallographic studies clarifying the solid-state structure in the 1920s. Vapor-phase electron diffraction studies in the 1930s revealed the dimeric nature of gaseous AlCl₃. Industrial production scaled significantly during the mid-20th century to meet demand from the petroleum and chemical industries. Recent developments focus on environmentally benign alternatives and supported catalyst systems.

Conclusion

Aluminium chloride represents a chemically versatile compound with significant industrial and research importance.
Its structural complexity, encompassing multiple coordination environments across different phases, provides fundamental insights into inorganic chemistry and bonding theory. The compound's potent Lewis acidity enables diverse catalytic applications, particularly in Friedel-Crafts reactions that remain cornerstone methodologies in organic synthesis. Future research directions include development of more sustainable production methods, exploration of supported and recyclable catalyst systems, and investigation of novel applications in materials science and electrochemistry. Challenges persist in managing the compound's corrosive nature and environmental impact, driving ongoing efforts to develop alternative catalysts with reduced toxicity and waste generation. The continued scientific investigation of aluminium chloride and its derivatives ensures its enduring significance in chemical science and technology.
788
https://books.google.com/books/about/Atkins_Physical_Chemistry.html?id=3QpDDwAAQBAJ
Atkins' Physical Chemistry
==========================

Peter William Atkins, Julio De Paula, James Keeler
Oxford University Press, 2018 - Science - 908 pages

The exceptional quality of previous editions has been built upon to make this new edition of Atkins' Physical Chemistry even more closely suited to the needs of both lecturers and students. Re-organised into discrete Topics, the text is more flexible to teach from and more readable for students. Now in its eleventh edition, the text has been enhanced with additional learning features and maths support to demonstrate the absolute centrality of mathematics to physical chemistry. Increasing the digestibility of the text in this new approach, the reader is brought to a question, then the maths is used to show how it can be answered and progress made. The expanded and redistributed maths support also includes a greatly increased number of 'Chemist's toolkits' which provide students with succinct reminders of mathematical concepts and techniques right where they need them. Checklists of key concepts at the end of each Topic add to the extensive learning support provided throughout the book, to reinforce the main take-home messages in each section.
The coupling of the broad coverage of the subject with a structure and use of pedagogy that is even more innovative will ensure Atkins' Physical Chemistry remains the textbook of choice for studying physical chemistry.

Contents

PROLOGUE Energy, temperature and chemistry 1
FOCUS 1 The properties of gases 3
  TOPIC 1A The perfect gas 4
  TOPIC 1B The kinetic model 11
  TOPIC 1C Real gases 19
FOCUS 2 The First Law 33
  TOPIC 2A Internal energy 34
  TOPIC 2B Enthalpy 46
  TOPIC 2C Thermochemistry 51
  TOPIC 2D State functions and exact differentials 59
  TOPIC 2E Adiabatic changes 67
FOCUS 3 The Second and Third Laws 77
  TOPIC 3A Entropy 78
  TOPIC 3B Entropy changes accompanying specific processes 88
  TOPIC 3C The measurement of entropy 92
  TOPIC 3D Concentrating on the system 97
  TOPIC 3E Combining the First and Second Laws 104
FOCUS 4 Physical transformations of pure substances 119
  TOPIC 4A Phase diagrams of pure substances 120
  TOPIC 4B Thermodynamic aspects of phase transitions 128
FOCUS 5 Simple mixtures 141
  TOPIC 5A The thermodynamic description of mixtures 143
  TOPIC 5B The properties of solutions 155
  TOPIC 5C Phase diagrams of binary systems: liquids 166
  TOPIC 5D Phase diagrams of binary systems: solids 177
  TOPIC 5E Phase diagrams of ternary systems 180
  TOPIC 5F Activities 183
FOCUS 6 Chemical equilibrium 203
  TOPIC 6A The equilibrium constant 204
  TOPIC 6B The response of equilibria to the conditions 212
  TOPIC 6C Electrochemical cells 217
  TOPIC 6D Electrode potentials 224
FOCUS 16 Molecules in motion 235
  TOPIC 16A Transport properties of a perfect gas 236
  TOPIC 16B Motion in liquids 245
  TOPIC 16C Diffusion 252
FOCUS 17 Chemical kinetics 267
  TOPIC 17A The rates of chemical reactions 269
  TOPIC 17B Integrated rate laws 277
  TOPIC 17C Reactions approaching equilibrium 283
  TOPIC 17D The Arrhenius equation 287
  TOPIC 17E Reaction mechanisms 292
  TOPIC 17F Examples of reaction mechanisms 299
  TOPIC 17G Photochemistry 308
FOCUS 18 Reaction dynamics 325
  TOPIC 18A Collision theory 326
  TOPIC 18B Diffusion-controlled reactions 333
  TOPIC 18C Transition-state theory 338
  TOPIC 18D The dynamics of molecular collisions 347
  TOPIC 18E Electron transfer in homogeneous systems 356
FOCUS 19 Processes at solid surfaces 369
  TOPIC 19A An introduction to solid surfaces 370
  TOPIC 19B Adsorption and desorption 378
  TOPIC 19C Heterogeneous catalysis 387
  TOPIC 19D Processes at electrodes 391
RESOURCE SECTION 407
INDEX 437

Other editions: Atkins' Physical Chemistry 11e: Volume 3: Molecular Thermodynamics and Kinetics, Peter Atkins, Julio De Paula, James Keeler (limited preview, 2019); Atkins' Physical Chemistry, Peter Atkins, Julio de Paula, James Keeler (no preview available, 2018).

Bibliographic information
Title: Atkins' Physical Chemistry
Authors: Peter William Atkins, Julio De Paula, James Keeler
Edition: illustrated
Publisher: Oxford University Press, 2018
ISBN: 0198769865, 9780198769866
Length: 908 pages
Subjects: Science / Chemistry / General; Science / Chemistry / Physical & Theoretical
789
https://www.paho.org/sites/default/files/2018-cde-5-global-situation-malaria-rasmussen.pdf
Global situation of resistance to antimalarial drugs
Charlotte Rasmussen, Drug Efficacy and Response Unit

Treatment guidelines for falciparum malaria
• Uncomplicated falciparum malaria: artemisinin-based combination therapies (ACTs):
  • Artemether-lumefantrine
  • Artesunate-amodiaquine
  • Artesunate-mefloquine
  • Artesunate-SP
  • Dihydroartemisinin-piperaquine
  • Artesunate-pyronaridine (in areas where other ACTs are failing)
• Severe malaria: artesunate, artemether, or quinine, followed by an ACT

Treatment guidelines for vivax malaria
• In areas with chloroquine-susceptible infections: ACT or chloroquine (+ primaquine)
• In areas with chloroquine-resistant infections: ACT (+ primaquine)

Definitions
• Antimalarial resistance is the ability of a parasite strain to survive and/or multiply despite the administration and absorption of a drug given in doses equal to or higher than those usually recommended but within tolerance of the subject.
• Artemisinin resistance is delayed parasite clearance following treatment with an artesunate monotherapy or with an ACT; "partial resistance" would be more appropriate wording.
• Multidrug resistance (MDR) is resistance to more than 2 antimalarial compounds of different chemical classes. This term usually refers to P. falciparum resistance to chloroquine, sulfadoxine-pyrimethamine, and a third antimalarial compound.
• Treatment failure (≠ resistance) is the inability to clear parasites from a patient's blood or to prevent their recrudescence after the administration of an antimalarial. Many factors can contribute to treatment failure, including incorrect dosage, poor patient compliance, poor drug quality, drug interactions, and resistance. Most of these factors are addressed by therapeutic efficacy studies.

Therapeutic efficacy studies (TES)
• Prospective evaluations of patients' clinical and parasitological responses to treatment for uncomplicated malaria.
• Considered the gold standard for assessing antimalarial drug efficacy. The resulting data are used to inform national malaria treatment policy in malaria-endemic countries.

Monitoring efficacy and resistance
• Studies conducted according to the WHO protocol, repeatedly at the same sites and at regular intervals, allow early detection of changes in treatment efficacy and comparison of results within and across regions over time.
• Molecular markers: drug resistance is one of the causes of treatment failure. Once genetic changes associated with resistance are identified (molecular markers), drug resistance can be confirmed and monitored with molecular techniques.

Molecular markers of drug resistance for falciparum (SNP: single nucleotide polymorphism)
• Chloroquine (4-aminoquinolines): Pfcrt SNP
• Amodiaquine (4-aminoquinolines): molecular marker yet to be validated; studies show that amodiaquine selects for Pfmdr1 (86Y)
• Piperaquine (4-aminoquinolines): Pfpm2-3 copy number
• Pyrimethamine (antifolates): Pfdhfr SNP
• Sulfadoxine (antifolates): Pfdhps SNP
• Mefloquine (amino-alcohols): Pfmdr1 copy number
• Lumefantrine (amino-alcohols): molecular marker yet to be validated; studies show that lumefantrine selects for Pfmdr1 (N86). Recent data do not confirm Pfmdr1 copy number as a marker of lumefantrine resistance.
• Artemisinin and artemisinin derivatives (sesquiterpene lactones): PfK13 SNP
• Atovaquone (naphthoquinones): Pfcytb SNP

Monitoring artemisinin resistance
[Map: distribution of K13 mutants (F446I, M476I, R561H, I543T, Y493H, N458Y, R539T, P553L, C580Y) in the GMS, with a line between artemisinin-resistance regions]
• In vivo artemisinin resistance is defined as delayed parasite clearance; in TES it is seen as an increased proportion of patients still parasite-positive on day 3.
• Artemisinin resistance is also monitored via different validated K13 mutations.
[Maps: percentage of parasites with the C580Y mutation, for studies with start-year 2008-2011 and 2013-2015 (n ≥ 15); legend bins 0, 0.01-5, 5.01-20]

Relation between ACT efficacy and K13 mutations
• 2011, Pailin, Cambodia (artesunate-mefloquine): N = 29, efficacy (28/42 days) 100%, K13 mutant 75.9% (C580Y), Pfmdr1 (n > 1) 6.9%
• 2012-13, Dak Nong, Viet Nam (dihydroartemisinin-piperaquine): N = 33, efficacy 100%, K13 mutant 72.7% (C580Y; Y493H), Pfmdr1 N/A
• 2014, Yingjiang county, Yunnan, China (dihydroartemisinin-piperaquine): N = 23, efficacy 100%, K13 mutant 91.3% (F446I), Pfmdr1 N/A
• 2014-15, Champassak, Lao PDR (artemether-lumefantrine): N = 29, efficacy 93.2%, K13 mutant 83.3% (C580Y; R539T), Pfmdr1 N/A
• 2014-16, Kratie, Siem Reap, Pursat, P. Vihear, Cambodia (artesunate-mefloquine): N = 305, efficacy 100%, K13 mutant 94.2% (C580Y), Pfmdr1 (n > 1) < 5%

Role of each marker in DHA-piperaquine efficacy in Cambodia (N = 725): groups K13 WT / PIP WT (n = 268), K13 WT / PIP MUT (n = 14), K13 MUT / PIP WT (n = 208), K13 MUT / PIP MUT (n = 235). Witkowski et al., Lancet Infectious Diseases 2016.

Clinical outcome after ACT treatment according to the sensitivity pattern of each component:
• Artemisinin sensitive + partner drug sensitive: treatment success (ACPR)
• Artemisinin resistant (partial, delayed clearance) + partner drug sensitive: treatment success (ACPR)
• Artemisinin sensitive + partner drug resistant (low grade): treatment success (ACPR)
• Artemisinin sensitive + partner drug resistant (high grade): treatment failure
• Artemisinin resistant (partial, delayed clearance) + partner drug resistant: treatment failure (high rate)

A 3-day treatment with artesunate used as monotherapy may cure up to 50% of patients. For amodiaquine and SP, treatment response was still adequate despite 20-30% AQ or SP resistance in the absence of artemisinin resistance.

Number of ACTs failing in the Greater Mekong Subregion; changes in Cambodia national malaria treatment policies; spread of DHA-piperaquine resistance in the GMS.

Spread of a single multidrug-resistant malaria parasite lineage to Viet Nam
• The spread of resistant parasites across the region is linked to massive drug pressure, including through MDAs. Adapted from Imwong et al. 2017, Lancet Infectious Diseases.
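The clinical-outcome relationship above is effectively a small decision table keyed on the sensitivity of each ACT component. A minimal sketch, with shortened status labels of my own choosing rather than WHO terminology:

```python
# Decision table from the slide: expected ACT outcome as a function of the
# sensitivity of each component. Status labels are shortened for illustration.
OUTCOME = {
    ("sensitive", "sensitive"): "success (ACPR)",
    ("partial_resistant", "sensitive"): "success (ACPR)",
    ("sensitive", "low_grade_resistant"): "success (ACPR)",
    ("sensitive", "high_grade_resistant"): "failure",
    ("partial_resistant", "resistant"): "failure (high rate)",
}

def act_outcome(artemisinin: str, partner: str) -> str:
    """Look up the expected treatment outcome for an (artemisinin, partner) pair."""
    return OUTCOME[(artemisinin, partner)]

print(act_outcome("partial_resistant", "sensitive"))
```

The table makes the key point of the slide explicit: treatment failure is driven by partner-drug resistance, with artemisinin partial resistance alone still yielding adequate cure rates.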
Eliminating malaria in the GMS
[Map: parasite incidence, Jan-Jun 2018 (per 1000 population)]

ACT treatment failure rates in the WHO African Region (2010-2016)
Includes: Angola, Burkina Faso, Benin, Cameroon, CAR, Chad, Comoros, Congo, Côte d'Ivoire, DRC, Equatorial Guinea, Eritrea, Ethiopia, Gabon, Gambia, Ghana, Guinea-Bissau, Kenya, Liberia, Madagascar, Malawi, Mali, Mauritania, Mozambique, Niger, Nigeria, Senegal, Sierra Leone, Somalia, Sudan, Togo, Zambia and Zimbabwe.

Update on antimalarial drug efficacy and drug resistance: prevalence of Pfplasmepsin2-3 increased copy number
• 2013, Comoros: 3/46 (6.5%), TES
• 2015, Mozambique: 0/87 (0%), TES
• 2015, Mozambique: 1/88 (1.1%), TES
• 2015, Mozambique: 1/89 (1.1%), TES
• 2015, Mozambique: 2/87 (2.3%), TES
• 2015, Mozambique: 3/61 (4.9%), pre-MDA
• 2016, Mozambique: 1/19 (5.3%), post-MDA

Recommendations of the TEG
• The presence of multicopy Pfplasmepsin 2-3 in Africa is a potential concern in terms of the use of DHA-PIP;
• additional information is required regarding the in vivo and ex vivo piperaquine-resistant phenotype in African parasites;
• additional African data are needed to assess the relationship between DHA-PIP treatment failures and molecular markers (Pfkelch13, Pfplasmepsin 2-3, and Pfcrt).

Conclusions
• Surveillance for artemisinin and partner drug resistance needs to be continued and strengthened in the GMS;
• there is a critical need for surveillance outside the GMS to detect potential de novo resistance or the potential introduction of resistant parasites;
• where surveillance signals a potential threat to nationally recommended ACTs, effective alternative ACTs should be identified and implemented before resistance reaches critical levels.

Resources: malaria threats maps; antimalarial drug efficacy and drug resistance (WHO website); update on drug resistance (WHO website).

Thank you for your attention.

BACK-UP SLIDES

Percentage F446I and C580Y (source: WHO database).
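Prevalence figures like the 3/46 (6.5%) Comoros result above carry wide uncertainty at these sample sizes; a Wilson score interval makes that explicit. This is a standard textbook calculation, not part of the WHO analysis.

```python
import math

def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom, (center + margin) / denom

lo, hi = wilson_interval(3, 46)   # the 2013 Comoros figure above
print(f"3/46 = 6.5%, 95% CI roughly {lo * 100:.1f}% to {hi * 100:.1f}%")
```

An observed prevalence of 6.5% in 46 samples is statistically compatible with anything from about 2% to over 17%, which is one reason the TEG calls for additional African data before drawing conclusions.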
[Chart: percentage of P. falciparum with C580Y or F446I, by year; includes all studies with n > 14 and a study start-year between 2010 and 2018]

• During the World Health Assembly in May 2018, health ministers and senior representatives from GMS countries signed the Call for Action to Eliminate Malaria, reconfirming the commitment to malaria elimination in the GMS by 2030.

K13 markers
• Validated: F446I, N458Y, M476I, Y493H, R539T, I543T, P553L, R561H, C580Y
• Candidates/associated: P441L, G449A, C469F, A481V, P527H, N537I, G538V, V568G, P574L, F673I, A675V

Relation between partner drug efficacy and K13 mutations
• 2016, Kampong Speu, Kratie (artesunate-mefloquine): N = 69, efficacy (28/42 days) 100%, K13 mutant 95.6% (C580Y)
• 2017, Kampong Speu, Pursat, Stungtreng (artesunate-mefloquine): N = 170, efficacy 99.5%, K13 mutant 78.2% (C580Y, R539T, Y493H)
• 2017, Ratanakiri, Mondulkiri (artesunate-pyronaridine): N = 123, efficacy 97.6%, K13 mutant 72.4% (C580Y)
• 2017, Kachin, N. Shan (artemether-lumefantrine): N = 71, efficacy 97.2%, K13 mutant 43.7% (F446I, R561H)

Distribution of C580Y mutations worldwide
[Map: locations where C580Y has been reported; possible "permissive" or compensatory background mutations. Miotto et al., Nature Genetics 2015.]

Treatment failure rates with AS+SP (2005-2015)
• In India, Somalia and Sudan, treatment failures are associated with Pfdhfr and Pfdhps quadruple and quintuple mutants;
• these mutations are still rare in Afghanistan, IR Iran and Pakistan.

Consequences of artemisinin partial resistance
• Artemisinin resistance affects only ring stages of P. falciparum (no worsening seen over 15 years);
• implications for the treatment of severe malaria (so far no increased mortality reported);
• 7-day artesunate retains > 90% efficacy;
• all 6 partner drugs are highly efficacious as monotherapy in the absence of resistance;
• it increases the risk of de novo resistance to the partner drug and/or facilitates the selection of partner drug resistance; however, new evidence in the GMS shows that artemisinin did not facilitate emergence of mefloquine or piperaquine resistance.

Recommended first-line treatment for falciparum
790
https://www.khanacademy.org/math/revision-term-1-tg-math-class-9/x4b2594fb8fd16723:week-3/x4b2594fb8fd16723:statistics/v/mean-median-mode-example-indian-accent
Mean, median, & mode example (video) | Khan Academy

Revision Term 1 TG Math Class 9 > Week 3 > Unit 3, Lesson 1: Statistics
• Intro to data handling
• Creating frequency tables
• Reading frequency distribution tables
• Knowing frequency distribution table better
• Mean, median, & mode example
• Mean, median, and mode
• Finding mean of the grouped data with frequency
• Median and its use
• Finding Median of the data
• What is Mode and why is it important
• Mode

About this video: Here we give you a set of numbers and then ask you to find the mean, median, and mode.
It's your first opportunity to practice with us! Created by Skyloom (Dubbing).
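The video's exercise is easy to mirror in code using Python's standard library; the data set below is an arbitrary example, not the one used in the video.

```python
import statistics

# An arbitrary sample data set (not the one from the video).
data = [23, 29, 20, 32, 23, 21, 33, 25]

mean = statistics.mean(data)       # sum of values divided by the count
median = statistics.median(data)   # middle value after sorting (average of the two middles here)
mode = statistics.mode(data)       # most frequently occurring value

print(f"mean={mean}, median={median}, mode={mode}")
```

For this data set the mean is 25.75, the median is 24 (the average of the 4th and 5th sorted values), and the mode is 23, since it appears twice while every other value appears once.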
791
https://link.springer.com/book/10.1007/978-1-0716-4698-4
Spermatogenesis: Methods and Protocols | SpringerLink

Spermatogenesis: Methods and Protocols
Book © 2025
Editors: Prabhakara P. Reddi (Department of Comparative Biosciences, College of Veterinary Medicine, University of Illinois Urbana-Champaign, Urbana, USA)
Includes cutting-edge techniques; provides step-by-step detail essential for reproducible results; contains key implementation advice from the experts.
Part of the book series: Methods in Molecular Biology (MIMB, volume 2954)
About this book
This volume presents a comprehensive collection of research methods at the cellular, molecular, and biochemical level to understand sperm production and function. Exploring spermatogenesis and testis biology, the book delves into epigenomics and transcriptomics, spermatogenic cell separation, identification of cell types and stages, and more. Written for the highly successful Methods in Molecular Biology series, chapters include introductions to their respective topics, lists of the necessary materials and reagents, step-by-step and readily reproducible laboratory protocols, and tips on troubleshooting and avoiding known pitfalls. Authoritative and practical, Spermatogenesis: Methods and Protocols serves as an excellent reference for basic and applied researchers in the field of spermatogenesis and male fertility working with laboratory models as well as domestic animals.

Table of contents (17 protocols)
Front Matter, Pages i-xi
Epigenomics and Transcriptomics of Testicular Cells
Profiling Histone Modifications in Differentiating Mouse Spermatogonia with CUT&Tag (Benjamin William Walters, Haoming Yu, Shubhangini Kataruka, Bluma J. Lesch), Pages 3-26
Quantitative CUT&Tag for Epigenomic Profiling of Mouse Germ Cells (Mengwen Hu, Satoshi H. Namekawa), Pages 27-47
Spatial Transcriptomic Analyses of Spermatogenesis (Ndifereke Uboh, Sean Vargas, Victoria D. Diaz, Brian P. Hermann), Pages 49-94
Identification of RNA Targets of Highly Conserved RNA Binding Proteins Essential for Animal Sperm Development (Min Zang, Liping Cheng, Eugene Yujun Xu), Pages 95-117

Novel Methods to Evaluate Testis Function
Generation of Organotypic Testicular Organoids from Rat and Pig Primary Cells in Microwell Culture (Anja Elsenhans, Nathalia de Lima e Martins Lara, Sadman Sakib, Ina Dobrinski), Pages 121-134
Inhibiting Aldehyde Dehydrogenase Function Using WIN 18,446 to Synchronize Spermatogenesis (Shelby L. Havel, Michael D. Griswold), Pages 135-142

Spermatogenic Cell Separation Techniques
Individualization of Testicular Cells for Single-Cell Subcellular Immunocytochemistry and Cell Culture Applications (Tracy M. Clement), Pages 145-161
Isolation of Pig or Rodent Sertoli Cells for Use in Transplantation (Alexis R. Rodriguez, João Pedro Tôrres Guimarães, Anand Chakroborty, Rachel L. Babcock, Jonathan M. Miranda, Gurvinder Kaur et al.), Pages 163-182
Density Gradient-Based Separation of Testicular Cell Types Using a STA-PUT Apparatus (Mirella L. Meyer-Ficca, Ralph G. Meyer), Pages 183-195

Identification of Spermatogenic Cell Types and Stages
Antibody Labeling of Mouse Spermatocyte Chromosome Spreads to Study Meiosis (Md Hasanur Alam, Huanyu Qiao), Pages 199-206
Transillumination-Assisted Microdissection for Precise Staging of Seminiferous Tubules in Mice (Irene Infancy Joseph, Prabhakara Poothi Reddi), Pages 207-216
Immunofluorescence Staining of the Manchette and Developing Sperm Flagella in Mouse (Changmin Niu, Zhibing Zhang), Pages 217-225

Methods to Evaluate Sperm Function
Methods of Mammalian In Vitro Sperm Capacitation to Study Sperm Physiology (Michal Zigo, Karl Kerns, Peter Sutovsky), Pages 229-240
Analysis of Motion Characteristics and Plasma Membrane Intactness (Viability) in Sperm from Domestic Animals (Camilo Hernández-Avilés), Pages 241-259
Bibliographic Information
Book Title: Spermatogenesis
Book Subtitle: Methods and Protocols
Editors: Prabhakara P. Reddi
Series Title: Methods in Molecular Biology
DOI:
Publisher: Humana New York, NY
Copyright Information: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature 2025
Hardcover ISBN: 978-1-0716-4697-7 (published 03 July 2025)
Softcover ISBN: 978-1-0716-4700-4 (due 17 July 2026)
eBook ISBN: 978-1-0716-4698-4 (published 02 July 2025)
Series ISSN: 1064-3745; Series E-ISSN: 1940-6029
Edition Number: 1
Number of Pages: XI, 296
Number of Illustrations: 65 b/w illustrations
Topics: Cell Biology, Physiology
Keywords: male factor infertility, sperm production, next generation sequencing, spermatogenic cells, testis biology, transgenic and knockout mouse lines
792
https://matheducators.stackexchange.com/questions/27715/a-visualization-for-the-quotient-rule
A visualization for the quotient rule

Asked Apr 16, 2024; viewed 3k times; score 15.

Context: first year didactics of mathematics course for middle school teacher students (in Norway).

I have a reasonable visualization for the product rule of derivatives: consider a rectangle with sides a and b, change them by Δa and Δb respectively, and take the difference of the areas between the original and the altered rectangle. If desired, next consider parametrized side lengths f(x) and g(x) and so on. To me, this shows nicely why the cross terms are there in the product rule. It can lead to further discussions of the sensitivity of product-type quantities to changes in the factors; changes in the smaller one matter more.

Does a helpful visualization exist for the quotient rule?

Tags: undergraduate-education, calculus, mathematical-analysis, derivative, visualization

asked Apr 16, 2024 at 13:09 by Tommi

Comments:

Related: see the sci.math thread "Long division" initiated by Quentin Grady (28 Feb - 2 Mar 2007). In particular, my 2 March 2007 post discusses three algebraic methods that I titled as: (1) METHOD 1: WORKING DIRECTLY FROM (delta y/x) - y/x; (2) METHOD 2: RATIONALIZING THE DENOMINATOR OF (delta y/x); (3) METHOD 3: LONG DIVISION APPLIED TO (delta y/x). The third method is what Quentin Grady's initial post asked about. – Dave L Renfro, Commented Apr 17, 2024 at 18:27

I always thought the quotient rule was just a special case of the product rule and chain rule for computational ease. It has never struck me as anything important theoretically. – qwr, Commented Apr 19, 2024 at 15:02

Suppose $w = \frac{u}{v}$, so that $wv = u$ and $w'v + v'w = u'$. Then via routine algebra:
$$w' = \frac{u'v - v'u}{v^2}.$$
The one thing that this does not give you, that the proof via limits does give you, is the conclusion that w is differentiable. But this doesn't seem like a "visualization". – Michael Hardy, Commented Jun 12, 2024 at 21:28

@qwr Note my comment above, about what this algebraic argument does not give you. – Michael Hardy, Commented Jun 12, 2024 at 21:29

@MichaelHardy when I learned calculus in middle and high school, we barely proved anything. It was just computational and I'm not sure what the use of learning that was. I would've preferred public schools teach statistics and basic economics. – qwr, Commented Jun 13, 2024 at 3:50

7 Answers

Answer (score 28):

Depending on how much algebra you allow, you could make the exact same rectangle picture but label the sides g(x) and q(x), with area f(x). This geometrically enforces g(x)q(x) = f(x), a.k.a. q(x) = f(x)/g(x). This gives the approximation
$$\Delta f \approx q(x)\,\Delta g + g(x)\,\Delta q,$$
so
$$\Delta q \approx \frac{\Delta f - q(x)\,\Delta g}{g(x)} = \frac{\Delta f - \frac{f(x)}{g(x)}\,\Delta g}{g(x)} = \frac{g(x)\,\Delta f - f(x)\,\Delta g}{(g(x))^2}.$$

– Steven Gubkin, answered Apr 16, 2024 at 13:55 (edited Apr 16, 2024 at 17:21)

Comments:

This is a clever and nice answer. It brings to attention that the quotient rule is just an algebraic reframing of the product rule. – user52817, Commented Apr 16, 2024 at 15:22

@user52817: It's not quite an algebraic reframing of the product rule, because that reframing only tells you that if u/v is differentiable, then its derivative is a certain thing. The reframing does not tell you that u/v is differentiable. The proof via limits does tell you that.
– Michael Hardy, Commented Jun 13, 2024 at 18:11

Answer (score 6):

Here are two geometric ways of thinking about the quotient rule. The first is essentially a geometric interpretation of an algebraic manipulation of the product rule. The second is an interpretation of the quotient rule as it is usually written.

Consider a rectangle with length x and height y, with area A. We want to determine the change in height Δy in response to a change in length Δx and/or a change in area ΔA.

If we hold x constant and increase A by ΔA, then the resulting change in y is Δy = ΔA/x. If we hold A constant and increase x by Δx, then the resulting decrease in y is Δy ≈ −yΔx/x. (It's actually Δy = −(y − Δy)Δx/x, but we neglect the ΔxΔy term as usual.) When A and x change simultaneously, a reasonable approximation for the change in y is
$$\Delta y \approx \frac{\Delta A}{x} - \frac{y\,\Delta x}{x} = \frac{\Delta A}{x} - \frac{A\,\Delta x}{x^2} = \frac{x\,\Delta A - A\,\Delta x}{x^2}.$$

We can interpret this last line geometrically if we extrude our rectangle into a square prism with side lengths x and height y. The area of each vertical face is A = xy. Let's consider the changes in volume that result from changing x, y, and A on the front face. If we fix the height and vary the length of the front face by Δx, the volume of the prism will increase by AΔx. If we fix the length of the front face and vary the height by Δy, the volume will increase by x²Δy. Changes in x and y will result in some change in the area of the front face. If we are given that the area of the front face changes by ΔA, then the change in the volume of the prism is xΔA. We can think of the numerator of the quotient rule as representing the relationship between these changes in volume:
$$x^2\,\Delta y \approx x\,\Delta A - A\,\Delta x.$$

(It's for convenience that we only vary the length of the front face and hold the length of the side face constant.
If both are allowed to vary, then we will get the same result after accounting for an additional volume change of AΔx in the third dimension.)

– Justin Hancock, answered Apr 17, 2024 at 12:51 (edited Apr 18, 2024 at 2:18)

Comments:

Could you please explain a little bit more about xΔA; if you keep x fixed you need to change y to change A by ΔA. – Janaka Rodrigo, Commented Apr 17, 2024 at 17:56

@JanakaRodrigo Yes, if x is fixed, then y must change for A to change. For the volume interpretation, I didn't consider x to be fixed while A was varying, unlike for the area interpretation. I've tried to make this clearer. You could also reason about the volumes by thinking about the change in y when A varies and x is fixed and when x varies and A is fixed, as I did with the areas. I wanted to try to show both types of reasoning. – Justin Hancock, Commented Apr 18, 2024 at 2:23

Answer (score 5):

If you don't mind using similar triangles and are comfortable with both derivatives positive, you can just set OA = g(x), OC = f(x), CD = XZ = Δf(x), AB = ZT = Δg(x) and write
$$\frac{f+\Delta f}{g+\Delta g} = \frac{BT}{OB} = \frac{AY}{OA} = \frac{AX + XZ - YZ}{OA} = \frac{AX}{OA} + \frac{XZ}{OA} - \frac{YZ}{OA}.$$
The first term is f/g, the second one is Δf/g, and the main difficulty is to discern the meaning of the third (subtracted) term. By the similarity of OAY and YZT, we have
$$ZY = \frac{AY \cdot ZT}{OA} = \frac{AY\,\Delta g}{g},$$
and now it boils down to how much hand-waving you are comfortable with to say that AY is essentially f (on the picture it is conveniently between f and f+Δf but it won't be so for different choices of signs).
Generally I often prefer a completely different route, however, which goes along with the mantra that for addition/subtraction one should add/subtract absolute errors, but for multiplication/division one should add/subtract relative ones, as a first-order approximation. That story can be told before introducing the formal notion of the derivative or even of the limit, though, of course, the related computations and pictures are pretty much the same. Once it is firmly in place (you can choose the level of rigor that best suits your needs), the product and quotient rules become simple consequences for arbitrarily long products/quotients (basically you get the equations for the logarithmic derivative immediately).

– fedja, answered Apr 16, 2024 at 21:30

Answer (score 3):

Another option, which isn't geometric but which reinforces the concept of derivative as linear approximation, is as follows. First derive (by any means) that
$$\frac{d}{du}\,\frac{1}{u} = -u^{-2}.$$
Convey that the numerical meaning of this is
$$\frac{1}{u+\Delta u} \approx \frac{1}{u} - \frac{\Delta u}{u^2}.$$
Use this to do some back-of-the-envelope approximations like
$$\frac{9}{19} = \frac{9}{20-1} \approx \frac{9}{20} - \frac{(-1)\cdot 9}{20^2} = 0.45 + 0.0225 = 0.4725.$$
Compare the approximation to the true result of 0.47368... The general quotient rule repeats the same sort of calculation:
$$\frac{f+\Delta f}{g+\Delta g} \approx (f+\Delta f)\left(\frac{1}{g} - \frac{\Delta g}{g^2}\right) = \frac{f}{g} + \frac{g\,\Delta f - f\,\Delta g}{g^2} - \frac{\Delta f\,\Delta g}{g^2} \approx \frac{f}{g} + \frac{g\,\Delta f - f\,\Delta g}{g^2}.$$

– Steven Gubkin, answered Apr 17, 2024 at 10:19 (edited Apr 17, 2024 at 11:06)

Answer (score 2):

William Priestley in Calculus: A Liberal Art develops the quotient rule from the product rule and a rule for 1/f(x). Probably other folks do, too, but that's where I first saw it.
If one follows that line of development, then one might appreciate a visualization (not a proof) for the derivative of 1/y, akin to the change-in-rectangle one for the product rule that is fairly common and that one sees echoed in @Justin Hancock's answer. (The blue curve is 1/y vs. y.) The black rectangles across the diagonal have the same area, because each pair of triangles across the diagonal are congruent and the diagonal divides the rectangle in half. The area of the left rectangle is −y·d(1/y), and the area of the right is (1/y)·dy. Hence
$$d\!\left(\frac{1}{y}\right) = -\frac{dy}{y^2}, \qquad\text{or}\qquad \frac{d}{dx}\!\left(\frac{1}{y}\right) = -\frac{dy/dx}{y^2}.$$
Of course, one can use a finite difference Δ instead of the differential d as one sees fit.

– user1815, answered Jun 14, 2024 at 18:09

Answer (score 1):

I really appreciate the area models using differences, but here's the kind of algebraic manipulation I enjoy, assuming the product rule in place of negative exponents:

f' = ((f/g)g)' = (f/g)g' + (f/g)'g ⇒ (f' - (f/g)g')/g = (f/g)' ⇒ (f/g)' = (f'g - fg')/(g^2)

– Dave Marain, answered Apr 17, 2024 at 18:59

Answer (score -1):

Multiple answers have told us the quotient rule is just (to quote one of the comments) "an algebraic reframing of the product rule." But that misses something. In "differential algebra" one treats derivatives only algebraically, and that's exactly what is going on when you say that, starting with
$$w' = (uv)' = u'v + v'u$$
and just doing some algebra, you get
$$\left(\frac{w}{u}\right)' = \frac{w'u - u'w}{u^2}.$$
However, that assumes w/u is differentiable if w and u are differentiable. In differential algebra, everything is differentiable. In mathematical analysis, one wants to prove that this quotient, w/u, is differentiable if w and u are differentiable.
The (full-fledged) product rule will tell us that if w and 1/u are differentiable, then so is w·(1/u), i.e. w/u. So the problem is to prove that if u is differentiable, then so is 1/u (except at points where u = 0). To that end, proceed as follows:
$$\left(\frac{1}{u}\right)'(x) = \lim_{\Delta x\to 0} \frac{1/u(x+\Delta x) - 1/u(x)}{\Delta x}$$
$$= -\lim_{\Delta x\to 0} \frac{u(x+\Delta x) - u(x)}{\Delta x}\cdot\frac{1}{u(x+\Delta x)\,u(x)}$$
$$= -\lim_{\Delta x\to 0} \frac{u(x+\Delta x) - u(x)}{\Delta x}\cdot\lim_{\Delta x\to 0}\frac{1}{u(x+\Delta x)\,u(x)}$$
(this last "=" is true if both limits exist and are finite; the first one exists because u is differentiable, and the second exists because differentiable functions are continuous and u(x) ≠ 0)
$$= -u'(x)\cdot\frac{1}{u(x)^2}$$
(and here again, the second limit is what it is because differentiable functions are continuous).

Conclusion: the fact that 1/u is differentiable is not proved only by algebra plus the product rule.

– Michael Hardy, answered Jun 13, 2024 at 18:38

Comments:

This is true, but unfortunately does not really answer the question. – Tommi, Commented Jun 14, 2024 at 6:35

From the answer by Dave Marain: $f' = \left(\frac{f}{g}\,g\right)' = \frac{f}{g}\,g' + \left(\frac{f}{g}\right)'g \Rightarrow \frac{f' - \frac{f}{g}\,g'}{g} = \left(\frac{f}{g}\right)' \Rightarrow \left(\frac{f}{g}\right)' = \frac{f'g - fg'}{g^2}$. The derivative of the quotient exists iff the algebraic expression involving limits $\frac{f'g - fg'}{g^2}$ exists and g ≠ 0. This algebraic combination of limits exists if f and g are differentiable. – user52817, Commented Jun 14, 2024 at 22:39

@user52817: And as I said, nothing in Dave Marain's answer implies that if f and g are differentiable, then so is f/g. Rather, it shows that the expression given by the quotient rule is the derivative IF f/g is differentiable. – Michael Hardy, Commented Jun 15, 2024 at 18:50

The equality tells us $\lim_{h\to 0}\frac{(f/g)(x+h) - (f/g)(x)}{h} = \lim_{h\to 0}\frac{(f(x+h)-f(x))\,g(x) - f(x)\,(g(x+h)-g(x))}{h\,g(x)^2}$, so the quotient is differentiable. – user52817, Commented Jun 16, 2024 at 17:29

@user52817: But that is not in Dave Marain's answer.
– Michael Hardy, Commented Jun 18, 2024 at 22:22
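The first-order manipulations in the thread above all reduce to the same identity, (f/g)' = (f'g − fg')/g², which is easy to check numerically. Below is a minimal sketch (an editor's illustration, not part of the original thread; the sample functions sin x and x² + 1 are arbitrary choices) comparing the quotient-rule formula against a centered finite difference:

```python
import math

# Sample functions (arbitrary choices for illustration) and their derivatives.
def f(x):
    return math.sin(x)

def fp(x):
    return math.cos(x)      # f'(x)

def g(x):
    return x ** 2 + 1.0

def gp(x):
    return 2.0 * x          # g'(x)

x = 0.7
h = 1e-6

# Quotient rule: (f/g)'(x) = (f'(x) g(x) - f(x) g'(x)) / g(x)^2
quotient_rule = (fp(x) * g(x) - f(x) * gp(x)) / g(x) ** 2

# Centered finite difference of q(t) = f(t)/g(t); its truncation error is O(h^2).
def q(t):
    return f(t) / g(t)

finite_diff = (q(x + h) - q(x - h)) / (2.0 * h)

print(quotient_rule, finite_diff)
print(abs(quotient_rule - finite_diff) < 1e-8)  # the two estimates agree closely
```

The same check applies to the approximation (f+Δf)/(g+Δg) ≈ f/g + (gΔf − fΔg)/g² used in several answers: the terms neglected there are quadratic in Δf and Δg, which is exactly why the finite-difference error shrinks like h².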
793
https://fiveable.me/law-and-ethics-of-journalism/unit-9/objectivity-journalistic-norm/study-guide/39VSuJS0uGpGcrKK
Objectivity as a journalistic norm | Law and Ethics of Journalism Class Notes | Fiveable

Law and Ethics of Journalism, Unit 9 Review, Topic 9.4: Objectivity as a journalistic norm
Written by the Fiveable Content Team. Last updated September 2025.

Objectivity in journalism aims to report facts without bias or personal interpretation. It's a cornerstone of ethical reporting, emphasizing impartiality, neutrality, and fact-based coverage to maintain public trust in the media.

The concept emerged in the early 20th century as a response to yellow journalism. It was influenced by scientific thinking and wire services, aiming to restore credibility to the profession by focusing on facts and evidence.
Definition of objectivity

Objectivity is a fundamental principle in journalism that emphasizes reporting facts without bias, opinion, or personal interpretation. It is a cornerstone of ethical journalism and is essential for maintaining public trust in the media. Objectivity requires journalists to approach their work with an open mind, setting aside their own beliefs and prejudices to report the truth as accurately as possible.

Impartiality vs fairness

Impartiality refers to the absence of bias or favoritism in reporting, ensuring that all sides of an issue are presented equally. Fairness, on the other hand, involves treating all subjects of a story with respect and giving them a chance to respond to any allegations or criticism. While impartiality is a key aspect of objectivity, fairness ensures that journalism is conducted in an ethical and respectful manner.

Neutrality in reporting

Neutrality requires journalists to avoid taking sides in a story or presenting their own opinions as fact. This means reporting on events and issues without expressing personal judgment or advocating for a particular position. Neutral reporting allows readers to form their own opinions based on the facts presented.

Fact-based journalism

Objectivity demands that journalism be based on verifiable facts rather than speculation, rumor, or opinion. Journalists must gather and present evidence to support their reporting, using reliable sources and data. Fact-based journalism helps to ensure accuracy and credibility in reporting, building trust with the audience.

History of objectivity

The concept of objectivity in journalism emerged in the early 20th century as a response to the excesses of yellow journalism and propaganda. Prior to this, newspapers were often openly partisan, advocating for political parties or causes.

Emergence in 20th century

In the 1920s and 1930s, journalists began to embrace the idea of objectivity as a professional norm. This shift was influenced by the rise of scientific thinking
and the belief that facts could be separated from opinion. The advent of wire services like the Associated Press also contributed to the spread of objective reporting.

Reaction to yellow journalism

Yellow journalism, which was characterized by sensationalism, exaggeration, and even fabrication, eroded public trust in the media. Objectivity was seen as a way to restore credibility and professionalism to journalism. By focusing on facts and evidence, journalists sought to distinguish themselves from the sensationalist practices of yellow journalism.

Influence of scientific method

The scientific method, with its emphasis on observation, hypothesis testing, and verification, provided a model for objective journalism. Journalists began to see themselves as impartial observers, gathering and presenting facts without bias. The use of scientific methods, such as surveys and statistical analysis, also became more common in journalism.

Elements of objectivity

Objectivity in journalism is characterized by several key elements that work together to ensure impartial, fact-based reporting. These elements include accuracy, balance, separation of news and opinion, and minimization of bias.

Accuracy and verification

Accuracy is the foundation of objective journalism, requiring reporters to get the facts right and to verify information before publishing. This involves fact-checking, consulting multiple sources, and seeking out evidence to support claims. Inaccuracies, even if unintentional, can undermine the credibility of a story and the media outlet as a whole.

Balance and multiple perspectives

Balance requires journalists to present different sides of an issue, giving fair coverage to competing viewpoints. This helps to ensure that readers have a comprehensive understanding of a story and can form their own opinions. However, balance does not mean giving equal weight to all perspectives, especially when some views are not supported by facts.

Separation of news and opinion

Objective journalism
maintains a clear distinction between news reporting and opinion or commentary. News stories should present facts without the reporter's personal views, while opinion pieces should be clearly labeled as such. This separation helps readers to distinguish between objective reporting and subjective commentary.

Minimizing bias and subjectivity

Journalists must strive to minimize their own biases and subjective interpretations in their reporting. This involves being aware of one's own preconceptions and making an effort to set them aside when gathering and presenting information. Techniques such as using neutral language, avoiding loaded terms, and presenting multiple perspectives can help to reduce bias.

Challenges to objectivity

Despite its importance as a journalistic ideal, objectivity faces numerous challenges in practice. These challenges can arise from the inherent biases of journalists, the influence of media ownership and advertising, and the emotional impact of events.

Inherent biases of journalists

Journalists, like all individuals, have their own personal beliefs, experiences, and biases that can influence their reporting. These biases can be conscious or unconscious and can affect the selection of stories, sources, and framing of information. While complete elimination of bias may be impossible, journalists must strive to be aware of their own preconceptions and work to minimize their impact.

Influence of media ownership

Media outlets are often owned by large corporations or individuals with their own political and economic interests. These ownership structures can influence the editorial direction of a news organization and the stories that are covered or ignored. Journalists may face pressure to align their reporting with the interests of their employers, potentially compromising their objectivity.

Pressure from advertisers and sponsors

Media outlets rely on advertising revenue to fund their operations, which can create conflicts of interest. Advertisers may seek to
influence content or discourage reporting that could harm their business interests. Journalists must navigate these pressures while maintaining their commitment to objective reporting.

Emotional impact of events

Some news events, such as natural disasters, wars, or acts of violence, can have a profound emotional impact on journalists and the public. The desire to tell compelling stories and evoke emotional responses can sometimes lead to sensationalism or bias in reporting. Journalists must balance the need for empathy and human interest with the commitment to objective, fact-based reporting.

Objectivity in practice

Achieving objectivity in journalism requires a commitment to specific practices and techniques that promote accuracy, balance, and transparency. These practices include rigorous sourcing and fact-checking, presenting conflicting viewpoints, avoiding loaded language, and being transparent about the reporting process.

Sourcing and fact-checking

Objective journalism relies on credible, reliable sources to gather and verify information. Reporters must seek out multiple sources, including those with different perspectives, to ensure a comprehensive understanding of a story. Facts must be rigorously checked and verified before publication to maintain accuracy and credibility.

Presenting conflicting viewpoints

To provide a balanced perspective, journalists must present conflicting viewpoints on an issue, giving fair coverage to different sides of a story. This involves seeking out sources with diverse opinions and experiences and presenting their views accurately and impartially. However, not all viewpoints are equally valid, and journalists must use their judgment to prioritize facts and evidence over unsupported opinions.

Avoiding loaded language and framing

Objective reporting requires the use of neutral, unbiased language that does not prejudice the reader for or against a particular viewpoint. Journalists must avoid loaded terms, stereotypes, and framing that can skew
the perception of a story. This involves being mindful of word choice, context, and the potential impact of language on the audience.

Transparency in reporting process

Transparency is essential for building trust with the audience and demonstrating a commitment to objectivity. Journalists should be open about their sources, methods, and any potential conflicts of interest. This can involve providing links to source materials, explaining the reporting process, and acknowledging any limitations or uncertainties in the information presented.

Critiques of objectivity

While objectivity remains a central tenet of journalism, it has faced criticism from various perspectives. Some argue that true objectivity is impossible, while others contend that the pursuit of objectivity can sometimes lead to false balance or the obscuring of important power dynamics.

Impossibility of true neutrality

Critics argue that complete neutrality is an unattainable ideal, as all individuals, including journalists, have inherent biases and perspectives. The selection of stories, sources, and framing inevitably involves subjective choices that can shape the narrative. Rather than striving for an impossible ideal, some suggest that journalists should be transparent about their perspectives and focus on fairness and accuracy.

False balance and false equivalence

The pursuit of balance can sometimes lead to false equivalence, where two opposing viewpoints are presented as equally valid, even when one is not supported by facts. This can create a misleading impression of a story and give undue credibility to fringe or unsupported opinions. Journalists must use their judgment to prioritize facts and evidence over the desire for artificial balance.

Obscuring power dynamics and inequalities

Some critics argue that the objective model of journalism can obscure important power dynamics and social inequalities. By focusing on presenting both sides of an issue, journalists may fail to adequately address systemic
injustices or hold the powerful accountable. This critique suggests that journalism should go beyond mere objectivity and actively work to expose and challenge power imbalances.

Limitations in complex situations

The objective model of journalism can sometimes struggle to adequately capture the nuances and complexities of certain stories. In situations involving moral ambiguity, competing values, or systemic issues, a purely fact-based approach may not provide a complete understanding. Critics argue that journalism should incorporate more interpretive and explanatory elements to help the audience navigate complex issues.

Alternatives to objectivity

In response to the critiques of objectivity, some journalists and media theorists have proposed alternative frameworks for ethical and responsible journalism. These alternatives emphasize values such as fairness, transparency, advocacy, and interpretive reporting.

Fairness and accuracy

Rather than striving for an unattainable ideal of objectivity, some argue that journalism should prioritize fairness and accuracy. This involves presenting information in a balanced and impartial manner, while also acknowledging the inherent limitations of neutrality. By focusing on fairness and accuracy, journalists can build trust with the audience and provide a reliable account of events.

Transparency and disclosure

Transparency involves being open and honest about the reporting process, sources, and any potential biases or conflicts of interest. By disclosing their methods and perspectives, journalists can build credibility and allow the audience to assess the reliability of the information presented. Transparency can also involve engaging with the audience and responding to feedback and criticism.

Advocacy and social responsibility

Some journalists and media outlets embrace an advocacy role, using their platform to promote social justice and hold the powerful accountable. This approach sees journalism as a means of effecting positive change and
addressing systemic inequalities. While advocacy journalism may sacrifice some degree of objectivity, proponents argue that it fulfills a vital social responsibility.

Interpretive and explanatory journalism

Interpretive and explanatory journalism goes beyond simple fact reporting to provide context, analysis, and interpretation of complex issues. This approach recognizes that facts alone may not always provide a complete understanding of a story and that journalists have a role in helping the audience make sense of the information. Interpretive journalism requires a deep understanding of the subject matter and the ability to convey complex ideas in an accessible manner.

Objectivity in the digital age

The rise of digital media and the internet has posed new challenges and opportunities for objective journalism. The proliferation of information sources, the spread of misinformation, and the influence of algorithms have all shaped the modern media landscape.

Impact of social media and citizen journalism

Social media platforms have democratized the production and dissemination of news, allowing citizens to engage in journalism and share information. While this has broadened the range of voices and perspectives in the media, it has also led to the spread of misinformation and unverified claims. Professional journalists must navigate this new environment while upholding standards of objectivity and accuracy.

Proliferation of misinformation and fake news

The digital age has seen a rise in the spread of misinformation, propaganda, and deliberately false or misleading content. This "fake news" can undermine public trust in the media and make it harder for objective journalism to cut through the noise. Journalists must be vigilant in fact-checking and debunking false claims, while also working to educate the public about media literacy.

Algorithms and filter bubbles

The algorithms used by search engines and social media platforms can create "filter bubbles" that limit exposure to diverse
perspectives and reinforce existing beliefs. This can lead to a fragmentation of the media landscape and a decline in shared understanding of facts and events. Journalists must be aware of these algorithmic influences and work to provide balanced and objective reporting that reaches across ideological divides.

Need for media literacy and critical thinking

In an era of information overload and competing narratives, media literacy and critical thinking skills are essential for navigating the modern media landscape. Journalists have a role in promoting these skills and helping the public to evaluate the credibility and reliability of information sources. By fostering a more media-literate society, journalists can help to create a more informed and engaged citizenry that values objective, fact-based reporting.
794
https://jlmartin.ku.edu/courses/math409-S13/polyhedra.pdf
Notes on polyhedra and 3-dimensional geometry
Judith Roitman / Jeremy Martin
April 23, 2013

1 Polyhedra

Three-dimensional geometry is a very rich field; this is just a little taste of it. Our main protagonist will be a kind of solid object known as a polyhedron (plural: polyhedra). Its characteristics are:
• it is made up of polygons glued together along their edges
• it separates R^3 into itself, the space inside, and the space outside
• the polygons it is made of are called faces
• the edges of the faces are called the edges of the polyhedron
• the vertices of the faces are called the vertices of the polyhedron.

The most familiar example of a polyhedron is a cube. Its faces are squares, and it has 6 of them. It also has 12 edges and 8 vertices. Another familiar example is a pyramid. A pyramid has a bottom face, which can be any polygon (you are probably most familiar with pyramids that have square bottoms), and the rest of its faces meet in one point.

[Figures: pentagonal pyramid, square pyramid, cube]

Problem #1 If the bottom face of a pyramid has n sides, how many faces, edges, and vertices will it have?

Another familiar example is a prism, which is a polyhedron with two congruent parallel faces in which the other faces are rectangles. The two congruent faces can be triangles, quadrilaterals, or anything else. Another, perhaps slightly less familiar example is a bipyramid, which is built by taking two pyramids with congruent bases and gluing the bases together, so that only the triangular faces are left.

[Figures: triangular prism, hexagonal prism, pentagonal bipyramid]

Problem #2 If one of the parallel faces of a prism has n sides, how many faces, edges, and vertices will the prism have?

Problem #3 If the base (or “equator”) of a bipyramid has n sides, how many faces, edges, and vertices will the bipyramid have?

While there are lots of different polyhedra, they all have some common features just by virtue of being polyhedra.

Theorem 1. In any polyhedron, ...
• Every vertex must lie in at least three faces. (Otherwise, the polyhedron collapses to have no volume.)
• Every face has at least three vertices. (It’s a polygon, so it better have at least three sides.)
• Every edge must lie in exactly two faces. (Otherwise, the polyhedron wouldn’t have an inside and an outside.)

As usual, you can learn a lot by playing with non-standard examples. For example:

Problem #4 Can you construct a polyhedron with two parallel faces, one a triangle, the other a rectangle?
Problem #5 Can you construct a polyhedron so that exactly one face is not a quadrilateral?
Problem #6 Can you construct a polyhedron so that exactly one face is not a rectangle?
Problem #7 Can you construct a polyhedron in which every face is a hexagon?

2 Euler’s formula

Let v, e, and f be the numbers of vertices, edges and faces of a polyhedron. For example, if the polyhedron is a cube then v = 8, e = 12 and f = 6.

Problem #8 Make a table of the values for the polyhedra shown above, as well as the ones you have built. What do you notice?

You should observe that v − e + f = 2 for all these polyhedra. This relationship is called Euler’s formula, and it is vitally important in geometry, topology, and many other areas of mathematics. (The idea of adding up things in different dimensions, counting even dimensions as positive and odd dimensions as negative, just comes up everywhere.)

Here is a cute proof of Euler’s formula, from p. 198 of George E. Martin (no relation), Transformation Geometry: An Introduction to Symmetry (Springer-Verlag, 1982, New York).

To prove the famous formula, imagine that all the edges of a convex polyhedron are dikes, exactly one face contains the raging sea, and all other faces are dry. We break dikes one at a time until all the faces are inundated, following the rule that a dike is broken only if this results in the flooding of a face. Now, after this rampage, we have flooded f − 1 faces and so destroyed exactly f − 1 dikes.
Noticing that we can walk with dry feet along the remaining dikes from any vertex p to any other vertex along exactly one path, we conclude there is a one-to-one correspondence between the remaining dikes and the vertices excluding p. Hence there remain exactly v − 1 unbroken dikes. So e = (f − 1) + (v − 1) and we have proved Euler’s formula.

This is a vivid, dramatic proof. But is it correct? If we take away the colorful imagery, the proof boils down to four assertions about what happens after all the dikes are broken.

Assertion #1: We have flooded f − 1 faces.
Assertion #2: We can walk with dry feet along the unbroken edges from any vertex to any other vertex.
Assertion #3: There is a unique path consisting of unbroken edges connecting any two vertices.
Assertion #4: There are exactly v − 1 unbroken edges.

Our task is to show that (i) these assertions are correct and (ii) they imply Euler’s formula.

Proof of assertion #1: The only face we have not flooded is the one which originally contained the raging sea.

Proof of assertion #2: What if there is some pair of vertices that are cut off from each other? Then there must have been some bridge B such that the dike network was connected just before B was broken, and disconnected just after B was broken. But that could only happen if the raging sea had already flooded both sides of B — and in that case we wouldn’t have broken B in the first place. So it’s impossible for there to be two mutually un-walkable-between vertices.

Proof of assertion #3: Suppose, by way of contradiction, that there were two different paths between some pair of vertices. But then those paths would enclose some unflooded region, and that means that we didn’t break enough bridges.

Proof of assertion #4: Pick a vertex p. There are exactly v − 1 vertices other than p, and for each other vertex q, there is a unique edge f(q) that is the first step from q to p.
If q and q′ are different vertices, then it cannot be the case that f(q) = f(q′) (otherwise one or both of the paths to p involves doubling back). On the other hand, every edge is f(q) for some q (because the network is connected, so it’s possible to walk from p towards that edge and eventually cross it). That is, the function f : {vertices other than p} → {edges} is a bijection, and so e = v − 1.

2.1 Inequalities from Euler’s formula

When combined with other observations about the numbers v, e and f, Euler’s formula has other consequences. Remember that the degree of a vertex is the number of edges attached to it. For instance, all vertices in a cube have degree 3, while all vertices in an octahedron have degree 4. The degrees don’t have to be the same in an arbitrary polyhedron: in a pentagonal pyramid (see p. 1), the apex of the pyramid has degree 5, while each of the base vertices has degree 3.

If you add up all the degrees of vertices in a polyhedron (in fact, in any graph), each edge will be counted twice. That is,

(degree of vertex #1) + (degree of vertex #2) + · · · + (degree of vertex #v) = 2e. (1)

If you add up the numbers of edges in all the faces of a polyhedron, you will again count each edge twice (because each edge lies in exactly two adjacent faces). That is,

(number of edges in face #1) + (number of edges in face #2) + · · · + (number of edges in face #f) = 2e. (2)

For example, a cube has 8 vertices of degree 3 each (so the sum of all degrees is 24), and 6 quadrilateral faces (so the sum in (2) is also 24), and 12 (= 24/2) edges.

Formula (1) goes by the name of the Handshaking Theorem in graph theory — if you think of the vertices as people and each edge as a handshake between two people, then the degree of v is the number of people with whom v shakes hands, so the theorem says that adding up those numbers for all people, then dividing by 2, gives the total number of handshakes.
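These counting rules are easy to check by machine. The sketch below (not part of the notes) uses the (v, e, f) counts that one derives for the pyramid, prism, and bipyramid families of Problems #1–3, and confirms Euler's formula v − e + f = 2 for each:

```python
# Sketch (not from the notes): verify Euler's formula v - e + f = 2 for the
# pyramid, prism, and bipyramid families.  The (v, e, f) formulas below are
# the standard answers to Problems #1-3.

def pyramid(n):          # base with n sides: n base vertices + apex
    return n + 1, 2 * n, n + 1          # (v, e, f)

def prism(n):            # two parallel n-gon faces joined by rectangles
    return 2 * n, 3 * n, n + 2

def bipyramid(n):        # two pyramids glued along an n-gon "equator"
    return n + 2, 3 * n, 2 * n

for shape in (pyramid, prism, bipyramid):
    for n in range(3, 12):
        v, e, f = shape(n)
        assert v - e + f == 2, (shape.__name__, n)

print("Euler's formula holds for every pyramid, prism, and bipyramid tested")
```

Note that the cube is both a square prism and the n = 4 pyramid's "big sibling": prism(4) gives (8, 12, 6), the familiar cube counts.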
Problem #9 Verify that these general rules are true for your favorite polyhedra.

Now every face has to have at least 3 sides, so the sum in (2) has to be at least 3f. Combining this observation with (2) tells us that 3f ≤ 2e, or equivalently f ≤ 2e/3. Now, substitute this inequality into Euler’s formula to get rid of the f:

2 = v − e + f ≤ v − e + 2e/3 = v − e/3,

or 6 ≤ 3v − e, or e ≤ 3v − 6.

Also, every vertex has to have degree at least 3, so the same calculation says that e ≤ 3f − 6, or equivalently 2e/f ≤ 6 − 12/f < 6.

So what? Well, 2e/f is the average number of edges in a face (just because there are f faces in total, and the sum of their numbers of edges is 2e). Therefore, we have proved:

Theorem 2. In every polyhedron, the average number of sides in a face is less than 6.

In particular, it’s impossible to build a polyhedron all of whose faces are hexagons!

3 Cubes, cubes and more cubes

What does a 4-dimensional cube look like? This is a scary-sounding question — how can you see anything in 4-dimensional space? But if we work by analogy and use what we can see about ordinary (i.e., visible) 1-, 2- and 3-dimensional spaces, then we can get a pretty good idea.

We know what a 3-dimensional cube looks like. What’s a 2-dimensional cube? It’s a square — the thing you get by pressing a cube flat. And if you press the square flat, you get a line segment (a 1-dimensional cube), and if you press the line segment flat, you get a single point (a 0-dimensional cube).

[Figure: Q3 squashes to Q2, Q2 squashes to Q1, Q1 squashes to Q0]

Of course, in the preceding paragraph, the word “cube” is being used more generally than it usually is; it’s shorthand for “thing that is analogous to a cube but happens to live in a different-dimensional space.” A common notation for the n-dimensional cube is Qn: so Q0 is a point, Q1 is a line segment, Q2 is a square, and Q3 is the familiar three-dimensional cube.

We’ve seen how to make smaller-dimensional cubes from bigger ones. What about the reverse process?
To make a square (Q2) out of a line segment (Q1), you can make two copies of the line segment and attach each corresponding pair of vertices with a new line segment. This same procedure lets you build a cube (Q3) starting with a square (Q2), or even a line segment (Q1) from a point (Q0). So, what about Q4? By analogy, you can build Q4 by starting with two copies of Q3 and attaching each corresponding pair of vertices with a line segment.

It’s not so easy to see the symmetry in this picture, but miraculously there is a beautiful and surprising other way to draw Q4. Make a 7 × 7 chessboard and place dots in the middle squares of the bottom and top rows. Then, draw in all the possible ways that a knight can move from one dot to the other in four moves. The result is Q4!

How many copies of Q3 are there inside Q4? Since
• there are two Q0’s inside Q1 (i.e., a line segment has two points);
• there are four Q1’s inside Q2 (i.e., a square has four sides);
• there are six Q2’s inside Q3 (i.e., a cube has six faces);
if this pattern continues, then the answer should be eight. You can see this by coloring the edges of Q4. Now each set of three colors (and there are four such sets) can be used to make two Q3’s. For example, consider the two Q3’s with red, green and blue (but no purple) edges.

In general, how many copies of Qk sit inside Qn? If we call this number f(n, k) and make a table of values, we notice a variety of wonderful patterns:

        k = 0   k = 1   k = 2   k = 3   k = 4   k = 5
n = 0       1       0       0       0       0       0
n = 1       2       1       0       0       0       0
n = 2       4       4       1       0       0       0
n = 3       8      12       6       1       0       0
n = 4      16      32      24       8       1       0
n = 5      32      80      80       ?      10       1

Here are some of the patterns:
• f(n, n) = 1. This is pretty simple; everything contains one copy of itself!
• f(n, 0) = 2^n. A point has 1 vertex, a line segment has two, a square has four, a 3D cube has eight, . . . In general, since you can build Qn out of two copies of Qn−1, it makes sense that the number of vertices doubles each time.
• f(n, n − 1) = 2n.
We already noticed this.

• What about f(n, 1), i.e., the number of edges (i.e., line segments)? This is the k = 1 column of the table. The pattern is not as obvious, but it turns out to be f(n, 1) = n · 2^(n−1). Here’s why. Each vertex in Qn has degree n, and there are 2^n vertices, so adding up all the degrees gives n · 2^n. This, we know, is twice the number of edges. Therefore, there are (n · 2^n)/2 = n · 2^(n−1) edges.

• What about f(n, 2), i.e., the number of squares in Qn? This is the k = 2 column of the table, and the pattern is even less obvious, but fortunately we can use a similar counting technique. First of all, every vertex belongs to n edges, and each pair of those edges forms a square. So every vertex belongs to C(n, 2) = n(n − 1)/2 squares, and multiplying this by the number of vertices gives (n(n − 1)/2) · 2^n = n(n − 1) · 2^(n−1). On the other hand, we have now counted each square four times — once for each of its corner vertices. Therefore, the actual number of squares in Qn is

f(n, 2) = n(n − 1) · 2^(n−1) / 4 = n(n − 1) · 2^(n−3).

Using these observations, we can fill out almost all of that table, except for the question mark for f(5, 3).

What about Euler’s formula? In this notation, the relation v − e + f = 2 for the cube Q3 becomes f(3, 0) − f(3, 1) + f(3, 2) = 2. This is true: 8 − 12 + 6 = 2. What if we look at the alternating sum of numbers for each row (ignoring the 1’s)?

n = 1 : f(1, 0) = 2 = 2
n = 2 : f(2, 0) − f(2, 1) = 4 − 4 = 0
n = 3 : f(3, 0) − f(3, 1) + f(3, 2) = 8 − 12 + 6 = 2
n = 4 : f(4, 0) − f(4, 1) + f(4, 2) − f(4, 3) = 16 − 32 + 24 − 8 = 0
n = 5 : f(5, 0) − f(5, 1) + f(5, 2) − f(5, 3) + f(5, 4) = 32 − 80 + 80 − ? + 10 = . . .

If this pattern continues (and it does!), then the alternating sum for n = 5 should be 2, which means that the question mark should be f(5, 3) = 40. In fact, this is a general rule about polyhedra in all dimensions!
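The whole table can be checked at once. The entries match the standard closed form f(n, k) = C(n, k) · 2^(n−k), which the notes do not state explicitly but which agrees with every pattern above (f(n, 0) = 2^n, f(n, 1) = n · 2^(n−1), f(n, 2) = n(n−1) · 2^(n−3), f(n, n−1) = 2n). A short sketch confirming the table and the alternating-sum rule:

```python
# Sketch (closed form f(n, k) = C(n, k) * 2^(n-k) is an assumption not stated
# in the notes, but it reproduces the whole table).  The alternating row sums
# come out to 2 (n odd) or 0 (n even), which pins down the missing entry.
from math import comb

def f(n, k):
    """Number of copies of Q_k sitting inside Q_n."""
    return comb(n, k) * 2 ** (n - k)

assert (f(3, 0), f(3, 1), f(3, 2)) == (8, 12, 6)   # the ordinary cube

for n in range(1, 8):
    alt = sum((-1) ** k * f(n, k) for k in range(n))   # ignore the final 1
    assert alt == (2 if n % 2 == 1 else 0)

print("f(5, 3) =", f(5, 3))   # prints: f(5, 3) = 40
```

The alternating-sum check also follows from the binomial theorem: summing (−1)^k · C(n, k) · 2^(n−k) over all k gives (2 − 1)^n = 1, and dropping the last term (±1) leaves 2 or 0.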
For any n-dimensional polyhedron, the alternating sum

number of vertices (0-dimensional pieces)
− number of edges (1-dimensional pieces)
+ number of faces (2-dimensional pieces)
− number of 3-dimensional pieces (whatever they’re called)
. . .
± number of (n − 1)-dimensional pieces

comes out to either 2 (when n is odd) or 0 (when n is even).

4 The Platonic solids

Definition 1. A polyhedron is called regular (or a Platonic solid) iff (a) all of its faces are congruent; (b) all of its faces are regular polygons; and (c) each of its vertices meets the same number of edges as every other vertex.

So cubes are Platonic, but most prisms and pyramids are not. You may have heard of the following Platonic solids besides the cube:
• tetrahedron (the faces are 4 equilateral triangles);
• octahedron (8 equilateral triangles);
• dodecahedron (12 regular pentagons);
• icosahedron (20 equilateral triangles).

[Figures: tetrahedron, cube, octahedron, dodecahedron, icosahedron]

You haven’t heard of any others because. . .

Theorem 3. There are exactly five Platonic solids.

This is a surprising theorem. Frequently when we define a nice class of objects in mathematics it is large, in fact usually infinite (think of the set of primes, the set of isometries of the plane, the collection of all regular polygons...). But this class is not only finite; it has only five things in it.

One part of the proof — that there are at least five Platonic solids — is already done. We know what they are. In fact you’ve built them out of Zometool. So the interesting part is proving that there aren’t any more. In order to do this, you need to think carefully about how you fit polygons together to make polyhedra — both the numerical observations that we made above (see Theorem 1), and some geometric facts: for example, thinking about the angles that faces meet at.

Problem #10 Fold a piece of paper. Cut it so that you have two congruent polygons joined at the fold.
Let p be a point at the edge of the fold, and let ℓ, m be the sides through p which do not coincide with the fold. Fold and unfold the paper so that you narrow and expand the angle between ℓ and m at p. When is this angle the biggest possible? If the two faces were a piece of a polyhedron, would you need at least one more face at p? Could you have more than one more face at p?

Problem #11 Fit some polygons together at a vertex p to start a polyhedron. What is the sum of the angles touching p? Now flatten the part touching p (you will have to cut at least one edge to do this). No matter what your polyhedron is, can the sum of the angles touching p sum up to more than 360◦? Can they sum up to exactly 360◦? Explain briefly.

Platonic solids are very special, because each vertex must belong to the same number of faces, say n, and each face must be a polygon with the same number of sides, say s. What can we say about these numbers? We already know that s ≥ 3 and n ≥ 3 (see Theorem 1). Each face F of P is a regular polygon with s sides and s angles. The sum of the angles of F is 180(s − 2), so each single angle measures 180(s − 2)/s. Therefore, if we consider the n faces that fit together at a single vertex, we see that their angles add up to

180(s − 2)n / s.

On the other hand, we know that this quantity has to be < 360◦. If n and s are too large, then this condition will fail. So we can figure out all the possibilities just by brute force:

        n = 3   n = 4   n = 5   n = 6   · · ·
s = 3     180     240     300     360   · · ·
s = 4     270     360     450     · · ·
s = 5     324     432     · · ·
s = 6     360     · · ·

There are evidently only five possibilities:
• the faces are equilateral triangles (s = 3); the number n of faces meeting at a vertex may be 3 (the tetrahedron), 4 (the octahedron), or 5 (the icosahedron).
• the faces are squares (s = 4); the number of faces meeting at a vertex must be 3 (the cube).
• the faces are regular pentagons (s = 5); the number of faces meeting at a vertex must be 3 (the dodecahedron).
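The brute-force search in the table above is a few lines of code. This sketch enumerates all pairs (s, n) with s, n ≥ 3 for which the angle sum 180(s − 2)n / s stays strictly below 360°; the upper bounds of the loops are safe because at s = 6 a face angle is already 120°, so three faces reach 360°, and similarly n = 6 triangles reach 360°:

```python
# Brute-force search for Platonic solid candidates: n regular s-gons meeting
# at a vertex, with total angle 180*(s-2)*n/s strictly less than 360 degrees.
# Loops stop at 6 because the angle sum already reaches 360 there.

solids = []
for s in range(3, 7):
    for n in range(3, 7):
        if 180 * (s - 2) * n / s < 360:
            solids.append((s, n))

print(solids)   # prints: [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]
```

The five pairs are exactly the tetrahedron (3, 3), octahedron (3, 4), icosahedron (3, 5), cube (4, 3), and dodecahedron (5, 3).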
This almost proves the theorem. Except, how do you know that the parameters s and n determine the polytope?

Problem #12 Suppose you have a bunch of equilateral triangles, squares, or regular pentagons. Suppose you are told how many of such faces each vertex of a regular polyhedron must meet. Suppose your constraints are consistent with conclusion 2. Then there is exactly one way to construct the regular polyhedron, and it must be one of those listed in conclusion 2.
795
https://study.com/academy/lesson/product-to-sum-identities-uses-applications.html
Product-to-Sum Identities | Formula, Derivation & Examples - Lesson | Study.com
Certifications Teacher Certification Exams Nursing Exams Real Estate Exams Military Exams Finance Exams Human Resources Exams Counseling & Social Work Exams Allied Health & Medicine Exams All Test Prep Teacher Certification Exams Praxis Test Prep FTCE Test Prep TExES Test Prep CSET & CBEST Test Prep All Teacher Certification Test Prep Nursing Exams NCLEX Test Prep TEAS Test Prep HESI Test Prep All Nursing Test Prep Real Estate Exams Real Estate Sales Real Estate Brokers Real Estate Appraisals All Real Estate Test Prep Military Exams ASVAB Test Prep AFOQT Test Prep All Military Test Prep Finance Exams SIE Test Prep Series 6 Test Prep Series 65 Test Prep Series 66 Test Prep Series 7 Test Prep CPP Test Prep CMA Test Prep All Finance Test Prep Human Resources Exams SHRM Test Prep PHR Test Prep aPHR Test Prep PHRi Test Prep SPHR Test Prep All HR Test Prep Counseling & Social Work Exams NCE Test Prep NCMHCE Test Prep CPCE Test Prep ASWB Test Prep CRC Test Prep All Counseling & Social Work Test Prep Allied Health & Medicine Exams ASCP Test Prep CNA Test Prep CNS Test Prep All Medical Test Prep College Degrees College Credit Courses Partner Schools Success Stories Earn credit Sign Up Copyright Math Courses / Precalculus: High School Course Product-to-Sum Identities | Formula, Derivation & Examples Lesson Transcript Alireza Farvard, Yuanxin (Amy) Yang Alcocer Author Alireza Farvard Alireza has taught K-12 mathematics for 5+ years. Over the course of his career, Alireza has written and developed lesson handouts, worksheets and assessments for a wide range of levels. He studies Computer Science (BA) at York University. View bio Instructor Yuanxin (Amy) Yang Alcocer Amy has a master's degree in secondary education and has been teaching math for over 9 years. Amy has worked with students at all levels from those with special needs to those that are gifted. View bio Learn about product-to-sum trigonometric identities. 
Discover how to express sine and cosine relationships as both the product and sum of trigonometric functions. Updated: 11/21/2023

Frequently Asked Questions

How do you use the product to sum formulas?
In order to use product to sum formulas, simply substitute the values from the given expression into the corresponding product to sum formula. Depending on what is asked, convert product to sum or sum to product and simplify. If needed, product to sum formulas can be modified by multiplying or dividing both sides by a number.

How do you find the sum of a product?
Use the product to sum identities/formulas to find the corresponding sum of a product of sine and cosine functions. After finding the formula which contains the required sum, the formula can be rearranged accordingly to make it easier to use.

How do you convert sum to product?
The four product to sum identities can also be used to convert a sum of sine and cosine functions into its corresponding product. The identities can be used as is, or can be rearranged to isolate the needed sum or difference.

How do you write a product as a sum?
In order to write a product as a sum, use the four product to sum trigonometric identities. After finding the formula that contains the required product, substitute the values from the product into the sum side of the formula and simplify.

What is the product to sum formula?
A product to sum formula or identity is a trigonometric identity used to convert a product of sines and cosines to a sum, and vice versa. There are four product to sum identities in total, accounting for all possible combinations of products of sines and cosines.
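To make the FAQ answers concrete, a product-to-sum conversion can be checked numerically. The sketch below (in Python, with arbitrarily chosen angles) verifies that the product sin(α)cos(β) always equals its corresponding sum form, (1/2)[sin(α+β) + sin(α−β)]:

```python
import math
import random

def product_to_sum_check(alpha, beta):
    """Check sin(a)cos(b) == 1/2[sin(a+b) + sin(a-b)] numerically."""
    product = math.sin(alpha) * math.cos(beta)
    as_sum = 0.5 * (math.sin(alpha + beta) + math.sin(alpha - beta))
    return math.isclose(product, as_sum, abs_tol=1e-12)

# Sample 1000 random angle pairs; seed chosen arbitrarily for repeatability.
random.seed(0)
all_pass = all(
    product_to_sum_check(random.uniform(-math.pi, math.pi),
                         random.uniform(-math.pi, math.pi))
    for _ in range(1000)
)
```

A numerical spot check like this cannot prove the identity, but it is a quick sanity test when you are unsure which of the four formulas applies.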
Product to Sum Identities
-------------------------
Product to sum identities are a set of trigonometric identities used to convert the product of sine and cosine expressions to a sum, and vice versa. A product to sum identity, also called a product to sum formula, can be used to simplify a trigonometric expression that involves the product or sum of sine and/or cosine functions.

[Figure: Product to Sum and Sum to Product Formulas.]

Proving trigonometric identities or solving trigonometric equations sometimes requires a product expression, for example to find a common factor or to cancel an expression out. In other cases, a sum expression is more useful, making it easy to move a term to the other side of an equation to form another identity or to simplify. In all these cases the product to sum identities can be used. They can be applied directly to simplify or prove a more complex identity, or as one step in simplifying a trigonometric expression so that other known trigonometric identities can then be used. There are a total of four product to sum identities, one for each possible combination of products of sine and cosine functions.
List of Product to Sum Identities
---------------------------------
Below is a list of the product to sum identities. It is important to note that these formulas can be used to convert trigonometric products to sums, and sums to products as well.

sin(α)cos(β) = (1/2)[sin(α+β) + sin(α−β)]
cos(α)sin(β) = (1/2)[sin(α+β) − sin(α−β)]
cos(α)cos(β) = (1/2)[cos(α+β) + cos(α−β)]
sin(α)sin(β) = (1/2)[cos(α−β) − cos(α+β)]

Note that in some cases it may be useful to modify these formulas to make them fit a certain application better. This includes, but is not limited to, multiplying both sides of the equation by two to remove the 1/2.

[Figure: Modified Product to Sum Identity.]

How to Derive the First Product to Sum Identity
The product to sum identities can be derived from the sum and difference identities. For the first identity listed above, use the sine sum identity and the sine difference identity.
Recall the trigonometric identities listed below:
sin(α+β) = sin(α)cos(β) + cos(α)sin(β)
sin(α−β) = sin(α)cos(β) − cos(α)sin(β)

Adding these two formulas results in the following:
sin(α+β) + sin(α−β) = sin(α)cos(β) + cos(α)sin(β) + sin(α)cos(β) − cos(α)sin(β)

Canceling the expression cos(α)sin(β) with −cos(α)sin(β) on the right side of the equation:
sin(α+β) + sin(α−β) = sin(α)cos(β) + sin(α)cos(β)

Combining the like terms on the right side results in:
sin(α+β) + sin(α−β) = 2 sin(α)cos(β)

Now divide both sides of the equation by two:
[sin(α+β) + sin(α−β)]/2 = 2 sin(α)cos(β)/2

Canceling the two on the right side gives the end result:
[sin(α+β) + sin(α−β)]/2 = sin(α)cos(β)

Which can also be written as:
(1/2)[sin(α+β) + sin(α−β)] = sin(α)cos(β)

The process above shows how to derive the first product to sum identity listed above.

How to Derive the Second Product to Sum Identity
For the second identity, use the same two sum and difference identities used above. However, this time instead of adding, subtract the sine difference identity from the sine sum identity:
sin(α+β) − sin(α−β) = sin(α)cos(β) + cos(α)sin(β) − [sin(α)cos(β) − cos(α)sin(β)]

Distribute the negative sign on the right side of the equation:
sin(α+β) − sin(α−β) = sin(α)cos(β) + cos(α)sin(β) − sin(α)cos(β) + cos(α)sin(β)

Cancel the expression sin(α)cos(β) with −sin(α)cos(β):
sin(α+β) − sin(α−β) = cos(α)sin(β) + cos(α)sin(β)

After simplifying:
sin(α+β) − sin(α−β) = 2 cos(α)sin(β)
[sin(α+β) − sin(α−β)]/2 = 2 cos(α)sin(β)/2
[sin(α+β) − sin(α−β)]/2 = cos(α)sin(β)
(1/2)[sin(α+β) − sin(α−β)] = cos(α)sin(β)

This concludes the derivation of the second product to sum identity listed above.

How to Derive the Third Product to Sum Identity
For the third identity, use the cosine sum identity and the cosine difference identity.
Recall the following two trigonometric identities:
cos(α+β) = cos(α)cos(β) − sin(α)sin(β)
cos(α−β) = cos(α)cos(β) + sin(α)sin(β)

Add these two identities:
cos(α+β) + cos(α−β) = cos(α)cos(β) − sin(α)sin(β) + cos(α)cos(β) + sin(α)sin(β)

Canceling the expressions −sin(α)sin(β) and sin(α)sin(β) results in:
cos(α+β) + cos(α−β) = cos(α)cos(β) + cos(α)cos(β)

After simplifying:
cos(α+β) + cos(α−β) = 2 cos(α)cos(β)
[cos(α+β) + cos(α−β)]/2 = 2 cos(α)cos(β)/2
[cos(α+β) + cos(α−β)]/2 = cos(α)cos(β)
(1/2)[cos(α+β) + cos(α−β)] = cos(α)cos(β)

This shows the derivation of the third product to sum identity.

How to Derive the Fourth Product to Sum Identity
For the last identity, subtract the cosine sum identity from the cosine difference identity:
cos(α−β) − cos(α+β) = cos(α)cos(β) + sin(α)sin(β) − [cos(α)cos(β) − sin(α)sin(β)]
cos(α−β) − cos(α+β) = cos(α)cos(β) + sin(α)sin(β) − cos(α)cos(β) + sin(α)sin(β)

Now cancel the expressions cos(α)cos(β) and −cos(α)cos(β):
cos(α−β) − cos(α+β) = sin(α)sin(β) + sin(α)sin(β)

After simplifying:
cos(α−β) − cos(α+β) = 2 sin(α)sin(β)
[cos(α−β) − cos(α+β)]/2 = 2 sin(α)sin(β)/2
[cos(α−β) − cos(α+β)]/2 = sin(α)sin(β)
(1/2)[cos(α−β) − cos(α+β)] = sin(α)sin(β)

Finally, the last product to sum identity has been derived.

Product to Sum Identities Examples
----------------------------------
Below are a few examples to better understand product to sum identities and how to use them.

Example 1: Express the following expression as a sum or difference: 8 sin(3θ)cos(2θ).

Since this is a product of a sine and a cosine function, use sin(α)cos(β) = (1/2)[sin(α+β) + sin(α−β)] to convert the product to a sum:
8 sin(3θ)cos(2θ) = 8[sin(3θ)cos(2θ)]
= 8[(1/2)[sin(3θ+2θ) + sin(3θ−2θ)]]
= (8/2)[sin(5θ) + sin(θ)]
= 4[sin(5θ) + sin(θ)]
= 4 sin(5θ) + 4 sin(θ)

Example 2: Express the following expression as a product: 3[cos(6θ) + cos(2θ)].
Since this shows the sum of two cosines, use cos(α)cos(β) = (1/2)[cos(α+β) + cos(α−β)]. First, find α and β. Knowing α+β = 6θ and α−β = 2θ, form a system of equations:
α+β = 6θ
α−β = 2θ

After solving the system: α = 4θ and β = 2θ.

Before continuing with the formula, since the 1/2 is not included in the given expression, modify the formula by multiplying by 2 to make it easier to use in this case:
2[cos(α)cos(β)] = 2[(1/2)[cos(α+β) + cos(α−β)]]
2[cos(α)cos(β)] = cos(α+β) + cos(α−β)

Now use this modified formula to convert the sum to a product:
3[cos(6θ) + cos(2θ)] = 3[2[cos(4θ)cos(2θ)]]
= 6 cos(4θ)cos(2θ)

Example 3: Evaluate the following: 4 sin(75°)sin(15°).

Since this is the product of two sines, use sin(α)sin(β) = (1/2)[cos(α−β) − cos(α+β)] to first convert the product to a sum and then evaluate the expression:
4 sin(75°)sin(15°) = 4[sin(75°)sin(15°)]
= 4[(1/2)[cos(75° − 15°) − cos(75° + 15°)]]
= (4/2)[cos(60°) − cos(90°)]
= 2[cos(60°) − cos(90°)]

Knowing cos(60°) = 1/2 and cos(90°) = 0:
2[cos(60°) − cos(90°)] = 2(1/2 − 0)
= 2(1/2)
= 2/2
= 1

Example 4: Prove the following identity: [sin(8x) + sin(2x)]/sin(5x) = 2 cos(3x).

First, label the left side as LS and the right side as RS:
LS = [sin(8x) + sin(2x)]/sin(5x)
RS = 2 cos(3x)

Start from the LS:
LS = [sin(8x) + sin(2x)]/sin(5x)

Since there is a sum of two sines in the numerator, use the identity sin(α)cos(β) = (1/2)[sin(α+β) + sin(α−β)]. Find α and β. Knowing α+β = 8x and α−β = 2x, form the following system of equations:
α+β = 8x
α−β = 2x

After solving: α = 5x and β = 3x.

Before using the formula, modify it to account for the lack of a 1/2 in the numerator.
Multiplying both sides of the formula by 2:
2[sin(α)cos(β)] = 2[(1/2)[sin(α+β) + sin(α−β)]]
2[sin(α)cos(β)] = sin(α+β) + sin(α−β)

Now continue proving the identity by replacing the numerator with its equivalent, using the modified identity and the values of α and β:
LS = [2 sin(5x)cos(3x)]/sin(5x)

Cancel the sin(5x), as it is present in both the numerator and the denominator:
LS = 2 cos(3x)

This results in:
LS = RS
[sin(8x) + sin(2x)]/sin(5x) = 2 cos(3x)

As a result, the identity has been proved.

Lesson Summary
--------------
Product to sum identities/formulas are a set of trigonometric identities used to convert products of sines and cosines to sums, and vice versa. Product to sum identities can be used to prove more complex trigonometric identities, solve trigonometric equations, and simplify trigonometric expressions that involve a product or sum of sine and/or cosine functions. The four product to sum identities cover all possible combinations of products of sine and cosine functions:

sin(α)cos(β) = (1/2)[sin(α+β) + sin(α−β)]
cos(α)sin(β) = (1/2)[sin(α+β) − sin(α−β)]
cos(α)cos(β) = (1/2)[cos(α+β) + cos(α−β)]
sin(α)sin(β) = (1/2)[cos(α−β) − cos(α+β)]

The product to sum identities can be derived from the sum and difference identities. To derive the first identity, add the sine sum identity and the sine difference identity. To derive the second identity, subtract the sine difference identity from the sine sum identity. To derive the third identity, add the cosine sum identity and the cosine difference identity. Lastly, to derive the fourth identity, subtract the cosine sum identity from the cosine difference identity.

Video Transcript

Product-to-Sum Identities
In trigonometry, we make use of identities, or true statements. As you've seen, there are many.
In this video lesson, we will talk about the group of identities known as the product-to-sum identities. These identities are the true trig statements that show you how to go from the product of two trig functions to the sum of two trig functions. Think of these as definitions. Since they are definitions, they are also interchangeable: you can use either the product form or the sum form to describe the same thing. We have a total of four of these identities, and they all involve just the cosine and sine functions. How can you remember these? Look for patterns. Look at the first two. Notice how the left-hand side has both the sine and cosine functions, while the right-hand side has only the sine function, with our two angles first added together and then subtracted from each other. Also, notice that if the sine function comes first on the left-hand side, then we have a plus in between our sine functions on the right-hand side. Now look at the last two identities. The left-hand side has two of the same function: either both cosines or both sines. The right-hand side has only the cosine function. If both of our functions on the left-hand side are cosines, then our cosines are added on the right-hand side. Also, notice that the subtraction and addition of our two angles have been switched relative to the first two identities: now the subtraction of our two angles comes first. Take a moment and look for other patterns. What else do you see that will help you? Uses & Applications It's good to memorize these if you can, because these identities will help you simplify more complicated trig problems as well as help you prove other trig statements. You will come across this set of trig statements in tests and in your other trigonometry classes. These trig identities are also useful for solving integrals in calculus problems. Do you want to look at a couple of examples of what kinds of problems you can expect to come across? Okay, let's take a look.
Example 1 For our first problem, we will look at a simplifying problem. Rewrite the function cos (40) sin (30) without using multiplication. We've read the problem: it wants us to rewrite the function so that we don't have the multiplication going on. What can we do? We think about the identities we just learned at the beginning of this lesson. Don't these identities help us turn a multiplication problem into one without multiplication? Yes, they do. So, let's look at our identities again. Ah, we see the second identity fits our problem, since it begins with cosine being multiplied by the sine function. Now, we can go ahead and follow what it says on the right-hand side to find our answer. For the first sine function, we add our two angles: 40 + 30 = 70. For the second sine function, we subtract: 40 - 30 = 10 for that argument. We get an answer of 1/2(sin (70) - sin (10)) and we are done! Example 2 Our second problem now is about proving a trig statement. Prove the trig statement 2 sin (x) sin (y) = cos (x - y) - cos (x + y). Recall that to prove a mathematical statement, we begin with the more difficult side and try to simplify that side into the other side. So, instead of working with both sides, we work with just the one side. For our problem, it looks like both sides are about the same in difficulty, so we will just pick a side and go from there. We pick the left side, and we want to turn it into the right side somehow. We turn to our identities. We see that we can substitute the fourth identity into the left side. Doing that, we get 2 (1/2) (cos (x - y) - cos (x + y)). Hmm. It looks like we can cancel the 2 with the 1/2. Doing that, we are left with cos (x - y) - cos (x + y), our right side. We did it! Since this is a proving problem, our full answer includes all the steps that we took.
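The two worked examples above can be spot-checked numerically. Here is a quick Python sketch; the angles in the first check come from Example 1 (in degrees), while the test values x and y in the second check are arbitrary:

```python
import math

deg = math.radians  # Example 1's angles are in degrees

# Example 1: cos(40)sin(30) should equal 1/2(sin(70) - sin(10))
lhs1 = math.cos(deg(40)) * math.sin(deg(30))
rhs1 = 0.5 * (math.sin(deg(70)) - math.sin(deg(10)))
example1_ok = math.isclose(lhs1, rhs1, abs_tol=1e-12)

# Example 2: 2 sin(x) sin(y) should equal cos(x - y) - cos(x + y)
x, y = 0.7, 1.9  # arbitrary test angles in radians
lhs2 = 2 * math.sin(x) * math.sin(y)
rhs2 = math.cos(x - y) - math.cos(x + y)
example2_ok = math.isclose(lhs2, rhs2, abs_tol=1e-12)
```

Checking a single point does not prove the identity in Example 2, but it catches sign and angle-order mistakes quickly.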
Lesson Summary
Let's review what we've learned now. Our product-to-sum identities are the true trig statements that show you how to go from the product of two trig functions to the sum of two trig functions. These identities are used to simplify more complicated trig problems and also to prove other trig statements. We have a total of four product-to-sum identities. All of these identities involve just the cosine and sine functions.

Learning Outcomes
After reviewing this lesson, you should have the ability to:
Describe the purpose of the product-to-sum identities
Identify the four product-to-sum identities
Explain how to use these identities to solve more complicated problems or prove other trig statements
https://tutorial.math.lamar.edu/extras/algebratrigreview/logarithmfcns.aspx
Paul's Online Notes
Section 3.2 : Basic Logarithmic Functions

1. Without a calculator give the exact value of each of the following logarithms.
(a) \({\log _2}16\)
(b) \({\log _4}16\)
(c) \({\log _5}625\)
(d) \(\displaystyle {\log _9}\frac{1}{{531441}}\)
(e) \(\displaystyle {\log _{\frac{1}{6}}}36\)
(f) \(\displaystyle {\log _{\frac{3}{2}}}\frac{{27}}{8}\)

To do these without a calculator you need to remember the following.
\[y = {\log _b}x\hspace{0.25in}{\mbox{is equivalent to}}\hspace{0.25in}x = {b^y}\]
Where \(b\) is called the base and is any number such that \(b > 0\) and \(b \ne 1\). The first form is usually called logarithmic form and the second is usually called exponential form. The logarithmic form is read "\(y\) equals log base \(b\) of \(x\)".
So, if you convert the logarithms to exponential form it's usually fairly easy to compute these kinds of logarithms.

(a) \({\log _2}16 = 4\hspace{0.25in}{\rm{because}}\hspace{0.25in}{2^4} = 16\)
(b) \({\log _4}16 = 2\hspace{0.25in}{\rm{because}}\hspace{0.25in}{4^2} = 16\)

Note the difference between (a) and (b)! The base, \(b\), that you use on the logarithm is VERY important! A different base will, in almost every case, yield a different answer. You should always pay attention to the base!

(c) \({\log _5}625 = 4\hspace{0.25in}{\rm{because}}\hspace{0.25in}{5^4} = 625\)
(d) \(\displaystyle {\log _9}\frac{1}{{531441}} = - 6\hspace{0.25in}{\rm{because}}\hspace{0.25in}{9^{ - 6}} = \frac{1}{{{9^6}}} = \frac{1}{{531441}}\)
(e) \(\displaystyle {\log _{\frac{1}{6}}}36 = - 2\hspace{0.25in}{\rm{because}}\hspace{0.25in}{\left( {\frac{1}{6}} \right)^{ - 2}} = {6^2} = 36\)
(f) \(\displaystyle {\log _{\frac{3}{2}}}\frac{{27}}{8} = 3\hspace{0.25in}{\rm{because}}\hspace{0.25in}{\left( {\frac{3}{2}} \right)^3} = \frac{{27}}{8}\)

2. Without a calculator give the exact value of each of the following logarithms.
(a) \(\ln \sqrt[3]{{\bf{e}}}\)
(b) \(\log 1000\)
(c) \({\log _{16}}16\)
(d) \({\log _{23}}1\)
(e) \({\log _2}\sqrt[7]{{32}}\)

There are a couple of quick notational issues to deal with first.
\[\begin{align}\ln x & = {\log _{\bf{e}}}x & \hspace{0.5in} & {\mbox{This log is called the natural logarithm}}\\ \log x & = {\log _{10}}x & \hspace{0.5in} & {\mbox{This log is called the common logarithm}}\end{align}\]
The \({\bf{e}}\) in the natural logarithm is the same \({\bf{e}}\) used in Problem 2 above. The common logarithm and the natural logarithm are the logarithms that are encountered more often than any other logarithm, so get used to the special notation and the special names.
The work required to evaluate the logarithms in this set is the same as in the previous problem.

(a) \(\ln \sqrt[3]{{\bf{e}}} = \frac{1}{3}\hspace{0.25in}{\rm{because}}\hspace{0.25in}{{\bf{e}}^{\frac{1}{3}}} = \sqrt[3]{{\bf{e}}}\)
(b) \(\log 1000 = 3\hspace{0.25in}{\rm{because}}\hspace{0.25in}{10^3} = 1000\)
(c) \({\log _{16}}16 = 1\hspace{0.25in}{\rm{because}}\hspace{0.25in}{16^1} = 16\)
(d) \({\log _{23}}1 = 0\hspace{0.25in}{\rm{because}}\hspace{0.25in}{23^0} = 1\)
(e) \({\log _2}\sqrt[7]{{32}} = \frac{5}{7}\hspace{0.25in}{\rm{because}}\hspace{0.25in}\sqrt[7]{{32}} = {32^{\frac{1}{7}}} = {\left( {{2^5}} \right)^{\frac{1}{7}}} = {2^{\frac{5}{7}}}\)

© 2003 - 2025 Paul Dawkins. Page Last Modified: 11/17/2022
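All of the exact values worked out above can be double-checked with Python's standard `math` module. The comparisons use a small tolerance because the computations are done in floating point:

```python
import math

# Each pair is (computed logarithm, exact value claimed in the solutions).
checks = [
    (math.log2(16), 4),
    (math.log(16, 4), 2),
    (math.log(625, 5), 4),
    (math.log(1 / 531441, 9), -6),
    (math.log(36, 1 / 6), -2),
    (math.log(27 / 8, 3 / 2), 3),
    (math.log(math.e ** (1 / 3)), 1 / 3),   # ln of the cube root of e
    (math.log10(1000), 3),
    (math.log(16, 16), 1),
    (math.log(1, 23), 0),
    (math.log2(32 ** (1 / 7)), 5 / 7),      # log base 2 of the 7th root of 32
]
all_ok = all(math.isclose(got, want, abs_tol=1e-9) for got, want in checks)
```

The two-argument form `math.log(x, b)` computes a logarithm in an arbitrary base b, which matches the "convert to exponential form" reasoning used in the solutions.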
https://preparatoriaabiertapuebla.com.mx/wp-content/uploads/2021/12/MOVIMIENTO-RECTILINEO-UNIFORME.pdf
PREPARATORIA ABIERTA PUEBLA MOVIMIENTO RECTILÌNEO UNIFORME ELABORÓ LUZ MARÍA ORTIZ CORTÉS Movimiento rectilíneo uniforme • Un cuerpo que se desplaza con velocidad constante a lo largo de una trayectoria rectilínea se dice que su movimiento es rectilíneo uniforme. • La palabra uniforme indica que el valor de la velocidad permanece constante en el tiempo. • Por ejemplo: un automóvil que se desplaza por una carretera recta y plana a una velocidad de 70 km/h, significa que el auto recorrerá 70 km en 1 hora, 140 km en 2 horas, 210 km en 3 horas. Este auto recorre una trayectoria recta en la que realiza desplazamientos iguales en tiempos iguales. • Podemos notar que la distancia recorrida por el automóvil se obtiene multiplicando la velocidad por el tiempo transcurrido en el movimiento. Movimiento rectilíneo uniforme • El objeto que describe una trayectoria recta en la que realiza desplazamientos iguales en tiempos iguales, efectúa un movimiento rectilíneo uniforme y la velocidad permanece constante. d1 = 8 m d2 = 16 m d3 = 24 m t1 = 1 s t2= 2 s t3 = 3 s v1= 8 m/s v2= 8 m/s v3 = 8 m/s Movimiento rectilíneo uniforme • El cambio en una variable se representa por medio de la letra griega ∆ delta. • La fórmula de la velocidad se puede escribir en función de los cambios en su desplazamiento respecto al cambio en el tiempo de la siguiente manera: v= ∆d = d2 - d1 ∆t t2 - t1 Si el movimiento de un móvil es en línea recta, en el que recorre desplazamientos iguales en tiempos iguales, la relación: ∆d ∆t Será un valor constante. Donde: ∆d = k= constante ∆t Movimiento rectilíneo uniforme d(m) La pendiente de la recta representa la magnitud de la velocidad del cuerpo. 12 10 . d2 8 . 6 . ∆d 4 . Gráfica de la magnitud del 2 . Ꝋ d1 desplazamiento realizado t1 ∆t t2 t(s) por un móvil en un 1 2 3 4 5 6 determinado tiempo. Movimiento rectilíneo uniforme • Al graficar las diferente magnitudes del desplazamiento de un cuerpo en función del tiempo y unir los puntos se obtuvo una línea recta. 
The slope of the line represents the magnitude of the velocity and indicates that it remains constant, since only for a straight line do equal variations along one axis correspond to equal variations along the other. There is therefore a direct proportionality between the displacement magnitude of the body and time.
• The slope of the line obtained from the displacement-versus-time graph is the constant of proportionality between the two variables and represents the magnitude of the velocity. The greater the slope of the line, the greater the magnitude of the body's velocity.

Uniform rectilinear motion
• To calculate the magnitude of the velocity, determine the tangent of the line, that is, the value of its slope at any point. Draw a right triangle between any two points of the line, which serves as its hypotenuse. From the right triangle drawn on the graph:

tan θ = opposite side / adjacent side = v = Δd/Δt

where θ is the angle between the straight line and the time axis:

v = (d2 − d1)/(t2 − t1) = (10 m − 2 m)/(5 s − 1 s) = 8 m / 4 s = 2 m/s

AVERAGE VELOCITY
• Most displacements that bodies perform are not uniform; their displacements are generally not proportional to the change in time, so the concept of average velocity is needed. For example, for a bus that takes one hour and thirty minutes to cover the 128 km separating Mexico City from Puebla, the magnitude of the average velocity during the trip is:

vm = d/t = 128 km / 1.5 h = 85.3 km/h

AVERAGE VELOCITY
• The magnitude of the bus's velocity during the trip cannot be constant, since on the straight stretches it will be greater than on the curves.
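The bus example can be reproduced with a short script (a sketch; the function name is ours):

```python
def average_velocity(distance_km, time_h):
    """Average velocity vm = d / t, returned in km/h."""
    return distance_km / time_h

# Mexico City-Puebla bus: 128 km covered in 1.5 h.
vm = average_velocity(128, 1.5)
print(round(vm, 1))  # 85.3 km/h, as computed in the text
```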
The magnitude of the average velocity therefore represents the ratio of the total displacement made by a moving body to the time taken to make it.
• The magnitude of a body's average velocity can also be obtained by adding the magnitudes of the different velocities experienced during its motion and dividing by the number of velocity magnitudes added.

Average velocity
• A car covers a distance of 150 km, developing an average velocity of 80 km/h over the first 120 km and 60 km/h over the last 30 km. a) What was the total time?
• The time taken for the first 120 km is obtained by solving the formula vm = d/t for the time:

t = d/vm
t1 = 120 km / (80 km/h) = 1.5 h

Average velocity
• For the last 30 km:

t2 = 30 km / (60 km/h) = 0.5 h

Total time: t = 1.5 h + 0.5 h = 2 h

b) What was the car's average velocity over the whole trip, with a total distance of 150 km and a total travel time of 2 h?

vm = 150 km / 2 h = 75 km/h

Instantaneous velocity
• If the time intervals considered in a body's motion become smaller and smaller, the average velocity approaches an instantaneous velocity. The body's velocity is instantaneous when the time interval is so small that it practically tends to zero.
• If the velocity of a moving body remains constant, its average and instantaneous velocities are equal.
• If the value of a body's velocity does not remain constant, it is said to undergo varied motion. For example, a car whose speedometer shows different values at each instant: the value the speedometer shows at a given instant is the car's instantaneous velocity at that moment.

Instantaneous velocity
• In varied motion the instantaneous velocity is given by:

v = Δd/Δt, with Δt as small as possible.

Solved problems
1.
A car in uniform motion reports the following data: 60 km in 1 hour; 120 km in 2 hours; 180 km in 3 hours; 240 km in 4 hours.
a) Plot the d-versus-t graph.

[Graph: d (km) versus t (h); the points (1 h, 60 km) through (4 h, 240 km) lie on a straight line through the origin, with points A (1 h, 60 km) and B (3 h, 180 km) marked.]

Solved problems
If a suitable scale is chosen and the corresponding points for the values of t and d are marked, a straight line through the origin is obtained. The car's velocity is calculated from the graph: it is given by the slope of the d-versus-t graph, that is:

v = Δd/Δt

Choosing any two points on the graph, for example points A and B:

Δt = 3 h − 1 h = 2 h
Δd = 180 km − 60 km = 120 km

Therefore: v = Δd/Δt = 120 km / 2 h, so v = 60 km/h

Solved problems
• As can be seen, plotting the different displacement magnitudes as a function of time and joining the points gives a straight line. The slope of the line represents the magnitude of the velocity and indicates that it remains constant. There is a direct proportionality between the body's displacement magnitude and time.

SOLVED PROBLEM
2. A person walked 4 m north and then traveled 5 m east. What was the person's displacement?

[Graph: N (m) versus E (m); 4 m north followed by 5 m east, with the resultant drawn from the origin.]

d = 6.4 m toward the northeast

Solved problem
• The graph shows that the displacement is 6.4 m in the northeast direction; the distance traveled, however, was 9 m.

Solved problem
3. A car set off northward, traveling 4 km, and then traveled another 4 km south. What was its displacement?
Solution: Although it traveled 8 km in total, its displacement is zero, since it returned to its starting point.

Solved problem
4. From data on the displacement of a moving body as a function of time, the following graph was obtained:

[Graph: d (m) versus t (s), with d from 10 to 40 m and t from 1 to 9 s; the line passes through points A (0 s, 10 m), B (2 s, 30 m), C (5 s, 30 m), D (7 s, 20 m), and E (8 s, 10 m).]

Solved problem
a) What position did the body have before starting its motion?
b) How does the magnitude of the body's velocity behave during the first 2 seconds, and what is its value?
c) What magnitude does the velocity have during the time interval between points B and C?
d) What was the body's farthest position?
e) At what instant did it reverse the direction of its path?
f) What was the magnitude of the body's velocity from point C to point D?
g) Did it return to its starting point?

Solved problem
a) The body's position was 10 m before it started moving.
b) The magnitude of the body's velocity remains constant and equals:

v = (d2 − d1)/(t2 − t1) = (30 m − 10 m)/(2 s − 0) = 20 m / 2 s = 10 m/s

c) Between points B and C the body remains at rest, since it does not move during the time interval from 2 to 5 seconds, keeping its position of 30 m. Its velocity is therefore zero.
d) The body's farthest position was 30 m.
e) It reversed the direction of its path at 5 seconds and 30 m, at point C.

Solved problems
f) The magnitude of the body's velocity is calculated from the slope of the line from C to D, drawn on the graph:

v(C−D) = (d2 − d1)/(t2 − t1) = (20 m − 30 m)/(7 s − 5 s) = −10 m / 2 s = −5 m/s

The magnitude of the velocity has a negative sign because the displacement is negative; this follows from the body reversing its path, so that d2 is less than d1.
g) The body returned to its starting point: at 8 s, the instant its path ended, it is again at the 10 m position it had when it started moving.

Solved problems
5. The speed of ships is generally measured in a unit called the knot, whose value is approximately 1.8 km/h. What distance will a ship cover if it maintains a constant speed of 20 knots for 10 hours?

Data: v = 20 knots, t = 10 h, d = ?
Formula: v = d/t; solving for d: d = v × t
Conversion: 20 knots × 1.8 (km/h)/knot = 36 km/h

Solved problem
Substitution: d = 36 km/h × 10 h
Result: d = 360 km

Solved problems
6. What is the magnitude of the average velocity of a passenger bus that covers a distance of 120 km in 1.6 h?
Data: vm = ?, d = 120 km, t = 1.6 h
Formula: vm = d/t
Substitution: vm = 120 km / 1.6 h
Result: vm = 75 km/h

Solved problems
7. Determine the magnitude of the average velocity of a body whose initial velocity has a magnitude of 3 m/s and whose final velocity has a magnitude of 4.2 m/s.
Data: vm = ?, v0 = 3 m/s, vf = 4.2 m/s
Formula: vm = (vf + v0)/2
Substitution: vm = (4.2 m/s + 3 m/s)/2
Result: vm = 3.6 m/s

Solved problems
8. Find the displacement in meters that a cyclist will make during 7 seconds if traveling with an average velocity of 30 km/h toward the north.
Data: d = ?, t = 7 s, vm = 30 km/h
Formula: vm = d/t; solving for d: d = vm × t
Unit conversion: 30 km/h × (1 h / 3600 s) × (1000 m / 1 km) = 8.33 m/s
Substitution: d = 8.33 m/s × 7 s
Result: d = 58.3 m toward the north

Solved problems
9. Calculate the time in hours that a car takes to make a displacement of 3 km if traveling with an average velocity of 50 km/h toward the south.
Data: t = ?, d = 3 km, vm = 50 km/h
Formula: vm = d/t; solving for t: t = d/vm
Substitution: t = 3 km / (50 km/h)
Result: t = 0.06 h

Solved problems
10. Find the average velocity of a body that, during its journey northward, had the following velocity magnitudes: v1 = 19 m/s, v2 = 23 m/s, v3 = 21 m/s, v4 = 22.5 m/s.
Formulas: Σv = v1 + v2 + v3 + v4; vm = Σv/4
Substitution: vm = 85.5 m/s / 4
Result: vm = 21.375 m/s toward the north

Solved problems
11. Calculate the average velocity of a body that set off eastward with an initial velocity of 3 m/s and a final velocity of 5 m/s.
Data: vm = ?, v0 = 3 m/s, vf = 5 m/s
Formula: vm = (v0 + vf)/2
Substitution: vm = (3 m/s + 5 m/s)/2
Result: vm = 4 m/s toward the east

Solved problems
12.
Find the time a car takes to cover a distance of 40 m with an average velocity of 5 m/s toward the south.
Data: t = ?, d = 40 m, vm = 5 m/s toward the south
Formula: vm = d/t; solving for t: t = d/vm
Substitution: t = 40 m / (5 m/s)
Result: t = 8 s

Solved problems
13. Calculate the distance in meters that a motorcyclist will cover during 10 s if traveling with an average velocity of 70 km/h toward the west.
Data: d = ?, t = 10 s, vm = 70 km/h
Formula: vm = d/t; solving for d: d = vm × t
Conversion: 70 km/h × (1 h / 3600 s) × (1000 m / 1 km) = 19.44 m/s

Solved problems
• Substitution: d = 19.44 m/s × 10 s
Result: d = 194.4 m

Proposed problems
1. What is the magnitude of the average velocity of a bus that covers a distance of 150 km in 1.5 h?
2. What distance in meters will a motorcyclist cover in 9 s if traveling with an average velocity of 70 km/h toward the west?
3. In what time, in hours, will a car make a displacement of 5 km if traveling with an average velocity of 60 km/h toward the south?
4. Determine the average velocity of a body that, during its journey eastward, had the following velocity magnitudes: v1 = 20 m/s, v2 = 18 m/s, v3 = 21 m/s, v4 = 22 m/s, v5 = 20.5 m/s.
5. Determine the time a ship will take to cover a distance of 180 km at a constant speed of 25 knots.

Answers
1. vm = 100 km/h
2. d = 175 m
3. t = 0.083 h
4. vm = 20.3 m/s
5. t = 4 h

Bibliography
• Pérez Montiel, Héctor. Física para Bachillerato. Editorial Patria, 2011.
• Alvarenga, Beatriz; Máximo, Antonio. Física general. Editorial Oxford, 2014.
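The unit conversions used in problems 5 and 13, and in proposed problem 5, can be verified with a short script (a sketch; it uses the text's approximation of 1 knot ≈ 1.8 km/h, and the constant names are ours):

```python
KMH_PER_KNOT = 1.8           # approximation used in the text
MS_PER_KMH = 1000 / 3600     # factor converting km/h to m/s

# Problem 5: 20 knots held for 10 h.
v_kmh = 20 * KMH_PER_KNOT
print(round(v_kmh * 10, 1))  # 360.0 km

# Problem 13: 70 km/h held for 10 s.
v_ms = 70 * MS_PER_KMH       # ~19.44 m/s
print(round(v_ms * 10, 1))   # 194.4 m

# Proposed problem 5: 180 km at 25 knots.
print(round(180 / (25 * KMH_PER_KNOT), 2))  # 4.0 h
```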
https://www.ncbi.nlm.nih.gov/books/NBK534855/
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-.

Cerebral Salt Wasting Syndrome

Walter A. Hall (SUNY Upstate Medical University); William Thorell (UNMC)

Last Update: June 2, 2025.

Continuing Education Activity

Cerebral salt wasting (CSW) is a critical but often underdiagnosed condition characterized by hyponatremia, excessive urinary sodium loss, and hypovolemia, typically occurring in patients with acute central nervous system disorders such as subarachnoid hemorrhage, traumatic brain injury, or neurosurgical interventions. The pathophysiology of CSW is not fully understood, though proposed mechanisms involve disruptions in sympathetic nervous system regulation and increased natriuretic peptide activity leading to renal sodium wasting. Differentiating CSW from the syndrome of inappropriate antidiuretic hormone secretion (SIADH) is particularly challenging, as both conditions share overlapping laboratory findings, including low serum sodium and high urinary sodium levels. However, CSW involves true volume depletion, unlike SIADH, a distinction that carries significant therapeutic implications.
Accurate diagnosis is crucial, as mismanagement, particularly inappropriate fluid restriction, can worsen hypovolemia and increase the risk of morbidity and mortality. This educational activity equips healthcare professionals with the knowledge and skills to identify, diagnose, and manage CSW effectively. Participants learn to recognize the nuanced clinical features distinguishing CSW from SIADH, interpret relevant laboratory and diagnostic tests, and apply evidence-based treatment strategies to restore sodium balance and intravascular volume. Emphasis is placed on interprofessional collaboration among clinicians, including physicians, nurses, pharmacists, and other allied health professionals, in achieving optimal outcomes. Collaborative care enhances diagnostic accuracy, ensures timely intervention, and supports individualized treatment plans, ultimately reducing complications and improving recovery in neurologically vulnerable individuals.

Objectives:

Identify the clinical signs and symptoms associated with cerebral salt wasting, including hyponatremia and hypovolemia.
Differentiate between cerebral salt wasting and the syndrome of inappropriate antidiuretic hormone secretion based on volume status, laboratory values, and clinical presentation.
Select appropriate fluid and pharmacologic therapies to correct sodium imbalance and hypovolemia.
Collaborate with interprofessional team members to ensure comprehensive care for patients with cerebral salt wasting.

Introduction

Cerebral salt wasting (CSW) is a potential cause of hyponatremia associated with central nervous system (CNS) disease. The condition is characterized by hyponatremia, elevated urine sodium levels, and hypovolemia. Whether CSW is a distinct clinical entity or a variant of the syndrome of inappropriate antidiuretic hormone (SIADH) secretion is currently debated in the literature.
Differentiating between CSW and SIADH is crucial, as their treatments differ significantly: CSW is managed with fluid and sodium replacement, whereas SIADH requires fluid restriction. CSW typically resolves within weeks to months but may persist chronically. The proposed pathophysiology of CSW involves either the release of brain natriuretic peptide or hypothalamic damage leading to impaired sympathetic nervous system function.

Etiology

The etiology of CSW is not fully understood; the condition is most frequently observed following CNS injury, with aneurysmal subarachnoid hemorrhage being the most commonly reported precipitating event. CSW occurs more often after aneurysmal subarachnoid hemorrhage than after traumatic subarachnoid hemorrhage or other CNS insults; the reason for this difference remains unexplained. Similarly, it is unclear why CSW is relatively uncommon in other CNS conditions, such as tuberculous meningitis, despite significant CNS involvement in these diseases.

Epidemiology

Because CSW is an uncommon condition, its exact incidence and prevalence are difficult to determine. The condition most frequently occurs following aneurysmal subarachnoid hemorrhage but can also result from other CNS injuries. CSW has been reported in a variety of other settings, including postsurgical cases involving pituitary tumors or apoplexy, as well as in association with hypothalamic tumors, acoustic neuromas, gliomas, and metastatic carcinomas; it has also been observed after calvarial remodeling procedures, cranial trauma, and infections such as tuberculous and viral meningitis. Some clinicians estimate that CSW accounts for up to one-quarter of cases of severe hyponatremia following aneurysmal subarachnoid hemorrhage. In contrast, CSW resulting from other causes of CNS damage, such as pituitary apoplexy, is generally described only in case reports. The incidence and prevalence of CSW in patients without CNS injury have not been reliably reported.
Pathophysiology

The true etiology of CSW remains an area of debate and ongoing investigation; some clinicians suggest that CSW is not a distinct entity but a variant of SIADH. Two main theories have been proposed to explain the pathophysiology of CSW: one involves a circulating factor such as brain natriuretic peptide (BNP), and the other centers on sympathetic nervous system dysfunction. Both mechanisms are believed to lead to excessive renal sodium loss, resulting in hyponatremia and volume depletion. According to the first theory, CNS injury prompts the brain to release BNP, which enters the systemic circulation through a disrupted blood-brain barrier. BNP acts on the collecting ducts of the renal tubules to inhibit sodium reabsorption and suppresses renin release, contributing to natriuresis and hyponatremia. The second theory suggests that damage to the sympathetic nervous system, such as may occur with hypothalamic tumor surgery, impairs sodium reabsorption and alters renin release. This disruption in neurohormonal regulation may also contribute to the salt-wasting state. Despite these hypotheses, the exact mechanism of CSW remains unresolved and continues to be a subject of active research and debate.

History and Physical

CSW is characterized by hyponatremia and hypovolemia due to excessive renal sodium loss, typically occurring in patients with intracranial pathology such as subarachnoid hemorrhage, traumatic brain injury, or neurosurgical procedures. The most common setting for CSW is hyponatremia following aneurysmal subarachnoid hemorrhage. Patients with CSW often present with symptoms related to hyponatremia and volume depletion. These may include nausea, vomiting, headache, lethargy, confusion, and, in severe cases, seizures or coma. Symptoms usually occur within days to weeks following the inciting cerebral event. On physical examination, patients with CSW usually show clear signs of volume depletion.
Blood pressure is often low, and a compensatory tachycardia may be present. Mucous membranes appear dry, and skin turgor is diminished. Neurologically, the spectrum ranges from mild confusion to profound encephalopathy, and seizures can occur in the setting of severe hyponatremia.

Evaluation

Differentiating CSW from SIADH is crucial, as their treatments differ significantly. In CSW, patients exhibit hypovolemia with high urinary sodium excretion, whereas SIADH is characterized by euvolemia or hypervolemia (see Table. Comparison of CSW and SIADH). Laboratory testing typically reveals marked hyponatremia accompanied by very high urinary sodium excretion, often exceeding 100 mmol/L. Polyuria is also common, with daily urine volumes frequently surpassing 2.5 L. These findings together support the diagnosis of CSW and differentiate it from other causes of hyponatremia. SIADH presents with a laboratory profile similar to that of CSW, including hyponatremia and increased urine sodium levels; the key distinguishing feature is the patient's volume status. In SIADH, patients are typically euvolemic or mildly hypervolemic due to retained free water, in contrast to the hypovolemic picture of CSW. Other potential causes of hyponatremia should also be sought, including polydipsia, renal disease, use of diuretics, heart failure, hypothyroidism, malignancies, hormone deficiency, and pseudohyponatremia. In many cases, CSW becomes a diagnosis of exclusion after laboratory studies reveal serum hyponatremia with increased urine sodium levels.

Table. Comparison of CSW and SIADH. ADH, antidiuretic hormone; CNS, central nervous system

Treatment / Management

The treatments for CSW and SIADH are fundamentally different, making an accurate diagnosis essential before initiating therapy. Misclassification can lead to inappropriate management, potentially worsening the patient's clinical condition.
CSW is most commonly associated with aneurysmal subarachnoid hemorrhage, so the initial management should focus on identifying and treating the underlying CNS insult. If the cause is aneurysmal, the aneurysm must be secured promptly. Please see StatPearls' companion resource, "Subarachnoid Hemorrhage," for more information. Once the source of the CNS insult is managed, the next critical step is assessing and correcting the patient's volume status. In CSW, hypovolemia is a hallmark finding, and intravenous fluid replacement is necessary to restore intravascular volume and correct hyponatremia. Patients typically start on isotonic saline (0.9% sodium chloride), replenishing volume and raising serum sodium levels. In moderate to severe cases of hyponatremia, more aggressive sodium replacement may be required using 3% hypertonic saline. However, the correction rate should be carefully controlled, with the maximum increase in serum sodium limited to less than 8 mEq/L over 24 hours to avoid osmotic demyelination syndrome. In high-risk individuals, intravenous desmopressin may be administered to prevent overly rapid correction of serum sodium levels. Salt tabs (1 to 2 grams up to 3 times daily) can also be administered orally or via gastrostomy tube as an adjunct to correct hyponatremia. Additionally, some clinicians have advocated for fludrocortisone, a mineralocorticoid, to promote sodium retention and support volume status in patients with CSW. When correcting hyponatremia, serum sodium levels should be monitored frequently to ensure safe and effective management. Overcorrection can result in hypernatremia, which may lead to complications such as muscle twitching, lethargy, seizures, and even death. Equally important is avoiding overly rapid correction of hyponatremia, particularly in cases of chronic or long-standing hyponatremia. Rapid shifts in serum sodium can lead to osmotic demyelination syndrome, most notably central pontine myelinolysis. 
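For illustration only (not clinical guidance), the correction limit quoted above, less than 8 mEq/L over 24 hours, can be expressed as a simple rate check; the function name, inputs, and default threshold are ours, merely encoding the figure from the text:

```python
def correction_rate_ok(na_start, na_end, hours, max_delta_per_24h=8):
    """Return True if a proposed serum-sodium correction, projected over
    24 hours, stays within the stated limit (default 8 mEq/L per 24 h)."""
    projected_24h = (na_end - na_start) * 24.0 / hours
    return projected_24h <= max_delta_per_24h

# Raising serum sodium from 118 to 122 mEq/L over 12 h projects to
# exactly 8 mEq/L per 24 h; from 118 to 124 mEq/L over 12 h exceeds it.
print(correction_rate_ok(118, 122, 12))  # True
print(correction_rate_ok(118, 124, 12))  # False
```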
To minimize this risk, limit correction to no more than 10 mEq/L over 24 hours, or approximately 1 mEq/L every 2 hours. Hyponatremia in CSW may persist for weeks to months following the initial CNS event, necessitating ongoing monitoring and management. Throughout treatment, frequent assessment of the Glasgow Coma Score and neurological examination is critical to detect any signs of clinical deterioration. While most patients with CSW not caused by subarachnoid hemorrhage tend to have a favorable prognosis, some may continue to experience mild neurological deficits despite appropriate management. The most important clinical consideration is distinguishing between CSW and SIADH, which require different therapeutic approaches. SIADH is typically managed with fluid restriction, demeclocycline, or furosemide. Intravenous 0.9% saline should be avoided in treating SIADH due to the rapid and unpredictable fluctuation of serum sodium levels it can cause. In contrast, a patient with true CSW is hypovolemic, and employing SIADH treatment modalities would worsen their condition by further exacerbating the hypovolemia.

Differential Diagnosis

Distinguishing between CSW and SIADH is critical. Both conditions are characterized by hyponatremia, an elevated urine sodium level, concentrated urine, and an absence of edema. The main distinguishing factor is the patient's volume status: in CSW, the patient is hypovolemic, whereas in SIADH, the patient is either euvolemic or hypervolemic. The differential diagnosis for the etiology of CSW includes various CNS insults, such as:

Head injury
Brain tumor (hypothalamic or pituitary tumor)
Stroke
Intracranial surgery
Aneurysmal subarachnoid hemorrhage
Intracerebral hemorrhage
Pituitary apoplexy
Craniosynostosis repair
Tuberculous meningitis

Prognosis

The prognosis for CSW varies depending on the underlying cause and the timeliness and appropriateness of treatment.
CSW is associated with significant morbidity and mortality, particularly in patients with severe neurological conditions such as traumatic brain injury and subarachnoid hemorrhage. Patients with CSW often experience increased complications, including prolonged hospitalizations, extended stays in intensive care units (ICUs), and a greater need for ventilator support. Chendrasekhar et al demonstrated that individuals with traumatic brain injury who developed CSW had more severe injuries, spent more time in the hospital and ICU, and required longer ventilator support compared to those without CSW. Notably, survival to hospital discharge was lower in patients with CSW (88%) than in those without the condition (99%). Further, Tolunay et al found that the average time required to correct hyponatremia in pediatric CSW cases was about 20 days; of the 9 children evaluated, 1 did not survive. CSW is being recognized more frequently in pediatric critical care settings. Although it is generally treatable with proper fluid and sodium management, it remains a significant clinical challenge. Ultimately, the outlook for patients with CSW is strongly influenced by the severity of the underlying neurological injury and how promptly and effectively treatment is initiated. Management strategies such as sodium supplementation, volume repletion, and, in some cases, mineralocorticoids like fludrocortisone are essential for improving patient outcomes. Early diagnosis and tailored treatment are key to minimizing complications and enhancing recovery.

Complications

CSW can lead to a range of complications, primarily due to the resulting hyponatremia and hypovolemia. These complications can significantly affect outcomes, especially in patients with preexisting neurological injuries or conditions. Prompt recognition and targeted management are essential to minimize risks and improve recovery.
Key complications associated with CSW include:

Hyponatremia: Low serum sodium levels can result in neurological symptoms ranging from confusion and lethargy to seizures and coma if not promptly corrected.
Hypovolemia: Loss of intravascular volume may lead to hypotension, reduced perfusion, and end-organ dysfunction.
Increased morbidity and mortality: Patients with CSW often experience longer hospital and ICU stays, increased ventilator dependency, and a higher risk of adverse outcomes.
Electrolyte imbalances: In addition to sodium loss, patients may develop disturbances in other electrolytes, further complicating management.
Therapeutic challenges: Differentiating CSW from conditions like SIADH is critical, as mistreatment can exacerbate hypovolemia and worsen patient outcomes.

Early diagnosis and appropriate fluid and sodium replacement are crucial to reducing the impact of these complications and optimizing patient care. Delayed or incorrect treatment can lead to worsening neurological function, prolonged hospitalization, and potentially life-threatening outcomes.

Deterrence and Patient Education

Patients diagnosed with CSW should be thoroughly educated on the condition so that they can manage their health and prevent complications effectively. Essentially, patients should understand the nature of CSW and that it is often associated with neurological insults such as subarachnoid hemorrhage, traumatic brain injury, or recent neurosurgical procedures. Monitoring and management play a central role in ongoing care. Patients must recognize the importance of regular blood tests to monitor serum sodium levels and fluid status, ensuring that electrolyte imbalances are promptly corrected with appropriate sodium and fluid replacement. They should also be educated on the signs and symptoms of hyponatremia, such as headache, nausea, confusion, and seizures, as well as indicators of hypovolemia like dizziness, low blood pressure, and decreased urine output.
Prompt communication with healthcare providers upon noticing these symptoms is critical. Treatment adherence is another key component of patient education. Treatment regimens may include isotonic or hypertonic saline infusions and, in some cases, mineralocorticoids such as fludrocortisone. Patients should be informed about potential adverse effects, such as hypokalemia and hypertension, and the need for routine follow-up to monitor treatment response and adjust medications as needed. Equally important is educating patients about how CSW differs from SIADH, a condition with a similar presentation but vastly different management. Unlike CSW, which requires sodium and fluid replacement, SIADH is typically managed through fluid restriction, and misdiagnosis could lead to harmful treatment choices.

Pearls and Other Issues

Key facts to keep in mind about CSW include the following:

CSW is characterized by renal loss of sodium during intracranial disease, leading to hyponatremia and decreased extracellular fluid volume.
CSW is often associated with subarachnoid hemorrhage, traumatic brain injury, neurosurgery, bacterial meningitis, and other CNS pathologies.
Patients present with hyponatremia, hypovolemia, excessive natriuresis, and high urine output.
Differentiating CSW from SIADH is crucial: CSW is characterized by hypovolemia, whereas SIADH involves euvolemia or hypervolemia.
Key diagnostic features include symptomatic hyponatremia, high urinary sodium excretion, and increased urine volume. Assessment of extracellular volume status is essential.
The primary treatment involves volume and sodium repletion using isotonic or hypertonic saline. In complicated cases, mineralocorticoids such as fludrocortisone may be used.
Prognosis depends on the underlying neurological condition and the timeliness of treatment.
Potential complications include severe hyponatremia, hypovolemia, increased morbidity and mortality, and management challenges due to the need to differentiate CSW from SIADH.

Enhancing Healthcare Team Outcomes

CSW frequently occurs following a significant CNS insult, such as an aneurysmal subarachnoid hemorrhage. Caring for patients with CSW often requires a coordinated, multidisciplinary approach; treatment, particularly intravenous fluid administration, can potentially worsen complications such as cerebral edema, pulmonary edema, heart failure, and renal dysfunction. Clinicians must pay careful attention to the types and volumes of intravenous fluids administered, especially when combined with other medications, to avoid inadvertently delivering excessive free water. Improved outcomes are more likely when an interprofessional healthcare team delivers care. This team may include primary care and emergency clinicians, neurologists, neurosurgeons, critical care specialists, specialty care nurses, and pharmacists. Neuroscience and critical care nurses play a vital role by administering treatment, monitoring clinical status, educating patients and families, and relaying condition updates to the broader team. Pharmacists contribute by reviewing medication regimens, identifying potential drug-drug interactions that could worsen hyponatremia, and recommending therapeutic adjustments when necessary. Collaborative communication and shared decision-making among all team members are essential to achieving the best possible outcomes for patients with CSW.

References

1. Leonard J, Garrett RE, Salottolo K, Slone DS, Mains CW, Carrick MM, Bar-Or D. Cerebral salt wasting after traumatic brain injury: a review of the literature. Scand J Trauma Resusc Emerg Med. 2015 Nov 11;23:98. [PMC free article: PMC4642664] [PubMed: 26561391]
2.
: Jin S, Long Z, Wang W, Jiang B. Hyponatremia in neuromyelitis optica spectrum disorders: Literature review. Acta Neurol Scand. 2018 Jul;138(1):4-11. [PubMed: 29654708] 3. : Yang F, Cao Z, Wang X, Cui Z, Cheng D, Li Z, Lv B, Zhang H, Guo P, Feng Y, Liu W. A multi-parameter study of the etiological diagnosis of hyponatremia after hypothalamic tumor surgery. Clin Neurol Neurosurg. 2021 Nov;210:106963. [PubMed: 34715556] 4. : Wu J, Yan Z, Li B, Yu X, Huang H. Pituitary Apoplexy-associated Cerebral Salt Wasting Syndrome: A Case Report and Literature Review. Clin Ther. 2023 Dec;45(12):1293-1296. [PubMed: 37778916] 5. : Maesaka JK, Imbriano LJ, Grant C, Miyawaki N. New Approach to Hyponatremia: High Prevalence of Cerebral/Renal Salt Wasting, Identification of Natriuretic Protein That Causes Salt Wasting. J Clin Med. 2022 Dec 15;11(24) [PMC free article: PMC9786136] [PubMed: 36556061] 6. : Cerdà-Esteve M, Cuadrado-Godia E, Chillaron JJ, Pont-Sunyer C, Cucurella G, Fernández M, Goday A, Cano-Pérez JF, Rodríguez-Campello A, Roquer J. Cerebral salt wasting syndrome: review. Eur J Intern Med. 2008 Jun;19(4):249-54. [PubMed: 18471672] 7. : Busl KM, Rabinstein AA. Prevention and Correction of Dysnatremia After Aneurysmal Subarachnoid Hemorrhage. Neurocrit Care. 2023 Aug;39(1):70-80. [PubMed: 37138158] 8. : Arieff AI, Gabbai R, Goldfine ID. Cerebral Salt-Wasting Syndrome: Diagnosis by Urine Sodium Excretion. Am J Med Sci. 2017 Oct;354(4):350-354. [PubMed: 29078838] 9. : Cui H, He G, Yang S, Lv Y, Jiang Z, Gang X, Wang G. Inappropriate Antidiuretic Hormone Secretion and Cerebral Salt-Wasting Syndromes in Neurological Patients. Front Neurosci. 2019;13:1170. [PMC free article: PMC6857451] [PubMed: 31780881] 10. : Oh JY, Shin JI. Syndrome of inappropriate antidiuretic hormone secretion and cerebral/renal salt wasting syndrome: similarities and differences. Front Pediatr. 2014;2:146. [PMC free article: PMC4302789] [PubMed: 25657991] 11. : Yee AH, Burns JD, Wijdicks EF. 
Cerebral salt wasting: pathophysiology, diagnosis, and treatment. Neurosurg Clin N Am. 2010 Apr;21(2):339-52. [PubMed: 20380974] 12. : Ziu E, Khan Suheb MZ, Mesfin FB. StatPearls [Internet]. StatPearls Publishing; Treasure Island (FL): Jun 1, 2023. Subarachnoid Hemorrhage. [PubMed: 28722987] 13. : Reddy P. Clinical Approach to Euvolemic Hyponatremia. Cureus. 2023 Feb;15(2):e35574. [PMC free article: PMC10063237] [PubMed: 37007374] 14. : Maesaka JK, Imbriano LJ, Miyawaki N. High Prevalence of Renal Salt Wasting Without Cerebral Disease as Cause of Hyponatremia in General Medical Wards. Am J Med Sci. 2018 Jul;356(1):15-22. [PubMed: 30049325] 15. : Lamotte G. Central pontine myelinolysis secondary to rapid correction of hyponatremia historical perspective with Doctor Robert Laureno. Neurol Sci. 2021 Aug;42(8):3479-3483. [PubMed: 33950364] 16. : John CA, Day MW. Central neurogenic diabetes insipidus, syndrome of inappropriate secretion of antidiuretic hormone, and cerebral salt-wasting syndrome in traumatic brain injury. Crit Care Nurse. 2012 Apr;32(2):e1-7; quiz e8. [PubMed: 22467619] 17. : Rahman M, Friedman WA. Hyponatremia in neurosurgical patients: clinical guidelines development. Neurosurgery. 2009 Nov;65(5):925-35; discussion 935-6. [PubMed: 19834406] 18. : Moritz ML. Syndrome of Inappropriate Antidiuresis. Pediatr Clin North Am. 2019 Feb;66(1):209-226. [PubMed: 30454744] 19. : Chendrasekhar A, Chow PT, Cohen D, Akella K, Vadali V, Bapatla A, Patwari J, Rubinshteyn V, Harris L. Cerebral Salt Wasting in Traumatic Brain Injury Is Associated with Increased Morbidity and Mortality. Neuropsychiatr Dis Treat. 2020;16:801-806. [PMC free article: PMC7104213] [PubMed: 32273706] 20. : Tolunay O, Celik T, Celik U, Kömür M, Yagci-Kupeli B. Cerebral salt wasting in pediatric critical care; not just a neurosurgical disorder anymore. Neuro Endocrinol Lett. 2015 Dec;36(6):578-82. [PubMed: 26812288] 21. : Taplin CE, Cowell CT, Silink M, Ambler GR. 
Fludrocortisone therapy in cerebral salt wasting. Pediatrics. 2006 Dec;118(6):e1904-8. [PubMed: 17101713] : Disclosure: Walter Hall declares no relevant financial relationships with ineligible companies. : Disclosure: William Thorell declares no relevant financial relationships with ineligible companies. Copyright © 2025, StatPearls Publishing LLC. This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal. Bookshelf ID: NBK534855PMID: 30521276 Share Views PubReader Print View Cite this Page Hall WA, Thorell W. Cerebral Salt Wasting Syndrome. [Updated 2025 Jun 2]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-. In this Page Continuing Education Activity Introduction Etiology Epidemiology Pathophysiology History and Physical Evaluation Treatment / Management Differential Diagnosis Prognosis Complications Deterrence and Patient Education Pearls and Other Issues Enhancing Healthcare Team Outcomes Review Questions References Related information PMC PubMed Central citations PubMed Links to PubMed Similar articles in PubMed Hyponatremia in the postoperative craniofacial pediatric patient population: a connection to cerebral salt wasting syndrome and management of the disorder.[Plast Reconstr Surg. 2001] Hyponatremia in the postoperative craniofacial pediatric patient population: a connection to cerebral salt wasting syndrome and management of the disorder. Levine JP, Stelnicki E, Weiner HL, Bradley JP, McCarthy JG. Plast Reconstr Surg. 2001 Nov; 108(6):1501-8. Etiology of postoperative hyponatremia following pediatric intracranial tumor surgery.[J Neurosurg Pediatr. 
2016] Etiology of postoperative hyponatremia following pediatric intracranial tumor surgery. Williams CN, Riva-Cambrin J, Bratton SL. J Neurosurg Pediatr. 2016 Mar; 17(3):303-9. Epub 2015 Nov 27. Review Hyponatremia in patients with central nervous system disease: SIADH versus CSW.[Trends Endocrinol Metab. 2003] Review Hyponatremia in patients with central nervous system disease: SIADH versus CSW. Palmer BF. Trends Endocrinol Metab. 2003 May-Jun; 14(4):182-7. Evaluation of NT-ProBNP as a marker of the volume status of neurosurgical patients developing hyponatremia and natriuresis: A pilot study.[Neurol India. 2018] Evaluation of NT-ProBNP as a marker of the volume status of neurosurgical patients developing hyponatremia and natriuresis: A pilot study. Tobin G, Chacko AG, Simon R. Neurol India. 2018 Sep-Oct; 66(5):1383-1388. Review Mechanism, spectrum, consequences and management of hyponatremia in tuberculous meningitis.[Wellcome Open Res. 2019] Review Mechanism, spectrum, consequences and management of hyponatremia in tuberculous meningitis. Misra UK, Kalita J, Tuberculous Meningitis International Research Consortium. Wellcome Open Res. 2019; 4:189. Epub 2021 Mar 29. See reviews...See all... Recent Activity Clear)Turn Off)Turn On) Cerebral Salt Wasting Syndrome - StatPearls Cerebral Salt Wasting Syndrome - StatPearls Your browsing activity is empty. Activity recording is turned off. Turn recording back on) See more... Follow NCBI Connect with NLM National Library of Medicine8600 Rockville Pike Bethesda, MD 20894 Web Policies FOIA HHS Vulnerability Disclosure Help Accessibility Careers
https://math.stackexchange.com/questions/99016/proof-that-e-sum-limits-k-0-infty-frac1k
Proof that $e=\sum\limits_{k=0}^{+\infty}\frac{1}{k!}$ Ask Question Asked Modified 8 years, 7 months ago Viewed 12k times 13 $\begingroup$ How can it be proved that Euler's number $e$ equals the limit of the sum of all $\frac{1}{k!}$ as $k$ goes from $0$ to $+\infty$? limits exponential-function Share edited Feb 13, 2017 at 18:41 Martin Sleziak 56.3k asked Jan 14, 2012 at 15:12 Cydonia7 901 $\endgroup$ 13 12 $\begingroup$ what is your definition of e? $\endgroup$ Matthew Towers – Commented Jan 14, 2012 at 15:15 3 $\begingroup$ and exp is... ? $\endgroup$ Dustan Levenstein – Commented Jan 14, 2012 at 15:20 8 $\begingroup$ no, we need a definition. Some people define exp as $\displaystyle\sum_{k=0}^\infty \frac{x^k}{k!}$ $\endgroup$ Dustan Levenstein – Commented Jan 14, 2012 at 15:23 2 $\begingroup$ If you grant that $\exp$ satisfies $f'=f$, try a MacLaurin expansion. $\endgroup$ Neal – Commented Jan 14, 2012 at 15:26 3 $\begingroup$ @Skydreamer that differential equation has lots of solutions. You have to impose y(0)=1. If you already believe that that DE has a unique solution, just verify that the power series defined by Dustan earlier solves it (differentiate term by term), then plug in x=1.
$\endgroup$ Matthew Towers – Commented Jan 14, 2012 at 15:27 | Show 8 more comments 3 Answers 3 23 $\begingroup$ I encountered the same doubt when I was reading the "Synopsis of Elementary Results in Mathematics", and I convinced myself with these two facts (I don't know whether they are true or not; that should be decided by Mr. Srivatsan). The function $e^x$ has derivative equal to itself. The Maclaurin series for any function that can be differentiated as many times as you like is $$f(x) = f(0) + f'(0)\frac{x}{1!} + f''(0)\frac{x^2}{2!} + f'''(0)\frac{x^3}{3!} + \cdots$$ For $f(x) = e^x$, you have $e^x = f(x) = f'(x) = f''(x) = f'''(x) = \cdots$ and $1 = f(0) = f'(0) = f''(0) = f'''(0) = \cdots$, so the Maclaurin series for $e^x$ is $$e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$$ Now set $x = 1$, and you get the series about which you asked. Another version: The definition of $e$ is $$e = \lim_{n\to \infty}(1+1/n)^n.$$ Consider the binomial expansion for $n = 1, 2, 3, 4, 5, \ldots$ $$(1+1/n)^n = \sum^n_{i=0}C(n,i)(1/n)^i.$$ For $i = 0, 1, 2, 3, \ldots$ one has $$C(n,i)(1/n)^i = \frac{n!}{(n-i)!\,i!\,n^i} = \frac{(1)(1-1/n)(1-2/n)\cdots(1-[i-1]/n)}{i!},$$ whose limit as $n$ grows without bound is $\frac{1}{i!}$. Then $$\lim_{n\to \infty}(1+1/n)^n = \lim_{n\to \infty}\sum^n_{i=0}C(n,i)(1/n)^i = \sum^\infty_{i=0}\lim_{n\to \infty}C(n,i)(1/n)^i = \sum^{\infty}_{i=0}\frac{1}{i!}.$$ Hence the result. (Credit for editing goes to Mr. Srivatsan, who taught me to use ' instead of \prime and many other things that made my answer appear more neatly, and also to Mr. Michael Hardy for editing the answer, which now appears more neatly.) Thank you. Yours truly, Iyengar.
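As a quick numerical illustration of both versions in the answer above (an editor's sketch in Python, not part of the original thread; the tolerances are chosen for illustration only), one can compare how fast the partial sums of the series and the limit definition each approach $e$:

```python
import math

# Partial sums of sum_{k=0}^{N} 1/k! converge to e extremely fast:
# 20 terms already agree with math.e to roughly machine precision.
series = sum(1 / math.factorial(k) for k in range(20))

# The limit definition (1 + 1/n)^n converges much more slowly:
# the error behaves roughly like e / (2n).
n = 10**6
limit_approx = (1 + 1 / n) ** n

print(abs(series - math.e))        # essentially zero
print(abs(limit_approx - math.e))  # on the order of 1e-6
```

This contrast is exactly why the series is the practical way to compute $e$, even though both expressions define the same number.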
Share edited Jan 15, 2012 at 3:18 answered Jan 14, 2012 at 17:03 IDOK 5,418 $\endgroup$ 5 1 $\begingroup$ +1 because of the second proof; it is really beautiful and a view from another direction. $\endgroup$ speedyGonzales – Commented Jan 14, 2012 at 20:11 3 $\begingroup$ As mentioned in the other answer based on the binomial expansion of $(1+1/n)^n$, one should add an argument justifying the $\lim\sum=\sum\lim$ step, to transform the part called Another version into a full proof. $\endgroup$ Did – Commented Jan 14, 2012 at 22:52 4 $\begingroup$ The first version is not a complete proof. It is false that a "function which can be differentiated as many times as you like" equals its Maclaurin series. Sometimes it does and sometimes it does not. (When it does, we say that the function is analytic.) See the references posted in comments at mathoverflow.net/questions/81613/… by Dave Renfro. $\endgroup$ Andrés E. Caicedo – Commented Jan 15, 2012 at 0:04 $\begingroup$ The issue is, one has to show that 1. The Maclaurin series converges and, if it does, that 2. It converges to the right value. We check that this is the case for $e^x$, but it is the key part of the proof. $\endgroup$ Andrés E. Caicedo – Commented Jan 15, 2012 at 0:06 $\begingroup$ @Michael Hardy : Thanks a lot Michael sir, for editing the post more neatly, and I whole-heartedly appreciate your efforts. $\endgroup$ IDOK – Commented Jan 15, 2012 at 3:17 Add a comment | 2 $\begingroup$ I'll assume $e=\lim_{n\to\infty}(1+1/n)^n$. Here is a heuristic argument that can be made rigorous.
Apply the binomial theorem to $(1+1/n)^n$ to get $$(1+1/n)^n=\sum_{k=0}^n \binom{n}{k}n^{-k}=1+n/n+\frac{n(n-1)}{2n^2}+\cdots$$ This is approximately $1+1+\frac{1}{2!}+\frac{1}{3!}+\cdots$. Taking the limit as $n$ goes to infinity, we get $e=\sum_{k=0}^\infty \frac{1}{k!}$. I've made it a community wiki in case anyone wants to supply some of the missing details to make it fully rigorous. Share answered Jan 14, 2012 at 16:54 community wiki Cheerful Parsnip $\endgroup$ 10 $\begingroup$ Thank you for the answer! $\endgroup$ Cydonia7 – Commented Jan 14, 2012 at 16:57 $\begingroup$ @Skydreamer : I had been editing the answer in TeX and another person posted the same answer before me; it's my bad luck, all my strain gone in vain. Anyway you got the answer, that's good, thank you. $\endgroup$ IDOK – Commented Jan 14, 2012 at 17:07 $\begingroup$ I gave you a +1 but I won't unaccept an answer; that's not nice behavior. Thank you anyway! $\endgroup$ Cydonia7 – Commented Jan 14, 2012 at 17:23 1 $\begingroup$ @iyengar: This happens frequently. Two people will be putting in their answer at the same time. It's happened to me on many occasions. One strategy is to post a quick answer so people can see you're working on the problem, and then take the time to edit the answer into a more detailed form afterwards. (I learned this strategy from Bill Dubuque.) $\endgroup$ Cheerful Parsnip – Commented Jan 14, 2012 at 17:45 2 $\begingroup$ @Skydreamer: since I made this a CW, I don't get any points anyway, so there's no problem switching the accepted answer to iyengar's more complete answer. Some people get irritated about such things, but I'm not one of them. Cheers!
$\endgroup$ Cheerful Parsnip – Commented Jan 14, 2012 at 17:55 | Show 5 more comments 1 $\begingroup$ You have to prove that the sequence of partial sums of the series converges. But for all $x$, $e^x = 1 + x + \cdots + \frac{x^n}{n!} + r_n(x)$, where $r_n(x)$ is the remainder of order $n$. Prove that for $x=1$ the sequence $r_n(1)$ converges to zero. You can use the Lagrange form of the remainder, together with the bound $e<3$. Share edited Jan 14, 2012 at 16:49 Karatuğ Ozan Bircan 4,439 answered Jan 14, 2012 at 16:39 alpha.Debi 1,094 $\endgroup$ Add a comment |
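The Lagrange-remainder argument from the last answer can be checked numerically (an editor's sketch in Python, not from the original thread; the sample values of $n$ are illustrative). For $f(x)=e^x$, the remainder at $x=1$ is $r_n(1) = e^{\xi}/(n+1)!$ for some $\xi\in(0,1)$, so $0 \le r_n(1) \le 3/(n+1)!$, and the bound tends to zero:

```python
import math

def partial_sum(n):
    """n-th partial sum of the series: sum_{k=0}^{n} 1/k!."""
    return sum(1 / math.factorial(k) for k in range(n + 1))

def lagrange_bound(n):
    """Upper bound on the remainder r_n(1) = e^xi / (n+1)!, using e < 3."""
    return 3 / math.factorial(n + 1)

# The true gap e - s_n is squeezed between 0 and the Lagrange bound;
# since the bound tends to zero, the partial sums converge to e.
for n in (2, 5, 10, 15):
    gap = math.e - partial_sum(n)
    assert 0 <= gap <= lagrange_bound(n)
```

The squeeze between $0$ and $3/(n+1)!$ is the whole proof; the code merely verifies the inequality at a few values of $n$.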